Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
3,201 | 96,926 |
torch.onnx.export failed for models with Bernoulli operator
|
module: onnx, triaged
|
### 🐛 Describe the bug
torch.onnx.export fails for models such as **efficientnet, convnext and swin_transformer** (the Bernoulli operator appears in these models).
The program crashes and reports the following error:
```
SymbolicValueError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
**Unsupported: ONNX export of operator Bernoulli, out parameter is not supported for bernoulli.** Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues [Caused by the value '822 defined in (%822 : Float(1, 1, 1, 1, strides=[1, 1, 1, 1], requires_grad=0, device=cpu) = onnx::Constant[value={0}](), scope: torchvision.models.efficientnet.EfficientNet::/torch.nn.modules.container.Sequential::features/torch.nn.modules.container.Sequential::features.1/torchvision.models.efficientnet.FusedMBConv::features.1.1/torchvision.ops.stochastic_depth.StochasticDepth::stochastic_depth # /root/anaconda3/lib/python3.9/site-packages/torchvision/ops/stochastic_depth.py:40:0
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Constant'.]
(node defined in /root/anaconda3/lib/python3.9/site-packages/torchvision/ops/stochastic_depth.py(40): **stochastic_depth**
/root/anaconda3/lib/python3.9/site-packages/torchvision/ops/stochastic_depth.py(62): forward
/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py(1178): _slow_forward
/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py(1190): _call_impl
/root/anaconda3/lib/python3.9/site-packages/torchvision/models/efficientnet.py(228): forward
...
```
I checked the model definition of efficientnet and found that there is a call to `stochastic_depth` (from torchvision.ops):
``` python
def stochastic_depth(input: Tensor, p: float, mode: str, training: bool = True) -> Tensor:
if not torch.jit.is_scripting() and not torch.jit.is_tracing():
_log_api_usage_once(stochastic_depth)
if p < 0.0 or p > 1.0:
raise ValueError(f"drop probability has to be between 0 and 1, but got {p}")
if mode not in ["batch", "row"]:
raise ValueError(f"mode has to be either 'batch' or 'row', but got {mode}")
if not training or p == 0.0:
return input
survival_rate = 1.0 - p
if mode == "row":
size = [input.shape[0]] + [1] * (input.ndim - 1)
else:
size = [1] * input.ndim
noise = torch.empty(size, dtype=input.dtype, device=input.device)
noise = noise.bernoulli_(survival_rate) ############ Here
if survival_rate > 0.0:
noise.div_(survival_rate)
return input * noise
```
**noise.bernoulli_(survival_rate)** looks like a normal function call, but it fails the following checks (**out is not None**, **generator is not None**):
``` python
#torch.onnx -> symbolic_opset9.py
@_onnx_symbolic("aten::bernoulli")
@_beartype.beartype
def bernoulli(g: jit_utils.GraphContext, input, generator=None, out=None):
if out is not None:
symbolic_helper._unimplemented(
"Bernoulli", "out parameter is not supported for bernoulli", input
)
if generator is not None and not symbolic_helper._is_none(generator):
symbolic_helper._unimplemented(
"Bernoulli", "generator is not supported for bernoulli", input
)
dtype = symbolic_helper._try_get_scalar_type(input)
if dtype is None:
return symbolic_helper._unimplemented(
"Bernoulli", "input dtype not accessible", input
)
p = g.op(
"RandomUniformLike",
input,
high_f=1.0,
low_f=0.0,
dtype_i=_type_utils.JitScalarType.from_name(dtype).onnx_type(),
)
output = g.op("Less", p, input)
return g.op(
"Cast", output, to_i=_type_utils.JitScalarType.from_name(dtype).onnx_type()
)
```
I tried removing the following checks, and export then works as expected, i.e., torch.onnx is able to export the above-mentioned models.
``` python
if out is not None:
symbolic_helper._unimplemented(
"Bernoulli", "out parameter is not supported for bernoulli", input
)
if generator is not None and not symbolic_helper._is_none(generator):
symbolic_helper._unimplemented(
"Bernoulli", "generator is not supported for bernoulli", input
)
```
**Is it possible to turn these assertions into warnings? And when can we expect a full implementation of aten::bernoulli in torch.onnx?**
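In the meantime, a possible workaround when only inference behavior is needed: export with the model in eval mode, so `stochastic_depth` takes its `if not training or p == 0.0: return input` early return and no Bernoulli node is traced. A minimal sketch (model choice and opset are illustrative; if the error still reproduces in eval mode, this won't help):
```python
import torch
import torchvision

model = torchvision.models.efficientnet_b0().eval()  # eval(): stochastic_depth becomes a no-op
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "efficientnet_b0.onnx", opset_version=13)
```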
### Versions
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 470.56.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2194.729
CPU max MHz: 3800.0000
CPU min MHz: 2200.0000
BogoMIPS: 7585.79
Virtualization: AMD-V
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.8
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.2
[pip3] numpydoc==1.1.0
[pip3] qtorch==0.2.0
[pip3] torch==1.13.0
[pip3] torch-mlir==20220522.467
[pip3] torchaudio==0.8.1
[pip3] torchpack==0.3.1
[pip3] torchsparse==2.0.0b0
[pip3] torchsummaryX==1.3.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl https://mirrors.ustc.edu.cn/anaconda/pkgs/free
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py39h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py39hd3c417c_0 defaults
[conda] mkl_random 1.2.2 py39h51133e4_0 defaults
[conda] msgpack-numpy 0.4.8 pypi_0 pypi
[conda] numpy 1.19.2 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1 defaults
[conda] pytorch 1.13.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] qtorch 0.2.0 pypi_0 pypi
[conda] torch-mlir 20220522.467 pypi_0 pypi
[conda] torchaudio 0.8.1 pypi_0 pypi
[conda] torchpack 0.3.1 pypi_0 pypi
[conda] torchsparse 2.0.0b0 dev_0 <develop>
[conda] torchsummaryx 1.3.0 pypi_0 pypi
[conda] torchvision 0.14.0 py39_cpu pytorch
| 0 |
3,202 | 96,908 |
In-place op on an in-place view of a tensor that retains_grad triggers an internal assert
|
module: autograd, triaged, has workaround
|
### 🐛 Describe the bug
`gradcheck` triggers INTERNAL ASSERT FAILED when transposing and adding the input tensor in-place
```py
import torch
x = torch.randn(5, 5, device='cuda', requires_grad=True)
def func(x):
x.t_().add_(x)
return x
torch.autograd.gradcheck(func, (x.clone(),))
# RuntimeError: out != nullptr INTERNAL ASSERT FAILED
# at "../torch/csrc/autograd/variable.cpp":188, please report a bug to PyTorch.
```
This seems to be a regression, since I cannot reproduce it in an older version of PyTorch.
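Since the issue is labeled as having a workaround, here is a sketch of one (assuming the in-place mutation itself is not required): `x.t_().add_(x)` transposes `x` in place and then adds the now-transposed tensor to itself, i.e. it computes `2 * x.t()`, so the out-of-place equivalent avoids the assert:
```py
# Hedged workaround sketch: out-of-place equivalent of x.t_().add_(x)
x64 = torch.randn(5, 5, dtype=torch.double, device='cuda', requires_grad=True)

def func_ok(x):
    return x.t() * 2  # same result, no in-place mutation of a view

torch.autograd.gradcheck(func_ok, (x64,))  # passes; the internal assert is not hit
```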
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230311+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.78.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7006.6401
CPU min MHz: 1800.0000
BogoMIPS: 7186.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+2c32f43999
[pip3] torch==2.1.0.dev20230311+cu117
[pip3] torchaudio==2.0.0.dev20230311+cu117
[pip3] torchvision==0.15.0.dev20230311+cu117
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+2c32f43999 pypi_0 pypi
[conda] torch 2.1.0.dev20230311+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230311+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230311+cu117 pypi_0 pypi
```
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 3 |
3,203 | 96,883 |
Expected scalar type Half but found Float when running nn.MultiheadAttention with AMP
|
oncall: transformer/mha
|
### 🐛 Describe the bug
When performing self-attention using nn.MultiheadAttention with precision=16, I get `RuntimeError: expected scalar type Half but found Float`. I believe it is related to [this issue](https://github.com/pytorch/pytorch/issues/84396) and this [unmerged PR](https://github.com/pytorch/pytorch/pull/84722) from @erichan1.
I have gotten it to work by changing the code from `self.self_attn(x, x, x)` to `self.self_attn(x+1e-16, x, x)`, but I'm not sure if this could potentially cause other issues.
```python
# from __init__()
self.self_attn = nn.MultiheadAttention(num_input_channels, 16, batch_first=True)
```
```
attn_out, attn_weights = self.self_attn(x, x, x)
File "/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1113, in forward
return torch._native_multi_head_attention(
RuntimeError: expected scalar type Half but found Float
```
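For reference, a self-contained repro sketch of what I'm doing (dimensions, eval mode, and autocast usage are illustrative, not my exact setup; the native fast path only triggers under certain conditions):
```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(64, 16, batch_first=True).cuda().eval()
x = torch.randn(2, 10, 64, device='cuda')
with torch.no_grad(), torch.autocast('cuda', dtype=torch.float16):
    out, weights = mha(x, x, x)  # may dispatch to torch._native_multi_head_attention
```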
### Versions
torch 1.12.1 + cu11.6
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 1 |
3,204 | 96,855 |
Performance Drop for linalg_ldl_factor and ldl_solve
|
triaged, enhancement, actionable, module: vmap, module: functorch
|
Hi, I have a function that vmaps over matrices, performs an LDLT decomposition, and solves the systems (together with other operations).
Are there plans to implement batching rules for these operations in the near future?
The matrices I'm handling are only positive semidefinite, so an LDLT decomposition works much better than Cholesky.
Thanks!
Minimal example:
```python
import torch
import functorch
def solve(A, b):
LD, pivots = torch.linalg.ldl_factor(A)
return torch.linalg.ldl_solve(LD, pivots, b)
A = torch.randn(2, 3, 3)
A = A @ A.mT # make symmetric
B = torch.randn(2, 3, 1)
print(functorch.vmap(solve)(A, B))
```
```
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::linalg_ldl_factor. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)
LD, pivots = torch.linalg.ldl_factor(A)
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::linalg_ldl_solve. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)
return torch.linalg.ldl_solve(LD, pivots, b)
```
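For what it's worth, a workaround sketch for this minimal example (assuming the vmapped computation can be restructured): `torch.linalg.ldl_factor` and `ldl_solve` already accept leading batch dimensions natively, so the batch can be passed directly instead of going through vmap:
```python
# Hedged workaround sketch: use the operators' native batch support
LD, pivots = torch.linalg.ldl_factor(A)    # A: (2, 3, 3), batched factorization
x = torch.linalg.ldl_solve(LD, pivots, B)  # B: (2, 3, 1), batched solve
```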
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
3,205 | 96,789 |
`cumprod` triggers INTERNAL ASSERT FAILED when `out` is a tensor on cuda but input is on cpu
|
module: error checking, triaged, module: assert failure
|
### 🐛 Describe the bug
`cumprod` triggers INTERNAL ASSERT FAILED when `out` is a tensor on cuda but input is on cpu
```py
import torch
torch.manual_seed(420)
m = torch.rand(2, 3, 4, 5, 6, 7)
out = torch.rand([]).cuda()
m_cumprod = torch.cumprod(m, dim=0, out=out)
# RuntimeError: t == DeviceType::CUDA INTERNAL ASSERT FAILED
# at "../c10/cuda/impl/CUDAGuardImpl.h":25, please report a bug to PyTorch
```
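For comparison, a sketch with input and `out` on the same device runs without the assert (my assumption is that the expected fix is a clean device-mismatch RuntimeError rather than this internal assert):
```py
# Same-device usage: `out` is resized (with a UserWarning) to match the input
m_cuda = m.cuda()
out_cuda = torch.rand([]).cuda()
torch.cumprod(m_cuda, dim=0, out=out_cuda)  # no internal assert
```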
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @malfet
| 0 |
3,206 | 96,779 |
Segmentation fault (core dumped) during Torch finetuning (at random step)
|
needs reproduction, module: crash, triaged
|
### 🐛 Describe the bug
I’m encountering a problem while running a finetuning script for my FlanT5(-large) model. The script crashes with `Segmentation fault (core dumped)` after a few (thousand) steps. The error seems to occur randomly, sometimes within a few minutes and sometimes after around 40 minutes.
I initially attempted to use the stable version of PyTorch, but based on advice from @ptrblck, I decided to upgrade to the nightly version. Unfortunately, neither option resolved my issue, but the nightly version did provide a more detailed stack trace, which is shown below.
My imports:
```python
import torch
import torch.nn as nn
from tqdm import tqdm
from pathlib import Path
from collections import OrderedDict
from torch.utils.data import DataLoader, Dataset
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from tensorboardX import SummaryWriter
from transformers.optimization import Adafactor
from rouge import Rouge
import re
import os
import glob
import torch
import random
import logging
import numpy as np
import json
```
Some snippets from my training loop:
```python
epoch += 1
### Training
model.train()
with torch.enable_grad(), tqdm(total=num_train) as progress_bar:
for batch_num, batch in enumerate(train_loader):
real_batch_size = len(batch["source_ids"])
loss, logits = forward(model, device, batch)
loss = loss / k_gradient_accumulation_steps
loss = loss.mean()
loss.backward()
loss_val = loss.item() * k_gradient_accumulation_steps # get the item since loss is a tensor
if ((batch_num + 1) % k_gradient_accumulation_steps == 0) or (batch_num + 1 == len(train_loader)):
# Backward
nn.utils.clip_grad_norm_(model.parameters(), k_max_grad_norm)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
(...)
```
With the following forward function:
```python
def forward(model, device, batch):
src_ids = batch["source_ids"].to(device, dtype=torch.long)
src_mask = batch["source_mask"].to(device, dtype=torch.long)
tgt_ids = batch["target_ids"].to(device, dtype=torch.long)
tgt_ids[tgt_ids[:, :] == 0] = -100
label_ids = tgt_ids.to(device)
out_dict = model(src_ids, attention_mask=src_mask, labels=label_ids, return_dict=True)
loss, logits = out_dict['loss'], out_dict['logits']
return loss, logits
```
Let me know if the provided information is insufficient; in that case I can try to create a minimal reproduction.
Stack trace:
```
Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
__GI__dl_find_object (pc1=0x7ffff6706058 <_Unwind_RaiseException+72>, result=0x7fffffffb8c8) at ./elf/dl-find_object.c:442
442 ./elf/dl-find_object.c: No such file or directory.
(gdb) bt
#0 __GI__dl_find_object (pc1=0x7ffff6706058 <_Unwind_RaiseException+72>, result=0x7fffffffb8c8) at ./elf/dl-find_object.c:442
#1 0x00007ffff67080f6 in _Unwind_Find_FDE () from /lib/x86_64-linux-gnu/libgcc_s.so.1
#2 0x00007ffff6704833 in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1
#3 0x00007ffff6705ad0 in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1
#4 0x00007ffff6706059 in _Unwind_RaiseException () from /lib/x86_64-linux-gnu/libgcc_s.so.1
#5 0x00007fffd88ae50b in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007fffd88a538a in std::__throw_logic_error(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6
#7 0x00007fffd88f66b1 in char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) ()
from /lib/x86_64-linux-gnu/libstdc++.so.6
#8 0x00007fffd88f6b14 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) ()
from /lib/x86_64-linux-gnu/libstdc++.so.6
#9 0x00007fffbde17b03 in torch::Library::_parseNameForLib(char const*) const () from /home/ruben/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#10 0x00007fffbde1df85 in torch::Library::_impl(char const*, torch::CppFunction&&, torch::_RegisterOrVerify) & ()
from /home/ruben/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
--Type <RET> for more, q to quit, c to continue without paging--
#11 0x00007fffbdb6c3e1 in at::native::TORCH_LIBRARY_IMPL_init_aten_Conjugate_3(torch::Library&) ()
from /home/ruben/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#12 0x0000555614ecd120 in ?? ()
#13 0x0000000000000000 in ?? ()
```
Originates from https://discuss.pytorch.org/t/segmentation-fault-core-dumped-during-torch-finetuning-on-new-setup/174570
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230313+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3800,0000
CPU min MHz: 2200,0000
BogoMIPS: 7600.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.1.0+2c32f43999
[pip3] torch==2.1.0.dev20230313+cu117
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.0.dev20230313+cu117
[pip3] torchmetrics==0.11.3
[pip3] torchvision==0.15.0.dev20230313+cu117
[conda] Could not collect
| 0 |
3,207 | 96,773 |
[MPS] pinverse dtype error
|
triaged, module: linear algebra, module: mps
|
I am trying to compute the pinverse on an MPS machine.
I am setting the variable as:
```
debug = torch.Tensor([[1,2,3],[4,5,6]]).to(device=torch.device('mps')).to(dtype=torch.float32)
```
Confirming the variable is loaded on MPS:
```
>> debug.dtype
torch.float32
>> debug
tensor([[1., 2., 3.],
[4., 5., 6.]], device='mps:0')
```
Calling the following gives me an error:
```
>> debug_pinv = debug.pinverse()
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```
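A workaround sketch (assuming a CPU round-trip is acceptable): compute the pinverse on CPU and move the result back to MPS:
```python
# Hedged workaround sketch: run pinverse on CPU, then move the result back to MPS
debug_pinv = debug.cpu().pinverse().to('mps')
```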
### Versions
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchaudio 0.13.1 pypi_0 pypi
[conda] torchvision 0.14.1 pypi_0 pypi
### Does this mean pinverse's implementation is fixed to float64 on MPS?
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
3,208 | 96,769 |
`sparse.mm` triggers INTERNAL ASSERT FAILED during backward
|
module: sparse, module: autograd, triaged
|
### 🐛 Describe the bug
`sparse.mm` triggers INTERNAL ASSERT FAILED during backward
```py
import torch
a = torch.eye(3, 4).to_sparse_coo()
b = torch.eye(4, 5).to_sparse_coo()
def func(a, b):
result = torch.sparse.mm(a, b)
return result
ans = func(a, b)
# tensor(indices=tensor([[0, 1, 2],
# [0, 1, 2]]),
# values=tensor([1., 1., 1.]),
# size=(3, 5), nnz=3, layout=torch.sparse_coo)
ans_grad = func(a.clone().requires_grad_(), b.clone().requires_grad_())
# tensor(indices=tensor([[0, 1, 2],
# [0, 1, 2]]),
# values=tensor([1., 1., 1.]),
# size=(3, 5), nnz=3, layout=torch.sparse_coo,
# grad_fn=<SparseSparseMatmulBackward0>)
ans_grad.sum().backward()
# RuntimeError: mat2_.is_sparse() INTERNAL ASSERT FAILED at
# "../aten/src/ATen/native/sparse/SparseMatMul.cpp":255,
# please report a bug to PyTorch
```
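A workaround sketch, assuming dense gradients for one operand are acceptable: `torch.sparse.mm` also accepts a strided `mat2`, and that path backwards without the assert:
```py
# Hedged workaround sketch: sparse x dense backward works
a2 = torch.eye(3, 4).to_sparse_coo().requires_grad_()
b2 = torch.eye(4, 5).requires_grad_()     # keep the second operand dense
torch.sparse.mm(a2, b2).sum().backward()  # no INTERNAL ASSERT
```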
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ezyang @albanD @zou3519 @gqchen @soulitzer @Lezcano @Varal7
| 1 |
3,209 | 96,766 |
Follow-ups to do after adding nested checkpoint
|
module: autograd, triaged, actionable
|
Tracking some follow-ups to do after https://github.com/pytorch/pytorch/pull/90105 lands:
- [x] update docs to reflect ability to do backward within checkpointed region https://github.com/pytorch/pytorch/pull/96862
- [x] enable early-stopping by default, add an API to disable https://github.com/pytorch/pytorch/pull/96866
- [x] update docs to mention the early stop feature https://github.com/pytorch/pytorch/pull/96866
- [ ] update docs/tutorial to feature nested use cases
- [ ] https://github.com/pytorch/pytorch/issues/96764
Missing anything?
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7
| 0 |
3,210 | 96,764 |
Improve checkpoint thread-safety
|
module: autograd, triaged, module: multithreading, needs design
|
### 🐛 Describe the bug
Checkpointing uses a dictionary to track whether a given checkpoint has already been recomputed, and this dictionary can be shared between multiple threads. As a result, we might run into something like the following (note that the GIL doesn't protect you here!):
Possible race condition:
- perform forward on two devices to force backward to be performed on two threads; since they spawned from the same backward, the two threads share the same graph_task_id
- both threads simultaneously try to execute two separate nodes created under the same checkpoint
- the test `if not frame.is_recomputed[gid]:` passes for both threads
- both threads recompute the same checkpoint! (Something bad happens; at the very least, we recompute the checkpoint unnecessarily.)
See https://github.com/pytorch/pytorch/pull/90105#discussion_r1135722492
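A minimal sketch of the check-then-act race and a coarse lock-based fix (`frame`, `is_recomputed`, and `recompute` are illustrative stand-ins, not the actual torch.utils.checkpoint internals):
```py
import threading

_recompute_lock = threading.Lock()

def maybe_recompute(frame, gid):
    # Without the lock, two threads can both observe False and both recompute.
    # Holding a lock across the test-and-recompute makes it happen exactly once.
    with _recompute_lock:
        if not frame.is_recomputed[gid]:
            frame.recompute()
            frame.is_recomputed[gid] = True
```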
### Versions
main
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7
| 0 |
3,211 | 96,759 |
[inductor] flaky rexnet_100 accuracy tests
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
rexnet_100 shows accuracy flakiness, sometimes being flagged as an accuracy failure due to `stem.bn.weight.grad`.
#96691 and #96474 disable rexnet_100 in CI.
For an example, see https://github.com/pytorch/pytorch/actions/runs/4402868441/jobs/7710977874.
### Versions
master / CI
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith
| 3 |
3,212 | 96,757 |
[ONNX] FX exporter 'test_models_onnxruntime.py' tracker
|
module: onnx, triaged, onnx-triaged
|
#### List of unsupported call_functions
```
(Draft) Steps to proceed
1. Is the operator in the [Core ATen IR](https://pytorch.org/docs/master/ir.html)?
   yes -> 2. Implement it in ATenlib. Validate with the exporter.
   no  -> 3. Does the operator have a decomposition?
          yes -> 4. Debug why the exporter does not decompose it. Then go to 1.
          no  -> 5. Decide whether to add a decomposition for the op, or go to 2.
```
Top errors in UnsupportedCallFunctions (3635 total):
- [ ] 1696 errors like: aten.new_zeros.default examples:
:test_googlenet
:test_mnasnet
:test_mobilenet
:test_dcgan_netG
:test_fcn
:test_r2plus1d_18_video
:test_shufflenet_v2_dynamic_axes
:test_resnet
:test_shufflenet
:test_mobilenet_v3
:test_deeplab
:test_dcgan_netD
:test_inception
:test_ops
:test_densenet
- [ ] 848 errors like: aten.empty.memory_format examples:
:test_googlenet
:test_mnasnet
:test_mobilenet
:test_dcgan_netG
:test_fcn
:test_r2plus1d_18_video
:test_shufflenet_v2_dynamic_axes
:test_resnet
:test_shufflenet
:test_mobilenet_v3
:test_deeplab
:test_dcgan_netD
:test_inception
:test_ops
:test_densenet
- [ ] 761 errors like: aten.copy_.default examples:
:test_alexnet
:test_squeezenet
:test_googlenet
:test_mnasnet
:test_mobilenet
:test_dcgan_netG
:test_fcn
:test_r2plus1d_18_video
:test_shufflenet_v2_dynamic_axes
:test_resnet
:test_shufflenet
:test_mobilenet_v3
:test_deeplab
:test_dcgan_netD
:test_inception
:test_densenet
- [ ] 127 errors like: aten.cat.default examples:
:test_squeezenet
:test_googlenet
:test_shufflenet_v2_dynamic_axes
:test_shufflenet
:test_deeplab
:test_inception
:test_densenet
- [ ] 90 errors like: aten.add_.Tensor examples:
:test_resnet
:test_fcn
:test_r2plus1d_18_video
:test_deeplab
- [ ] 35 errors like: aten.hardtanh.default examples:
:test_mobilenet
- [ ] 31 errors like: aten.max_pool2d_with_indices.default examples:
:test_alexnet
:test_squeezenet
:test_googlenet
:test_fcn
:test_shufflenet_v2_dynamic_axes
:test_resnet
:test_shufflenet
:test_mnist
:test_deeplab
:test_inception
:test_densenet
- [ ] 19 errors like: aten.mean.dim examples:
:test_mnasnet
:test_googlenet
:test_mobilenet
:test_r2plus1d_18_video
:test_shufflenet_v2_dynamic_axes
:test_resnet
:test_shufflenet
:test_mobilenet_v3
:test_deeplab
:test_inception
:test_densenet
- [ ] 15 errors like: aten.avg_pool2d.default examples:
:test_alexnet
:test_squeezenet
:test_inception
:test_ops
:test_densenet
- [ ] 12 errors like: aten.index.Tensor examples:
:test_fcn
:test_deeplab
- [ ] 1 errors like: aten.squeeze.dim examples:
:test_ops
| 0 |
3,213 | 96,743 |
Pruning under channels_last format
|
triaged, module: memory format, module: pruning
|
### 🐛 Describe the bug
Pruning does not seem to be compatible with models stored in channels_last format, because torch.view() is currently used and it requires a contiguous input tensor.
Minimal working example:
```python
import torch
from torch.nn.utils import prune
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
model.to(memory_format=torch.channels_last)
for name, module in model.named_modules():
if isinstance(module, torch.nn.Conv2d):
prune.l1_unstructured(module, name="weight", amount=0.5)
break
```
Error log:
```
---------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/scripts/mwe.py", line 8, in <module>
prune.l1_unstructured(module, name="weight", amount=0.5)
File "/home/user/.local/lib/python3.9/site-packages/torch/nn/utils/prune.py", line 923, in l1_unstructured
L1Unstructured.apply(
File "/home/user/.local/lib/python3.9/site-packages/torch/nn/utils/prune.py", line 562, in apply
return super(L1Unstructured, cls).apply(
File "/home/user/.local/lib/python3.9/site-packages/torch/nn/utils/prune.py", line 205, in apply
raise e
File "/home/user/.local/lib/python3.9/site-packages/torch/nn/utils/prune.py", line 191, in apply
mask = method.compute_mask(importance_scores, default_mask=default_mask)
File "/home/user/.local/lib/python3.9/site-packages/torch/nn/utils/prune.py", line 536, in compute_mask
topk = torch.topk(torch.abs(t).view(-1), k=nparams_toprune, largest=False)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
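A workaround sketch that avoids the error (assuming it is acceptable to make the weight contiguous before pruning and restore channels_last afterwards), since `compute_mask` only needs a contiguous tensor for its `.view(-1)`:
```python
# Hedged workaround sketch: prune on a contiguous weight, then restore channels_last
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.weight.data = module.weight.data.contiguous()
        prune.l1_unstructured(module, name="weight", amount=0.5)
        break
model.to(memory_format=torch.channels_last)
```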
### Versions
Collecting environment information...
PyTorch version: 1.12.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.10.0-20-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 SUPER
GPU 1: NVIDIA GeForce RTX 2080 SUPER
Nvidia driver version: 470.161.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Stepping: 7
CPU MHz: 1605.327
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 5199.96
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] numpy-groupies==0+unknown
[pip3] numpydoc==1.1.0
[pip3] torch==1.12.0
[pip3] torchvision==0.13.0
[conda] Could not collect
cc @jamesr66a
| 2 |
3,214 | 96,742 |
PyTorch 2.0 compile error
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
I can run the code normally in eager mode, but it raises an error when compiling:
```
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 118, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 254, in _fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1129, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1083, in _run_ddp_forward
return self.module(*inputs[0], **kwargs[0]) # type: ignore[index]
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "/tmp/amlt_code/code/model/crab/clip.py", line 52, in forward
image_embeddings = self.encode_image(image)
File "/tmp/amlt_code/code/model/crab/clip.py", line 53, in <resume in forward>
text_embeddings = self.encode_text(text, text_attention_mask)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 388, in catch_errors
return hijacked_callback(frame, cache_size, hooks)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 406, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 105, in _fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 263, in _convert_frame_assert
return _compile(
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 326, in _compile
out_code = transform_code_object(code, transform)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 313, in transform
tracer.run()
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1840, in run
super().run()
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 597, in run
and self.step()
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 560, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1919, in RETURN_VALUE
self.output.compile_subgraph(
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 569, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 615, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 701, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 697, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 202, in compile_fn
return self.backend_compile_fn(gm, example_inputs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py", line 1064, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/lib/python3.10/site-packages/torch/__init__.py", line 1382, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 488, in compile_fx
return aot_autograd(
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2873, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2554, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1694, in aot_wrapper_dedupe
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 616, in inner
flat_f_outs = f(*flat_f_args)
File "/opt/conda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2836, in functional_call
out = Interpreter(mod).run(*args[params_len:], **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py", line 137, in run
self.env[node] = self.run_node(node)
File "/opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py", line 179, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/fx/interpreter.py", line 251, in call_function
return target(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1492, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2489, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1057, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1136, in dispatch
args, kwargs = self.validate_and_convert_non_fake_tensors(
File "/opt/conda/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1290, in validate_and_convert_non_fake_tensors
return tree_map_only(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_pytree.py", line 266, in tree_map_only
return tree_map(map_only(ty)(fn), pytree)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_pytree.py", line 196, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_pytree.py", line 196, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_pytree.py", line 247, in inner
return f(x)
File "/opt/conda/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1282, in validate
raise Exception(
torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in c10d.allgather_.default(*([[_to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0'), _to_functional_tensor(FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0),
device='cuda:0')]], [FakeTensor(FakeTensor(..., device='meta', size=(128, 512)), cuda:0)], <torch.ScriptObject object at 0x7f54973871f0>, -1), **{})
While executing %all_gather : [#users=0] = call_function[target=torch.distributed.distributed_c10d.all_gather](args = ([%ones_like, %ones_like_1, %ones_like_2, %ones_like_3, %ones_like_4, %ones_like_5, %ones_like_6, %ones_like_7], %normalize), kwargs = {async_op: False})
Original traceback:
File "/tmp/amlt_code/code/model/crab/clip.py", line 57, in <resume in forward>
return self.criterion(
File "/tmp/amlt_code/code/model/modules/clip_loss.py", line 29, in forward
image_embed_all, text_embed_all = all_gather_batch([image_embed, text_embed])
File "/tmp/amlt_code/code/utils/dist.py", line 58, in all_gather_batch
dist.all_gather(tensor_all, tensor, async_op=False) # performance opt
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
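A possible workaround sketch (assuming a graph break around the collective is acceptable): keep `dist.all_gather` out of the compiled region with `torch._dynamo.disable`, e.g. applied to the `all_gather_batch` helper; `gather_embeddings` below is an illustrative stand-in for that helper:
```python
import torch
import torch._dynamo
import torch.distributed as dist

@torch._dynamo.disable  # run the collective eagerly instead of tracing it
def gather_embeddings(tensor, world_size):
    out = [torch.ones_like(tensor) for _ in range(world_size)]
    dist.all_gather(out, tensor, async_op=False)
    return out
```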
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230309+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-197-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 470.141.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
Stepping: 4
CPU MHz: 2959.349
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230309+cu118
[pip3] torchaudio==2.0.0.dev20230309+cu118
[pip3] torchvision==0.15.0.dev20230309+cu118
[conda] mkl 2023.0.0 h6d00ec8_25399
[conda] mkldnn 0.16.1 0 mingfeima
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230309+cu118 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230309+cu118 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230309+cu118 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith
| 7 |
3,215 | 96,738 |
Many padding Modules fail memory_format tests
|
module: nn, triaged, module: memory format, actionable, module: intel
|
See https://github.com/pytorch/pytorch/pull/96641, which added these skips.
We should investigate why this happens and most likely fix them.
cc @ezyang @gchanan @zou3519 @mruberry @jbschlosser @walterddr @mikaylagawarecki @jamesr66a @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @saketh-are
| 5 |
3,216 | 96,735 |
Running `python run_test.py -i test_ops_jit` fails with `ValueError: option names {'--junit-xml-reruns'} already added`
|
oncall: jit
|
### 🐛 Describe the bug
```
cd pytorch/test
python run_test.py -i test_ops_jit
```
### Versions
torch version v1.13.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
3,217 | 96,726 |
Memory not released after jit.trace/freeze
|
oncall: jit
|
### 🐛 Describe the bug
`del` and `gc` cannot release memory after trace/freeze.
To track memory allocation/release, I patched the CPU allocator:
```diff
diff --git a/c10/core/impl/alloc_cpu.cpp b/c10/core/impl/alloc_cpu.cpp
index 6ca9ea10967..c4fd33ae701 100644
--- a/c10/core/impl/alloc_cpu.cpp
+++ b/c10/core/impl/alloc_cpu.cpp
@@ -6,6 +6,8 @@
#include <c10/util/irange.h>
#include <c10/util/numa.h>
+#include <iostream>
+
// TODO: rename flags to C10
C10_DEFINE_bool(
caffe2_cpu_allocator_do_zero_fill,
@@ -94,7 +96,7 @@ void* alloc_cpu(size_t nbytes) {
} else if (FLAGS_caffe2_cpu_allocator_do_junk_fill) {
memset_junk(data, nbytes);
}
-
+ std::cout << "malloc data in c++" << data << " size " << float(nbytes) / 1024 / 1024 << "MB" << std::endl;
return data;
}
@@ -103,6 +105,7 @@ void free_cpu(void* data) {
_aligned_free(data);
#else
// NOLINTNEXTLINE(cppcoreguidelines-no-malloc)
+ std::cout << "free data " << data << std::endl;
free(data);
#endif
}
```
```python
import time
import psutil, os
import torch
import gc
class M(torch.nn.Module):
def __init__(self):
super(M, self).__init__()
self.w1 = torch.rand(int(1e7), 10)
        print("malloc w1", hex(self.w1.data_ptr()), "size", 1e7 * 100 * 4 / 1024 / 1024, "MB")  # label matches the log below; the actual allocation is 1e7 * 10 * 4 bytes (~381 MB), as the C++ log shows
def forward(self, x):
x = self.w1 + x
return x
def run_leak():
process = psutil.Process(os.getpid())
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
a = M().eval()
input = torch.zeros(int(1e7), 1)
print("malloc input", hex(input.data_ptr()), "size", 1e7 * 4 / 1024 / 1024, "MB")
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
print("trace==============")
a_trace = torch.jit.trace(a, input)
del(input)
print("===============================================delete input")
time.sleep(2)
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
gc.collect()
print("gc=================================================")
time.sleep(2)
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
del(a_trace)
print("===============================================delete a_trace")
time.sleep(2)
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
gc.collect()
print("gc=================================================")
time.sleep(2)
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
del(a)
print("===============================================delete a")
time.sleep(2)
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
gc.collect()
print("gc=================================================")
time.sleep(2)
print("crurent mem usage:", process.memory_info().rss / 1024/1024, "MB")
if __name__ == '__main__':
run_leak()
print("process exit===========================")
exit()
```
Output:
```
crurent mem usage: 282.78515625 MB
malloc data in c++0x7f4c2f8da040 size 381.47MB
malloc w1 0x7f4c2f8da040 size 3814.697265625 MB
malloc data in c++0x7f4c2d2b4040 size 38.147MB
malloc input 0x7f4c2d2b4040 size 38.14697265625 MB
crurent mem usage: 701.67578125 MB
trace==============
...
...
===============================================delete input
crurent mem usage: 713.7421875 MB
gc=================================================
crurent mem usage: 713.7421875 MB
crurent mem usage: 713.7421875 MB
free data 0x7f4c2d2b4040
===============================================delete a_trace
crurent mem usage: 675.59375 MB
gc=================================================
crurent mem usage: 675.59375 MB
===============================================delete a
crurent mem usage: 675.59375 MB
gc=================================================
crurent mem usage: 675.59375 MB
process exit===========================
free data 0x7f4c2f8da040
```
`0x7f4c2f8da040` is released only just before process exit; `del` and `gc.collect()` do not work.
From the release backtrace, I can see the last decrease of the ref count comes from `torch::jit::CompilationUnit`, which owns the `graph`, and `w1` is a node of that `graph`. But since I have deleted all related variables in Python, I do not know why this `PyObject` is still reachable and keeping the memory alive.
```
#26 0x00007fffdf9e885e in torch::jit::Node::~Node (this=0x555559e66a50, __in_chrg=<optimized out>)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/torch/csrc/jit/ir/ir.h:820
#27 0x00007fffdf9e2628 in torch::jit::Graph::~Graph (this=0x555559e61920, __in_chrg=<optimized out>)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/torch/csrc/jit/ir/ir.cpp:2003
#28 0x00007fffdf9a1092 in std::_Sp_counted_ptr<torch::jit::Graph*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (
this=0x555559e65440) at /usr/include/c++/11/bits/shared_ptr_base.h:348
#29 0x00007fffd9f88ab6 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x555559e65440)
at /usr/include/c++/11/bits/shared_ptr_base.h:168
#30 0x00007fffd9f867b5 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x555559e68798,
__in_chrg=<optimized out>) at /usr/include/c++/11/bits/shared_ptr_base.h:702
#31 0x00007fffddf4a334 in std::__shared_ptr<torch::jit::Graph, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (
this=0x555559e68790, __in_chrg=<optimized out>) at /usr/include/c++/11/bits/shared_ptr_base.h:1149
#32 0x00007fffddf4a354 in std::shared_ptr<torch::jit::Graph>::~shared_ptr (this=0x555559e68790,
__in_chrg=<optimized out>) at /usr/include/c++/11/bits/shared_ptr.h:122
#33 0x00007fffdf82aa1e in torch::jit::GraphFunction::~GraphFunction (this=0x555559e68710, __in_chrg=<optimized out>)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/torch/csrc/jit/api/function_impl.h:11
#34 0x00007fffdf82aa5a in torch::jit::GraphFunction::~GraphFunction (this=0x555559e68710, __in_chrg=<optimized out>)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/torch/csrc/jit/api/function_impl.h:11
#35 0x00007fffef24c7ce in std::default_delete<torch::jit::Function>::operator() (this=0x555559e66458,
__ptr=0x555559e68710) at /usr/include/c++/11/bits/unique_ptr.h:85
#36 0x00007fffef23c448 in std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> >::~unique_ptr (this=0x555559e66458, __in_chrg=<optimized out>) at /usr/include/c++/11/bits/unique_ptr.h:361
#37 0x00007fffef2ba6f3 in std::_Destroy<std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> > > (__pointer=0x555559e66458) at /usr/include/c++/11/bits/stl_construct.h:140
#38 0x00007fffef29f424 in std::_Destroy_aux<false>::__destroy<std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> >*> (__first=0x555559e66458, __last=0x555559e66460)
at /usr/include/c++/11/bits/stl_construct.h:152
#39 0x00007fffef28555c in std::_Destroy<std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> >*> (__first=0x555559e66450, __last=0x555559e66460) at /usr/include/c++/11/bits/stl_construct.h:185
#40 0x00007fffef26148f in std::_Destroy<std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> >*, std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> > > (__first=0x555559e66450,
__last=0x555559e66460) at /usr/include/c++/11/bits/alloc_traits.h:746
#41 0x00007fffef2dad91 in std::vector<std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> >, std::allocator<std::unique_ptr<torch::jit::Function, std::default_delete<torch::jit::Function> > > >::~vector (
this=0x555558bede60, __in_chrg=<optimized out>) at /usr/include/c++/11/bits/stl_vector.h:680
#42 0x00007fffef2d1d7a in torch::jit::CompilationUnit::~CompilationUnit (this=0x555558bede60,
__in_chrg=<optimized out>)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/torch/csrc/jit/api/compilation_unit.h:48
#43 0x00007fffef2eed48 in __gnu_cxx::new_allocator<torch::jit::CompilationUnit>::destroy<torch::jit::CompilationUnit> (
this=0x555558bede60, __p=0x555558bede60) at /usr/include/c++/11/ext/new_allocator.h:162
#44 0x00007fffef2eebf3 in std::allocator_traits<std::allocator<torch::jit::CompilationUnit> >::destroy<torch::jit::CompilationUnit> (__a=..., __p=0x555558bede60) at /usr/include/c++/11/bits/alloc_traits.h:531
#45 0x00007fffef2ee407 in std::_Sp_counted_ptr_inplace<torch::jit::CompilationUnit, std::allocator<torch::jit::CompilationUnit>, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x555558bede50)
at /usr/include/c++/11/bits/shared_ptr_base.h:528
#46 0x00007fffee809326 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x555558bede50)
at /usr/include/c++/11/bits/shared_ptr_base.h:168
#47 0x00007fffee803465 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x7ffb98f11310,
__in_chrg=<optimized out>) at /usr/include/c++/11/bits/shared_ptr_base.h:702
#48 0x00007fffeed6dc84 in std::__shared_ptr<torch::jit::CompilationUnit, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (
this=0x7ffb98f11308, __in_chrg=<optimized out>) at /usr/include/c++/11/bits/shared_ptr_base.h:1149
#49 0x00007fffeed6dcce in std::shared_ptr<torch::jit::CompilationUnit>::~shared_ptr (this=0x7ffb98f11308,
__in_chrg=<optimized out>) at /usr/include/c++/11/bits/shared_ptr.h:122
#50 0x00007fffef259523 in pybind11::class_<torch::jit::CompilationUnit, std::shared_ptr<torch::jit::CompilationUnit> >::dealloc (v_h=...)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/third_party/pybind11/include/pybind11/pybind11.h:1863
#51 0x00007fffee7fd46a in pybind11::detail::clear_instance (self=0x7ffb98f112f0)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/third_party/pybind11/include/pybind11/detail/class.h:424
#52 0x00007fffee7fd581 in pybind11::detail::pybind11_object_dealloc (self=0x7ffb98f112f0)
at /home/haozhe/rebase/frameworks.ai.pytorch.private-cpu/third_party/pybind11/include/pybind11/detail/class.h:448
#53 0x0000555555663fac in _Py_Dealloc (op=<optimized out>)
at /tmp/build/80754af9/python-split_1634043551344/work/Objects/object.c:2215
#54 _Py_DECREF () at /tmp/build/80754af9/python-split_1634043551344/work/Include/object.h:478
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitd9f822b
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.1.0-1ubuntu1~20.04) 11.1.0
Clang version: 9.0.1-12
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 2600.000
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==2.1.0a0+gitd9f822b
[pip3] torchvision==0.15.0a0+135a0f9
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.1.0 pypi_0 pypi
[conda] mkl-static 2022.1.0 pypi_0 pypi
[conda] numpy 1.21.2 py38hd8d4704_0
[conda] numpy-base 1.21.2 py38h2b8c604_0
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchvision 0.15.0a0+135a0f9 dev_0 <develop>
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 11 |
3,218 | 96,716 |
[MPS] `.to('mps')` zeroes out elements in tensors taking up >=2^32 bytes
|
triaged, module: mps
|
### 🐛 Describe the bug
`.to('mps')` zeroes out elements in float or int tensors that are at least 2^32 bytes in size. In the example below, all assertions pass.
```python
import torch
def test_to(dtype):
    element_size = torch.ones(1, dtype=dtype).element_size()
    width = 32_768
    height = 2**32 // width // element_size
    lt_2to32 = torch.ones(width-1, height, dtype=dtype)
    eq_2to32 = torch.ones(width, height, dtype=dtype)
    assert element_size * lt_2to32.numel() < 2**32   # 2^31.9999 bytes
    assert element_size * eq_2to32.numel() == 2**32  # 2^32 bytes
    assert torch.all(lt_2to32.to("mps").to("cpu") == 1)  # 2^31.9999 bytes -> all ones
    assert torch.all(eq_2to32.to("mps").to("cpu") == 0)  # 2^32 bytes -> all zeros
    del lt_2to32, eq_2to32

test_to(torch.float16)
test_to(torch.float32)
test_to(torch.int32)
test_to(torch.int64)
```
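Until this is fixed, here is a minimal workaround sketch, assuming splitting along dim 0 is acceptable: move the tensor over in chunks that each stay below 2^32 bytes, then concatenate on the device. (`to_mps_chunked` is a name I made up; the 2^31-byte chunk cap is an arbitrary safe choice.)
```python
import torch

def to_mps_chunked(t: torch.Tensor, max_bytes: int = 2**31) -> torch.Tensor:
    # How many rows fit under max_bytes per chunk (at least one).
    rows = max(1, max_bytes // (t[0].numel() * t.element_size()))
    return torch.cat([p.to("mps") for p in t.split(rows, dim=0)], dim=0)
```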
### Versions
```
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.7
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 (main, Aug 29 2022, 10:06:59) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-13.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.2
[pip3] torch==1.13.1
[pip3] torchaudio==2.0.0
[pip3] torchtext==0.13.1
[pip3] torchvision==0.15.0
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
3,219 | 96,713 |
[Inductor] [CPU] Huggingface model MobileBertForQuestionAnswering performance regression > 10% on 2023-03-12 nightly release
|
module: performance, triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
Compared with 2023-03-08, there is a performance regression for the huggingface model **MobileBertForQuestionAnswering** on the [TorchInductor CPU Performance Dashboard](https://github.com/pytorch/pytorch/issues/93531#issuecomment-1467230382) on 2023-03-12, as below:
| 2023-03-12 | | | | 2023-03-08 | | | | Result Comp | | |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| batch_size | speedup | inductor | eager | batch_size | speedup | inductor | eager | speedup ratio | eager ratio | inductor ratio |
|1 |1.2856 |0.0493953 |0.063502598 |1 |1.5266 |0.0414146 |0.063223528 |0.84 |1 |0.84
2023-03-12 nightly release SW information:
SW | Nightly commit | Master/Main commit
-- | -- | --
Pytorch|[1238ae3](https://github.com/pytorch/pytorch/commit/1238ae3)|[82d3d05](https://github.com/pytorch/pytorch/commit/82d3d05)
Torchbench|/|[916ab70](https://github.com/pytorch/benchmark/commit/916ab70b96f1a57de2a7cdf2823938416926b8b0)
torchaudio|[a9a520a](https://github.com/pytorch/audio/commit/a9a520af0810a843da2c551c4e5eeee33060db40)|[de54d86](https://github.com/pytorch/audio/commit/de54d864b271a5dde42aa650c828de18ea37580a)
torchtext|[08c309a](https://github.com/pytorch/text/commit/08c309a8373a0541f0baed20e8987fd564c9ef03)| [bb0efcd](https://github.com/pytorch/text/commit/bb0efcd41d34661ae167029162d0fc73344140a8)
torchvision|[9c365bd](https://github.com/pytorch/vision/commit/9c365bd9b1d500a6e2a9e22d59fcfac27db83320)|[c05ad81](https://github.com/pytorch/vision/commit/c05ad81bcb8a042a67abfd57efabb0778584b922)
torchdata|[eed68d9](https://github.com/pytorch/data/commit/eed68d98b5dfcbbb498d400be48e0b60bc99d7ce)|[e14927f](https://github.com/pytorch/data/commit/e14927f32a6cf99658f72b765425bf032116f724)
dynamo_benchmarks|[cb47373](https://github.com/pytorch/pytorch/commit/cb47373166c3cba2514589e917e228ca9f9682ca)|/
2023-03-08 nightly release SW information:
SW | Nightly commit | Master/Main commit
-- | -- | --
Pytorch|[47cb449](https://github.com/pytorch/pytorch/commit/47cb449)|[3a42752](https://github.com/pytorch/pytorch/commit/3a42752)
Torchbench|/|[916ab70](https://github.com/pytorch/benchmark/commit/916ab70b96f1a57de2a7cdf2823938416926b8b0)
torchaudio|[a9a520a](https://github.com/pytorch/audio/commit/a9a520af0810a843da2c551c4e5eeee33060db40)|[de54d86](https://github.com/pytorch/audio/commit/de54d864b271a5dde42aa650c828de18ea37580a)
torchtext|[08c309a](https://github.com/pytorch/text/commit/08c309a8373a0541f0baed20e8987fd564c9ef03)| [bb0efcd](https://github.com/pytorch/text/commit/bb0efcd41d34661ae167029162d0fc73344140a8)
torchvision|[9c365bd](https://github.com/pytorch/vision/commit/9c365bd9b1d500a6e2a9e22d59fcfac27db83320)|[c05ad81](https://github.com/pytorch/vision/commit/c05ad81bcb8a042a67abfd57efabb0778584b922)
torchdata|[eed68d9](https://github.com/pytorch/data/commit/eed68d98b5dfcbbb498d400be48e0b60bc99d7ce)|[e14927f](https://github.com/pytorch/data/commit/e14927f32a6cf99658f72b765425bf032116f724)
dynamo_benchmarks|[cb47373](https://github.com/pytorch/pytorch/commit/cb47373166c3cba2514589e917e228ca9f9682ca)|/
Graph dump by cosim:
2023-03-12:
[graph.txt](https://github.com/pytorch/pytorch/files/10963542/graph.txt)
2023-03-08:
[graph.txt](https://github.com/pytorch/pytorch/files/10963543/graph.txt)
### Versions
Minified repro:
```
python -m torch.backends.xeon.run_cpu --core_list 0 --ncores_per_instance 1 benchmarks/dynamo/huggingface.py --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only MobileBertForQuestionAnswering --cold_start_latency --batch_size 1 --threads 1
```
cc @ngimel @ezyang @soumith @msaroufim @wconstab @bdhirsh
| 2 |
3,220 | 96,704 |
`logical_xx` operations trigger INTERNAL ASSERT FAIL when `input` is complex tensor on cuda and `other` is on cpu
|
triaged, module: complex
|
### 🐛 Describe the bug
`logical_and`, `logical_or`, and `logical_xor` operations trigger an INTERNAL ASSERT FAIL when `input` is a complex tensor on cuda and `other` is on cpu:
```py
import torch
from torch.func import jacrev
import torch.nn as nn
from torch.autograd.functional import jacobian
torch.manual_seed(420)
x = torch.tensor(1j)
# x = torch.tensor(1)
def func(x):
y = torch.tensor(0)
z = torch.logical_and(x, y)
return z
print(func(x))
# tensor(False)
print(func(x.cuda()))
# RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAILED
# at "../aten/src/ATen/native/cuda/JitLoops.cuh":79,
# please report a bug to PyTorch. argument 2:
# expected a CUDA device but found cpu
```
But when `x` is a real (non-complex) tensor, it returns without any exception:
```py
import torch
from torch.func import jacrev
import torch.nn as nn
from torch.autograd.functional import jacobian
torch.manual_seed(420)
x = torch.tensor(1)
def func(x):
y = torch.tensor(0)
z = torch.logical_and(x, y)
return z
print(func(x))
# tensor(False)
print(func(x.cuda()))
# tensor(False, device='cuda:0')
```
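As a workaround sketch (assuming it is acceptable to place `other` explicitly), constructing `y` on the same device as `x` should sidestep the cross-device path that hits the assert:
```python
import torch

def func_fixed(x):
    # Build `y` on x's device instead of defaulting to CPU.
    y = torch.tensor(0, device=x.device)
    return torch.logical_and(x, y)

print(func_fixed(torch.tensor(1j, device="cuda")))
```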
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 1 |
3,221 | 96,693 |
torch.compile mode="max-autotune" precision appears to be lower
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
I'm sorry, this is going to be a terrible bug report, but I wasn't able to simplify the problem.
I'm running a training run with a model compiled with `torch.compile`. The blue line below is the loss with the `default` mode, and the red line is the loss with `max-autotune`; the training runs are otherwise identical:

It appears as if precision/stability is noticeably lower on the max-autotune training run. This is running on an RTX A6000 GPU. The forward pass is wrapped in `torch.autocast`; there are no graph breaks or warning messages. Max-autotune code is also noticeably faster, so it is definitely doing something that the default mode is not.
Feel free to close this issue if this is not enough info. I could also test more specific things or try to make a mini version to get to the bottom of this, but I have no idea what to look for, or which part would be relevant.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
```
PyTorch version: 2.1.0.dev20230307
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Pop!_OS 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Ti Laptop GPU
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] lion-pytorch==0.0.7
[pip3] numpy==1.23.5
[pip3] torch==2.1.0.dev20230307
[pip3] torchaudio==2.0.0.dev20230307
[pip3] torchvision==0.15.0.dev20230307
[conda] blas 1.0 mkl
[conda] lion-pytorch 0.0.7 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.1.0.dev20230307 py3.10_cuda11.8_cudnn8.7.0_0 pytorch-nightly
[conda] pytorch-cuda 11.8 h7e8668a_3 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230307 py310_cu118 pytorch-nightly
[conda] torchtriton 2.0.0+b8b470bc59 py310 pytorch-nightly
[conda] torchvision 0.15.0.dev20230307 py310_cu118 pytorch-nightly
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @soumith @ngimel
| 7 |
3,222 | 96,692 |
[H100] `test_ops.py::TestFakeTensorCUDA.test_fake_crossref_backward_amp_nn_functional_scaled_dot_product_attention_cuda_float32` failed
|
module: cuda, triaged, module: fakeTensor
|
### 🐛 Describe the bug
```console
$ cd /path/to/pytorch/test
$ pytest test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_scaled_dot_product_attention_cuda_float32
```
```
================================================================================================================================================================================== test session starts ===================================================================================================================================================================================
platform linux -- Python 3.10.9, pytest-7.2.2, pluggy-1.0.0
rootdir: /opt/pytorch/pytorch, configfile: pytest.ini
plugins: xdoctest-1.0.2, xdist-3.2.1, shard-0.1.2, rerunfailures-11.1.2, hypothesis-5.35.1
collected 1 item
Running 1 items in this shard
test_ops.py F [100%]
======================================================================================================================================================================================== FAILURES ========================================================================================================================================================================================
_______________________________________________________________________________________________________________________________________ TestFakeTensorCUDA.test_fake_crossref_backward_amp_nn_functional_scaled_dot_product_attention_cuda_float32 _______________________________________________________________________________________________________________________________________
Unexpected success
================================================================================================================================================================================ short test summary info =================================================================================================================================================================================
FAILED test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_scaled_dot_product_attention_cuda_float32
=================================================================================================================================================================================== 1 failed in 5.64s ====================================================================================================================================================================================
```
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+git6eca391
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.9 (main, Mar 13 2023, 00:37:55) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7413 24-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2222.492
CPU max MHz: 2650.0000
CPU min MHz: 1500.0000
BogoMIPS: 5300.15
Virtualization: AMD-V
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0a0+git6eca391
[pip3] torchvision==0.15.0a0+7d2acaa
[conda] Could not collect
```
cc @ngimel
| 1 |
3,223 | 96,686 |
No GPU found, using CPU during preprocessing Error processing dataset with NsfHifiGAN
|
needs reproduction, module: windows, module: cuda, triaged
|
### 🐛 Describe the bug
Description
I'm trying to process a dataset using the extract_features.py script in Python, which uses the NsfHifiGAN model to generate audio features. However, when I run the script on my GPU machine, I get the following error message:
```
python tools/preprocessing/extract_features.py --config configs/svc_hubert_soft.py --path dataset --clean
2023-03-13 20:45:32.352 | INFO | __main__:<module>:213 - Using 1 workers
2023-03-13 20:45:32.352 | WARNING | __main__:<module>:218 - No GPU found, using CPU
2023-03-13 20:45:32.352 | INFO | __main__:<module>:221 - Cleaning *.npy files...
2023-03-13 20:45:32.383 | INFO | __main__:<module>:227 - Done!
2023-03-13 20:45:32.586 | INFO | __main__:<module>:231 - Found 476 files, processing...
0%| | 0/476 [00:00<?, ?it/s]2023-03-13 20:45:32.586 | INFO | __main__:init:50 - Rank 0 uses device cpu
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7'
Using cache found in C:\Users\MSI/.cache\torch\hub\bshall_hubert_main
2023-03-13 20:45:34.221 | ERROR | __main__:safe_process:192 - Error processing dataset\train\96.wav
2023-03-13 20:45:34.222 | ERROR    | __main__:safe_process:193 - class NsfHifiGAN in fish_diffusion/modules/vocoders/nsf_hifigan/nsf_hifigan.py: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
Steps to reproduce
1. Run the following command:
`python tools/preprocessing/extract_features.py --config configs/svc_hubert_soft.py --path dataset --clean`
2. Expected behavior: the script should process the dataset without any errors.
3. Actual behavior: the script throws the above-mentioned error message.
4. Workarounds attempted: I tried reinstalling the CUDA build of torch, as follows:
`pip uninstall torch torchaudio torchvision -y`
`pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117`
However, this did not resolve the issue; instead, the install almost froze my computer at the "collecting torch" step.
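For completeness, here is a minimal sketch of the loading-side fix the error message itself suggests -- falling back to CPU when CUDA is not visible. The path and call site are illustrative, not fish_diffusion's actual code:
```python
import torch

# "checkpoint.pth" stands in for the NsfHifiGAN weights path.
device = "cuda" if torch.cuda.is_available() else "cpu"
state = torch.load("checkpoint.pth", map_location=torch.device(device))
```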
Thank you in advance for any help or suggestions you can provide. Please let me know if you need any further information.
### Versions
Additional information
I'm running Python 3.10 on a Windows 11 machine.
I have installed PyTorch and other dependencies as required.
The dataset consists of 476 audio .wav files based on my voice.
My NVIDIA model with CUDA drivers
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @ngimel
| 3 |
3,224 | 96,677 |
[FSDP] Make FSDP support local optimizer state_dict
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP currently supports FULL_STATE_DICT, LOCAL_STATE_DICT, and SHARDED_STATE_DICT for the model state_dict. However, for the optimizer state_dict, FSDP only supports FULL_STATE_DICT and SHARDED_STATE_DICT -- the local state_dict version is missing. It is hard for users to do distributed checkpointing if the returned local state_dict is not wrapped with ShardedTensor or DTensor.
**Code pointer:**
1. Add a flag for the local_state_dict version to `_optim_state_dict` in _optim_utils.py.
2. Add a conversion that wraps the plain tensor in a ShardedTensor in
https://github.com/pytorch/pytorch/blob/31137a63a75b40d9721544cc22843ede6b12e79d/torch/distributed/fsdp/_optim_utils.py#L1464
The subsequent logic also needs to be adjusted for local_state_dict.
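For illustration, a rough sketch of the conversion in pointer 2, assuming equal contiguous shards across ranks and an initialized process group -- in reality FSDP would derive the offsets and global size from its flat-parameter metadata (`wrap_local_optim_state` is a name I made up):
```python
import torch.distributed as dist
from torch.distributed._shard.sharded_tensor import Shard, init_from_local_shards

def wrap_local_optim_state(local_tensor, global_numel):
    rank = dist.get_rank()
    # Assumes every rank holds an equal, contiguous slice of the flat state.
    offset = rank * local_tensor.numel()
    shard = Shard.from_tensor_and_offsets(local_tensor, [offset], rank)
    return init_from_local_shards([shard], [global_numel])
```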
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
3,225 | 96,670 |
Harden composable fully_shard: Checklist
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
The following are features that should be checked / hardened in order to roll out fully_shard as an alternative to class-based FSDP:
- [ ] Test with ShardedGradScaler
- [ ] Test clip_grad_norm support
- [ ] End-to-end test and unit test for optimizer state checkpointing
- [ ] Support + API for summon_full_params
- [ ] Ability to print/debug the model wrapping structure like class-based FSDP
- [ ] Any integration needed with torch.distributed.checkpoint
... more to be added as uncovered.
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,226 | 96,662 |
torch.compile gets stuck
|
triaged, oncall: pt2, upstream triton
|
### 🐛 Describe the bug
Hi! I found that `torch.compile` gets stuck when compiling the following snippet for CUDA.
```python
import torch

x = torch.rand([2, 1, 1, 1], device='cuda')

def forward():
    a = x.argmax(3)      # [2,1,1]
    b = a.max(2).values  # [2,1]
    c = b.sum(0)         # [1]
    return torch.add(b, c)

fn_compiled = torch.compile(forward)
print(fn_compiled())
```
The hottest function calls given by `perf top` are
```
58.03% libtriton.so [.] mlir::multiRootTopologicalSort
16.37% libtriton.so [.] llvm::DenseMap<mlir::Operation*, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<mlir::Operation*, void>, llvm::detail::DenseSetPair<mlir::Operation
12.57% libtriton.so [.] mlir::(anonymous namespace)::DFSState::addToReadyQueue
3.37% libtriton.so [.] mlir::Value::getDefiningOp
```
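Since `perf top` points at Triton's MLIR passes, one triage sketch (my assumption about where the hang lives): compile the same function for CPU, which goes through Inductor's C++ codegen instead of Triton. If this variant returns promptly, the hang is isolated to the Triton backend:
```python
x_cpu = x.cpu()

def forward_cpu():
    a = x_cpu.argmax(3)
    b = a.max(2).values
    c = b.sum(0)
    return torch.add(b, c)

print(torch.compile(forward_cpu)())  # C++ backend, no Triton involved
```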
### Error logs
_No response_
### Minified repro
_No response_
### Versions
PyTorch version: 2.1.0a0+gitfe05266
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 510.108.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.4
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 1
Model name: AMD Ryzen Threadripper 1950X 16-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2027.719
CPU max MHz: 3400.0000
CPU min MHz: 2200.0000
BogoMIPS: 6786.49
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 1 MiB
L2 cache: 8 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sme sev
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.1.0a0+git30b968f
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,227 | 96,643 |
PyTorch SGEMV uses a single core on AMD CPUs (very slow)
|
module: rocm, triaged
|
### 🐛 Describe the bug
```python
import torch
import time

a = torch.randn((2048, 2048))
b = torch.randn((2048, 2048))
v = torch.randn((2048,))
c = torch.randn((2048, 2))
d = torch.randn((2048, 4))
e = torch.randn((2048, 8))

print("Warmup")
for i in range(5):
    a @ b
    a @ v
    a @ c
    a @ d
    a @ e

print("Bench large mat @ large mat")
totaldur = 0
N = 25
for i in range(N):
    start = time.time()
    a @ b
    totaldur += time.time() - start
print("Average: %.3fms" % (totaldur / N * 1000,))
print()

print("Bench large mat @ vec")
totaldur = 0
N = 1000
for i in range(N):
    start = time.time()
    a @ v
    totaldur += time.time() - start
print("Average: %.1fus" % (totaldur / N * 1000000,))
print()

print("Bench large mat @ 2 columns")
totaldur = 0
for i in range(N):
    start = time.time()
    a @ c
    totaldur += time.time() - start
print("Average: %.1fus" % (totaldur / N * 1000000,))
print()

print("Bench large mat @ 4 columns")
totaldur = 0
for i in range(N):
    start = time.time()
    a @ d
    totaldur += time.time() - start
print("Average: %.1fus" % (totaldur / N * 1000000,))
print()

print("Bench large mat @ 8 columns")
totaldur = 0
for i in range(N):
    start = time.time()
    a @ e
    totaldur += time.time() - start
print("Average: %.1fus" % (totaldur / N * 1000000,))
```
24-core 48-thread Threadripper 2970WX @ 3GHz
128GB RAM, DDR4 2800MHz
Output:
```
Warmup
Bench large mat @ large mat
Average: 64.457ms
Bench large mat @ vec <====== THIS IS VERY SLOW
Average: 1271.8us
Bench large mat @ 2 columns <====== THIS IS SLOW TOO
Average: 351.3us
Bench large mat @ 4 columns <====== THIS IS SLOW TOO
Average: 356.3us
Bench large mat @ 8 columns
Average: 502.6us
```
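A small diagnostic sketch that may help narrow this down (my assumption is that the BLAS backend, not ATen's intra-op pool, picks the single-threaded path): check the thread settings each layer reports, and time the mat-vec routed through the GEMM path.
```python
import torch

print(torch.get_num_threads())           # ATen intra-op thread pool size
print(torch.__config__.parallel_info())  # OpenMP / MKL / backend thread info

a = torch.randn(2048, 2048)
v = torch.randn(2048)
# Mat-vec expressed as a 1-column matmul; if this is much faster than a @ v,
# the slowdown is specific to the SGEMV dispatch.
r = (a @ v.unsqueeze(1)).squeeze(1)
```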
### Versions
Note: this environment uses the torch ROCm build, but the torch CPU build has the same issue.
```
Collecting environment information...
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
PyTorch version: 1.13.1+rocm5.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.2.21151-afdc89f8
OS: NixOS 23.05 (Stoat) (x86_64)
GCC version: (GCC) 12.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.10 (main, Feb 7 2023, 12:19:31) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.15-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon Graphics
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.2.21151
MIOpen runtime version: 2.17.0
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] facenet-pytorch==2.5.2
[pip3] numpy==1.24.1
[pip3] torch==1.13.1+rocm5.2
[pip3] torchaudio==0.13.1+cpu
[pip3] torchvision==0.14.1+cpu
[conda] Could not collect
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport
| 1 |
3,228 | 96,639 |
Completely different output between .pt and .ptl
|
oncall: mobile
|
### 🐛 Describe the bug
I traced a model using `torch.jit.trace()`. After tracing, I can confirm that the result is the same as my original PyTorch model's. However, when I convert it to `.ptl`, the results become drastically different.
How I traced the model:
```
import torch

model = torch.load("model_kld_full.pt")
model.eval()
inp = torch.rand(1, 3, 112, 112)
model_traced = torch.jit.trace(model, inp)
model_traced.save('model_kld_tr.pt')
```
running:
```
emb = model(inp)
model_traced = torch.jit.load("model_kld_tr.pt")
emb2 = model_traced(inp)
assert torch.equal(emb, emb2)
```
Works.
Then I convert from torchscript `.pt` to `.ptl`
```
from torch.utils.mobile_optimizer import optimize_for_mobile

model_traced = torch.jit.load("model_kld_tr.pt")
traced_script_module_optimized = optimize_for_mobile(model_traced)
traced_script_module_optimized._save_for_lite_interpreter("model_kld_tr.ptl")
```
running
```
emb = model(inp)
model_traced = torch.jit.load("model_kld_tr.ptl") #ptl version
emb2 = model_traced(inp)
assert torch.equal(emb, emb2)
```
This produced completely different output.
The Android team has also complained about this from their end.
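One isolation step that might help, reusing `model_traced` and `inp` from above (a sketch on my part, under the assumption that `optimize_for_mobile` is the variable to test): save a lite-interpreter model *without* the optimization passes and compare with a tolerance-based check.
```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Lite save without optimize_for_mobile, to separate the optimization passes
# from the lite save/load path itself.
model_traced._save_for_lite_interpreter("model_kld_noopt.ptl")
lite = _load_for_lite_interpreter("model_kld_noopt.ptl")
print(torch.allclose(model_traced(inp), lite(inp), atol=1e-5))
```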
### Versions
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (conda-forge gcc 12.2.0-19) 12.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-40-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 6000
GPU 1: Quadro RTX 6000
GPU 2: Quadro RTX 6000
GPU 3: Quadro RTX 6000
Nvidia driver version: 510.73.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402P 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] magma 2.7.0 h8db6258_0
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.5 py38h14f4228_0
[conda] numpy-base 1.23.5 py38h31eccc5_0
[conda] pytorch 1.13.1 py3.8_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.13.1 py38_cu116 pytorch
[conda] torchvision 0.14.1 py38_cu116 pytorch
| 2 |
3,229 | 96,637 |
Do not allow force merge when lint fails and it's not because of broken trunk
|
module: ci, triaged
|
### 🐛 Describe the bug
### Motivation
Lint jobs are not flaky (https://github.com/pytorch/pytorch/issues/93156, https://github.com/pytorch/pytorch/pull/94255) and failures usually end up with reverts (https://github.com/pytorch/pytorch/pull/96580, https://github.com/pytorch/pytorch/pull/96612). Even when force merging, it's a good practice to wait until the jobs finish first.
Arguably, not allowing force merge when lint fails is a better UX than getting the PR reverted "only" because of lint.
#### Proposal
Do not allow force merge when lint fails, unless the failure is caused by broken trunk. In the latter case, where lint jobs are already broken in trunk, the failures are likely not related to the PR.
### Versions
PyTorch CI
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi
| 4 |
3,230 | 96,633 |
Request for adding a Warning/Error when dropout is set to 1.0 in Transformer layers
|
oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
> The model is not "turned off during training". With dropout=1.0, for dropout layers you'll get all zero at train and, apparently, identity at test. I don't think pytorch should have allowed dropout=1.0. It should be ValueError, not sure I get the reasoning there.
>
> Andrej Karpathy (@karpathy), [March 12, 2023](https://twitter.com/karpathy/status/1635062921398206464)
As suggested by Andrej Karpathy, it makes sense to raise an error, or at least a warning, when the user sets dropout to 1.0 in the transformer layer. So it would be great if someone could decide whether this should raise an error or a warning in the above case, and which one.
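A minimal sketch of what the validation could look like (the function name and the choice of rejecting exactly `p == 1.0` are illustrative, not a settled API decision):
```python
def _check_transformer_dropout(p: float) -> None:
    # Mirror the existing 0 <= p <= 1 validation, but reject p == 1.0,
    # which zeroes all activations during training.
    if not 0.0 <= p < 1.0:
        raise ValueError(f"dropout probability must be in [0.0, 1.0), got {p}")
```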
P.S.: I would like to be **assigned** the task of **implementing** this issue.
### Alternatives
_No response_
### Additional context
Could you look into this? @ezyang @gchanan @jerryzh168
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 0 |
3,231 | 96,632 |
torch.cuda.graph "Invalid capture" with torch.linalg.solve
|
triaged, module: cuda graphs
|
### 🐛 Describe the bug
I'm trying a simple CUDAGraph capture example. It works with a torch.matmul operation but reports "Invalid capture" with torch.linalg.solve; see below. I'm hoping to understand what is and isn't OK inside torch.cuda.graph. If this is due to an undetermined number of iterations inside torch.linalg.solve, can a fixed number of iterations be set?
```python
import torch

N = 640
# Placeholders used for capture
A = torch.randn(N, N, device='cuda')
b = torch.randn(N, 1, device='cuda')

# warmup
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for i in range(3):
        # dat = torch.matmul(A, b)      # this command runs
        dat = torch.linalg.solve(A, b)  # this command does NOT run
torch.cuda.current_stream().wait_stream(s)

# capture
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    # dat = torch.matmul(A, b)      # this command runs
    dat = torch.linalg.solve(A, b)  # this command does NOT run

real_inputs = [torch.rand_like(A) for _ in range(10)]
real_targets = [torch.rand_like(b) for _ in range(10)]
for data, target in zip(real_inputs, real_targets):
    # Fills the graph's input memory with new data to compute on
    A.copy_(data)
    b.copy_(target)
    g.replay()
    # print(dat - torch.matmul(A, b))
```
The returned information is
```
---------------------------------------------------------------------------
_LinAlgError Traceback (most recent call last)
[<ipython-input-2-f302cf725f49>](https://localhost:8080/#) in <module>
22 #dat = torch.matmul(A, b) # this command runs
---> 23 dat = torch.linalg.solve(A, b) # this command does NOT run
24
_LinAlgError: torch.linalg.solve: The solver failed because the input matrix is singular.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
2 frames
[<ipython-input-2-f302cf725f49>](https://localhost:8080/#) in <module>
21 with torch.cuda.graph(g):
22 #dat = torch.matmul(A, b) # this command runs
---> 23 dat = torch.linalg.solve(A, b) # this command does NOT run
24
25 real_inputs = [torch.rand_like(A) for _ in range(10)]
[/usr/local/lib/python3.9/dist-packages/torch/cuda/graphs.py](https://localhost:8080/#) in __exit__(self, exc_type, exc_value, traceback)
157
158 def __exit__(self, exc_type, exc_value, traceback):
--> 159 self.cuda_graph.capture_end()
160 self.stream_ctx.__exit__(exc_type, exc_value, traceback)
161 # returning None should propagate exceptions from either capture_end or stream_ctx.__exit__()
[/usr/local/lib/python3.9/dist-packages/torch/cuda/graphs.py](https://localhost:8080/#) in capture_end(self)
79 which call ``capture_end`` internally.
80 """
---> 81 super(CUDAGraph, self).capture_end()
82
83 def replay(self):
RuntimeError: Invalid capture.
```
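My reading of the trace, which I have not verified against the solver source: `torch.linalg.solve` copies the factorization's `info` flag back to the host to decide whether to raise `_LinAlgError`, and that device-to-host sync is not capturable, so the capture is invalidated regardless of the placeholder matrix. Below is a sketch that keeps the check outside the captured region, reusing the `A` and `b` placeholders from above (whether `lu_solve` is fully capture-safe on every backend is also unverified):
```python
# Factor outside capture -- this is where the singularity check and the
# host sync happen -- then capture only the triangular solves.
LU, pivots = torch.linalg.lu_factor(A)
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    dat = torch.linalg.lu_solve(LU, pivots, b)
# When A changes, re-run lu_factor outside the graph and copy the results
# into the LU/pivots placeholders before g.replay().
```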
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.6
Libc version: glibc-2.31
Python version: 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.1+cu116
[pip3] torchaudio==0.13.1+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1+cu116
[conda] Could not collect
cc @mcarilli @ezyang
| 1 |
3,232 | 96,629 |
Dataloader should kill & restart workers when timeout is hit
|
module: dataloader, triaged
|
### 🚀 The feature, motivation and pitch
When using `timeout`, instead of crashing when the timeout is hit, the Dataloader should kill and restart the problematic workers. Ideally, a worker should also be able to report the stack frame it is stuck on when being killed. This would be extremely useful for debugging code that works when `num_workers=0` but doesn't when `num_workers>0`. It can also save quite a bit of frustration when training hangs.
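As a stop-gap, here is a minimal user-side sketch of this behavior. The `make_loader` factory is a hypothetical stand-in for however the DataLoader is constructed, and note it restarts iteration from the beginning rather than restarting workers in place as proposed:
```python
def robust_batches(make_loader, max_restarts=3):
    """Yield batches, recreating the DataLoader whenever its timeout fires."""
    restarts = 0
    it = iter(make_loader())
    while True:
        try:
            yield next(it)
        except StopIteration:
            return
        except RuntimeError as err:
            # A hit timeout currently surfaces as a RuntimeError
            if "timed out" not in str(err) or restarts >= max_restarts:
                raise
            restarts += 1
            del it                    # tears down the stuck worker processes
            it = iter(make_loader())  # spins up fresh ones

# usage: for batch in robust_batches(lambda: DataLoader(ds, num_workers=4, timeout=30)): ...
```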
### Alternatives
_No response_
### Additional context
Currently, I am running into an issue where the dataloader keeps hanging. I suspect it has something to do with my machine, but I have no way to debug it since the workers hang instead of crashing. No error is reported when `timeout` is used, and I do not run into any issues when using `num_workers=0`. Hence, having the workers be killed on hang would be great.
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 0 |
3,233 | 96,622 |
[fix] angle for -0.0
|
module: cpu, module: mkldnn, open source, NNC, release notes: quantization, topic: not user facing, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor, module: export
|
Fixes #78413
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @EikanWang @voznesenskym @penguinwu @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @anijain2305
| 2 |
3,234 | 96,617 |
Build errors in two Vulkan files
|
oncall: mobile, module: vulkan
|
### 🐛 Describe the bug
Build problems occur in two files with GCC 12.2.0:
```
aten/src/ATen/native/vulkan/impl/Arithmetic.cpp:20:1: error: control reaches end of non-void function [-Werror=return-type]
aten/src/ATen/native/vulkan/impl/Registry.cpp:36:27: error: loop variable 'key' of type 'const std::string&' {aka 'const std::__cxx11::basic_string<char>&'} binds to a temporary constructed from type 'const char* const' [-Werror=range-loop-construct]
```
### Versions
I am trying to build v2.0.0-rc6 from the release/2.0 branch.
| 0 |
3,235 | 96,615 |
Tensor Permutation Along Given Axis
|
feature, triaged, module: python frontend
|
### 🚀 The feature, motivation and pitch
The current `torch.randperm`-based shuffling gives a shared permutation ordering, as in the example below:
```python
import torch
# permute on the second dimension
dim = 1
x = torch.randn(3, 5, 4)
perm = torch.randperm(x.size(dim))
shuffled_x = x[:, perm, :]
```
Here `perm` shuffles the second dimension, but the same ordering given by `perm` is applied to every slice along the other dimensions.
It would be nice to have a function that permutes the given dimension of `x` **independently** for each slice.
### Alternatives
I've written a possible solution that I have tested myself, and it works as intended.
```python
import torch
def shuffle_tensor(x: torch.Tensor, dim: int = 0) -> torch.Tensor:
    """
    Shuffle the tensor along the specified dimension independently
    for each slice along that dimension.
    [NOTE] this is different from torch.randperm-based shuffling, which
    applies a shared permutation ordering.
    For example, for a 2D tensor:
        x = tensor([[0, 0, 1, 6, 2],
                    [5, 4, 7, 2, 4]])
    After shuffling, `x` becomes:
        tensor([[0, 0, 6, 2, 1],
                [5, 7, 2, 4, 4]])
    This works for any given dimension `dim` for an arbitrary shape input tensor `x`.
    Args:
        x (torch.Tensor): The tensor to shuffle.
        dim (int, optional): The dimension along which to shuffle. Default is 0.
    Returns:
        torch.Tensor: The shuffled tensor.
    """
    # Get the shape of x up to and including the specified dimension
    x_shape = tuple(x.shape[:dim + 1])
    # Get the indices that would sort a random tensor along the specified dimension
    indices = torch.argsort(torch.rand(x_shape), dim=dim)
    # Add singleton dimensions to indices to match the shape of x
    n = x.dim() - dim - 1
    indices = indices[(...,) + (None,) * n]
    indices = indices.to(x.device) * torch.ones_like(x)
    indices = indices.long()
    # Shuffle the tensor using gather
    shuffled_x = torch.gather(x, dim=dim, index=indices)
    return shuffled_x
```
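A quick sanity check of the helper above (the printed values are illustrative, since they depend on the RNG state):
```python
x = torch.arange(10).reshape(2, 5)
print(shuffle_tensor(x, dim=1))
# each row is permuted independently, e.g.:
# tensor([[2, 0, 4, 1, 3],
#         [7, 9, 5, 8, 6]])
```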
Hopefully this can be added in a future PyTorch update.
Thanks,
Jie
### Additional context
_No response_
cc @albanD
| 0 |
3,236 | 96,614 |
[MPS] Incorrect results for cumsum with bool tensors
|
triaged, module: mps
|
### 🐛 Describe the bug
MPS produces incorrect results for cumsum with bool tensors. To reproduce this issue, run cumsum on the same bool tensor with CPU and MPS and compare the result.
```python
import torch
input = torch.randn(1000,2) < 0.5
mps_result = input.to('mps').cumsum(0)
cpu_result = input.cumsum(0)
torch.testing.assert_close(mps_result, cpu_result, check_device=False)
```
Output:
```
Traceback (most recent call last):
File "/Users/brkirch/Desktop/test.py", line 6, in <module>
torch.testing.assert_close(mps_result, cpu_result, check_device=False)
File "/Users/brkirch/stable-diffusion-webui/venv-torch-nightly/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1514, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not equal!
Mismatched elements: 1619 / 2000 (81.0%)
Greatest absolute difference: 768 at index (913, 1)
Greatest relative difference: 2.0 at index (190, 1)
```
A workaround is to cast the bool tensor to int32 before the cumsum operation. Also, MPS support for int64 with cumsum is currently broken on macOS 13.3 beta, see #96610.
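Concretely, the stated workaround looks like:
```python
# Workaround sketch: cast bool -> int32 before the cumsum on MPS
mps_result = input.to('mps').to(torch.int32).cumsum(0)
cpu_result = input.to(torch.int32).cumsum(0)
torch.testing.assert_close(mps_result, cpu_result, check_device=False)
```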
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230311
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.10.10 (main, Feb 8 2023, 05:34:50) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.7.0
[pip3] torch==2.1.0.dev20230311
[pip3] torchvision==0.15.0.dev20230311
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
3,237 | 96,613 |
The output of torch.histc is incorrect on both CPU and CUDA
|
triaged, module: edge cases
|
### 🐛 Describe the bug
```python3
import torch
torch.set_printoptions(linewidth=160)
print("case 1", torch.histc(torch.linspace(0, 1, 2**28, device="cuda"), bins=3))
print("case 2", torch.histc(torch.linspace(0, 1, 2**28), bins=3))
print("case 3", torch.histc(torch.linspace(0, 1, 2**28, device="cuda"), bins=2048, min=0, max=2047))
print("case 4", torch.histc(torch.linspace(0, 1, 2**28), bins=2048, min=0, max=2047))
```
The outputs of cases 2, 3, and 4 are wrong. (Note that the leading bin in case 3 is exactly 16777216 = 2**24, the point at which a float32 counter can no longer represent `n + 1`, which suggests the counts may saturate when accumulated in float32.)
```
case 1 tensor([89478488., 89478488., 89478480.], device='cuda:0')
case 2 tensor([50331648., 55924060., 50331648.])
case 3 tensor([16777216., 131081., 0., ..., 0., 0., 0.], device='cuda:0')
case 4 tensor([1.3422e+08, 1.3108e+05, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00])
```
Expected results:
```python3
import numpy as np
print("case 1 & 2", np.histogram(np.linspace(0, 1, 2**28), bins=3)[0])
print("case 3 & 4", np.histogram(np.linspace(0, 1, 2**28), bins=2048, range=(0, 2047))[0])
'''
case 1 & 2 [89478485 89478485 89478486]
case 3 & 4 [268304384 131072 0 ... 0 0 0]
'''
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Professional
GCC version: (x86_64-posix-sjlj-rev0, Built by MinGW-W64 project) 12.2.0
Clang version: 14.0.6
CMake version: version 3.25.3
Libc version: N/A
Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 528.49
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2904
DeviceID=CPU0
Family=198
L2CacheSize=2048
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2904
Name=Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==1.13.1+cu117
[pip3] torchaudio==0.13.1+cu117
[pip3] torchvision==0.14.1+cu117
[conda] Could not collect
```
| 7 |
3,238 | 96,609 |
Triton compile error for pad + broadcast + pad on GPU
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Hi! I found that this snippet runs well in eager mode but fails to compile.
```
import torch
a = torch.rand((1,1,2), device='cuda')
b = torch.rand((1,1,2,3,1), device='cuda')
def forward():
    c = torch.nn.functional.pad(a, (0, 1, 0, 0), 'reflect')  # [1,1,3]
    d = torch.add(b, c)  # [1,1,2,3,1] + [1,1,3] -> [1,1,2,3,3]
    return torch.nn.functional.pad(d, (-2, 0, 0, 0, 0, 0, 0, 0, 0, 1))
fn_compiled = torch.compile(forward)
print(fn_compiled())
```
### Error logs
```
error: 'tt.load' op requires the same shape for all operands and results
Traceback (most recent call last):
File "/home/su/accdiff/test.py", line 9, in <module>
print(fn_compiled())
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/eval_frame.py", line 254, in _fn
return fn(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/eval_frame.py", line 391, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 406, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 105, in _fn
return fn(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 263, in _convert_frame_assert
return _compile(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 326, in _compile
out_code = transform_code_object(code, transform)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 313, in transform
tracer.run()
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 1841, in run
super().run()
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 597, in run
and self.step()
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 560, in step
getattr(self, inst.opname)(inst)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 1920, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 545, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 615, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 701, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 697, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/debug_utils.py", line 1064, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/su/accdiff/thirdparty/pytorch/torch/__init__.py", line 1382, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/compile_fx.py", line 488, in compile_fx
return aot_autograd(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 2874, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 2555, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 1735, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 1346, in aot_dispatch_base
compiled_fw = aot_config.fw_compiler(fw_module, flat_args_with_views_handled)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/compile_fx.py", line 462, in fw_compiler
return inner_compile(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/debug_utils.py", line 598, in debug_wrapper
compiled_fn = compiler_fn(gm, example_inputs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/debug.py", line 239, in inner
return fn(*args, **kwargs)
File "/usr/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/compile_fx.py", line 180, in compile_fx_inner
compiled_fn = graph.compile_to_fn()
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/graph.py", line 633, in compile_to_fn
return self.compile_to_module().call
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/graph.py", line 622, in compile_to_module
mod = PyCodeCache.load(code)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 608, in load
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_su/fw/cfwhttjndmawak3ik6e2a353vdcgchxg56mbs6ua5cygmb6hixmr.py", line 67, in <module>
async_compile.wait(globals())
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 795, in wait
scope[key] = result.result()
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 653, in result
self.future.result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised:
RuntimeError: PassManager::run failed
```
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitfe05266
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 510.108.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.4
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 1
Model name: AMD Ryzen Threadripper 1950X 16-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3400.0000
CPU min MHz: 2200.0000
BogoMIPS: 6786.49
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 1 MiB
L2 cache: 8 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sme sev
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.1.0a0+git1e69615
[conda] Could not collect
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,239 | 96,608 |
No matching distribution found for torch==1.13.1+cu117
|
oncall: binaries, triaged
|
### 🐛 Describe the bug
Hi, I am trying to install PyTorch with the specific CUDA version 11.7, and I can only use pip for now. But the command recommended on the official website for Linux / pip / Python / CUDA 11.7 is the generic "pip3 install torch torchvision torchaudio", and this command always gives me a CUDA 11.4 build, which is not what I want. However, when I tried to specify the version based on commands suggested on this forum or others (e.g. GitHub, StackOverflow, etc.), it always failed with a 403 Forbidden error and "no matching distribution found for torch==1.13.1+cu117". I have attached a screenshot as an example. Could anybody please help me?

My Python version is either 3.8.10 or 3.10.2 and my pip version is either 21.2.4 or 23.0.1 (the default and the updated version); I tried both of each and none of them worked.
Many thanks in advance.
### Versions
I am actually using a cloud computer, so I don't have the rights to download this script. I am very sorry about this.
cc @ezyang @seemethere @malfet
| 2 |
3,240 | 96,602 |
[MPS] softmax returns NaN attention probabilities for large tensors, in float16 and float32.
|
triaged, module: mps
|
### 🐛 Describe the bug
I've been making large images in stable-diffusion in float16 on my 4090 for a while.
I tried the same on my M1 Max, generating an image at a size which _is_ possible in float16 on CUDA.
I'm using a recent PyTorch nightly (`2.1.0.dev20230310`) on the macOS 13.3 public beta.
It fails in float32 also.
```python
from torch import full, softmax, FloatTensor
import torch
# fill value is based on mean() from some real attention scores I had
attention_scores: FloatTensor = full([10, 12416, 12416], 0.0598, device='mps', dtype=torch.float16)
attention_probs: FloatTensor = attention_scores.softmax(dim=-1)
assert not attention_probs.isnan().any().item(), "found NaN in attention_probs"
# interestingly, attention_scores *also* becomes NaN. does softmax() mutate the input?
```
To pinpoint _where_ in the softmax it fails, here's a decomposed softmax:
```python
from torch import full, FloatTensor
import torch
attention_scores: FloatTensor = full([10, 12416, 12416], 0.0598, device='mps', dtype=torch.float16)
def softmax(x: FloatTensor, dim=1) -> FloatTensor:
    maxes = torch.max(x, dim, keepdim=True).values
    assert not maxes.isnan().any().item(), "found NaN in maxes"
    diffs = x - maxes
    assert not diffs.isnan().any().item(), "found NaN in diffs"  # this assertion gets tripped
    x_exp = torch.exp(diffs)
    assert not x_exp.isnan().any().item(), "found NaN in x_exp"
    x_exp_sum = torch.sum(x_exp, dim, keepdim=True)
    assert not x_exp_sum.isnan().any().item(), "found NaN in x_exp_sum"
    quotient = x_exp / x_exp_sum
    assert not quotient.isnan().any().item(), "found NaN in quotient"
    return quotient
attention_probs: FloatTensor = softmax(attention_scores, dim=-1)
assert not attention_probs.isnan().any().item(), "found NaN in attention_probs"
```
Interestingly, the NaNs are introduced when computing `diffs = x - maxes`, even though this should really just be computing lots of `0.0598 - 0.0598 = 0`s.
I have exported a realistic `attention_scores` tensor, but it's 3 GB and I'm not sure how to share a file that large. Tell me if it'd be useful to have.
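If the failure is size-related, one possible mitigation to try (an untested guess on my part) is computing the softmax slice-by-slice, so each MPS kernel sees a smaller tensor:
```python
# Untested workaround sketch: softmax each [12416, 12416] slice separately
attention_probs = torch.stack(
    [chunk.softmax(dim=-1) for chunk in attention_scores.unbind(dim=0)]
)
```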
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230310
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:26:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.3-arm64-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] numpy==1.24.2
[pip3] torch==2.1.0.dev20230310
[pip3] torchdiffeq==0.2.3
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.0.dev20230310
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch 2.1.0.dev20230310 py3.11_0 pytorch-nightly
[conda] sympy 1.11.1 py311_0 pytorch-nightly
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchsde 0.2.5 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230310 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
3,241 | 96,595 |
Why doesn't PyTorch install the REAL nvidia cuDNN pip package?
|
oncall: binaries, module: cuda, triaged
|
### 🐛 Describe the bug
I requested that PyTorch upgrade the cuDNN v8.5 version that it was previously bundling to v8.7 due to a serious perf issue in [92288](https://github.com/pytorch/pytorch/issues/92288)
When I pip install the default torch 1.13.1, it also installs nvidia-cudnn-cu11 to provide the old cuDNN libraries. For some reason, to fix issue-92288, instead of upgrading "THE" cuDNN package to v8.7 you simply threw the cuDNN v8.7 libraries into torch/lib.
NVIDIA released cuDNN v8.7 long ago and recently released v8.8. Why not just install the REAL THING instead of copying out the libraries and leaving nearly 1 GB of the old nvidia-cudnn-cu11==8.5.0.96 stuff around to potentially cause conflicts and waste disk space?
Of course, if I don't "UPGRADE" an old PyTorch 1.13 to V2 and just "INSTALL" V2 in a clean env, then V2 seems to no longer have the dependency on nvidia-cudnn, so I don't get two copies of the libs. But here is the problem: I know how to see EXACTLY where the libraries loaded into my Python process come from, using "pmap" on Linux. If I upgrade nvidia-cudnn-cu11 to v8.8, torch DOES NOT use the cuDNN libraries in the OFFICIAL cuDNN package; it sees the torch/lib copies first and loads those. I can only imagine the problems less sophisticated users will incur when one day someone says: "Oh, BTW, if you upgrade to cuDNN v8.8 in your env you can fix that issue you've been running into." Stealing a pile of libraries from cuDNN isn't a real install of a package.
### Versions
.
cc @ezyang @seemethere @malfet @ngimel
| 6 |
3,242 | 96,585 |
Proposal: Disable GC in test suite; GC after every test case
|
triaged, module: flaky-tests, module: infra, module: testing, module: devx
|
### 🐛 Describe the bug
GC triggering at unpredictable times is sometimes a cause of nondeterminism in unit tests. It seems to me that it should be feasible to disable GC entirely and only run GC at the end of tests (and perhaps selectively enable GC for particular tests that need it to work correctly). WDYT?
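A sketch of what this could look like at the TestCase level (the class name is hypothetical):
```python
import gc
import unittest

class GcControlledTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        gc.disable()  # no automatic collections mid-test

    def tearDown(self):
        gc.collect()  # deterministic collection after every test case

    @classmethod
    def tearDownClass(cls):
        gc.enable()
```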
cc @ZainRizvi @kit1980 @huydhn @clee2000 @albanD
### Versions
master
| 6 |
3,243 | 96,579 |
Wrong return type from operation on custom tensor inside registered hook
|
module: autograd, triaged, needs research, module: __torch_function__, tensor subclass
|
### 🐛 Describe the bug
Hi!
I implemented a subclass of the PyTorch tensor in my application, and I need to operate on such custom tensors in a callback function hooked to a tensor's backward pass. However, I found that any operation on such a custom tensor yields a plain torch.Tensor inside the hooked callbacks, while I was hoping it would return a tensor of the custom class.
A minimal example:
```
import torch
class CTensor(torch.Tensor):
pass
def hook(grad):
    print('within hook:', type(a[:1]))
a = CTensor([1,2,3]).requires_grad_()
b = CTensor([1,2,3]).requires_grad_()
b.register_hook(hook)
c = b.sum()
print('outside hook:', type(a[:1]))
c.backward()
```
This prints
```
outside hook: <class '__main__.CTensor'>
within hook: <class 'torch.Tensor'>
```
However, I suppose the slicing operation inside hook() should also return a tensor of my custom class CTensor. Is this a bug, or expected behavior?
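One workaround sketch (assuming the subclass carries no extra state) is to re-wrap results inside the hook with `Tensor.as_subclass`:
```python
def hook(grad):
    sliced = a[:1].as_subclass(CTensor)  # re-wrap the plain torch.Tensor result
    print('within hook:', type(sliced))  # <class '__main__.CTensor'>
```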
Thanks!!
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.16 (main, Jan 11 2023, 16:05:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 525.78.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
Stepping: 1
CPU MHz: 990.165
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Virtualization: VT-x
L1d cache: 576 KiB
L1i cache: 384 KiB
L2 cache: 24 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch3d==0.7.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchgeometry==0.1.2
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.7.2 py39_cu113_pyt1110 pytorch3d
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchgeometry 0.1.2 pypi_0 pypi
[conda] torchvision 0.12.0 py39_cu113 pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @hameerabbasi @rgommers @peterbell10 @msaroufim
| 4 |
3,244 | 96,560 |
Enable functorch testing for rocm
|
module: rocm, triaged, module: functorch
|
While trying to enable functorch testing for ROCm, I see a couple of failures. Putting this issue up to track them.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
3,245 | 96,559 |
tests for linearize fail under the dynamo CI config
|
triaged, oncall: pt2, module: functorch, module: dynamo
|
See skips over at https://github.com/pytorch/pytorch/blob/acd9df8a72195c226ddbe08fa0381f29cf502f0c/test/functorch/test_eager_transforms.py#L2532
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @Chillee @samdow @kshitij12345 @janeyx99 @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 1 |
3,246 | 96,555 |
early stopping
|
feature, triaged, needs design
|
### 🚀 The feature, motivation and pitch
It would be nice and easy to have an early-stopping function/feature in PyTorch so that neural-network training can stop before overfitting based on some criterion, for instance stopping after 3 consecutive epochs with very small changes in validation loss. At the moment this is possible in the training loop by adding a condition, but a dedicated function would make it much easier for users to utilize.
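A minimal sketch of what such a helper could look like (the class name, defaults, and `step` API are hypothetical):
```python
class EarlyStopping:
    """Signal a stop after `patience` epochs without meaningful improvement."""

    def __init__(self, patience: int = 3, min_delta: float = 1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.num_bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.num_bad_epochs = 0
        else:
            self.num_bad_epochs += 1
        return self.num_bad_epochs >= self.patience
```
In the training loop this would reduce the hand-rolled condition to `if stopper.step(val_loss): break`.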
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
3,247 | 96,538 |
[ONNX] FX exporter 'test_pytorch_onnx_onnxruntime.py' tracker
|
module: onnx, triaged, onnx-triaged
|
Note: tracker for model tests https://github.com/pytorch/pytorch/issues/96757
#### List of unsupported call_functions
```
Top errors in UnsupportedCallFunctions (5617 total):
- 4164 errors like: aten.copy_.default examples:
:test_index_put_if
:test_index_put_if_4
:test_index_put_inplace_ops
:test_index_put
:test_multiple_conv_bn
:test_input_mask_model
:test_pad_circular_dynamic_axes
:test_pad_circular
:test_batchnorm_training_mode_fix_layer
:test_copy_
:test_index_add_dim_size_differ
:test_new_empty
:test_index_put_ellipsis
:test_index_add_normal
:test_index_put_loop
:test_inplace_fill
:test_index_add_in_loop
:test_copy_tracing
:test_index_put_if_5
:test_index_put_if_2
:test_index_add_dynamic_axes
:test_index_put_if_3
:test_index_put_slice_index
:test_index_put_to_masked_fill
:test_index_put_singular
:test_batchnorm_training
:test_pad_circular_negative
:test_index_put_to_masked_scatter
:test_fill
:test_copy_ellipsis_script
:test_copy_ellipsis
- 486 errors like: aten.scalar_tensor.default examples:
:test_grid_sample_mode_nearest_padding_mode_reflection_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_True
:test_grid_sample_mode_nearest_padding_mode_reflection_False
:test_grid_sample_mode_bicubic_padding_mode_border_True
:test_crossentropyloss
:test_grid_sample_mode_bicubic_padding_mode_border_False
:test_grid_sample_mode_nearest_padding_mode_border_True
:test_nllloss_dynamic_ignore_index
:test_grid_sample_mode_bicubic_padding_mode_reflection_True
:test_nllloss
:test_grid_sample_mode_bilinear_padding_mode_border_True
:test_grid_sample_mode_bilinear_padding_mode_border_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_False
:test_grid_sample_mode_nearest_padding_mode_zeros_False
:test_grid_sample_mode_nearest_padding_mode_zeros_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_False
:test_grid_sample_mode_bicubic_padding_mode_reflection_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_False
:test_grid_sample_mode_nearest_padding_mode_border_False
- 379 errors like: aten.logical_and.default examples:
:test_grid_sample_mode_nearest_padding_mode_reflection_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_True
:test_grid_sample_mode_nearest_padding_mode_reflection_False
:test_grid_sample_mode_bicubic_padding_mode_border_True
:test_grid_sample_mode_bicubic_padding_mode_border_False
:test_grid_sample_mode_nearest_padding_mode_border_True
:test_grid_sample_mode_bicubic_padding_mode_reflection_True
:test_logical_and
:test_grid_sample_mode_bilinear_padding_mode_border_True
:test_grid_sample_mode_bilinear_padding_mode_border_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_False
:test_grid_sample_mode_nearest_padding_mode_zeros_False
:test_grid_sample_mode_nearest_padding_mode_zeros_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_False
:test_grid_sample_mode_bicubic_padding_mode_reflection_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_False
:test_grid_sample_mode_nearest_padding_mode_border_False
- 135 errors like: aten.index.Tensor examples:
:test_grid_sample_mode_nearest_padding_mode_reflection_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_True
:test_grid_sample_mode_nearest_padding_mode_reflection_False
:test_grid_sample_mode_bicubic_padding_mode_border_True
:test_index_put_ellipsis
:test_grid_sample_mode_bicubic_padding_mode_border_False
:test_interpolate_downsample
:test_grid_sample_mode_nearest_padding_mode_border_True
:test_grid_sample_mode_bicubic_padding_mode_reflection_True
:test_grid_sample_mode_bilinear_padding_mode_border_True
:test_grid_sample_mode_bilinear_padding_mode_border_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_False
:test_grid_sample_mode_nearest_padding_mode_zeros_False
:test_grid_sample_mode_nearest_padding_mode_zeros_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_False
:test_interpolate_upsample
:test_grid_sample_mode_bicubic_padding_mode_reflection_False
:test_index_put_slice_index
:test_grid_sample_mode_bilinear_padding_mode_zeros_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_False
:test_grid_sample_mode_nearest_padding_mode_border_False
:test_im2col
- 96 errors like: aten.floor.default examples:
:test_grid_sample_mode_bicubic_padding_mode_reflection_False
:test_grid_sample_mode_nearest_padding_mode_reflection_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_True
:test_grid_sample_mode_bilinear_padding_mode_border_True
:test_grid_sample_mode_nearest_padding_mode_reflection_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_True
:test_grid_sample_mode_bilinear_padding_mode_border_False
:test_grid_sample_mode_bicubic_padding_mode_border_False
:test_grid_sample_mode_bilinear_padding_mode_reflection_True
:test_grid_sample_mode_bilinear_padding_mode_zeros_False
:test_grid_sample_mode_bilinear_padding_mode_reflection_False
:test_grid_sample_mode_bicubic_padding_mode_reflection_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_False
:test_grid_sample_mode_bicubic_padding_mode_border_True
- 74 errors like: aten.bitwise_and.Tensor examples:
:test_grid_sample_mode_bicubic_padding_mode_reflection_False
:test_grid_sample_mode_nearest_padding_mode_reflection_True
:test_grid_sample_mode_nearest_padding_mode_reflection_False
:test_ones_bool
:test_grid_sample_mode_bilinear_padding_mode_reflection_True
:test_grid_sample_mode_bilinear_padding_mode_reflection_False
:test_grid_sample_mode_bicubic_padding_mode_reflection_True
:test_and_or_xor
- 42 errors like: aten.squeeze.dims examples:
:test_instancenorm_training_mode_fix_layer
:test_batchnorm_training_mode_fix_layer
:test_instancenorm_training
:test_instancenorm_eval_mode_train_layer
:test_batchnorm_training
:test_multiple_conv_bn
- 26 errors like: aten.new_zeros.default examples:
:test_instancenorm_training_mode_fix_layer
:test_new_zeros
:test_batchnorm_training_mode_fix_layer
:test_fuse_conv_bn1d
:test_list_pass
:test_instancenorm_training
:test_instancenorm_eval_mode_train_layer
:test_batchnorm_training
:test_batchnorm_eval_mode_train_layer
:test_fuse_conv_bn3d
:test_conv_bn
:test_fuse_conv_bn2d
- 25 errors like: aten.empty.memory_format examples:
:test_instancenorm_training_mode_fix_layer
:test_batchnorm_training_mode_fix_layer
:test_fuse_conv_bn1d
:test_instancenorm_training
:test_instancenorm_eval_mode_train_layer
:test_batchnorm_training
:test_multiple_conv_bn
:test_batchnorm_eval_mode_train_layer
:test_fuse_conv_bn3d
:test_conv_bn
:test_fuse_conv_bn2d
- 17 errors like: aten.var_mean.correction examples:
:test_instancenorm_training_mode_fix_layer
:test_batchnorm_training_mode_fix_layer
:test_instancenorm_training
:test_instancenorm_eval_mode_train_layer
:test_batchnorm_training
:test_multiple_conv_bn
- 17 errors like: aten.index_put.default examples:
:test_index_add_dynamic_axes
:test_index_add_dim_size_differ
:test_index_add_in_loop
:test_index_put_slice_index
:test_index_put_to_masked_fill
:test_index_put_singular
:test_index_put_ellipsis
:test_index_add_normal
:test_index_put
:test_input_mask_model
:test_index_copy
:test_index_put_to_masked_scatter
:test_index_put_accumulate
- 16 errors like: aten.add_.Tensor examples:
:test_batchnorm_training_mode_fix_layer
:test_index_put_slice_index
:test_index_put_ellipsis
:test_batchnorm_training
:test_inplace_arithmetic_half
:test_index_put_inplace_ops
:test_multiple_conv_bn
:test_lstm_cell
:test_empty_constant_shape
- 9 errors like: aten.new_ones.default examples:
:test_instancenorm_training_mode_fix_layer
:test_batchnorm_training_mode_fix_layer
:test_instancenorm_training
:test_instancenorm_eval_mode_train_layer
:test_batchnorm_training
:test_batchnorm_eval_mode_train_layer
:test_new_ones
- 9 errors like: aten.linalg_vector_norm.default examples:
:test_normalize
:test_frobenius_norm
:test_frobenius_norm_keepdim
:test_linalg_matrix_norm
:test_l1_norm
:test_linalg_vector_norm
:test_l2_norm
:test_linalg_norm
:test_linalg_vector_norm_zero
- 6 errors like: aten.nll_loss2d_forward.default examples:
:test_nllloss_2d_mean_weights
:test_nllloss_2d_sum
:test_nllloss_2d_mean
:test_nllloss_2d_mean_ignore_index_weights
:test_nllloss_2d_none
:test_nllloss_2d_mean_ignore_index
- 5 errors like: aten.constant_pad_nd.default examples:
:test_im2col
:test_pad_int
:test_pad_types_scalar_list
- 5 errors like: aten.new_empty.default examples:
:test_pad_circular_dynamic_axes
:test_pad_circular_negative
:test_pad_circular
:test_new_empty
- 4 errors like: aten.__lshift__.Scalar examples:
:test_bitshift_uint8
:test_bitshift
- 4 errors like: aten.linalg_cross.default examples:
:test_linalg_cross
:test_cross
- 4 errors like: aten.gather.default examples:
:test_nllloss_dynamic_ignore_index
:test_crossentropyloss
:test_nllloss
:test_gather
- 4 errors like: aten.lerp.Tensor examples:
:test_lerp
- 3 errors like: aten.squeeze.dim examples:
:test_nllloss_dynamic_ignore_index
:test_crossentropyloss
:test_nllloss
- 3 errors like: aten.eye.m examples:
:test_eye
- 3 errors like: aten.fake_quantize_per_channel_affine_cachemask.default examples:
:test_fake_quantize_per_channel
:test_fake_quantize_per_tensor
- 3 errors like: aten.hann_window.default examples:
:test_hann_window_default_values
:test_hann_window_not_periodic
:test_hann_window_periodic
- 3 errors like: aten.lerp.Scalar examples:
:test_lerp
- 3 errors like: aten.sigmoid_.default examples:
:test_lstm_cell
- 3 errors like: aten.rand.default examples:
:test_rand_dtype
:test_rand
:test_rand_dynamic_size
- 3 errors like: aten.randn.default examples:
:test_randn
:test_randn_dynamic_size
:test_randn_dtype
- 2 errors like: aten.mean.default examples:
:test_kldiv_loss
:test_MSELoss
- 2 errors like: aten.addcmul.default examples:
:test_addcmul
- 2 errors like: aten.bitwise_or.Tensor examples:
:test_and_or_xor
:test_input_mask_model
- 2 errors like: aten.as_strided.default examples:
:test_as_strided
- 2 errors like: aten.__rshift__.Scalar examples:
:test_bitshift_uint8
:test_bitshift
- 2 errors like: aten.__rshift__.Tensor examples:
:test_bitshift_uint8
:test_bitshift
- 2 errors like: aten.bucketize.Tensor examples:
:test_bucketize
- 2 errors like: aten.cat.default examples:
:test_concat
:test_cast_to_bool
- 2 errors like: aten.diagonal.default examples:
:test_einsum
:test_diagonal
- 2 errors like: aten._embedding_bag_forward_only.default examples:
:test_embedding_bag_1d_per_sample_weights
:test_embedding_bag_2d_per_sample_weights
- 2 errors like: aten.eye.default examples:
:test_eye
- 2 errors like: aten.div_.Tensor examples:
:test_index_put_inplace_ops
- 2 errors like: aten.index_select.default examples:
:test_index_select_constant_scaler_index
:test_index_select_scaler_index
- 2 errors like: aten.zero_.default examples:
:test_inplace_zero_qkv
:test_inplace_zero
- 2 errors like: aten.max_pool2d_with_indices.default examples:
:test_maxpool_default_stride
:test_multiple_conv_bn
- 2 errors like: aten.rand_like.default examples:
:test_rand_like_dtype
:test_rand_like
- 2 errors like: aten.randn_like.default examples:
:test_randn_like_dtype
:test_randn_like
- 1 errors like: aten.all.default examples:
:test_all
- 1 errors like: aten.bitwise_xor.Tensor examples:
:test_and_or_xor
- 1 errors like: aten.any.default examples:
:test_any
- 1 errors like: aten.sort.default examples:
:test_argsort
- 1 errors like: aten.avg_pool2d.default examples:
:test_avgpool_default_stride
- 1 errors like: aten.bernoulli.default examples:
:test_bernoulli
- 1 errors like: aten.bernoulli.p examples:
:test_bernoulli_p
- 1 errors like: aten._cdist_forward.default examples:
:test_cdist
- 1 errors like: aten.stack.default examples:
:test_clip_boxes_to_image
- 1 errors like: aten.conv_tbc.default examples:
:test_conv_tbc
- 1 errors like: aten._convolution.default examples:
:test_convolution_allow_tf32
- 1 errors like: aten._fake_quantize_per_tensor_affine_cachemask_tensor_qparams.default examples:
:test_fake_quantize_per_tensor_dynamic_scale_zeropoint
- 1 errors like: aten.flip.default examples:
:test_flip
- 1 errors like: aten.glu.default examples:
:test_glu
- 1 errors like: aten.index_fill.int_Scalar examples:
:test_index_fill
- 1 errors like: aten.upsample_linear1d.default examples:
:test_interpolate_half_pixel
- 1 errors like: aten.isnan.default examples:
:test_isnan
- 1 errors like: aten.logical_not.default examples:
:test_logical_not
- 1 errors like: aten.logical_or.default examples:
:test_logical_or
- 1 errors like: aten.logical_xor.default examples:
:test_logical_xor
- 1 errors like: aten.tanh_.default examples:
:test_lstm_cell
- 1 errors like: aten.masked_scatter.default examples:
:test_masked_scatter
- 1 errors like: aten.mish.default examples:
:test_mish
- 1 errors like: aten.nan_to_num.default examples:
:test_nan_to_num
- 1 errors like: aten.norm.ScalarOpt_dim_dtype examples:
:test_norm_with_dtype
- 1 errors like: aten.pixel_shuffle.default examples:
:test_pixel_shuffle
- 1 errors like: aten.pixel_unshuffle.default examples:
:test_pixel_unshuffle
- 1 errors like: aten.mean.dim examples:
:test_reduced_mean
- 1 errors like: aten.min.dim examples:
:test_reduced_min_max
- 1 errors like: aten.max.dim examples:
:test_reduced_min_max
- 1 errors like: aten.prod.dim_int examples:
:test_reduced_prod
- 1 errors like: aten.hardtanh.default examples:
:test_relu6
```
#### Full error logs
```
- 162 errors like: torch._dynamo.exc.Unsupported: dynamic shapes: arange (examples:
:test_arange_start_out
:test_dynamic_repeat_interleave
:test_arange_with_floats
:test_arange_with_floats_out
:test_arange_with_floats_override
:test_expand
:test_add_inplace
:test_arange_no_type
:test_arange_out
:test_dynamic_arange_out
:test_dynamic_arange_start_out
:test_index_mask
:test_index_mask_nd
:test_masked_fill
:test_pad_types_scalar_tensor_list
:test_quantize_per_tensor
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_multinomial
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_resize_images
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_no_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_nonzero
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_with_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_with_initial_state_with_variable_length_sequences_without_dropout
:test_multiple_dynamic_repeat_interleave
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_no_initial_state_with_variable_length_sequences_without_dropout
:test_squeeze_runtime_dim
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_with_initial_state_with_batch_first_sequence_lengths_without_dropout
:test_symbolic_shape_inference_box_if
:test_symbolic_shape_inference_box
:test_symbolic_shape_inference_nonzero
:test_split_size_as_list
:test_topk_int32_k
:test_topk_smallest_unsorted
:test_transform_images
:test_unique_along_dim
:test_where_condition
:test_where_condition_script
:test_where_with_byte_tensor
:test_unique
:test_zeros_ones_with_tensor_input
- 127 errors like: AssertionError (examples:
:test_hardshrink
:test_hardshrink_dtype
:test_hardsigmoid
:test_hardswish
:test_avgpool_2d_padding_1_ceil_mode_False_count_include_pad_True
:test_batchnorm2d
:test_hardtanh
:test_avgpool_2d_padding_1_ceil_mode_True_count_include_pad_False
:test_avgpool_2d_padding_1_ceil_mode_True_count_include_pad_True
:test_batchnorm2d_noaffine
:test_batchnorm2d_norunningstats
:test_batchnorm3d
:test_embedding_bag
:test_batchnorm3d_noaffine
:test_if_list
:test_avgpool_3d_ceil
:test_if_transpose
:test_groupnorm
:test_groupnorm_noaffine
:test_avgpool
:test_avgpool_1d_ceil
:test_gt_primitive
:test_fake_quantize_activation
:test_list_append
:test_list_append_in_block
:test_list_append_in_nested_block
:test_list_append_nested
:test_avgpool_2d_padding_0_ceil_mode_False_count_include_pad_False
:test_list_append_nested_2
:test_all_optional_default_none
:test_all_optional_default_tensor
:test_list_append_nested_mixed_dtype
:test_list_del
:test_list_del_in_block
:test_list_del_nested
:test_loop_transpose
:test_avgpool_2d_padding_0_ceil_mode_False_count_include_pad_True
:test_div_promotion_script
:test_avgpool_2d_padding_0_ceil_mode_True_count_include_pad_False
:test_div_promotion_trace
:test_batchnorm1d
:test_avgpool_2d_padding_0_ceil_mode_True_count_include_pad_True
:test_inplace_sequence_with_loop
:test_inplace_with_loop
:test_avgpool_2d_padding_1_ceil_mode_False_count_include_pad_False
:test_inplace_with_loop_2
:test_list_unpack_scripted_runs_without_error_with_constructed_list_as_input
:test_lower_tuple_2
:test_logsoftmax
:test_logsoftmax_dim
:test_batchnorm1d_noaffine
:test_list_unpack_slice_scripted
:test_if_view
:test_batchnorm1d_norunningstats
:test_instancenorm1d_norunningstats
:test_embedding_renorm
:test_instancenorm1d_runningstats
:test_mixed_optional
:test_instancenorm2d_norunningstats
:test_instancenorm2d_runningstats
:test_pairwise_distance
:test_instancenorm3d_norunningstats
:test_maxpool
:test_maxpool_3d_ceil
:test_maxpool_1d_ceil
:test_maxpool_adaptive
:test_constant_pad
:test_instancenorm3d_runningstats
:test_nested_tuple_output
:test_fuse_conv_in_block
:test_maxpool_2d
:test_maxpool_2d_ceil
:test_index_add_if
:test_layer_norm
:test_maxpool_dilation
:test_maxpool_with_indices
:test_replication_pad
:test_quantized_adaptive_avg_pool2d
:test_relu_int
:test_quantized_conv1d_relu
:test_device_eq
:test_list_idx_sum
:test_list_pop
:test_list_pop_in_block
:test_list_pop_nested
:test_list_set
:test_list_unpack
:test_list_unpack_scripted
:test_reflection_pad
:test_item
:test_rnn_no_bias
:test_set_attr_3
:test_set_attr_in_loop
:test_set_attr
:test_set_attr_4
:test_set_attr_in_loop_with_list
:test_set_attr_5
:test_set_attr_2
:test_softshrink
:test_softshrink_dtype
:test_softmax_large_values
:test_softmax
:test_cosine_similarity
:test_sequance_loopcarried
:test_spectral_norm
:test_tanhshrink
:test_tuple_input
:test_tuple_output_from_if_with_raised_exception
:test_tuple_with_none_outputs
:test_uninitialized
:test_uninitialized_intList
:test_tuple_output
:test_symbolic_shape_inference_time
:test_uninitialized_tensorList_dynamic
:test_uninitialized_tensorList_shape
:test_uninitialized_dynamic
:test_weight_norm_nodim
:test_uninitialized_tensorList
:test_weight_norm
:test_rrelu_eval
:test_quantized_conv2d
:test_quantized_conv2d_relu
:test_quantized_linear
:test_prelu_scalar
:test_primitive_input_bool
:test_inplace_attr_copy_with_loop
:test_inplace_attr_with_loop
- 78 errors like: torch._dynamo.exc.Unsupported: TorchDynamo purposely graph breaks on RNN, GRU, LSTMs (examples:
:test_gru_constant_folding
:test_gru_no_bias
:test_lstm
:test_lstm_constant_folding
:test_lstm_default_init_state
:test_lstm_fixed_batch_size
:test_lstm_no_bias
:test_lstm_no_hidden
:test_lstm_post_fix_init_state
:test_lstm_sequence
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_trilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_unilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_tanh_trilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_gru_nonlinearity_None_unilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_gru_nonlinearity_None_trilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_with_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_no_initial_state_without_sequence_lengths_without_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_bidirectional_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_trilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_lstm_nonlinearity_None_unilayer_forward_with_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_no_initial_state_without_sequence_lengths_with_dropout
:test_rnn_name_elman_nonlinearity_relu_unilayer_bidirectional_no_initial_state_without_sequence_lengths_without_dropout
:test_word_language_model_RNN_RELU
:test_word_language_model_RNN_TANH
:test_word_language_model_GRU
:test_word_language_model_LSTM
- 69 errors like: TypeError: missing a required argument: 'x' (examples:
:test_cste_script
:test_hardswish_script
:test_hardtanh_script_with_default_values
:test_floating_point
:test_floating_point_infer_dtype
:test_getitem
:test_dim
:test_dim_1
:test_full_script
:test_grid_sample_mode_nearest_padding_mode_zeros_True
:test_arithmetic_infer_dtype
:test__dim_arange
:test_data
:test_arange_end
:test_arange_end_notype
:test_arange_start_end
:test_arange_start_end_notype
:test_arange_start_end_step
:test_arange_start_end_step_notype
:test_loop_nested
:test_loop_with_list
:test_grid_sample_mode_bilinear_padding_mode_border_False
:test_inplace_list
:test_cast_to
:test_dtype
:test_dtype_eq
:test_loop_dynamic
:test_loop_multi_dim
:test_empty_branch
:test_grid_sample_mode_bilinear_padding_mode_border_True
:test_clamp_dyn
:test_masked_fill_inplace
:test_grid_sample_mode_bilinear_padding_mode_reflection_False
:test_concat_dynamic
:test_len
:test_len_list
:test_list
:test_grid_sample_mode_bilinear_padding_mode_reflection_True
:test_interpolate_function_substitution
:test_interpolate_no_shape
:test_interpolate_upsample
:test_grid_sample_mode_bicubic_padding_mode_border_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_False
:test_grid_sample_mode_bilinear_padding_mode_zeros_True
:test_split_dynamic
:test_grid_sample_mode_nearest_padding_mode_border_False
:test_grid_sample_mode_nearest_padding_mode_border_True
:test_stack_dynamic
:test_slice_dynamic_script
:test_grid_sample_mode_nearest_padding_mode_reflection_False
:test_tensor
:test_tensor_factories_script
:test_topk_script
:test_tensor_like_factories_script
:test_transpose_infer_shape
:test_tolist
:test_unbind_dynamic
:test_grid_sample_mode_nearest_padding_mode_reflection_True
:test_unfold_infer_shape
:test_grid_sample_mode_nearest_padding_mode_zeros_False
:test_interpolate_adaptive_pooling_error
:test_interpolate_downsample
:test_grid_sample_mode_bicubic_padding_mode_border_True
:test_prim_min
:test_grid_sample_mode_bicubic_padding_mode_reflection_False
:test_grid_sample_mode_bicubic_padding_mode_reflection_True
:test_grid_sample_mode_bicubic_padding_mode_zeros_False
:test_grid_sample_mode_bicubic_padding_mode_zeros_True
:test_inplace_arithmetic
- 38 errors like: TypeError: forward() takes 2 positional arguments but 3 were given (examples:
:test_arithmetic_prim_float
:test_arithmetic_prim_long
:test_arange_dynamic
:test_eye
:test_hann_window_default_values
:test_hann_window_not_periodic
:test_fill
:test_hann_window_periodic
:test_arithmetic_prim_bool
:test_index_put_if_2
:test_index_put_if_3
:test_index_put_if_4
:test_index_put_if_5
:test_index_put_inplace_ops
:test_index_put_if
:test_clip_boxes_to_image
:test_primitive_input_floating
:test_pad_types_scalar_list
:test_primitive_input_integer
:test_list_pass
:test_scalar_tensor
:test_sequence_to_bool
:test_sequence_to_float
:test_sequence_to_int
:test_set_attr_modules_2
:test_size
:test_split_size_with_slice
:test_symbolic_shape_inference_arange_2
:test_symbolic_shape_inference_arange
:test_squeeze_dynamic_dim
:test_symbolic_shape_inference_expand_2
:test_symbolic_shape_inference_slice
:test_tensor_factories
:test_symbolic_shape_inference
:test_to_device
:test_unsqueeze_dynamic_dim
:test_view_dynamic
:test_index_put_loop
- 26 errors like: torch._dynamo.exc.Unsupported: Unsupported: quantized nyi in meta tensors with fake tensor propagation. (examples:
:test_dequantize
:test_quantized_arithmetic
:test_quantized_arithmetic_qfunctional
:test_quantized_cat_different_scale
:test_quantized_cat_different_shape
:test_quantized_unary_ops_group_norm
:test_quantized_cat_different_zero_point
:test_quantized_unary_ops_hardsigmoid
:test_quantized_unary_ops_hardtanh
:test_quantized_cat_different_zero_point_and_scale
:test_quantized_unary_ops_instance_norm
:test_quantized_cat_when_concatinating_the_same_tensor
:test_quantized_unary_ops_layer_norm
:test_quantized_unary_ops_leaky_relu
:test_quantized_unary_ops_quantized_hardswish
:test_quantized_unary_ops_quantized_leaky_relu
:test_quantized_unary_ops_view
:test_quantized_unary_ops_quantized_sigmoid
:test_quantized_unary_ops_relu
:test_quantized_unary_ops_select
:test_quantized_unary_ops_sigmoid
:test_quantized_unary_ops_tanh
:test_quantized_unary_ops_transpose
:test_quantized_flatten
:test_quantized_unary_ops_as_strided
:test_quantized_unary_ops_expand
- 24 errors like: onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (aten_fmod_1) Op (aten_fmod) [ShapeInferenceError] (op_type:Mod, node name: n0): B has inco... (examples:
:test_fmod_scalar
:test_if_fold
:test_arithmetic
:test_conv_shape_inference
:test_gt_scalar
:test_minimum_dtypes
:test_max_int
:test_maximum_dtypes
:test_lt_scalar
:test_outer
:test_numel
:test_numel_empty
:test_reduced_sum
:test_min_int
:test_remainder
:test_remainder_scalar
:test_scalar_type
:test_binary_cross_entropy_with_logits
:test_shape_constant_fold
:test_tensordot_dynamic_dim
:test_tensordot_dim_count
:test_where_with_bool_tensor
:test_tensordot_dim_list
:test_prelu
- 23 errors like: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. (examples:
:test_ge
:test_eq
:test_dot
:test_gelu
:test_gt
:test_logsoftmax_dtype
:test_max_tensors
:test_lt
:test_narrow
:test_narrow_dynamic
:test_matmul
:test_le
:test_matmul_batch
:test_meshgrid
:test_meshgrid_scalar
:test_round
:test_rsqrt
:test_mv
:test_rsqrt_zeros
:test_split_tensor_scalar
:test_symbolic_shape_inference_expand_1
:test_tanh_gelu
:test_topk
- 14 errors like: torch._dynamo.exc.Unsupported: data dependent operator: aten._local_scalar_dense.default (examples:
:test_full_like_value
:test_full_trace
:test_baddbmm_dynamic
:test_qat_conv2d_relu_fused
:test_qat_avg_pool2d
:test_qat_linear_per_channel
:test_qat_maxpool2d
:test_qat_relu
:test_qat_conv2d
:test_linspace
:test_linspace_negative_start
:test_qat_upsample_nearest2d
:test_qat_conv2d_relu
:test_one_hot
- 11 errors like: RuntimeError: Unknown call_function target: aten.add_.Tensor (examples:
:test_fuse_conv_bn1d
:test_empty_constant_shape
:test_lstm_cell
:test_fuse_conv_bn2d
:test_batchnorm_eval_mode_train_layer
:test_fuse_conv_bn3d
:test_batchnorm_training
:test_batchnorm_training_mode_fix_layer
:test_multiple_conv_bn
:test_conv_bn
:test_inplace_arithmetic_half
- 10 errors like: RuntimeError: Unknown call_function target: aten.index_put.default (examples:
:test_index_add_normal
:test_index_copy
:test_index_put
:test_index_put_accumulate
:test_index_add_dim_size_differ
:test_index_add_dynamic_axes
:test_index_add_in_loop
:test_index_put_singular
:test_index_put_to_masked_fill
:test_index_put_to_masked_scatter
- 9 errors like: RuntimeError: Unknown call_function target: aten.linalg_vector_norm.default (examples:
:test_frobenius_norm
:test_frobenius_norm_keepdim
:test_linalg_matrix_norm
:test_linalg_norm
:test_linalg_vector_norm
:test_linalg_vector_norm_zero
:test_l1_norm
:test_l2_norm
:test_normalize
- 9 errors like: RuntimeError: Unknown call_function target: aten.as_strided.default (examples:
:test_as_strided
:test_stft_non_divisible_hop_length
:test_stft_not_onesided
:test_stft_hop_length
:test_stft_one_dimension
:test_stft_default
:test_stft_window_int_same_size
:test_stft_wrong_return_complex
:test_stft_normalize
- 7 errors like: RuntimeError: Unknown call_function target: aten.scatter_add.default (examples:
:test_scatter_add_index_not_unique
:test_scatter_add_dynamic_index_src_indices_dynamic_combination2
:test_scatter_add
:test_scatter_add_different_size_index_src
:test_scatter_add_dynamic_index_src_indices_dynamic_combination3
:test_scatter_add_dynamic_index_src_indices_dynamic_combination1
:test_scatter_add_dynamic_index_src_indices_dynamic_combination4
- 7 errors like: RuntimeError: Unknown call_function target: aten.copy_.default (examples:
:test_copy_
:test_copy_ellipsis
:test_copy_ellipsis_script
:test_slice_with_input_index
:test_copy_tracing
:test_stft_window_int_different_size
:test_inplace_fill
- 6 errors like: RuntimeError: Unknown call_function target: aten.nll_loss2d_forward.default (examples:
:test_nllloss_2d_none
:test_nllloss_2d_sum
:test_nllloss_2d_mean
:test_nllloss_2d_mean_ignore_index
:test_nllloss_2d_mean_ignore_index_weights
:test_nllloss_2d_mean_weights
- 6 errors like: RuntimeError: Unknown call_function target: aten.squeeze.dim (examples:
:test_squeeze
:test_squeeze_neg
:test_squeeze_neg_without_no_op
:test_squeeze_without_no_op
:test_squeeze_no_op
:test_squeeze_dynamic
- 6 errors like: RuntimeError: Unknown call_function target: aten.var_mean.correction (examples:
:test_std_mean_correction
:test_var_mean_mixed_dims
:test_var_mean_keepdim
:test_var_mean
:test_var_mean_correction
:test_var_mean_along_dims
- 5 errors like: torch._dynamo.exc.Unsupported: call_function BuiltinVariable(RuntimeError) [ConstantVariable(str)] {} (examples:
:test_batched_nms
:test_nms
:test_roi_align
:test_roi_align_aligned
:test_roi_pool
- 5 errors like: onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Type Error: Type 'tensor(double)' of input parameter (... (examples:
:test_celu_cast
:test_reduce_log_sum_exp
:test_mul_bool
:test_isfinite
:test_reciprocal
- 5 errors like: RuntimeError: Unknown call_function target: aten.index.Tensor (examples:
:test_index_put_ellipsis
:test_tensor_index_advanced_indexing
:test_tensor_index_advanced_indexing_ellipsis
:test_tensor_index_advanced_indexing_consecutive
:test_index_put_slice_index
- 4 errors like: AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.int32. (examples:
:test_div_rounding_mode
:test_argmin_argmax
:test_argmin_argmax_select_last_index
:test_div
- 4 errors like: RuntimeError: Unknown call_function target: aten.new_empty.default (examples:
:test_pad_circular
:test_new_empty
:test_pad_circular_dynamic_axes
:test_pad_circular_negative
- 4 errors like: RuntimeError: Unknown call_function target: aten.std.correction (examples:
:test_std
:test_std_correction
:test_std_along_dims
:test_std_keepdim
- 4 errors like: RuntimeError: Unknown call_function target: aten.var.correction (examples:
:test_var
:test_var_correction
:test_var_along_dims
:test_var_keepdim
- 3 errors like: RuntimeError: Unknown call_function target: aten.sort.default (examples:
:test_argsort
:test_sort
:test_sort_ascending
- 3 errors like: AttributeError: 'tuple' object has no attribute 'cpu' (examples:
:test_lower_tuple_3
:test_mixed_optional_default_none
:test_mixed_optional_default_tensor
- 3 errors like: torch._dynamo.exc.Unsupported: dynamic shape operator: aten.masked_select.default (examples:
:test_masked_select
:test_repeat_interleave
:test_repeat_interleave_noop
- 3 errors like: RuntimeError: Unknown call_function target: aten.rand.default (examples:
:test_rand
:test_rand_dtype
:test_rand_dynamic_size
- 3 errors like: RuntimeError: Unknown call_function target: aten.randn.default (examples:
:test_randn
:test_randn_dtype
:test_randn_dynamic_size
- 3 errors like: RuntimeError: Unknown call_function target: aten.empty.memory_format (examples:
:test_instancenorm_eval_mode_train_layer
:test_instancenorm_training
:test_instancenorm_training_mode_fix_layer
- 3 errors like: RuntimeError: Unknown call_function target: aten.scalar_tensor.default (examples:
:test_nllloss_dynamic_ignore_index
:test_crossentropyloss
:test_nllloss
- 3 errors like: RuntimeError: Unknown call_function target: aten.std_mean.correction (examples:
:test_std_mean_keepdim
:test_std_mean_along_dims
:test_std_mean
- 2 errors like: RuntimeError: Unknown call_function target: aten.diagonal.default (examples:
:test_einsum
:test_diagonal
- 2 errors like: RuntimeError: Unknown call_function target: aten.__rshift__.Scalar (examples:
:test_bitshift
:test_bitshift_uint8
- 2 errors like: onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (GreaterOrEqual) bound to different types (tensor(float) ... (examples:
:test_ge_scalar
:test_le_scalar
- 2 errors like: torch._dynamo.exc.Unsupported: missing: POP_FINALLY (examples:
:test_dist_normal
:test_dist_uniform
- 2 errors like: TypeError: multiple values for argument 'dtype' (examples:
:test_embedding_bag_2d_per_sample_weights
:test_im2col
- 2 errors like: RuntimeError: Unknown call_function target: aten.mean.default (examples:
:test_MSELoss
:test_kldiv_loss
- 2 errors like: torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default (examples:
:test_baddbmm
:test_dynamic_expand_as
- 2 errors like: RuntimeError: Unknown call_function target: aten.cat.default (examples:
:test_cast_to_bool
:test_concat
- 2 errors like: RuntimeError: Unknown call_function target: aten.zero_.default (examples:
:test_inplace_zero
:test_inplace_zero_qkv
- 2 errors like: RuntimeError: Unknown call_function target: aten.linalg_cross.default (examples:
:test_linalg_cross
:test_cross
- 2 errors like: AssertionError: Tensor-likes are not close! (examples:
:test_maximum_minimum
:test_pow
- 2 errors like: RuntimeError: Unknown call_function target: aten.rand_like.default (examples:
:test_rand_like
:test_rand_like_dtype
- 2 errors like: RuntimeError: Unknown call_function target: aten.randn_like.default (examples:
:test_randn_like
:test_randn_like_dtype
- 2 errors like: RuntimeError: Unknown call_function target: aten.scatter.value (examples:
:test_scatter_with_scalar
:test_scatter_with_scalar_different_types
- 2 errors like: RuntimeError: Unknown call_function target: aten.new_zeros.default (examples:
:test_slice_dynamic_shape_script
:test_new_zeros
- 2 errors like: RuntimeError: Unknown call_function target: aten.hann_window.default (examples:
:test_stft_window_custom
:test_stft_window_size_with_win_len
- 2 errors like: TypeError: unsupported operand type(s) for +: 'FakeTensor' and 'tuple' (examples:
:test_tuple_of_optional
:test_tuple_of_optional_default_tensor
- 2 errors like: RuntimeError: Unknown call_function target: aten.unfold.default (examples:
:test_unfold
:test_unfold_dynamic_inputs
- 2 errors like: RuntimeError: Unknown call_function target: aten.index_select.default (examples:
:test_index_select_constant_scaler_index
:test_index_select_scaler_index
- 1 errors like: torch._dynamo.exc.Unsupported: call_function UserDefinedClassVariable() [] {'tensor_out': TensorVariable(), 'tuple_out': TupleVariable(), 'list_out': ListVariable()} (examples:
:test_dict_output
- 1 errors like: RuntimeError: Unknown call_function target: aten.flip.default (examples:
:test_flip
- 1 errors like: RuntimeError: Unknown call_function target: aten.bitwise_xor.Tensor (examples:
:test_and_or_xor
- 1 errors like: RuntimeError: Unknown call_function target: aten.any.default (examples:
:test_any
- 1 errors like: RuntimeError: Unknown call_function target: aten.glu.default (examples:
:test_glu
- 1 errors like: TypeError: add(): argument 'input' (position 1) must be Tensor, not dict (examples:
:test_dict_str
- 1 errors like: RuntimeError: Unknown call_function target: aten._embedding_bag_forward_only.default (examples:
:test_embedding_bag_1d_per_sample_weights
- 1 errors like: RuntimeError: Unknown call_function target: aten.bernoulli_.float (examples:
:test_dropout
- 1 errors like: RuntimeError: Unknown call_function target: aten.avg_pool2d.default (examples:
:test_avgpool_default_stride
- 1 errors like: RuntimeError: Unknown call_function target: aten.addcmul.default (examples:
:test_addcmul
- 1 errors like: RuntimeError: Unknown call_function target: aten.all.default (examples:
:test_all
- 1 errors like: RuntimeError: Unknown call_function target: aten.conv_tbc.default (examples:
:test_conv_tbc
- 1 errors like: RuntimeError: Unknown call_function target: aten.fake_quantize_per_channel_affine_cachemask.default (examples:
:test_fake_quantize_per_channel
- 1 errors like: RuntimeError: Unknown call_function target: aten.fake_quantize_per_tensor_affine_cachemask.default (examples:
:test_fake_quantize_per_tensor
- 1 errors like: IndexError: list index out of range (examples:
:test_aminmax
- 1 errors like: RuntimeError: Unknown call_function target: aten._fake_quantize_per_tensor_affine_cachemask_tensor_qparams.default (examples:
:test_fake_quantize_per_tensor_dynamic_scale_zeropoint
- 1 errors like: RuntimeError: Unknown call_function target: aten.bucketize.Tensor (examples:
:test_bucketize
- 1 errors like: RuntimeError: Unknown call_function target: aten.logical_not.default (examples:
:test_logical_not
- 1 errors like: RuntimeError: Unknown call_function target: aten.logical_or.default (examples:
:test_logical_or
- 1 errors like: RuntimeError: Unknown call_function target: aten.logical_xor.default (examples:
:test_logical_xor
- 1 errors like: RuntimeError: Unknown call_function target: aten.index_fill.int_Scalar (examples:
:test_index_fill
- 1 errors like: RuntimeError: Unknown call_function target: aten.bitwise_or.Tensor (examples:
:test_input_mask_model
- 1 errors like: RuntimeError: Unknown call_function target: aten._cdist_forward.default (examples:
:test_cdist
- 1 errors like: RuntimeError: Unknown call_function target: aten.mish.default (examples:
:test_mish
- 1 errors like: RuntimeError: Unknown call_function target: aten.logical_and.default (examples:
:test_logical_and
- 1 errors like: RuntimeError: Unknown call_function target: aten.nan_to_num.default (examples:
:test_nan_to_num
- 1 errors like: RuntimeError: Unknown call_function target: aten.masked_scatter.default (examples:
:test_masked_scatter
- 1 errors like: TypeError: forward() missing 2 required positional arguments: 'b_1_0_' and 'b_1_1_' (examples:
:test_nested_tuple_input
- 1 errors like: RuntimeError: Unknown call_function target: aten.gather.default (examples:
:test_gather
- 1 errors like: RuntimeError: Unknown call_function target: aten.max_pool2d_with_indices.default (examples:
:test_maxpool_default_stride
- 1 errors like: onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: ONNX Schema aten_new_full: failed validatin... (examples:
:test_new_full
- 1 errors like: RuntimeError: Unknown call_function target: aten.constant_pad_nd.default (examples:
:test_pad_int
- 1 errors like: torch._dynamo.exc.Unsupported: call_function BuiltinVariable(format) [ConstantVariable(str), TensorVariable()] {} (examples:
:test_print_tensor_within_torch_nn_module_float32
- 1 errors like: torch._dynamo.exc.Unsupported: construct nn.Module: ZeroPad2d (examples:
:test_gather_constant_fold
- 1 errors like: RuntimeError: Unknown call_function target: aten.lerp.Scalar (examples:
:test_lerp
- 1 errors like: RuntimeError: Unknown call_function target: aten.mean.dim (examples:
:test_reduced_mean
- 1 errors like: RuntimeError: Unknown call_function target: aten.min.dim (examples:
:test_reduced_min_max
- 1 errors like: RuntimeError: Unknown call_function target: aten.bitwise_and.Tensor (examples:
:test_ones_bool
- 1 errors like: RuntimeError: Unknown call_function target: aten.prod.dim_int (examples:
:test_reduced_prod
- 1 errors like: RuntimeError: Unknown call_function target: aten.upsample_linear1d.default (examples:
:test_interpolate_half_pixel
- 1 errors like: torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(dict) keys [] {} (examples:
:test_dict
- 1 errors like: RuntimeError: Unknown call_function target: aten.hardtanh.default (examples:
:test_relu6
- 1 errors like: RuntimeError: Unknown call_function target: aten.isnan.default (examples:
:test_isnan
- 1 errors like: RuntimeError: Unknown call_function target: aten.bernoulli.default (examples:
:test_bernoulli
- 1 errors like: RuntimeError: Unknown call_function target: aten.bernoulli.p (examples:
:test_bernoulli_p
- 1 errors like: RuntimeError: Unknown call_function target: aten._convolution.default (examples:
:test_convolution_allow_tf32
- 1 errors like: RuntimeError: Unknown call_function target: aten.set_.source_Tensor (examples:
:test_set_
- 1 errors like: RuntimeError: Unknown call_function target: aten.norm.ScalarOpt_dim_dtype (examples:
:test_norm_with_dtype
- 1 errors like: RuntimeError: Unknown call_function target: aten.roll.default (examples:
:test_roll
- 1 errors like: TypeError: forward() missing 1 required positional argument: 'self_module_module_float_tensor' (examples:
:test_set_attr_modules
- 1 errors like: RuntimeError: Unknown call_function target: aten.softplus.default (examples:
:test_softplus
- 1 errors like: RuntimeError: Unknown call_function target: aten.scatter.src (examples:
:test_scatter
- 1 errors like: RuntimeError: Unknown call_function target: aten.squeeze.default (examples:
:test_squeeze_all_dims
- 1 errors like: RuntimeError: Unknown call_function target: aten.stack.default (examples:
:test_stack
- 1 errors like: onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ReduceSum node. Name:'... (examples:
:test_sum_empty_tensor
- 1 errors like: torch._dynamo.exc.Unsupported: Dynamic slicing on data-dependent value is not supported (examples:
:test_trace_script
- 1 errors like: RuntimeError: Unknown call_function target: aten.take.default (examples:
:test_take
- 1 errors like: TypeError: can only concatenate tuple (not "FakeTensor") to tuple (examples:
:test_tuple_primitive_input
- 1 errors like: RuntimeError: Unknown call_function target: aten.triu.default (examples:
:test_triu
- 1 errors like: RuntimeError: Unknown call_function target: aten.tril.default (examples:
:test_tril
- 1 errors like: RuntimeError: Unknown call_function target: aten.unbind.int (examples:
:test_unbind
- 1 errors like: RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For f... (examples:
:test_rpn
- 1 errors like: RuntimeError: Unknown call_function target: aten.pixel_shuffle.default (examples:
:test_pixel_shuffle
- 1 errors like: RuntimeError: Unknown call_function target: aten.pixel_unshuffle.default (examples:
:test_pixel_unshuffle
- 1 errors like: RuntimeError: Unknown call_function target: aten.new_ones.default (examples:
:test_new_ones
- 1 errors like: RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. (examples:
:test_index_put_with_1d_mask_to_masked_scatter
===================== 791 failed, 85 passed, 26 skipped in 27.45s =====================
```
| 1 |
3,248 | 96,523 |
MPS Backend Doc, model = YourFavoriteNet() not defined
|
module: docs, triaged, actionable, module: mps
|
### 📚 The doc issue
I followed the MacBook MPS documentation, but when I run the example code it raises an error: `YourFavoriteNet()` is undefined.
Reference: https://pytorch.org/docs/stable/notes/mps.html
### Suggest a potential alternative/fix
Fix the documentation code, potentially by adding a `YourFavoriteNet()` function or class so the example runs as written.
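A minimal sketch of what such a placeholder could look like (the class definition and layer sizes below are my assumption, not part of the official docs):
```
import torch
import torch.nn as nn

# Hypothetical stand-in so the MPS example is self-contained;
# any nn.Module would do here.
class YourFavoriteNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 2)

    def forward(self, x):
        return self.linear(x)

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = YourFavoriteNet().to(device)
print(model(torch.randn(1, 8, device=device)))
```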
cc @svekars @carljparker @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
3,249 | 96,518 |
fft should ignore dims with shape 1
|
triaged, module: fft
|
### 🐛 Describe the bug
FFT functions should ignore dimensions with `shape == 1` for optimal results.
Example: suboptimal result for `rfftn` when the last dimension has `shape == 1` (the real-FFT size reduction is wasted on the size-1 dimension):
```
import torch
x = torch.zeros(10,10,1)
torch.fft.rfftn(x).shape # => torch.Size([10, 10, 1])
x = torch.zeros(10,1,10)
torch.fft.rfftn(x).shape # => torch.Size([10, 1, 6])
```
It's not a real bug, but it would save some wrapper code on the user side. Currently, I have to use something like this:
```
torch.fft.rfftn(x, dim=[i for i in range(3) if x.shape[i] > 1])
```
Unfortunately, this workaround fails for a shape of `(1, 1, 1)`, where the `dim` list becomes empty, so it requires some additional effort.
Making PyTorch's FFTs ignore dimensions with `shape == 1` would fix this issue.
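For reference, a sketch of the full user-side workaround including the all-size-1 edge case (the helper name `rfftn_nontrivial` is mine, not a PyTorch API):
```
import torch

def rfftn_nontrivial(x):
    # Hypothetical helper: apply rfftn only over dimensions with size > 1.
    dims = [i for i in range(x.ndim) if x.shape[i] > 1]
    if not dims:
        # All dimensions have size 1, so the FFT is the identity (as complex).
        return x.to(torch.complex128 if x.dtype == torch.float64 else torch.complex64)
    return torch.fft.rfftn(x, dim=dims)

print(rfftn_nontrivial(torch.zeros(10, 10, 1)).shape)  # torch.Size([10, 6, 1])
print(rfftn_nontrivial(torch.zeros(1, 1, 1)).shape)    # torch.Size([1, 1, 1])
```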
### Versions
Collecting environment information...
/home/florian/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:82: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:112.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz
Stepping: 12
CPU MHz: 3016.778
CPU max MHz: 4900,0000
CPU min MHz: 400,0000
BogoMIPS: 4599.93
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] difftorch==1.2.2
[pip3] numpy==1.21.5
[pip3] pytorch-memlab==0.2.4
[pip3] torch==1.11.0
[pip3] torchdiffeq==0.2.2
[pip3] torchmin==0.0.2
[pip3] xitorch==0.3.0
[conda] Could not collect
cc @mruberry @peterbell10
| 0 |
3,250 | 96,494 |
The sign of torch.distributions.transforms.PowerTransform seems to be incorrect
|
module: distributions, triaged
|
### 🐛 Describe the bug
The sign of the Jacobian determinant of `torch.distributions.transforms.PowerTransform` seems to be incorrect. In the current version, the sign is assigned the constant `sign = +1` on line 550. However, I think the sign of the Jacobian determinant should depend on both the exponent and the input rather than being a constant.
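For illustration, a minimal check of the reported behavior (a sketch; it assumes `PowerTransform` exposes the class-level `sign` attribute mentioned above):
```
import torch
from torch.distributions.transforms import PowerTransform

t = PowerTransform(exponent=torch.tensor(-2.0))
print(t.sign)  # 1, although x ** -2 is decreasing on the positive domain
```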
To further describe the issue, see the customized construction of a Generalized Pareto Distribution (GPD) below. I have to manually overwrite the `_monotonize_cdf` method in order to construct a correct CDF. Specifically, 3 transformations are used to transform u ~ Unif(0,1) into a GPD. The sign of the `AffineTransform` is the same as the sign of the shape parameter `self.alpha` (since `scale` is constrained positive), which is correct. Therefore, if the sign of `PowerTransform` is +1, the sign of the full transformation depends on the sign of `self.alpha`, which does not make sense. Also, according to the definition of `_monotonize_cdf` in `torch.distributions.transformed_distribution.TransformedDistribution`, the sign of the full transformation is what ensures that the CDF is monotonically increasing. Therefore, if the sign of the full transformation depends on `self.alpha`, the CDF is not guaranteed to be monotonically increasing.
```
import torch
from torch.distributions import Uniform, constraints
from torch.distributions.transformed_distribution import TransformedDistribution
from torch.distributions.transforms import AffineTransform, PowerTransform
from torch.distributions.utils import broadcast_all

class _NonZero(constraints.Constraint):
    # Minimal custom constraint (nonzero reals); defined here so the snippet is
    # self-contained, since _NonZero is not part of torch.distributions.constraints.
    def check(self, value):
        return value != 0

class GeneralizedPareto(TransformedDistribution):
    """
    Samples from a Generalized Pareto distribution (GPD).

    Arguments
    --------------
    loc: location parameter of the distribution
    scale: scale parameter of the distribution
    alpha: shape parameter of the distribution
    """
    arg_constraints = {'loc': constraints.real, 'scale': constraints.positive, 'alpha': _NonZero()}

    def __init__(self, loc, scale, alpha, validate_args=None):
        self.loc, self.scale, self.alpha = broadcast_all(loc, scale, alpha)
        base_dist = Uniform(0, 1)
        transforms = [
            PowerTransform(-self.alpha),
            AffineTransform(loc=-1, scale=torch.ones_like(self.scale)),
            AffineTransform(loc=loc, scale=self.scale / self.alpha),
        ]
        super(GeneralizedPareto, self).__init__(base_dist, transforms, validate_args=validate_args)

    def _monotonize_cdf(self, value):
        """
        Overwrite the default _monotonize_cdf method to construct the correct CDF.
        (Subject to change) The sign of det(jacobian) of the PowerTransform in the
        current version of PyTorch is incorrect. This is a temporary fix.
        """
        return 1 - value
```
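For reference, a quick usage check of the class above (the parameter values are illustrative only):
```
d = GeneralizedPareto(loc=0.0, scale=1.0, alpha=0.5)
x = d.sample((5,))
print(d.cdf(x))  # increasing in x thanks to the overridden _monotonize_cdf
```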
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 12.0.1 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.5 (default, May 18 2021, 12:31:01) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.1
[conda] numpy 1.23.1 pypi_0 pypi
cc @fritzo @neerajprad @alicanb @nikitaved
| 0 |
3,251 | 96,487 |
[Inductor] C++ compile error when using integer type lower than int32
|
triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
Hi! I ran into a C++ compile error when using int16 together with `+` and `min` during CPU code generation. Here is a reproducing script:
```
import torch

a = torch.randint(4, (2,), dtype=torch.int16)

def forward(a):
    a = a + 1
    return a.min()

fn_compiled = torch.compile(forward)
print(fn_compiled(a))
```
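A possible user-side workaround (my assumption, not a confirmed fix): upcast to int32 before the reduction, since the failure appears specific to integer types narrower than int32:
```
import torch

a = torch.randint(4, (2,), dtype=torch.int16)

def forward(a):
    a = a.to(torch.int32) + 1  # upcast to sidestep the int16 min() codegen path
    return a.min()

print(torch.compile(forward)(a))
```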
### Error logs
```
Traceback (most recent call last):
File "/home/su/accdiff/test.py", line 9, in <module>
print(fn_compiled(a))
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/eval_frame.py", line 254, in _fn
return fn(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/eval_frame.py", line 391, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 406, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 105, in _fn
return fn(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 263, in _convert_frame_assert
return _compile(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 326, in _compile
out_code = transform_code_object(code, transform)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/convert_frame.py", line 313, in transform
tracer.run()
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 1840, in run
super().run()
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 597, in run
and self.step()
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 560, in step
getattr(self, inst.opname)(inst)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/symbolic_convert.py", line 1919, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 545, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 615, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 701, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/output_graph.py", line 697, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/debug_utils.py", line 1064, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/su/accdiff/thirdparty/pytorch/torch/__init__.py", line 1382, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/compile_fx.py", line 488, in compile_fx
return aot_autograd(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 2873, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 2554, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 1735, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/home/su/accdiff/thirdparty/pytorch/torch/_functorch/aot_autograd.py", line 1346, in aot_dispatch_base
compiled_fw = aot_config.fw_compiler(fw_module, flat_args_with_views_handled)
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/compile_fx.py", line 462, in fw_compiler
return inner_compile(
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/debug_utils.py", line 598, in debug_wrapper
compiled_fn = compiler_fn(gm, example_inputs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/debug.py", line 239, in inner
return fn(*args, **kwargs)
File "/usr/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/compile_fx.py", line 180, in compile_fx_inner
compiled_fn = graph.compile_to_fn()
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/graph.py", line 633, in compile_to_fn
return self.compile_to_module().call
File "/home/su/accdiff/thirdparty/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/graph.py", line 622, in compile_to_module
mod = PyCodeCache.load(code)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 608, in load
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_su/lp/clpahzraewfwreemm2z673mix7qi2pppelqxtu4p4e32tppqpkkq.py", line 42, in <module>
async_compile.wait(globals())
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 795, in wait
scope[key] = result.result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 772, in task
return CppCodeCache.load(source_code).kernel
File "/home/su/accdiff/thirdparty/pytorch/torch/_inductor/codecache.py", line 582, in load
raise exc.CppCompileError(cmd, e.output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp -shared -fPIC -Wall -std=c++17 -Wno-unused-variable -I/home/su/accdiff/thirdparty/pytorch/torch/include -I/home/su/accdiff/thirdparty/pytorch/torch/include/torch/csrc/api/include -I/home/su/accdiff/thirdparty/pytorch/torch/include/TH -I/home/su/accdiff/thirdparty/pytorch/torch/include/THC -I/usr/include/python3.9 -L/home/su/accdiff/thirdparty/pytorch/torch/lib -L/usr/lib/x86_64-linux-gnu -lc10 -ltorch -ltorch_cpu -ltorch_python -lgomp -DCPU_CAPABILITY_AVX2 -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -D C10_USING_CUSTOM_GENERATED_MACROS -o/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.so
Output:
In file included from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256.h:8,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional_base.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:10,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec_base.h:1011: warning: ignoring #pragma unroll [-Wunknown-pragmas]
1011 | # pragma unroll
|
In file included from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256.h:10,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional_base.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:10,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_float.h:438: warning: ignoring #pragma unroll [-Wunknown-pragmas]
438 | #pragma unroll
|
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_float.h:442: warning: ignoring #pragma unroll [-Wunknown-pragmas]
442 | #pragma unroll
|
In file included from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256.h:12,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional_base.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:10,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_bfloat16.h:693: warning: ignoring #pragma unroll [-Wunknown-pragmas]
693 | #pragma unroll
|
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_bfloat16.h:698: warning: ignoring #pragma unroll [-Wunknown-pragmas]
698 | #pragma unroll
|
In file included from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256.h:13,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional_base.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:10,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_double.h:403: warning: ignoring #pragma unroll [-Wunknown-pragmas]
403 | #pragma unroll
|
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_double.h:407: warning: ignoring #pragma unroll [-Wunknown-pragmas]
407 | #pragma unroll
|
In file included from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256.h:14,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional_base.h:6,
from /home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/functional.h:3,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:10,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_int.h:287: warning: ignoring #pragma unroll [-Wunknown-pragmas]
287 | # pragma unroll
|
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_int.h:295: warning: ignoring #pragma unroll [-Wunknown-pragmas]
295 | # pragma unroll
|
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_int.h:307: warning: ignoring #pragma unroll [-Wunknown-pragmas]
307 | # pragma unroll
|
/home/su/accdiff/thirdparty/pytorch/torch/include/ATen/cpu/vec/vec256/vec256_int.h:315: warning: ignoring #pragma unroll [-Wunknown-pragmas]
315 | # pragma unroll
|
In file included from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:66: warning: ignoring #pragma unroll [-Wunknown-pragmas]
66 | #pragma unroll
|
/tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:75: warning: ignoring #pragma unroll [-Wunknown-pragmas]
75 | #pragma unroll
|
/tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:92: warning: ignoring #pragma unroll [-Wunknown-pragmas]
92 | #pragma unroll
|
/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp: In function ‘void kernel(const short int*, short int*)’:
/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:14:43: error: no matching function for call to ‘min(short int&, int&)’
14 | tmp3 = std::min(tmp3, tmp2);
| ^
In file included from /usr/include/c++/9/algorithm:61,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:1,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/usr/include/c++/9/bits/stl_algobase.h:198:5: note: candidate: ‘template<class _Tp> constexpr const _Tp& std::min(const _Tp&, const _Tp&)’
198 | min(const _Tp& __a, const _Tp& __b)
| ^~~
/usr/include/c++/9/bits/stl_algobase.h:198:5: note: template argument deduction/substitution failed:
/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:14:43: note: deduced conflicting types for parameter ‘const _Tp’ (‘short int’ and ‘int’)
14 | tmp3 = std::min(tmp3, tmp2);
| ^
In file included from /usr/include/c++/9/algorithm:61,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:1,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/usr/include/c++/9/bits/stl_algobase.h:246:5: note: candidate: ‘template<class _Tp, class _Compare> constexpr const _Tp& std::min(const _Tp&, const _Tp&, _Compare)’
246 | min(const _Tp& __a, const _Tp& __b, _Compare __comp)
| ^~~
/usr/include/c++/9/bits/stl_algobase.h:246:5: note: template argument deduction/substitution failed:
/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:14:43: note: deduced conflicting types for parameter ‘const _Tp’ (‘short int’ and ‘int’)
14 | tmp3 = std::min(tmp3, tmp2);
| ^
In file included from /usr/include/c++/9/algorithm:62,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:1,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/usr/include/c++/9/bits/stl_algo.h:3450:5: note: candidate: ‘template<class _Tp> constexpr _Tp std::min(std::initializer_list<_Tp>)’
3450 | min(initializer_list<_Tp> __l)
| ^~~
/usr/include/c++/9/bits/stl_algo.h:3450:5: note: template argument deduction/substitution failed:
/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:14:43: note: mismatched types ‘std::initializer_list<_Tp>’ and ‘short int’
14 | tmp3 = std::min(tmp3, tmp2);
| ^
In file included from /usr/include/c++/9/algorithm:62,
from /tmp/torchinductor_su/dl/cdljpywww2h2ag4o35mwbvm45hhasxnxkhqgbupxnk3y7olula65.h:1,
from /tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:2:
/usr/include/c++/9/bits/stl_algo.h:3456:5: note: candidate: ‘template<class _Tp, class _Compare> constexpr _Tp std::min(std::initializer_list<_Tp>, _Compare)’
3456 | min(initializer_list<_Tp> __l, _Compare __comp)
| ^~~
/usr/include/c++/9/bits/stl_algo.h:3456:5: note: template argument deduction/substitution failed:
/tmp/torchinductor_su/he/ches4h3plhq5uqf3yf5dvsxje3yzz2tr6usd3yzufedzfgmciyd6.cpp:14:43: note: mismatched types ‘std::initializer_list<_Tp>’ and ‘short int’
14 | tmp3 = std::min(tmp3, tmp2);
| ^
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
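The root cause visible in the log is ordinary C++ overload resolution: `std::min` is a template over a single type `_Tp`, so the generated `std::min(tmp3, tmp2)` with a `short` and an `int` argument cannot deduce `_Tp`. A minimal sketch (hypothetical, not the generated Inductor kernel) showing the failure and the cast that makes it compile:
```cpp
#include <algorithm>

int main() {
    short tmp3 = 1;
    int tmp2 = 2;
    // tmp3 = std::min(tmp3, tmp2);  // error: deduced conflicting types ('short int' and 'int')
    tmp3 = std::min(tmp3, static_cast<short>(tmp2));  // ok: both operands are 'short'
    return tmp3;
}
```
The generated kernel would presumably need the same kind of explicit cast (or int16-aware codegen) at the point where it emits the `std::min` call.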
### Minified repro
_No response_
### Versions
PyTorch version: 2.1.0a0+gitfe05266
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 510.108.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.4
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.4
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 1
Model name: AMD Ryzen Threadripper 1950X 16-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3400.0000
CPU min MHz: 2200.0000
BogoMIPS: 6786.49
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 1 MiB
L2 cache: 8 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sme sev
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.1.0a0+gitfe05266
[conda] Could not collect
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,252 | 96,486 |
Make aten.rand and aten.empty core aten ops
|
triaged, oncall: pt2, module: decompositions
|
### 🐛 Describe the bug
aten.rand and aten.empty should be part of the core aten ops. AOT autograd with core_aten_decompositions currently retains these ops, undecomposed, in the FX graph:
```
import torch
import torch._dynamo as dynamo
from torch._functorch.aot_autograd import aot_module_simplified
from torch._decomp import core_aten_decompositions

def toy_backend(gm, sample_inputs):
    def my_compiler(gm, sample_inputs):
        print(gm.code)
        return gm.forward

    # Invoke AOTAutograd
    return aot_module_simplified(
        gm,
        sample_inputs,
        decompositions=core_aten_decompositions(),
        fw_compiler=my_compiler,
    )

def toy_example():
    t1 = torch.rand((8, 8))
    t2 = torch.empty(8, 8)
    return t1 + t2

compiled_fn = torch.compile(backend=toy_backend)(toy_example)
r = compiled_fn()
```
This produces the following FX graph -
```
def forward(self):
    rand = torch.ops.aten.rand.default([8, 8], device = device(type='cpu'), pin_memory = False)
    empty = torch.ops.aten.empty.memory_format([8, 8], device = device(type='cpu'), pin_memory = False)
    add = torch.ops.aten.add.Tensor(rand, empty); rand = empty = None
    return (add,)
/usr/local/lib/python3.9/dist-packages/torch/_dynamo/eval_frame.py:332: UserWarning: changing options to `torch.compile()` may require calling `torch._dynamo.reset()` to take effect
warnings.warn(
```
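A quick sanity check (a minimal sketch using only names already imported above) that these two ops indeed have no entry in the core decomposition table, which is why they survive into the graph:
```python
import torch
from torch._decomp import core_aten_decompositions

# core_aten_decompositions() returns a dict keyed by OpOverload; an op
# without an entry is left as-is by AOT autograd.
decomps = core_aten_decompositions()
print(torch.ops.aten.rand.default in decomps)         # expected: False
print(torch.ops.aten.empty.memory_format in decomps)  # expected: False
```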
### Versions
PyTorch version: 2.1.0.dev20230309+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230309+cu117
[pip3] torchaudio==0.13.1+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1+cu116
[conda] Could not collect
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @SherlockNoMad
| 2 |
3,253 | 96,469 |
Torch Dynamo backend compilation error with dynamic = True
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
I am training the model described here:
https://github.com/agunapal/examples/blob/pt2.0_example/pt2.0/mnist/main.py
When it is compiled with ```torch.compile(model, dynamic=True)```,
I get a compilation failure.
### Error logs
[dynamic-error.txt](https://github.com/pytorch/pytorch/files/10937442/dynamic-error.txt)
### Minified repro
```
import os
from math import inf
import torch
from torch import tensor, device
import torch.fx as fx
import functools
import torch._dynamo
from torch._dynamo.debug_utils import run_fwd_maybe_bwd
from torch._dynamo.backends.registry import lookup_backend
from torch._dynamo.testing import rand_strided
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._dynamo.config.load_config(b'\x80\x02}q\x00(X\x0b\x00\x00\x00output_codeq\x01\x89X\r\x00\x00\x00log_file_nameq\x02NX\x07\x00\x00\x00verboseq\x03\x89X\x11\x00\x00\x00output_graph_codeq\x04\x89X\x12\x00\x00\x00verify_correctnessq\x05\x89X\x12\x00\x00\x00minimum_call_countq\x06K\x01X\x15\x00\x00\x00dead_code_eliminationq\x07\x88X\x10\x00\x00\x00cache_size_limitq\x08K@X\x14\x00\x00\x00specialize_int_floatq\t\x89X\x0e\x00\x00\x00dynamic_shapesq\n\x88X\x10\x00\x00\x00guard_nn_modulesq\x0b\x89X\x1b\x00\x00\x00traceable_tensor_subclassesq\x0cc__builtin__\nset\nq\r]q\x0e\x85q\x0fRq\x10X\x0f\x00\x00\x00suppress_errorsq\x11\x89X\x15\x00\x00\x00replay_record_enabledq\x12\x89X \x00\x00\x00rewrite_assert_with_torch_assertq\x13\x88X\x12\x00\x00\x00print_graph_breaksq\x14\x89X\x07\x00\x00\x00disableq\x15\x89X*\x00\x00\x00allowed_functions_module_string_ignorelistq\x16h\r]q\x17(X\x13\x00\x00\x00torch.distributionsq\x18X\x0c\x00\x00\x00torch._primsq\x19X\r\x00\x00\x00torch._decompq\x1aX\x0b\x00\x00\x00torch._refsq\x1bX\r\x00\x00\x00torch.testingq\x1ce\x85q\x1dRq\x1eX\x12\x00\x00\x00repro_forward_onlyq\x1f\x89X\x0f\x00\x00\x00repro_toleranceq G?PbM\xd2\xf1\xa9\xfcX\x16\x00\x00\x00capture_scalar_outputsq!\x89X\x19\x00\x00\x00enforce_cond_guards_matchq"\x88X\x0c\x00\x00\x00optimize_ddpq#\x88X\x1a\x00\x00\x00raise_on_ctx_manager_usageq$\x88X\x1c\x00\x00\x00raise_on_unsafe_aot_autogradq%\x89X\x17\x00\x00\x00raise_on_backend_changeq&\x89X\x18\x00\x00\x00error_on_nested_fx_traceq\'\x88X\t\x00\x00\x00allow_rnnq(\x89X\x08\x00\x00\x00base_dirq)XG\x00\x00\x00/home/ubuntu/anaconda3/envs/test_2.0_py310/lib/python3.10/site-packagesq*X\x0e\x00\x00\x00debug_dir_rootq+X:\x00\x00\x00/home/ubuntu/fork/examples/pt2.0/mnist/torch_compile_debugq,X)\x00\x00\x00DO_NOT_USE_legacy_non_fake_example_inputsq-\x89X\x13\x00\x00\x00_save_config_ignoreq.h\r]q/(X\x0b\x00\x00\x00repro_afterq0X\x0b\x00\x00\x00repro_levelq1X!\x00\x00\x00skipfiles_inline_module_allowlistq2X\x12\x00\x00\x00constant_functionsq3e\x85q4Rq5u.')
torch._inductor.config.load_config(b'\x80\x02}q\x00(X\x05\x00\x00\x00debugq\x01\x89X\x10\x00\x00\x00disable_progressq\x02\x88X\x10\x00\x00\x00verbose_progressq\x03\x89X\x0b\x00\x00\x00cpp_wrapperq\x04\x89X\x03\x00\x00\x00dceq\x05\x89X\x14\x00\x00\x00static_weight_shapesq\x06\x88X\x0c\x00\x00\x00size_assertsq\x07\x88X\x10\x00\x00\x00pick_loop_ordersq\x08\x88X\x0f\x00\x00\x00inplace_buffersq\t\x88X\x11\x00\x00\x00benchmark_harnessq\n\x88X\x0f\x00\x00\x00epilogue_fusionq\x0b\x89X\x15\x00\x00\x00epilogue_fusion_firstq\x0c\x89X\x0f\x00\x00\x00pattern_matcherq\r\x88X\n\x00\x00\x00reorderingq\x0e\x89X\x0c\x00\x00\x00max_autotuneq\x0f\x89X\x17\x00\x00\x00realize_reads_thresholdq\x10K\x04X\x17\x00\x00\x00realize_bytes_thresholdq\x11M\xd0\x07X\x1b\x00\x00\x00realize_acc_reads_thresholdq\x12K\x08X\x0f\x00\x00\x00fallback_randomq\x13\x89X\x12\x00\x00\x00implicit_fallbacksq\x14\x88X\x0b\x00\x00\x00tune_layoutq\x15\x89X\x11\x00\x00\x00aggressive_fusionq\x16\x89X\x0f\x00\x00\x00max_fusion_sizeq\x17K@X\x1b\x00\x00\x00unroll_reductions_thresholdq\x18K\x08X\x0e\x00\x00\x00comment_originq\x19\x89X\x12\x00\x00\x00developer_warningsq\x1a\x88X\x0f\x00\x00\x00compile_threadsq\x1bK\x08X\x13\x00\x00\x00kernel_name_max_opsq\x1cK\nX\r\x00\x00\x00shape_paddingq\x1d\x89X\x0e\x00\x00\x00permute_fusionq\x1e\x89X\x1a\x00\x00\x00profiler_mark_wrapper_callq\x1f\x89X\x18\x00\x00\x00_raise_error_for_testingq \x89X\x0b\x00\x00\x00cpp.threadsq!J\xff\xff\xff\xffX\x13\x00\x00\x00cpp.dynamic_threadsq"\x89X\x0b\x00\x00\x00cpp.simdlenq#NX\x12\x00\x00\x00cpp.min_chunk_sizeq$M\x00\x10X\x07\x00\x00\x00cpp.cxxq%NX\x03\x00\x00\x00g++q&\x86q\'X\x19\x00\x00\x00cpp.enable_kernel_profileq(\x89X\x12\x00\x00\x00cpp.weight_prepackq)\x88X\x11\x00\x00\x00triton.cudagraphsq*\x89X\x17\x00\x00\x00triton.debug_sync_graphq+\x89X\x18\x00\x00\x00triton.debug_sync_kernelq,\x89X\x15\x00\x00\x00triton.dense_indexingq-\x89X\x10\x00\x00\x00triton.max_tilesq.K\x02X\x19\x00\x00\x00triton.autotune_pointwiseq/\x88X\'\x00\x00\x00triton.tiling_prevents_pointwise_fusionq0\x88X\'\x00\x00\x00triton.tiling_prevents_reduction_fusionq1\x88X\x1b\x00\x00\x00triton.ordered_kernel_namesq2\x89X\x1f\x00\x00\x00triton.descriptive_kernel_namesq3\x89X\x1c\x00\x00\x00triton.persistent_reductionsq4\x89X\r\x00\x00\x00trace.enabledq5\x89X\x0f\x00\x00\x00trace.debug_logq6\x88X\x0e\x00\x00\x00trace.info_logq7\x89X\x0e\x00\x00\x00trace.fx_graphq8\x88X\x1a\x00\x00\x00trace.fx_graph_transformedq9\x88X\x13\x00\x00\x00trace.ir_pre_fusionq:\x88X\x14\x00\x00\x00trace.ir_post_fusionq;\x88X\x11\x00\x00\x00trace.output_codeq<\x88X\x13\x00\x00\x00trace.graph_diagramq=\x89X\x15\x00\x00\x00trace.compile_profileq>\x89X\x10\x00\x00\x00trace.upload_tarq?Nu.')
torch._functorch.config.load_config(b'\x80\x02}q\x00(X\x11\x00\x00\x00use_functionalizeq\x01\x88X\x0f\x00\x00\x00use_fake_tensorq\x02\x88X\x16\x00\x00\x00fake_tensor_allow_metaq\x03\x88X\x0c\x00\x00\x00debug_assertq\x04\x88X\x14\x00\x00\x00debug_fake_cross_refq\x05\x89X\x11\x00\x00\x00debug_partitionerq\x06\x89X\x0c\x00\x00\x00debug_graphsq\x07\x89X\x0b\x00\x00\x00debug_jointq\x08\x89X\x12\x00\x00\x00use_dynamic_shapesq\t\x89X\x14\x00\x00\x00static_weight_shapesq\n\x88X\x03\x00\x00\x00cseq\x0b\x88X\x10\x00\x00\x00max_dist_from_bwq\x0cK\x03X\t\x00\x00\x00log_levelq\rK\x14u.')
# REPLACEABLE COMMENT FOR TESTING PURPOSES
args = [((s0, 1, s1, s1), (s1**2, s1**2, s1, 1), torch.float32, 'cuda', False)]
args = [rand_strided(sh, st, dt, dev).requires_grad_(rg) for (sh, st, dt, dev, rg) in args]
from torch.nn import *
class Repro(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.self_conv1 = Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1)).cuda()
        self.self_conv2 = Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)).cuda()
        self.self_dropout1 = Dropout(p=0.25, inplace=False)
        self.self_fc1 = Linear(in_features=9216, out_features=128, bias=True).cuda()
        self.self_dropout2 = Dropout(p=0.5, inplace=False)
        self.self_fc2 = Linear(in_features=128, out_features=10, bias=True).cuda()

    def forward(self, x : torch.Tensor):
        self_conv1 = self.self_conv1(x); x = None
        relu = torch.nn.functional.relu(self_conv1); self_conv1 = None
        self_conv2 = self.self_conv2(relu); relu = None
        relu_1 = torch.nn.functional.relu(self_conv2); self_conv2 = None
        max_pool2d = torch.nn.functional.max_pool2d(relu_1, 2); relu_1 = None
        self_dropout1 = self.self_dropout1(max_pool2d); max_pool2d = None
        flatten = torch.flatten(self_dropout1, 1); self_dropout1 = None
        self_fc1 = self.self_fc1(flatten); flatten = None
        relu_2 = torch.nn.functional.relu(self_fc1); self_fc1 = None
        self_dropout2 = self.self_dropout2(relu_2); relu_2 = None
        self_fc2 = self.self_fc2(self_dropout2); self_dropout2 = None
        log_softmax = torch.nn.functional.log_softmax(self_fc2, dim = 1); self_fc2 = None
        return (log_softmax,)

mod = Repro()

# Setup debug minifier compiler
torch._dynamo.debug_utils.MINIFIER_SPAWNED = True
compiler_fn = lookup_backend("dynamo_minifier_backend")
dynamo_minifier_backend = functools.partial(
    compiler_fn,
    compiler_name="inductor",
)
opt_mod = torch._dynamo.optimize(dynamo_minifier_backend)(mod)

with torch.cuda.amp.autocast(enabled=False):
    opt_mod(*args)
```
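Note that this generated script cannot run as written: the symbolic sizes `s0` and `s1` in the `args` line are never defined (the same minifier problem reported in the next issue). A hypothetical binding that matches the MNIST inputs of the original script (1-channel 28x28 images; `in_features=9216 == 64 * 12 * 12` is consistent with a 28x28 input) would be:
```python
# Hypothetical concrete sizes so the generated repro can execute:
# s0 = batch size, s1 = spatial size of the 1-channel MNIST input.
s0, s1 = 64, 28
```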
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2799.834
BogoMIPS: 5599.66
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.0+cu118
[pip3] torchvision==0.15.0+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchaudio 2.0.0+cu118 pypi_0 pypi
[conda] torchvision 0.15.0+cu118 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 13 |
3,254 | 96,467 |
[MINIFIER] Running code snippet with TORCHDYNAMO_REPRO_AFTER="dynamo" leads to error
|
triaged, oncall: pt2, module: dynamic shapes, module: minifier
|
### 🐛 Describe the bug
The following code snippet runs fine with `torch.compile` (mostly, aside from a fallback on aten.sort) when run without the minifier. With `TORCHDYNAMO_REPRO_AFTER="dynamo"` set, an error is produced and the resulting `minifier_launcher.py` cannot be executed.
I wasn't sure whether this counts as a `torch.compile` problem or as a minifier problem.
Reproducible example:
```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, outputs, labels, num_masks=2):
        mask_positions = labels != -100
        # outputs = outputs[mask_positions]  # not allowed as dynamic shape op
        # torch.masked_select(labels, mask_positions)  # also not allowed as a dynamic shape operator
        indices = torch.argsort(mask_positions.int())[-num_masks:]  # ugh
        outputs = outputs[indices]  # not allowed as dynamic shape op, but ok with indices
        labels = labels[indices]
        return {"loss": outputs.mean(), "outputs": outputs}

labels = torch.ones((16,), dtype=torch.long) * -100
labels[3] = 8
labels[5] = 3
outputs = torch.randn(16, 128)

model = torch.compile(Model(), fullgraph=True, dynamic=True)
loss = model(outputs, labels)["loss"]
```
Error message with minifier:
```
[2023-03-09 17:50:42,027] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-03-09 17:50:42,429] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-03-09 17:50:42,431] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-03-09 17:50:42,875] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 0
[2023-03-09 17:50:42,879] torch._inductor.graph: [INFO] Using FallbackKernel: aten.sort
[2023-03-09 17:50:44,780] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2023-03-09 17:50:44,783] torch._dynamo.debug_utils: [WARNING] Compiled Fx GraphModule failed. Creating script to minify the error.
[2023-03-09 17:50:44,785] torch._dynamo.debug_utils: [WARNING] Writing minified repro to [.../torch_compile_debug/run_2023_03_09_17_50_44_784924-pid_141514/minifier/minifier_launcher.py
[2023-03-09 17:50:44,785] torch._dynamo.debug_utils: [WARNING] Compiled Fx GraphModule failed. Creating script to minify the error.
[2023-03-09 17:50:44,786] torch._dynamo.debug_utils: [WARNING] Writing minified repro to /torch_compile_debug/run_2023_03_09_17_50_44_784924-pid_141514/minifier/minifier_launcher.py
Traceback (most recent call last):
File "minimal_error.py", line 53, in <module>
loss = model(outputs, labels)["loss"]
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 95, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 368, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1839, in run
super().run()
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 596, in run
and self.step()
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 560, in step
getattr(self, inst.opname)(inst)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1918, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 575, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 622, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 708, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 704, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py", line 1031, in debug_wrapper
compiled_gm = compiler_fn(copy.deepcopy(gm), example_inputs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py", line 1032, in debug_wrapper
run_fwd_maybe_bwd(compiled_gm, example_inputs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/debug_utils.py", line 633, in run_fwd_maybe_bwd
out = gm(args)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g
return f(*args)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2847, in forward
return compiled_fn(full_args)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g
return f(*args)
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1901, in runtime_wrapper
all_outs = call_func_with_args(
File "/home/jonas/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1247, in call_func_with_args
out = normalize_as_list(f(args))
File "/tmp/torchinductor_jonas/6x/c6xwttnuklgllfqp6jhtkgsip4mlmqx5s56fo2h3ejkhrpp2blwz.py", line 103, in call
buf0 = empty_strided((s0, ), (1, ), device='cpu', dtype=torch.int32)
torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised:
RuntimeError: /opt/conda/conda-bld/pytorch_1678176698518/work/build/aten/src/ATen/RegisterCPU.cpp:5148: SymIntArrayRef expected to contain only concrete integers
Minifier script written to [...] minifier/minifier_launcher.py. Run this script to find the smallest traced graph which reproduces this error.
```
Subsequent error on minifier script:
```
/torch_compile_debug/run_2023_03_09_17_50_44_784924-pid_141514/minifier/minifier_launcher.py
Traceback (most recent call last):
File "/torch_compile_debug/run_2023_03_09_17_50_44_784924-pid_141514/minifier/minifier_launcher.py", line 24, in <module>
args = [((s0, s1), (s1, 1), torch.float32, 'cpu', False), ((s0,), (1,), torch.int64, 'cpu', False)]
NameError: name 's0' is not defined
```
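The immediate `NameError` comes from the minifier writing the symbolic sizes `s0`/`s1` into the `args` literal without binding them. As a hypothetical local workaround (not a fix for the minifier itself), they can be bound to the concrete values of the original inputs before that line:
```python
# Hypothetical edit near the top of minifier_launcher.py: the original
# inputs were outputs = torch.randn(16, 128) and labels of shape (16,).
s0, s1 = 16, 128
args = [((s0, s1), (s1, 1), torch.float32, 'cpu', False),
        ((s0,), (1,), torch.int64, 'cpu', False)]
```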
### Versions
```
PyTorch version: 2.1.0.dev20230307
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Pop!_OS 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Ti Laptop GPU
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,255 | 96,459 |
[torch.compile] Warning using BuiltinVariable.call_dict but not for {}
|
good first issue, triaged, oncall: pt2, release notes: dynamo
|
### 🐛 Describe the bug
I would expect torch.compile to work equally well for dictionaries created using `dict` as for dictionaries created with `{}`, but using `dict` leads to the warning `incorrect arg count <bound method BuiltinVariable.call_dict of BuiltinVariable(dict)> got an unexpected keyword argument 'loss' and no constant handler`.
Is this expected behavior?
Here's a small reproduction:
```
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.Embedding(10, 8)

    def forward(self, input_ids):
        outputs = self.embedding(input_ids)
        loss = outputs.mean()
        return dict(loss=loss, outputs=outputs)      # warning
        # return {"loss": loss, "outputs": outputs}  # ok
```
test with
```
model = torch.compile(Model())
model(torch.randint(10, (4, 16)))
```
### Error logs
2.1.0.dev20230307
[2023-03-09 17:17:55,890] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-03-09 17:17:55,895] torch._dynamo.variables.builtin: [WARNING] incorrect arg count <bound method BuiltinVariable.call_dict of BuiltinVariable(dict)> got an unexpected keyword argument 'loss' and no constant handler
[2023-03-09 17:17:55,895] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-03-09 17:17:56,774] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 0
[2023-03-09 17:17:58,713] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2023-03-09 17:17:58,713] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
### Minified repro
_No response_
### Versions
PyTorch version: 2.1.0.dev20230307
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @davidberard98
| 9 |
3,256 | 96,456 |
Shape Error when training HF deberta-base with Inductor
|
good first issue, triaged, module: meta tensors, oncall: pt2
|
### 🐛 Describe the bug
When using HuggingFace's Trainer API I noticed that PyTorch eager mode succeeds as expected but inductor fails with a shape mismatch error:
```
ValueError: Cannot view a tensor with shape torch.Size([1, 256, 12, 64]) and strides (196608, 64, 16384, 1) as a tensor with shape (1, 256, 768)!
```
This only happens with the deberta-base model.
### Error logs
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/conda/lib/python3.8/site-packages/torch/_dynamo/output_graph.py:670 in call_user_compiler │
│ │
│ 667 │ │ │ elif config.DO_NOT_USE_legacy_non_fake_example_inputs: │
│ 668 │ │ │ │ compiled_fn = compiler_fn(gm, self.example_inputs()) │
│ 669 │ │ │ else: │
│ ❱ 670 │ │ │ │ compiled_fn = compiler_fn(gm, self.fake_example_inputs()) │
│ 671 │ │ │ _step_logger()(logging.INFO, f"done compiler function {name}") │
│ 672 │ │ │ assert callable(compiled_fn), "compiler_fn did not return callable" │
│ 673 │ │ except Exception as e: │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_dynamo/debug_utils.py:1055 in debug_wrapper │
│ │
│ 1052 │ │ │ │ │ ) │
│ 1053 │ │ │ │ │ raise │
│ 1054 │ │ else: │
│ ❱ 1055 │ │ │ compiled_gm = compiler_fn(gm, example_inputs) │
│ 1056 │ │ │
│ 1057 │ │ return compiled_gm │
│ 1058 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/__init__.py:1390 in __call__ │
│ │
│ 1387 │ def __call__(self, model_, inputs_): │
│ 1388 │ │ from torch._inductor.compile_fx import compile_fx │
│ 1389 │ │ │
│ ❱ 1390 │ │ return compile_fx(model_, inputs_, config_patches=self.config) │
│ 1391 │
│ 1392 │
│ 1393 def compile(model: Optional[Callable] = None, *, │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_inductor/compile_fx.py:455 in compile_fx │
│ │
│ 452 │ │ # TODO: can add logging before/after the call to create_aot_dispatcher_function │
│ 453 │ │ # in torch._functorch/aot_autograd.py::aot_module_simplified::aot_function_simpl │
│ 454 │ │ # once torchdynamo is merged into pytorch │
│ ❱ 455 │ │ return aot_autograd( │
│ 456 │ │ │ fw_compiler=fw_compiler, │
│ 457 │ │ │ bw_compiler=bw_compiler, │
│ 458 │ │ │ decompositions=select_decomp_table(), │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_dynamo/backends/common.py:48 in compiler_fn │
│ │
│ 45 │ │ try: │
│ 46 │ │ │ # NB: NOT cloned! │
│ 47 │ │ │ with enable_aot_logging(): │
│ ❱ 48 │ │ │ │ cg = aot_module_simplified(gm, example_inputs, **kwargs) │
│ 49 │ │ │ │ counters["aot_autograd"]["ok"] += 1 │
│ 50 │ │ │ │ return eval_frame.disable(cg) │
│ 51 │ │ except Exception: │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:2805 in │
│ aot_module_simplified │
│ │
│ 2802 │ full_args.extend(params_flat) │
│ 2803 │ full_args.extend(args) │
│ 2804 │ │
│ ❱ 2805 │ compiled_fn = create_aot_dispatcher_function( │
│ 2806 │ │ functional_call, │
│ 2807 │ │ full_args, │
│ 2808 │ │ aot_config, │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_dynamo/utils.py:163 in time_wrapper │
│ │
│ 160 │ │ │ if key not in compilation_metrics: │
│ 161 │ │ │ │ compilation_metrics[key] = [] │
│ 162 │ │ │ t0 = time.time() │
│ ❱ 163 │ │ │ r = func(*args, **kwargs) │
│ 164 │ │ │ time_spent = time.time() - t0 │
│ 165 │ │ │ # print(f"Dynamo timer: key={key}, latency={latency:.2f} sec") │
│ 166 │ │ │ compilation_metrics[key].append(time_spent) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:2498 in │
│ create_aot_dispatcher_function │
│ │
│ 2495 │ │ compiler_fn = partial(aot_wrapper_dedupe, compiler_fn=compiler_fn) │
│ 2496 │ │ # You can put more passes here │
│ 2497 │ │ │
│ ❱ 2498 │ │ compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config) │
│ 2499 │ │ │
│ 2500 │ │ if not hasattr(compiled_fn, "_boxed_call"): │
│ 2501 │ │ │ compiled_fn = make_boxed_func(compiled_fn) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1713 in │
│ aot_wrapper_dedupe │
│ │
│ 1710 │ │ │ │ break │
│ 1711 │ │ │
│ 1712 │ │ if ok: │
│ ❱ 1713 │ │ │ return compiler_fn(flat_fn, leaf_flat_args, aot_config) │
│ 1714 │ │
│ 1715 │ # Strategy 2: Duplicate specialize. │
│ 1716 │ # │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:2087 in │
│ aot_dispatch_autograd │
│ │
│ 2084 │ if config.use_functionalize: │
│ 2085 │ │ with enable_python_dispatcher(): │
│ 2086 │ │ │ flattened_joints, _ = pytree.tree_flatten(joint_inputs) │
│ ❱ 2087 │ │ │ fx_g = make_fx(joint_forward_backward, aot_config.decompositions)( │
│ 2088 │ │ │ │ *joint_inputs │
│ 2089 │ │ │ ) │
│ 2090 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:714 in wrapped │
│ │
│ 711 │ │ # thus irrelevant to any external functional trace. │
│ 712 │ │ with decompose(decomposition_table), fake_tensor_mode, python_dispatcher_mode, \ │
│ 713 │ │ │ sym_mode, proxy_mode, disable_autocast_cache(), disable_proxy_modes_tracing │
│ ❱ 714 │ │ │ t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concre │
│ 715 │ │ │
│ 716 │ │ # TODO: kind of a bad way to do it, should maybe figure out a better way │
│ 717 │ │ if tracing_mode == "symbolic": │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:209 in _fn │
│ │
│ 206 │ │ │ dynamic_ctx = enable_dynamic(self.dynamic) │
│ 207 │ │ │ dynamic_ctx.__enter__() │
│ 208 │ │ │ try: │
│ ❱ 209 │ │ │ │ return fn(*args, **kwargs) │
│ 210 │ │ │ finally: │
│ 211 │ │ │ │ set_eval_frame(prior) │
│ 212 │ │ │ │ dynamic_ctx.__exit__(None, None, None) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:443 in │
│ dispatch_trace │
│ │
│ 440 │ │ tracer: Tracer, │
│ 441 │ │ concrete_args: Optional[Tuple[Any, ...]] = None, │
│ 442 ) -> GraphModule: │
│ ❱ 443 │ graph = tracer.trace(root, concrete_args) │
│ 444 │ name = root.__class__.__name__ if isinstance(root, torch.nn.Module) else root.__name │
│ 445 │ return GraphModule(tracer.root, graph, name) │
│ 446 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:209 in _fn │
│ │
│ 206 │ │ │ dynamic_ctx = enable_dynamic(self.dynamic) │
│ 207 │ │ │ dynamic_ctx.__enter__() │
│ 208 │ │ │ try: │
│ ❱ 209 │ │ │ │ return fn(*args, **kwargs) │
│ 210 │ │ │ finally: │
│ 211 │ │ │ │ set_eval_frame(prior) │
│ 212 │ │ │ │ dynamic_ctx.__exit__(None, None, None) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py:778 in trace │
│ │
│ 775 │ │ │ │ self.create_node( │
│ 776 │ │ │ │ │ "output", │
│ 777 │ │ │ │ │ "output", │
│ ❱ 778 │ │ │ │ │ (self.create_arg(fn(*args)),), │
│ 779 │ │ │ │ │ {}, │
│ 780 │ │ │ │ │ type_expr=fn.__annotations__.get("return", None), │
│ 781 │ │ │ │ ) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py:652 in flatten_fn │
│ │
│ 649 │ │ │ │
│ 650 │ │ │ def flatten_fn(*args): │
│ 651 │ │ │ │ tree_args = pytree.tree_unflatten(list(args), in_spec) │
│ ❱ 652 │ │ │ │ tree_out = root_fn(*tree_args) │
│ 653 │ │ │ │ out_args, out_spec = pytree.tree_flatten(tree_out) │
│ 654 │ │ │ │ assert isinstance(self.graph._codegen, _PyTreeCodeGen) │
│ 655 │ │ │ │ self.graph._codegen.pytree_info = ( │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:459 in wrapped │
│ │
│ 456 │ │ with _pop_mode_temporarily(): │
│ 457 │ │ │ track_tensor_tree(flat_tensors, flat_proxies, constant=None, tracer=tracer) │
│ 458 │ │ │
│ ❱ 459 │ │ out = f(*tensors) │
│ 460 │ │ out = pytree.tree_map_only( │
│ 461 │ │ │ torch.Tensor, │
│ 462 │ │ │ lambda t: get_proxy_slot(t, tracer, t, lambda x: x.proxy), │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1156 in traced_joint │
│ │
│ 1153 │ # the joint needs have args named "primals" and "tangents", │
│ 1154 │ # which are hardcoded into the partitioning logic. │
│ 1155 │ def traced_joint(primals, tangents): │
│ ❱ 1156 │ │ return functionalized_f_helper(primals, tangents) │
│ 1157 │ │
│ 1158 │ def traced_forward(*primals): │
│ 1159 │ │ return functionalized_f_helper(primals) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1108 in │
│ functionalized_f_helper │
│ │
│ 1105 │ │ torch._enable_functionalization(reapply_views=True) │
│ 1106 │ │ try: │
│ 1107 │ │ │ # Run the joint │
│ ❱ 1108 │ │ │ f_outs = flat_fn_no_input_mutations(fn, f_primals, f_tangents, meta, keep_in │
│ 1109 │ │ finally: │
│ 1110 │ │ │ torch._disable_functionalization() │
│ 1111 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1076 in │
│ flat_fn_no_input_mutations │
│ │
│ 1073 │ │ ] │
│ 1074 │ else: │
│ 1075 │ │ primals_after_cloning = primals │
│ ❱ 1076 │ outs = flat_fn_with_synthetic_bases_expanded(fn, primals, primals_after_cloning, may │
│ 1077 │ return outs │
│ 1078 │
│ 1079 # This creates the final function that we want to trace using make_fx(), │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1048 in │
│ flat_fn_with_synthetic_bases_expanded │
│ │
│ 1045 │ # *after* we clone inputs for autograd (see below), to preserve the view relationshi │
│ 1046 │ primals = unpack_synthetic_bases(primals_after_cloning, meta.synthetic_base_info) │
│ 1047 │ assert len(meta.fw_metadata.input_info) == len(primals) │
│ ❱ 1048 │ outs = forward_or_joint(fn, primals_before_cloning, primals, maybe_tangents, meta, k │
│ 1049 │ return outs │
│ 1050 │
│ 1051 # This function adds extra clone() calls on any inputs in the forward that get mutated. │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1017 in forward_or_joint │
│ │
│ 1014 │ # Call the backwards pass │
│ 1015 │ if grad_primals: │
│ 1016 │ │ with fx_traceback.preserve_node_meta(): │
│ ❱ 1017 │ │ │ backward_out = torch.autograd.grad( │
│ 1018 │ │ │ │ needed_outs, │
│ 1019 │ │ │ │ grad_primals, │
│ 1020 │ │ │ │ grad_outputs=needed_tangents, │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py:269 in grad │
│ │
│ 266 │ t_inputs = cast(Tuple[torch.Tensor, ...], (inputs,) if is_tensor_like(inputs) else t │
│ 267 │ overridable_args = t_outputs + t_inputs │
│ 268 │ if has_torch_function(overridable_args): │
│ ❱ 269 │ │ return handle_torch_function( │
│ 270 │ │ │ grad, │
│ 271 │ │ │ overridable_args, │
│ 272 │ │ │ t_outputs, │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/overrides.py:1534 in handle_torch_function │
│ │
│ 1531 │ │ # if we're here, the mode must be set to a TorchFunctionStackMode │
│ 1532 │ │ # this unsets it and calls directly into TorchFunctionStackMode's torch function │
│ 1533 │ │ with _pop_mode_temporarily() as mode: │
│ ❱ 1534 │ │ │ result = mode.__torch_function__(public_api, types, args, kwargs) │
│ 1535 │ │ if result is not NotImplemented: │
│ 1536 │ │ │ return result │
│ 1537 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_inductor/overrides.py:38 in __torch_function__ │
│ │
│ 35 │ │ │ and replacements[func] in replacements_using_triton_random │
│ 36 │ │ ): │
│ 37 │ │ │ return replacements[func](*args, **kwargs) │
│ ❱ 38 │ │ return func(*args, **kwargs) │
│ 39 │
│ 40 │
│ 41 patch_functions = AutogradMonkeypatch │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py:303 in grad │
│ │
│ 300 │ │ │ │ allow_unused, accumulate_grad=False) # Calls into the C++ engine to run │
│ 301 │ │ return _vmap_internals._vmap(vjp, 0, 0, allow_none_pass_through=True)(grad_outpu │
│ 302 │ else: │
│ ❱ 303 │ │ return Variable._execution_engine.run_backward( # Calls into the C++ engine to │
│ 304 │ │ │ t_outputs, grad_outputs_, retain_graph, create_graph, t_inputs, │
│ 305 │ │ │ allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the │
│ 306 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/utils/_stats.py:20 in wrapper │
│ │
│ 17 │ │ if fn.__qualname__ not in simple_call_counter: │
│ 18 │ │ │ simple_call_counter[fn.__qualname__] = 0 │
│ 19 │ │ simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 │
│ ❱ 20 │ │ return fn(*args, **kwargs) │
│ 21 │ return wrapper │
│ 22 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:487 in │
│ __torch_dispatch__ │
│ │
│ 484 │ @count │
│ 485 │ def __torch_dispatch__(self, func, types, args=(), kwargs=None): │
│ 486 │ │ with self.sym_mode.enable(False): │
│ ❱ 487 │ │ │ return self.inner_torch_dispatch(func, types, args, kwargs) │
│ 488 │ │
│ 489 │ def __enter__(self): │
│ 490 │ │ # sym mode first, then us... │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:512 in │
│ inner_torch_dispatch │
│ │
│ 509 │ │ if func in [prim.device.default]: │
│ 510 │ │ │ return func(*args, **kwargs) │
│ 511 │ │ │
│ ❱ 512 │ │ out = proxy_call(self, func, args, kwargs) │
│ 513 │ │ return out │
│ 514 │
│ 515 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:345 in proxy_call │
│ │
│ 342 │ │ else: │
│ 343 │ │ │ args[0].proxy = proxy_out │
│ 344 │ │
│ ❱ 345 │ out = func(*args, **kwargs) │
│ 346 │ │
│ 347 │ # In some circumstances, we will be tracing in a situation where a tensor │
│ 348 │ # is *statically* known to be a constant (currently, this only happens if │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_ops.py:284 in __call__ │
│ │
│ 281 │ │ ) │
│ 282 │ │
│ 283 │ def __call__(self, *args, **kwargs): │
│ ❱ 284 │ │ return self._op(*args, **kwargs or {}) │
│ 285 │ │
│ 286 │ def __hash__(self): │
│ 287 │ │ return hash(self._op) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/utils/_stats.py:20 in wrapper │
│ │
│ 17 │ │ if fn.__qualname__ not in simple_call_counter: │
│ 18 │ │ │ simple_call_counter[fn.__qualname__] = 0 │
│ 19 │ │ simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 │
│ ❱ 20 │ │ return fn(*args, **kwargs) │
│ 21 │ return wrapper │
│ 22 │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py:987 in │
│ __torch_dispatch__ │
│ │
│ 984 │ @count │
│ 985 │ def __torch_dispatch__(self, func, types, args=(), kwargs=None): │
│ 986 │ │ try: │
│ ❱ 987 │ │ │ return self.dispatch(func, types, args, kwargs) │
│ 988 │ │ except TypeError: │
│ 989 │ │ │ log.exception("fake tensor raised TypeError") │
│ 990 │ │ │ raise │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py:1170 in dispatch │
│ │
│ 1167 │ │ # python meta registrations, prims, decomps, and c++ meta fns (structured kernel │
│ 1168 │ │ try: │
│ 1169 │ │ │ with in_kernel_invocation_manager(self): │
│ ❱ 1170 │ │ │ │ r = func(*args, **kwargs) │
│ 1171 │ │ except NotImplementedError as not_implemented_error: │
│ 1172 │ │ │ # no meta kernel registered, fallback to kernel for the device │
│ 1173 │ │ │ if not self.allow_fallback_kernels: │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_ops.py:284 in __call__ │
│ │
│ 281 │ │ ) │
│ 282 │ │
│ 283 │ def __call__(self, *args, **kwargs): │
│ ❱ 284 │ │ return self._op(*args, **kwargs or {}) │
│ 285 │ │
│ 286 │ def __hash__(self): │
│ 287 │ │ return hash(self._op) │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_refs/__init__.py:3988 in view │
│ │
│ 3985 # TODO: Turn this into a decomposition (currently fails on reshape meta tests) │
│ 3986 @register_decomposition(aten.view) │
│ 3987 def view(a: TensorLikeType, *shape: ShapeType) -> TensorLikeType: │
│ ❱ 3988 │ return _reshape_view_helper(a, *shape, allow_copy=False) │
│ 3989 │
│ 3990 │
│ 3991 # CompositeImplicitAutograd - don't register decomp │
│ │
│ /opt/conda/lib/python3.8/site-packages/torch/_refs/__init__.py:3237 in _reshape_view_helper │
│ │
│ 3234 │ │ │ │ msg = "Cannot view a tensor with shape {0} and strides {1} as a tensor w │
│ 3235 │ │ │ │ │ a.shape, a.stride(), shape │
│ 3236 │ │ │ │ ) │
│ ❱ 3237 │ │ │ │ raise ValueError(msg) │
│ 3238 │ │ │ │
│ 3239 │ │ │ a_ = flatten(a_, idx, end) │
│ 3240 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Cannot view a tensor with shape torch.Size([1, 256, 12, 64]) and strides (196608, 64, 16384, 1) as a tensor with shape (1, 256, 768)!
```
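The `ValueError` itself can be reproduced outside the model: the strides `(196608, 64, 16384, 1)` are exactly what a `(1, 12, 256, 64)` tensor gets after `transpose(1, 2)`, and such a non-contiguous tensor cannot be `view`ed to `(1, 256, 768)`. A minimal sketch, independent of deberta (in eager mode the same condition raises a `RuntimeError` rather than the `ValueError` from the refs/meta path above):
```python
import torch

x = torch.randn(1, 12, 256, 64).transpose(1, 2)  # shape (1, 256, 12, 64), strides (196608, 64, 16384, 1)
print(x.shape, x.stride())
try:
    x.view(1, 256, 768)       # fails: stride layout is incompatible with view
except RuntimeError as e:
    print(e)
y = x.reshape(1, 256, 768)    # ok: reshape copies when a view is impossible
```
The fix on the tracing side is presumably to avoid routing this through the strict `view` decomposition, or for the model to call `contiguous()`/`reshape` at that point.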
### Minified repro
The minifier was unable to reproduce the error. Steps to reproduce manually:
```
pip3 install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
cd examples/pytorch/language-modeling
pip install -r requirements.txt
WANDB_DISABLED=true python run_mlm.py --model_name_or_path microsoft/deberta-base --output_dir . --fp16 --dataloader_drop_last --dataset_config_name wikitext-2-raw-v1 --dataset_name wikitext --do_train --evaluation_strategy no --logging_strategy epoch --max_seq_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 128 --save_strategy no --torch_compile_backend inductor
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0a0+git9cfa076
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1028-aws-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
GPU 4: NVIDIA A10G
GPU 5: NVIDIA A10G
GPU 6: NVIDIA A10G
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2799.534
BogoMIPS: 5599.06
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 384 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.10.5
[pip3] ema-pytorch==0.2.1
[pip3] functorch==1.14.0a0+408bcf1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.1
[pip3] sagemaker-pytorch-training==2.7.0
[pip3] torch==2.0.0a0+git9cfa076
[pip3] torch-fidelity==0.3.0
[pip3] torch-struct==0.5
[pip3] torchaudio==2.0.0a0+b96a7eb
[pip3] torchdata==0.5.1+a246b31
[pip3] torchmetrics==0.11.3
[pip3] torchrec-nightly==2023.3.6
[pip3] torchtext==0.14.0a0+5b78d07
[pip3] torchvision==0.14.1a0+b69fce3
[pip3] vector-quantize-pytorch==1.1.1
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.10.5 pypi_0 pypi
[conda] ema-pytorch 0.2.1 pypi_0 pypi
[conda] functorch 1.14.0a0+408bcf1 pypi_0 pypi
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] mkl-include 2023.0.0 h84fe81f_26648 conda-forge
[conda] numpy 1.21.2 pypi_0 pypi
[conda] pytorch 1.13.1 cpu_py38hbac4b8a_1 conda-forge
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.1 pypi_0 pypi
[conda] sagemaker-pytorch-training 2.7.0 pypi_0 pypi
[conda] torch 2.0.0a0+git9cfa076 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.0.0a0+b96a7eb pypi_0 pypi
[conda] torchdata 0.5.1 py38h60d003c_1 conda-forge
[conda] torchmetrics 0.11.3 pypi_0 pypi
[conda] torchrec-nightly 2023.3.6 pypi_0 pypi
[conda] torchtext 0.14.0a0+5b78d07 pypi_0 pypi
[conda] torchvision 0.15.0a0+0bdd01a pypi_0 pypi
[conda] vector-quantize-pytorch 1.1.1 pypi_0 pypi
```
cc @ezyang @eellison @bdhirsh @msaroufim @wconstab @anijain2305 @ngimel @soumith
| 11 |
3,257 | 96,449 |
'aten::affine_grid_generator' to ONNX opset version 14 is not supported
|
module: onnx, triaged
|
### 🐛 Describe the bug
UnsupportedOperatorError: Exporting the operator 'aten::affine_grid_generator' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues
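For reference, a minimal sketch (untested) that should reach this operator, assuming it is hit via `F.affine_grid`, which lowers to `aten::affine_grid_generator`:
```python
import torch
import torch.nn.functional as F

class M(torch.nn.Module):
    def forward(self, theta):
        # F.affine_grid lowers to aten::affine_grid_generator
        return F.affine_grid(theta, [1, 1, 4, 4], align_corners=False)

theta = torch.eye(2, 3).unsqueeze(0)  # one (2, 3) affine matrix, batch size 1
# Expected to raise UnsupportedOperatorError for opset 14:
torch.onnx.export(M(), (theta,), "affine.onnx", opset_version=14)
```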
### Versions
torch 1.13.1
| 2 |
3,258 | 96,448 |
Unable to move torch.jit.load-ed models to XLA devices
|
oncall: jit, module: xla
|
### 🐛 Describe the bug
Here are the steps to reproduce and the error:
```
>>> import torch
>>> import torch_xla.core.xla_model as xm
>>> x = torch.nn.Linear(8, 32)
>>> torch.jit.save(torch.jit.script(x), 'x.pt')
>>> xsr = torch.jit.load('x.pt')
>>> xsr.to(xm.xla_device())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 987, in to
return self._apply(convert)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 668, in _apply
assert isinstance(param, Parameter)
AssertionError
```
Here it works if I just script the model (without save & load):
```
>>> x.to(xm.xla_device())
Linear(in_features=8, out_features=32, bias=True)
>>> xs = torch.jit.script(x)
>>> xs.to(xm.xla_device())
RecursiveScriptModule(original_name=Linear)
```
However, the version that's read back into xsr fails:
```
>>> xsr
RecursiveScriptModule(original_name=Linear)
>>> xsr.to(xm.xla_device())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 987, in to
return self._apply(convert)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 668, in _apply
assert isinstance(param, Parameter)
AssertionError
```
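A possible workaround sketch (untested; it assumes the eager architecture is known, and rebuilds an eager module so `.to()` never runs `_apply` on a `RecursiveScriptModule`):
```python
fresh = torch.nn.Linear(8, 32)                              # rebuild the eager module
fresh.load_state_dict(torch.jit.load('x.pt').state_dict())  # copy the saved weights
xs = torch.jit.script(fresh.to(xm.xla_device()))            # move first, then script
```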
Let me know if you need any more information or a way to install pytorch/xla.
### Versions
pt-1.13
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @bdhirsh @yyetim
| 5 |
3,259 | 96,447 |
Information about CPU in `collect_env` is too verbose
|
module: collect_env.py, triaged
|
### 🐛 Describe the bug
See this report for example:
```
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2799.834
BogoMIPS: 5599.66
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.0+cu118
[pip3] torchvision==0.15.0+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchaudio 2.0.0+cu118 pypi_0 pypi
[conda] torchvision 0.15.0+cu118 pypi_0 pypi
```
I.e., the report is 70 lines long and the CPU info accounts for more than 50% of it.
The plan is to just trim it to 10 lines max (and add an offset/indentation).
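A minimal sketch of what the trimming could look like (names and cutoff are illustrative, not the actual patch):
```python
MAX_CPU_INFO_LINES = 10  # illustrative cutoff

def trim_cpu_info(cpu_info: str) -> str:
    lines = cpu_info.splitlines()
    if len(lines) > MAX_CPU_INFO_LINES:
        lines = lines[:MAX_CPU_INFO_LINES] + ["(CPU info truncated)"]
    # "add offset": indent the block so it reads as a sub-section of the report
    return "\n".join("  " + line for line in lines)
```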
cc: @jingxu10
### Versions
CI
| 2 |
3,260 | 96,435 |
Not implemented error for `aten.quantize_per_tensor.tensor_qparams`
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
The aforementioned error exists for quantized models in torchbench like `resnet50_quantized_qat` and `mobilenet_v2_quantized_qat`. My repro:
```
python benchmarks/dynamo/torchbench.py --only resnet50_quantized_qat -dcpu --performance
```
Stacktrace:
```
Traceback (most recent call last):
File "/home/yj/pytorch/torch/_subclasses/fake_tensor.py", line 1243, in dispatch
r = func(*args, **kwargs)
File "/home/yj/pytorch/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
File "/home/yj/pytorch/torch/_ops.py", line 377, in _get_dispatch
final_key = resolve_key(self, key)
File "/home/yj/pytorch/torch/_ops.py", line 106, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.quantize_per_tensor.tensor_qparams at dispatch key DispatchKey.Meta
```
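A minimal sketch that should hit the same missing Meta kernel, assuming tensor-valued scale/zero_point dispatch to the `.tensor_qparams` overload:
```python
import torch

x = torch.randn(4, device="meta")
scale = torch.tensor(0.1, device="meta")
zero_point = torch.tensor(0, device="meta")
# Tensor-valued qparams should route to aten.quantize_per_tensor.tensor_qparams,
# which has no Meta kernel registered:
torch.quantize_per_tensor(x, scale, zero_point, torch.quint8)
```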
### Versions
PyTorch version: 2.0.0a0+git64b8fae
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 510.108.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 9 5900X 12-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 4950.1948
CPU min MHz: 2200.0000
BogoMIPS: 7400.08
Virtualization: AMD-V
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 6 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.0
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.10.5
[pip3] ema-pytorch==0.1.4
[pip3] functorch==1.14.0a0+408bcf1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-triton==2.0.0+0d7e753227
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.1
[pip3] torch==2.0.0a0+git64b8fae
[pip3] torch-fidelity==0.3.0
[pip3] torch-struct==0.5
[pip3] torchaudio==2.0.0a0+9368f33
[pip3] torchdata==0.7.0a0+f083d52
[pip3] torchmetrics==0.11.0
[pip3] torchrec-nightly==2023.1.25
[pip3] torchtext==0.15.0a0+bb0efcd
[pip3] torchvision==0.15.0a0+85983a5
[pip3] torchx==0.4.0
[pip3] vector-quantize-pytorch==0.10.15
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] clip-anytorch 2.5.0 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.10.5 pypi_0 pypi
[conda] ema-pytorch 0.1.4 pypi_0 pypi
[conda] functorch 1.14.0a0+408bcf1 pypi_0 pypi
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-triton 2.0.0+0d7e753227 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.1 pypi_0 pypi
[conda] torch 2.0.0a0+git64b8fae dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.0.0a0+9368f33 pypi_0 pypi
[conda] torchdata 0.7.0a0+f083d52 pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchrec-nightly 2023.1.25 pypi_0 pypi
[conda] torchtext 0.15.0a0+bb0efcd pypi_0 pypi
[conda] torchvision 0.15.0a0+85983a5 pypi_0 pypi
[conda] torchx 0.4.0 pypi_0 pypi
[conda] vector-quantize-pytorch 0.10.15 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 4 |
3,261 | 96,428 |
Compressed sparse constructor allows mixed `int32/int64` indices which leads to dtype promotion/demotion in conversions.
|
module: sparse, triaged
|
### 🐛 Describe the bug
As per the title: we can create a tensor with `compressed_indices.dtype != plain_indices.dtype`. Not only can that potentially break some kernels, it also creates inconsistencies in `to_sparse(...)` conversions. For example:
```python
In [1]: import torch
In [2]: x = torch.rand(3, 3).to_sparse_csr()
In [3]: y = torch.sparse_csr_tensor(x.crow_indices().to(torch.int32), x.col_indices().to(torch.int64), x.values())
In [4]: z = y.to_sparse_csc()
In [5]: z.ccol_indices().dtype
Out[5]: torch.int32
In [6]: z.row_indices().dtype
Out[6]: torch.int32
In [7]: y = torch.sparse_csr_tensor(x.crow_indices().to(torch.int64), x.col_indices().to(torch.int32), x.values())
In [8]: z = y.to_sparse_csc()
In [9]: z.ccol_indices().dtype
Out[9]: torch.int64
In [10]: z.row_indices().dtype
Out[10]: torch.int64
```
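A sketch of the validation the constructors could perform (illustrative only, not the actual fix):
```python
def check_index_dtypes(compressed_indices, plain_indices):
    # reject mixed index dtypes up front instead of letting them propagate
    if compressed_indices.dtype != plain_indices.dtype:
        raise ValueError(
            f"compressed_indices dtype ({compressed_indices.dtype}) does not "
            f"match plain_indices dtype ({plain_indices.dtype})"
        )
```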
### Versions
Current master.
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer
| 2 |
3,262 | 96,420 |
Add location information when exception are thrown in `torch.jit.annotations.try_ann_to_type`
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
The functions `is_tuple`, `is_list`, `is_dict`, `is_optional`, `is_union`, `is_future` (all in `torch/_jit_internal`) might throw exceptions when called from `try_ann_to_type`. These exceptions are not very useful. For example, one says:
```
RuntimeError: Attempted to use Dict without contained types. Please add contained type, e.g. Dict[int, int]
```
This exception would be much more useful together with the location information. For example, the location says:
```
(Pdb) print(loc)
SourceRange at:
File "/home/schuetze/Documents/work/github/prediction_net/multimodal/models/heads/retina_head.py", line 187
def forward(self, fpn_features: t.Dict, inputs: t.Dict,
~~~~~~ <--- HERE
gts: t.Dict = None) -> t.Dict[str, t.Any]:
"""
```
Could we maybe add `location` as an optional parameter to the `is_X` functions? Would you be open to a PR in that case? A sketch of what this could look like follows below.
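A hypothetical sketch of the proposed change (the parameter name, message formatting, and return logic are illustrative only):
```python
def is_dict(ann, loc=None):
    # existing type-check logic would be unchanged; only the error path gains `loc`
    if not hasattr(ann, "__args__"):
        msg = ("Attempted to use Dict without contained types. "
               "Please add contained type, e.g. Dict[int, int]")
        if loc is not None:
            msg += f"\n{loc}"  # str(SourceRange) renders the file/line context shown above
        raise RuntimeError(msg)
    return getattr(ann, "__origin__", None) is dict
```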
### Alternatives
Include location information when throwing the exception.
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 2 |
3,263 | 96,412 |
Proxy Options for Pytorch Hub
|
triaged, module: hub
|
### 🚀 The feature, motivation and pitch
Pytorch Hub is frequently used to download models from GitHub to simplify dependency management for deep learning projects. For many people, me included, easy internet access to GitHub is not a given, and a proxy server may be useful or even necessary. Currently, Pytorch Hub can only be configured to use a proxy by setting the environment variable `HTTPS_PROXY`, and this method still does not work when using a SOCKS proxy server.
This issue is compounded by the fact that loading models using Pytorch Hub requires internet access even if the model is already cached. Even if GitHub is sporadically accessible, or if some workaround can be used to download the weights without a proxy server, the problem is not solved. A connection to GitHub must still be established every time the deep learning project is run, not just the first time when it needs to download the models. In my case, though I can jump through hoops to make a connection possible on my server, having to make the necessary changes every time my server runs a script that contains `torch.hub.load` is very problematic.
It would be helpful if Pytorch Hub could be configured to use a proxy server, even if it might not be realistic to make Pytorch Hub usable without internet access on every call to `torch.hub.load`.
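For reference, the environment-variable method mentioned above can also be set programmatically (the proxy URL is a placeholder; as noted, this still does not help with SOCKS proxies):
```python
import os

# Must be set before any hub call is made:
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"

import torch
model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True)
```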
### Alternatives
**1. Making the default network interface of the machine a tunnel**: Can negatively affect other applications and users of the machine. Complicated to set up, if split tunneling is needed to avoid some of the impact. Requires intervention from the system administrator.
**2. Using a router with proxy support**: Needs dedicated hardware. Network quality may be worse through the proxy, yet all applications on the system generally are forced to the same network connection. Still requires intervention from the system administrator.
**3. Prefixing all python calls with `HTTPS_PROXY=...`**: can be difficult when tools such as IDEs, Jupyter, bash scripts etc. automatically spawn the python processes. Does not work with SOCKS proxy.
### Additional context
I've prepared a pull request that adds an additional function to the Pytorch Hub.
cc @nairbv @NicolasHug @vmoens @jdsgomes
| 1 |
3,264 | 96,409 |
Initialization on `meta` device failing for models containing `nn.utils.weight_norm`, with `NotImplementedError: Could not run 'aten::_weight_norm_interface' with arguments from the 'Meta' backend.`
|
triaged, actionable, module: meta tensors
|
### 🐛 Describe the bug
As in the title, Hubert contains a `nn.utils.weight_norm`: https://github.com/huggingface/transformers/blob/3ec8171bedff6139a23dff192b9e8af33c1fca9a/src/transformers/models/hubert/modeling_hubert.py#L283 . Trying to load it on `meta` device fails.
Minimal reproduction:
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(10, 20)
self.lin = nn.utils.weight_norm(self.lin, name="weight", dim=1)
def forward(self, x):
return self.lin(x)
with torch.device("meta"):
model = Model()
```
raising:
```
Traceback (most recent call last):
File "/home/fxmarty/test_spda.py", line 32, in <module>
model = Model()
File "/home/fxmarty/test_spda.py", line 26, in __init__
self.lin = nn.utils.weight_norm(self.lin, name="weight", dim=1)
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/nn/utils/weight_norm.py", line 109, in weight_norm
WeightNorm.apply(module, name, dim)
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/nn/utils/weight_norm.py", line 50, in apply
setattr(module, name, fn.compute_weight(module))
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/nn/utils/weight_norm.py", line 25, in compute_weight
return _weight_norm(v, g, self.dim)
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/utils/_device.py", line 63, in __torch_function__
return func(*args, **kwargs)
NotImplementedError: Could not run 'aten::_weight_norm_interface' with arguments
from the 'Meta' backend. This could be because the operator doesn't exist for t
his backend, or was omitted during the selective/custom build process (if using
custom build). If you are a Facebook employee using PyTorch on mobile, please vi
sit https://fburl.com/ptmfixes for possible resolutions. 'aten::_weight_norm_int
erface' is only available for these backends: [CPU, CUDA, BackendSelect, Python,
FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroT
ensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, A
utogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, Auto
gradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2,
AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, Fu
ncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, Pyth
onTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:31085 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:44060 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:15861 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_0.cpp:16728 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
```
### Versions
PyTorch version: 2.1.0.dev20230302+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-1280P
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 4800,0000
CPU min MHz: 400,0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11,5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230302+cu117
[pip3] torch-model-archiver==0.6.1
[pip3] torch-workflow-archiver==0.2.5
[pip3] torchaudio==2.0.0.dev20230302+cu117
[pip3] torchinfo==1.7.0
[pip3] torchserve==0.6.1
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20230302+cu117
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 anaconda
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230302+cu117 pypi_0 pypi
[conda] torch-model-archiver 0.6.1 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.5 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230302+cu117 pypi_0 pypi
[conda] torchinfo 1.7.0 pypi_0 pypi
[conda] torchserve 0.6.1 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230302+cu117 pypi_0 pypi
cc @ezyang @eellison @bdhirsh @soumith
| 3 |
3,265 | 96,396 |
[dynamo] add hook to modify instructions before/after instructions be generated
|
feature, triaged, module: dynamo
|
### 🚀 The feature, motivation and pitch
Our team is working on dynamo and PyTorch code optimization. We found it easy to implement a dynamo backend, but hard to modify the instructions generated by dynamo.
Some use cases show that modifying instructions is necessary:
1. We want whole-graph information about compute and communication, but `InstructionTranslator` breaks the graph when it does not support some function such as `AutogradFunction`; as a dynamo backend, we can only see a partial graph (the so-called `compute graph`).
2. We want to modify just a few instructions after dynamo has generated them, e.g. to automatically add a `checkpoint` function call or wrap code in a context.
We can discuss further if this feature request is accepted, and I will make a PR to implement it.
For this feature request, I would modify the code in two places (a sketch of possible hook signatures follows below):
1. `_dynamo/convert_frame.py`, `_compile` closure, `transform` function: before `InstructionTranslator(instructions)`
2. `_dynamo/convert_frame.py`, `_compile` closure, `transform` function: before `instructions[:] = output.output_instructions`
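A hypothetical sketch of the hook signatures (all names are illustrative, not an existing API):
```python
def pre_translate_hook(instructions, code_options):
    """Would run before InstructionTranslator(instructions): inspect or rewrite
    the raw bytecode of the frame being compiled."""
    return instructions

def post_generate_hook(instructions, code_options):
    """Would run before instructions[:] = output.output_instructions: tweak the
    generated bytecode, e.g. inject a checkpoint call or wrap code in a context."""
    return instructions
```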
There are also some issues to consider:
1. `_dynamo/hooks.py` `Hooks` only has two hooks, used at `_dynamo.optimize` like `def optimize(.., guard_export_fn=None, guard_fail_fn=None)`. Should it change to `optimize(.., hooks=None)` to allow more hooks? That would not be compatible with old code/unittests/docs; I will update them all.
@ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
### Alternatives
_No response_
### Additional context
_No response_
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 5 |
3,266 | 96,391 |
Mnist model training with "reduce-overhead" mode is flaky
|
module: cudnn, triaged, oncall: pt2
|
### 🐛 Describe the bug
I was trying out an end-to-end training/test example with torch.compile
https://github.com/agunapal/examples/blob/pt2.0_example/pt2.0/mnist/main.py#L140:L146
When I don't specify a mode, the code runs correctly.
When I specify the mode "reduce-overhead", I see that it's flaky: it runs correctly sometimes, but errors most of the time.
### Error logs
[success.txt](https://github.com/pytorch/pytorch/files/10927346/success.txt)
[failure.txt](https://github.com/pytorch/pytorch/files/10927348/failure.txt)
### Minified repro
```
import os
from math import inf
import torch
from torch import tensor, device
import torch.fx as fx
import functools
import torch._dynamo
from torch._dynamo.debug_utils import run_fwd_maybe_bwd
from torch._dynamo.backends.registry import lookup_backend
from torch._dynamo.testing import rand_strided
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._dynamo.config.load_config(b'\x80\x02}q\x00(X\x0b\x00\x00\x00output_codeq\x01\x89X\r\x00\x00\x00log_file_nameq\x02NX\x07\x00\x00\x00verboseq\x03\x89X\x11\x00\x00\x00output_graph_codeq\x04\x89X\x12\x00\x00\x00verify_correctnessq\x05\x89X\x12\x00\x00\x00minimum_call_countq\x06K\x01X\x15\x00\x00\x00dead_code_eliminationq\x07\x88X\x10\x00\x00\x00cache_size_limitq\x08K@X\x14\x00\x00\x00specialize_int_floatq\t\x88X\x0e\x00\x00\x00dynamic_shapesq\n\x89X\x10\x00\x00\x00guard_nn_modulesq\x0b\x89X\x1b\x00\x00\x00traceable_tensor_subclassesq\x0cc__builtin__\nset\nq\r]q\x0e\x85q\x0fRq\x10X\x0f\x00\x00\x00suppress_errorsq\x11\x89X\x15\x00\x00\x00replay_record_enabledq\x12\x89X \x00\x00\x00rewrite_assert_with_torch_assertq\x13\x88X\x12\x00\x00\x00print_graph_breaksq\x14\x89X\x07\x00\x00\x00disableq\x15\x89X*\x00\x00\x00allowed_functions_module_string_ignorelistq\x16h\r]q\x17(X\x13\x00\x00\x00torch.distributionsq\x18X\x0b\x00\x00\x00torch._refsq\x19X\r\x00\x00\x00torch.testingq\x1aX\r\x00\x00\x00torch._decompq\x1bX\x0c\x00\x00\x00torch._primsq\x1ce\x85q\x1dRq\x1eX\x12\x00\x00\x00repro_forward_onlyq\x1f\x89X\x0f\x00\x00\x00repro_toleranceq G?PbM\xd2\xf1\xa9\xfcX\x16\x00\x00\x00capture_scalar_outputsq!\x89X\x19\x00\x00\x00enforce_cond_guards_matchq"\x88X\x0c\x00\x00\x00optimize_ddpq#\x88X\x1a\x00\x00\x00raise_on_ctx_manager_usageq$\x88X\x1c\x00\x00\x00raise_on_unsafe_aot_autogradq%\x89X\x17\x00\x00\x00raise_on_backend_changeq&\x89X\x18\x00\x00\x00error_on_nested_fx_traceq\'\x88X\t\x00\x00\x00allow_rnnq(\x89X\x08\x00\x00\x00base_dirq)XG\x00\x00\x00/home/ubuntu/anaconda3/envs/test_2.0_py310/lib/python3.10/site-packagesq*X\x0e\x00\x00\x00debug_dir_rootq+X:\x00\x00\x00/home/ubuntu/fork/examples/pt2.0/mnist/torch_compile_debugq,X)\x00\x00\x00DO_NOT_USE_legacy_non_fake_example_inputsq-\x89X\x13\x00\x00\x00_save_config_ignoreq.h\r]q/(X!\x00\x00\x00skipfiles_inline_module_allowlistq0X\x12\x00\x00\x00constant_functionsq1X\x0b\x00\x00\x00repro_afterq2X\x0b\x00\x00\x00repro_levelq3e\x85q4Rq5u.')
torch._inductor.config.load_config(b'\x80\x02}q\x00(X\x05\x00\x00\x00debugq\x01\x89X\x10\x00\x00\x00disable_progressq\x02\x88X\x10\x00\x00\x00verbose_progressq\x03\x89X\x0b\x00\x00\x00cpp_wrapperq\x04\x89X\x03\x00\x00\x00dceq\x05\x89X\x14\x00\x00\x00static_weight_shapesq\x06\x88X\x0c\x00\x00\x00size_assertsq\x07\x88X\x10\x00\x00\x00pick_loop_ordersq\x08\x88X\x0f\x00\x00\x00inplace_buffersq\t\x88X\x11\x00\x00\x00benchmark_harnessq\n\x88X\x0f\x00\x00\x00epilogue_fusionq\x0b\x89X\x15\x00\x00\x00epilogue_fusion_firstq\x0c\x89X\x0f\x00\x00\x00pattern_matcherq\r\x88X\n\x00\x00\x00reorderingq\x0e\x89X\x0c\x00\x00\x00max_autotuneq\x0f\x89X\x17\x00\x00\x00realize_reads_thresholdq\x10K\x04X\x17\x00\x00\x00realize_bytes_thresholdq\x11M\xd0\x07X\x1b\x00\x00\x00realize_acc_reads_thresholdq\x12K\x08X\x0f\x00\x00\x00fallback_randomq\x13\x89X\x12\x00\x00\x00implicit_fallbacksq\x14\x88X\x0b\x00\x00\x00tune_layoutq\x15\x89X\x11\x00\x00\x00aggressive_fusionq\x16\x89X\x0f\x00\x00\x00max_fusion_sizeq\x17K@X\x1b\x00\x00\x00unroll_reductions_thresholdq\x18K\x08X\x0e\x00\x00\x00comment_originq\x19\x89X\x12\x00\x00\x00developer_warningsq\x1a\x88X\x0f\x00\x00\x00compile_threadsq\x1bK\x08X\x13\x00\x00\x00kernel_name_max_opsq\x1cK\nX\r\x00\x00\x00shape_paddingq\x1d\x89X\x0e\x00\x00\x00permute_fusionq\x1e\x89X\x1a\x00\x00\x00profiler_mark_wrapper_callq\x1f\x89X\x18\x00\x00\x00_raise_error_for_testingq \x89X\x0b\x00\x00\x00cpp.threadsq!J\xff\xff\xff\xffX\x13\x00\x00\x00cpp.dynamic_threadsq"\x89X\x0b\x00\x00\x00cpp.simdlenq#NX\x12\x00\x00\x00cpp.min_chunk_sizeq$M\x00\x10X\x07\x00\x00\x00cpp.cxxq%NX\x03\x00\x00\x00g++q&\x86q\'X\x19\x00\x00\x00cpp.enable_kernel_profileq(\x89X\x12\x00\x00\x00cpp.weight_prepackq)\x88X\x11\x00\x00\x00triton.cudagraphsq*\x89X\x17\x00\x00\x00triton.debug_sync_graphq+\x89X\x18\x00\x00\x00triton.debug_sync_kernelq,\x89X\x15\x00\x00\x00triton.dense_indexingq-\x89X\x10\x00\x00\x00triton.max_tilesq.K\x02X\x19\x00\x00\x00triton.autotune_pointwiseq/\x88X\'\x00\x00\x00triton.tiling_prevents_pointwise_fusionq0\x88X\'\x00\x00\x00triton.tiling_prevents_reduction_fusionq1\x88X\x1b\x00\x00\x00triton.ordered_kernel_namesq2\x89X\x1f\x00\x00\x00triton.descriptive_kernel_namesq3\x89X\x1c\x00\x00\x00triton.persistent_reductionsq4\x89X\r\x00\x00\x00trace.enabledq5\x89X\x0f\x00\x00\x00trace.debug_logq6\x88X\x0e\x00\x00\x00trace.info_logq7\x89X\x0e\x00\x00\x00trace.fx_graphq8\x88X\x1a\x00\x00\x00trace.fx_graph_transformedq9\x88X\x13\x00\x00\x00trace.ir_pre_fusionq:\x88X\x14\x00\x00\x00trace.ir_post_fusionq;\x88X\x11\x00\x00\x00trace.output_codeq<\x88X\x13\x00\x00\x00trace.graph_diagramq=\x89X\x15\x00\x00\x00trace.compile_profileq>\x89X\x10\x00\x00\x00trace.upload_tarq?Nu.')
torch._functorch.config.load_config(b'\x80\x02}q\x00(X\x11\x00\x00\x00use_functionalizeq\x01\x88X\x0f\x00\x00\x00use_fake_tensorq\x02\x88X\x16\x00\x00\x00fake_tensor_allow_metaq\x03\x88X\x0c\x00\x00\x00debug_assertq\x04\x88X\x14\x00\x00\x00debug_fake_cross_refq\x05\x89X\x11\x00\x00\x00debug_partitionerq\x06\x89X\x0c\x00\x00\x00debug_graphsq\x07\x89X\x0b\x00\x00\x00debug_jointq\x08\x89X\x12\x00\x00\x00use_dynamic_shapesq\t\x89X\x14\x00\x00\x00static_weight_shapesq\n\x88X\x03\x00\x00\x00cseq\x0b\x88X\x10\x00\x00\x00max_dist_from_bwq\x0cK\x03X\t\x00\x00\x00log_levelq\rK\x14u.')
# REPLACEABLE COMMENT FOR TESTING PURPOSES
args = [((64, 1, 28, 28), (784, 784, 28, 1), torch.float32, 'cuda', False)]
args = [rand_strided(sh, st, dt, dev).requires_grad_(rg) for (sh, st, dt, dev, rg) in args]
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
self.self_conv1 = Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1)).cuda()
self.self_conv2 = Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)).cuda()
self.self_dropout1 = Dropout(p=0.25, inplace=False)
self.self_fc1 = Linear(in_features=9216, out_features=128, bias=True).cuda()
self.self_dropout2 = Dropout(p=0.5, inplace=False)
self.self_fc2 = Linear(in_features=128, out_features=10, bias=True).cuda()
def forward(self, x : torch.Tensor):
self_conv1 = self.self_conv1(x); x = None
relu = torch.nn.functional.relu(self_conv1); self_conv1 = None
self_conv2 = self.self_conv2(relu); relu = None
relu_1 = torch.nn.functional.relu(self_conv2); self_conv2 = None
max_pool2d = torch.nn.functional.max_pool2d(relu_1, 2); relu_1 = None
self_dropout1 = self.self_dropout1(max_pool2d); max_pool2d = None
flatten = torch.flatten(self_dropout1, 1); self_dropout1 = None
self_fc1 = self.self_fc1(flatten); flatten = None
relu_2 = torch.nn.functional.relu(self_fc1); self_fc1 = None
self_dropout2 = self.self_dropout2(relu_2); relu_2 = None
self_fc2 = self.self_fc2(self_dropout2); self_dropout2 = None
log_softmax = torch.nn.functional.log_softmax(self_fc2, dim = 1); self_fc2 = None
return (log_softmax,)
mod = Repro()
# Setup debug minifier compiler
torch._dynamo.debug_utils.MINIFIER_SPAWNED = True
compiler_fn = lookup_backend("dynamo_minifier_backend")
dynamo_minifier_backend = functools.partial(
compiler_fn,
compiler_name="inductor",
)
opt_mod = torch._dynamo.optimize(dynamo_minifier_backend)(mod)
with torch.cuda.amp.autocast(enabled=False):
opt_mod(*args)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1028-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2799.516
BogoMIPS: 5599.03
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.0+cu118
[pip3] torchvision==0.15.0+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchaudio 2.0.0+cu118 pypi_0 pypi
[conda] torchvision 0.15.0+cu118 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233 @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
```
| 7 |
3,267 | 96,386 |
[export] "strict subset of traced input/output" error when huggingface `ModelOutput` is returned
|
triaged, module: dynamo, module: export
|
### 🐛 Describe the bug
Example script below.
The error stems from discrepancy between outputs that were eagerly traced
https://github.com/pytorch/pytorch/blob/fe05266fda4f908130dea7cbac37e9264c0429a2/torch/_dynamo/eval_frame.py#L706
and outputs that were computed by captured graph
https://github.com/pytorch/pytorch/blob/fe05266fda4f908130dea7cbac37e9264c0429a2/torch/_dynamo/eval_frame.py#L688
The former returns a `ModelOutput` while the latter returns a tuple. Since `pytree.tree_flatten` is unable to open up `ModelOutput`, the following `produce_matching` triggers the error.
https://github.com/pytorch/pytorch/blob/fe05266fda4f908130dea7cbac37e9264c0429a2/torch/_dynamo/eval_frame.py#L716-L720
```python
import traceback
import torch
import dataclasses
from torch import _dynamo as dynamo
from typing import Optional
from transformers.modeling_outputs import ModelOutput
@dataclasses.dataclass
class UserDefinedModelOutput(ModelOutput):
x: Optional[torch.Tensor] = None
y: Optional[torch.Tensor] = None
def fn(x, y):
return UserDefinedModelOutput(x + y, x - y)
x = torch.randn(3, 4)
y = torch.randn(3, 4)
# AssertionError: Dynamo input and output is a strict subset of traced input/output
try:
gm, guards = dynamo.export(fn, x, y)
except AssertionError as e:
traceback.print_exc()
out = dynamo.optimize("eager", nopython=True)(fn)(x, y)
# UserDefinedModelOutput(x=tensor(...), y=tensor(...))
print(out)
```
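One possible direction (a sketch using the private pytree API, untested): register `ModelOutput` subclasses with pytree so `produce_matching` can flatten both sides consistently:
```python
import torch.utils._pytree as pytree

pytree._register_pytree_node(
    UserDefinedModelOutput,
    lambda out: (list(out.values()), list(out.keys())),                      # flatten
    lambda values, keys: UserDefinedModelOutput(**dict(zip(keys, values))),  # unflatten
)
```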
### Versions
master
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 0 |
3,268 | 96,379 |
`dynamo.export` "input not consistent with traced input" error when input default value type is `torch.Tensor`.
|
triaged, onnx-needs-info, module: dynamo, module: export
|
### 🐛 Describe the bug
Example script below. It looks like `b` is always traced as an input, so the assertion error happens when `b` is not provided.
A similar scenario happens if the default is a container of `torch.Tensor`.
I know it is bad practice to set mutable defaults, and such code should be avoided (but there may be models written this way). I wonder how `dynamo.export` should respond in these cases?
```python
import torch
import traceback
from torch import _dynamo as dynamo
def func(x, b=torch.tensor(1.0)):
return x + b
x = torch.randn(3, 4)
b = torch.randn(3, 4)
# Trace without 'b'
# AssertionError: Dynamo input/output is not consistent with traced input/output
try:
gm, guards = dynamo.export(func, x)
gm.print_readable()
except AssertionError as e:
traceback.print_exc()
# Trace with 'b'
# Succeed.
gm, guards = dynamo.export(func, x, b=b)
gm.print_readable()
```
### Versions
master
cc @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 0 |
3,269 | 96,372 |
[BE] Avoid .data usage in FSDP buffer casting
|
oncall: distributed, better-engineering, module: fsdp
|
### 🐛 Describe the bug
FSDP casts buffers for mixed precision, but assigns to them via `buf.data`; we should avoid the `.data` usage by de-registering the old buffer and re-registering the low-precision buffer, as sketched below.
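A minimal sketch of the suggested fix (assuming the owning module and buffer name are known at the cast site):
```python
import torch

def swap_buffer(module: torch.nn.Module, name: str, low_prec: torch.Tensor) -> None:
    del module._buffers[name]                # de-register the old buffer
    module.register_buffer(name, low_prec)  # re-register the low-precision one
```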
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
3,270 | 96,362 |
[minifier] hf_Longformer fp32 accuracy pass error cannot be minified
|
triaged, oncall: pt2, module: minifier
|
### 🐛 Describe the bug
**TL;DR**: hf_Longformer has an inductor issue from aten._local_scalar_dense.default. It cannot be minified.
**Inductor failure**: full log: https://gist.github.com/davidberard98/6a3875a0f71641349beea7bde64560ce. Excerpt below:
```
...
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/symbolic_convert.py", line 379, in wrapper
self.output.compile_subgraph(self, reason=reason)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/output_graph.py", line 579, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/output_graph.py", line 626, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/_dynamo/output_graph.py", line 713, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: inductor raised LoweringException: AssertionError: Found <class 'torch._inductor.ir.DynamicScalar'>, which is not a supported top level IR node. See [Note: Inductor IR]
target: aten._local_scalar_dense.default
args[0]: TensorBox(StorageBox(
Pointwise(
'cpu',
torch.int64,
tmp0 = constant(1024, torch.int64)
tmp1 = constant(512, torch.int64)
tmp2 = truncdiv(tmp0, tmp1)
return tmp2
,
ranges=(),
origins={div}
)
))
While executing %_local_scalar_dense : [#users=0] = call_function[target=torch.ops.aten._local_scalar_dense.default](args = (%div,), kwargs = {})
Original traceback:
File "/data/home/dberard/miniconda/envs/bisectdynamo/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 839, in <graph break in _sliding_chunks_query_key_matmul>
query = self._chunk(query, window_overlap, self.config.__dict__.get("onnx_export", False))
File "/data/home/dberard/miniconda/envs/bisectdynamo/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 769, in _chunk
hidden_states = hidden_states.view(
```
**Repro instructions**:
```
python benchmarks/dynamo/torchbench.py --accuracy --float32 -dcuda --output=bisect.csv --training --no-skip --dashboard --cold_start_latency --inductor --only hf_Longformer
```
This needs to be done on a commit **after** https://github.com/pytorch/pytorch/pull/95902 and **before** https://github.com/pytorch/pytorch/pull/96221.
**Issue with TORCHDYNAMO_REPRO_AFTER=aot**:
* `TORCHDYNAMO_REPRO_AFTER=aot [repro cmd]` runs fine.
* python minifier_launcher.py fails due to trying to call local_scalar_dense:
```
Traceback (most recent call last):
File "torch_compile_debug/run_2023_03_08_23_32_30_111734-pid_3083040/minifier/minifier_launcher.py", line 55, in <module>
mod = make_fx(Repro(), tracing_mode='real')(*args)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 714, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 443, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/_symbolic_trace.py", line 778, in trace
(self.create_arg(fn(*args)),),
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/_symbolic_trace.py", line 756, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 409, in call_module
return forward(*args, **kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/_symbolic_trace.py", line 749, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/nn/modules/module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "torch_compile_debug/run_2023_03_08_23_32_30_111734-pid_3083040/minifier/minifier_launcher.py", line 48, in forward
_local_scalar_dense = torch.ops.aten._local_scalar_dense.default(div); div = None
File "/scratch/dberard/bisectdynamo/pytorch/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
File "/scratch/dberard/bisectdynamo/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 487, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 512, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/scratch/dberard/bisectdynamo/pytorch/torch/fx/experimental/proxy_tensor.py", line 282, in proxy_call
raise RuntimeError(
RuntimeError: It appears that you're trying to get value out of a tracing tensor with aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent control flow or similar. It may be possible to trace this with dynamic shapes; try setting tracing_mode='symbolic' in your make_fx call.
```
**Issue with TORCHDYNAMO_REPRO_AFTER=dynamo**:
* `TORCHDYNAMO_REPRO_AFTER=dynamo [repro cmd]` **fails with the wrong error**. This fails with `... Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.copy_.default(...`. Full error: https://gist.github.com/davidberard98/6a3875a0f71641349beea7bde64560ce
**Dashboard context**: This comes from `hf_Longformer` in the inductor accuracy benchmarks.
* **Before** https://github.com/pytorch/pytorch/pull/95902: the accuracy check is skipped due to eager_variation.
* **After** PR:
+ The PR updates determinism checks, which makes eager_variation stop.
+ This then reveals an inductor issue. The inductor issue is not introduced by the PR; rather, the PR fixes the baseline, which exposes the inductor issue.
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+git1359d16
Is debug build: True
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] torch==2.1.0a0+git1359d16
[pip3] torchvision==0.15.0a0+beb4bb7
[conda] numpy 1.23.1 pypi_0 pypi
[conda] torch 2.1.0a0+git1359d16 dev_0 <develop>
[conda] torchvision 0.15.0a0+beb4bb7 dev_0 <develop>
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,271 | 96,319 |
views created in __torch_dispatch__ share storage but not version_counter
|
module: autograd, module: molly-guard, triaged, needs design, module: __torch_dispatch__
|
Usually the ADInplaceOrView kernel is responsible for handling this, but since `__torch_dispatch__` runs below autograd, views created inside it never pass through that kernel and the version counter information is not correctly propagated.
```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

saved_b = None
class SaveATensorMode(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
global saved_b
kwargs = {} if kwargs is None else kwargs
out = func(*args, **kwargs)
if func == torch.ops.aten.sin.default:
saved_b = out.view_as(out)
return out
a = torch.tensor(1.)
with SaveATensorMode():
b = torch.sin(a)
assert b.data_ptr() == saved_b.data_ptr()
old_b_version = b._version
old_saved_b_version = saved_b._version
b.mul_(2)
print(b._version > old_b_version) # True
print(saved_b._version > old_saved_b_version) # False
old_b_version = b._version
old_saved_b_version = saved_b._version
saved_b.mul_(2)
print(b._version > old_b_version) # False
print(saved_b._version > old_saved_b_version) # True
```
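For contrast, a view created through the normal dispatch path does share the version counter; a minimal check:
```python
import torch

a = torch.tensor(1.)
b = a.view_as(a)         # goes through the ADInplaceOrView kernel
old = b._version
a.mul_(2)
assert b._version > old  # the view observes the base tensor's bump
```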
Previously this was probably intentional for inference mode, but it may be useful to support version counter propagation for `__torch_dispatch__` use cases to prevent common silent correctness issues.
If this were a Tensor subclass, we could probably use `enable_reentrant_dispatch`, but that may not work for modes (can we fix this?).
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7 @Chillee @samdow
| 11 |
3,272 | 96,318 |
Support managed memory backed dlpack with torch.from_dlpack
|
feature, triaged, module: dlpack
|
### 🚀 The feature, motivation and pitch
PyTorch should support consuming DLPack capsules backed by CUDA managed memory.
### Set Pools
```python3
import rmm
from rmm.allocators.torch import rmm_torch_allocator
from rmm.allocators.cupy import rmm_cupy_allocator
import torch
import cupy as cp
rmm.reinitialize(managed_memory= True)
cp.cuda.set_allocator(rmm_cupy_allocator)
torch.cuda.memory.change_current_allocator(rmm_torch_allocator)
```
```python3
dlpack_cap = cp.ones(shape=100).toDlpack()
t2 = torch.from_dlpack(dlpack_cap)
```
```python3
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[2], line 2
1 dlpack_cap = cp.ones(shape=100).toDlpack()
----> 2 t2 = torch.from_dlpack(dlpack_cap)
File /datasets/vjawa/miniconda3/envs/cugraph_dev_feb_27/lib/python3.10/site-packages/torch/utils/dlpack.py:120, in from_dlpack(ext_tensor)
117 else:
118 # Old versions just call the converter
119 dlpack = ext_tensor
--> 120 return _from_dlpack(dlpack)
RuntimeError: Unsupported device_type: 13
```
### Alternatives
A user can work around it for now via the CUDA Array Interface (for reference, DLPack `device_type: 13` is `kDLCUDAManaged`, which `torch.from_dlpack` does not yet handle).
```python3
ar = cp.ones(shape=100)
t2 = torch.as_tensor(ar, device='cuda')
```
### Additional context
CC: @leofang , @jakirkham
| 1 |
3,273 | 96,316 |
`FractionalMaxPool3d` INTERNAL ASSERT FAILED when computing `jacrev`
|
module: nn, triaged, actionable, module: edge cases
|
### 🐛 Describe the bug
`FractionalMaxPool3d` INTERNAL ASSERT FAILED when computing `jacrev`
```py
import torch
from torch.func import jacrev
torch.manual_seed(420)
input = torch.randn(1, 1, 5, 5, 5)
def func(input):
model = torch.nn.FractionalMaxPool3d(kernel_size=0, output_size=(1, 1, 1))
output = model(input)
return output
print(func(input))
# tensor([[[[[-inf]]]]])
jacrev(func)(input)
# RuntimeError: index >= 0 && index < inputT * inputH * inputW
# INTERNAL ASSERT FAILED at
# "/opt/conda/conda-bld/pytorch_1672906354936/work/aten/src/ATen/native/FractionalMaxPool3d.cpp":285,
# please report a bug to PyTorch.
```
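For contrast, the same `jacrev` call with a legal kernel size should avoid the assert, which suggests the failure is specific to the degenerate `kernel_size=0` forward that produces `-inf` (a sketch under that assumption, not a verified bisection):
```py
import torch
from torch.func import jacrev

input = torch.randn(1, 1, 5, 5, 5)

def func_ok(x):
    # same setup as above, but with a valid kernel size
    return torch.nn.FractionalMaxPool3d(kernel_size=2, output_size=(1, 1, 1))(x)

jacrev(func_ok)(input)  # expected to succeed
```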
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 4 |
3,274 | 96,305 |
Reuse autograd.grad graph for rapid, repeated gradient calculation
|
feature, module: autograd, triaged, module: cuda graphs
|
### 🚀 The feature, motivation and pitch
In Scientific Machine Learning (SciML), there is often a need to run a function forward and obtain its gradient with respect to the inputs. This is not for tuning nn parameters but for solving large systems of equations. The task repeats at every time step with fixed input shapes and is often the dominant cost. The workflow can be written concisely as:
```
for i in range(nt):
torch.set_grad_enabled(True)
gg = G(x, p, aux)
#x, p are tensors. They may have very many elements, but fixed shapes over time steps.
dGdx = torch.autograd.grad(gg,x,grad_outputs=v) ## the jacobian dGdx is often sparse
torch.set_grad_enabled(False)
#use dGdx in some calculations, without need for Automatic Differentiation
```
Through some optimization with v, we already made torch.autograd.grad parallel over two dimensions, but it still takes the majority of the time. I suspect this is due to repeatedly invoking the autograd engine, even though the graph is identical every time! If we could capture the graph for the two main lines (`gg = G(...)` and `autograd.grad`), only change the data (x, p), and reuse the graph every time step, it could go much faster:
```
# somehow initialize the graph capturedGraph for both G and autograd.grad
for i in range(nt):
dGdx = capturedGraph(x,p)
# use dGdx in some calculations
```
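One existing route in this direction is CUDA graph capture. A minimal sketch, assuming CUDA tensors, no data-dependent control flow or CPU syncs inside `G`, and with `next_x` standing in for each step's fresh input (all assumptions on my part); `torch.cuda.make_graphed_callables` wraps the same idea at a higher level:
```python
import torch

static_x = x.detach().clone().requires_grad_(True)

# a few warmup iterations on a side stream are required before capture
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        gg = G(static_x, p, aux)
        (dGdx,) = torch.autograd.grad(gg, static_x, grad_outputs=v)
torch.cuda.current_stream().wait_stream(s)

# capture forward + grad once, then replay with new data each step
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    gg = G(static_x, p, aux)
    (dGdx,) = torch.autograd.grad(gg, static_x, grad_outputs=v)

for i in range(nt):
    static_x.copy_(next_x)  # refill the captured input buffer in place
    g.replay()              # dGdx is recomputed into the same tensor
```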
Motivation:
This is a core task for almost all engineering domains. It's a super big deal!! For example, read here.
https://www.mathworks.com/help/optim/ug/nonlinear-equations-with-analytic-jacobian.html
### Alternatives
I've been reading torch.fx to no avail.
### Additional context
_No response_
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mcarilli
| 3 |
3,275 | 96,296 |
Inductor guards are not propagated to Dynamo with dynamic shapes
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
```
@dataclasses.dataclass
class PositiveGuard:
"""
An expression we should check for > 0
Guards are currently not checked. Plan to add this later.
"""
expr: Expr
```
This means we may incorrectly reuse inductor kernels for sizes they are not valid for.
### Versions
master
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,276 | 96,292 |
Better error message when trying to run fp16 weights on CPU
|
good first issue, module: error checking, triaged
|
### 🚀 The feature, motivation and pitch
Hey :wave: from the Hugging Face Open-Source team,
We're seeing the following issue over and over again across libraries
```
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```
or:
```
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
```
E.g.: https://github.com/runwayml/stable-diffusion/issues/23
The problem here is that a PyTorch model has been converted to fp16 and the user tried to run it on CPU, e.g. the following:
```py
from torch import nn
import torch
linear = nn.Linear(2,2, dtype=torch.float16)
tensor = torch.ones((2,), dtype=torch.float16)
linear(tensor)
```
yields:
```
"addmm_impl_cpu_" not implemented for 'Half'
```
Could we maybe catch such errors in the forward of https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
and return a simpler error message that just says "Float16 cannot be run on CPU"?
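Until the core message improves, here is a user-side sketch of that check via a forward pre-hook (`warn_half_on_cpu` is a hypothetical helper of mine, not an existing API):
```py
import torch
from torch import nn

def warn_half_on_cpu(module, inputs):
    # raise a clearer error before dispatch fails deep inside a kernel
    for t in inputs:
        if isinstance(t, torch.Tensor) and t.dtype == torch.float16 and t.device.type == "cpu":
            raise RuntimeError(
                f"{type(module).__name__}: float16 cannot be run on CPU; "
                "call .float() on the model and inputs or move them to a GPU"
            )

linear = nn.Linear(2, 2, dtype=torch.float16)
linear.register_forward_pre_hook(warn_half_on_cpu)
tensor = torch.ones((2,), dtype=torch.float16)
try:
    linear(tensor)
except RuntimeError as e:
    print(e)  # the friendlier message above
```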
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet
| 6 |
3,277 | 96,287 |
Pytorch 2.0 Segmentation error / IMA on model compile on GPT2
|
high priority, needs reproduction, triaged, oncall: pt2
|
### 🐛 Describe the bug
Driver and CUDA details (RTX 3060 laptop GPU)
```
nvidia-smi
Wed Mar 8 13:10:57 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.05 Driver Version: 510.73.05 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| N/A 52C P8 11W / N/A | 56MiB / 6144MiB | 22% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 5072 G /usr/lib/xorg/Xorg 55MiB |
+-----------------------------------------------------------------------------+
alex@pop-os:~$ export PATH=$PATH:/usr/local/cuda/bin
alex@pop-os:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Fri_Dec_17_18:16:03_PST_2021
Cuda compilation tools, release 11.6, V11.6.55
Build cuda_11.6.r11.6/compiler.30794723_0
```
Torch installed via
```
pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu116
```
Installation was okay, though pip reported some dependency conflicts
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
dvc 2.34.1 requires tqdm<5,>=4.63.1, but you have tqdm 4.62.3 which is incompatible.
dvc-objects 0.12.2 requires tqdm<5,>=4.63.1, but you have tqdm 4.62.3 which is incompatible.
Successfully installed certifi-2022.12.7 charset-normalizer-2.1.1 cmake-3.25.0 filelock-3.9.0 idna-3.4 mpmath-1.2.1 networkx-3.0rc1 numpy-1.24.1 pillow-9.3.0 pytorch-triton-2.0.0+0d7e753227 requests-2.28.1 sympy-1.11.1 torch-2.0.0.dev20230202+cu116 torchaudio-2.0.0.dev20230201+cu116 torchvision-0.15.0.dev20230201+cu116 typing-extensions-4.4.0 urllib3-1.26.13
```
Code snippet used
```python
import re
import torch
# model, device, and log are defined elsewhere in the training script
if int(re.search(r'\d+', torch.__version__).group()) >= 2:
# for pytorch 2.0
model =torch.compile(model)
log.info(f"Compiled the model for speed up")
model.to(device)
```
Segmentation fault (no core files were generated on path)
```
2023-03-08 13:14:33,508 [INFO] Compiled the model for speed up
2023-03-08 13:14:34,487 [INFO] Epoch 1 of 50
[2023-03-08 13:14:34,491] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward
[2023-03-08 13:14:36,103] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-03-08 13:14:36,145] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-03-08 13:14:40,579] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 0
[2023-03-08 13:14:40,630] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager
[2023-03-08 13:14:46,275] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2023-03-08 13:14:46,276] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
Segmentation fault (core dumped)
```
### Versions
$ python3 collect_env.py
Collecting environment information...
PyTorch version: 2.0.0.dev20230202+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Pop!_OS 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 510.73.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800H with Radeon Graphics
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4462.5000
CPU min MHz: 1200.0000
BogoMIPS: 6388.32
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+0d7e753227
[pip3] torch==2.0.0.dev20230202+cu116
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==2.0.0.dev20230201+cu116
[pip3] torchvision==0.15.0.dev20230201+cu116
cc @ezyang @gchanan @zou3519 @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith
| 9 |
3,278 | 96,277 |
A Segment Fault can be triggered in torch.adaptive_max_pool1d with an edge case
|
module: crash, triaged, module: edge cases
|
### 🐛 Describe the bug
This bug is similar to issues I reported recently: a large value together with a zero in the shape of `input` can trigger a segmentation fault. This one is in `torch.adaptive_max_pool1d`:
````python
import torch
input = torch.rand([0, 9, 1402528952189899978], dtype=torch.float32)
output_size = [1]
res = torch.adaptive_max_pool1d(
input=input,
output_size=output_size,
)
````
The output is:
````
Segmentation fault (core dumped)
````
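A plausible mechanism (an assumption on my part, not verified against the kernel source) is that the extent of the non-zero dimensions overflows a signed 64-bit integer even though the tensor itself is empty:
````python
import torch

t = torch.rand([0, 9, 1402528952189899978], dtype=torch.float32)
print(t.numel())                             # 0: the zero batch dim makes it empty
print(9 * 1402528952189899978 > 2**63 - 1)   # True: channels * length overflows int64
````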
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230307+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 11.0.0-2~ubuntu20.04.1
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-PCIE-16GB
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 24 MiB
L3 cache: 33 MiB
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230307+cu117
[pip3] torchaudio==2.0.0.dev20230307+cu117
[pip3] torchvision==0.15.0.dev20230307+cu117
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230307+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230307+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230307+cu117 pypi_0 pypi
| 0 |
3,279 | 96,276 |
A Segment Fault can be triggered in torch.geqrf with an edge case
|
module: crash, triaged, module: edge cases
|
### 🐛 Describe the bug
A Segment Fault can be triggered by the following code which calls torch.geqrf :
````python
import torch
input = torch.rand([1, 0, 8602409350401326287, 14, 16, 10], dtype=torch.float32)
res = torch.geqrf(
input=input,
)
````
The output is:
````
Segmentation fault (core dumped)
````
The large value and zero in input's shape might be the cause of this bug, which is similar to https://github.com/pytorch/pytorch/issues/96275.
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230307+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 11.0.0-2~ubuntu20.04.1
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-PCIE-16GB
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 24 MiB
L3 cache: 33 MiB
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230307+cu117
[pip3] torchaudio==2.0.0.dev20230307+cu117
[pip3] torchvision==0.15.0.dev20230307+cu117
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230307+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230307+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230307+cu117 pypi_0 pypi
| 0 |
3,280 | 96,275 |
A Segment Fault can be triggered in torch.pinverse
|
module: crash, triaged, module: edge cases
|
### 🐛 Describe the bug
The following code can trigger a segmentation fault in `torch.pinverse`; the cause might be the large value and the 0 in the input's shape:
````python
import torch
input = torch.rand([11, 0, 1500908595704918919, 13, 3], dtype=torch.float32).cuda()
rcond = 1
res = torch.pinverse(
input=input,
rcond=rcond,
)
````
Output:
````
Segmentation fault (core dumped)
````
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230307+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 11.0.0-2~ubuntu20.04.1
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-PCIE-16GB
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 24 MiB
L3 cache: 33 MiB
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230307+cu117
[pip3] torchaudio==2.0.0.dev20230307+cu117
[pip3] torchvision==0.15.0.dev20230307+cu117
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230307+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230307+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230307+cu117 pypi_0 pypi
| 0 |
3,281 | 96,274 |
Dynamic batch size support when combine `torchdynamo.export` and `compile_fx`
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
When running resnet50 in eager mode, changing the batch size works. The same holds if we capture the fx graph through `torchdynamo.export` with `tracing_mode = "symbolic"`. However, if we combine `torchdynamo.export` and `compile_fx`, the generated code still has a [fixed batch size value](https://gist.github.com/leslie-fang-intel/8d4e6185aafb86ce93a03e5d6481139d#file-generated-code-py-L5119).
Here is the [example code](https://gist.github.com/leslie-fang-intel/e3f105fda6619c9db8a009aab425ccc9) to reproduce this error; the command to run it is: `clear && TORCHDYNAMO_DYNAMIC_SHAPES=1 python test_rn50.py`
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git9e3f173
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.19.5-1.el7.elrepo.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Genuine CPU
Stepping: 10
CPU MHz: 1341.019
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-27
NUMA node1 CPU(s): 28-55
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==2.1.0a0+git9e3f173
[pip3] torchvision==0.15.0a0+5850f37
[conda] mkl 2023.0.0 intel_25398 intel
[conda] mkl-include 2023.0.0 intel_25398 intel
[conda] mkl-service 2.4.0 py38h3605609_14 intel
[conda] mkl_fft 1.3.1 py38hcab1719_22 intel
[conda] mkl_random 1.2.2 py38hbf47bc3_22 intel
[conda] mkl_umath 0.1.1 py38hf66a691_32 intel
[conda] numpy 1.22.3 py38hf0956d0_5 intel
[conda] numpy-base 1.22.3 py38h45c9ace_5 intel
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 6 |
3,282 | 96,265 |
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
|
triaged
|
### 🐛 Describe the bug
While testing the PipeDream code after upgrading torch from 1.1.0 to 1.11.0, I encountered the following error:
```
/opt/conda/envs/torch/lib/python3.7/site-packages/torch/autograd/__init__.py:175: UserWarning: Error detected in ConvolutionBackward0. Traceback of forward call that caused the error:
File "main_with_runtime_1.py", line 580, in <module>
main()
File "main_with_runtime_1.py", line 307, in main
train(train_loader, r, optimizer, epoch)
File "main_with_runtime_1.py", line 356, in train
r.run_forward()
File "../runtime_3.py", line 511, in run_forward
self._run_forward(tensors)
File "../runtime_3.py", line 559, in _run_forward
for input_name in input_names])
File "/opt/conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/pipeline/runtime/image_classification/models/alexnet/gpus=4_straight/stage2.py", line 25, in forward
out5 = self.layer5(out4)
File "/opt/conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 447, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/opt/conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 444, in _conv_forward
self.padding, self.dilation, self.groups)
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:104.)
allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "main_with_runtime_1.py", line 580, in <module>
main()
File "main_with_runtime_1.py", line 307, in main
train(train_loader, r, optimizer, epoch)
File "main_with_runtime_1.py", line 407, in train
r.run_backward()
File "../runtime_3.py", line 648, in run_backward
for output_name in outputs]))
File "/opt/conda/envs/torch/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 256, 3, 3]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
```
def train(train_loader, r, optimizer, epoch):
batch_time = AverageMeter()
losses = AverageMeter()
top1 = AverageMeter()
top5 = AverageMeter()
# switch to train mode
n = r.num_iterations(loader_size=len(train_loader))
if args.num_minibatches is not None:
n = min(n, args.num_minibatches)
r.train(n)
if not is_first_stage(): train_loader = None
r.set_loader(train_loader)
end = time.time()
epoch_start_time = time.time()
if args.no_input_pipelining:
num_warmup_minibatches = 0
else:
num_warmup_minibatches = r.num_warmup_minibatches
if args.verbose_frequency > 0:
print("Letting in %d warm-up minibatches" % num_warmup_minibatches)
print("Running training for %d minibatches" % n)
# start num_warmup_minibatches forward passes
for i in range(num_warmup_minibatches):
# flag.FLAG_DISABLE_COMPRESSION = True
r.run_forward()
for i in range(n - num_warmup_minibatches):
# flag.FLAG_DISABLE_COMPRESSION = False
# perform forward pass
r.run_forward()
# Adjust learning rate
adjust_learning_rate(optimizer, epoch, args.epochs, r, args.lr_policy, i, n)
if is_last_stage():
# measure accuracy and record loss
output, target, loss = r.output, r.target, r.loss
prec1, prec5 = accuracy(output, target, topk=(1, 5))
losses.update(loss.item(), output.size(0))
top1.update(prec1[0], output.size(0))
top5.update(prec5[0], output.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
epoch_time = (end - epoch_start_time) / 3600.0
full_epoch_time = (epoch_time / float(i+1)) * float(n)
if i % args.print_freq == 0:
print('Epoch: [{0}][{1}/{2}]\t'
'Time: {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Epoch time [hr]: {epoch_time:.3f} ({full_epoch_time:.3f})\t'
'Memory: {memory:.3f} ({cached_memory:.3f})\t'
'Loss: {loss.val:.4f} ({loss.avg:.4f})\t'
'Prec@1: {top1.val:.3f} ({top1.avg:.3f})\t'
'Prec@5: {top5.val:.3f} ({top5.avg:.3f})'.format(
epoch, i, n, batch_time=batch_time,
epoch_time=epoch_time, full_epoch_time=full_epoch_time,
loss=losses, top1=top1, top5=top5,
memory=(float(torch.cuda.memory_allocated()) / 10**9),
cached_memory=(float(torch.cuda.memory_cached()) / 10**9)))
import sys; sys.stdout.flush()
else:
if i % args.print_freq == 0:
print('Epoch: [{0}][{1}/{2}]\tMemory: {memory:.3f} ({cached_memory:.3f})'.format(
epoch, i, n, memory=(float(torch.cuda.memory_allocated()) / 10**9),
cached_memory=(float(torch.cuda.memory_cached()) / 10**9)))
import sys; sys.stdout.flush()
# perform backward pass
if args.fp16:
r.zero_grad()
else:
optimizer.zero_grad()
optimizer.load_old_params()
r.run_backward()
optimizer.load_new_params()
optimizer.step()
# finish remaining backward passes
for i in range(num_warmup_minibatches):
optimizer.zero_grad()
optimizer.load_old_params()
r.run_backward()
optimizer.load_new_params()
optimizer.step()
# wait for all helper threads to complete
r.wait()
print("Epoch %d: %.3f seconds" % (epoch, time.time() - epoch_start_time))
print("Epoch start time: %.3f, epoch end time: %.3f" % (epoch_start_time, time.time()))
```
PipeDream's logic is that some stages perform multiple forward passes before performing one backward pass, and the new version of Torch seems to object to this pattern. I would like to ask how to avoid this problem.
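A minimal sketch of what autograd now rejects (my reconstruction, not PipeDream code): an in-place parameter update lands between a forward pass and its pending backward, bumping the version counter of a tensor autograd saved:
```python
import torch

lin = torch.nn.Linear(2, 2)
x = torch.randn(4, 2, requires_grad=True)
out = lin(x).sum()        # autograd saves lin.weight for the backward of x

with torch.no_grad():
    lin.weight.mul_(2.0)  # weight update before the matching backward

try:
    out.backward()
except RuntimeError as e:
    print(e)  # "... modified by an inplace operation ..."
```
Note that if `load_old_params()` restores the stashed weights in place, the restore itself still bumps the version counter, so the check fires even when the restored values match; running each forward against its own copy of the parameters avoids mutating tensors that a pending backward has saved.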
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu115
Is debug build: False
CUDA used to build PyTorch: 11.5
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.6 LTS (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Clang version: Could not collect
CMake version: version 3.5.1
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-204-generic-x86_64-with-debian-stretch-sid
Is CUDA available: True
CUDA runtime version: 10.1.163
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla P100-PCIE-12GB
GPU 1: Tesla P100-PCIE-12GB
GPU 2: Tesla P100-PCIE-12GB
GPU 3: Tesla P100-PCIE-12GB
GPU 4: Tesla P100-PCIE-12GB
GPU 5: Tesla P100-PCIE-12GB
GPU 6: Tesla P100-PCIE-12GB
GPU 7: Tesla P100-PCIE-12GB
Nvidia driver version: 515.65.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 20
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2200.102
BogoMIPS: 4404.71
Hypervisor vendor: vertical
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-19
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.3.2
[pip3] numpy==1.21.5
[pip3] torch==1.11.0+cu115
[pip3] torchvision==0.12.0+cu115
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] magma-cuda100 2.1.0 5 local
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] msgpack-numpy 0.4.3.2 py37_0
[conda] numpy 1.21.5 py37h7a5d4dd_2
[conda] numpy-base 1.21.5 py37hb8be1f0_2
[conda] torch 1.11.0+cu115 pypi_0 pypi
[conda] torchvision 0.12.0+cu115 pypi_0 pypi
| 2 |
3,283 | 96,236 |
nn.interpolate scale_factor floors output size with floating
|
module: nn, triaged, module: edge cases
|
## Issue description
`nn.functional.interpolate(data, scale_factor)` is affected by floating-point error:
for instance, if the input image has shape (1,3,88,88) and scale_factor=120/88, the resulting image is (1,3,119,119).
This likely results from the output size being floored after the floating-point product 88*(120/88) = 119.99999999999999.
This is not a problem when using the `size` argument.
### Possible desired solutions
* introduce an `atol` kwarg
* introduce float->int behaviour options: 'round', 'floor', 'ceil'
This can be fixed by bypassing the scale->size calculation.
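A workaround sketch along those lines (the helper name is my own): compute the target size with `round()` and pass `size` instead of `scale_factor`:
```python
import torch
from torch.nn.functional import interpolate

def interpolate_rounded(x, scale_factor, mode='nearest'):
    # bypass the internal scale->size floor by rounding the size ourselves
    size = [round(s * scale_factor) for s in x.shape[2:]]
    return interpolate(x, size=size, mode=mode)

x = torch.randn(1, 3, 88, 88)
assert interpolate_rounded(x, 120 / 88).shape[2:] == (120, 120)
```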
## Code example
```python
import torch
from torch.nn.functional import interpolate
def test_interpolate():
size = 120
scale_modes = ['bicubic', 'bilinear', 'nearest']
devices = ['cpu', 'cuda']
dtypes = [torch.float32, torch.float16, torch.float64]
for mode in scale_modes:
for device in devices:
for dtype in dtypes:
for _s in range(size//2 -1, size*2 + 1):
out = torch.randn(1,3,_s,_s, device=device, dtype=dtype)
scale = size/out.shape[-2]
_h, _w = interpolate(out, scale_factor=scale, mode=mode).shape[2:]
_msg = f"interpolate (x:(N,C,{_s},{_s}), scale={scale}) -> (N,C,{_h},{_w}) "
_msg += f"dtype {dtype}, device {device}"
assert _h == _w == size, _msg
```
```bash
>>> test_interpolate()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/z/work/ubuso/test/test_imgio.py", line 55, in test_interpolate
assert _h == _w == size, _msg
AssertionError: interpolate (x:(N,C,88,88), scale=1.3636363636363635) -> (N,C,119,119) dtype torch.float32, device cpu
```
## System Info
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.19.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12800H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5606.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch3d==0.7.2
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.8.0 h37601d7_11 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.7.2 py39_cu117_pyt1131 pytorch3d
[conda] torchaudio 0.13.1 py39_cu117 pytorch
[conda] torchvision 0.14.1 py39_cu117 pytorch
cc @albanD @mruberry @jbschlosser @walterddr @saketh-are
| 0 |
3,284 | 96,225 |
[MPS] F.conv1d and F.conv2d produce incorrect gradients when minibatch >= 2^16
|
triaged, module: mps
|
### 🐛 Describe the bug
Both `F.conv1d` and `F.conv2d` (and `torch.nn.Conv1d` and `torch.nn.Conv2d`) produce incorrect gradients
when the minibatch size is 65_536 (=2^16) or greater. In the example below, the bottom two assertions fail.
```python
import copy
import torch
def test_grad(minibatch, dimensions=1):
if dimensions == 1:
view = (-1, 1, 1)
op = torch.nn.functional.conv1d
elif dimensions == 2:
view = (-1, 1, 1, 1)
op = torch.nn.functional.conv2d
# create identical input tensors
input_cpu = torch.ones(minibatch)
input_mps = copy.deepcopy(input_cpu).to("mps")
# create identical weight tensors
weight_cpu = torch.ones(1)
weight_mps = copy.deepcopy(weight_cpu).to("mps")
weight_cpu.requires_grad_()
weight_mps.requires_grad_()
# get output
out_cpu = op(input_cpu.view(view), weight_cpu.view(view)).mean()
out_mps = op(input_mps.view(view), weight_mps.view(view)).mean()
out_cpu.backward()
out_mps.backward()
# compare CPU and MPS outputs
assert torch.allclose(out_cpu, out_mps.to("cpu"))
assert torch.allclose(weight_cpu.grad, weight_mps.grad.to("cpu")) # fails with minibatch >= 65_536
test_grad(65_535, dimensions=1)
test_grad(65_535, dimensions=2)
test_grad(65_536, dimensions=1) # second assertion fails
test_grad(65_536, dimensions=2) # second assertion fails
```
The output might be non-deterministic, but in the testing below the error rate with `minibatch=65_535` is 6%,
whereas the error rate with `minibatch=65_536` is 100%.
```python
n_iter = 100
errors_65_535 = 0
errors_65_536 = 0
for _ in range(n_iter):
try:
test_grad(65_535)
except AssertionError:
errors_65_535 += 1
try:
test_grad(65_536)
except AssertionError:
errors_65_536 += 1
print(f"65_535 AssertionError rate = {errors_65_535 / n_iter : .0%}")
print(f"65_536 AssertionError rate = {errors_65_536 / n_iter : .0%}")
```
### Versions
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.7
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 (main, Aug 29 2022, 10:06:59) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-13.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.2
[pip3] torch==1.13.1
[pip3] torchtext==0.13.1
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
3,285 | 96,205 |
[Dynamo] HuggingFace transformers configuration_utils graph break workaround
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
HF worked around a dynamo problem here: https://github.com/huggingface/transformers/pull/21648/files#r1107482201
This should not be necessary; we should fix our code.
### Versions
master
cc @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @soumith
| 3 |
3,286 | 96,198 |
dynamo + dict subclass + tensor instance check: NotImplementedError
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Torchdynamo fails to understand the type of `self.__dict__` when it is `isinstance`-checked against `torch.Tensor`.
Note that `dict(self.__dict__)` works as a workaround.
### Error logs
```python
Traceback (most recent call last):
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1014, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 561, in call_function
result = handler(tx, *args, **kwargs)
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 799, in call_isinstance
arg_type = arg.python_type()
File "/home/carmocca/git/py310/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 146, in python_type
raise NotImplementedError(f"{self} has no type")
NotImplementedError: GetAttrVariable(UserDefinedObjectVariable(MyDict), __dict__) has no type
from user code:
File "/home/carmocca/git/lightning/kk4.py", line 7, in foo
print(isinstance(self.__dict__, torch.Tensor))
```
### Minified repro
```python
import torch
class MyDict(dict):
def foo(self):
print(isinstance(self.__dict__, torch.Tensor))
def x():
d = MyDict()
d.foo()
c_x = torch.compile(x)
c_x()
```
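For reference, a sketch of the `dict(self.__dict__)` workaround applied to the repro above:
```python
import torch

class MyDict(dict):
    def foo(self):
        # materialize __dict__ as a plain dict so dynamo can type it
        print(isinstance(dict(self.__dict__), torch.Tensor))

def x():
    MyDict().foo()

torch.compile(x)()  # no NotImplementedError
```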
### Versions
`torch==2.1.0.dev20230302+cpu`
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,287 | 96,187 |
`gradgradcheck` does not work with sparse inputs.
|
module: sparse, module: autograd, triaged
|
### 🐛 Describe the bug
As per title. None of the combinations below worked, and the error messages are not always relevant/helpful:
Case 1: call as is, but make sure that `fn` calls to `to_dense`:
```python
In [1]: import torch
In [2]: x = torch.rand(3, 3, dtype=torch.double).to_sparse().requires_grad_(True)
In [3]: torch.autograd.gradgradcheck(lambda x: x.to_dense(), (x,))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 torch.autograd.gradgradcheck(lambda x: x.to_dense(), (x,))
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:1694, in gradgradcheck(func, inputs, grad_outputs, eps, atol, rtol, gen_non_contig_grad_outputs, raise_exception, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_fwd_over_rev, check_rev_over_rev, fast_mode, masked)
1691 grad_inputs = tuple(g for g in grad_inputs if g is not None)
1692 return grad_inputs
-> 1694 return gradcheck(
1695 new_func, tupled_inputs + tupled_grad_outputs, eps=eps, atol=atol, rtol=rtol, raise_exception=raise_exception,
1696 nondet_tol=nondet_tol, check_undefined_grad=check_undefined_grad,
1697 check_grad_dtypes=check_grad_dtypes, check_batched_grad=check_batched_grad, fast_mode=fast_mode,
1698 check_forward_ad=check_fwd_over_rev, check_backward_ad=check_rev_over_rev, masked=masked)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:1536, in gradcheck(func, inputs, eps, atol, rtol, raise_exception, check_sparse_nnz, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_batched_forward_grad, check_forward_ad, check_backward_ad, fast_mode, masked)
1534 return False
1535 else:
-> 1536 return _gradcheck_helper(**args)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:1547, in _gradcheck_helper(func, inputs, eps, atol, rtol, check_sparse_nnz, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_batched_forward_grad, check_forward_ad, check_backward_ad, fast_mode, masked)
1545 func_out = func(*tupled_inputs)
1546 outputs = _differentiable_outputs(func_out)
-> 1547 _check_outputs(outputs)
1549 gradcheck_fn = functools.partial(_fast_gradcheck if fast_mode else _slow_gradcheck, masked=masked)
1550 _gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
1551 rtol, atol, check_grad_dtypes, check_forward_ad=check_forward_ad,
1552 check_backward_ad=check_backward_ad, nondet_tol=nondet_tol,
1553 check_undefined_grad=check_undefined_grad)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:772, in _check_outputs(outputs)
768 def _check_outputs(outputs) -> None:
769 if any(_is_sparse_any_tensor(t) for t in outputs if isinstance(t, torch.Tensor)):
770 # it is easier to call to_dense() on the sparse output than
771 # to modify analytical jacobian
--> 772 raise ValueError('Sparse output is not supported at gradcheck yet. '
773 'Please call to_dense() on the output of fn for gradcheck.')
774 if any(t.layout == torch._mkldnn for t in outputs if isinstance(t, torch.Tensor)): # type: ignore[attr-defined]
775 raise ValueError('MKLDNN output is not supported at gradcheck yet. '
776 'Please call to_dense() on the output of fn for gradcheck.')
ValueError: Sparse output is not supported at gradcheck yet. Please call to_dense() on the output of fn for gradcheck.
```
Case 2: specify `masked=True`.
```python
In [4]: torch.autograd.gradgradcheck(lambda x: x.to_dense(), (x,), masked=True)
---------------------------------------------------------------------------
GradcheckError Traceback (most recent call last)
Input In [4], in <cell line: 1>()
----> 1 torch.autograd.gradgradcheck(lambda x: x.to_dense(), (x,), masked=True)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:1694, in gradgradcheck(func, inputs, grad_outputs, eps, atol, rtol, gen_non_contig_grad_outputs, raise_exception, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_fwd_over_rev, check_rev_over_rev, fast_mode, masked)
1691 grad_inputs = tuple(g for g in grad_inputs if g is not None)
1692 return grad_inputs
-> 1694 return gradcheck(
1695 new_func, tupled_inputs + tupled_grad_outputs, eps=eps, atol=atol, rtol=rtol, raise_exception=raise_exception,
1696 nondet_tol=nondet_tol, check_undefined_grad=check_undefined_grad,
1697 check_grad_dtypes=check_grad_dtypes, check_batched_grad=check_batched_grad, fast_mode=fast_mode,
1698 check_forward_ad=check_fwd_over_rev, check_backward_ad=check_rev_over_rev, masked=masked)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:1536, in gradcheck(func, inputs, eps, atol, rtol, raise_exception, check_sparse_nnz, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_batched_forward_grad, check_forward_ad, check_backward_ad, fast_mode, masked)
1534 return False
1535 else:
-> 1536 return _gradcheck_helper(**args)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:1543, in _gradcheck_helper(func, inputs, eps, atol, rtol, check_sparse_nnz, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_batched_forward_grad, check_forward_ad, check_backward_ad, fast_mode, masked)
1539 def _gradcheck_helper(func, inputs, eps, atol, rtol, check_sparse_nnz, nondet_tol, check_undefined_grad,
1540 check_grad_dtypes, check_batched_grad, check_batched_forward_grad, check_forward_ad,
1541 check_backward_ad, fast_mode, masked):
1542 tupled_inputs = _as_tuple(inputs)
-> 1543 _check_inputs(tupled_inputs, check_sparse_nnz, masked)
1545 func_out = func(*tupled_inputs)
1546 outputs = _differentiable_outputs(func_out)
File ~/git/Quansight/pytorch/torch/autograd/gradcheck.py:732, in _check_inputs(tupled_inputs, check_sparse_nnz, masked)
730 def _check_inputs(tupled_inputs, check_sparse_nnz, masked) -> bool:
731 if masked and not check_sparse_nnz and any(_is_sparse_any_tensor(t) for t in tupled_inputs if isinstance(t, torch.Tensor)):
--> 732 raise GradcheckError('gradcheck expects all tensor inputs are dense'
733 ' when check_sparse_nnz is set to False and masked is set to True.')
734 # Make sure that gradients are saved for at least one input
735 any_input_requiring_grad = False
GradcheckError: gradcheck expects all tensor inputs are dense when check_sparse_nnz is set to False and masked is set to True.
```
Case 3: well, if `masked=True` does not work, we should use both `masked=True` and `check_sparse_nnz=True`, but:
```python
In [5]: torch.autograd.gradgradcheck(lambda x: x.to_dense(), (x,), masked=True, check_sparse_nnz=True)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [5], in <cell line: 1>()
----> 1 torch.autograd.gradgradcheck(lambda x: x.to_dense(), (x,), masked=True, check_sparse_nnz=True)
TypeError: gradgradcheck() got an unexpected keyword argument 'check_sparse_nnz'
```
### Versions
Current master.
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer @ezyang @albanD @zou3519 @gqchen @soulitzer @Lezcano @Varal7
| 3 |
3,288 | 96,185 |
`ld: error: unknown argument '-force_load'` when linking libtorch on Android
|
module: build, oncall: mobile
|
### 🐛 Describe the bug
What I did:
1. Built PyTorch 1.13.1 from source by running `scripts/build_android.sh -DUSE_LITE_INTERPRETER_PROFILER=OFF` for arm64-v8a (it builds as static libraries this way).
2. Linked against the resulting installation in CMake roughly like so:
```cmake
find_package(Torch REQUIRED PATHS "${torch_SOURCE_DIR}/share/cmake/Torch")
target_link_libraries(my_lib PRIVATE ${TORCH_LIBRARIES})
target_include_directories(my_lib PRIVATE ${TORCH_INCLUDE_DIRS})
```
3. Tried to build my Android app that uses `my_lib` through JNI.
During the build I got `ld: error: unknown argument '-force_load'` -- this flag is set [here](https://github.com/pytorch/pytorch/blob/12ab4f08b77e1f2685ff15e7cd2a07d15ed80a43/cmake/TorchConfig.cmake.in#L32) in `TorchConfig.cmake`.
Full error message:
```
FAILED: <project dir>/build/intermediates/cxx/Debug/4m2x1c2x/obj/arm64-v8a/libmy_lib.so
cmd.exe /C "cd . && C:\Users\<my username>\AppData\Local\Android\Sdk\ndk\23.1.7779620\toolchains\llvm\prebuilt\windows-x86_64\bin\clang++.exe --target=aarch64-none-linux-android21 --sysroot=C:/Users/<my username>/AppData/Local/Android/Sdk/ndk/23.1.7779620/toolchains/llvm/prebuilt/windows-x86_64/sysroot -fPIC -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fexceptions -frtti -stdlib=libc++ -g -fno-limit-debug-info -Wl,--build-id=sha1 -Wl,--no-rosegment -Wl,--fatal-warnings -Qunused-arguments -Wl,--no-undefined -shared -Wl,-soname,libmy_lib.so -o <other .o and .a/.so files> -pthread -ldl -Wl,-force_load <project dir>/.cxx/Debug/4m2x1c2x/arm64-v8a/_deps/torch-src/lib/libtorch.a -Wl,-force_load <project dir>/.cxx/Debug/4m2x1c2x/arm64-v8a/_deps/torch-src/lib/libtorch_cpu.a _deps/torch-src/lib/libc10.a _deps/torch-src/lib/libnnpack.a _deps/torch-src/lib/libpytorch_qnnpack.a _deps/torch-src/lib/libXNNPACK.a _deps/torch-src/lib/libcpuinfo.a _deps/torch-src/lib/libclog.a _deps/torch-src/lib/libpthreadpool.a -static-libstdc++ -latomic -lm && cd ."
ld: error: unknown argument '-force_load'
ld: error: unknown argument '-force_load'
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
```
### Versions
- Windows 11 x64
- Android NDK 23.1.7779620 (also tried 25.2.9519653)
- CMake 3.22.1
- PyTorch built from sources, `v1.13.1` git tag as described above
cc @malfet @seemethere
| 1 |
3,289 | 96,161 |
[torchdistx] Future of the large model initialization
|
module: nn, triaged, ezyang's list, module: meta tensors, module: fsdp
|
For deep learning, when the model is large, model creation and initialization on the host device can take a tremendous amount of time and sometimes causes host OOM. The existing [torchdistx](https://github.com/pytorch/torchdistx) package resolves this issue efficiently by introducing the `deferred_init` API, which first creates the model with fake tensors and then materializes those tensors after the model is already partitioned, either by FSDP or other model parallelism methods.
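For context, a minimal sketch of the deferred-init workflow, assuming the torchdistx package and its documented `deferred_init`/`materialize_module` entry points (the module and sizes here are illustrative):
```python
import torch.nn as nn
from torchdistx.deferred_init import deferred_init, materialize_module

# deferred_init builds the module on fake tensors, so almost no real
# host memory is allocated at construction time.
model = deferred_init(nn.Linear, 10_000, 10_000)
# ... partition/shard the model here, e.g. wrap with FSDP ...
materialize_module(model)  # now allocate and initialize the parameters for real
```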
However, it seems there is no active development in the torchdistx package, while the current FSDP implementation still depends on it for `deferred_init`. I'm wondering if there is any plan to merge the `deferred_init` API into native PyTorch, and if not, what the native solution for large model initialization will be?
This issue is associated with this [PR](https://github.com/pytorch/xla/pull/4664) in the Pytorch/XLA repo.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @eellison @bdhirsh @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu @saketh-are @soumith
| 8 |
3,290 | 96,153 |
mps bug: failed assertion `[MPSNDArrayDescriptor sliceDimension:withSubrange:] error: subRange.start (6) is not less than length of dimension[0] (6)'
|
triaged, module: regression, module: viewing and reshaping, module: mps
|
### 🐛 Describe the bug
I got a bug that looks similar to #95883, which was just closed. Not sure if it's the same. I'm running on the latest nightly version.
Code:
```
import torch
t = torch.ones((2,6,), device='mps')[1].reshape(2,3)
print("No error yet")
t = t + 1
```
Output:
```
No error yet
/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:82: failed assertion `[MPSNDArrayDescriptor sliceDimension:withSubrange:] error: subRange.start (6) is not less than length of dimension[0] (6)'
zsh: abort python test/torch_mps_test.py
```
Interestingly, if I change the code to
```
import torch
t = torch.ones((2,6,), device='mps')[1].reshape(3,2)
print("No error yet")
t = t + 1
```
then there is no error.
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230306
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.3 (arm64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.28)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.16 (main, Mar 1 2023, 12:19:04) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-12.6.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.1.0.dev20230306
[pip3] torchdiffeq==0.2.3
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 2.1.0.dev20230306 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 7 |
3,291 | 96,140 |
mkldnn matmul kernel may be slower than openblas kernel for very small tensor shapes
|
module: performance, triaged, module: mkldnn
|
### 🐛 Describe the bug
The mkldnn matmul kernel may be slower than the openblas kernel for very small tensor shapes. While the mkldnn matmul backend accelerates many matmul shapes, its additional graph rewrite latency is not acceptable for very small tensors, for example, for the below shapes:
12x12x64:12x64x12:12x12x12
12x16x16:12x16x64:12x16x64
So, until some heuristic is implemented to dynamically select the best kernel, it would be good to have at least a runtime control to enable and disable the mkldnn matmul path.
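For reference, a hedged micro-benchmark sketch for the two batched-matmul shapes above (b×m×k @ b×k×n); absolute timings are of course environment- and backend-dependent:
```python
import torch
import torch.utils.benchmark as benchmark

# (b, m, k, n) pairs corresponding to 12x12x64 @ 12x64x12 and 12x16x16 @ 12x16x64
for b, m, k, n in [(12, 12, 64, 12), (12, 16, 16, 64)]:
    x, y = torch.randn(b, m, k), torch.randn(b, k, n)
    t = benchmark.Timer(stmt="torch.bmm(x, y)", globals={"torch": torch, "x": x, "y": y})
    print(f"{b}x{m}x{k} @ {b}x{k}x{n}: {t.timeit(1000)}")
```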
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (aarch64)
GCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1028-aws-aarch64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-7
Off-line CPU(s) list: 8-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: ARM
Model: 1
Stepping: r1p1
BogoMIPS: 2100.00
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 8 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.1.0a0+git893aa5d
[pip3] torchvision==0.14.1
[conda] Could not collect
cc @ngimel @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 5 |
3,292 | 96,136 |
`torch.utils.checkpoint` should avoid updating BatchNorm statistics twice
|
module: checkpoint, triaged
|
### 🚀 The feature, motivation and pitch
`torch.utils.checkpoint` provides two checkpointing flavors
1. Reentrant
2. non-reentrant
In both cases, if the wrapped module contains `BatchNorm` layers, the batch norm statistics will be updated twice because the forward pass has to be run twice. This behavior makes reproducing experiment results between checkpointed and un-checkpointed runs harder.
To repro the problem, one can run https://github.com/facebookresearch/fairscale/blob/main/tests/nn/checkpoint/test_checkpoint_activations_norm.py test without the fairscale wrapper.
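For a self-contained illustration, a minimal sketch of the double update (a hypothetical repro, not the fairscale test; the exact running-stat values depend on momentum, but the batch counter shows the effect):
```python
import torch
from torch.utils.checkpoint import checkpoint

bn = torch.nn.BatchNorm1d(4)
x = torch.randn(8, 4, requires_grad=True)

out = checkpoint(bn, x, use_reentrant=True)
out.sum().backward()  # backward re-runs the forward, updating running stats again

print(bn.num_batches_tracked)  # tensor(2) after a single "logical" forward
```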
### Alternatives
Fairscale provides a patched reentrant flavor of the activation checkpoint wrapper that uses two hooks to avoid the second forward pass updating the batchnorm stats.
But currently we don't have a clean API to support this behavior in the non-reentrant case. One method I can think of to achieve this effect is to make the following modification in the non-reentrant pass, though I was not sure if it would work. UPDATE: it doesn't seem to work.
1. mark the `running_mean` and `running_var` tensor inside BatchNorm with a special attribute.
2. detect that special attribute during [pack](https://github.com/pytorch/pytorch/blob/master/torch/utils/checkpoint.py#L378), and return the normal tensor instead of the holder object
3. during `unpack`, if a tensor is passed in as argument, return the tensor directly instead of loading it from `storage`
4. during `unpack`, run the forward pass with `track_running_stats=False`, so the `running_mean` and `running_var` does not get updated.
### Additional context
_No response_
| 1 |
3,293 | 96,130 |
torch.compile fails when compiling a T5-style model with HF interfaces
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
When using a Huggingface T5 model to generate text, one usually uses generate(). This function ends up calling the forward() method of the model with a set of options that result in the function returning a Seq2SeqLMOutput(). If this dataclass only has 1 argument passed into it, torch.compile() fails. If more than 1 argument is passed into the constructor, the code works as expected.
cc: @HamidShojanazeri @ezyang @raghukiran1224 @mudhakar
### Error logs
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in wrapper
return inner_fn(self, inst)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1063, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 517, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 127, in call_function
return variables.DataClassVariable.create(self.value, args, kwargs, options)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/variables/dicts.py", line 344, in create
if len(items) == 1 and not isinstance(items[keys[0]], variables.TensorVariable):
KeyError: loss
from user code:
File "/workspace/foundation-model-stack/nlp/scripts/inference/torch_compile_repro.py", line 8, in forward
return Seq2SeqLMOutput(logits=inputs)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/foundation-model-stack/nlp/scripts/inference/torch_compile_repro.py", line 15, in <module>
main()
File "/workspace/foundation-model-stack/nlp/scripts/inference/torch_compile_repro.py", line 13, in main
model(torch.tensor([0.1, 0.2]))
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 95, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 368, in catch_errors
return callback(frame, cache_size, hooks)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
raise InternalTorchDynamoError() from e
torch._dynamo.exc.InternalTorchDynamoError
```
### Minified repro
python mini_repro.py
```
import torch
import torch.nn as nn
from transformers.modeling_outputs import Seq2SeqLMOutput

class ReproError(nn.Module):
    def forward(self, inputs):
        return Seq2SeqLMOutput(logits=inputs)

def main():
    model = torch.compile(ReproError())
    model(torch.tensor([0.1, 0.2]))

main()
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230304+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Xeon Processor (Cascadelake)
Stepping: 5
CPU MHz: 2399.998
BogoMIPS: 4799.99
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 160 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.0.0+b8b470bc59
[pip3] torch==2.1.0.dev20230304+cu117
[pip3] torchvision==0.15.0.dev20230304+cu117
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch-triton 2.0.0+b8b470bc59 pypi_0 pypi
[conda] torch 2.1.0.dev20230304+cu117 pypi_0 pypi
[conda] torchvision 0.15.0.dev20230304+cu117 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire @davidberard98
| 6 |
3,294 | 96,123 |
make_fx tracing with dynamic shapes should also disable_slice_optimization
|
triaged, module: aotdispatch
|
### 🐛 Describe the bug
This will help remove some guards related to slicing.
Test case:
```
diff --git a/test/test_proxy_tensor.py b/test/test_proxy_tensor.py
index d1f5de669b3..d8f2d6f1c0b 100644
--- a/test/test_proxy_tensor.py
+++ b/test/test_proxy_tensor.py
@@ -970,6 +970,16 @@ def forward(self, crop_camera_1, mask_1):
         index_put_ = torch.ops.aten.index_put_.default(crop_camera_1, [mask_1], view_2); crop_camera_1 = mask_1 = view_2 = None
         return None""")
 
+    def test_unbacked_slice(self):
+        def f(x, m):
+            x = x[m]
+            return x[slice(None, None, None), slice(None, None, None), slice(None, 2, None)]
+
+        make_fx(f, tracing_mode="symbolic")(
+            torch.randn((12, 3, 3)),
+            torch.randint(0, 2, (12,), dtype=torch.bool)
+        )
+
     @unittest.skipIf(not USE_TORCHVISION, "test requires torchvision")
     def test_unbacked_batch_resnet(self):
         mod = torchvision.models.resnet18()
```
### Versions
master
| 0 |
3,295 | 96,118 |
Activation Checkpointing PT2 - AOTAutograd cannot handle set_rng_state
|
module: checkpoint, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
### What am I trying to do?
My end goal is to support activation checkpointing using the TorchDynamo escape hatch - `allow_in_graph`. As shown in the snippet below, `torch.utils.checkpoint.checkpoint` is inserted into the Dynamo-generated FX graph as-is. AOT Autograd then traces it and hits a `set_rng_state`-related error.
I will first show the full code that triggers the problem, and later a smaller repro.
~~~
import logging
import torch
import torch._dynamo
import weakref
import traceback
import torch.utils.checkpoint
from functorch.compile import aot_function, aot_module, draw_graph, nop, print_compile

# torch._dynamo.config.log_level = logging.DEBUG

class Mock(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = torch.nn.ReLU()

    def forward(self, x, y):
        # return self.relu(x + y)
        a = torch.mm(x, y)
        # What happens if we do a graph break here :(
        # torch._dynamo.graph_break()
        a = torch.mm(a, y)
        b = self.relu(a)
        return b

mod = Mock()

@torch._dynamo.allow_in_graph
def allowed_checkpoint(x, y, preserve_rng_state):
    return torch.utils.checkpoint.checkpoint(mod, x, y, preserve_rng_state=preserve_rng_state, use_reentrant=False)

def fn(x, y, preserve_rng_state=True):
    a = allowed_checkpoint(x, y, preserve_rng_state=preserve_rng_state)
    # a = allowed_checkpoint(mod, a, y)
    return a

x = torch.randn(4, 4, device="cuda", requires_grad=True)
y = torch.randn(4, 4, device="cuda", requires_grad=True)

# Case 3 - Try aot_eager backend - Dont preserve rng state
opt_fn = torch.compile(fn, backend="aot_eager")
opt_z = opt_fn(x, y, True)
opt_z.sum().backward()
print("Success")
~~~
This triggers the following error.
~~~
Traceback (most recent call last):
File "/scratch/anijain/work/pytorch/torch/random.py", line 133, in fork_rng
yield
File "/scratch/anijain/work/pytorch/torch/utils/checkpoint.py", line 414, in unpack
set_device_states(fwd_gpu_devices, fwd_gpu_states)
File "/scratch/anijain/work/pytorch/torch/utils/checkpoint.py", line 58, in set_device_states
torch.cuda.set_rng_state(state)
File "/scratch/anijain/work/pytorch/torch/cuda/random.py", line 64, in set_rng_state
_lazy_call(cb)
File "/scratch/anijain/work/pytorch/torch/cuda/__init__.py", line 192, in _lazy_call
callable()
File "/scratch/anijain/work/pytorch/torch/cuda/random.py", line 62, in cb
default_generator.set_state(new_state_copy)
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch/anijain/work/pytorch/torch/_dynamo/output_graph.py", line 708, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/scratch/anijain/work/pytorch/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/scratch/anijain/work/pytorch/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 2810, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 164, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 2503, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 1717, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 2092, in aot_dispatch_autograd
fx_g = make_fx(joint_forward_backward, aot_config.decompositions)(
File "/scratch/anijain/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 714, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/anijain/work/pytorch/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/scratch/anijain/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 443, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/anijain/work/pytorch/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/scratch/anijain/work/pytorch/torch/fx/_symbolic_trace.py", line 778, in trace
(self.create_arg(fn(*args)),),
File "/scratch/anijain/work/pytorch/torch/fx/_symbolic_trace.py", line 652, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/anijain/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 1156, in traced_joint
return functionalized_f_helper(primals, tangents)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 1108, in functionalized_f_helper
f_outs = flat_fn_no_input_mutations(fn, f_primals, f_tangents, meta, keep_input_mutations)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 1076, in flat_fn_no_input_mutations
outs = flat_fn_with_synthetic_bases_expanded(fn, primals, primals_after_cloning, maybe_tangents, meta, keep_input_mutations)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 1048, in flat_fn_with_synthetic_bases_expanded
outs = forward_or_joint(fn, primals_before_cloning, primals, maybe_tangents, meta, keep_input_mutations)
File "/scratch/anijain/work/pytorch/torch/_functorch/aot_autograd.py", line 1017, in forward_or_joint
backward_out = torch.autograd.grad(
File "/scratch/anijain/work/pytorch/torch/autograd/__init__.py", line 307, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/scratch/anijain/work/pytorch/torch/utils/checkpoint.py", line 420, in unpack
_unused = function(*args, **kwargs)
File "/scratch/anijain/work/env/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/scratch/anijain/work/pytorch/torch/random.py", line 137, in fork_rng
torch.cuda.set_rng_state(gpu_rng_state, device)
File "/scratch/anijain/work/pytorch/torch/cuda/random.py", line 64, in set_rng_state
_lazy_call(cb)
File "/scratch/anijain/work/pytorch/torch/cuda/__init__.py", line 192, in _lazy_call
callable()
File "/scratch/anijain/work/pytorch/torch/cuda/random.py", line 62, in cb
default_generator.set_state(new_state_copy)
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
~~~
### Why am I using allow_in_graph? Why not do a graph break?
* In the case of checkpointing, Dynamo tracing the checkpointing code does not really add any benefit. If there is a graph break in the module that's getting checkpointed, Dynamo-generated subgraphs will not respect checkpointing. This is because the second subgraph will need to store the intermediate tensor for computing its backward graph. This storing of intermediate tensors breaks the eager checkpointing requirements. So, a module must not have graph breaks in order for the PT2 stack to compile through the checkpoint-generated code. Hence, Dynamo does not add a benefit here.
* Checkpointing uses saved_tensors_hooks, which internally is a pair of hooks - pack (called during fwd) and unpack (called during bwd). These hooks are called from C++, and are not visible to Dynamo. We can ignore these hooks in Dynamo (assuming that hooks are traceable etc) and let AOTAutograd trace through them. However, in the case of checkpointing, the hooks have an unusual nonlocal mutation, which further upsets AOT Autograd.
* Graph break means less performance opportunity. However, this is less of an issue if the checkpointed code is decently large.
### Show me smaller code to repro the problem
Alright, here it is. I have removed all the checkpointing related code. This one directly uses `aot_function` and avoid Dynamo stack.
~~~
import torch
from functorch.compile import aot_function, aot_module, draw_graph, nop, print_compile

def fn(x):
    state = torch.cuda.get_rng_state()
    out = torch.randn(4) + x
    torch.cuda.set_rng_state(state)
    return out

x = torch.randn(4)
opt_fn = aot_function(fn, nop)
z = opt_fn(x)
print(z)
~~~
So, the question is - how can we support `set_rng_state`? IIUC, these need to be in the AOT generated graph, so that we do preserve the rng state (in the actual checkpointed code). But, these rng state ops are not aten ops.
Looking for ideas.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
NA
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @soumith
| 1 |
3,296 | 96,111 |
Static size boolean masking
|
triaged, module: advanced indexing
|
### 🐛 Describe the bug
A long-standing request is https://github.com/pytorch/pytorch/issues/62320; the Executorch team has agreed to implement it.
Once this is implemented, we can get static size boolean masking to work too. The easiest way is to convert the boolean mask into an index tensor. You can use the meta implementation for indexing to do this:
```
@register_meta(aten.index.Tensor)
def meta_index_Tensor(self, indices):
    result: List[Optional[Tensor]] = []
    for i, index in enumerate(indices):
        if index is not None:
            check(
                index.dtype in [torch.long, torch.int, torch.int8, torch.bool],
                lambda: "tensors used as indices must be long, int, byte or bool tensors",
            )
            if index.dtype in [torch.int8, torch.bool]:
                nonzero = index.nonzero()
                k = len(result)
                check(
                    k + index.ndim <= self.ndim,
                    lambda: f"too many indices for tensor of dimension {self.ndim}",
                    IndexError,
                )
                for j in range(index.ndim):
                    check(
                        index.shape[j] == self.shape[k + j],
                        lambda: f"The shape of the mask {index.shape} at index {i} "
                        f"does not match the shape of the indexed tensor {self.shape} at index {k + j}",
                        IndexError,
                    )
                    result.append(nonzero.select(1, j))
            else:
                result.append(index)
        else:
            result.append(index)
    return result
```
The main annoyance is that if there are not enough elements to fill the nonzero, it will be zero padded. For a boolean mask, a zero pad is inappropriate; instead you want an invalid index, and then to fill the indexing op with some placeholder element like 0.
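As a hedged illustration of the mask-to-index conversion described above (the eager equivalent, not the static-shape/meta version):
```python
import torch

x = torch.arange(12.0).reshape(3, 4)
mask = x > 5
idx = mask.nonzero()  # [k, 2] tensor of coordinates of True entries

# Indexing with one index tensor per mask dimension reproduces x[mask]
same = x[idx.select(1, 0), idx.select(1, 1)]
assert torch.equal(same, x[mask])
```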
### Versions
master
| 6 |
3,297 | 96,110 |
torch.where behaves differently from in place replacement
|
needs reproduction, module: autograd, triaged, oncall: pt2
|
### 🐛 Describe the bug
I'm implementing an STFT layer that is not trainable, using 1d convolutions. One of the steps involves the normalization of a tensor. I'm finding that when I use the layer in a model, it only trains if I use in-place indexing.
I'm trying to divide one tensor by another in torch, but only when the values of the denominator exceed a certain threshold. This implementation works.
```python
wsq_ola = wsq_ola.to(wav).expand_as(wav).clone()
min_mask = wsq_ola.abs() < eps
wav[~min_mask] = wav[~min_mask] / wsq_ola[~min_mask]
```
I tried to implement the same thing with torch.where instead as follows:
```python
wsq_ola = wsq_ola.to(wav).expand_as(wav).clone()
min_mask = wsq_ola.abs() < eps
wav = torch.where(min_mask, wav, wav / wsq_ola)
```
Unfortunately, once I make this change, the model no longer learns. Is the gradient not propagated through torch.where?
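Gradients do flow through `torch.where`, but here is a minimal sketch of a likely culprit (an assumption about this code, not a confirmed diagnosis): both branches are evaluated eagerly, so the masked-out division by ~0 still produces `inf`, and its backward contribution (`0 * inf`) is `NaN`:
```python
import torch

wav = torch.tensor([1.0, 2.0], requires_grad=True)
wsq = torch.tensor([0.0, 4.0])  # first denominator is below eps

out = torch.where(wsq.abs() < 1e-8, wav, wav / wsq)
out.sum().backward()
print(wav.grad)  # tensor([nan, 0.2500]) -- the NaN silently stalls training
```
The in-place indexing version avoids this because the division is only ever computed on the unmasked elements.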
### Versions
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
Nvidia driver version: 470.161.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 4568.1641
CPU min MHz: 2200.0000
BogoMIPS: 7600.15
Virtualization: AMD-V
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] ema-pytorch==0.0.10
[pip3] mypy==0.971
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.8.0
[pip3] pytorch-ranger==0.1.1
[pip3] separate-torch==0.0.0
[pip3] torch==1.13.0
[pip3] torch-optimizer==0.3.0
[pip3] torch-summary==1.4.5
[pip3] torchaudio==0.13.0
[pip3] torchmetrics==0.10.1
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.7.0 hd8887f6_10 nvidia
[conda] ema-pytorch 0.0.10 pypi_0 pypi
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.23.5 py39h3d75532_0 conda-forge
[conda] pytorch 1.13.0 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_1 pytorch
[conda] pytorch-lightning 1.8.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-ranger 0.1.1 pyhd8ed1ab_0 conda-forge
[conda] separate-torch 0.0.0 pypi_0 pypi
[conda] torch-optimizer 0.3.0 pyhd8ed1ab_0 conda-forge
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 0.13.0 py39_cu117 pytorch
[conda] torchmetrics 0.10.1 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.14.0 py39_cu117 pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,298 | 96,098 |
Error during inference on iOS: INTERNAL ASSERT FAILED at jit_type_base.h:535
|
oncall: jit
|
### 🐛 Describe the bug
I have a model which is Faster-RCNN with a custom backbone. I've converted it for mobile using these steps:
```
torchscript_model = torch.jit.script(model)
torchscript_model_optimized = optimize_for_mobile(torchscript_model)
torchscript_model_optimized._save_for_lite_interpreter("model.ptl")
```
Both converted and original models run inference successfully from Python.
When I try running inference on iOS I get this error:
```
2023-03-06 00:57:31.578406-0500 acetrace[43512:3335359] Error during inference r INTERNAL ASSERT FAILED at "/Users/distiller/project/aten/src/ATen/core/jit_type_base.h":535, please report a bug to PyTorch.
Debug info for handle(s): debug_handles:{-1}, was not found.
Exception raised from expect at /Users/distiller/project/aten/src/ATen/core/jit_type_base.h:535 (most recent call first):
frame #0: _ZN3c106detail14torchCheckFailEPKcS2_jS2_ + 92 (0x102bccb28 in acetrace)
frame #1: _ZN3c103strIJEEEDcDpRKT_ + 0 (0x1023266e4 in acetrace)
frame #2: _ZN3c104Type6expectINS_9TupleTypeEEEDav + 92 (0x10228d878 in acetrace)
frame #3: _ZN5torch3jit12_GLOBAL__N_121dictConstructFromListERNSt3__16vectorIN3c106IValueENS2_9allocatorIS5_EEEE + 160 (0x102275848 in acetrace)
frame #4: _ZNSt3__110__function6__funcIZN5torch3jit6mobile20makeOperatorFunctionEN3c1012OperatorNameENS5_8optionalIiEEE3$_0NS_9allocatorIS9_EEFvRNS_6vectorINS5_6IValueENSA_ISD_EEEEEEclESG_ + 84 (0x10223daec in acetrace)
frame #5: _ZN5torch3jit6mobile16InterpreterState3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 360 (0x102248b80 in acetrace)
frame #6: _ZN5torch3jit6mobile8Function3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 116 (0x10223c7f8 in acetrace)
frame #7: _ZNK5torch3jit6mobile6Method3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 528 (0x10224cd54 in acetrace)
frame #8: _ZNK5torch3jit6mobile6MethodclENSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 24 (0x10224d924 in acetrace)
frame #9: _ZN5torch3jit6mobile6Module7forwardENSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 144 (0x10237a888 in acetrace)
frame #10: -[MobileUnet + + (0x10238ebac in acetrace)
frame #11: -[MobileUnet + + (0x10238f854 in acetrace)
frame #12: -[MobileUnet + + (0x10238fe14 in acetrace)
frame #13: $s8acetrace11GolfTrackerC5track_8onResult0E8Complete0E6CancelyAA9TrackFromV_ySayAA05FrameF0VGcys5Error_pSgcyyctFyycfU_yyXEfU_ + 2048 (0x1027839cc in acetrace)
frame #14: $s8acetrace11GolfTrackerC5track_8onResult0E8Complete0E6CancelyAA9TrackFromV_ySayAA05FrameF0VGcys5Error_pSgcyyctFyycfU_yyXEfU_TA + 56 (0x1027841cc in acetrace)
frame #15: $s10ObjectiveC15autoreleasepool8invokingxxyKXE_tKlF + 64 (0x1d5c4e308 in libswiftObjectiveC.dylib)
frame #16: $s8acetrace11GolfTrackerC5track_8onResult0E8Complete0E6CancelyAA9TrackFromV_ySayAA05FrameF0VGcys5Error_pSgcyyctFyycfU_ + 1156 (0x102782e9c in acetrace)
frame #17: $s8acetrace11GolfTrackerC5track_8onResult0E8Complete0E6CancelyAA9TrackFromV_ySayAA05FrameF0VGcys5Error_pSgcyyctFyycfU_TA + 88 (0x102783164 in acetrace)
frame #18: $sIeg_IeyB_TR + 48 (0x1023c291c in acetrace)
frame #19: _dispatch_call_block_and_release + 32 (0x10945853c in libdispatch.dylib)
frame #20: _dispatch_client_callout + 20 (0x109459ff0 in libdispatch.dylib)
frame #21: _dispatch_queue_override_invoke + 1052 (0x10945cb28 in libdispatch.dylib)
frame #22: _dispatch_root_queue_drain + 408 (0x10946e468 in libdispatch.dylib)
frame #23: _dispatch_worker_thread2 + 196 (0x10946ee64 in libdispatch.dylib)
frame #24: _pthread_wqthread + 228 (0x1fce1cdbc in libsystem_pthread.dylib)
frame #25: start_wqthread + 8 (0x1fce1cb98 in libsystem_pthread.dylib)
```
### Versions
```
PyTorch version: 1.14.0.dev20221110
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.25.0
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-13.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] numpydoc==1.2
[pip3] torch==1.14.0.dev20221110
[pip3] torchaudio==0.14.0.dev20221110
[pip3] torchfile==0.1.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.0.dev20221110
[conda] nomkl 3.0 0
[conda] numpy 1.23.4 pypi_0 pypi
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] torch 1.14.0.dev20221110 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20221110 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221110 pypi_0 pypi
```
Podfile.lock:
```
PODS:
- abseil/algorithm (1.20211102.0):
- abseil/algorithm/algorithm (= 1.20211102.0)
- abseil/algorithm/container (= 1.20211102.0)
- abseil/algorithm/algorithm (1.20211102.0):
- abseil/base/config
- abseil/algorithm/container (1.20211102.0):
- abseil/algorithm/algorithm
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/base (1.20211102.0):
- abseil/base/atomic_hook (= 1.20211102.0)
- abseil/base/base (= 1.20211102.0)
- abseil/base/base_internal (= 1.20211102.0)
- abseil/base/config (= 1.20211102.0)
- abseil/base/core_headers (= 1.20211102.0)
- abseil/base/dynamic_annotations (= 1.20211102.0)
- abseil/base/endian (= 1.20211102.0)
- abseil/base/errno_saver (= 1.20211102.0)
- abseil/base/fast_type_id (= 1.20211102.0)
- abseil/base/log_severity (= 1.20211102.0)
- abseil/base/malloc_internal (= 1.20211102.0)
- abseil/base/pretty_function (= 1.20211102.0)
- abseil/base/raw_logging_internal (= 1.20211102.0)
- abseil/base/spinlock_wait (= 1.20211102.0)
- abseil/base/strerror (= 1.20211102.0)
- abseil/base/throw_delegate (= 1.20211102.0)
- abseil/base/atomic_hook (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/base (1.20211102.0):
- abseil/base/atomic_hook
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/log_severity
- abseil/base/raw_logging_internal
- abseil/base/spinlock_wait
- abseil/meta/type_traits
- abseil/base/base_internal (1.20211102.0):
- abseil/base/config
- abseil/meta/type_traits
- abseil/base/config (1.20211102.0)
- abseil/base/core_headers (1.20211102.0):
- abseil/base/config
- abseil/base/dynamic_annotations (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/base/errno_saver (1.20211102.0):
- abseil/base/config
- abseil/base/fast_type_id (1.20211102.0):
- abseil/base/config
- abseil/base/log_severity (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/malloc_internal (1.20211102.0):
- abseil/base/base
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/raw_logging_internal
- abseil/base/pretty_function (1.20211102.0)
- abseil/base/raw_logging_internal (1.20211102.0):
- abseil/base/atomic_hook
- abseil/base/config
- abseil/base/core_headers
- abseil/base/log_severity
- abseil/base/spinlock_wait (1.20211102.0):
- abseil/base/base_internal
- abseil/base/core_headers
- abseil/base/errno_saver
- abseil/base/strerror (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/errno_saver
- abseil/base/throw_delegate (1.20211102.0):
- abseil/base/config
- abseil/base/raw_logging_internal
- abseil/container/common (1.20211102.0):
- abseil/meta/type_traits
- abseil/types/optional
- abseil/container/compressed_tuple (1.20211102.0):
- abseil/utility/utility
- abseil/container/container_memory (1.20211102.0):
- abseil/base/config
- abseil/memory/memory
- abseil/meta/type_traits
- abseil/utility/utility
- abseil/container/fixed_array (1.20211102.0):
- abseil/algorithm/algorithm
- abseil/base/config
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/throw_delegate
- abseil/container/compressed_tuple
- abseil/memory/memory
- abseil/container/flat_hash_map (1.20211102.0):
- abseil/algorithm/container
- abseil/container/container_memory
- abseil/container/hash_function_defaults
- abseil/container/raw_hash_map
- abseil/memory/memory
- abseil/container/hash_function_defaults (1.20211102.0):
- abseil/base/config
- abseil/hash/hash
- abseil/strings/cord
- abseil/strings/strings
- abseil/container/hash_policy_traits (1.20211102.0):
- abseil/meta/type_traits
- abseil/container/hashtable_debug_hooks (1.20211102.0):
- abseil/base/config
- abseil/container/hashtablez_sampler (1.20211102.0):
- abseil/base/base
- abseil/base/core_headers
- abseil/container/have_sse
- abseil/debugging/stacktrace
- abseil/memory/memory
- abseil/profiling/exponential_biased
- abseil/profiling/sample_recorder
- abseil/synchronization/synchronization
- abseil/utility/utility
- abseil/container/have_sse (1.20211102.0)
- abseil/container/inlined_vector (1.20211102.0):
- abseil/algorithm/algorithm
- abseil/base/core_headers
- abseil/base/throw_delegate
- abseil/container/inlined_vector_internal
- abseil/memory/memory
- abseil/container/inlined_vector_internal (1.20211102.0):
- abseil/base/core_headers
- abseil/container/compressed_tuple
- abseil/memory/memory
- abseil/meta/type_traits
- abseil/types/span
- abseil/container/layout (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/strings/strings
- abseil/types/span
- abseil/utility/utility
- abseil/container/raw_hash_map (1.20211102.0):
- abseil/base/throw_delegate
- abseil/container/container_memory
- abseil/container/raw_hash_set
- abseil/container/raw_hash_set (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/container/common
- abseil/container/compressed_tuple
- abseil/container/container_memory
- abseil/container/hash_policy_traits
- abseil/container/hashtable_debug_hooks
- abseil/container/hashtablez_sampler
- abseil/container/have_sse
- abseil/memory/memory
- abseil/meta/type_traits
- abseil/numeric/bits
- abseil/utility/utility
- abseil/debugging/debugging_internal (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/errno_saver
- abseil/base/raw_logging_internal
- abseil/debugging/demangle_internal (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/debugging/stacktrace (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/debugging/debugging_internal
- abseil/debugging/symbolize (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/malloc_internal
- abseil/base/raw_logging_internal
- abseil/debugging/debugging_internal
- abseil/debugging/demangle_internal
- abseil/strings/strings
- abseil/functional/bind_front (1.20211102.0):
- abseil/base/base_internal
- abseil/container/compressed_tuple
- abseil/meta/type_traits
- abseil/utility/utility
- abseil/functional/function_ref (1.20211102.0):
- abseil/base/base_internal
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/hash/city (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/hash/hash (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/container/fixed_array
- abseil/hash/city
- abseil/hash/low_level_hash
- abseil/meta/type_traits
- abseil/numeric/int128
- abseil/strings/strings
- abseil/types/optional
- abseil/types/variant
- abseil/utility/utility
- abseil/hash/low_level_hash (1.20211102.0):
- abseil/base/config
- abseil/base/endian
- abseil/numeric/bits
- abseil/numeric/int128
- abseil/memory (1.20211102.0):
- abseil/memory/memory (= 1.20211102.0)
- abseil/memory/memory (1.20211102.0):
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/meta (1.20211102.0):
- abseil/meta/type_traits (= 1.20211102.0)
- abseil/meta/type_traits (1.20211102.0):
- abseil/base/config
- abseil/numeric/bits (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/numeric/int128 (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/numeric/bits
- abseil/numeric/representation (1.20211102.0):
- abseil/base/config
- abseil/profiling/exponential_biased (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/profiling/sample_recorder (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/synchronization/synchronization
- abseil/time/time
- abseil/random/distributions (1.20211102.0):
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/numeric/bits
- abseil/random/internal/distribution_caller
- abseil/random/internal/fast_uniform_bits
- abseil/random/internal/fastmath
- abseil/random/internal/generate_real
- abseil/random/internal/iostream_state_saver
- abseil/random/internal/traits
- abseil/random/internal/uniform_helper
- abseil/random/internal/wide_multiply
- abseil/strings/strings
- abseil/random/internal/distribution_caller (1.20211102.0):
- abseil/base/config
- abseil/base/fast_type_id
- abseil/utility/utility
- abseil/random/internal/fast_uniform_bits (1.20211102.0):
- abseil/base/config
- abseil/meta/type_traits
- abseil/random/internal/fastmath (1.20211102.0):
- abseil/numeric/bits
- abseil/random/internal/generate_real (1.20211102.0):
- abseil/meta/type_traits
- abseil/numeric/bits
- abseil/random/internal/fastmath
- abseil/random/internal/traits
- abseil/random/internal/iostream_state_saver (1.20211102.0):
- abseil/meta/type_traits
- abseil/numeric/int128
- abseil/random/internal/nonsecure_base (1.20211102.0):
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/random/internal/pool_urbg
- abseil/random/internal/salted_seed_seq
- abseil/random/internal/seed_material
- abseil/types/optional
- abseil/types/span
- abseil/random/internal/pcg_engine (1.20211102.0):
- abseil/base/config
- abseil/meta/type_traits
- abseil/numeric/bits
- abseil/numeric/int128
- abseil/random/internal/fastmath
- abseil/random/internal/iostream_state_saver
- abseil/random/internal/platform (1.20211102.0):
- abseil/base/config
- abseil/random/internal/pool_urbg (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/base/raw_logging_internal
- abseil/random/internal/randen
- abseil/random/internal/seed_material
- abseil/random/internal/traits
- abseil/random/seed_gen_exception
- abseil/types/span
- abseil/random/internal/randen (1.20211102.0):
- abseil/base/raw_logging_internal
- abseil/random/internal/platform
- abseil/random/internal/randen_hwaes
- abseil/random/internal/randen_slow
- abseil/random/internal/randen_engine (1.20211102.0):
- abseil/base/endian
- abseil/meta/type_traits
- abseil/random/internal/iostream_state_saver
- abseil/random/internal/randen
- abseil/random/internal/randen_hwaes (1.20211102.0):
- abseil/base/config
- abseil/random/internal/platform
- abseil/random/internal/randen_hwaes_impl
- abseil/random/internal/randen_hwaes_impl (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/numeric/int128
- abseil/random/internal/platform
- abseil/random/internal/randen_slow (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/numeric/int128
- abseil/random/internal/platform
- abseil/random/internal/salted_seed_seq (1.20211102.0):
- abseil/container/inlined_vector
- abseil/meta/type_traits
- abseil/random/internal/seed_material
- abseil/types/optional
- abseil/types/span
- abseil/random/internal/seed_material (1.20211102.0):
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/raw_logging_internal
- abseil/random/internal/fast_uniform_bits
- abseil/strings/strings
- abseil/types/optional
- abseil/types/span
- abseil/random/internal/traits (1.20211102.0):
- abseil/base/config
- abseil/random/internal/uniform_helper (1.20211102.0):
- abseil/base/config
- abseil/meta/type_traits
- abseil/random/internal/traits
- abseil/random/internal/wide_multiply (1.20211102.0):
- abseil/base/config
- abseil/numeric/bits
- abseil/numeric/int128
- abseil/random/internal/traits
- abseil/random/random (1.20211102.0):
- abseil/random/distributions
- abseil/random/internal/nonsecure_base
- abseil/random/internal/pcg_engine
- abseil/random/internal/pool_urbg
- abseil/random/internal/randen_engine
- abseil/random/seed_sequences
- abseil/random/seed_gen_exception (1.20211102.0):
- abseil/base/config
- abseil/random/seed_sequences (1.20211102.0):
- abseil/container/inlined_vector
- abseil/random/internal/nonsecure_base
- abseil/random/internal/pool_urbg
- abseil/random/internal/salted_seed_seq
- abseil/random/internal/seed_material
- abseil/random/seed_gen_exception
- abseil/types/span
- abseil/status/status (1.20211102.0):
- abseil/base/atomic_hook
- abseil/base/config
- abseil/base/core_headers
- abseil/base/raw_logging_internal
- abseil/container/inlined_vector
- abseil/debugging/stacktrace
- abseil/debugging/symbolize
- abseil/functional/function_ref
- abseil/strings/cord
- abseil/strings/str_format
- abseil/strings/strings
- abseil/types/optional
- abseil/status/statusor (1.20211102.0):
- abseil/base/base
- abseil/base/core_headers
- abseil/base/raw_logging_internal
- abseil/meta/type_traits
- abseil/status/status
- abseil/strings/strings
- abseil/types/variant
- abseil/utility/utility
- abseil/strings/cord (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/base/raw_logging_internal
- abseil/container/fixed_array
- abseil/container/inlined_vector
- abseil/functional/function_ref
- abseil/meta/type_traits
- abseil/strings/cord_internal
- abseil/strings/cordz_functions
- abseil/strings/cordz_info
- abseil/strings/cordz_statistics
- abseil/strings/cordz_update_scope
- abseil/strings/cordz_update_tracker
- abseil/strings/internal
- abseil/strings/str_format
- abseil/strings/strings
- abseil/types/optional
- abseil/strings/cord_internal (1.20211102.0):
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/base/raw_logging_internal
- abseil/base/throw_delegate
- abseil/container/compressed_tuple
- abseil/container/inlined_vector
- abseil/container/layout
- abseil/functional/function_ref
- abseil/meta/type_traits
- abseil/strings/strings
- abseil/types/span
- abseil/strings/cordz_functions (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/raw_logging_internal
- abseil/profiling/exponential_biased
- abseil/strings/cordz_handle (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/raw_logging_internal
- abseil/synchronization/synchronization
- abseil/strings/cordz_info (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/base/raw_logging_internal
- abseil/container/inlined_vector
- abseil/debugging/stacktrace
- abseil/strings/cord_internal
- abseil/strings/cordz_functions
- abseil/strings/cordz_handle
- abseil/strings/cordz_statistics
- abseil/strings/cordz_update_tracker
- abseil/synchronization/synchronization
- abseil/types/span
- abseil/strings/cordz_statistics (1.20211102.0):
- abseil/base/config
- abseil/strings/cordz_update_tracker
- abseil/strings/cordz_update_scope (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/strings/cord_internal
- abseil/strings/cordz_info
- abseil/strings/cordz_update_tracker
- abseil/strings/cordz_update_tracker (1.20211102.0):
- abseil/base/config
- abseil/strings/internal (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/base/raw_logging_internal
- abseil/meta/type_traits
- abseil/strings/str_format (1.20211102.0):
- abseil/strings/str_format_internal
- abseil/strings/str_format_internal (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/functional/function_ref
- abseil/meta/type_traits
- abseil/numeric/bits
- abseil/numeric/int128
- abseil/numeric/representation
- abseil/strings/strings
- abseil/types/optional
- abseil/types/span
- abseil/strings/strings (1.20211102.0):
- abseil/base/base
- abseil/base/config
- abseil/base/core_headers
- abseil/base/endian
- abseil/base/raw_logging_internal
- abseil/base/throw_delegate
- abseil/memory/memory
- abseil/meta/type_traits
- abseil/numeric/bits
- abseil/numeric/int128
- abseil/strings/internal
- abseil/synchronization/graphcycles_internal (1.20211102.0):
- abseil/base/base
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/base/malloc_internal
- abseil/base/raw_logging_internal
- abseil/synchronization/kernel_timeout_internal (1.20211102.0):
- abseil/base/core_headers
- abseil/base/raw_logging_internal
- abseil/time/time
- abseil/synchronization/synchronization (1.20211102.0):
- abseil/base/atomic_hook
- abseil/base/base
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/base/dynamic_annotations
- abseil/base/malloc_internal
- abseil/base/raw_logging_internal
- abseil/debugging/stacktrace
- abseil/debugging/symbolize
- abseil/synchronization/graphcycles_internal
- abseil/synchronization/kernel_timeout_internal
- abseil/time/time
- abseil/time (1.20211102.0):
- abseil/time/internal (= 1.20211102.0)
- abseil/time/time (= 1.20211102.0)
- abseil/time/internal (1.20211102.0):
- abseil/time/internal/cctz (= 1.20211102.0)
- abseil/time/internal/cctz (1.20211102.0):
- abseil/time/internal/cctz/civil_time (= 1.20211102.0)
- abseil/time/internal/cctz/time_zone (= 1.20211102.0)
- abseil/time/internal/cctz/civil_time (1.20211102.0):
- abseil/base/config
- abseil/time/internal/cctz/time_zone (1.20211102.0):
- abseil/base/config
- abseil/time/internal/cctz/civil_time
- abseil/time/time (1.20211102.0):
- abseil/base/base
- abseil/base/core_headers
- abseil/base/raw_logging_internal
- abseil/numeric/int128
- abseil/strings/strings
- abseil/time/internal/cctz/civil_time
- abseil/time/internal/cctz/time_zone
- abseil/types (1.20211102.0):
- abseil/types/any (= 1.20211102.0)
- abseil/types/bad_any_cast (= 1.20211102.0)
- abseil/types/bad_any_cast_impl (= 1.20211102.0)
- abseil/types/bad_optional_access (= 1.20211102.0)
- abseil/types/bad_variant_access (= 1.20211102.0)
- abseil/types/compare (= 1.20211102.0)
- abseil/types/optional (= 1.20211102.0)
- abseil/types/span (= 1.20211102.0)
- abseil/types/variant (= 1.20211102.0)
- abseil/types/any (1.20211102.0):
- abseil/base/config
- abseil/base/core_headers
- abseil/base/fast_type_id
- abseil/meta/type_traits
- abseil/types/bad_any_cast
- abseil/utility/utility
- abseil/types/bad_any_cast (1.20211102.0):
- abseil/base/config
- abseil/types/bad_any_cast_impl
- abseil/types/bad_any_cast_impl (1.20211102.0):
- abseil/base/config
- abseil/base/raw_logging_internal
- abseil/types/bad_optional_access (1.20211102.0):
- abseil/base/config
- abseil/base/raw_logging_internal
- abseil/types/bad_variant_access (1.20211102.0):
- abseil/base/config
- abseil/base/raw_logging_internal
- abseil/types/compare (1.20211102.0):
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/types/optional (1.20211102.0):
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/memory/memory
- abseil/meta/type_traits
- abseil/types/bad_optional_access
- abseil/utility/utility
- abseil/types/span (1.20211102.0):
- abseil/algorithm/algorithm
- abseil/base/core_headers
- abseil/base/throw_delegate
- abseil/meta/type_traits
- abseil/types/variant (1.20211102.0):
- abseil/base/base_internal
- abseil/base/config
- abseil/base/core_headers
- abseil/meta/type_traits
- abseil/types/bad_variant_access
- abseil/utility/utility
- abseil/utility/utility (1.20211102.0):
- abseil/base/base_internal
- abseil/base/config
- abseil/meta/type_traits
- BoringSSL-GRPC (0.0.24):
- BoringSSL-GRPC/Implementation (= 0.0.24)
- BoringSSL-GRPC/Interface (= 0.0.24)
- BoringSSL-GRPC/Implementation (0.0.24):
- BoringSSL-GRPC/Interface (= 0.0.24)
- BoringSSL-GRPC/Interface (0.0.24)
- cloud_firestore (4.1.0):
- Firebase/Firestore (= 10.3.0)
- firebase_core
- Flutter
- nanopb (< 2.30910.0, >= 2.30908.0)
- cloud_functions (4.0.5):
- Firebase/Functions (= 10.3.0)
- firebase_core
- Flutter
- device_info_plus (0.0.1):
- Flutter
- FBAEMKit (15.1.0):
- FBSDKCoreKit_Basics (= 15.1.0)
- FBSDKCoreKit (15.1.0):
- FBAEMKit (= 15.1.0)
- FBSDKCoreKit_Basics (= 15.1.0)
- FBSDKCoreKit_Basics (15.1.0)
- FBSDKLoginKit (15.1.0):
- FBSDKCoreKit (= 15.1.0)
- Firebase/Analytics (10.3.0):
- Firebase/Core
- Firebase/Auth (10.3.0):
- Firebase/CoreOnly
- FirebaseAuth (~> 10.3.0)
- Firebase/Core (10.3.0):
- Firebase/CoreOnly
- FirebaseAnalytics (~> 10.3.0)
- Firebase/CoreOnly (10.3.0):
- FirebaseCore (= 10.3.0)
- Firebase/Crashlytics (10.3.0):
- Firebase/CoreOnly
- FirebaseCrashlytics (~> 10.3.0)
- Firebase/Database (10.3.0):
- Firebase/CoreOnly
- FirebaseDatabase (~> 10.3.0)
- Firebase/Firestore (10.3.0):
- Firebase/CoreOnly
- FirebaseFirestore (~> 10.3.0)
- Firebase/Functions (10.3.0):
- Firebase/CoreOnly
- FirebaseFunctions (~> 10.3.0)
- Firebase/Installations (10.3.0):
- Firebase/CoreOnly
- FirebaseInstallations (~> 10.3.0)
- Firebase/Messaging (10.3.0):
- Firebase/CoreOnly
- FirebaseMessaging (~> 10.3.0)
- Firebase/RemoteConfig (10.3.0):
- Firebase/CoreOnly
- FirebaseRemoteConfig (~> 10.3.0)
- Firebase/Storage (10.3.0):
- Firebase/CoreOnly
- FirebaseStorage (~> 10.3.0)
- firebase_analytics (10.1.0):
- Firebase/Analytics (= 10.3.0)
- firebase_core
- Flutter
- firebase_app_check (0.1.1-8):
- Firebase/CoreOnly (~> 10.3.0)
- firebase_core
- FirebaseAppCheck (~> 10.3.0-beta)
- Flutter
- firebase_app_installations (0.2.1-8):
- Firebase/Installations (= 10.3.0)
- firebase_core
- Flutter
- firebase_auth (4.1.3):
- Firebase/Auth (= 10.3.0)
- firebase_core
- Flutter
- firebase_core (2.4.1):
- Firebase/CoreOnly (= 10.3.0)
- Flutter
- firebase_crashlytics (3.0.8):
- Firebase/Crashlytics (= 10.3.0)
- firebase_core
- Flutter
- firebase_database (10.0.9):
- Firebase/Database (= 10.3.0)
- firebase_core
- Flutter
- firebase_messaging (14.2.1):
- Firebase/Messaging (= 10.3.0)
- firebase_core
- Flutter
- firebase_remote_config (3.0.9):
- Firebase/RemoteConfig (= 10.3.0)
- firebase_core
- Flutter
- firebase_storage (11.0.6):
- Firebase/Storage (= 10.3.0)
- firebase_core
- Flutter
- FirebaseABTesting (10.3.0):
- FirebaseCore (~> 10.0)
- FirebaseAnalytics (10.3.0):
- FirebaseAnalytics/AdIdSupport (= 10.3.0)
- FirebaseCore (~> 10.0)
- FirebaseInstallations (~> 10.0)
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/MethodSwizzler (~> 7.8)
- GoogleUtilities/Network (~> 7.8)
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- nanopb (< 2.30910.0, >= 2.30908.0)
- FirebaseAnalytics/AdIdSupport (10.3.0):
- FirebaseCore (~> 10.0)
- FirebaseInstallations (~> 10.0)
- GoogleAppMeasurement (= 10.3.0)
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/MethodSwizzler (~> 7.8)
- GoogleUtilities/Network (~> 7.8)
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- nanopb (< 2.30910.0, >= 2.30908.0)
- FirebaseAppCheck (10.3.0):
- FirebaseCore (~> 10.0)
- GoogleUtilities/Environment (~> 7.8)
- PromisesObjC (~> 2.1)
- FirebaseAppCheckInterop (10.3.0)
- FirebaseAuth (10.3.0):
- FirebaseCore (~> 10.0)
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/Environment (~> 7.8)
- GTMSessionFetcher/Core (< 4.0, >= 2.1)
- FirebaseAuthInterop (10.3.0)
- FirebaseCore (10.3.0):
- FirebaseCoreInternal (~> 10.0)
- GoogleUtilities/Environment (~> 7.8)
- GoogleUtilities/Logger (~> 7.8)
- FirebaseCoreExtension (10.3.0):
- FirebaseCore (~> 10.0)
- FirebaseCoreInternal (10.3.0):
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- FirebaseCrashlytics (10.3.0):
- FirebaseCore (~> 10.0)
- FirebaseInstallations (~> 10.0)
- GoogleDataTransport (~> 9.2)
- GoogleUtilities/Environment (~> 7.8)
- nanopb (< 2.30910.0, >= 2.30908.0)
- PromisesObjC (~> 2.1)
- FirebaseDatabase (10.3.0):
- FirebaseCore (~> 10.0)
- leveldb-library (~> 1.22)
- FirebaseFirestore (10.3.0):
- abseil/algorithm (~> 1.20211102.0)
- abseil/base (~> 1.20211102.0)
- abseil/container/flat_hash_map (~> 1.20211102.0)
- abseil/memory (~> 1.20211102.0)
- abseil/meta (~> 1.20211102.0)
- abseil/strings/strings (~> 1.20211102.0)
- abseil/time (~> 1.20211102.0)
- abseil/types (~> 1.20211102.0)
- FirebaseCore (~> 10.0)
- "gRPC-C++ (~> 1.44.0)"
- leveldb-library (~> 1.22)
- nanopb (< 2.30910.0, >= 2.30908.0)
- FirebaseFunctions (10.3.0):
- FirebaseAppCheckInterop (~> 10.0)
- FirebaseAuthInterop (~> 10.0)
- FirebaseCore (~> 10.0)
- FirebaseCoreExtension (~> 10.0)
- FirebaseMessagingInterop (~> 10.0)
- FirebaseSharedSwift (~> 10.0)
- GTMSessionFetcher/Core (< 4.0, >= 2.1)
- FirebaseInstallations (10.3.0):
- FirebaseCore (~> 10.0)
- GoogleUtilities/Environment (~> 7.8)
- GoogleUtilities/UserDefaults (~> 7.8)
- PromisesObjC (~> 2.1)
- FirebaseMessaging (10.3.0):
- FirebaseCore (~> 10.0)
- FirebaseInstallations (~> 10.0)
- GoogleDataTransport (~> 9.2)
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/Environment (~> 7.8)
- GoogleUtilities/Reachability (~> 7.8)
- GoogleUtilities/UserDefaults (~> 7.8)
- nanopb (< 2.30910.0, >= 2.30908.0)
- FirebaseMessagingInterop (10.3.0)
- FirebaseRemoteConfig (10.3.0):
- FirebaseABTesting (~> 10.0)
- FirebaseCore (~> 10.0)
- FirebaseInstallations (~> 10.0)
- GoogleUtilities/Environment (~> 7.8)
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- FirebaseSharedSwift (10.3.0)
- FirebaseStorage (10.3.0):
- FirebaseAppCheckInterop (~> 10.0)
- FirebaseAuthInterop (~> 10.0)
- FirebaseCore (~> 10.0)
- FirebaseCoreExtension (~> 10.0)
- GTMSessionFetcher/Core (< 4.0, >= 2.1)
- Flutter (1.0.0)
- flutter_app_badger (1.3.0):
- Flutter
- flutter_facebook_auth (5.0.4):
- FBSDKLoginKit (~> 15.1.0)
- Flutter
- flutter_secure_storage (6.0.0):
- Flutter
- FlutterPluginRegistrant (0.0.1):
- cloud_firestore
- cloud_functions
- device_info_plus
- firebase_analytics
- firebase_app_check
- firebase_app_installations
- firebase_auth
- firebase_core
- firebase_crashlytics
- firebase_database
- firebase_messaging
- firebase_remote_config
- firebase_storage
- Flutter
- flutter_app_badger
- flutter_facebook_auth
- flutter_secure_storage
- image_gallery_saver
- image_picker_ios
- mixpanel_flutter
- path_provider_ios
- shared_preferences_ios
- sqflite
- the_apple_sign_in
- url_launcher_ios
- video_player_avfoundation
- video_thumbnail
- FMDB (2.7.5):
- FMDB/standard (= 2.7.5)
- FMDB/standard (2.7.5)
- GoogleAppMeasurement (10.3.0):
- GoogleAppMeasurement/AdIdSupport (= 10.3.0)
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/MethodSwizzler (~> 7.8)
- GoogleUtilities/Network (~> 7.8)
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- nanopb (< 2.30910.0, >= 2.30908.0)
- GoogleAppMeasurement/AdIdSupport (10.3.0):
- GoogleAppMeasurement/WithoutAdIdSupport (= 10.3.0)
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/MethodSwizzler (~> 7.8)
- GoogleUtilities/Network (~> 7.8)
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- nanopb (< 2.30910.0, >= 2.30908.0)
- GoogleAppMeasurement/WithoutAdIdSupport (10.3.0):
- GoogleUtilities/AppDelegateSwizzler (~> 7.8)
- GoogleUtilities/MethodSwizzler (~> 7.8)
- GoogleUtilities/Network (~> 7.8)
- "GoogleUtilities/NSData+zlib (~> 7.8)"
- nanopb (< 2.30910.0, >= 2.30908.0)
- GoogleDataTransport (9.2.0):
- GoogleUtilities/Environment (~> 7.7)
- nanopb (< 2.30910.0, >= 2.30908.0)
- PromisesObjC (< 3.0, >= 1.2)
- GoogleUtilities (7.10.0):
- GoogleUtilities/AppDelegateSwizzler (= 7.10.0)
- GoogleUtilities/Environment (= 7.10.0)
- GoogleUtilities/ISASwizzler (= 7.10.0)
- GoogleUtilities/Logger (= 7.10.0)
- GoogleUtilities/MethodSwizzler (= 7.10.0)
- GoogleUtilities/Network (= 7.10.0)
- "GoogleUtilities/NSData+zlib (= 7.10.0)"
- GoogleUtilities/Reachability (= 7.10.0)
- GoogleUtilities/SwizzlerTestHelpers (= 7.10.0)
- GoogleUtilities/UserDefaults (= 7.10.0)
- GoogleUtilities/AppDelegateSwizzler (7.10.0):
- GoogleUtilities/Environment
- GoogleUtilities/Logger
- GoogleUtilities/Network
- GoogleUtilities/Environment (7.10.0):
- PromisesObjC (< 3.0, >= 1.2)
- GoogleUtilities/ISASwizzler (7.10.0)
- GoogleUtilities/Logger (7.10.0):
- GoogleUtilities/Environment
- GoogleUtilities/MethodSwizzler (7.10.0):
- GoogleUtilities/Logger
- GoogleUtilities/Network (7.10.0):
- GoogleUtilities/Logger
- "GoogleUtilities/NSData+zlib"
- GoogleUtilities/Reachability
- "GoogleUtilities/NSData+zlib (7.10.0)"
- GoogleUtilities/Reachability (7.10.0):
- GoogleUtilities/Logger
- GoogleUtilities/SwizzlerTestHelpers (7.10.0):
- GoogleUtilities/MethodSwizzler
- GoogleUtilities/UserDefaults (7.10.0):
- GoogleUtilities/Logger
- "gRPC-C++ (1.44.0)":
- "gRPC-C++/Implementation (= 1.44.0)"
- "gRPC-C++/Interface (= 1.44.0)"
- "gRPC-C++/Implementation (1.44.0)":
- abseil/base/base (= 1.20211102.0)
- abseil/base/core_headers (= 1.20211102.0)
- abseil/container/flat_hash_map (= 1.20211102.0)
- abseil/container/inlined_vector (= 1.20211102.0)
- abseil/functional/bind_front (= 1.20211102.0)
- abseil/hash/hash (= 1.20211102.0)
- abseil/memory/memory (= 1.20211102.0)
- abseil/random/random (= 1.20211102.0)
- abseil/status/status (= 1.20211102.0)
- abseil/status/statusor (= 1.20211102.0)
- abseil/strings/cord (= 1.20211102.0)
- abseil/strings/str_format (= 1.20211102.0)
- abseil/strings/strings (= 1.20211102.0)
- abseil/synchronization/synchronization (= 1.20211102.0)
- abseil/time/time (= 1.20211102.0)
- abseil/types/optional (= 1.20211102.0)
- abseil/types/variant (= 1.20211102.0)
- abseil/utility/utility (= 1.20211102.0)
- "gRPC-C++/Interface (= 1.44.0)"
- gRPC-Core (= 1.44.0)
- "gRPC-C++/Interface (1.44.0)"
- gRPC-Core (1.44.0):
- gRPC-Core/Implementation (= 1.44.0)
- gRPC-Core/Interface (= 1.44.0)
- gRPC-Core/Implementation (1.44.0):
- abseil/base/base (= 1.20211102.0)
- abseil/base/core_headers (= 1.20211102.0)
- abseil/container/flat_hash_map (= 1.20211102.0)
- abseil/container/inlined_vector (= 1.20211102.0)
- abseil/functional/bind_front (= 1.20211102.0)
- abseil/hash/hash (= 1.20211102.0)
- abseil/memory/memory (= 1.20211102.0)
- abseil/random/random (= 1.20211102.0)
- abseil/status/status (= 1.20211102.0)
- abseil/status/statusor (= 1.20211102.0)
- abseil/strings/cord (= 1.20211102.0)
- abseil/strings/str_format (= 1.20211102.0)
- abseil/strings/strings (= 1.20211102.0)
- abseil/synchronization/synchronization (= 1.20211102.0)
- abseil/time/time (= 1.20211102.0)
- abseil/types/optional (= 1.20211102.0)
- abseil/types/variant (= 1.20211102.0)
- abseil/utility/utility (= 1.20211102.0)
- BoringSSL-GRPC (= 0.0.24)
- gRPC-Core/Interface (= 1.44.0)
- Libuv-gRPC (= 0.0.10)
- gRPC-Core/Interface (1.44.0)
- GTMSessionFetcher/Core (3.0.0)
- image_gallery_saver (1.5.0):
- Flutter
- image_picker_ios (0.0.1):
- Flutter
- leveldb-library (1.22.1)
- LibTorch-Lite (1.13.0.1):
- LibTorch-Lite/Core (= 1.13.0.1)
- LibTorch-Lite/Core (1.13.0.1):
- LibTorch-Lite/Torch
- LibTorch-Lite/Torch (1.13.0.1)
- LibTorchvision (0.14.0)
- Libuv-gRPC (0.0.10):
- Libuv-gRPC/Implementation (= 0.0.10)
- Libuv-gRPC/Interface (= 0.0.10)
- Libuv-gRPC/Implementation (0.0.10):
- Libuv-gRPC/Interface (= 0.0.10)
- Libuv-gRPC/Interface (0.0.10)
- libwebp (1.2.4):
- libwebp/demux (= 1.2.4)
- libwebp/mux (= 1.2.4)
- libwebp/webp (= 1.2.4)
- libwebp/demux (1.2.4):
- libwebp/webp
- libwebp/mux (1.2.4):
- libwebp/demux
- libwebp/webp (1.2.4)
- Mixpanel-swift (4.0.1):
- Mixpanel-swift/Complete (= 4.0.1)
- Mixpanel-swift/Complete (4.0.1)
- mixpanel_flutter (2.0.0):
- Flutter
- Mixpanel-swift (= 4.0.1)
- nanopb (2.30909.0):
- nanopb/decode (= 2.30909.0)
- nanopb/encode (= 2.30909.0)
- nanopb/decode (2.30909.0)
- nanopb/encode (2.30909.0)
- path_provider_ios (0.0.1):
- Flutter
- PromisesObjC (2.1.1)
- shared_preferences_ios (0.0.1):
- Flutter
- sqflite (0.0.2):
- Flutter
- FMDB (>= 2.7.5)
- the_apple_sign_in (1.0.0):
- Flutter
- url_launcher_ios (0.0.1):
- Flutter
- video_player_avfoundation (0.0.1):
- Flutter
- video_thumbnail (0.0.1):
- Flutter
- libwebp
DEPENDENCIES:
- cloud_firestore (from `feed/.ios/.symlinks/plugins/cloud_firestore/ios`)
- cloud_functions (from `feed/.ios/.symlinks/plugins/cloud_functions/ios`)
- device_info_plus (from `feed/.ios/.symlinks/plugins/device_info_plus/ios`)
- Firebase/Messaging
- firebase_analytics (from `feed/.ios/.symlinks/plugins/firebase_analytics/ios`)
- firebase_app_check (from `feed/.ios/.symlinks/plugins/firebase_app_check/ios`)
- firebase_app_installations (from `feed/.ios/.symlinks/plugins/firebase_app_installations/ios`)
- firebase_auth (from `feed/.ios/.symlinks/plugins/firebase_auth/ios`)
- firebase_core (from `feed/.ios/.symlinks/plugins/firebase_core/ios`)
- firebase_crashlytics (from `feed/.ios/.symlinks/plugins/firebase_crashlytics/ios`)
- firebase_database (from `feed/.ios/.symlinks/plugins/firebase_database/ios`)
- firebase_messaging (from `feed/.ios/.symlinks/plugins/firebase_messaging/ios`)
- firebase_remote_config (from `feed/.ios/.symlinks/plugins/firebase_remote_config/ios`)
- firebase_storage (from `feed/.ios/.symlinks/plugins/firebase_storage/ios`)
- Flutter (from `feed/.ios/Flutter`)
- flutter_app_badger (from `feed/.ios/.symlinks/plugins/flutter_app_badger/ios`)
- flutter_facebook_auth (from `feed/.ios/.symlinks/plugins/flutter_facebook_auth/ios`)
- flutter_secure_storage (from `feed/.ios/.symlinks/plugins/flutter_secure_storage/ios`)
- FlutterPluginRegistrant (from `feed/.ios/Flutter/FlutterPluginRegistrant`)
- GoogleUtilities
- image_gallery_saver (from `feed/.ios/.symlinks/plugins/image_gallery_saver/ios`)
- image_picker_ios (from `feed/.ios/.symlinks/plugins/image_picker_ios/ios`)
- LibTorch-Lite
- LibTorchvision
- mixpanel_flutter (from `feed/.ios/.symlinks/plugins/mixpanel_flutter/ios`)
- path_provider_ios (from `feed/.ios/.symlinks/plugins/path_provider_ios/ios`)
- shared_preferences_ios (from `feed/.ios/.symlinks/plugins/shared_preferences_ios/ios`)
- sqflite (from `feed/.ios/.symlinks/plugins/sqflite/ios`)
- the_apple_sign_in (from `feed/.ios/.symlinks/plugins/the_apple_sign_in/ios`)
- url_launcher_ios (from `feed/.ios/.symlinks/plugins/url_launcher_ios/ios`)
- video_player_avfoundation (from `feed/.ios/.symlinks/plugins/video_player_avfoundation/ios`)
- video_thumbnail (from `feed/.ios/.symlinks/plugins/video_thumbnail/ios`)
SPEC REPOS:
trunk:
- abseil
- BoringSSL-GRPC
- FBAEMKit
- FBSDKCoreKit
- FBSDKCoreKit_Basics
- FBSDKLoginKit
- Firebase
- FirebaseABTesting
- FirebaseAnalytics
- FirebaseAppCheck
- FirebaseAppCheckInterop
- FirebaseAuth
- FirebaseAuthInterop
- FirebaseCore
- FirebaseCoreExtension
- FirebaseCoreInternal
- FirebaseCrashlytics
- FirebaseDatabase
- FirebaseFirestore
- FirebaseFunctions
- FirebaseInstallations
- FirebaseMessaging
- FirebaseMessagingInterop
- FirebaseRemoteConfig
- FirebaseSharedSwift
- FirebaseStorage
- FMDB
- GoogleAppMeasurement
- GoogleDataTransport
- GoogleUtilities
- "gRPC-C++"
- gRPC-Core
- GTMSessionFetcher
- leveldb-library
- LibTorch-Lite
- LibTorchvision
- Libuv-gRPC
- libwebp
- Mixpanel-swift
- nanopb
- PromisesObjC
EXTERNAL SOURCES:
cloud_firestore:
:path: feed/.ios/.symlinks/plugins/cloud_firestore/ios
cloud_functions:
:path: feed/.ios/.symlinks/plugins/cloud_functions/ios
device_info_plus:
:path: feed/.ios/.symlinks/plugins/device_info_plus/ios
firebase_analytics:
:path: feed/.ios/.symlinks/plugins/firebase_analytics/ios
firebase_app_check:
:path: feed/.ios/.symlinks/plugins/firebase_app_check/ios
firebase_app_installations:
:path: feed/.ios/.symlinks/plugins/firebase_app_installations/ios
firebase_auth:
:path: feed/.ios/.symlinks/plugins/firebase_auth/ios
firebase_core:
:path: feed/.ios/.symlinks/plugins/firebase_core/ios
firebase_crashlytics:
:path: feed/.ios/.symlinks/plugins/firebase_crashlytics/ios
firebase_database:
:path: feed/.ios/.symlinks/plugins/firebase_database/ios
firebase_messaging:
:path: feed/.ios/.symlinks/plugins/firebase_messaging/ios
firebase_remote_config:
:path: feed/.ios/.symlinks/plugins/firebase_remote_config/ios
firebase_storage:
:path: feed/.ios/.symlinks/plugins/firebase_storage/ios
Flutter:
:path: feed/.ios/Flutter
flutter_app_badger:
:path: feed/.ios/.symlinks/plugins/flutter_app_badger/ios
flutter_facebook_auth:
:path: feed/.ios/.symlinks/plugins/flutter_facebook_auth/ios
flutter_secure_storage:
:path: feed/.ios/.symlinks/plugins/flutter_secure_storage/ios
FlutterPluginRegistrant:
:path: feed/.ios/Flutter/FlutterPluginRegistrant
image_gallery_saver:
:path: feed/.ios/.symlinks/plugins/image_gallery_saver/ios
image_picker_ios:
:path: feed/.ios/.symlinks/plugins/image_picker_ios/ios
mixpanel_flutter:
:path: feed/.ios/.symlinks/plugins/mixpanel_flutter/ios
path_provider_ios:
:path: feed/.ios/.symlinks/plugins/path_provider_ios/ios
shared_preferences_ios:
:path: feed/.ios/.symlinks/plugins/shared_preferences_ios/ios
sqflite:
:path: feed/.ios/.symlinks/plugins/sqflite/ios
the_apple_sign_in:
:path: feed/.ios/.symlinks/plugins/the_apple_sign_in/ios
url_launcher_ios:
:path: feed/.ios/.symlinks/plugins/url_launcher_ios/ios
video_player_avfoundation:
:path: feed/.ios/.symlinks/plugins/video_player_avfoundation/ios
video_thumbnail:
:path: feed/.ios/.symlinks/plugins/video_thumbnail/ios
SPEC CHECKSUMS:
abseil: ebe5b5529fb05d93a8bdb7951607be08b7fa71bc
BoringSSL-GRPC: 3175b25143e648463a56daeaaa499c6cb86dad33
cloud_firestore: 5109ab08f92a38a9aad45f3e3b4473910acabf13
cloud_functions: a4c7a98e813c562a809c637783a57538c42585ae
device_info_plus: e5c5da33f982a436e103237c0c85f9031142abed
FBAEMKit: c7f82b5145d446bcbbcd50485c032689032fc6a2
FBSDKCoreKit: 7542746fc63a2a38dd6a865eeb54268341f37b83
FBSDKCoreKit_Basics: 92d6b26c0bed30ab09bbdd96dccaa26e6c9978d1
FBSDKLoginKit: 4e275d30cf90e92bdf3a7c82857a8642abf23037
Firebase: f92fc551ead69c94168d36c2b26188263860acd9
firebase_analytics: 9f3a4cb560a59976b2c48707abae2d4cb94bcb3a
firebase_app_check: 44e4b4a2e69608f9093ed729f1d101bccdb8badd
firebase_app_installations: 7787e75e1cbbabc8013c59227492d26f83d8663c
firebase_auth: dea927502627c0d3f9cbadc3463d540cc43f0d1f
firebase_core: bf59c32d2e53814f558efa20840c1902fa2fe461
firebase_crashlytics: d92ba7149a9cc098b1157ccb39154b83851f7d1f
firebase_database: 64eb24851aa709a26fb33fccd1b422658e307c76
firebase_messaging: ee597229fc260f8fa491fa8f2d4a32dfbfa406fa
firebase_remote_config: 5007603d4cec2dc1e5016077a7ec36ed93c5041b
firebase_storage: f4e284d30ff204f01dfc9d8267d5581257d95fe6
FirebaseABTesting: e6660693429b4663573c82f8d2f1041deff1753a
FirebaseAnalytics: 036232b6a1e2918e5f67572417be1173576245f3
FirebaseAppCheck: edbc4d99f30a2762603d618330f28046a47c031d
FirebaseAppCheckInterop: 9fc57dfa08f0abb737b185ea065422b55355c909
FirebaseAuth: 0e415d29d846c1dce2fb641e46f35e9888d9bec6
FirebaseAuthInterop: 7a766bd56971347e0de4b7674aaa62ddc7820097
FirebaseCore: 988754646ab3bd4bdcb740f1bfe26b9f6c0d5f2a
FirebaseCoreExtension: 93d252fabdc9696bf14a73b04d84877ab9b3a832
FirebaseCoreInternal: 29b76f784d607df8b2a1259d73c3f04f1210137b
FirebaseCrashlytics: f20d956f8229010b645e534693c39e0b7843c268
FirebaseDatabase: d0732ba8aece0eccfa0cfb3ef540e7ba6fa1c6a6
FirebaseFirestore: 244f71ff14ef44f39e00b44d356eac708ce03103
FirebaseFunctions: d8415d2237cc807d05fa0a921d645f50a0d9d803
FirebaseInstallations: e2f26126089dcf41e215f7b8925af8d953c7d602
FirebaseMessaging: e345b219fd15d325f0cf2fef28cb8ce00d851b3f
FirebaseMessagingInterop: 3c1f7b57edba1679aac310eb2330c7104343fad8
FirebaseRemoteConfig: c24f767c17b0440ee63c7e93380d599173556113
FirebaseSharedSwift: d82ad66b3f8de9dda19c77b9627cbcaad71e245e
FirebaseStorage: 0efbff0ac978981866d89804191688ae50d64033
Flutter: f04841e97a9d0b0a8025694d0796dd46242b2854
flutter_app_badger: b87fc231847b03b92ce1412aa351842e7e97932f
flutter_facebook_auth: c69f4e643b1d9cc9063ec87c9411bd9ec268108f
flutter_secure_storage: 23fc622d89d073675f2eaa109381aefbcf5a49be
FlutterPluginRegistrant: 9db8f5474458f934bb3fadbd542a038522799034
FMDB: 2ce00b547f966261cd18927a3ddb07cb6f3db82a
GoogleAppMeasurement: c7d6fff39bf2d829587d74088d582e32d75133c3
GoogleDataTransport: 1c8145da7117bd68bbbed00cf304edb6a24de00f
GoogleUtilities: bad72cb363809015b1f7f19beb1f1cd23c589f95
"gRPC-C++": 9675f953ace2b3de7c506039d77be1f2e77a8db2
gRPC-Core: 943e491cb0d45598b0b0eb9e910c88080369290b
GTMSessionFetcher: c1edebe64e9fb4e8f6415d018edf1fd3eac074a1
image_gallery_saver: 259eab68fb271cfd57d599904f7acdc7832e7ef2
image_picker_ios: b786a5dcf033a8336a657191401bfdf12017dabb
leveldb-library: 50c7b45cbd7bf543c81a468fe557a16ae3db8729
LibTorch-Lite: dd01cdefa487b3b1d41f1c8a4495f5e41c79fd80
LibTorchvision: a488b9103266ea8c5e3d2ed4028b080e75c8e9da
Libuv-gRPC: 55e51798e14ef436ad9bc45d12d43b77b49df378
libwebp: f62cb61d0a484ba548448a4bd52aabf150ff6eef
Mixpanel-swift: 6e970d16daf10283cd30a3e6e1a08f4410b25183
mixpanel_flutter: b10020fd7e671b28dca0a3a426a5b10662cc0697
nanopb: b552cce312b6c8484180ef47159bc0f65a1f0431
path_provider_ios: 14f3d2fd28c4fdb42f44e0f751d12861c43cee02
PromisesObjC: ab77feca74fa2823e7af4249b8326368e61014cb
shared_preferences_ios: 548a61f8053b9b8a49ac19c1ffbc8b92c50d68ad
sqflite: 6d358c025f5b867b29ed92fc697fd34924e11904
the_apple_sign_in: 2e78c83cdb09eba07bb16dcc1f3bc12fcdc8263d
url_launcher_ios: 839c58cdb4279282219f5e248c3321761ff3c4de
video_player_avfoundation: e489aac24ef5cf7af82702979ed16f2a5ef84cff
video_thumbnail: c4e2a3c539e247d4de13cd545344fd2d26ffafd1
PODFILE CHECKSUM: 99717c6c71fe8a78c0238c0a6257ff000b595770
COCOAPODS: 1.11.3
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 2 |
3,299 | 96,093 |
[RFC] Extend CPU AMP to add FP16 support on eager mode
|
triaged, module: amp (automated mixed precision)
|
### 🚀 The feature, motivation and pitch
Currently, the only low-precision data type supported by PyTorch CPU AMP in eager mode is BF16; FP16 (i.e., `torch.half`) is not supported. If we want to add FP16 support on CPU (#97068), we also need to extend PyTorch CPU AMP.
This RFC proposes extending PyTorch CPU AMP to support both BF16 and FP16 in eager mode, so that users can use `torch.cpu.amp.autocast` to set the lower-precision data type to `torch.half` or `torch.bfloat16`; autocast will then automatically cast inputs to float32, cast them to the lower-precision data type (FP16/BF16), or let them fall through, depending on the op.
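For context, here is a minimal sketch of how the existing BF16 path behaves today (the tensor names are illustrative, not from the RFC); the proposed FP16 variant would behave analogously once this proposal lands:
```Python
import torch

a = torch.randn(8, 8)  # float32 inputs
b = torch.randn(8, 8)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    c = torch.mm(a, b)  # mm is autocast-eligible: inputs are cast down, so c is bfloat16

print(c.dtype)  # torch.bfloat16 -- the cast happened inside the region
print(a.dtype)  # torch.float32 -- tensors created outside are untouched
```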
### Proposed Implementation:
**Extend autocast to support setting lower precision data type to FP16 on CPU.**
* Extend autocast init method on CPU:
Usage of `torch.autocast`:
Before: `torch.autocast()` only supports setting dtype to `torch.bfloat16` on CPU. If users pass `torch.autocast(device_type="cpu", dtype=torch.half)`, autocast is disabled with the warning: "In CPU autocast, but the target dtype is not supported. Disabling autocast".
After: users can set `dtype=torch.bfloat16` or `dtype=torch.half`:
```Python
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    ...
with torch.autocast(device_type="cpu", dtype=torch.half):
    ...
```
* Nesting behavior of autocast on CPU:
Nesting AMP regions with mixed `torch.bfloat16` and `torch.half` is not yet defined in PyTorch. For example:
```Python
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    ...
    with torch.autocast(device_type="cpu", dtype=torch.half):
        ...
```
Both BF16 and FP16 are allowed on CUDA, which also does not handle such usage specially. In addition, `cache_enabled` is on by default in this case, so the dtype of a cached tensor (e.g., BF16 from the outer region) may differ from the target data type (e.g., FP16 set by the nested region), and ops may then fail with inconsistent data types.
Nesting AMP regions with mixed `torch.bfloat16` and `torch.half` seems unreasonable. To simplify things, we can either disallow such mixed usage or emit a warning and document the behavior (one possible mitigation is sketched below).
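As a hedged sketch of the cache interaction described above (assuming FP16 CPU autocast has landed; today the inner region would simply warn and disable itself), disabling the weight cache in the inner region is one conceivable workaround:
```Python
import torch

linear = torch.nn.Linear(8, 8)  # float32 weights
x = torch.randn(2, 8)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = linear(x)  # the float32 weight is cast to bfloat16 and cached

    # The cached bfloat16 weight does not match the torch.half target of the
    # nested region; passing cache_enabled=False avoids reusing it. Whether
    # to disallow, warn, or support this pattern is what the RFC leaves open.
    with torch.autocast(device_type="cpu", dtype=torch.half, cache_enabled=False):
        z = linear(x)
```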
### Test
* Add test cases for FP16 autocast on CPU (a minimal sketch follows).
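A minimal sketch of such a test, assuming the FP16 path is in place (with today's behavior, autocast would be disabled and `c` would stay float32):
```Python
import torch

def test_autocast_cpu_fp16_mm():
    a = torch.randn(4, 4)
    b = torch.randn(4, 4)
    with torch.autocast(device_type="cpu", dtype=torch.half):
        c = torch.mm(a, b)
    # Once FP16 CPU autocast is supported, mm should execute in half precision.
    assert c.dtype == torch.half

test_autocast_cpu_fp16_mm()
```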
### Additional context
This RFC depends on FP16 support for operators: https://github.com/pytorch/pytorch/issues/97068
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 15 |
3,300 | 96,088 |
DISABLED test_nn_sequential_invocation_dynamic_shapes (torch._dynamo.testing.DynamicShapesMiscTests)
|
triaged, module: flaky-tests, skipped, module: unknown
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nn_sequential_invocation_dynamic_shapes&suite=torch._dynamo.testing.DynamicShapesMiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/11781994621).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nn_sequential_invocation_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/testing.py`
| 1 |