Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---|
2,601 | 101,398 |
ONNX model output differs from PyTorch and JIT trace output
|
module: onnx, triaged
|
### 🐛 Describe the bug
When exporting a model from [https://github.com/sw-gong/spiralnet_plus](https://github.com/sw-gong/spiralnet_plus) to ONNX, the ONNX model produces a different output than the original model or the script created with the JIT tracer. The ONNX exporter does not emit any warnings or anything similar.
I modified the network of the original project slightly.
The model is exported using the following code:
```python
dummy_input = torch.ones((1, num_value)).to(device)
with torch.no_grad():
    torch.onnx.export(model, dummy_input, fileName, opset_version=12)
```
To reproduce, please download the minimal version of this project from gdrive ([spiralMini](https://drive.google.com/drive/folders/1rSnYQbDi2gMInePNwUOSzk-Z0lTBWxx0?usp=sharing)) and execute main.py.
It creates an ONNX file and a JIT trace, runs them alongside the original model, and then asserts their results against the original.
I tested this with the environment below and also on an Ubuntu installation with PyTorch 2.0.0, with the same error.
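For reference, here is a hedged sketch (my own, not taken from the gdrive project) of the kind of comparison main.py performs, reusing `model`, `dummy_input`, and `fileName` from the export snippet above and assuming onnxruntime is used to run the exported file:
```python
# Hypothetical comparison sketch: run the exported ONNX model with onnxruntime
# and compare it against the eager PyTorch output.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(fileName)
onnx_out = session.run(None, {session.get_inputs()[0].name: dummy_input.cpu().numpy()})[0]
with torch.no_grad():
    torch_out = model(dummy_input).cpu().numpy()
np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-4, atol=1e-5)  # fails for this model
```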
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: N/A
Python version: 3.9.16 (main, Jan 11 2023, 16:16:36) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 528.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=4200
DeviceID=CPU0
Family=198
L2CacheSize=1024
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=4200
Name=Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.12.1
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.1.0.post1
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.15
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 hc0ea762_10 conda-forge
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-include 2022.1.0 h6a75c08_874 conda-forge
[conda] mkl-service 2.4.0 py39h2bbff1b_0
[conda] mkl_fft 1.3.1 py39h277e83a_0
[conda] mkl_random 1.2.2 py39hf11a4ad_0
[conda] numpy 1.23.5 py39h3b20f71_0
[conda] numpy-base 1.23.5 py39h4da318b_0
[conda] pyg 2.1.0 py39_torch_1.12.0_cu116 pyg
[conda] pytorch 1.12.1 py3.9_cuda11.6_cudnn8_0 pytorch
[conda] pytorch-cluster 1.6.0 py39_torch_1.12.0_cu116 pyg
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-scatter 2.0.9 py39_torch_1.12.0_cu116 pyg
[conda] pytorch-sparse 0.6.15 py39_torch_1.12.0_cu116 pyg
[conda] torchaudio 0.12.1 py39_cu116 pytorch
[conda] torchvision 0.13.1 py39_cu116 pytorch
| 0 |
2,602 | 101,385 |
torch.Tensor.is_sparse returns false for non-COO sparse tensors
|
module: sparse, triaged
|
### 🐛 Describe the bug
In my humble opinion, `torch.Tensor.is_sparse` should return `True` for all possible layouts of sparse matrices.
For now, I think it only does so for COO tensors. You can reproduce with the following snippet:
```python
import torch
a = torch.tensor([[1, 2, 3], [0, 0, 7]])
b = a.to_sparse().to_sparse_csc()
b.is_sparse
```
I think that this function is the culprit:
https://github.com/pytorch/pytorch/blob/7dd8e08817ee59c926922409062e25f30408469b/torch/_linalg_utils.py#L11-L19
Checking for all possible sparse layouts should probably be the default scenario.
Maybe checking that `"sparse"` is in the layout name would work.
```python
import torch
a = torch.tensor([[1, 2, 3], [0, 0, 7]])
b = a.to_sparse().to_sparse_coo()
print("COO", b.is_sparse, str(b.layout).find("sparse") >= 0)
b = a.to_sparse().to_sparse_csc()
print("CSC", b.is_sparse, str(b.layout).find("sparse") >= 0)
b = a.to_sparse().to_sparse_csr()
print("CSR", b.is_sparse, str(b.layout).find("sparse") >= 0)
b = a.to_sparse().to_sparse_bsc(blocksize=1)
print("BSC", b.is_sparse, str(b.layout).find("sparse") >= 0)
b = a.to_sparse().to_sparse_bsr(blocksize=1)
print("BSR", b.is_sparse, str(b.layout).find("sparse") >= 0)
```
yields
```
COO True True
CSC False True
CSR False True
BSC False True
BSR False True
```
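For illustration, a minimal sketch of the layout-name check suggested above (the helper name is mine, not a proposed API):
```python
import torch

def is_sparse_any_layout(t: torch.Tensor) -> bool:
    # Every sparse layout repr contains "sparse": sparse_coo, sparse_csr,
    # sparse_csc, sparse_bsr, sparse_bsc.
    return "sparse" in str(t.layout)

a = torch.tensor([[1, 2, 3], [0, 0, 7]])
print(is_sparse_any_layout(a.to_sparse_csr()))  # True, unlike Tensor.is_sparse today
```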
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.27
Python version: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-DGXS-32GB
GPU 1: Tesla V100-DGXS-32GB
GPU 2: Tesla V100-DGXS-32GB
GPU 3: Tesla V100-DGXS-32GB
Nvidia driver version: 450.51.05
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1898.795
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4397.21
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] cudatoolkit-dev 11.7.0 h1de0b5d_6 conda-forge
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchaudio 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
2,603 | 101,380 |
all_to_all_single seems to be missing a check for checkSplitSizes when splitsize=0.
|
oncall: distributed
|
### 🐛 Describe the bug
By reading the source code of all_to_all_single, I found that the implementation of alltoall_base is split into two cases depending on whether the size() of the input and output split-size vectors is 0. https://github.com/pytorch/pytorch/blob/v1.9.0/torch/lib/c10d/ProcessGroupNCCL.cpp#L1607
When the size() is not 0, the c10d::checkSplitSizes check is performed first.
https://github.com/pytorch/pytorch/blob/v1.9.0/torch/lib/c10d/Utils.hpp#L411
```cpp
inline void checkSplitSizes(
    const std::vector<int64_t>& split_sizes,
    const at::Tensor& tensor,
    int group_size) {
  if (split_sizes.size() == 0) {
    TORCH_CHECK(
        tensor.size(0) % group_size == 0,
        "Tensor's dim 0 does not divide equally across group size");
  } else {
    TORCH_CHECK(
        split_sizes.size() == group_size,
        "Number of tensor splits not equal to group size");
    const auto sum = c10::sum_integers(split_sizes);
    TORCH_CHECK(
        sum == tensor.size(0), "Split sizes doesn't match total dim 0 size");
  }
}
```
The check shows that the split_sizes.size() == 0 branch is evaluated first, but this code currently never seems to be reached from alltoall_base, so the TORCH_CHECK(tensor.size(0) % group_size == 0) does not appear to be executed in any reasonable scenario.
The following test was run in Python:
```python
import torch
import multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

INIT_METHOD = 'tcp://127.0.0.5:12345'
TIMEOUT = 100

def spawn_processes(world_size, func):
    processes = []
    # start task
    for rank in range(world_size):
        name = "process " + str(rank)
        process = mp.Process(target=func, name=name, args=(rank, world_size))
        process.start()
        processes.append(process)
    # wait task completion
    for rank, process in enumerate(processes):
        process.join(TIMEOUT)
        if process.is_alive():
            print("Timeout waiting for rank %d to terminate" % rank)
            process.terminate()

def test_alltoall_single_no_split(rank, world_size):
    torch.cuda.set_device(rank)
    dist.init_process_group(backend='nccl', init_method=INIT_METHOD, rank=rank, world_size=world_size)
    input = torch.arange(5, dtype=torch.float).cuda() + rank * 5
    print("input: ", input)
    output = torch.empty([5], dtype=torch.float).cuda()
    dist.all_to_all_single(output, input)
    print("output: ", output)
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 4
    spawn_processes(world_size, test_alltoall_single_no_split)
```
In this test case, with input.size(0) == 5 and world_size == 4, the output is as follows:
```
input: tensor([0., 1., 2., 3., 4.], device='cuda:0')
input: tensor([5., 6., 7., 8., 9.], device='cuda:1')
input: tensor([10., 11., 12., 13., 14.], device='cuda:2')
input: tensor([15., 16., 17., 18., 19.], device='cuda:3')
output: tensor([ 0.0000e+00, -1.0842e-19, 8.9683e-44, 2.3362e-41, -6.8723e-13],
device='cuda:0')
output: tensor([5.8316e-39, 6.0000e+00, 4.6566e-10, 9.1084e-44, 6.0567e+23],
device='cuda:1')
output: tensor([ 2.2959e-41, 5.9578e-39, 1.2000e+01, -3.8519e-34, 6.0447e+23],
device='cuda:2')
output: tensor([-8.9683e-44, 2.3318e-41, 5.9693e-39, 8.0234e+00, 1.4356e+24],
device='cuda:3')
```
This result does not seem reasonable. So I think the TORCH_CHECK(tensor.size(0) % group_size == 0) in checkSplitSizes needs to be moved into the outputSplitSizes.size() == 0 && inputSplitSizes.size() == 0 case of alltoall_base.
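For contrast, a hedged sketch (not part of the original test) of the same collective where dim 0 divides evenly across the group, reusing INIT_METHOD and the spawn harness above; this is the situation the divisibility check is meant to guard:
```python
def test_alltoall_single_even_split(rank, world_size):
    torch.cuda.set_device(rank)
    dist.init_process_group(backend='nccl', init_method=INIT_METHOD, rank=rank, world_size=world_size)
    # input.size(0) == world_size, so each rank sends exactly one element to every peer.
    input = torch.arange(world_size, dtype=torch.float).cuda() + rank * world_size
    output = torch.empty([world_size], dtype=torch.float).cuda()
    dist.all_to_all_single(output, input)
    print("output: ", output)  # rank r receives chunk r from every rank
    dist.destroy_process_group()
```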
### Versions
torch 1.9.0a0+c3d40fd
torchvision 0.10.0a0
numpy 1.20.3
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 3 |
2,604 | 101,371 |
[torch.compile] CRASH with segmentation fault when assigning a CUDA value to a CPU tensor
|
triaged, oncall: pt2, module: fakeTensor, module: aotdispatch
|
### 🐛 Describe the bug
`torch.compile` crashes with a segmentation fault when assigning a CUDA value to a CPU tensor:
```py
import torch

torch.manual_seed(420)

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        n = x.shape[0]
        v = x.split(2)
        y = torch.zeros([4, 2, 2, 3])
        z = [i + 1 for i in range(n)]
        y[z] = v[0]
        return y

func = Model().to('cuda')
x = torch.randn(2, 2, 3).to('cuda')

with torch.no_grad():
    func = func.eval()
    jit_func = torch.compile(func)
    res2 = jit_func(x)
    # segmentation fault (core dumped)
```
### Versions
<details>
<summary>Click to expand</summary>
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230514+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230514+cu118
[pip3] torchaudio==2.1.0.dev20230514+cu118
[pip3] torchvision==0.16.0.dev20230514+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230514+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230514+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230514+cu118 pypi_0 pypi
```
</details>
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 4 |
2,605 | 101,370 |
SparseAdam: working with dense parameters but sparse gradients - use case
|
module: optimizer, triaged, actionable
|
### 📚 The doc issue
I have a use case where some gradients are sparse while others are dense, and therefore I'm splitting them into two groups and optimizing them separately using Adam and SparseAdam.
My parameter tensors are dense while their gradient tensors are sparse, meaning that many of the gradient values are zero because I do not want to update those parameters (not that the gradient has been converted to a sparse tensor with `torch.sparse_coo_tensor` or `Tensor.to_sparse`). This seems like a perfect use case for the SparseAdam optimizer, but I run into problems when I pass the parameters associated with these sparse gradients to it.
According to the documentation of the SparseAdam optimizer the variant is applied for the following:
> In this variant, only moments that appear in the gradient get updated, and only those portions of the gradient get applied to the parameters.
And according to the source code, in the `__init__()` method, sparse parameters are rejected after being checked with `param.is_sparse`:
```python
if sparse_params:
    raise ValueError(
        f"Sparse params at indices {sparse_params}: SparseAdam requires dense parameter tensors"
    )
```
So the optimizer only accepts dense parameters, and it does not accept dense gradients (handling sparse gradients is the purpose of the optimizer). The problem I am having is that if I pass my dense parameters together with their sparse (many-zeros) gradient tensors, the optimizer fails at `optimizer.step()` because of this check:
```python
if not p.grad.is_sparse:
    raise RuntimeError('SparseAdam does not support dense gradients, please consider Adam instead')
```
A possible workaround is using a hook to change the parameter gradient to a sparse tensor using `torch.sparse_coo_tensor`, but then I get the error:
```
Exception has occurred: RuntimeError
hook 'hook' has changed the type of value (was torch.cuda.FloatTensor got torch.cuda.sparse.FloatTensor)
```
It is unclear from the documentation how this optimizer is supposed to be used or whether this use case can be covered by the SparseAdam optimizer.
What I think could cover this use case, which would be useful whenever only some layers/nodes should be updated, is passing the parameters and their mostly-zero gradients to SparseAdam and having the optimizer convert the gradients to sparse tensors itself.
If there's a workaround, I'd appreciate the help/input.
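For illustration, a minimal sketch of the kind of conversion I have in mind (my own code; whether reassigning `.grad` with a different layout like this is a supported pattern is exactly what I would like clarified):
```python
import torch

param = torch.nn.Parameter(torch.randn(10, 4))   # dense parameter
opt = torch.optim.SparseAdam([param], lr=1e-3)

loss = (param[:2] ** 2).sum()                    # only a few rows receive gradient
loss.backward()                                  # param.grad is dense, mostly zeros

param.grad = param.grad.to_sparse()              # convert to COO so step() accepts it
opt.step()
```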
### Suggest a potential alternative/fix
Clarify how the SparseAdam optimizer is meant to be used: which use cases it covers and which it does not. The documentation mentions that it is used for sparse tensors, but not much else. It is especially confusing because the source code checks that the parameter tensor is dense (required) and that the gradient tensor is sparse (required), but gives no guidance on how to satisfy this combination.
cc @vincentqb @jbschlosser @albanD @janeyx99
| 15 |
2,606 | 101,369 |
Theme update
|
module: docs, triaged, module: doc infra
|
### 📚 The doc issue
To update `pytorch_sphinx_theme` to support new `sphinx>=7.0.0`, I made 2 PRs:
- https://github.com/pytorch/pytorch_sphinx_theme/pull/180
- https://github.com/pytorch/pytorch_sphinx_theme/pull/181
And there was a very old PR that I submitted but that sadly got no attention. It fixes multiple visual bugs and improves the visual appearance:
- https://github.com/pytorch/pytorch_sphinx_theme/pull/150
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker @ezyang @zou3519 @holly1238
| 1 |
2,607 | 101,359 |
RuntimeError in Scaled Dot Product Attention Tutorial Code
|
module: cpu, triaged
|
### 🐛 Describe the bug
When running the [Explicit Dispatcher Control tutorial code](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html#explicit-dispatcher-control), I encountered this error on a CPU-only device.
```
Traceback (most recent call last):
File "/opt/project/sandbox/flash_attention/flash_attention_testbed.py", line 32, in <module>
print(f"The default implementation runs in {benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value):,.3f} microseconds")
File "/opt/project/sandbox/flash_attention/flash_attention_testbed.py", line 18, in benchmark_torch_function_in_microseconds
return t0.blocked_autorange().mean * 1e6
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/utils/benchmark/utils/timer.py", line 394, in blocked_autorange
number = self._estimate_block_size(min_run_time)
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/utils/benchmark/utils/timer.py", line 311, in _estimate_block_size
time_taken = self._timeit(number)
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/utils/benchmark/utils/timer.py", line 256, in _timeit
return max(self._timer.timeit(number), 1e-9)
File "/home/ray/anaconda3/lib/python3.8/timeit.py", line 177, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
RuntimeError: "baddbmm_with_gemm" not implemented for 'Half'
```
The code in question is this:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Lets define a helpful benchmarking function:
import torch.utils.benchmark as benchmark

def benchmark_torch_function_in_microseconds(f, *args, **kwargs):
    t0 = benchmark.Timer(
        stmt="f(*args, **kwargs)", globals={"args": args, "kwargs": kwargs, "f": f}
    )
    return t0.blocked_autorange().mean * 1e6

device = "cuda" if torch.cuda.is_available() else "cpu"

# Lets define the hyper-parameters of our input
batch_size = 32
max_sequence_len = 1024
num_heads = 32
embed_dimension = 32
dtype = torch.float16

query = torch.rand(batch_size, num_heads, max_sequence_len, embed_dimension, device=device, dtype=dtype)
key = torch.rand(batch_size, num_heads, max_sequence_len, embed_dimension, device=device, dtype=dtype)
value = torch.rand(batch_size, num_heads, max_sequence_len, embed_dimension, device=device, dtype=dtype)

print(f"The default implementation runs in {benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value):.3f} microseconds")

# Lets explore the speed of each of the 3 implementations
from torch.backends.cuda import sdp_kernel, SDPBackend

# Helpful arguments mapper
backend_map = {
    SDPBackend.MATH: {"enable_math": True, "enable_flash": False, "enable_mem_efficient": False},
    SDPBackend.FLASH_ATTENTION: {"enable_math": False, "enable_flash": True, "enable_mem_efficient": False},
    SDPBackend.EFFICIENT_ATTENTION: {
        "enable_math": False, "enable_flash": False, "enable_mem_efficient": True}
}

with sdp_kernel(**backend_map[SDPBackend.MATH]):
    print(f"The math implementation runs in {benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value):.3f} microseconds")

with sdp_kernel(**backend_map[SDPBackend.FLASH_ATTENTION]):
    try:
        print(f"The flash attention implementation runs in {benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value):.3f} microseconds")
    except RuntimeError:
        print("FlashAttention is not supported. See warnings for reasons.")

with sdp_kernel(**backend_map[SDPBackend.EFFICIENT_ATTENTION]):
    try:
        print(f"The memory efficient implementation runs in {benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value):.3f} microseconds")
    except RuntimeError:
        print("EfficientAttention is not supported. See warnings for reasons.")
```
**Note**:
* Changing `dtype` from `torch.float16` to `torch.float32` does not generate the error.
* The ["Hardware Dependence" section](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html#hardware-dependence) of the tutorial does not mention any limitation of hardware other than run-time duration.
* As I understand [this page](https://pytorch.org/docs/stable/tensors.html#data-types) `dtype` `torch.float16` is supported for the `cpu` device.
I was expecting the tutorial code to run successfully on a CPU-only device.
### Versions
The physical runtime environment is a
* MacBook Pro (16-inch, 2019)
* MacOS: 12.6.1
* Docker Desktop for Mac: 4.9.1
The failing code is executed in a Docker container...
```
Collecting environment information...
PyTorch version: 2.0.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.104-linuxkit-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Stepping: 13
CPU MHz: 2300.000
BogoMIPS: 4600.00
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 128 MiB
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, STIBP: disabled
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx pdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2 bmi2 erms xsaveopt arat
Versions of relevant libraries:
[pip3] botorch==0.8.5
[pip3] gpytorch==1.10
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0+cpu
[pip3] torchaudio==2.0.1+cpu
[pip3] torchdata==0.6.0
[pip3] torchinfo==1.7.2
[pip3] torchmetrics==0.11.4
[pip3] torchtest==0.5
[pip3] torchtext==0.15.1+cpu
[pip3] torchvision==0.15.1+cpu
[pip3] torchviz==0.0.2
[conda] botorch 0.8.5 pypi_0 pypi
[conda] gpytorch 1.10 pypi_0 pypi
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 2.0.0+cpu pypi_0 pypi
[conda] torchaudio 2.0.1+cpu pypi_0 pypi
[conda] torchdata 0.6.0 pypi_0 pypi
[conda] torchinfo 1.7.2 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtest 0.5 pypi_0 pypi
[conda] torchtext 0.15.1+cpu pypi_0 pypi
[conda] torchvision 0.15.1+cpu pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
2,608 | 101,356 |
inductor: conv2d gets a different size and stride than eager mode when the input channel is zero
|
triaged, ZeroTensor, module: inductor
|
### 🐛 Describe the bug
When conv2d's input channel count is zero, the inductor gets a different size and stride than eager mode (the eager mode size and stride are ```(3, 0, 7, 14)``` and ```(98, 98, 14, 1)```, but the inductor size and stride are ```(3, 8, 7, 14)``` and ```(784, 98, 14, 1)```). I think we need to align the behavior.
```python
import torch
import torch._inductor.config as config

config.cpp.weight_prepack=False

torch.manual_seed(420)
x = torch.randn(3, 0, 16, 16)

class Module(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=0, out_channels=8, kernel_size=(3, 3), stride=(2, 1), padding=(0,), dilation=1)

    def forward(self, x):
        x = self.conv1(x)
        return x

func = Module().to('cpu')

with torch.no_grad():
    func.train(False)
    res1 = func(x)  # without jit
    print(res1)
    jit_func = torch.compile(func)
    res2 = jit_func(x)
    print(torch.equal(res1, res2))
```
expected output:
```
File "/home/xiaobing/pytorch-offical/torch/_functorch/aot_autograd.py", line 1318, in call_func_with_args
out = normalize_as_list(f(args))
File "/home/xiaobing/pytorch-offical/torch/_functorch/aot_autograd.py", line 1405, in rng_functionalization_wrapper
return compiled_fw(args)
File "/tmp/torchinductor_xiaobing/lc/clceh4wrtimlylfyukiuk6t3h6pwayfkdyy2chyzirfutompzo2y.py", line 27, in call
assert_size_stride(buf0, (3, 8, 7, 14), (784, 98, 14, 1))
AssertionError: expected size 3==3, stride 98==784 at dim=0
```
### Versions
```
PyTorch version: 2.1.0a0+git036a8d6
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.10.5
[pip3] ema-pytorch==0.2.2
[pip3] functorch==1.14.0a0+408bcf1
[pip3] intel-extension-for-pytorch==2.1.0+git8fb55c7
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.1
[pip3] torch==2.1.0a0+git6211ead
[pip3] torch-fidelity==0.3.0
[pip3] torch-struct==0.5
[pip3] torchaudio==2.0.0a0+a8f4e97
[pip3] torchdata==0.7.0a0+f1283eb
[pip3] torchmetrics==0.11.4
[pip3] torchrec-nightly==2023.3.23
[pip3] torchtext==0.15.0a0+46e7eef
[pip3] torchvision==0.15.0a0+98c5815
[pip3] vector-quantize-pytorch==1.1.2
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.10.5 pypi_0 pypi
[conda] ema-pytorch 0.2.2 pypi_0 pypi
[conda] functorch 1.14.0a0+408bcf1 pypi_0 pypi
[conda] intel-extension-for-pytorch 2.1.0+git8fb55c7 dev_0 <develop>
[conda] mkl 2023.0.0 h6d00ec8_25399
[conda] mkl-include 2023.1.0 pypi_0 pypi
[conda] mkl-static 2023.1.0 pypi_0 pypi
[conda] numpy 1.23.1 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.1 pypi_0 pypi
[conda] torch 2.1.0a0+git6211ead dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.0.0a0+a8f4e97 pypi_0 pypi
[conda] torchdata 0.6.0 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchrec-nightly 2023.3.23 pypi_0 pypi
[conda] torchtext 0.15.0a0+46e7eef pypi_0 pypi
[conda] torchvision 0.15.0a0+98c5815 pypi_0 pypi
[conda] vector-quantize-pytorch 1.1.2 pypi_0 pypi
```
cc @soumith @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 4 |
2,609 | 101,335 |
fsdp training with the Seq2SeqTrainer module gets stuck during evaluation.
|
triaged, module: fsdp
|
### 🐛 Describe the bug
When trying to finetune flan-t5-large with the Seq2SeqTrainer module, also passing fsdp_transformer_layer_cls_to_wrap="T5Block" and fsdp="full_shard auto_wrap", I at first got the following error:
```
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy al
location, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
```
Following #82461, I replaced the following line:
https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py#L276
with this code:
```python
try:
    decoder_input_ids = inputs["decoder_input_ids"]
    inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
    generated_tokens = self.model.generate(**inputs, **gen_kwargs)
except RuntimeError as e:
    if "The tensor has a non-zero number of elements, but its data is not allocated yet" in str(e):
        inputs["decoder_input_ids"] = decoder_input_ids
        inputs_dummy = {key: value[:, :1] for key, value in inputs.items()}
        model(**inputs_dummy)
        inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
        generated_tokens = self.model.generate(**inputs, **gen_kwargs)
    else:
        raise RuntimeError(str(e))
```
But now, when the model hits `model(**inputs_dummy)`, GPU utilization jumps to 100% instantly and the whole process gets stuck.
I traced the origin of the problem to the following function, but I couldn't quite understand what causes it:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/distributed_c10d.py#L2776
I would really appreciate your help.
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.7.16 (default, Jan 17 2023, 22:20:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX TITAN X
GPU 1: NVIDIA GeForce GTX TITAN X
GPU 2: NVIDIA GeForce GTX TITAN X
GPU 3: NVIDIA GeForce GTX TITAN X
Nvidia driver version: 470.42.01
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2499.975
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4399.74
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts spec_ctrl intel_stibp
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.9.5
[pip3] torch==1.13.1
[pip3] torchaudio==0.12.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37h6c91a56_3
[conda] numpy-base 1.21.5 py37ha15fc14_3
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchaudio 0.12.1 py37_cu113 pytorch
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.13.1 py37_cu113 pytorch
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 2 |
2,610 | 101,334 |
Functions for Calculating Skewness and Kurtosis
|
feature, triaged, module: reductions
|
### 🚀 The feature, motivation and pitch
I was working on an ML project recently involving statistical descriptors. I usually try to stick to torch functions (e.g., torch.mean()), but I was unable to find anything for skewness and kurtosis. I was looking for something similar to the scipy.stats skew and kurtosis functions.
### Alternatives
For now, I have used scipy.stats.skew() and scipy.stats.kurtosis()
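In the meantime, here is a minimal torch-only sketch built from existing reductions (the function names are mine; it follows the biased/Fisher defaults of the scipy functions mentioned above):
```python
import torch

def skewness(x: torch.Tensor, dim: int = 0) -> torch.Tensor:
    # Third standardized moment (matches scipy.stats.skew with bias=True).
    mean = x.mean(dim=dim, keepdim=True)
    std = x.std(dim=dim, unbiased=False, keepdim=True)
    return (((x - mean) / std) ** 3).mean(dim=dim)

def kurtosis(x: torch.Tensor, dim: int = 0, fisher: bool = True) -> torch.Tensor:
    # Fourth standardized moment; Fisher's definition subtracts 3 so a normal
    # distribution gives 0 (matches scipy.stats.kurtosis with bias=True).
    mean = x.mean(dim=dim, keepdim=True)
    std = x.std(dim=dim, unbiased=False, keepdim=True)
    k = (((x - mean) / std) ** 4).mean(dim=dim)
    return k - 3.0 if fisher else k
```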
### Additional context
_No response_
| 5 |
2,611 | 101,332 |
Terminate handler
|
triaged, open source, ciflow/trunk
|
Fixes #50051.
This PR is based on #50320 and addresses the last round of feedback.
This PR adds support for overriding the terminate handler in order to log uncaught exceptions in threads. On Windows it is enabled by default; it can be enabled or disabled via the USE_CUSTOM_TERMINATE env variable.
If an exception is thrown and not caught, it will print `<Unhandled exception caught in c10/util/AbortHandler.h>`.
The point of doing this is that in issue #50051, exceptions were thrown but not logged. With this logging system it will be easier to debug such failures in the future.
| 7 |
2,612 | 101,331 |
Pytorch 2.1.0.dev20230512 cuda not available
|
needs reproduction, oncall: binaries, module: cuda, triaged
|
### 🐛 Describe the bug
1) Created an Ubuntu 20.04 A100 machine
2) Installed CUDA as described [here](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_network)
3) Created a new conda environment
4) Installed torch nightly
```
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia
```
Getting the following error:
```
Python 3.9.16 (main, Mar 8 2023, 14:00:05)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
/home/keremturgutlu/miniconda3/envs/myenv2/lib/python3.9/site-packages/torch/cuda/__init__.py:124: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1683875589791/work/c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
False
>>> torch.device()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Device() received an invalid combination of arguments - got (), but expected one of:
* (torch.device device)
* (str type, int index)
>>> torch.cud
torch.cuda torch.cudnn_convolution( torch.cudnn_convolution_transpose(
torch.cudnn_affine_grid_generator( torch.cudnn_convolution_add_relu( torch.cudnn_grid_sampler(
torch.cudnn_batch_norm( torch.cudnn_convolution_relu( torch.cudnn_is_acceptable(
>>> torch.cuda.device_count()
1
>>> torch.ones(1).cuda()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/keremturgutlu/miniconda3/envs/myenv2/lib/python3.9/site-packages/torch/cuda/__init__.py", line 264, in _lazy_init
torch._C._cuda_init()
RuntimeError: CUDA driver initialization failed, you might not have a CUDA gpu.
>>>
```
### Versions
Collecting environment information...
/home/keremturgutlu/miniconda3/envs/myenv2/lib/python3.9/site-packages/torch/cuda/__init__.py:124: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1683875589791/work/c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.1.0.dev20230512
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-gcp-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.204
BogoMIPS: 4400.40
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 6 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0.dev20230512
[pip3] torchaudio==2.1.0.dev20230512
[pip3] torchvision==0.16.0.dev20230513
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.3 py39hf6e8229_1
[conda] numpy-base 1.24.3 py39h060ed82_1
[conda] pytorch 2.1.0.dev20230512 py3.9_cuda12.1_cudnn8.8.1_0 pytorch-nightly
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.1.0.dev20230512 py39_cu121 pytorch-nightly
[conda] torchtriton 2.1.0+7d1a95b046 py39 pytorch-nightly
[conda] torchvision 0.16.0.dev20230513 py39_cu121 pytorch-nightly
cc @seemethere @malfet @ngimel
| 5 |
2,613 | 101,321 |
Speed when installing from source is very low with CUDA 11
|
needs reproduction, module: performance, module: build, module: cuda, triaged
|
### 🐛 Describe the bug
Hello!
I have a performance issue as I am trying to build PyTorch 1.12 with CUDA 11.3 and CUDNN 8.2 from source.
Although the compilation is successful, running anything, especially anything that uses cuDNN, is much slower than with the prebuilt pip wheels.
Here is a minimal example:
```python
import torch
import time

layer = torch.nn.BatchNorm2d(64).cuda()
input = torch.randn(4, 64, 112, 112).cuda()

for j in range(100):
    start = time.time()
    for i in range(100):
        output = layer(input)
    print(f"It took {time.time()-start} sec")
```
Option 1:
* install with: `pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113 `
* time: 5.2 ms
Option 2:
* install from [source](https://github.com/pytorch/pytorch/tree/v1.12.1)
* time: 44 ms
I am wondering why there is so much difference in performance. Thank you!
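As an aside, here is a hedged variant of the timing loop above (reusing `layer` and `input`) that synchronizes the GPU before reading the clock; without the synchronization the measurement mostly captures kernel launch overhead rather than kernel execution time:
```python
for j in range(100):
    torch.cuda.synchronize()
    start = time.time()
    for i in range(100):
        output = layer(input)
    torch.cuda.synchronize()  # wait for the queued kernels before stopping the clock
    print(f"It took {time.time()-start} sec")
```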
### Versions
v.1.12.1
cc @malfet @seemethere @ngimel
| 1 |
2,614 | 101,319 |
Deprecated File bug
|
low priority, triaged, topic: build
|
### 🐛 Describe the bug
Error when compiling PyTorch with ROCm 5.6.0.
### Error logs
```
/opt/rocm-5.6.0/include/rccl.h:16:2: error: "This file is deprecated. Use the header file from /opt/rocm-5.6.0/include/rccl/rccl.h by using #include <rccl/rccl.h>"
#error "This file is deprecated. Use the header file from /opt/rocm-5.6.0/include/rccl/rccl.h by using #include <rccl/rccl.h>"
```
### Minified repro
-
### Versions
```
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10900 CPU @ 2.80GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 5
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 5599.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 2.5 MiB (10 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[conda] Could not collect
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 2 |
2,615 | 101,314 |
Shared library loading logic breaks when CUDA packages are installed in a non-standard location
|
triaged, module: bazel, topic: build, bug
|
### 🐛 Describe the bug
**tl;dr:** Some CUDA libraries are distributed alongside Torch via PyPI packages. These packages include `nvidia-cudnn-cu11`, `nvidia-cusparse-cu11`, and so on. Torch's `__init__.py` has various tricks to find and load these libraries, but one of these tricks break when Torch is installed in a different location to the `nvidia-*` packages. This could be fixed by linking all of Torch's CUDA dependencies into `libtorch_global_deps.so`.
----
**Longer version:**
I'm using Torch PyPI with the [pants](https://www.pantsbuild.org/) build system, which creates Python environments with a slightly weird layout. Specifically, each package ends up in its own directory, rather than everything landing in `site-packages` like it would in a virtualenv. This causes problems when I attempt to import PyTorch 2.0.0:
```
ImportError Traceback (most recent call last)
<ipython-input-20-eb42ca6e4af3> in <cell line: 1>()
----> 1 import torch
~/.cache/pants/named_caches/pex_root/installed_wheels/6befaad784004b7af357e3d87fa0863c1f642866291f12a4c2af2de435e8ac5c/torch-2.0.0-cp39-cp39-manylinux1_x86_64.whl/torch/__init__.py in <module>
--> 239 from torch._C import * # noqa: F403
240
241 # Appease the type checker; ordinarily this binding is inserted by the
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
```
I think this may point at an issue with the shared library loading logic in Torch. Specifically, `_load_global_deps()` in Torch's `__init__.py` has logic that first attempts to load the global deps from `libtorch_global_deps.so`, and then attempts to load any missing libraries if the `CDLL()` call fails:
```python
# See Note [Global dependencies]
def _load_global_deps():
    # ... snip ...
    lib_name = 'libtorch_global_deps' + ('.dylib' if platform.system() == 'Darwin' else '.so')
    here = os.path.abspath(__file__)
    lib_path = os.path.join(os.path.dirname(here), 'lib', lib_name)
    try:
        ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
    except OSError as err:
        cuda_libs: Dict[str, str] = {
            'cublas': 'libcublas.so.*[0-9]',
            'cudnn': 'libcudnn.so.*[0-9]',
            'cuda_nvrtc': 'libnvrtc.so.*[0-9].*[0-9]',
            'cuda_runtime': 'libcudart.so.*[0-9].*[0-9]',
            'cuda_cupti': 'libcupti.so.*[0-9].*[0-9]',
            'cufft': 'libcufft.so.*[0-9]',
            'curand': 'libcurand.so.*[0-9]',
            'cusolver': 'libcusolver.so.*[0-9]',
            'cusparse': 'libcusparse.so.*[0-9]',
            'nccl': 'libnccl.so.*[0-9]',
            'nvtx': 'libnvToolsExt.so.*[0-9]',
        }
        is_cuda_lib_err = [lib for lib in cuda_libs.values() if(lib.split('.')[0] in err.args[0])]
        # ... some more logic to load libs by looking through `sys.path` ...
```
On my system, the `CDLL()` call succeeds at loading `torch-2.0.0-cp39-cp39-manylinux1_x86_64.whl/torch/lib/libtorch_global_deps.so`, so it returns immediately without attempting to load the libraries in the `cuda_libs` dict. However, that `.so` file only links to a subset of the libraries listed above:
```
$ ldd /long/path/to/torch-2.0.0-cp39-cp39-manylinux1_x86_64.whl/torch/lib/libtorch_global_deps.so
linux-vdso.so.1 (0x00007ffe3b7d1000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6d85c92000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6d85b41000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6d85b3b000)
libcurand.so.10 => /lib/x86_64-linux-gnu/libcurand.so.10 (0x00007f6d7ff4b000)
libcufft.so.10 => /lib/x86_64-linux-gnu/libcufft.so.10 (0x00007f6d774be000)
libcublas.so.11 => /lib/x86_64-linux-gnu/libcublas.so.11 (0x00007f6d6dd40000)
libcublasLt.so.11 => /lib/x86_64-linux-gnu/libcublasLt.so.11 (0x00007f6d58cda000)
libcudart.so.11.0 => /lib/x86_64-linux-gnu/libcudart.so.11.0 (0x00007f6d58a34000)
libnvToolsExt.so.1 => /lib/x86_64-linux-gnu/libnvToolsExt.so.1 (0x00007f6d5882a000)
libgomp-a34b3233.so.1 => /long/path/to/torch-2.0.0-cp39-cp39-manylinux1_x86_64.whl/torch/lib/libgomp-a34b3233.so.1 (0x00007f6d58600000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6d5840e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6d85cd9000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6d58404000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6d583e7000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6d58205000)
```
Some libraries from `cuda_libs` are missing from the ldd output. This is fine when the `nvidia-*` Python packages are installed in the same directory as Torch, because the dynamic loader can follow Torch's RPATH to find the packages. Specifically, the RPATH has a bunch of relative paths to the nvidia libraries, which look like this:
```
$ORIGIN/../../nvidia/cublas/lib:$ORIGIN/../../nvidia/cuda_cupti/lib:$ORIGIN/../../nvidia/cuda_nvrtc/lib:$ORIGIN/../../nvidia/cuda_runtime/lib:$ORIGIN/../../nvidia/cudnn/lib:$ORIGIN/../../nvidia/cufft/lib:$ORIGIN/../../nvidia/curand/lib:$ORIGIN/../../nvidia/cusolver/lib:$ORIGIN/../../nvidia/cusparse/lib:$ORIGIN/../../nvidia/nccl/lib:$ORIGIN/../../nvidia/nvtx/lib:$ORIGIN
```
Unfortunately these relative paths do not work when Torch is installed in a different directory to the `nvidia-*` packages, which is the case for me.
`__init__.py` already has the logic necessary to fix this problem by scanning `sys.path` for the missing libraries. However, that logic currently only gets triggered when the `libtorch_global_deps` import fails. When I modify the code to always look for these libraries, I can import PyTorch again:
```python
try:
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
    raise OSError("libcudnn libnvrtc libcupti libcusolver libcusparse libnccl") # always look for these libraries
except OSError as err:
    cuda_libs: Dict[str, str] = {
        # ... etc. ...
```
Ideally `__init__.py` should use a more robust test to determine whether `libcudnn` and friends can be loaded. Probably the easiest fix is to link all the libs from `cuda_libs` into `libtorch_global_deps`.
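As a sketch of what such a test could look like (purely illustrative; the helper is hypothetical and not the actual `torch/__init__.py` logic), each expected library could be probed individually instead of inferring success from a single `libtorch_global_deps` load:
```python
import ctypes
from typing import Dict, List

def missing_cuda_libs(cuda_libs: Dict[str, str]) -> List[str]:
    missing = []
    for pkg, pattern in cuda_libs.items():
        base = pattern.split(".*")[0]  # e.g. 'libcudnn.so.*[0-9]' -> 'libcudnn.so'
        try:
            ctypes.CDLL(base, mode=ctypes.RTLD_GLOBAL)
        except OSError:
            # Unversioned names may be absent even when versioned ones exist,
            # so a real check would reuse torch's glob-over-sys.path search here.
            missing.append(pkg)
    return missing
```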
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 510.60.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 3249.791
CPU max MHz: 2450.0000
CPU min MHz: 1500.0000
BogoMIPS: 4900.34
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tc
e topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_
vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] flake8==3.7.9
[pip3] numpy==1.17.4
[conda] No relevant packages
| 4 |
2,616 | 101,294 |
Docs suggestion: `FullyShardedDataParallel.summon_full_params` must be called on all ranks/processes
|
triaged, module: fsdp
|
### 📚 The doc issue
The docs for [summon_full_params](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.summon_full_params) don't state that it must be called on all processes. I think I was led astray by the `rank0_only` argument, thinking that I'd get what I need by only calling the context manager on the rank 0 process, but instead things hang (at least with `with_grads=True`).
I am guessing under the hood all_gather is being called, which would explain the hanging. But that's not explicit in the docs.
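For concreteness, here is a minimal sketch of what I now believe the intended usage is (the names are illustrative and this is only my reading, not verified against the FSDP internals): every rank enters the context manager, even though only rank 0 actually reads the parameters.
```python
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Every rank must reach this line, since the context manager performs a collective.
with FSDP.summon_full_params(model, rank0_only=True, writeback=False):
    if dist.get_rank() == 0:
        full_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
```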
### Suggest a potential alternative/fix
Add a Note to the docs for summon_full_params that it should be called on all ranks, even if parameters are only being used on one process.
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 1 |
2,617 | 101,293 |
Operations to shared tensors in the forked process could lead to silent crash
|
module: multiprocessing, triaged
|
### 🐛 Describe the bug
I am using `torch.multiprocessing` with the `fork` context, but the forked processes would mysteriously crash, with no exception thrown. I have minimized the scope of the problem to the following example:
```python
import torch
import torch.multiprocessing as torch_mp

def fn(q):
    tensor = q.get()
    print(tensor, tensor.is_shared())
    tensor += 1  # process silently crashes
    print("OK")

def test():
    ctx = torch_mp.get_context('fork')
    q = ctx.Queue()
    tensor = torch.zeros(1024*1024)
    q.put(tensor)
    proc = ctx.Process(target=fn, args=(q,))
    proc.start()
    proc.join()

if __name__ == "__main__":
    torch_mp.set_start_method('fork')
    test()
```
If, however, the tensor is created with `torch.randn` or `torch.empty`, the program wouldn't crash. `torch.ones, torch.arange` lead to crashes. Using the `spawn`/`forkserver` contexts also works.
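For reference, this is the variant that works for me (a sketch where only the start method changes; I have not checked whether anything else about the queue semantics differs):
```python
# Same fn as above; with the 'spawn' context the child completes normally.
ctx = torch_mp.get_context('spawn')
q = ctx.Queue()
q.put(torch.zeros(1024 * 1024))
proc = ctx.Process(target=fn, args=(q,))
proc.start()
proc.join()
```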
Is this the expected behavior with `fork`?
### Versions
Tested with pytorch 2.0.1 under linux.
cc @VitalyFedyunin @ejguan
| 2 |
2,618 | 101,291 |
fused torch.optim.AdamW isn't faster than the unfused version
|
module: optimizer, triaged
|
### 🐛 Describe the bug
When I enabled the fused option of `torch.optim.AdamW` on PyTorch 2.0.1 for my vision model that has a low SM efficiency at the optimizer step, I didn't see any speedup. I profiled the fused version and unfused version, and I could see that although the fused version could improve SM efficiency by fusing multiple small CUDA kernels, the fused `aten::_fused_adamw_` was also launched much later, which actually cancelled out the performance benefit of fused optimizer.
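For context, the only difference between the two runs is how the optimizer is constructed, roughly as follows (`model` and the learning rate here are placeholders for my actual setup):
```python
# baseline (unfused)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
# fused variant that I profiled
opt_fused = torch.optim.AdamW(model.parameters(), lr=1e-3, fused=True)
```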
See the attached screenshots of the two trace files:
**1. Original Unfused Version:**

**2. Fused Version in v2.0.1:**

Will there be any further optimization to handle this issue and deliver more actual speedup?
### Versions
PyTorch 2.0.1
cc @vincentqb @jbschlosser @albanD @janeyx99
| 5 |
2,619 | 101,289 |
Support fake tensor real inputs in dynamo
|
triaged, module: dynamo
|
### 🐛 Describe the bug
Today, Dynamo assumes that if you have a fake tensor, it was generated by fakeification in the compiler. But fake tensor is also a user-visible concept, and you could for example call a torch.compile'd function with fake tensor inputs. We should be able to handle fake tensor inputs, distinguishing them from internal fake tensor inputs by checking which fake mode they're associated with (torch.compile will always allocate a fresh fake mode).
Some basic functionality works; for example, it works to `FakeTensorMode.from_tensor` a fake tensor (however, if you had a Parameter+fake tensor, the parameter-ness is lost; that needs to be fixed.)
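For example, a call like the following sketch (which presumably trips the assertions patched out in the diff below) should eventually just work:
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

user_mode = FakeTensorMode()               # user-owned fake mode
x = user_mode.from_tensor(torch.randn(4))

compiled = torch.compile(lambda t: t + 1)
compiled(x)  # a fake tensor arriving as a *real* input to dynamo
```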
Here are at least some of the spots where we make incorrect assumptions.
```
diff --git a/torch/_dynamo/variables/builder.py b/torch/_dynamo/variables/builder.py
index 1d7c33d50c2..64523f3e610 100644
--- a/torch/_dynamo/variables/builder.py
+++ b/torch/_dynamo/variables/builder.py
@@ -154,8 +154,6 @@ class GraphArg:
assert isinstance(
self.fake_tensor, torch._subclasses.fake_tensor.FakeTensor
)
- if isinstance(self.example, torch._subclasses.fake_tensor.FakeTensor):
- raise AssertionError("Fake Tensor observed in TorchDynamo Fx graph inputs")
def load(self, tx):
return self.source.reconstruct(tx)
@@ -796,7 +794,7 @@ class VariableBuilder:
# a later point in time.
ignore_subclass = True
else:
- assert type(value) in (torch.Tensor, torch.nn.Parameter), type(value)
+ assert type(value) in (torch.Tensor, torch.nn.Parameter, FakeTensor), type(value)
ignore_subclass = False
is_duplicate_tensor = source in self.tx.output.input_source_to_var
@@ -1024,10 +1022,6 @@ def wrap_fx_proxy_cls(
if example_value is None:
example_value = get_fake_value(proxy.node, tx)
- # Handle recursive calls here
- elif isinstance(example_value, FakeTensor):
- pass
-
elif isinstance(example_value, torch.Tensor):
if tx.export:
# The legacy behavior for real value cache with subclasses was
@@ -1200,7 +1194,7 @@ class TrackedFake:
def wrap_to_fake_tensor_and_record(
e, tx, ignore_subclass=False, *, source: Optional[Source], is_tensor: bool
):
- if type(e) in (torch.Tensor, torch.nn.Parameter) or (
+ if type(e) in (torch.Tensor, torch.nn.Parameter, FakeTensor) or (
ignore_subclass and isinstance(e, torch.Tensor)
):
assert source is not None
```
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @thiagocrepaldi
### Versions
master
| 1 |
2,620 | 101,288 |
Should be ok to call _dynamo.export and torch.compile under FakeTensorMode
|
triaged, module: dynamo
|
### 🐛 Describe the bug
This will require either: (1) ensuring that the eval frame handler disables fake tensor mode before doing anything serious, or (2) we refactor all of Dynamo to work if there is a fake tensor mode or not. Probably (1) is easiest.
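Concretely, the goal is that a sketch like this (illustrative only) should not blow up:
```python
import torch
import torch._dynamo
from torch._subclasses.fake_tensor import FakeTensorMode

def f(x):
    return x * 2

with FakeTensorMode():
    torch._dynamo.export(f, torch.randn(3))
    torch.compile(f)(torch.randn(3))
```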
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @thiagocrepaldi
### Versions
master
| 0 |
2,621 | 101,273 |
IPEX as TorchDynamo Backend Performance Dashboard
|
triaged, intel, oncall: pt2
|
Dashboard to track the performance of IPEX as TorchDynamo backend on CPU.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
```[tasklist]
### Tasks
```
| 54 |
2,622 | 101,265 |
Noisy warning - torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard (...), this could result in accuracy problems
|
low priority, triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
This warning is raised literally thousands of times when compiling my model. Not sure what it means nor if I should be worried.
I managed to write a smaller code that produces the warning only once:
```python
import torch

CHANNELS = 4

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 1, 1)

    def forward(self, x, emb):
        x = self.conv(x) + emb  # (batch_size, 1, ...) + (batch_size, CHANNELS, ...)
        return x

net = MyModule()
net = net.cuda()
opt_net = torch.compile(net, dynamic=True)

batch_sizes = [7, 5, 8]
lengths = [297, 120, 361]
for batch_size, length in zip(batch_sizes, lengths):
    print(f'batch_size={batch_size}, length={length}')
    x = torch.randn(batch_size, 1, 256, length, device='cuda')
    emb = torch.randn(batch_size, CHANNELS, 1, 1, device='cuda')
    out = opt_net(x, emb)
    out.mean().backward()
```
Produces the following output:
```
batch_size=7, length=297
batch_size=5, length=120
[2023-05-11 23:06:55,488] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 256*s0*s1 < 2147483648 == True, this could result in accuracy problems
batch_size=8, length=361
```
I note that by doing **any** of the following, the warning is prevented:
* Setting `CHANNELS=1` such that the output from `self.conv` does not need to be expanded along the channel dimension when doing `x = self.conv(x) + emb`
* Skipping/commenting out `out.mean().backward()`
* Setting `batch_size` to be constant
* Setting `length` to be constant
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230508+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Scientific Linux release 7.9 (Nitrogen) (x86_64)
GCC version: (GCC) 12.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.17
Python version: 3.10.11 (main, Apr 15 2023, 23:15:15) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
Stepping: 4
CPU MHz: 999.914
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 19712K
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_pkg_req pku ospke md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230508+cu121
[pip3] torch-ema==0.3
[pip3] torchaudio==2.1.0.dev20230508+cu121
[pip3] torchvision==0.16.0.dev20230508+cu121
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 2 |
2,623 | 101,255 |
torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment.
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
ERROR:torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment, Please use functorch.experimental.control_flow.cond to explicitly capture the control flow
I am trying to use torch (2.1.0.dev20230510+cu118) dynamo to export a function with control flow. The relevant code is below. The interesting thing is: when the control condition is `if b.dim < 0:`, the export works fine; when the condition is `if b.sum() < 0:`, it fails with "torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment, Please use functorch.experimental.control_flow.cond to explicitly capture the control flow". (This is basically consistent with the description at https://pytorch.org/functorch/nightly/ux_limitations.html.)
When I tried to find out how to use functorch.experimental.control_flow.cond, I found that its documentation is empty. How can I get
usage instructions for this function?
~~~
from typing import List
import torch
import torch._dynamo as dynamo

def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.tensor]):
    print("my_compiler() called with FX graph:")
    gm.graph.print_tabular()
    return gm.forward

@dynamo.optimize(my_compiler)
def foo(a, b):
    x = a / (torch.abs(a) + 1)
    if b.sum() < 0:
    # if b.dim < 0:
        b = b * -1
    else:
        b = b - 1
    return x * b

graph, guards = torch._dynamo.export(foo, torch.rand(10), torch.rand(10))
print("the output is:---------------------------------------------")
print(guards)
~~~
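For what it's worth, my best guess at the intended usage, based only on reading the functorch source (so please treat this as an unverified sketch), is something like:
```python
from functorch.experimental.control_flow import cond

def foo(a, b):
    x = a / (torch.abs(a) + 1)

    def true_fn(b):
        return b * -1

    def false_fn(b):
        return b - 1

    # cond(predicate_tensor, true_fn, false_fn, operands)
    b = cond(b.sum() < 0, true_fn, false_fn, [b])
    return x * b
```
Is this the recommended pattern, and are there constraints on the branch functions (for example, same output shapes or no side effects)?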
<img width="988" alt="image" src="https://github.com/pytorch/pytorch/assets/51430740/2cb9a50c-593a-4827-bc28-70307e83d912">
### Versions
torch 2.1.0 20230510;cuda 11.8
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 4 |
2,624 | 101,251 |
Mac m2 MPSNDArray.mm:78: failed assertion `[MPSNDArrayDescriptor sliceDimension:withSubrange:] error: dimension index (2) not within number of dimensions (2) Dimension indices are 0-based'
|
triaged, module: mps
|
### 🐛 Describe the bug
```python
import numpy as np
import pandas as pd
import scipy.io as sio
import numpy.linalg as la
from sklearn.preprocessing import scale, LabelEncoder
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F


class NeuralNet(nn.Module):
    """
    """
    def __init__(self, input_size=1000, hidden_size=100, output_size=10):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(in_features=input_size, out_features=hidden_size, bias=True)
        self.fc2 = nn.Linear(in_features=hidden_size, out_features=output_size, bias=True)

    def forward(self, X):
        FX = F.relu(self.fc1(X))  # hidden layer activation features
        prob = F.softmax(self.fc2(FX), dim=1)  # probability output
        return FX, prob


def DAL(FX, y, l, sigma=100, lamda=1e-2):
    m = FX.shape[0]
    domain_label = torch.unique(l)
    n_domain_label = len(domain_label)
    Delta = torch.as_tensor(y[:, None] == y, dtype=torch.float32, device=FX.device)  # construct the label kernel matrix
    # construct the Gaussian kernel matrix
    # https://stackoverflow.com/questions/47271662/what-is-the-fastest-way-to-compute-an-rbf-kernel-in-python
    FX_norm = torch.sum(FX ** 2, axis=-1)
    K = torch.exp(-(FX_norm[:, None] + FX_norm[None, :] - 2 * torch.matmul(FX, FX.t())) / sigma)
    P = K * Delta  # product kernel matrix
    H = 0.
    for s in domain_label:
        Ps = P[l == s]
        H += torch.matmul(Ps.t(), Ps) * 1. / (n_domain_label * Ps.shape[0])
    invM = torch.inverse(H + lamda * torch.eye(m, device=FX.device))  # the bug is here: when the device is cuda it does not trigger, but with mps it does
    D = 0.
    for s in domain_label:
        bs = torch.mean(P[l == s], axis=0)
        alphas = torch.matmul(invM, bs)
        D += 2. * torch.matmul(bs, alphas) - torch.matmul(alphas, torch.matmul(H, alphas))
    return D / n_domain_label - 1.


class DADG:
    """
    The true training batch size per iteration is batch_size * num_class * num_domain
    """
    def __init__(self, input_size=1024, hidden_size=512, output_size=68, seed=1000, device=torch.device('cpu'),
                 epoch=200, sigma=None, lamda=1e-2, gamma=1, batch_size=4, lr=1e-3, log=False):
        args_values = locals()
        args_values.pop("self")
        for arg, value in args_values.items():
            setattr(self, arg, value)

    def fit(self, X_list, y_list, Xt, yt):
        torch.manual_seed(self.seed)
        net = NeuralNet(input_size=self.input_size, hidden_size=self.hidden_size, output_size=self.output_size).to(self.device)
        optimizer = optim.SGD(params=net.parameters(), lr=self.lr, momentum=0.9)
        for epoch in range(self.epoch):
            dataset_loaders, l = [], 0
            for Xs, ys in zip(X_list, y_list):
                for i, counts in zip(*np.unique(ys, return_counts=True)):
                    dataset = np.hstack((Xs[ys == i], ys[ys == i][:, None], l * np.ones((counts, 1))))
                    dataset_loaders.append(
                        torch.utils.data.DataLoader(dataset=torch.tensor(dataset), batch_size=self.batch_size,
                                                    shuffle=True, drop_last=True))
                l = l + 1
            train_err, m_train = 0.0, 0.0
            for batches in zip(*dataset_loaders):
                Xyl = torch.cat(batches, dim=0)
                X, y, l = Xyl[:, :-2].to(self.device, torch.float32), Xyl[:, -2].to(self.device, torch.int64), Xyl[:, -1].to(self.device, torch.int64)
                m = X.shape[0]
                FX, prob = net(X)
                num_class = prob.shape[1]
                negative_log_loss = -torch.mean(torch.sum(torch.log(prob) * F.one_hot(y, num_class), dim=1))
                if self.sigma is None:
                    pairwise_dist = torch.cdist(X, X, p=2) ** 2
                    self.sigma = torch.median(pairwise_dist[pairwise_dist != 0])
                dal = DAL(FX, y, l, sigma=self.sigma, lamda=self.lamda)
                loss = negative_log_loss + self.gamma * dal
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                train_err += loss.item() * m
                m_train += m
            with torch.no_grad():
                Xt, yt = torch.as_tensor(Xt, dtype=torch.float32, device=self.device), torch.as_tensor(yt, dtype=torch.int64, device=self.device)
                pred = torch.argmax(net(Xt)[1], dim=1)
                correct = torch.sum((pred == yt)).item()
                m_test = len(yt)
                if True == self.log:
                    print('epoch ', epoch, ', training error ', train_err / m_train, ', test acc. ', (correct / m_test) * 100)
        self.net = net

    def score(self, Xt, yt):
        with torch.no_grad():
            Xt, yt = torch.as_tensor(Xt, dtype=torch.float32, device=self.device), torch.as_tensor(yt, dtype=torch.int64, device=self.device)
            pred = torch.argmax(self.net(Xt)[1], dim=1)
            correct = torch.sum((pred == yt)).item()
            m_test = len(yt)
            return (correct / m_test) * 100


def readData(tg, domains):
    data = sio.loadmat('datasets/PIE/' + tg + '.mat')
    Xt, yt = data['fea'].astype(np.float64), data['gnd'].ravel()
    yt = LabelEncoder().fit(yt).transform(yt).astype(np.float64)
    Xt = scale(Xt / Xt.sum(axis=1, keepdims=True))
    Xs_list, ys_list = [], []
    for sc in domains:
        if sc != tg:
            data = sio.loadmat('datasets/PIE/' + sc + '.mat')
            Xs, ys = data['fea'].astype(np.float64), data['gnd'].ravel()
            ys = LabelEncoder().fit(ys).transform(ys).astype(np.float64)
            Xs = scale(Xs / Xs.sum(axis=1, keepdims=True))
            Xs_list.append(Xs), ys_list.append(ys)
    return Xs_list, ys_list, Xt, yt


domains = ['PIE05', 'PIE07', 'PIE09', 'PIE27', 'PIE29']
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "mps")
domcouples = []
res = []
for tg in domains:
    X_list, y_list, Xt, yt = readData(tg, domains)
    instance = DADG(input_size=1024, hidden_size=512, output_size=68, seed=0, device=DEVICE,
                    epoch=300, lamda=1e-3, gamma=100, batch_size=5, lr=1e-2, log=False)
    instance.fit(X_list, y_list, Xt, yt)
    print(tg, 'acc: ', instance.score(Xt, yt))
```
### Versions
python3.8.6
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
macOS Ventura 13.3.1(a)
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
2,625 | 101,249 |
`einsum` is about 40x slower on CUDA than manually multiplying and summing
|
module: performance, triaged, module: linear algebra
|
### 🐛 Describe the bug
A similar issue was reported in #32591. Unfortunately, it doesn't look like it was actually fully fixed at the time.
Comparing
```python
(a * b).sum(dim=-1)
torch.einsum("Nc,Nc->N", a, b)
numpy.einsum("Nc,Nc->N", a, b) # a and b are equivalent numpy arrays
```
where `a` and `b` are tensors with a large `N` dimension and a relatively small `c` dimension
```
[------------------------------------- -------------------------------------]
| mul/sum | torch.einsum | numpy.einsum
1 threads: -------------------------------------------------------------------
Nc,Nc->N cpu (1048576, 2) | 9900 | 9000 | 6660
Nc,Nc->N cuda (1048576, 2) | 700 | 30880 | 6000
Times are in microseconds (us).
```
So on the CPU both the "manual" and the `einsum` versions are about the same (though `numpy` is faster). But on the GPU, the `einsum` version is about 40 times slower (the exact slowdown can vary by machine, I also got a 10x slowdown on a slightly more modern workstation).
Here's the script that I used to get these results:
<details>
<summary><code>bench_einsum.py</code></summary>
```python
import numpy as np
import torch
import torch.utils.benchmark as benchmark
from torch.testing._internal.common_utils import make_tensor

def generate_test_cases(device, dtype):
    a = make_tensor((1024 * 1024, 2), device=device, dtype=dtype)
    b = make_tensor((1024 * 1024, 2), device=device, dtype=dtype)
    yield "Nc,Nc->N", a, b

results = []
for dev in ["cpu", "cuda"]:
    for equation, a, b in generate_test_cases(dev, torch.float):
        sub_label = f"{equation} {dev} {tuple(a.shape)}"
        results.append(
            benchmark.Timer(
                stmt="(a * b).sum(dim=-1)",
                globals={"equation": equation, "a": a, "b": b},
                sub_label=sub_label,
                description="mul/sum",
            ).blocked_autorange(min_run_time=1)
        )
        results.append(
            benchmark.Timer(
                stmt="torch.einsum(equation, a, b)",
                globals={"equation": equation, "a": a, "b": b},
                sub_label=sub_label,
                description="torch.einsum",
            ).blocked_autorange(min_run_time=1)
        )
        results.append(
            benchmark.Timer(
                stmt="numpy.einsum(equation, a, b)",
                setup="import numpy",
                globals={
                    "equation": equation,
                    "a": a.cpu().numpy(),
                    "b": b.cpu().numpy(),
                },
                sub_label=sub_label,
                description="numpy.einsum",
            ).blocked_autorange(min_run_time=1)
        )

compare = benchmark.Compare(results)
compare.trim_significant_figures()
compare.colorize(rowwise=True)
compare.print()
```
</details>
### Versions
<details>
<summary>Versions</summary>
```
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 12.2.1 20230201
Clang version: 15.0.7
CMake version: version 3.26.3
Libc version: glibc-2.37
Python version: 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (64-bit runtime)
Python platform: Linux-5.15.108-1-MANJARO-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050
Nvidia driver version: 530.41.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.8.0
/usr/lib/libcudnn_adv_infer.so.8.8.0
/usr/lib/libcudnn_adv_train.so.8.8.0
/usr/lib/libcudnn_cnn_infer.so.8.8.0
/usr/lib/libcudnn_cnn_train.so.8.8.0
/usr/lib/libcudnn_ops_infer.so.8.8.0
/usr/lib/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 9
CPU(s) scaling MHz: 84%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5602.18
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-lightning==2.0.1
[pip3] torch==2.0.0
[pip3] torchmetrics==0.11.3
[pip3] torchvision==0.15.1a0
[conda] Could not collect
```
</details>
cc @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 8 |
2,626 | 101,246 |
Tool for identifying where in eager model an operation is nondeterministic
|
triaged, module: determinism
|
### 🐛 Describe the bug
Let's say you have some model code, and when you run it twice you get bitwise-different results. Where did it diverge? We can use TorchFunctionMode/TorchDispatchMode to localize where the first divergence occurred.
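A rough sketch of the idea (mine, untested; the checksum and the filtering are just placeholders for whatever the real tool would record):
```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class DivergenceLogger(TorchDispatchMode):
    """Records (op, checksum) for every dispatched op so two runs can be diffed."""
    def __init__(self):
        super().__init__()
        self.log = []

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        out = func(*args, **(kwargs or {}))
        if isinstance(out, torch.Tensor) and out.is_floating_point():
            self.log.append((str(func), out.double().sum().item()))
        return out

def first_divergence(run_model):
    # Run the model twice and report the first op whose checksum differs.
    logs = []
    for _ in range(2):
        with DivergenceLogger() as mode:
            run_model()
        logs.append(mode.log)
    for i, (a, b) in enumerate(zip(*logs)):
        if a != b:
            return i, a, b
    return None
```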
### Versions
master
cc @mruberry @kurtamohler
| 2 |
2,627 | 101,241 |
Different results with vmap when using torch.jit.script
|
oncall: jit, triaged, module: functorch
|
### 🐛 Describe the bug
Hi,
I am trying to compute Hessian-vector products (hvps) between individual layers of a neural net, with multiple vectors. To get hvps with multiple vectors, I am using vmap on a function that I have defined (```hessian_vector_product2```). I am then using ```@torch.jit.script``` with ```torch.jit.fork``` to split the hvps over layers of the net, i.e., to compute hvps with the block-diagonal Hessian. However, using ```@torch.jit.script``` only gives me the hvp with respect to a single vector. Funnily enough, when I comment out ```@torch.jit.script```, I get the hvps with respect to all the vectors. Is there a simple explanation for why this is occurring?
I've included the relevant code snippet below, along with the entire script.
Thanks!
```python
hv2 = vmap(hessian_vector_product2, in_dims=(None, 0, None))  # Use vmap to vectorize hvps

# Get hvps w.r.t. layers in parallel
@torch.jit.script
def hvp_parallel(params : List[torch.Tensor], vs : List[Optional[torch.Tensor]], grad_params: List[List[torch.Tensor]], detach : bool = True):
    futures: List[torch.jit.Future[List[Optional[torch.Tensor]]]] = []
    for p, v, g in zip(params, vs, grad_params):
        futures.append(torch.jit.fork(hv2, [p], v, g))
    hvps : List[List[Optional[torch.Tensor]]] = []
    for future in futures:
        hvp = torch.jit.wait(future)
        hvps.append(hvp)
    return hvps

# Get HVPs with 10 random vectors
n_v = 10
vs : List[Optional[torch.Tensor]] = []
for p in model.parameters():
    vs.append(torch.randn(n_v, *tuple(p.shape)).to(device))

hvps = hvp_parallel(list(model.parameters()), vs, grad_params)
print(hvps[0][0].shape)  # Got torch.Size([20, 784]), expected torch.Size([10, 20, 784])
print(hvps[1][0].shape)  # Got torch.Size([20]), expected torch.Size([10, 20])
print(hvps[2][0].shape)  # Got torch.Size([10, 20]), expected torch.Size([10, 10, 20])
print(hvps[3][0].shape)  # Got torch.Size([10]), expected torch.Size([10, 10])
```
```python
import torch
from torch.func import vmap
from typing import List, Optional

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(784, 20),
    torch.nn.Sigmoid(),
    torch.nn.Linear(20, 10),
).to(device)
loss_fn = torch.nn.CrossEntropyLoss().to(device)

# hvp function
def hessian_vector_product2(params : List[torch.Tensor], v : Optional[torch.Tensor], grad_params: List[torch.Tensor]) -> List[Optional[torch.Tensor]]:
    Hv = torch.autograd.grad(grad_params, params, grad_outputs = [v,],
                             retain_graph = True)
    return Hv

def get_grad_params(f : List[torch.Tensor], params : List[torch.Tensor]) -> List[Optional[torch.Tensor]]:
    grad_params = torch.autograd.grad(f, params, create_graph = True)
    return grad_params

# Define x and y
x = torch.randn(1, 28, 28).to(device)
y = torch.randint(10, (1,)).to(device)

# Get gradients while using jit -- this works
@torch.jit.script
def grad_params_parallel(f : List[torch.Tensor], params : List[torch.Tensor]):
    assert torch.jit.isinstance(f, List[torch.Tensor])
    futures: List[torch.jit.Future[List[Optional[torch.Tensor]]]] = []
    for p in params:
        futures.append(torch.jit.fork(get_grad_params, f, [p]))
    grad_params : List[List[Optional[torch.Tensor]]] = []
    for future in futures:
        g : List[Optional[torch.Tensor]] = torch.jit.wait(future)
        grad_params.append(g)
    return grad_params

grad_params = grad_params_parallel([loss_fn(model(x), y),], list(model.parameters()))

hv2 = vmap(hessian_vector_product2, in_dims=(None, 0, None))  # Use vmap to vectorize hvps

# Get hvps w.r.t. layers in parallel
@torch.jit.script
def hvp_parallel(params : List[torch.Tensor], vs : List[Optional[torch.Tensor]], grad_params: List[List[torch.Tensor]], detach : bool = True):
    futures: List[torch.jit.Future[List[Optional[torch.Tensor]]]] = []
    for p, v, g in zip(params, vs, grad_params):
        futures.append(torch.jit.fork(hv2, [p], v, g))
    hvps : List[List[Optional[torch.Tensor]]] = []
    for future in futures:
        hvp = torch.jit.wait(future)
        hvps.append(hvp)
    return hvps

# Get HVPs with 10 random vectors
n_v = 10
vs : List[Optional[torch.Tensor]] = []
for p in model.parameters():
    vs.append(torch.randn(n_v, *tuple(p.shape)).to(device))

hvps = hvp_parallel(list(model.parameters()), vs, grad_params)
print(hvps[0][0].shape)  # Got torch.Size([20, 784]), expected torch.Size([10, 20, 784])
print(hvps[1][0].shape)  # Got torch.Size([20]), expected torch.Size([10, 20])
print(hvps[2][0].shape)  # Got torch.Size([10, 20]), expected torch.Size([10, 10, 20])
print(hvps[3][0].shape)  # Got torch.Size([10]), expected torch.Size([10, 10])
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA TITAN V
GPU 1: NVIDIA TITAN V
GPU 2: NVIDIA TITAN V
GPU 3: NVIDIA TITAN V
GPU 4: NVIDIA TITAN V
GPU 5: NVIDIA TITAN V
GPU 6: NVIDIA TITAN V
GPU 7: NVIDIA TITAN V
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 2
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4599.86
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 8 MiB (32 instances)
L3 cache: 80 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] functorch==2.0.0
[pip3] numpy==1.23.1
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.0
[pip3] torch-optimizer==0.3.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 2 |
2,628 | 101,233 |
How the [WARNING] using triton random, expect difference from eager arises?
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
I just compiled the MAE model from [here](https://github.com/facebookresearch/mae/blob/main/models_mae.py) and got this warning. The program runs normally and nothing breaks.
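If I understand the warning correctly, it just means inductor lowered the random ops (e.g. MAE's random masking) to its own Triton RNG, so the random stream will not bitwise-match eager. My (unverified) understanding is that the following config makes compiled code reuse eager-style RNG instead:
```python
import torch._inductor.config as inductor_config

inductor_config.fallback_random = True  # match eager RNG, possibly at some performance cost
```
Is that the recommended way to silence it, or is the warning safe to ignore for training?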
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] DISTS-pytorch==0.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] open-clip-torch==2.19.0
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[conda] dists-pytorch 0.1 pypi_0 pypi
[conda] numpy 1.24.1 pypi_0 pypi
[conda] open-clip-torch 2.19.0 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchaudio 2.0.1+cu118 pypi_0 pypi
[conda] torchvision 0.15.1+cu118 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 5 |
2,629 | 101,210 |
GLCM implementation in pytorch C++ api and cuda
|
module: cpp, triaged
|
### 🚀 The feature, motivation and pitch
The Gray Level Co-occurrence Matrix (GLCM, https://en.wikipedia.org/wiki/Co-occurrence_matrix) is heavily used to compute a variety of texture features called Haralick features in many bio-medical imaging applications. Currently there is no GPU implementation available using PyTorch. skimage and pyradiomics have CPU implementations available, but they are very slow.
I have only an elementary-level understanding of CUDA, C++, and the Torch C++ API, and, as you know, proper documentation for the Torch C++ API is lacking. I would appreciate your help writing this with the Torch C++ API and CUDA. Thank you.
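As a starting point, here is a naive, single-offset, unnormalized GLCM sketch in pure PyTorch that I put together (untested beyond small inputs, and certainly not as featureful as pyradiomics), assuming the image is already quantized to `levels` integer gray levels:
```python
import torch

def glcm_single_offset(img: torch.Tensor, levels: int, dx: int = 1, dy: int = 0) -> torch.Tensor:
    # img: (H, W) integer tensor with values in [0, levels)
    H, W = img.shape
    src = img[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    dst = img[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
    pair_idx = (src.reshape(-1) * levels + dst.reshape(-1)).long()
    counts = torch.bincount(pair_idx, minlength=levels * levels)
    return counts.reshape(levels, levels).to(torch.float32)

# Example: 8-level GLCM for the (dx=1, dy=0) offset on GPU.
img = (torch.rand(256, 256, device="cuda") * 8).long()
G = glcm_single_offset(img, levels=8)
```
A proper implementation would of course need multiple offsets, normalization/symmetrization, and the downstream Haralick features.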
### Alternatives
Here are two CPU implementations available in Cpython or C.
calculate_glcm in:
https://github.com/AIM-Harvard/pyradiomics/blob/master/radiomics/src/cmatrices.c
GLCM Loop in:
https://github.com/scikit-image/scikit-image/blob/v0.20.0/skimage/feature/_texture.pyx
There is also the following paper, that has a simple comparison of implementation of it on CPU and GPU:
Computation of Gray Level Co-Occurrence Matrix Based on CUDA and Optimization for Medical Computer Vision Application
### Additional context
_No response_
cc @jbschlosser
| 3 |
2,630 | 101,209 |
Migrate windows runners to non-ephemeral instances
|
module: ci, triaged
|
### 🚀 The feature, motivation and pitch
Due to increased pressure on our Windows runners, and the elevated cost of instantiating and tearing down those instances, we want to migrate the instances from ephemeral to non-ephemeral.
Possible impacts are breakages or misbehavior in CI jobs that put the runners in a bad state. Other possible impacts relate to exhaustion of resources, especially disk space, although memory might be a contender as well, as CI trash piles up on those instances.
As a somewhat middle of the road approach to this, currently nonephemeral instances are stochastically rotated as older instances get higher priority to be terminated when demand is lower.
Instances definition can be found here: https://github.com/pytorch/test-infra/pull/4072
* ✅ migrate `windows.4xlarge` to `windows.4xlarge.nonephemeral` instances under `pytorch/pytorch` (#100377)
* 📣 migrate `windows.8xlarge.nvidia.gpu` to `windows.8xlarge.nvidia.gpu.nonephemeral` instances under `pytorch/pytorch` (#104404)
* ⏳ submit PRs to all repositories under `pytorch/` organization to migrate `windows.4xlarge` to `windows.4xlarge.nonephemeral`
* ⏳ submit PRs to all repositories under `pytorch/` organization to migrate `windows.8xlarge.nvidia.gpu` to `windows.8xlarge.nvidia.gpu.nonephemeral`
* ⏳ terminate the existence of `windows.4xlarge` and `windows.8xlarge.nvidia.gpu`
* ⏳ evaluate and start the work related to the adoption of `windows.g5.4xlarge.nvidia.gpu` to replace `windows.8xlarge.nvidia.gpu.nonephemeral` in other repositories and use cases (proposed by @huydhn)
The reasoning for this phased approach is to reduce the scope of possible contenders to investigate in case of misbehave of particular CI jobs.
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 4 |
2,631 | 101,192 |
AOTAutograd export path does not support training graphs with parameters that do not receive gradients.
|
triaged, module: aotdispatch
|
See the comment [here](https://github.com/pytorch/pytorch/pull/100587/files#diff-df954bbf954d2dcb81f687876053267ffa4ddb36ed86b7d2bd76319ff2b94416R3786): today, the AOTAutograd export path expects that every parameter in the module you export will receive some sort of gradient. It's possible to create a case where this doesn't happen. We can support it, but I'd like to gauge how much request there is for this first, since it's more of an edge case.
Example:
```
def f(p1, p2):
# p1 will never receive any gradients. p2 will though
return p1.detach() * p2
```
| 0 |
2,632 | 101,191 |
custom_op API follow-ups
|
triaged
|
@soulitzer, @bdhirsh, @ezyang if I missed any:
Tracking here so I don't need to keep all of them in my head
- [x] test symint works with Sequence[int]
- [x] FakeTensor notimplemented error message
- [x] Stop using python RAII (in general); prefer context-managers.
- [ ] Use the autograd notimplemented fallback
- [x] autograd API
- [ ] way to mark operator as non-differentiable
- [ ] ctx.needs_input_grad
- [ ] custom_op should not graph break on dynamo; dynamo safety rules for custom_op
- [ ] (maybe) CustomOp.impl to specify device-agnostic impl
- [ ] (maybe) torch.compile-like decorator
- [ ] (maybe) ban kwarg-only args
- [ ] (maybe) save_pytree_for_backward doesn't need to run if grad_mode is False
- [ ] Mechanism to wrap an existing custom operator with a CustomOp object to use the new APIs
- [ ] Triton kernels: see if we need to do any special handling. Ideally we would generate a cpp_wrapper (using AOTInductor) for it to lower overhead while running under torch.compile.
- [ ] Finish and ship the [usage documentation](https://docs.google.com/document/d/16YsyKY5uil_HF5mwa6UOtw0AxEHJkZp06NZcSJ3GyJs/edit#heading=h.deyt6pleeqxt)
- [ ] custom_op and determinism (likely needs some way to add the tag)
- [ ] custom_op and randomness (need to ban, likely through testing)
- [ ] custom_op and cuda graphs (Elias tells me not all custom ops work with cuda graphs)
Testing
- [ ] Test that under tracing, the operator has the same view+inplace semantics
- [ ] Test that under AOTAutograd, view+inplace edge cases work out.
| 1 |
2,633 | 101,189 |
`dense -> sparse compressed` to work with empty batches.
|
module: sparse, triaged
|
### 🚀 The feature, motivation and pitch
Currently, it is not possible to convert an empy-batched dense tensor to a sparse compressed tensor.
```python
In [1]: import torch
In [2]: torch.rand(0, 4, 4).to_sparse_bsr(2)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [2], in <cell line: 1>()
----> 1 torch.rand(0, 4, 4).to_sparse_bsr(2)
RuntimeError: to_sparse_bsr: Expected product of batch dimensions to be non-zero.
In [3]: torch.rand(0, 4, 4).to_sparse_csr()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 torch.rand(0, 4, 4).to_sparse_csr()
RuntimeError: to_sparse_csr: Expected product of batch dimensions to be non-zero.
```
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer
| 0 |
2,634 | 101,188 |
Weird dataloader performance degradation caused by torch and numpy import order
|
triaged, module: openmp
|
### 🐛 Describe the bug
Hi,
I recently noticed a weird behavior in Pytorch. The import order of torch and numpy can have a significant impact on the dataloader performance. In short (and see the complete example script below):
```
# Setting A: faster. 100 iterations take 91 seconds. Average load time 0.035.
import torch
import numpy as np
# Setting B: slower. 100 iterations take 158 seconds. Average load time ~0.45.
import numpy as np
import torch
```
It seems this performance is determined by the first time these two packages are imported (e.g., importing these two packages in the main script in B order and the dataloader script in A order, it will end up with B performance).
## Reproduction
I used AWS EC2 machines (and more specifically, zone `us-west-2` with `p4de24xlarge` instances). I am not sure if this is reproducible in other places.
### Step 1: Dockerfile
```
FROM nvidia/cuda:11.3.0-devel-ubuntu20.04
# Fix NV docker problem
RUN apt-key adv --fetch-keys https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64/7fa2af80.pub
# Install handy tools
RUN apt-get update && apt-get install vim htop tmux sudo git wget -y && rm -rf /var/lib/apt/lists/*
# Create user with sudo privilege
RUN addgroup --gid 1000 ubuntu
RUN adduser --disabled-password --gecos '' --uid 1000 --gid 1000 ubuntu
RUN adduser ubuntu sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ubuntu
USER 1000:1000
RUN sudo chown ubuntu:ubuntu /home/ubuntu/ # Actually, not sure if this is even needed
# Conda
ENV PATH="/home/ubuntu/miniconda3/bin:${PATH}"
ARG PATH="/home/ubuntu/miniconda3/bin:${PATH}"
RUN sudo apt-get install -y wget
RUN cd /home/ubuntu/ \
&& wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& bash Miniconda3-latest-Linux-x86_64.sh -b -p /home/ubuntu/miniconda3/ \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda install python=3.9 pytorch torchvision torchaudio pytorch-cuda=11.8 xformers -c pytorch -c nvidia -c xformers -y
RUN pip install tqdm
RUN conda clean --all -y
```
### Step 2: After getting into the environment
Add these lines to `~/.bashrc` and `source ~/.bashrc` to init conda
```bash
conda_root="/home/ubuntu/miniconda3/"
conda_steup_bin="${conda_root}bin/conda"
__conda_setup="$($conda_steup_bin 'shell.bash' 'hook')"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "${conda_root}etc/profile.d/conda.sh" ]; then
        . "${conda_root}etc/profile.d/conda.sh"
    else
        export PATH="${conda_root}bin:$PATH"
    fi
fi
unset __conda_setup
```
### Step 3: Example script
```python
# Switching these two lines will get a different performance
import torch
import numpy as np

from torch.utils.data import Dataset
import time


class MyDataset(Dataset):
    def __getitem__(self, i):
        st = time.time()
        data = {k: np.random.rand(3, 512, 512) for k in range(6)}  # it seems only numpy has the issue
        print(" [*] Worker load time {:.4f}".format(time.time() - st))
        return data

    def __len__(self):
        return 1000000


if __name__ == "__main__":
    from torch.utils.data import DataLoader
    from tqdm import tqdm

    dataloader = DataLoader(MyDataset())
    for data in tqdm(dataloader):
        pass
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.228-132.418.amzn2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.161.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1916.685
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.3 py39hf6e8229_1
[conda] numpy-base 1.24.3 py39h060ed82_1
[conda] pytorch 2.0.0 py3.9_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.0 py39_cu118 pytorch
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision 0.15.0 py39_cu118 pytorch
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 4 |
2,635 | 101,185 |
Pure virtual function call exception on Python interpreter exit when using debug wheel
|
triaged, module: python frontend
|
### 🐛 Describe the bug
I built a debug version of the PyTorch wheel. The problem is that after I import torch in the Python interpreter and then quit the interpreter, I get a pure virtual function call abort from the torch library. From my naive testing, it seems that https://github.com/pytorch/pytorch/blob/4eaaa086231b911ef768dd2d959f837b082efee6/CMakeLists.txt#L888 setting `-fno-omit-frame-pointer -O0` exposes the problem. Commenting out that line while still building in debug mode makes the problem go away.
Steps to reproduce the problem:
```
git clone https://github.com/pytorch/pytorch.git
cd pytorch
git checkout v2.0.1
git submodule sync
git submodule update --init --recursive
curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p `pwd`/miniconda
. `pwd`/miniconda/etc/profile.d/conda.sh
# Hide local python packages from Conda (Conflicts with conan)
export PYTHONNOUSERSITE=1
conda create python=python3.8.10 ninja pyyaml setuptools cmake cffi typing git --prefix `pwd`/buildenv -y
conda activate `pwd`/buildenv
pip3 install -r requirements.txt
export USE_CUDA=0
export USE_MKLDNN=0
export DEBUG=1
export BUILD_TEST=0
export PYTORCH_BUILD_VERSION=2.0.1
export PYTORCH_BUILD_NUMBER=0
python setup.py bdist_wheel
ls -l dist/
cd dist
python3 -m venv test
. /test/bin/activate.sh
pip3 install torch-2.0.1-cp38-cp38-linux_x86_64.whl
python3
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> quit()
pure virtual method called
terminate called without an active exception
Aborted
```
Stack trace:
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7dda859 in __GI_abort () at abort.c:79
#2 0x00007ffff714d911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff715938c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff71593f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff715a155 in __cxa_pure_virtual () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007ffff58a898f in c10::SafePyObject::~SafePyObject (this=0x4458808, __in_chrg=<optimized out>) at ../c10/core/SafePyObject.h:32
#7 0x00007ffff618e7e4 in torch::impl::dispatch::PythonKernelHolder::~PythonKernelHolder (this=0x44587f0, __in_chrg=<optimized out>) at ../torch/csrc/utils/python_dispatch.cpp:103
#8 0x00007ffff618e810 in torch::impl::dispatch::PythonKernelHolder::~PythonKernelHolder (this=0x44587f0, __in_chrg=<optimized out>) at ../torch/csrc/utils/python_dispatch.cpp:103
#9 0x00007fffe46e7df9 in c10::intrusive_ptr<c10::OperatorKernel, c10::detail::intrusive_target_default_null_type<c10::OperatorKernel> >::reset_ (this=0xadbb50) at ../c10/util/intrusive_ptr.h:291
#10 0x00007fffe46e5a4a in c10::intrusive_ptr<c10::OperatorKernel, c10::detail::intrusive_target_default_null_type<c10::OperatorKernel> >::~intrusive_ptr (this=0xadbb50, __in_chrg=<optimized out>) at ../c10/util/intrusive_ptr.h:370
#11 0x00007fffe46e1be8 in c10::BoxedKernel::~BoxedKernel (this=0xadbb50, __in_chrg=<optimized out>) at ../aten/src/ATen/core/boxing/BoxedKernel.h:74
#12 0x00007fffe46e3484 in c10::KernelFunction::~KernelFunction (this=0xadbb50, __in_chrg=<optimized out>) at ../aten/src/ATen/core/boxing/KernelFunction.h:74
#13 0x00007fffe4a842f6 in std::array<c10::KernelFunction, 112ul>::~array (this=0xadb9f0, __in_chrg=<optimized out>) at /usr/include/c++/9/array:94
#14 0x00007fffe4a84396 in c10::impl::OperatorEntry::~OperatorEntry (this=0xadb970, __in_chrg=<optimized out>) at ../aten/src/ATen/core/dispatch/OperatorEntry.h:70
#15 0x00007fffe4a8c2dc in c10::Dispatcher::OperatorDef::~OperatorDef (this=0xadb970, __in_chrg=<optimized out>) at ../aten/src/ATen/core/dispatch/Dispatcher.h:66
#16 0x00007fffe4a8c300 in __gnu_cxx::new_allocator<std::_List_node<c10::Dispatcher::OperatorDef> >::destroy<c10::Dispatcher::OperatorDef> (this=0x7ffff48df7c0 <c10::Dispatcher::realSingleton()::_singleton>, __p=0xadb970) at /usr/include/c++/9/ext/new_allocator.h:152
#17 0x00007fffe4a8a693 in std::allocator_traits<std::allocator<std::_List_node<c10::Dispatcher::OperatorDef> > >::destroy<c10::Dispatcher::OperatorDef> (__a=..., __p=0xadb970) at /usr/include/c++/9/bits/alloc_traits.h:496
#18 0x00007fffe4a880c4 in std::_List_base<c10::Dispatcher::OperatorDef, std::allocator<c10::Dispatcher::OperatorDef> >::_M_clear (this=0x7ffff48df7c0 <c10::Dispatcher::realSingleton()::_singleton>) at /usr/include/c++/9/bits/list.tcc:77
#19 0x00007fffe4a85bbc in std::_List_base<c10::Dispatcher::OperatorDef, std::allocator<c10::Dispatcher::OperatorDef> >::~_List_base (this=0x7ffff48df7c0 <c10::Dispatcher::realSingleton()::_singleton>, __in_chrg=<optimized out>) at /usr/include/c++/9/bits/stl_list.h:495
#20 0x00007fffe4a847b0 in std::list<c10::Dispatcher::OperatorDef, std::allocator<c10::Dispatcher::OperatorDef> >::~list (this=0x7ffff48df7c0 <c10::Dispatcher::realSingleton()::_singleton>, __in_chrg=<optimized out>) at /usr/include/c++/9/bits/stl_list.h:823
#21 0x00007fffe4a7e370 in c10::Dispatcher::~Dispatcher (this=0x7ffff48df7c0 <c10::Dispatcher::realSingleton()::_singleton>, __in_chrg=<optimized out>) at ../aten/src/ATen/core/dispatch/Dispatcher.h:61
#22 0x00007ffff7dfe8a7 in __run_exit_handlers (status=0, listp=0x7ffff7fa4718 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:108
#23 0x00007ffff7dfea60 in __GI_exit (status=<optimized out>) at exit.c:139
#24 0x00007ffff7ddc08a in __libc_start_main (main=0x4efd60 <main>, argc=2, argv=0x7fffffffe308, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe2f8) at ../csu/libc-start.c:342
#25 0x00000000005fc5fe in _start ()
```
Used compiler: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Used python version 3.8.10
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 3079.007
BogoMIPS: 4491.47
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] torch==2.0.1
[conda] Could not collect
pure virtual method called
terminate called without an active exception
Aborted
cc @albanD
| 9 |
2,636 | 101,184 |
Fork run CI from upstream remote (more than 10,000 emails)
|
module: ci, triaged
|
### 🐛 Describe the bug
CI workflows from the upstream remote are being run on a fork, which has resulted in more than 10,000 notification emails.
### Versions

cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 2 |
2,637 | 101,177 |
version 4.26.1 to 4.29.0 has two bugs
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
1. When I train with transformers==4.28.1 on 4 GPUs, export the fp16 model with `model.half()`, and serve it, the service returns different outputs for the same input about 0.4% of the time.
2. When I use the latest transformers==4.29.0, training fails; the same code works when I use transformers==4.28.1:
```
Traceback (most recent call last):
File "train.py", line 289, in <module>
main()
File "train.py", line 282, in main
do_train(args, writer, model, tokenizer, train_loader, valid_loader)
File "train.py", line 125, in do_train
logits = model(*inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 434, in reraise
raise exception
TypeError: Caught TypeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/tiger/business_tagging_platform/yuntu_tagging/mlx_src/model.py", line 273, in forward
logits = module(*inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/tiger/business_tagging_platform/yuntu_tagging/mlx_src/model.py", line 202, in forward
return self._model(input_ids, input_mask, segment_ids)[0]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 993, in forward
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 902, in get_extended_attention_mask
extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(dtype).min
TypeError: torch.finfo() requires a floating point input type. Use torch.iinfo to handle 'torch.finfo'
```
By the way, neither of these two issues occurred with transformers==4.26.1.
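For what it's worth, here is a minimal sketch of my understanding of error 2: `torch.finfo()` only accepts floating-point dtypes, so if the attention mask reaches `get_extended_attention_mask` with an integer dtype, the call fails and `torch.iinfo` would be needed instead. This is an assumption about the trigger, not verified against transformers 4.29.0.
```python
import torch

mask = torch.ones(1, 4, dtype=torch.long)   # integer attention mask (assumed trigger)
try:
    torch.finfo(mask.dtype)                 # only valid for floating-point dtypes
except TypeError as e:
    print(e)                                # "torch.finfo() requires a floating point input type. ..."
print(torch.iinfo(mask.dtype).min)          # integer dtypes need torch.iinfo
```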
### Versions
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.28
Python version: 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-5.4.56.bsk.10-amd64-x86_64-with-debian-10.13
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 450.142.00
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz
Stepping: 7
CPU MHz: 3099.259
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] byted-torch==1.10.0.post35
[pip3] numpy==1.21.5
[pip3] torch==1.10.0
[pip3] torchaudio==0.10.0+cu113
[pip3] torchvision==0.11.1+cu113
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 3 |
2,638 | 101,168 |
[torch.compile] torch._dynamo.exc.Unsupported: setattr(UserDefinedObjectVariable) for yolov7
|
triaged, bug, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
My minimal repro:
```
import torch._dynamo
import torch
import torch.nn as nn
models = nn.Sequential(nn.Linear(3, 3))
@torch._dynamo.optimize("eager", nopython=True)
def run():
    models[0].training = False
    x = torch.randn(1, 3)
    return models(x)
run()
```
stacktrace:
```
Traceback (most recent call last):
File "test.py", line 39, in <module>
run()
File "/home/yj/pytorch/torch/_dynamo/eval_frame.py", line 280, in _fn
return fn(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/eval_frame.py", line 433, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 122, in _fn
return fn(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert
return _compile(
File "/home/yj/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 425, in _compile
out_code = transform_code_object(code, transform)
File "/home/yj/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 410, in transform
tracer.run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2010, in run
super().run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 385, in wrapper
return inner_fn(self, inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 1095, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 554, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yj/pytorch/torch/_dynamo/variables/nn_module.py", line 331, in call_function
return tx.inline_user_function_return(
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 590, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2115, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2193, in inline_call_
tracer.run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 385, in wrapper
return inner_fn(self, inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 554, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 306, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 269, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 102, in call_function
return tx.inline_user_function_return(
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 590, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2115, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2193, in inline_call_
tracer.run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 385, in wrapper
return inner_fn(self, inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 1095, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 554, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yj/pytorch/torch/_dynamo/variables/nn_module.py", line 266, in call_function
tx.call_function(
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 554, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yj/pytorch/torch/_dynamo/variables/nn_module.py", line 700, in call_function
return variables.UserFunctionVariable(
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 269, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 102, in call_function
return tx.inline_user_function_return(
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 590, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2115, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2193, in inline_call_
tracer.run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 385, in wrapper
return inner_fn(self, inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 554, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/yj/pytorch/torch/_dynamo/variables/misc.py", line 418, in call_function
return self.obj.call_method(tx, self.name, args, kwargs).add_options(self)
File "/home/yj/pytorch/torch/_dynamo/variables/nn_module.py", line 753, in call_method
return super().call_method(tx, name, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/user_defined.py", line 272, in call_method
return UserMethodVariable(
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 306, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 269, in call_function
return super().call_function(tx, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/functions.py", line 102, in call_function
return tx.inline_user_function_return(
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 590, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2115, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2193, in inline_call_
tracer.run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 1189, in STORE_ATTR
BuiltinVariable(setattr)
File "/home/yj/pytorch/torch/_dynamo/variables/builtin.py", line 584, in call_function
result = handler(tx, *args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/variables/builtin.py", line 1108, in call_setattr
unimplemented(
File "/home/yj/pytorch/torch/_dynamo/exc.py", line 134, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: setattr(UserDefinedObjectVariable) <function Module.__setattr__ at 0x7fe00fbd2790>
from user code:
File "/home/yj/pytorch/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "test.py", line 16, in forward
self.training |= self.export
Set torch._dynamo.config.verbose=True or TORCHDYNAMO_VERBOSE=1 for more information
```
It is interesting that if we move `model = Outer()` into `def run()`, a different error is thrown, which looks like #101030. My guess on the root cause is that `torch.compile` doesn't work so well with `nn.Sequential`, and therefore doesn't recognize the inner module as a mutable local variable, which then causes the graph break. If we only decorate the inner module via `model.model[0] = dynamo.optimize("eager", nopython=True)(model.model[0])`, then the inner module is marked correctly as a mutable local and does not cause a graph break.
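For what it's worth, a sketch of a workaround that sidesteps the graph break by keeping the attribute mutation outside the compiled region (this does not address the underlying dynamo limitation on `Module.__setattr__`):
```python
import torch
import torch._dynamo
import torch.nn as nn

models = nn.Sequential(nn.Linear(3, 3))

@torch._dynamo.optimize("eager", nopython=True)
def forward_only(x):
    # only the pure forward pass is traced
    return models(x)

models[0].training = False          # plain eager-mode attribute write, never traced
print(forward_only(torch.randn(1, 3)))
```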
### Versions
PyTorch version: 2.0.0a0+gitc263bd4
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 5 |
2,639 | 101,167 |
round float16 calculation error in mps backend
|
triaged, module: mps
|
### 🐛 Describe the bug
import torch
x = torch.tensor([8.5], dtype=torch.half).to("mps")
y = torch.round(x)
y is 9 but should be 8, according to the "round half to even" rule in the [doc](https://pytorch.org/docs/stable/generated/torch.round.html?highlight=torch+round#torch.round).
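For comparison, a quick sketch of the documented tie-breaking behaviour on CPU:
```python
import torch

x = torch.tensor([7.5, 8.5, 9.5])
print(torch.round(x))  # tensor([ 8.,  8., 10.]), ties go to the nearest even value
```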
### Versions
This issue was found in pre-ci, and I don't have a mac to reproduce this bug. For details please see https://github.com/pytorch/pytorch/actions/runs/4944088105/jobs/8839307876
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
2,640 | 101,162 |
DISABLED test_fsdp_tp_checkpoint_integration (__main__.TestTPFSDPIntegration)
|
oncall: distributed, triaged, skipped
|
Platforms: linux
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/distributed%2Ffsdp%2Ftest_fsdp_tp_integration.py%3A%3ATestTPFSDPIntegration%3A%3Atest_fsdp_tp_checkpoint_integration)).
This test recently started failing on the periodic multigpu job: https://github.com/pytorch/pytorch/actions/runs/4942837753/jobs/8837502304
```
=================================== FAILURES ===================================
__________ TestTPFSDPIntegration.test_fsdp_tp_checkpoint_integration ___________
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 541, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 760, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 805, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 174, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_fsdp_tp_integration.py", line 331, in test_fsdp_tp_checkpoint_integration
tp_fsdp_model.load_state_dict(state_dict)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2056, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for FullyShardedDataParallel:
size mismatch for _fsdp_wrapped_module.net1.weight: copying a param with shape torch.Size([4, 5]) from checkpoint, the shape in current model is torch.Size([8, 5]).
size mismatch for _fsdp_wrapped_module.net1.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([8]).
size mismatch for _fsdp_wrapped_module.net2.weight: copying a param with shape torch.Size([4, 4]) from checkpoint, the shape in current model is torch.Size([4, 8]).
While copying the parameter named "_fsdp_wrapped_module.net2.bias", whose dimensions in the model are torch.Size([4]) and whose dimensions in the checkpoint are torch.Size([4]), an exception occurred : ('aten.copy_.default: got mixed distributed and non-distributed tensors.',).
----------------------------- Captured stdout call -----------------------------
Process 2 terminated with exit code 10, terminating remaining processes.
------------------------------ Captured log call -------------------------------
INFO numba.cuda.cudadrv.driver:driver.py:245 init
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 0 with pid 52714
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 1 with pid 52715
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 2 with pid 52716
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 3 with pid 52717
----------------------------- Captured stdout call -----------------------------
Process 0 terminated with exit code 10, terminating remaining processes.
------------------------------ Captured log call -------------------------------
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 0 with pid 52791
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 1 with pid 52792
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 2 with pid 52793
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 3 with pid 52794
----------------------------- Captured stdout call -----------------------------
Process 1 terminated with exit code 10, terminating remaining processes.
------------------------------ Captured log call -------------------------------
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 0 with pid 52868
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 1 with pid 52869
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 2 with pid 52870
INFO torch.testing._internal.common_distributed:common_distributed.py:589 Started process 3 with pid 52871
- generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/distributed.fsdp.test_fsdp_tp_integration/distributed.fsdp.test_fsdp_tp_integration-620ff0cafbb720dd.xml -
=========================== short test summary info ============================
FAILED [5.7867s] distributed/fsdp/test_fsdp_tp_integration.py::TestTPFSDPIntegration::test_fsdp_tp_checkpoint_integration - RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 174, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_fsdp_tp_integration.py", line 331, in test_fsdp_tp_checkpoint_integration
tp_fsdp_model.load_state_dict(state_dict)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2056, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for FullyShardedDataParallel:
size mismatch for _fsdp_wrapped_module.net1.weight: copying a param with shape torch.Size([4, 5]) from checkpoint, the shape in current model is torch.Size([8, 5]).
size mismatch for _fsdp_wrapped_module.net1.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([8]).
size mismatch for _fsdp_wrapped_module.net2.weight: copying a param with shape torch.Size([4, 4]) from checkpoint, the shape in current model is torch.Size([4, 8]).
While copying the parameter named "_fsdp_wrapped_module.net2.bias", whose dimensions in the model are torch.Size([4]) and whose dimensions in the checkpoint are torch.Size([4]), an exception occurred : ('aten.copy_.default: got mixed distributed and non-distributed tensors.',).
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 4 |
2,641 | 101,160 |
Fine-tuning HuggingFace wav2vec 2.0 with `torch.compile`
|
high priority, oncall: distributed, module: crash, triaged, module: nccl, ezyang's list, bug, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
I followed the example to fine-tune HuggingFace's wav2vec 2.0 for speech recognition, using `torch.compile`, aiming to get faster training. However, I ran into an issue as outlined in the error logs.
I suspect that HuggingFace's wav2vec 2.0 is not yet supported in PyTorch 2.0 and needs some modification to ensure compatibility when running `torch.compile`. It's mostly related to creating mask tensors for SpecAugment.
This issue also seems related: [fairseq Hubert with torch.compile](https://github.com/pytorch/pytorch/issues/97511). The same issue was also raised in [HuggingFace](https://github.com/huggingface/transformers/issues/22849).
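For context, a minimal sketch of the pattern I believe triggers the failure (an assumption on my part): SpecAugment builds a boolean mask and indexes the hidden states with it, and boolean indexing produces a tensor whose shape depends on the mask values, so the fake-tensor tracer cannot infer a static output shape.
```python
import torch

def masked_sum(hidden_states, mask):
    # The length of hidden_states[mask] depends on the *values* in `mask`,
    # not just its shape, hence the dynamic-output-shape error.
    return hidden_states[mask].sum()

hidden_states = torch.randn(2, 10, 4)
mask = torch.rand(2, 10) > 0.5
print(masked_sum(hidden_states, mask))              # eager works fine

compiled = torch.compile(masked_sum, fullgraph=True)
print(compiled(hidden_states, mask))                # expected to raise a dynamic-shape error on 2.0.x
```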
### Error logs
```
***** Running training *****
Num examples = 3,478
Num Epochs = 15
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 1
Total optimization steps = 13,050
Number of trainable parameters = 311,270,569
0%| | 0/13050 [00:00<?, ?it/sTraceback (most recent call last):
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/__init__.py", line 1390, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 455, in compile_fx
return aot_autograd(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/backends/common.py", line 48, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2822, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2515, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1715, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2104, in aot_dispatch_autograd
fx_g = make_fx(joint_forward_backward, aot_config.decompositions)(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 714, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 443, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace
(self.create_arg(fn(*args)),),
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn
tree_out = root_fn(*tree_args)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1158, in traced_joint
return functionalized_f_helper(primals, tangents)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1110, in functionalized_f_helper
f_outs = flat_fn_no_input_mutations(fn, f_primals, f_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1078, in flat_fn_no_input_mutations
outs = flat_fn_with_synthetic_bases_expanded(fn, primals, primals_after_cloning, maybe_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1050, in flat_fn_with_synthetic_bases_expanded
outs = forward_or_joint(fn, primals_before_cloning, primals, maybe_tangents, meta, keep_input_mutations)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1019, in forward_or_joint
backward_out = torch.autograd.grad(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 269, in grad
return handle_torch_function(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/overrides.py", line 1534, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_inductor/overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/autograd/__init__.py", line 303, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 487, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 512, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 345, in proxy_call
out = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1162, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 453, in index_tensor
check_no_bool_index_tensors(func, *args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 432, in check_no_bool_index_tensors
raise DynamicOutputShapeException(func)
torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.index.Tensor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/wilson_bookbotkids_com/transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py", line 775, in <module>
main()
File "/home/wilson_bookbotkids_com/transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py", line 723, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1664, in train
return inner_training_loop(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 1940, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/trainer.py", line 2767, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1684, in forward
outputs = self.wav2vec2(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1316, in forward
hidden_states = self._mask_hidden_states(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1249, in _mask_hidden_states
if not getattr(self.config, "apply_spec_augment", True):
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1259, in <graph break in _mask_hidden_states>
mask_time_indices = _compute_mask_indices(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1266, in <graph break in _mask_hidden_states>
mask_time_indices = torch.tensor(mask_time_indices, device=hidden_states.device, dtype=torch.bool)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1792, in RETURN_VALUE
self.output.compile_subgraph(
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 517, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 588, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/torch/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 675, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised DynamicOutputShapeException: aten.index.Tensor
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Minified repro
Install HuggingFace Transformers from source:
```
git clone https://github.com/huggingface/transformers.git
pip install transformers/
```
Run training script
```sh
python transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-common_voice-tr-demo-dist" \
--preprocessing_num_workers="16" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="4" \
--gradient_accumulation_steps="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--save_steps="400" \
--eval_steps="100" \
--logging_steps="1" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--do_train --do_eval \
--torch_compile True
```
### Versions
<details>
<summary>Versions</summary>
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.28
Python version: 3.9.0 | packaged by conda-forge | (default, Nov 26 2020, 07:57:39) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.0.76
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.60.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.152
BogoMIPS: 4400.30
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1+cu117 pypi_0 pypi
[conda] torchaudio 2.0.2+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
</details>
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith
| 12 |
2,642 | 101,159 |
Inconsistency between GPU memory usage in torch.cuda.memory_summary and nvidia-smi
|
module: cuda, triaged
|
### 🐛 Describe the bug
I also encountered a similar problem, where PyTorch reports memory usage that is inconsistent between the vGPU and the actual GPU.
The GPU memory capacity obtained through PyTorch is 5 GB, but checking with nvidia-smi shows 22 GB. I have tried many things, such as upgrading both Torch and CUDA, but the bug still persists.
https://github.com/pytorch/pytorch/issues/37250
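A small cross-check sketch of what the process itself can see (note: `torch.cuda.mem_get_info` may not exist on older releases such as 1.10):
```python
import torch

props = torch.cuda.get_device_properties(0)
print(f"total memory seen by PyTorch: {props.total_memory / 1024**3:.2f} GB")

free, total = torch.cuda.mem_get_info(0)            # driver-level numbers
print(f"driver-reported free/total:   {free / 1024**3:.2f} / {total / 1024**3:.2f} GB")
```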
### Versions
## env
- OS:unbantu
- Python:python3.9
- Transformers:transformers==4.27.1
- PyTorch:torch==1.10.1 / torch==2.0.0 same
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :True
```python
Python 3.9.16 (main, Dec 7 2022, 01:11:58)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
>>> gpu_id = torch.cuda.current_device()
>>> torch.cuda.set_device(0)
>>> batch_size = 1024
>>> data_shape = (3, 224, 224)
>>> tensor = torch.zeros([batch_size] + list(data_shape)).cuda(device=0)
>>> batch_size = 10240
>>> tensor = torch.zeros([batch_size] + list(data_shape)).cuda(device=0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: CUDA out of memory. Tried to allocate 5.74 GiB (GPU 0; 5.50 GiB total capacity; 588.00 MiB already allocated; 4.13 GiB free; 588.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
>>>
>>> torch.set_default_tensor_type(torch.FloatTensor)
>>> total_memory = torch.cuda.max_memory_allocated()
>>> free_memory = torch.cuda.max_memory_cached()
/home/admin/langchain-ChatGLM/venv/lib/python3.9/site-packages/torch/cuda/memory.py:392: FutureWarning: torch.cuda.max_memory_cached has been renamed to torch.cuda.max_memory_reserved
warnings.warn(
>>>
>>> print(f"Total memory: {total_memory / (1024 ** 3):.2f} GB")
Total memory: 0.57 GB
>>> print(f"Free memory: {free_memory / (1024 ** 3):.2f} GB")
Free memory: 0.57 GB
>>> print(f"status:{torch.cuda.memory_summary()}")
Initial state:|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 1 | cudaMalloc retries: 1 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 602124 KB | 602124 KB | 602124 KB | 0 B |
| from large pool | 602112 KB | 602112 KB | 602112 KB | 0 B |
| from small pool | 12 KB | 12 KB | 12 KB | 0 B |
|---------------------------------------------------------------------------|
| Active memory | 602124 KB | 602124 KB | 602124 KB | 0 B |
| from large pool | 602112 KB | 602112 KB | 602112 KB | 0 B |
| from small pool | 12 KB | 12 KB | 12 KB | 0 B |
|---------------------------------------------------------------------------|
| GPU reserved memory | 604160 KB | 604160 KB | 604160 KB | 0 B |
| from large pool | 602112 KB | 602112 KB | 602112 KB | 0 B |
| from small pool | 2048 KB | 2048 KB | 2048 KB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 2035 KB | 2040 KB | 2040 KB | 4608 B |
| from large pool | 0 KB | 0 KB | 0 KB | 0 B |
| from small pool | 2035 KB | 2040 KB | 2040 KB | 4608 B |
|---------------------------------------------------------------------------|
| Allocations | 5 | 5 | 5 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 4 | 4 | 4 | 0 |
|---------------------------------------------------------------------------|
| Active allocs | 5 | 5 | 5 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 4 | 4 | 4 | 0 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 2 | 2 | 2 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 1 | 1 | 1 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 1 | 1 | 1 | 0 |
| from large pool | 0 | 0 | 0 | 0 |
| from small pool | 1 | 1 | 1 | 0 |
|---------------------------------------------------------------------------|
| Oversize allocations | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Oversize GPU segments | 0 | 0 | 0 | 0 |
|===========================================================================|
```
```shell
# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Thu May 11 02:39:19 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.172.01 Driver Version: 450.172.01 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P40 Off | 00000000:88:00.0 Off | 0 |
| N/A 43C P0 50W / 250W | 2924MiB / 22919MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
cc @ngimel
| 1 |
2,643 | 101,154 |
[Dynamo] TB hf_Reformer graph breaks
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Repro:
```
import torch
import logging
import sys
import torch._dynamo
# torch._logging.set_logs(dynamo=logging.DEBUG, bytecode=True)
torch._dynamo.config.print_graph_breaks = True
import torch.nn as nn
import torch.nn.functional as F
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.linear = torch.nn.Linear(5, 5)
        self.dropout = torch.nn.Dropout()

    def _init_attention_seed(self):
        """
        This function sets a new seed for the attention layer to make dropout deterministic for both forward calls: 1
        normal forward call and 1 forward call in backward to recalculate activations.
        """
        # randomize seeds
        # use cuda generator if available
        if hasattr(torch.cuda, "default_generators") and len(torch.cuda.default_generators) > 0:
            # GPU
            device_idx = torch.cuda.current_device()
            self.attention_seed = torch.cuda.default_generators[device_idx].seed()
        else:
            # CPU
            self.attention_seed = int(torch.seed() % sys.maxsize)
        torch.manual_seed(self.attention_seed)

    def forward(self, x):
        self._init_attention_seed()
        return self.dropout(self.linear(x))
x = torch.randn(5, 5)
m = MyModel()
print(m(x))
opt_m = torch.compile(backend="eager")(m)
print(opt_m(x))
```
There are several graph breaks:
```
[2023-05-11 04:12:58,513] torch._dynamo.symbolic_convert: [WARNING] Graph break: hasattr: TorchVariable(<module 'torch.cuda' from '/scratch/ybliang/work/repos/pytorch/torch/cuda/__init__.py'>) from user code at File "/scratch/ybliang/work/repos/debug/debug3.py", line 39, in forward
self._init_attention_seed()
File "/scratch/ybliang/work/repos/debug/debug3.py", line 28, in _init_attention_seed
if hasattr(torch.cuda, "default_generators") and len(torch.cuda.default_generators) > 0:
[2023-05-11 04:12:58,748] torch._dynamo.symbolic_convert: [WARNING] Graph break: inlining disallowed: <function current_device at 0x7f2ec26d8430> from user code at File "/scratch/ybliang/work/repos/debug/debug3.py", line 30, in <resume in _init_attention_seed>
device_idx = torch.cuda.current_device()
[2023-05-11 04:12:58,754] torch._dynamo.symbolic_convert: [WARNING] Graph break: call_method UserDefinedObjectVariable(seed) __call__ [] {} from user code at File "/scratch/ybliang/work/repos/debug/debug3.py", line 31, in <resume in _init_attention_seed>
self.attention_seed = torch.cuda.default_generators[device_idx].seed()
```
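For reference, a sketch of a variant that keeps the seed bookkeeping out of the compiled region; in my understanding this avoids the graph breaks above, though it is not a fix for the unsupported calls themselves (reuses `m` and `x` from the repro):
```python
import sys
import torch

seed = int(torch.seed() % sys.maxsize)              # seed once, in eager mode
torch.manual_seed(seed)

@torch.compile(backend="eager", fullgraph=True)
def forward_only(model, x):
    return model.dropout(model.linear(x))

print(forward_only(m, x))
```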
### Versions
N/A
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 3 |
2,644 | 101,152 |
Importing torch after TensorFlow results in std::runtime_error
|
triaged, module: tensorflow
|
### 🐛 Describe the bug
The following minimal example (`Dockerfile`, just run `docker build` with it ) reproduces the error:
```Dockerfile
FROM python:3.11.3
RUN pip install tensorflow==2.12.0 torch==2.0.1
RUN python -c "import tensorflow as tf; import torch"
```
```
terminate called after throwing an instance of 'std::runtime_error'
what(): random_device could not be read
Aborted (core dumped)
```
It does not happen when importing torch first.
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.11.3 (main, May 4 2023, 05:53:32) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 94
Model name: Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
Stepping: 3
CPU MHz: 3300.000
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 6599.98
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 6 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht
| 6 |
2,645 | 101,150 |
[ONNX] OnnxFunction of aten_index_put_bool operation isn't consistent with aten::index_put in the FX exporter
|
module: onnx, triaged, onnx-triaged
|
### 🐛 Describe the bug
The following xfail entry is needed in `test_fx_op_consistency.py`:
```python
xfail(
"index_put",
matcher=lambda sample: (sample.args[0][0].dtype == torch.bool) and (sample.kwargs.get("accumulate") == False),
reason=onnx_test_common.reason_dynamo_does_not_support("Unknown reason!"),
),
```
However, the ops_test.py test for this OnnxFunction actually passes.
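For reference, the eager-mode pattern this xfail targets, as I understand it (boolean index, accumulate=False), works fine in plain PyTorch:
```python
import torch

t = torch.zeros(4)
mask = torch.tensor([True, False, True, False])
t.index_put_((mask,), torch.tensor([1.0, 2.0]), accumulate=False)
print(t)  # tensor([1., 0., 2., 0.])
```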
cc @justinchuby
### Versions
Nightly
| 1 |
2,646 | 101,138 |
Fault and vauge error when invoking nvcc: The system cannot find the file specified
|
module: windows, module: cpp-extensions, triaged
|
### 🐛 Describe the bug
The issue is torch faulting unexpectedly and not reporting anything useful on Windows. I ran into the issue trying to run setup_cuda from https://github.com/qwopqwop200/GPTQ-for-LLaMa.git,
which gave me only a mysterious
```
error: [WinError 2] The system cannot find the file specified
```
After debugging setup_cuda with pdb, I found that this came from the torch library.
To reproduce:
The first part of the issue is a faulty CUDA_HOME being set in `cpp_extension._find_cuda_home`, because it discovers the path through `where nvcc`, which outputs two different locations.
```python
import subprocess
import os
from torch.utils import cpp_extension
print(subprocess.check_output(['where', 'nvcc'], stderr=open(os.devnull, 'w')))
print(cpp_extension._find_cuda_home())
```
which yields
```
b'C:\\Users\\zakar\\miniconda3\\envs\\tgwui\\Library\\bin\\nvcc.exe\r\nC:\\Users\\zakar\\miniconda3\\envs\\tgwui\\bin\\nvcc.exe\r\n'
'C:\\Users\\zakar\\miniconda3\\envs\\tgwui\\Library\\bin\\nvcc.exe\r\nC:\\Users\\zakar\\miniconda3\\envs\\tgwui'
```
This is then used when checking CUDA's version; the following reproduces my specific configuration:
```python
from torch.utils import cpp_extension
cpp_extension._check_cuda_version('cl', '19.29.30148')
```
which yields
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\zakar\miniconda3\envs\tgwui\Lib\site-packages\torch\utils\cpp_extension.py", line 376, in _check_cuda_version
cuda_version_str = subprocess.check_output([nvcc, '--version']).strip().decode(*SUBPROCESS_DECODE_ARGS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zakar\miniconda3\envs\tgwui\Lib\subprocess.py", line 466, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zakar\miniconda3\envs\tgwui\Lib\subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zakar\miniconda3\envs\tgwui\Lib\subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\zakar\miniconda3\envs\tgwui\Lib\subprocess.py", line 1509, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
Throughout my CUDA setup I only saw it output "[WinError 2] The system cannot find the file specified" without any clarification, so this case should be handled and reported more clearly.
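As a user-side mitigation (a sketch only, not the proper fix inside `cpp_extension`; the helper name is mine), one can keep just the first `where nvcc` hit and export `CUDA_HOME` before importing `torch.utils.cpp_extension`:
```python
import os
import subprocess

def first_nvcc_cuda_home():
    # `where nvcc` can print several matches; only keep the first line.
    out = subprocess.check_output(["where", "nvcc"]).decode().strip()
    first_nvcc = out.splitlines()[0]
    # Strip "\bin\nvcc.exe" to get the CUDA root directory.
    return os.path.dirname(os.path.dirname(first_nvcc))

os.environ.setdefault("CUDA_HOME", first_nvcc_cuda_home())
```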
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.20.21032501-MSVC_2
Libc version: N/A
Python version: 3.11.3 | packaged by Anaconda, Inc. | (main, Apr 19 2023, 23:46:34) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 531.68
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2592
DeviceID=CPU0
Family=198
L2CacheSize=1536
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2592
Name=Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] cudatoolkit-dev 11.7.0 h9f2f4db_6 conda-forge
[conda] mkl 2023.1.0 h8bd8f75_46356
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl_fft 1.3.6 py311hf62ec03_1
[conda] mkl_random 1.2.2 py311hf62ec03_1
[conda] numpy 1.24.3 py311hdab7c0b_1
[conda] numpy-base 1.24.3 py311hd01c5d8_1
[conda] pytorch 2.0.1 py3.11_cuda11.7_cudnn8_0 pytorch
[conda] pytorch-cuda 11.7 h16d0643_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
```
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @malfet @zou3519
| 0 |
2,647 | 101,137 |
[PyTorch/Triton with ROCm 5.5] torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised TypeError: 'NoneType' object is not subscriptable
|
module: rocm, triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
I'm attempting to train a new model using the DreamBooth extension with Automatic1111's webui for Stable Diffusion. I'm currently running Torch, Vision and Triton compiled with ROCm support.
Error message:
```
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
Error: Attempting to get amgpu ISA Details 'NoneType' object has no attribute 'group'
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
I found myself at /home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/_C/../third_party/rocm/lib/libdrm_amdgpu.so 170
Error: Attempting to get amgpu ISA Details 'NoneType' object has no attribute 'group'
Traceback (most recent call last):
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 670, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/debug_utils.py", line 1055, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/__init__.py", line 1388, in __call__
from torch._inductor.compile_fx import compile_fx
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
from . import config, metrics, overrides, pattern_matcher
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
from . import config, ir
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_inductor/ir.py", line 29, in <module>
from . import config, dependencies
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
from .codegen.common import index_prevent_reordering
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
from ..utils import (
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_inductor/utils.py", line 32, in <module>
from triton.testing import do_bench
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/__init__.py", line 20, in <module>
from .runtime import (
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import Config, Heuristics, autotune, heuristics
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/runtime/autotuner.py", line 7, in <module>
from ..compiler import OutOfResources
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/compiler.py", line 1895, in <module>
@static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/pytorch_triton_rocm-2.0.2-py3.8-linux-x86_64.egg/triton/compiler.py", line 1874, in _get_amdgcn_bitcode_paths
gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
TypeError: 'NoneType' object is not subscriptable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/SD/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 727, in start_training
result = main(class_gen_method=class_gen_method)
File "/home/user/SD/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1371, in main
return inner_loop()
File "/home/user/SD/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 119, in decorator
return function(batch_size, grad_size, prof, *args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1169, in inner_loop
noise_pred = unet(
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/diffusers/models/unet_2d_condition.py", line 556, in forward
t_emb = self.time_proj(timesteps)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1792, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 517, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 588, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/user/SD/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 675, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised TypeError: 'NoneType' object is not subscriptable
Set torch._dynamo.config.verbose=True for more information
```
### Versions
```
$ python3.8 collect_env.py
[W interface.cpp:47] Warning: Loading nvfuser library failed with: Error in dlopen: libnvfuser_codegen.so: cannot open shared object file: No such file or directory (function LoadingNvfuserLibrary)
Collecting environment information...
PyTorch version: 2.0.0a0+gitc263bd4
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.5.30201-c1741e9b
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.8.16 (default, Dec 7 2022, 01:12:06) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Radeon RX 7900 XTX
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.5.30201
MIOpen runtime version: 2.19.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 7600X 6-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5452,7339
CPU min MHz: 3000,0000
BogoMIPS: 9382.61
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] lion-pytorch==0.0.8
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-lightning==1.7.7
[pip3] pytorch-triton-rocm==2.0.2
[pip3] torch==2.0.0a0+gitc263bd4
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.1a0+42759b1
[pip3] triton==2.1.0
[conda] Could not collect
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 9 |
2,648 | 101,135 |
PyTorch compile failure on Windows with CUDA 12.1 because of missing NVTX component
|
module: build, module: windows, triaged
|
### 🐛 Describe the bug
**Procedure to reproduce:**
1. For a new windows system
2. Install Miniconda and create a python=3.11 environment
3. Install cuda 12.1 with all components
4. git clone pytorch and run `python setup.py develop`
5. It fails.
**Reason:** There is no longer an NVTX component in CUDA 12.1, while PyTorch assumes it is installed at `C:\Program Files\NVIDIA Corporation\NvToolsExt`.
**Solution:** Open the CUDA 11.8 installer and install the NVTX component.
**Expected Behavior:** PyTorch should compile with only CUDA 12.1 installed; we shouldn't need to install an extra NVTX component from CUDA 11.8.
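If NVTX is installed somewhere other than the hard-coded default, a possible workaround is to point the build at it explicitly. Note that the `NVTOOLSEXT_PATH` override is an assumption based on my reading of `cmake/public/cuda.cmake`, and the install path below is hypothetical:
```python
import os
import subprocess

# Assumption: the Windows CMake code honors NVTOOLSEXT_PATH when locating NvToolsExt.
os.environ["NVTOOLSEXT_PATH"] = r"D:\sdk\NvToolsExt"  # hypothetical NVTX install location
subprocess.check_call(["python", "setup.py", "develop"], cwd=r"D:\workspace\pytorch")
```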
**Error Log:**
```
Building wheel torch-2.1.0a0+git4052741
-- Building version 2.1.0a0+git4052741
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=D:\workspace\pytorch\torch -DCMAKE_PREFIX_PATH=C:\Users\ain-s\miniconda3\Lib\site-packages -DJAVA_HOME=C:\Program Files\Java\jdk1.8.0_301 -DNUMPY_INCLUDE_DIR=C:\Users\ain-s\miniconda3\lib\site-packages\numpy\core\include -DPYTHON_EXECUTABLE=C:\Users\ain-s\miniconda3\python.exe -DPYTHON_INCLUDE_DIR=C:\Users\ain-s\miniconda3\Include -DPYTHON_LIBRARY=C:\Users\ain-s\miniconda3/libs/python310.lib -DTORCH_BUILD_VERSION=2.1.0a0+git4052741 -DUSE_NUMPY=True D:\workspace\pytorch
CMake Warning at CMakeLists.txt:368 (message):
TensorPipe cannot be used on Windows. Set it to OFF
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Current compiler supports avx512f extension. Will build fbgemm.
CMake Error at cmake/public/cuda.cmake:69 (message):
Failed to find nvToolsExt
Call Stack (most recent call first):
cmake/Dependencies.cmake:43 (include)
CMakeLists.txt:710 (include)
-- Configuring incomplete, errors occurred!
```
### Versions
I've since installed successfully. If you need the script output for the error case, I could uninstall NVTX and retry in a few days.
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht
| 4 |
2,649 | 101,117 |
Barriers to using torch.compile directly in PyTorch library code
|
triaged, oncall: pt2, module: inductor, module: dynamo
|
### 🐛 Describe the bug
There are many places in the PyTorch eager API where we could conceivably write a kernel for something, but it's annoying / time consuming to do so / it increases our binary size.
For example, in https://github.com/pytorch/pytorch/pull/101115 we don't have a deterministic implementation of upsample_bilinear backwards, and implementing one would be pretty annoying (the most plausible approach is to redo the CUDA kernel with the iteration space on grad_input rather than grad_output (which necessitates atomic adds)). However, we could instead use torch.compile to compile us an implementation of the backwards! (Assuming that torch.compile has a deterministic implementation of scatter; it doesn't, but we could conceivably get one by falling back to eager's deterministic implementation.)
So ideally, you'd like to modify code in PyTorch's Python frontend to look like this:
```
if input.dim() == 4 and mode == "bilinear":
assert align_corners is not None
if antialias:
return torch._C._nn._upsample_bilinear2d_aa(input, output_size, align_corners, scale_factors)
if torch.are_deterministic_algorithms_enabled() and input.is_cuda:
return torch.compile(torch._C._nn.upsample_bilinear2d, dynamic=True)(input, output_size, align_corners, scale_factors)
else:
return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors)
```
However, there are a few problems to doing this:
1. This breaks torch.compile! If you modify a function which is allow_in_graph'ed, during Dynamo execution we will send fake tensors through the arguments, but if you recursively hit torch.compile it will choke as it doesn't know how to deal with fake tensors.
2. The warmup time is bad! It takes a long time to compile, and if you hit the function at multiple sizes we will compile a few times before quiescing, even with dynamic=True. https://github.com/pytorch/pytorch/pull/101027 could potentially help here.
3. It goops up observability! Most users of our logging and debug tools assume that they will only get logs for their actual model, and no PT2 logs when they are running stock eager code. This breaks this assumption, since random eager code can trigger compile. (I also ran into this with https://github.com/pytorch/pytorch/pull/99809)
4. Performance might be bad! These are small graphs and guard overhead is a lot more notable in these situations.
What I am kind of imagining is that we need some alternative frontend to torch.compile which is specialized for "single function optimize me" use case. This frontend can be export oriented; e.g., we won't recompile unless you explicitly bump a version number or something and should aggressively cache its compile products to disk.
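To make the shape concrete, here is a rough sketch of what that specialized frontend could look like from the call site (purely illustrative; `compile_once`, its caching policy, and the absence of disk caching are all made up):
```python
import functools
import torch

@functools.lru_cache(maxsize=None)
def compile_once(fn):
    # Compile a given callable once per process and reuse it; a real version
    # would cache compiled artifacts on disk, keyed by an explicit version tag.
    return torch.compile(fn, dynamic=True)

def deterministic_upsample_bilinear2d(input, output_size, align_corners, scale_factors):
    compiled = compile_once(torch._C._nn.upsample_bilinear2d)
    return compiled(input, output_size, align_corners, scale_factors)
```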
cc @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith @Chillee
### Versions
master
| 0 |
2,650 | 101,110 |
Tensorboard graph tracing with torch fx API
|
triaged, module: tensorboard, module: fx
|
### 🚀 The feature, motivation and pitch
The current implementation of adding a model graph to tensorboard is not perfect:
- A brittle heuristic is used to assemble the scopes, in- and outputs of Modules and operations in the graph, based on strings extracted from the trace
- This leads to [silently](https://github.com/pytorch/pytorch/issues/33691) [wrong](https://github.com/pytorch/pytorch/issues/65652), [confusing](https://github.com/pytorch/pytorch/issues/39292) or [useless](https://github.com/pytorch/pytorch/issues/37387) graphs sometimes
- sometimes operations have duplicate inputs / outputs in the graph
- operations are labeled with a meaningless number instead of the name of the operation
- in general, pytorch model graphs are just subpar compared to tensorflow models
- [The graphing breaks completely with zero-dim tensors](https://github.com/pytorch/pytorch/issues/88707)
🚀 My idea / feature request is now to build the tensorboard graph from a symbolic trace using the newer `torch.fx` API.
As I understand it, it contains all the necessary information and good scopes that help correct nesting of modules in the graph.
Graphing a general `fx` IR to a tensorboard may also be an excellent way of visually debugging `fx` transformations.
This may also help with the issue where some users want a model graph without having to provide concrete input values: #37334.
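For illustration, here is a tiny sketch showing that an fx trace already carries the structure the TensorBoard writer currently tries to recover from strings (node names, op kinds, targets, and exact producer/consumer edges):
```python
import torch
import torch.fx

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.lin(x))

gm = torch.fx.symbolic_trace(Tiny())
for node in gm.graph.nodes:
    # node.op is one of placeholder / call_module / call_function / output,
    # node.target is the qualified module name or the function object,
    # node.args gives the exact producer -> consumer wiring of the graph.
    print(node.op, node.name, node.target, node.args)
```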
### Alternatives
- Continue using the status quo with the buggy parsing of torch jit traces
- Currently, the trace parsing discards `ClassType` nodes. Investigate if including information from these nodes may help create more accurate graphs
### Additional context
See some of the linked issues for images where the tensorboard graph is not satisfactory.
The code for parsing the jit trace is here: https://github.com/pytorch/pytorch/blob/e9ebda29d87ce0916ab08c06ab26fd3766a870e5/torch/utils/tensorboard/_pytorch_graph.py#L236
cc @ezyang @SherlockNoMad @soumith @EikanWang @jgong5 @wenzhe-nrv
| 6 |
2,651 | 101,107 |
Make compiled models serializable
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Serializing a compiled model with `pickle` fails with `Can't pickle local object 'convert_frame.<locals>._convert_frame'` and `cannot pickle 'ConfigModuleInstance' object` when using `dill`.
A Colab with an example:
https://colab.research.google.com/drive/1v6jUUq86ql1Era4X47cIDj7bzrrz2RZe?usp=sharing
In Hugging Face Datasets, this error stops us from generating (deterministic) hashes for transforms (functions) that reference a compiled model, meaning such transforms cannot be cached and must be re-computed each time a dataset is transformed.
(The "export" API for the compiled models would also work for us.)
### Error logs
_No response_
### Minified repro
_No response_
### Versions
<details>
<summary>Colab env with torch 2.0.1 installed</summary>
```
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 5 2023, 14:15:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2200.196
BogoMIPS: 4400.39
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
</details>
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 5 |
2,652 | 101,106 |
addmv doesn't do type promotion correctly,
|
triaged, module: type promotion, module: linear algebra
|
Google colab test:
Input:
```
input = torch.rand([3], dtype=torch.complex128, requires_grad=True)
vec1 = torch.rand([3, 2], dtype=torch.float64, requires_grad=True)
vec2 = torch.rand([2], dtype=torch.float64, requires_grad=True)
res = torch.Tensor.addmv(input, vec1, vec2)
desired_dtype = torch.promote_types(torch.complex128, torch.float64)
print(res.dtype)
print(desired_dtype)
```
Output:
```
torch.float64
torch.complex128
<ipython-input-3-e284f66968e0>:5: UserWarning: Casting complex values to real discards the imaginary part (Triggered internally at ../aten/src/ATen/native/Copy.cpp:276.)
res = torch.Tensor.addmv(input, vec1, vec2)
```
_Originally posted by @gerhean in https://github.com/pytorch/pytorch/issues/99984#issuecomment-1542566779_
cc @nairbv @mruberry @jianyuh @nikitaved @pearu @walterddr @IvanYashchuk @xwang233 @Lezcano
| 2 |
2,653 | 101,096 |
[BE] Refactor logic for MultiTensorApply
|
triaged, better-engineering, actionable, module: mta
|
### 🚀 The feature, motivation and pitch
Context: https://github.com/pytorch/pytorch/pull/100811#discussion_r1187576042
It would be a valuable next step to refactor the MultiTensorApply code for clarity. The code is currently convoluted; we should break the for-loop up into separate preprocessing and kernel-call steps. Desirable outcomes:
- allow people to more easily understand what is going on
- lessen the chance that bugs will be introduced accidentally
### Alternatives
do nothing
### Additional context
_No response_
cc @crcrpar @mcarilli @ngimel
| 4 |
2,654 | 101,091 |
Cannot export quantized model to onnx: cannot call qscheme on UnknownQuantizer
|
module: onnx, oncall: quantization, triaged
|
### 🐛 Describe the bug
I am trying to convert a quantized model to ONNX, and it fails with the error shown below. This issue may be related to #100654, since a pretrained PyTorch model can be quantized and converted to ONNX without error.
[Code](https://gist.github.com/chinmayjog13/c66226681d538382014a9140e0bc8de1)
**Output**- contains quantized model graph and the error message
```
(v_pytorch2) chinmay@mcfluid03:~/tf_face_recognition$ python test_quantized_model.py
/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/ao/quantization/observer.py:214: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
warnings.warn(
prepared model for calibration
num imgs for calibration: 1000
batch 0 done
/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/ao/quantization/utils.py:302: UserWarning: must run observer before calling calculate_qparams. Returning default values.
warnings.warn(
GraphModule(
(conv1): QuantizedConv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.04243697598576546, zero_point=64, padding=(1, 1))
(prelu): QuantizedPReLU()
(layer1): Module(
(0): Module(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.048839956521987915, zero_point=70, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), scale=0.02982931397855282, zero_point=56, padding=(1, 1))
(downsample): Module(
(0): QuantizedConv2d(64, 64, kernel_size=(1, 1), stride=(2, 2), scale=0.026763780042529106, zero_point=72)
)
)
(1): Module(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.03681090474128723, zero_point=88, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.030850104987621307, zero_point=61, padding=(1, 1))
)
)
(layer2): Module(
(0): Module(
(conv1): QuantizedConv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.028876036405563354, zero_point=68, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), scale=0.016839243471622467, zero_point=68, padding=(1, 1))
(downsample): Module(
(0): QuantizedConv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), scale=0.0114758824929595, zero_point=64)
)
)
(1): Module(
(conv1): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.02233685925602913, zero_point=82, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.023190662264823914, zero_point=66, padding=(1, 1))
)
)
(layer3): Module(
(0): Module(
(conv1): QuantizedConv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.02156665176153183, zero_point=67, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), scale=0.01482717040926218, zero_point=64, padding=(1, 1))
(downsample): Module(
(0): QuantizedConv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), scale=0.0048522488214075565, zero_point=65)
)
)
(1): Module(
(conv1): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.01981436274945736, zero_point=80, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.014433014206588268, zero_point=63, padding=(1, 1))
)
)
(layer4): Module(
(0): Module(
(conv1): QuantizedConv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.019120512530207634, zero_point=79, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), scale=0.011832070536911488, zero_point=70, padding=(1, 1))
(downsample): Module(
(0): QuantizedConv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), scale=0.002516211476176977, zero_point=61)
)
)
(1): Module(
(conv1): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.03342185169458389, zero_point=87, padding=(1, 1))
(prelu): QuantizedPReLU()
(conv2): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.017271075397729874, zero_point=63, padding=(1, 1))
)
)
(dropout): QuantizedDropout(p=0, inplace=True)
(fc): QuantizedLinear(in_features=25088, out_features=512, scale=0.06431952863931656, zero_point=63, qscheme=torch.per_channel_affine)
)
def forward(self, x):
conv1_input_scale_0 = self.conv1_input_scale_0
conv1_input_zero_point_0 = self.conv1_input_zero_point_0
quantize_per_tensor = torch.quantize_per_tensor(x, conv1_input_scale_0, conv1_input_zero_point_0, torch.quint8); x = conv1_input_scale_0 = conv1_input_zero_point_0 = None
conv1 = self.conv1(quantize_per_tensor); quantize_per_tensor = None
prelu = self.prelu(conv1); conv1 = None
layer1_0_conv1 = getattr(self.layer1, "0").conv1(prelu)
layer1_0_prelu = getattr(self.layer1, "0").prelu(layer1_0_conv1); layer1_0_conv1 = None
layer1_0_conv2 = getattr(self.layer1, "0").conv2(layer1_0_prelu); layer1_0_prelu = None
layer1_0_downsample_0 = getattr(getattr(self.layer1, "0").downsample, "0")(prelu); prelu = None
layer1_0_scale_0 = self.layer1_0_scale_0
layer1_0_zero_point_0 = self.layer1_0_zero_point_0
add_8 = torch.ops.quantized.add(layer1_0_conv2, layer1_0_downsample_0, layer1_0_scale_0, layer1_0_zero_point_0); layer1_0_conv2 = layer1_0_downsample_0 = layer1_0_scale_0 = layer1_0_zero_point_0 = None
layer1_1_conv1 = getattr(self.layer1, "1").conv1(add_8)
layer1_1_prelu = getattr(self.layer1, "1").prelu(layer1_1_conv1); layer1_1_conv1 = None
layer1_1_conv2 = getattr(self.layer1, "1").conv2(layer1_1_prelu); layer1_1_prelu = None
layer1_1_scale_0 = self.layer1_1_scale_0
layer1_1_zero_point_0 = self.layer1_1_zero_point_0
add_9 = torch.ops.quantized.add(layer1_1_conv2, add_8, layer1_1_scale_0, layer1_1_zero_point_0); layer1_1_conv2 = add_8 = layer1_1_scale_0 = layer1_1_zero_point_0 = None
layer2_0_conv1 = getattr(self.layer2, "0").conv1(add_9)
layer2_0_prelu = getattr(self.layer2, "0").prelu(layer2_0_conv1); layer2_0_conv1 = None
layer2_0_conv2 = getattr(self.layer2, "0").conv2(layer2_0_prelu); layer2_0_prelu = None
layer2_0_downsample_0 = getattr(getattr(self.layer2, "0").downsample, "0")(add_9); add_9 = None
layer2_0_scale_0 = self.layer2_0_scale_0
layer2_0_zero_point_0 = self.layer2_0_zero_point_0
add_10 = torch.ops.quantized.add(layer2_0_conv2, layer2_0_downsample_0, layer2_0_scale_0, layer2_0_zero_point_0); layer2_0_conv2 = layer2_0_downsample_0 = layer2_0_scale_0 = layer2_0_zero_point_0 = None
layer2_1_conv1 = getattr(self.layer2, "1").conv1(add_10)
layer2_1_prelu = getattr(self.layer2, "1").prelu(layer2_1_conv1); layer2_1_conv1 = None
layer2_1_conv2 = getattr(self.layer2, "1").conv2(layer2_1_prelu); layer2_1_prelu = None
layer2_1_scale_0 = self.layer2_1_scale_0
layer2_1_zero_point_0 = self.layer2_1_zero_point_0
add_11 = torch.ops.quantized.add(layer2_1_conv2, add_10, layer2_1_scale_0, layer2_1_zero_point_0); layer2_1_conv2 = add_10 = layer2_1_scale_0 = layer2_1_zero_point_0 = None
layer3_0_conv1 = getattr(self.layer3, "0").conv1(add_11)
layer3_0_prelu = getattr(self.layer3, "0").prelu(layer3_0_conv1); layer3_0_conv1 = None
layer3_0_conv2 = getattr(self.layer3, "0").conv2(layer3_0_prelu); layer3_0_prelu = None
layer3_0_downsample_0 = getattr(getattr(self.layer3, "0").downsample, "0")(add_11); add_11 = None
layer3_0_scale_0 = self.layer3_0_scale_0
layer3_0_zero_point_0 = self.layer3_0_zero_point_0
add_12 = torch.ops.quantized.add(layer3_0_conv2, layer3_0_downsample_0, layer3_0_scale_0, layer3_0_zero_point_0); layer3_0_conv2 = layer3_0_downsample_0 = layer3_0_scale_0 = layer3_0_zero_point_0 = None
layer3_1_conv1 = getattr(self.layer3, "1").conv1(add_12)
layer3_1_prelu = getattr(self.layer3, "1").prelu(layer3_1_conv1); layer3_1_conv1 = None
layer3_1_conv2 = getattr(self.layer3, "1").conv2(layer3_1_prelu); layer3_1_prelu = None
layer3_1_scale_0 = self.layer3_1_scale_0
layer3_1_zero_point_0 = self.layer3_1_zero_point_0
add_13 = torch.ops.quantized.add(layer3_1_conv2, add_12, layer3_1_scale_0, layer3_1_zero_point_0); layer3_1_conv2 = add_12 = layer3_1_scale_0 = layer3_1_zero_point_0 = None
layer4_0_conv1 = getattr(self.layer4, "0").conv1(add_13)
layer4_0_prelu = getattr(self.layer4, "0").prelu(layer4_0_conv1); layer4_0_conv1 = None
layer4_0_conv2 = getattr(self.layer4, "0").conv2(layer4_0_prelu); layer4_0_prelu = None
layer4_0_downsample_0 = getattr(getattr(self.layer4, "0").downsample, "0")(add_13); add_13 = None
layer4_0_scale_0 = self.layer4_0_scale_0
layer4_0_zero_point_0 = self.layer4_0_zero_point_0
add_14 = torch.ops.quantized.add(layer4_0_conv2, layer4_0_downsample_0, layer4_0_scale_0, layer4_0_zero_point_0); layer4_0_conv2 = layer4_0_downsample_0 = layer4_0_scale_0 = layer4_0_zero_point_0 = None
layer4_1_conv1 = getattr(self.layer4, "1").conv1(add_14)
layer4_1_prelu = getattr(self.layer4, "1").prelu(layer4_1_conv1); layer4_1_conv1 = None
layer4_1_conv2 = getattr(self.layer4, "1").conv2(layer4_1_prelu); layer4_1_prelu = None
layer4_1_scale_0 = self.layer4_1_scale_0
layer4_1_zero_point_0 = self.layer4_1_zero_point_0
add_15 = torch.ops.quantized.add(layer4_1_conv2, add_14, layer4_1_scale_0, layer4_1_zero_point_0); layer4_1_conv2 = add_14 = layer4_1_scale_0 = layer4_1_zero_point_0 = None
flatten = torch.flatten(add_15, 1); add_15 = None
dropout = self.dropout(flatten); flatten = None
fc = self.fc(dropout); dropout = None
dequantize_41 = fc.dequantize(); fc = None
return dequantize_41
# To see more debug info, please use `graph_module.print_readable()`
Converted model
================ Diagnostic Run torch.onnx.export version 2.0.0 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "test_quantized_model.py", line 193, in <module>
torch.onnx.export(quantized_model, img, output, opset_version=18,
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py", line 6829, in prim_constant
return g.op("Constant", value_t=symbolic_helper._node_get(node, "value"))
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py", line 86, in op
return _add_op(self, opname, *raw_args, outputs=outputs, **kwargs)
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py", line 245, in _add_op
node = _create_node(
File "/home/chinmay/anaconda3/envs/v_pytorch2/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py", line 306, in _create_node
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
RuntimeError: false INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1678402379298/work/aten/src/ATen/quantized/Quantizer.cpp":444, please report a bug to PyTorch. cannot call qscheme on UnknownQuantizer
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 515.105.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7502 32-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1498.611
CPU max MHz: 3354.4919
CPU min MHz: 1500.0000
BogoMIPS: 5000.33
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.24.3 py38h14f4228_0
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] pytorch 2.0.0 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_3 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.0 py38_cu117 pytorch
[conda] torchtriton 2.0.0 py38 pytorch
[conda] torchvision 0.15.0 py38_cu117 pytorch
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 6 |
2,655 | 101,082 |
Multiple Learning Rate Scheduler for Specific Parameters Groups
|
module: optimizer, triaged, needs research, module: LrScheduler
|
### 🚀 The feature, motivation and pitch
Learning rate schedulers currently affect the learning rate of all parameter groups. However, there are use cases where one needs to affect only specific parameter groups, use different scheduler parameters for different parameter groups, or even use different types of schedulers for each parameter group.
To give an example:
1. I want to have a warm-up phase (e.g. LinearLR from 0.1 to 1.0 over 10 epochs), but only for the network backend or head, i.e. only for a specific parameter group.
2. I want to have a warm-up phase for my model but with different lengths (e.g. total_iters=10 for the backend and total_iters=30 for the head).
3. I want to have a warm-up phase with different lengths and styles (e.g. linear LR for the backend for the first 10 epochs and exponentially increasing LR for the head for the first 30 epochs).
Currently, this cannot be implemented with LRSchedulers because they affect all parameter groups, which makes it impossible to use multiple LRSchedulers for different parameter groups or different scheduler parameters per group.
### Alternatives
I see one possible solution which covers all three problems described above. We can add an additional parameter to the LRScheduler base class that specifies the parameter groups that should be affected. This would allow different parameters between the parameter groups by initializing multiple schedulers of the same type, schedulers affecting only specific parameter groups, and also multiple schedulers of different types and parameter groups.
The problem is that parameter groups currently cannot be identified by some attribute (e.g. a name). However, extra properties such as a name can be added to a parameter group dict. I would suggest specifying the parameter groups that should be affected via an optional list of names. If no list is passed, the scheduler would affect all parameter groups as usual (backward compatible).
Here is an example:
```python
optim = Adam([
dict(name='backend', params=backend_model.parameters(), ...),
dict(name='head', params=head_model.parameters(), ...),
], lr=0.001, ...
)
scheduler_backend = LinearLR(optim, total_iters=10, params_groups=['backend'])
scheduler_head = LinearLR(optim, total_iters=30, params_groups=['head'])
```
I currently have a use case for this problem (see additional context), so I am interested in a solution as soon as possible 😄. I can also implement a solution, but would prefer to discuss it with the community first.
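For completeness, the closest workaround I found today is `LambdaLR`, which accepts one `lr_lambda` per parameter group. It covers per-group warm-up lengths, but it forces everything into a single scheduler and a single formula style, so it does not solve the general problem. A sketch with placeholder sub-networks:
```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

backend_model = torch.nn.Linear(8, 8)  # placeholders for the real sub-networks
head_model = torch.nn.Linear(8, 2)

optim = Adam([
    dict(params=backend_model.parameters()),
    dict(params=head_model.parameters()),
], lr=0.001)

# One lr_lambda per parameter group: linear warm-up from 0.1 to 1.0 over
# 10 epochs for the backend and over 30 epochs for the head.
scheduler = LambdaLR(optim, lr_lambda=[
    lambda epoch: min(1.0, 0.1 + 0.9 * epoch / 10),
    lambda epoch: min(1.0, 0.1 + 0.9 * epoch / 30),
])
```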
Additional solutions:
- Updating all LRSchedulers to allow different parameters for each parameters group (many code changes, solving not all problems described above).
- Allowing scheduler parameters within the parameters group dict (e.g. `dict(params=model.parameters(), total_iters=30)`).
- More?
### Additional context
I am currently working on a model that forecasts energy time series using different modalities (historical energy, weather, and calendar data). I separated the model into an encoding part, a modal fusion part, and a forecasting part. If I train all layers at once, some of my layers have "frozen" neurons, i.e. they do not change much during training, which makes no sense (it is not dying ReLU). I solved the problem by training the model step-wise (and also got much better forecasts): I first train the first few layers, then add more, and finally train the whole model. This is similar to having much smaller learning rates in the later layers, which I also tried to implement using the torch.optim LRSchedulers, but it is not possible without implementing a scheduler specific to my use case. Here is an example of the kind of learning rate scheduling I want:

cc @vincentqb @jbschlosser @albanD @janeyx99
| 4 |
2,656 | 101,080 |
Sequence annotation in type hints is wrong
|
module: typing, triaged
|
### 🐛 Describe the bug
We have generated .pyi files that specify annotation of SymInt[], float[], and possibly other types as Sequence[T]:
```
// torch.ones in torch/_VF.pyi
def ones(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor] = None,
         dtype: Optional[_dtype] = None, layout: Optional[_layout] = None,
         device: Optional[Union[_device, str, None]] = None,
         pin_memory: Optional[_bool] = False,
         requires_grad: Optional[_bool] = False) -> Tensor: ...
```
This is incorrect, because we only support Tuple and List via python_arg_parser (and Sequence represents anything that subscribes to the Sequence API, like ranges).
Some options are:
- we correct these annotations (probably the easiest option)
- we support PySequence in python_arg_parser (https://docs.python.org/3.8/c-api/sequence.html)
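A quick illustration of the mismatch: a `range` satisfies `Sequence[int]` for the type checker, but (as far as I can tell; this is an assumption about the exact runtime behavior) it is rejected by the arg parser at runtime:
```python
from typing import Sequence
import torch

size: Sequence[int] = range(3)  # type-checks against the stub annotation
try:
    torch.ones(size)            # python_arg_parser only accepts tuple/list here
except TypeError as e:
    print(e)
```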
### Versions
main
cc @ezyang @malfet @rgommers @xuzhao9 @gramster
| 4 |
2,657 | 101,075 |
torch.lobpcg producing different largest eigenvalue than scipy and np.linalg.eig
|
triaged, module: numpy, module: linear algebra
|
### 🐛 Describe the bug
I have tried to use [lobpcg](https://pytorch.org/docs/stable/generated/torch.lobpcg.html) to obtain the largest eigenvalue of a symmetric [generalized eigenvalue problem](https://en.wikipedia.org/wiki/Generalized_eigenvalue_problem) defined by the pair of matrices (A,B). In most cases, it obtains the same value as [scipy.sparse.linalg.lobpcg](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.lobpcg.html) or as applying [np.linalg.eig](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eig.html) to B^-1 A.
However, there are instances where `lobpcg` fails to generate the same answer as `scipy` or `eig`. The code to reproduce this error is this one:
```python
import numpy as np
import torch
import scipy
dim=3
while True:
print("\n\n-------")
#Random generation of matrices A and B
tmp = np.random.uniform(-1, 1, (dim, dim))
A=(tmp+tmp.T)/2.0 #A is symmetric by construction
tmp = np.random.uniform(-1, 1, (dim, dim))
B = np.dot(tmp, tmp.transpose()) + np.eye(dim) #B is symmetric positive definite by construction
print(f"A={A}\n")
print(f"B={B}\n")
X=np.random.rand(A.shape[0], 1) #Initial guess for the eigenvector
# Using lobpcg
lambda_lobpcg, _ = torch.lobpcg(A=torch.from_numpy(A), k=1, B=torch.from_numpy(B), niter=-1, X=torch.from_numpy(X))
lambda_lobpcg = lambda_lobpcg.item()
print(f"lambda_lobpcg={lambda_lobpcg}")
# Using Scipy
lambda_scipy, _ =scipy.sparse.linalg.lobpcg(A=A, B=B, X=X, maxiter=10000)
lambda_scipy=lambda_scipy[0]
print(f"lambda_scipy={lambda_scipy}")
# Using normal eigendecomposition
all_lambdas, _=np.linalg.eig(np.linalg.inv(B)@A);
# print(f"all_lambdas={all_lambdas}")
lambda_eig=np.max(all_lambdas)
print(f"lambda_eig={lambda_eig}")
assert abs(lambda_lobpcg-lambda_eig)<1e-6
assert abs(lambda_scipy-lambda_eig)<1e-6
```
When I run this code, after ~2 seconds it finds a pair of matrices (A,B) for which the result of `lobpcg` differs from the other two methods:
```
A=[[-0.90404846 0.33497974 -0.47530819]
[ 0.33497974 -0.70714823 -0.34896375]
[-0.47530819 -0.34896375 -0.25162643]]
B=[[ 1.51871581e+00 -8.51033541e-05 -1.49293047e-02]
[-8.51033541e-05 1.16174545e+00 -1.42144256e-01]
[-1.49293047e-02 -1.42144256e-01 2.73072664e+00]]
lambda_lobpcg=-0.560304480425443
lambda_scipy=0.10388536097610737
lambda_eig=0.10388536097610745
---
test_bug_logpcg.py 39 <module>
assert abs(lambda_lobpcg-lambda_eig)<1e-6
AssertionError
```
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 151
Model name: 12th Gen Intel(R) Core(TM) i9-12900
Stepping: 2
CPU MHz: 2400.000
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 4838.40
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 10 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==1.13.1+cu116
[pip3] torchaudio==0.13.1+cu116
[pip3] torchvision==0.14.1+cu116
[conda] Could not collect
cc @mruberry @rgommers @jianyuh @nikitaved @pearu @walterddr @IvanYashchuk @xwang233 @Lezcano
| 9 |
2,658 | 101,070 |
Lazily format C++ stack trace if it is not used
|
module: performance, triaged
|
### 🐛 Describe the bug
Meta-only context: https://fb.workplace.com/groups/1405155842844877/?multi_permalinks=6965759763451096&hoisted_section_header_type=recently_seen
We can speed up stack trace collection by only formatting it when we actually need to print it. To do this, we can use @zdevito's new stack trace code described at https://dev-discuss.pytorch.org/t/fast-combined-c-python-torchscript-inductor-tracebacks/1158/3
This should speed up how quickly our test suite runs.
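The idea, sketched in Python only to illustrate the shape of the change (the actual work is on the C++ side): capture raw frames eagerly, and pay for symbolization/formatting only when the trace is actually rendered.
```python
import sys
import traceback

class LazyStackTrace:
    """Capture raw frames cheaply; format them only if the trace is printed."""

    def __init__(self):
        # Walking the stack is cheap; reading source lines and building the
        # formatted string (the expensive part) is deferred to __str__.
        # A real implementation would store only program counters, not frames.
        self._raw = list(traceback.walk_stack(sys._getframe(1)))

    def __str__(self):
        summary = traceback.StackSummary.extract(iter(self._raw))
        return "".join(summary.format())
```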
### Versions
master
cc @ngimel
| 0 |
2,659 | 101,069 |
torch.autograd.detect_anomaly should report the original forward trace as part of the error, rather than as out of band warning
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
Today, when an exception is raised from a backward function and anomaly mode is enabled, we emit a warning saying which forward call caused the exception:
```
/data/users/ezyang/a/pytorch/torch/autograd/__init__.py:319: UserWarning: Error detected in SumBackward0. Traceback of forward call that caused the error:
File "/data/users/ezyang/a/pytorch/test/dynamo/test_misc.py", line 4872, in f
return h(a)
(Triggered internally at /data/users/ezyang/a/pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:119.)
result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
There should be a way to force this information to instead get attached to the exception being thrown. This is important to do as high level error reporting frameworks will typically surface the exception message to the user, but not necessarily warnings (which have to be separately extracted from logs.)
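A minimal sketch of the situation (assuming `sqrt` of a negative number is enough to trip anomaly mode; the exact operator doesn't matter):
```python
import torch

with torch.autograd.detect_anomaly():
    x = torch.tensor([-1.0], requires_grad=True)
    y = torch.sqrt(x)  # forward already produces NaN
    # backward raises a RuntimeError about nan values from SqrtBackward0,
    # but the forward traceback only shows up as a separate UserWarning
    y.backward()
```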
### Versions
master
cc @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 2 |
2,660 | 101,068 |
Tensor __getitem__ not documented, sparse grad?
|
module: sparse, triaged, module: advanced indexing
|
### 📚 The doc issue
I was searching for some documentation on `Tensor.__getitem__` but I can't find any, at least not [here](https://pytorch.org/docs/stable/tensors.html#torch.Tensor). Maybe it is elsewhere?
Especially, I wonder whether it uses `torch.gather` internally when you give it a scalar index.
And I wonder whether it uses `sparse_grad` for `torch.gather` or not.
I also wonder: why does `torch.gather` have a `sparse_grad` option at all? Why is the gradient not always sparse? Why is the default dense? What are the downsides of always enabling it? The [`torch.gather` documentation](https://pytorch.org/docs/stable/generated/torch.gather.html) doesn't really say anything about this.
If `Tensor.__getitem__` does not use `torch.gather`, is the gradient sparse or not?
E.g. when I have a tensor `x` of shape [T, B, F], and I loop over T, would it be a good idea to have this code (when training)?
```python
for t in range(T):
y = f(x[t])
...
```
If `__getitem__` is not using sparse gradients, wouldn't it be better to have sparse gradients in this case? Wouldn't it always be better to have sparse gradients?
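For context, a small check of the behaviour as I currently understand it (this is exactly the kind of thing I'd like the docs to state explicitly, so treat it as an assumption rather than a fact):
```python
import torch

x = torch.randn(5, 4, requires_grad=True)

# plain indexing: the gradient comes back dense
x[2].sum().backward()
print(x.grad.layout)  # torch.strided
x.grad = None

# torch.gather with sparse_grad=True: the gradient comes back as sparse COO
idx = torch.full((1, 4), 2, dtype=torch.long)
torch.gather(x, 0, idx, sparse_grad=True).sum().backward()
print(x.grad.layout)  # torch.sparse_coo
```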
### Suggest a potential alternative/fix
Add some documentation on `Tensor.__getitem__`.
Extend the documentation of `torch.gather` `sparse_grad` option.
Briefly address my questions here.
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 12 |
2,661 | 101,055 |
DISABLE libtorch-2.0.0+cu117 destructor exception
|
oncall: jit
|
### 🐛 Describe the bug
I try to load a TorchScript model in a class and run inference. When everything is done, the destructor doesn't work properly (which is probably the cause of the crash).
The minimal reproducible code is given below (**libtorch-2.0.0+cu117**):
```
#include <iostream>
#include <torch/torch.h>
#include <torch/script.h>
#include <torch/cuda.h>
class MyDLClass
{
public:
MyDLClass() {}
~MyDLClass() {}
void init()
{
m_device = torch::cuda::is_available() ? torch::kCUDA : torch::kCPU;
m_model = torch::jit::load(std::string(R"(E:\code\C++\DCDL\model\yolov5s.torchscript)"));
m_model.to(m_device);
m_model.eval();
}
void run()
{
auto tensor = torch::randn({ 1,3,640,640 }).to(m_device);
auto r = m_model.forward({ tensor }).toTuple()->elements()[0].toTensor();
std::cout << r.sizes() << std::endl;
}
private:
torch::DeviceType m_device = torch::kCPU;
torch::jit::Module m_model;
};
int main()
{
MyDLClass m;
m.init();
m.run();
return 0;
}
```
At the end of the run, the IDE reports an error, but I don't have enough information to determine its cause.
When I roll back to **libtorch-1.10.1+cu113**, everything works fine.

### Versions
Visual Studio 2019
CUDA:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0
Libtorch:
libtorch-2.0.0+cu117
PyTorch version: 1.12.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Professional
GCC version: (x86_64-posix-seh, Built by strawberryperl.com project) 8.3.0
Clang version: Could not collect
CMake version: version 3.24.0
Libc version: N/A
Python version: 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 531.61
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
12th Gen Intel(R) Core(TM) i7-12700H
Revision=
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.6.5
[pip3] torch==1.12.0+cu116
[pip3] torchaudio==0.12.0+cu116
[pip3] torchfile==0.1.0
[pip3] torchmetrics==0.9.3
[pip3] torchtext==0.13.0
[pip3] torchvision==0.13.0+cu116
[conda] blas 1.0 mkl http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] cudatoolkit 11.3.1 h59b6b97_2 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl 2021.4.0 haa95532_640 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py39h2bbff1b_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.1 py39h277e83a_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.2 py39hf11a4ad_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy 1.22.4 pypi_0 pypi
[conda] numpy-base 1.22.3 py39hca35cd5_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torch 1.12.0+cu116 pypi_0 pypi
[conda] torchaudio 0.12.0+cu116 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchtext 0.13.0 pypi_0 pypi
[conda] torchvision 0.13.0+cu116 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,662 | 101,039 |
[torch.compile] the sum of `softmax` isn't `1` on cuda
|
module: numerical-stability, triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
With `torch.compile`, the sum of the `softmax` output isn't equal to `1` on cuda:
```py
import torch
torch.manual_seed(420)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.softmax = torch.nn.Softmax(dim=-1)
def forward(self, query, key):
y = torch.matmul(query, key.transpose(-2, -1))
z = y.div(1e-06)
w = self.softmax(z)
return w
device = 'cuda'
func = Model().to(device)
query = torch.randn(1, 10, 40).to(device)
key = torch.randn(1, 2, 40).to(device)
jit_func = torch.compile(func)
res1 = func(query, key) # without jit
print(res1)
# tensor([[[0., 1.],
# [0., 1.],
# [0., 1.],
# [0., 1.],
# [0., 1.],
# [1., 0.],
# [1., 0.],
# [0., 1.],
# [0., 1.],
# [0., 1.]]], device='cuda:0')
res2 = jit_func(query, key)
print(res2)
# tensor([[[0.0000, 0.9804],
# [0.0000, 1.0561],
# [0.0000, 1.1190],
# [0.0000, 0.9721],
# [0.0000, 1.1999],
# [0.9972, 0.0000],
# [0.8769, 0.0000],
# [0.0000, 1.0315],
# [0.0000, 1.0398],
# [0.0000, 0.8039]]], device='cuda:0')
```
### Versions
<details>
<summary>Click to expand</summary>
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230503+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230503+cu118
[pip3] torchaudio==2.1.0.dev20230503+cu118
[pip3] torchvision==0.16.0.dev20230503+cu118
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230503+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230503+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230503+cu118 pypi_0 pypi
```
</details>
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith
| 5 |
2,663 | 101,033 |
Exporting the operator 'prim::is_cuda' to ONNX opset version 14 is not supported
|
module: onnx, triaged
|
### 🐛 Describe the bug
The error 'Exporting the operator 'prim::is_cuda' to ONNX opset version 14 is not supported' occurs when exporting a model that contains a TransformerEncoder to ONNX:
```
/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py:307: UserWarning: Constant folding in symbolic shape inference fails: index out of range in self (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:439.)
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
Traceback (most recent call last):
File "/home/roman/projects/python/tools/test_onnx.py", line 24, in <module>
torch.onnx.export(
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/utils.py", line 507, in export
_export(
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/utils.py", line 1567, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/utils.py", line 1128, in _model_to_graph
graph = _optimize_graph(
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/utils.py", line 666, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/utils.py", line 1908, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/symbolic_opset9.py", line 6951, in prim_if
torch._C._jit_pass_onnx_block(
File "/home/roman/miniconda3/envs/pt/lib/python3.10/site-packages/torch/onnx/utils.py", line 1918, in _run_symbolic_function
raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'prim::is_cuda' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
```
Here is the code to reproduce:
```
import torch
from torch import Tensor
from torch.nn import Module, TransformerEncoderLayer, TransformerEncoder
class TestNet(Module):
def __init__(self, tf_n_channels=3, device=torch.device('cpu'), *args, **kwargs):
super().__init__(*args, **kwargs)
encoder_layer = TransformerEncoderLayer(d_model=tf_n_channels * 5, nhead=tf_n_channels, dim_feedforward=60,
dropout=0.0, device=device, dtype=torch.float32)
self.transformer = TransformerEncoder(encoder_layer, num_layers=2)
def forward(self, _input: Tensor) -> Tensor:
return self.transformer(_input, is_causal=True, mask=torch.ones((_input.size(0), _input.size(0)),
dtype=torch.bool,
device=_input.device).triu(diagonal=1))
if __name__ == '__main__':
_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
example_inputs = torch.randn((100, 1, 15)).to(_device, dtype=torch.float)
model = TestNet(device=_device)
model = torch.jit.script(model)
torch.onnx.export(
model,
example_inputs,
"test_model.onnx",
export_params=True,
do_constant_folding=True,
verbose=True
)
```
- OS Ubuntu 22.04
- cuda 12.1
- pytorch 2.1.0.dev20230508+cu121
- onnxruntime 1.14.1
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0.dev20230508+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8
/usr/lib/libcudnn_adv_infer.so.8
/usr/lib/libcudnn_adv_train.so.8
/usr/lib/libcudnn_cnn_infer.so.8
/usr/lib/libcudnn_cnn_train.so.8
/usr/lib/libcudnn_ops_infer.so.8
/usr/lib/libcudnn_ops_train.so.8
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-lightning==2.0.2
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230508+cu121
[pip3] torch-tensorrt==1.3.0
[pip3] torchaudio==2.1.0.dev20230508+cu121
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.16.0.dev20230508+cu121
[pip3] triton==2.0.0
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 anaconda
[conda] cudatoolkit-dev 11.7.0 h1de0b5d_6 conda-forge
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-lightning 2.0.2 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230508+cu121 pypi_0 pypi
[conda] torch-tensorrt 1.3.0 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230508+cu121 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230508+cu121 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
| 3 |
2,664 | 101,031 |
[PT2] torch.compile doesn't perform horizontal fusion
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
I wrote a program to test whether PT2 will perform the split-cat horizontal fusion described in this [blog post](https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/).
Looking at the Kineto trace, I see that there was some vertical fusion of point-wise kernels, but no horizontal fusion.
### Error logs
_No response_
### Minified repro
```
import random
import sys
import torch
import torch._dynamo
import torch.nn as nn
from torch._inductor import config
from torch.profiler import ProfilerActivity
from torch._inductor import config as torchinductor_config
def trace_handler(prof):
s_trace_link = ( "horizontal_issue_raw.json")
prof.export_chrome_trace(s_trace_link)
print("Trace link {}".format(s_trace_link))
def unfused(input_data, batch_size, input_dim, layer_norm, linear_layer):
split_input_data = torch.split(input_data, batch_size * input_dim )
norm_features = []
for i in range(len(split_input_data)):
# This conditional is enough to completely kill PT2's ability to fuse:
# if i == 0:
# print(f"shape split_input_data[0] = {split_input_data[0].shape}")
f = split_input_data[i]
f_view = f.view([batch_size, input_dim])
norm_f = layer_norm(f_view)
tanh_norm_f = torch.tanh(norm_f)
norm_features.append(tanh_norm_f)
linear_input = torch.stack(norm_features)
linear_output = linear_layer(linear_input)
result = nn.functional.relu(linear_output)
return result
def get_compiled_model(num_inputs, batch_size, input_dim, layer_norm, linear_layer):
torchinductor_config.aggressive_fusion = True
inputs = torch.rand(num_inputs * batch_size * input_dim, device='cuda:0')
compiled_unfused = torch.compile(unfused)
# Run warmup iterations.
for _ in range(10):
compiled_unfused(inputs, batch_size, input_dim, layer_norm, linear_layer)
torch.cuda.synchronize()
return compiled_unfused
def run_example_pt2():
num_inputs = 16
batch_size = 2**16
input_dim = 128
layer_norm = torch.nn.LayerNorm(input_dim, eps=0.0, elementwise_affine=False, device='cuda:0')
linear_layer = nn.Linear(input_dim, 1, device='cuda:0')
config.triton.unique_kernel_names = True
config.triton.descriptive_names = "torch"
compiled_unfused = get_compiled_model(num_inputs, batch_size, input_dim, layer_norm, linear_layer)
inputs = torch.rand(num_inputs * batch_size * input_dim, device='cuda:0')
# Use PyTorch profiler to collect a trace.
with torch.profiler.profile(
with_stack=True,
record_shapes=True,
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
on_trace_ready=trace_handler,
):
# Run the model without horizontally fused layer norm.
pred1 = unfused(inputs, batch_size, input_dim, layer_norm, linear_layer)
torch.cuda.synchronize()
# Run the compiled version of the unfused function.
pred3 = compiled_unfused(inputs, batch_size, input_dim, layer_norm, linear_layer)
torch.cuda.synchronize()
def main(argv):
# Run multiple times to get more warmups (trace will be from last call).
run_example_pt2()
run_example_pt2()
run_example_pt2()
run_example_pt2()
if __name__ == "__main__":
main(sys.argv[1:])
```

### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230504
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: CentOS Stream 8 (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.11.3 (main, Apr 19 2023, 23:54:32) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk13_zion_7455_gb24de3bdb045-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
Stepping: 11
CPU MHz: 2499.998
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0.dev20230504
[pip3] torchaudio==2.1.0.dev20230504
[pip3] torchvision==0.16.0.dev20230504
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] brotlipy 0.7.0 py311h9bf148f_1002 pytorch-nightly
[conda] cffi 1.15.1 py311h9bf148f_3 pytorch-nightly
[conda] cryptography 38.0.4 py311h46ebde7_0 pytorch-nightly
[conda] filelock 3.9.0 py311_0 pytorch-nightly
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py311h9bf148f_0 pytorch-nightly
[conda] mkl_fft 1.3.1 py311hc796f24_0 pytorch-nightly
[conda] mkl_random 1.2.2 py311hbba84a0_0 pytorch-nightly
[conda] mpmath 1.2.1 py311_0 pytorch-nightly
[conda] numpy 1.24.3 py311hc206e33_0
[conda] numpy-base 1.24.3 py311hfd5febd_0
[conda] pillow 9.3.0 py311h3fd9d12_2 pytorch-nightly
[conda] pysocks 1.7.1 py311_0 pytorch-nightly
[conda] pytorch 2.1.0.dev20230504 py3.11_cuda11.8_cudnn8.7.0_0 pytorch-nightly
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] requests 2.28.1 py311_0 pytorch-nightly
[conda] torchaudio 2.1.0.dev20230504 py311_cu118 pytorch-nightly
[conda] torchtriton 2.1.0+7d1a95b046 py311 pytorch-nightly
[conda] torchvision 0.16.0.dev20230504 py311_cu118 pytorch-nightly
[conda] urllib3 1.26.14 py311_0 pytorch-nightly
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith
| 0 |
2,665 | 101,013 |
Unsupported: ONNX export of operator interpolate (with scales) error
|
module: onnx, triaged
|
### 🐛 Describe the bug
Trying to run onnx export of a scripted model.
I guess it can't figure out the shape of the tensor for interpolate?
It's hard to give a reproducible test case without sending the whole model.
It's also hard to debug this since there's no hint as to which interpolate (of the several that we have in the network) it's complaining about.
See this stack trace:
```
/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py:825: UserWarning: no signature found for <torch.ScriptMethod object at 0x7f238781f290>, skipping _decide_input_format
warnings.warn(f"{e}, skipping _decide_input_format")
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/home/go22670/Projects/test2/convert.py", line 21, in <module>
out, vad, enhFile = enh.process(file,outFile=myout)
File "/home/go22670/Projects/test2/src/seamnet/enhance.py", line 278, in process
self.convert(self.model, tWin)
File "/home/go22670/Projects/test2/src/seamnet/enhance.py", line 206, in convert
Th.onnx.export(model,
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/symbolic_opset9.py", line 6709, in prim_loop
torch._C._jit_pass_onnx_block(
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/symbolic_helper.py", line 392, in wrapper
return fn(g, *args, **kwargs)
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/symbolic_opset11.py", line 391, in __interpolate
return symbolic_helper.__interpolate_helper(
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/symbolic_helper.py", line 1212, in __interpolate_helper
return _unimplemented("interpolate (with scales)", "missing input shape")
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/symbolic_helper.py", line 607, in _unimplemented
_onnx_unsupported(f"{op}, {msg}", value)
File "/home/go22670/.conda/envs/torch/lib/python3.10/site-packages/torch/onnx/symbolic_helper.py", line 622, in _onnx_unsupported
raise errors.OnnxExporterError(message)
torch.onnx.errors.OnnxExporterError: Unsupported: ONNX export of operator interpolate (with scales), missing input shape. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues
```
### Versions
(torch) [go22670@nti-rh8-gpu test2]$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 8.7 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-15)
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: GRID A100-40C
GPU 1: GRID A100-40C
Nvidia driver version: 510.85.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 8
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7713 64-Core Processor
Stepping: 1
CPU MHz: 1996.249
BogoMIPS: 3992.49
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero wbnoinvd arat umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchview==0.2.6
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-fft 1.3.1 pypi_0 pypi
[conda] mkl-random 1.2.2 pypi_0 pypi
[conda] mkl-service 2.4.0 pypi_0 pypi
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.0.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchview 0.2.6 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 0 |
2,666 | 101,011 |
[Placeholder] PyTorch 2.0 Dynamo/Inductor Hack{day/week}
|
triaged, oncall: pt2, module: dynamo
|
We will have a list of graph breaks with the minimized repro for the hack{day/week}.
(Assigned task to @yanboliang temporarily before we start the hackday)
## Dynamo Graph Breaks ##
* [x] [Easy] https://github.com/pytorch/pytorch/issues/101154
* [x] [Easy] https://github.com/pytorch/pytorch/issues/101155
* [x] [Easy] https://github.com/pytorch/pytorch/issues/91662
* [ ] [Medium] https://github.com/pytorch/pytorch/issues/97115
* [x] [Easy] https://github.com/pytorch/pytorch/issues/102338
* [x] [Easy] https://github.com/pytorch/pytorch/issues/102340
* [x] [Easy] https://github.com/pytorch/pytorch/issues/101980
* [x] [Easy] https://github.com/pytorch/pytorch/issues/102877
* [x] [Easy] https://github.com/pytorch/pytorch/issues/102878
* [x] [Easy] https://github.com/pytorch/pytorch/issues/102879
* [x] [Easy] https://github.com/pytorch/pytorch/issues/103125
## Other Dynamo Issues ##
* [x] [Easy] https://github.com/pytorch/pytorch/issues/102053
* [x] [Easy] https://github.com/pytorch/pytorch/issues/100854
* [x] [Medium] https://github.com/pytorch/pytorch/issues/99639
* [x] [Medium] https://github.com/pytorch/pytorch/issues/98973
* [x] [Medium] https://github.com/pytorch/pytorch/issues/99569
+ [ ] [Medium] ^^ followup to the above issue, which was worked-around with a graph break but underlying AOT Autograd + nn.Parameter issue still exists
* [ ] [Easy] https://github.com/pytorch/pytorch/issues/99752
* [x] [Easy] https://github.com/pytorch/pytorch/issues/93340
* [x] [Easy] https://github.com/pytorch/pytorch/issues/103139
## Newly added ones during hackday
* [ ] From Avik - https://www.internalfb.com/phabricator/paste/view/P763199946
* [ ] From Zhengxu - https://github.com/pytorch/pytorch/issues/103210
## Other PT2 stack issues
- [ ] Debuggability + Inductor: https://github.com/pytorch/pytorch/issues/97778
- [ ] [Medium] https://github.com/pytorch/pytorch/issues/101832
- [ ] Model benchmarks - https://github.com/pytorch/pytorch/issues/96759
- [x] Make explain pretty - https://github.com/pytorch/pytorch/pull/102869
## Hard ones
- [ ] AOT Autograd - https://github.com/pytorch/pytorch/issues/96456
- [ ] [Hard] https://github.com/pytorch/pytorch/issues/100386
- [ ] [Hard] [Vague] https://github.com/pytorch/pytorch/issues/102063
| 0 |
2,667 | 100,996 |
ONNX TorchDynamo Exporter - Ability to export and load ONNX files without parameters
|
module: onnx, triaged, oncall: pt2
|
## 🚀 Feature Request
### Motivation
I'm working on a large ML benchmarking effort that heavily utilizes ONNX files. This generates hundreds to thousands of ONNX models that we can't store economically. Most of the size of the models comes from the parameters, which are not essential for applications such as benchmarking (on certain accelerators).
### Feature Description
A feature that I would love to see as part of the new [TorchDynamo Exporter](https://pytorch.org/docs/main/onnx.html#preview-torch-onnx-torchdynamo-exporter) is the ability to:
* Export/Save models without storing parameters, significantly reducing file size, and
* Load models exported/saved without parameters by generating random parameters (a rough sketch of this half is below).
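To illustrate the second point, a rough sketch of what I mean using only the `onnx` and `numpy` packages (this is not an existing API, just an illustration, and it only touches floating-point initializers):
```python
import numpy as np
import onnx
from onnx import numpy_helper

def reload_with_random_weights(path: str) -> onnx.ModelProto:
    """Replace every floating-point initializer with random data of the same shape/dtype."""
    model = onnx.load(path)
    for i, init in enumerate(model.graph.initializer):
        arr = numpy_helper.to_array(init)
        if np.issubdtype(arr.dtype, np.floating):
            rand = np.random.standard_normal(arr.shape).astype(arr.dtype)
            model.graph.initializer[i].CopyFrom(numpy_helper.from_array(rand, name=init.name))
    return model
```
The harder half — writing the file without the parameter payload in the first place — is what I'd love the exporter to support natively.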
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 0 |
2,668 | 100,990 |
Extending compatibility of LibTorch
|
module: cpp, feature, triaged, needs design
|
### 🚀 The feature, motivation and pitch
I am noticing that the current C-API is limited to modern C++ standards. I am working on a codebase that assumes a pure C90 standard, and I am not able to compile it with LibTorch functionality since the C-API is too modern.
Would it be possible to make LibTorch functional with historical pure C standards?
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser
| 1 |
2,669 | 100,989 |
RuntimeError: nonzero is not supported for tensors with more than INT_MAX elements, file a support request
|
oncall: quantization, module: cuda, triaged, module: 64-bit, module: sorting and selection
|
### 🐛 Describe the bug
Hi,
I am trying to prune the Dolly model (remove the least-magnitude x% of weights from all layers combined) and am getting the following error.
```
checkpoint = 'databricks/dolly-v1-6b'
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
global_pruning(linear_layers_list, prune_percentage=prune_percentage)
```
where the `global_pruning` function takes all the linear layers present in the model and does global pruning, i.e.:
```
def global_pruning(linear_layers_list, prune_percentage):
parameters_to_prune = tuple((x, 'weight') for x in linear_layers_list)
prune.global_unstructured(parameters_to_prune, pruning_method=prune.L1Unstructured, amount=prune_percentage)
```
When I am loading the model with fp16 and doing the above pruning, I am getting the below error
```
/hdd4/srinath2/.local/lib/python3.10/site-packages/torch/nn/utils/prune.py:1119 in global_unstructured

  1116
  1117    # use the `compute_mask` method from `PruningContainer` to combin
  1118    # mask computed by the new method with the pre-existing mask
❱ 1119    final_mask = container.compute_mask(relevant_importance_scores, d
  1120
  1121    # Pointer for slicing the mask to match the shape of each paramet
  1122    pointer = 0

/hdd4/srinath2/.local/lib/python3.10/site-packages/torch/nn/utils/prune.py:416 in compute_mask

   413    return new_mask
   414
   415    method = self._pruning_methods[-1]
❱  416    mask = _combine_masks(method, t, default_mask)
   417    return mask
   418
   419

/hdd4/srinath2/.local/lib/python3.10/site-packages/torch/nn/utils/prune.py:410 in _combine_masks

   407    )
   408
   409    # compute the new mask on the unpruned slice of the tenso
❱  410    partial_mask = method.compute_mask(t[slc], default_mask=m
   411    new_mask[slc] = partial_mask.to(dtype=new_mask.dtype)
   412
   413    return new_mask

RuntimeError: nonzero is not supported for tensors with more than INT_MAX elements, file a support request
```
***Note:*** When I do it with fp32, I get a memory error because the `_mask` and `_orig` tensors are created temporarily, so the memory requirement roughly doubles, i.e. for a ~26GB model, around ~52GB is occupied temporarily (something similar to https://github.com/ultralytics/yolov5/issues/304#issuecomment-654564855).
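As a possible workaround I'm considering computing the global threshold manually and applying per-layer masks with `prune.custom_from_mask`, so that no single huge tensor is ever boolean-indexed as a whole (rough sketch, not verified on the full 6B-parameter model; the CPU float32 concatenation alone needs roughly 24GB of RAM):
```python
import torch
import torch.nn.utils.prune as prune

@torch.no_grad()
def manual_global_prune(linear_layers_list, prune_percentage):
    # global magnitude threshold, computed on CPU in float32
    all_abs = torch.cat([m.weight.detach().abs().flatten().float().cpu()
                         for m in linear_layers_list])
    k = max(1, int(prune_percentage * all_abs.numel()))
    threshold = all_abs.kthvalue(k).values.item()
    # per-layer masks, so each layer is masked independently
    for m in linear_layers_list:
        mask = (m.weight.detach().abs() > threshold).to(m.weight.dtype)
        prune.custom_from_mask(m, name="weight", mask=mask)
```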
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA GeForce RTX 3070
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 9 5900X 12-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 4411.156
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7386.15
Virtualization: AMD-V
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 6 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640 anaconda
[conda] mkl-service 2.4.0 py310h7f8727e_0 anaconda
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 anaconda
[conda] mkl_random 1.2.2 py310h00e6091_0 anaconda
[conda] numpy 1.23.5 py310hd5efca6_0 anaconda
[conda] numpy-base 1.23.5 py310h8e6c178_0 anaconda
[conda] pytorch 2.0.0 py3.10_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
```
**cc:** @ngimel
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @ngimel
| 1 |
2,670 | 100,985 |
native_batch_norm has different size results on "CPU" vs "META" device
|
triaged, module: meta tensors, module: fakeTensor
|
### 🐛 Describe the bug
When `native_batch_norm` is run on `CPU` device with `running_mean` and `running_var` of size `[3]`, the last two results have size `[0]`. However, when run on the `META` device, the last two results have size `[3]`.
```python
import torch
input_tensor = torch.rand(2, 3)
running_mean = torch.rand(3)
running_var = torch.rand(3)
result = torch.ops.aten.native_batch_norm(
input_tensor, weight=None, bias=None, running_mean=running_mean,
running_var=running_var, training=False, momentum=1e-4, eps=1e-6)
result_meta = torch.ops.aten.native_batch_norm(
input_tensor.to("meta"), weight=None, bias=None, running_mean=running_mean.to("meta"),
running_var=running_var.to("meta"), training=False, momentum=1e-4, eps=1e-6)
print(result[1].shape, result[2].shape) # torch.Size([0]) torch.Size([0])
print(result_meta[1].shape, result_meta[2].shape) # torch.Size([3]) torch.Size([3])
```
### Versions
```
PyTorch version: 2.1.0.dev20230505+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: version 3.22.5
Libc version: glibc-2.36
Python version: 3.10.9 (main, Dec 7 2022, 13:47:07) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.15-1rodete3-amd64-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 3
BogoMIPS: 4000.36
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 77 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] functorch==0.3.0a0+0feda8a
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] pytorch-transformers==1.2.0
[pip3] torch==2.1.0.dev20230505+cpu
[pip3] torch-struct==0.5
[pip3] torchdynamo==1.13.0.dev0
[pip3] torchfile==0.1.0
[pip3] torchmetrics==0.9.2
[pip3] torchrec-nightly==2022.4.26
[pip3] torchvision==0.16.0.dev20230505+cpu
[pip3] torchx-nightly==2022.7.1
[conda] Could not collect
```
cc @ezyang @eellison @bdhirsh @soumith
| 2 |
2,671 | 100,974 |
Pytorch 2.0.1 pypi wheel does not install dependent cuda libraries
|
triage review, oncall: binaries, module: regression, needs design
|
### 🐛 Describe the bug
With torch 2.0.1 the torch pypi wheel does not depend on the cuda libraries anymore. Therefore, when importing torch on a GPU-enabled machine, it complains `ValueError: libnvrtc.so.*[0-9].*[0-9] not found in the system path` (see the stack trace at the end below).
When I show the dependency trees for torch=2.0.1 and torch=2.0.0 with poetry (installed on the same machine with the same dependency file as before), it becomes clear that torch 2.0.1 is missing the nvidia dependencies:
```
└── torch 2.0.1
├── filelock *
├── jinja2 *
│ └── markupsafe >=2.0
├── networkx *
├── sympy *
│ └── mpmath >=0.19
└── typing-extensions *
```
```
└── torch 2.0.0
├── filelock *
├── jinja2 *
│ └── markupsafe >=2.0
├── networkx *
├── nvidia-cublas-cu11 11.10.3.66
│ ├── setuptools *
│ └── wheel *
├── nvidia-cuda-cupti-cu11 11.7.101
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-cuda-nvrtc-cu11 11.7.99
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-cuda-runtime-cu11 11.7.99
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-cudnn-cu11 8.5.0.96
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-cufft-cu11 10.9.0.58
├── nvidia-curand-cu11 10.2.10.91
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-cusolver-cu11 11.4.0.1
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-cusparse-cu11 11.7.4.91
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── nvidia-nccl-cu11 2.14.3
├── nvidia-nvtx-cu11 11.7.91
│ ├── setuptools * (circular dependency aborted here)
│ └── wheel * (circular dependency aborted here)
├── sympy *
│ └── mpmath >=0.19
├── triton 2.0.0
│ ├── cmake *
│ ├── filelock * (circular dependency aborted here)
│ ├── lit *
│ └── torch * (circular dependency aborted here)
└── typing-extensions *
```
Here the stacktrace of the error at runtime:
```
File "/home/ray/anaconda3/envs/myenv/lib/python3.10/site-packages/easyocr/recognition.py", line 2, in <module>
import torch
File "/home/ray/anaconda3/envs/myenv/lib/python3.10/site-packages/torch/__init__.py", line 228, in <module>
_load_global_deps()
File "/home/ray/anaconda3/envs/myenv/lib/python3.10/site-packages/torch/__init__.py", line 189, in _load_global_deps
_preload_cuda_deps(lib_folder, lib_name)
File "/home/ray/anaconda3/envs/myenv/lib/python3.10/site-packages/torch/__init__.py", line 154, in _preload_cuda_deps
raise ValueError(f"{lib_name} not found in the system path {sys.path}")
ValueError: libnvrtc.so.*[0-9].*[0-9] not found in the system path ['/home/ray', '/home/ray/anaconda3/lib/python3.10/site-packages/ray/dashboard', '/home/ray/anaconda3/envs/myenv/lib/python3.10/site-packages/ray/thirdparty_files', '/home/ray/anaconda3/lib/python3.10/site-packages/ray/_private/workers', '/home/ray/anaconda3/envs/myenv/lib/python3.10', '/home/ray/anaconda3/envs/myenv/lib/python3.10/lib-dynload', '/home/ray/anaconda3/envs/myenv/lib/python3.10/site-packages']
```
### Versions
The version where the issue occurs is the pypi wheel of torch 2.0.1.
When trying to run python collect_env.py to collect the versions, two errors show up:
```
"OSError: libcurand.so.10: cannot open shared object file: No such file or directory"
During handling of the above exception, another exception occurred:
"ValueError: libnvrtc.so.*[0-9].*[0-9] not found in the system path"
```
cc @ezyang @gchanan @zou3519 @seemethere @malfet
| 31 |
2,672 | 100,968 |
AssertionError: slice.Tensor is not supported with cpp wrapper (llama)
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
```
2023-05-08T10:46:35.4955096Z cuda eval llama ERROR:common:backend='inductor' raised:
2023-05-08T10:46:35.4956007Z AssertionError: slice.Tensor is not supported with cpp wrapper
2023-05-08T10:46:35.4956583Z
2023-05-08T10:46:35.4956590Z
2023-05-08T10:46:35.4956888Z You can suppress this exception and fall back to eager by setting:
2023-05-08T10:46:35.4957450Z import torch._dynamo
2023-05-08T10:46:35.4957901Z torch._dynamo.config.suppress_errors = True
2023-05-08T10:46:35.4958374Z Traceback (most recent call last):
2023-05-08T10:46:35.4959003Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy
2023-05-08T10:46:35.4959721Z new_result = optimized_model_iter_fn(model_copy, example_inputs)
2023-05-08T10:46:35.4960737Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 286, in _fn
2023-05-08T10:46:35.4961257Z return fn(*args, **kwargs)
2023-05-08T10:46:35.4961779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations
2023-05-08T10:46:35.4962443Z self.model_iter_fn(mod, inputs, collect_outputs=False)
2023-05-08T10:46:35.4963963Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 439, in catch_errors
2023-05-08T10:46:35.4964599Z return callback(frame, cache_size, hooks, frame_state)
2023-05-08T10:46:35.4965812Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame
2023-05-08T10:46:35.4966506Z result = inner_convert(frame, cache_size, hooks, frame_state)
2023-05-08T10:46:35.4967425Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn
2023-05-08T10:46:35.4967995Z return fn(*args, **kwargs)
2023-05-08T10:46:35.4969267Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert
2023-05-08T10:46:35.4969878Z return _compile(
2023-05-08T10:46:35.4970691Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-08T10:46:35.4971306Z r = func(*args, **kwargs)
2023-05-08T10:46:35.4972150Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile
2023-05-08T10:46:35.4972781Z out_code = transform_code_object(code, transform)
2023-05-08T10:46:35.4973787Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
2023-05-08T10:46:35.4974489Z transformations(instructions, code_options)
2023-05-08T10:46:35.4975399Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform
2023-05-08T10:46:35.4975970Z tracer.run()
2023-05-08T10:46:35.4976818Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run
2023-05-08T10:46:35.4977355Z super().run()
2023-05-08T10:46:35.4978121Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run
2023-05-08T10:46:35.4978685Z and self.step()
2023-05-08T10:46:35.4979578Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step
2023-05-08T10:46:35.4980213Z getattr(self, inst.opname)(inst)
2023-05-08T10:46:35.4980931Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE
2023-05-08T10:46:35.4981543Z self.output.compile_subgraph(
2023-05-08T10:46:35.4982068Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph
2023-05-08T10:46:35.4982780Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
2023-05-08T10:46:35.4983236Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
2023-05-08T10:46:35.4983542Z return func(*args, **kwds)
2023-05-08T10:46:35.4984078Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph
2023-05-08T10:46:35.4984491Z compiled_fn = self.call_user_compiler(gm)
2023-05-08T10:46:35.4985117Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-08T10:46:35.4985460Z r = func(*args, **kwargs)
2023-05-08T10:46:35.4985954Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler
2023-05-08T10:46:35.4986401Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
2023-05-08T10:46:35.4986981Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler
2023-05-08T10:46:35.4987396Z compiled_fn = compiler_fn(gm, self.example_inputs())
2023-05-08T10:46:35.4987936Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper
2023-05-08T10:46:35.4988344Z compiled_gm = compiler_fn(gm, example_inputs)
2023-05-08T10:46:35.4989228Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor
2023-05-08T10:46:35.4989584Z return compile_fx(*args, **kwargs)
2023-05-08T10:46:35.4990091Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx
2023-05-08T10:46:35.4990458Z return compile_fx_with_cpp_wrapper(
2023-05-08T10:46:35.4991007Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_with_cpp_wrapper
2023-05-08T10:46:35.4991350Z return compile_fx(
2023-05-08T10:46:35.4991948Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx
2023-05-08T10:46:35.4992309Z return aot_autograd(
2023-05-08T10:46:35.4992802Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn
2023-05-08T10:46:35.4993214Z cg = aot_module_simplified(gm, example_inputs, **kwargs)
2023-05-08T10:46:35.4993786Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified
2023-05-08T10:46:35.4994189Z compiled_fn = create_aot_dispatcher_function(
2023-05-08T10:46:35.4994686Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-08T10:46:35.4995026Z r = func(*args, **kwargs)
2023-05-08T10:46:35.4995671Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function
2023-05-08T10:46:35.4996199Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
2023-05-08T10:46:35.4996794Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe
2023-05-08T10:46:35.4997237Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
2023-05-08T10:46:35.4997851Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base
2023-05-08T10:46:35.4998281Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
2023-05-08T10:46:35.4998904Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base
2023-05-08T10:46:35.4999318Z compiled_fw = compiler(fw_module, adjusted_flat_args)
2023-05-08T10:46:35.4999841Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-08T10:46:35.5000170Z r = func(*args, **kwargs)
2023-05-08T10:46:35.5000673Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base
2023-05-08T10:46:35.5001026Z return inner_compile(
2023-05-08T10:46:35.5001505Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper
2023-05-08T10:46:35.5001906Z inner_compiled_fn = compiler_fn(gm, example_inputs)
2023-05-08T10:46:35.5002432Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner
2023-05-08T10:46:35.5002768Z return fn(*args, **kwargs)
2023-05-08T10:46:35.5003067Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
2023-05-08T10:46:35.5003371Z return func(*args, **kwds)
2023-05-08T10:46:35.5003878Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 210, in compile_fx_inner
2023-05-08T10:46:35.5004228Z graph.run(*example_inputs)
2023-05-08T10:46:35.5004711Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-08T10:46:35.5005376Z r = func(*args, **kwargs)
2023-05-08T10:46:35.5005872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 249, in run
2023-05-08T10:46:35.5006375Z return super().run(*args)
2023-05-08T10:46:35.5006855Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run
2023-05-08T10:46:35.5007208Z self.env[node] = self.run_node(node)
2023-05-08T10:46:35.5007678Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 476, in run_node
2023-05-08T10:46:35.5008076Z result = fallback_handler(n.target, add_to_fallback_set=False)(
2023-05-08T10:46:35.5008612Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 1043, in handler
2023-05-08T10:46:35.5009313Z TensorBox.create, ir.FallbackKernel.create(kernel, *args, **kwargs)
2023-05-08T10:46:35.5009905Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3173, in create
2023-05-08T10:46:35.5010251Z packed = FallbackKernel(
2023-05-08T10:46:35.5010743Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3122, in __init__
2023-05-08T10:46:35.5011048Z assert (
2023-05-08T10:46:35.5011444Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
2023-05-08T10:46:35.5011834Z AssertionError: slice.Tensor is not supported with cpp wrapper
2023-05-08T10:46:35.5012039Z
2023-05-08T10:46:35.5012045Z
2023-05-08T10:46:35.5012212Z You can suppress this exception and fall back to eager by setting:
2023-05-08T10:46:35.5012597Z import torch._dynamo
2023-05-08T10:46:35.5012910Z torch._dynamo.config.suppress_errors = True
2023-05-08T10:46:35.5013095Z
2023-05-08T10:46:35.5013274Z TorchDynamo optimized model failed to run because of following error
2023-05-08T10:46:35.5014490Z fail_to_run
```
### Versions
master
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 1 |
2,673 | 100,960 |
Issues building with caffe2 enabled
|
caffe2, triaged
|
### 🐛 Describe the bug
When attempting to build from main (57e19ad86d072c10851c12e77d03a78e54bd7ad4), I am unable to build successfully with the following command:
```shell
BUILD_CAFFE2=ON BUILD_CAFFE2_OPS=ON USE_MKLDNN=ON python setup.py install
```
Output:
```shell
#31 5185.6 /opt/pytorch/caffe2/operators/generate_proposals_op_util_nms_gpu_test.cc:449:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
#31 5185.6 for (int itest = 0; itest < input_thresh.size(); ++itest) {
#31 5185.6 ~~~~~~^~~~~~~~~~~~~~~~~~~~~
#31 5209.1 In file included from /opt/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/include/dnnl.h:20:0,
#31 5209.1 from /opt/pytorch/third_party/ideep/include/ideep/abstract_types.hpp:4,
#31 5209.1 from /opt/pytorch/third_party/ideep/include/ideep.hpp:39,
#31 5209.1 from /opt/pytorch/caffe2/ideep/ideep_utils.h:6,
#31 5209.1 from /opt/pytorch/caffe2/python/pybind_state_ideep.cc:12:
#31 5209.1 /opt/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/include/oneapi/dnnl/dnnl.h:23:10: fatal error: oneapi/dnnl/dnnl_config.h: No such file or directory
#31 5209.1 #include "oneapi/dnnl/dnnl_config.h"
#31 5209.1 ^~~~~~~~~~~~~~~~~~~~~~~~~~~
#31 5209.1 compilation terminated.
#31 5209.1 make[2]: *** [caffe2/CMakeFiles/caffe2_pybind11_state_gpu.dir/python/pybind_state_ideep.cc.o] Error 1
#31 5209.1 make[2]: *** Waiting for unfinished jobs....
#31 5225.2 make[1]: *** [caffe2/CMakeFiles/caffe2_pybind11_state_gpu.dir/all] Error 2
#31 5225.2 make[1]: *** Waiting for unfinished jobs....
#31 5258.6 /opt/pytorch/torch/csrc/cuda/shared/cudart.cpp: In function ‘void torch::cuda::shared::initCudartBindings(PyObject*)’:
#31 5258.6 /opt/pytorch/torch/csrc/cuda/shared/cudart.cpp:103:7: warning: ‘cudaError_t cudaProfilerInitialize(const char*, const char*, cudaOutputMode_t)’ is deprecated [-Wdeprecated-declarations]
#31 5258.6 cudaProfilerInitialize);
#31 5258.6 ^~~~~~~~~~~~~~~~~~~~~~
#31 5258.6 In file included from /opt/pytorch/torch/csrc/cuda/shared/cudart.cpp:5:0:
#31 5258.6 /usr/local/cuda/include/cuda_profiler_api.h:134:57: note: declared here
#31 5258.6 extern __CUDA_DEPRECATED __host__ cudaError_t CUDARTAPI cudaProfilerInitialize(const char *configFile,
#31 5258.6 ^~~~~~~~~~~~~~~~~~~~~~
#31 5317.0 make: *** [all] Error 2
```
It looks like the `dnnl_config.h.in` is not being configured in `third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt`
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-128-generic-x86_64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.161.03
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
Stepping: 5
CPU MHz: 4818.277
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 7599.80
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.3 py310h5f9d8c6_1
[conda] numpy-base 1.24.3 py310hb5e798b_1
| 1 |
2,674 | 100,957 |
PyTorch installs the file mkldnn.cmake that looks for the package MKLDNN that doesn't exist
|
module: build, triaged, module: mkldnn
|
### 🐛 Describe the bug
`/usr/local/lib/python3.9/site-packages/torch/share/cmake/Caffe2/public/mkldnn.cmake` looks for the package `MKLDNN`.
There is a GitHub project https://github.com/Intel-tensorflow/mkl-dnn that was last updated in 2019; its website, however, now redirects to oneDNN.
As a result, the DGL project (https://github.com/dmlc/dgl) fails to build:
```
CMake Error at /usr/local/lib/python3.9/site-packages/torch/share/cmake/Caffe2/public/mkldnn.cmake:7 (find_package):
By not providing "FindMKLDNN.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "MKLDNN", but
CMake did not find one.
Could not find a package configuration file provided by "MKLDNN" with any
of the following names:
MKLDNNConfig.cmake
mkldnn-config.cmake
Add the installation prefix of "MKLDNN" to CMAKE_PREFIX_PATH or set
"MKLDNN_DIR" to a directory containing one of the above files. If "MKLDNN"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
/usr/local/lib/python3.9/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:106 (include)
/usr/local/lib/python3.9/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:29 (find_package)
```
### Versions
PyTorch-2.0.0
FreeBSD 13.2
cc @malfet @seemethere @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 3 |
2,675 | 100,932 |
torch.concat fails with float16 input in autocast(device_type=cpu) context
|
module: cpu, triaged, module: bfloat16, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
`torch.concat` fails with float16 inputs inside an autocast `device_type="cpu"` context, which is quite surprising. It should just return a float16 output! The error is:
```
RuntimeError: Unexpected floating ScalarType in at::autocast::prioritize
```
```
import torch

with torch.autocast(device_type='cuda'):
    print(torch.concat([torch.zeros(size=(1, 1), dtype=torch.float32)]).dtype)  # float32
    print(torch.concat([torch.zeros(size=(1, 1), dtype=torch.float16)]).dtype)  # float16

with torch.autocast(device_type='cpu'):
    print(torch.concat([torch.zeros(size=(1, 1), dtype=torch.float32)]).dtype)  # float32
    print(torch.concat([torch.zeros(size=(1, 1), dtype=torch.float16)]).dtype)  # Fails!
```
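A possible workaround, sketched below and untested, is to disable autocast just around the concat (or cast the inputs explicitly) so the float16 tensors never go through the CPU autocast dispatch:

```python
import torch

with torch.autocast(device_type='cpu'):
    x = torch.zeros(size=(1, 1), dtype=torch.float16)
    # Temporarily leave the autocast region for the op that trips the error.
    with torch.autocast(device_type='cpu', enabled=False):
        y = torch.concat([x])
    print(y.dtype)  # float16
```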
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.5 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4)
Clang version: 12.0.1 (Red Hat 12.0.1-4.module+el8.5.0+715+58f51d49)
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 08:57:19) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 48
NUMA node(s): 6
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 75F3 32-Core Processor
Stepping: 1
CPU MHz: 2944.468
BogoMIPS: 5888.93
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext cpb invpcid_single ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero wbnoinvd arat umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.24.3 py311h64a7726_0 conda-forge
[conda] pytorch 2.0.0 cpu_py311h410fd25_0 conda-forge
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel
| 5 |
2,676 | 100,928 |
DISABLED test_vmapjvpvjp_linalg_lu_factor_ex_cuda_float32 (__main__.TestOperatorsCUDA)
|
triaged, module: flaky-tests, skipped, module: functorch
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_vmapjvpvjp_linalg_lu_factor_ex_cuda_float32&suite=TestOperatorsCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_vmapjvpvjp_linalg_lu_factor_ex_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `functorch/test_ops.py` or `functorch/test_ops.py`
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 4 |
2,677 | 100,914 |
[MPS] Track failures of test_module.py for MPS backend
|
triaged, module: backend, module: mps
|
### 🐛 Describe the bug
This issue tracks the failures in test_modules.py for the MPS backend.
- BatchNorm1d/BatchNorm2d: fail due to an unsupported float64 tensor being passed to MPS's batch_norm implementation (see the minimal sketch below), with the error:
`TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.`
- These tests fail due to functional issues (mismatching elements):
1. SELU
2. Softplus
3. PReLU
4. ReLU6
5. Threshold
I'll update this issue as each of the above failures gets fixed.
These issues were discovered in the PR #95334
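For reference, a minimal sketch of the float64 restriction behind the BatchNorm failures (illustrative only; the real failures go through test_modules.py):

```python
import torch

# Moving a float64 tensor to the MPS device already raises the TypeError quoted above;
# the BatchNorm tests hit the same restriction inside MPS's batch_norm path.
if torch.backends.mps.is_available():
    x = torch.randn(2, 3, 4, 4, dtype=torch.float64)
    try:
        x.to("mps")
    except TypeError as e:
        print(e)
```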
### Versions
PyTorch version: 2.1.0a0+gite975f83
Is debug build: True
CMake version: version 3.26.3
Python version: 3.9.16 (main, Mar 8 2023, 04:29:24) [Clang 14.0.6 ] (64-bit runtime)
CPU:
Apple processor
cc @bdhirsh @kulinseth @albanD @malfet @DenisVieriu97 @abhudev
| 0 |
2,678 | 100,913 |
[onnx] UnsupportedOperatorError: Exporting the operator 'aten::l1_loss' to ONNX opset version 17 is not supported
|
module: onnx, triaged, oncall: pt2
|
### 🐛 Describe the bug
My repro:
```
python benchmarks/dynamo/torchbench.py --only Super_SloMo --backend onnxrt -dcuda --performance --inference
```
with the line `opset_version=17,` added at [this line](https://github.com/pytorch/pytorch/blob/b0a372e1faef97adf46bab08510c7e0abdff2611/torch/_dynamo/backends/onnxrt.py#L77), it fails with the following error:
```
cuda eval Super_SloMo ========= Diagnostic Run torch.onnx.export version 2.0.0a0+gitc263bd4 ==========
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ========================
ERROR: missing-standard-symbolic-function
=========================================
Exporting the operator 'aten::l1_loss' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
None
<Set verbose=True to see more details>
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/home/yj/pytorch/benchmarks/dynamo/common.py", line 1485, in warmup
fn(model, example_inputs)
File "/home/yj/pytorch/torch/_dynamo/eval_frame.py", line 280, in _fn
return fn(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/eval_frame.py", line 433, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 519, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 122, in _fn
return fn(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert
return _compile(
File "/home/yj/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 425, in _compile
out_code = transform_code_object(code, transform)
File "/home/yj/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/home/yj/pytorch/torch/_dynamo/convert_frame.py", line 410, in transform
tracer.run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2010, in run
super().run()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/yj/pytorch/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/yj/pytorch/torch/_dynamo/output_graph.py", line 723, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/yj/anaconda3/envs/dynamite/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/yj/pytorch/torch/_dynamo/output_graph.py", line 800, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/yj/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/output_graph.py", line 859, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/yj/pytorch/torch/_dynamo/output_graph.py", line 855, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/yj/pytorch/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/yj/pytorch/torch/_dynamo/backends/common.py", line 125, in wrapper
return fn(model, inputs, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/backends/onnxrt.py", line 55, in onnxrt
return onnxrt(gm, example_inputs, filename=tmp.name)
File "/home/yj/pytorch/torch/_dynamo/backends/common.py", line 125, in wrapper
return fn(model, inputs, **kwargs)
File "/home/yj/pytorch/torch/_dynamo/backends/onnxrt.py", line 72, in onnxrt
torch.onnx.export(
File "/home/yj/pytorch/torch/onnx/utils.py", line 507, in export
_export(
File "/home/yj/pytorch/torch/onnx/utils.py", line 1567, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/yj/pytorch/torch/onnx/utils.py", line 1128, in _model_to_graph
graph = _optimize_graph(
File "/home/yj/pytorch/torch/onnx/utils.py", line 666, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/yj/pytorch/torch/onnx/utils.py", line 1918, in _run_symbolic_function
raise errors.UnsupportedOperatorError(
torch._dynamo.exc.BackendCompilerFailed: backend='onnxrt' raised:
UnsupportedOperatorError: Exporting the operator 'aten::l1_loss' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
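Until a symbolic for `aten::l1_loss` is added upstream, one possible workaround is to register a custom symbolic before exporting. The sketch below is illustrative and untested; it ignores the `reduction` argument and assumes the default `'mean'` reduction:

```python
import torch
from torch.onnx import register_custom_op_symbolic


def l1_loss_symbolic(g, self, target, reduction):
    # mean(|self - target|); a complete implementation would branch on `reduction`.
    diff = g.op("Sub", self, target)
    abs_diff = g.op("Abs", diff)
    return g.op("ReduceMean", abs_diff, keepdims_i=0)


register_custom_op_symbolic("aten::l1_loss", l1_loss_symbolic, opset_version=17)
```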
### Versions
PyTorch version: 2.0.0a0+gitc263bd4
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 9 5900X 12-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2786.283
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7399.70
Virtualization: AMD-V
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 6 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.12.4
[pip3] ema-pytorch==0.1.4
[pip3] functorch==1.14.0a0+408bcf1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.16.0
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.1
[pip3] torch==2.1.0a0+git2f95380
[pip3] torch-fidelity==0.3.0
[pip3] torch-scatter==2.1.1+pt20cpu
[pip3] torch-sparse==0.6.17+pt20cpu
[pip3] torch-struct==0.5
[pip3] torchaudio==2.1.0a0+d5b2996
[pip3] torchdata==0.7.0a0+f083d52
[pip3] torchmetrics==0.11.0
[pip3] torchrec-nightly==2023.1.25
[pip3] torchtext==0.16.0a0+79100a6
[pip3] torchvision==0.16.0a0+0d75d9e
[pip3] torchx==0.4.0
[pip3] vector-quantize-pytorch==0.10.15
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.12.4 pypi_0 pypi
[conda] ema-pytorch 0.1.4 pypi_0 pypi
[conda] functorch 1.14.0a0+408bcf1 pypi_0 pypi
[conda] magma-cuda118 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] numpy 1.23.5 pypi_0 pypi
[conda] open-clip-torch 2.16.0 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.1 pypi_0 pypi
[conda] torch 2.1.0a0+git2f95380 dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-scatter 2.1.1+pt20cpu pypi_0 pypi
[conda] torch-sparse 0.6.17+pt20cpu pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.1.0a0+d5b2996 dev_0 <develop>
[conda] torchdata 0.7.0a0+f083d52 pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchrec-nightly 2023.1.25 pypi_0 pypi
[conda] torchtext 0.16.0a0+79100a6 dev_0 <develop>
[conda] torchvision 0.15.0a0+85983a5 pypi_0 pypi
[conda] torchx 0.4.0 pypi_0 pypi
[conda] vector-quantize-pytorch 0.10.15 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 0 |
2,679 | 100,904 |
Revise glossary
|
module: docs, triaged
|
### 📚 The doc issue
The glossary provided by #40640 could use some attention
- [ ] revise existing terms
- [ ] create new glossary entries (and links) for various sections of the docs
- [ ] compiler terms to augment the existing JIT terms
- [ ] generator
- [ ] torchrun
- [ ] torchscript
- [ ] backends
- [ ] autograd
...
pls post a list of suggestions
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
| 0 |
2,680 | 100,884 |
`torch.distributions.categorical.Categorical` samples indices with zero probability
|
module: distributions, triaged
|
### 🐛 Describe the bug
Similar to https://github.com/pytorch/pytorch/issues/91863, but the bug also occurs when the distribution is constructed from `probs` rather than `logits`.
```python
import torch
from collections import Counter
probs = torch.tensor([0.0] * 1000 + [1.0] * 1000)
categorical = torch.distributions.categorical.Categorical(probs=probs)
counts = Counter()
for _ in range(100000):
    counts[categorical.sample().item()] += 1
print(min(counts)) # <- bug: should be >= 1000, but got 66
```
### Versions
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 525.60.13
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7302 16-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 5988.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.2.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @fritzo @neerajprad @alicanb @nikitaved
| 0 |
2,681 | 100,879 |
MPS backend is not supported on MacOS 12.6.3
|
needs reproduction, triaged, module: mps
|
### 🐛 Describe the bug
ERROR: test_output_match_zero__cpu_uint8 (__main__.TestConsistencyCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ec2-user/pytorch/torch/testing/_internal/common_device_type.py", line 414, in instantiated_test
raise rte
File "/Users/ec2-user/pytorch/torch/testing/_internal/common_device_type.py", line 401, in instantiated_test
result = test(self, **param_kwargs)
File "/Users/ec2-user/pytorch/torch/testing/_internal/common_device_type.py", line 846, in test_wrapper
return test(*args, **kwargs)
File "/Users/ec2-user/pytorch/test/test_mps.py", line 10350, in test_output_match
mps_sample = cpu_sample.transform(
File "/Users/ec2-user/pytorch/torch/testing/_internal/opinfo/core.py", line 264, in transform
tt(self.input),
File "/Users/ec2-user/pytorch/torch/testing/_internal/opinfo/core.py", line 251, in tt
return _tt(t)
File "/Users/ec2-user/pytorch/torch/testing/_internal/opinfo/core.py", line 248, in _tt
return f(t)
File "/Users/ec2-user/pytorch/test/test_mps.py", line 10351, in <lambda>
lambda x: x.detach().to("mps").requires_grad_(x.requires_grad) if isinstance(x, torch.Tensor) else x)
RuntimeError: The MPS backend is supported on MacOS 12.3+.Current OS version can be queried using `sw_vers`
----------------------------------------------------------------------
Ran 4506 tests in 57.884s
FAILED (errors=2687, skipped=159, expected failures=1561)
(pt) ec2-user@ip-172-31-44-228 test %
(pt) ec2-user@ip-172-31-44-228 test % sw_vers
ProductName: macOS
ProductVersion: 12.6.3
BuildVersion: 21G419
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gita3989b2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.3 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.9.16 (main, Mar 8 2023, 04:29:44) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-8700B CPU @ 3.20GHz
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0a0+gita3989b2
[conda] mkl 2023.1.0 h59209a4_43558
[conda] mkl-include 2023.1.0 hecd8cb5_43558
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.1.0a0+gita3989b2 dev_0 <develop>
cc @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 4 |
2,682 | 100,873 |
onnx.export fails if do_constant_folding=False
|
module: onnx, triaged
|
### 🐛 Describe the bug
I am currently using the code below to convert a diffusion UNet model to ONNX format. I noticed the UNet model is larger than 2 GB, which exceeds the protobuf limit, so the model weights need to be exported as separate external files.
```python
import onnx
import torch
from diffusers import UNet2DConditionModel
unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4",
                                            torch_dtype=torch.float16,
                                            revision="fp16",
                                            subfolder="unet")
unet.cuda()
# unet.float()

with torch.inference_mode(), torch.autocast("cuda"):
    inputs = torch.randn(2, 4, 64, 64, dtype=torch.float16, device='cuda'), torch.randn(1, dtype=torch.float16, device='cuda'), torch.randn(2, 77, 768, dtype=torch.float16, device='cuda')

    # Export the model
    torch.onnx.export(unet,
                      inputs,
                      "unet.onnx",
                      opset_version=14,
                      do_constant_folding=False,  # whether to execute constant folding for optimization
                      # do_constant_folding=True,
                      input_names=['input_0', 'input_1', 'input_2'],
                      output_names=['output_0'])
```
The code works as expected when using either float32 precision or the `do_constant_folding=True` flag. However, when exporting the float16 version with `do_constant_folding=False`, the export fails with the following libprotobuf error.
```
libprotobuf ERROR ../third_party/protobuf/src/google/protobuf/message_lite.cc:457] onnx_torch.ModelProto exceeded maximum protobuf size of 2GB: 2891617469
```
### Versions
1.12.1+cu113
| 0 |
2,683 | 100,850 |
[BUG] Poor torch.bmm performance on H100
|
module: performance, module: cuda, triaged, module: cublas, matrix multiplication
|
### 🐛 Describe the bug
I was recently testing bmm performance on H100. When the batch size is 1, the number is normal and expected. However, when I increase the batch size to 2 or 4, the TFLOPS number drops significantly. I think the bug stems from cuBLAS, because I can obtain similarly bad numbers from a CUDA micro-benchmarking script that calls `cublasGemmStridedBatchedEx` directly.
- batch size 1: 742.7 TFLOPS
- batch size 2: 549.5 TFLOPS
- batch size 4: 542.9 TFLOPS
The PyTorch results can be reproduced with the script below, e.g. `python3 benchmark.py --batch-size 1`, run inside the `nvcr.io/ea-bignlp/bignlp-training` container.
```
import argparse
import torch
pt_dtype_mappings = {
    "float": torch.float,
    "half": torch.half,
    "float16": torch.float16,
    "bfloat16": torch.bfloat16,
}


def parse_args():
    """Define command-line arguments"""
    parser = argparse.ArgumentParser()
    parser.add_argument("--batch-size", type=int, default=1, help="Batch size")
    parser.add_argument("--sequence-length", type=int, default=2048, help="Sequence length")
    parser.add_argument("--hidden-size", type=int, default=8192, help="Hidden size")
    parser.add_argument("--warmup", type=int, default=10, help="Warm-up iterations")
    parser.add_argument(
        "--iterations",
        type=int,
        default=100,
        help="The number of repeat matmul iterations",
    )
    parser.add_argument("--dtype", type=str, default="bfloat16", help="Precision of the tensor")
    parser.add_argument("--gpu-idx", type=int, default=0, help="GPU index")
    return parser.parse_args()


def run(
    batch_size, sequence_size, hidden_size, warmup=10, iterations=100, dtype="float", gpu_idx=0
):
    ngpu = torch.cuda.device_count()
    assert gpu_idx < ngpu, f"GPU index {gpu_idx} is out of range ({ngpu} GPUs available)"
    torch.cuda.set_device(gpu_idx)
    a = torch.randn(batch_size, sequence_size, hidden_size, dtype=dtype, device=f"cuda:{gpu_idx}")
    b = torch.randn(batch_size, hidden_size, sequence_size, dtype=dtype, device=f"cuda:{gpu_idx}")
    c = torch.randn(batch_size, sequence_size, sequence_size, dtype=dtype, device=f"cuda:{gpu_idx}")
    for _ in range(warmup):
        torch.bmm(a, b, out=c)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iterations):
        torch.bmm(a, b, out=c)
    end.record()
    torch.cuda.synchronize()
    tflops = (
        2
        * batch_size
        * sequence_size**2
        * hidden_size
        * iterations
        / start.elapsed_time(end)
        / 10**9
    )
    print(
        f"The TFLOPS for computing batched matmul between {dtype} tensor "
        f"({batch_size}, {sequence_size}, {hidden_size}) and "
        f"({batch_size}, {hidden_size}, {sequence_size}) on GPU {gpu_idx} is {tflops}"
    )


if __name__ == "__main__":
    args = parse_args()
    batch_size = args.batch_size
    sequence_size = args.sequence_length
    hidden_size = args.hidden_size
    warmup = args.warmup
    iterations = args.iterations
    if args.dtype not in pt_dtype_mappings:
        raise ValueError(f"Unsupported dtype: {args.dtype}")
    dtype = pt_dtype_mappings[args.dtype]
    gpu_idx = args.gpu_idx
    run(batch_size, sequence_size, hidden_size, warmup, iterations, dtype, gpu_idx)
```
### Versions
```
PyTorch version: 1.14.0a0+44dac51
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
Frequency boost: enabled
CPU MHz: 1895.558
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.9.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.14.0a0+44dac51
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchmetrics==0.9.1
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[pip3] triton==2.0.0
[pip3] tritonclient==2.22.4
[conda] Could not collect
```
cc @ngimel @csarofeen @ptrblck @xwang233
| 6 |
2,684 | 100,847 |
logger instead of print in lr_scheduler.py
|
module: optimizer, module: logging, triaged, actionable, module: LrScheduler
|
### 🚀 The feature, motivation and pitch
In lr_scheduler.py, there are a couple of calls to print(). For example, `ReduceLROnPlateau._reduce_lr` calls `print`. If it used a logger instead, the messages could be turned off, or would at least show up in the standard Python logging streams (stderr by default).
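A minimal sketch of the proposed direction (the helper name and message format below are illustrative, not the actual lr_scheduler.py code): routing the message through the `logging` module lets users silence or redirect it.

```python
import logging

# Module-level logger that lr_scheduler.py could use in place of print().
logger = logging.getLogger("torch.optim.lr_scheduler")


def log_lr_reduction(epoch: int, group: int, new_lr: float) -> None:
    # Verbosity is now controlled by the logging configuration instead of print().
    logger.info("Epoch %d: reducing learning rate of group %d to %.4e.", epoch, group, new_lr)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)  # opt in to the messages
    log_lr_reduction(5, 0, 1e-4)
    logger.setLevel(logging.WARNING)         # ...or turn them off entirely
    log_lr_reduction(6, 0, 1e-5)
```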
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 7 |
2,685 | 100,842 |
Accuracy issues with Jitterated complex kernels for acos, acosh, asin, asinh, tan and tanh
|
module: cuda, good first issue, triaged, module: jiterator
|
## Issue description
The following complex kernels cause CI failures when implemented with `jitted_gpu_kernel`.
- [ ] [acos](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UnaryGeometricAcosKernel.cu)
- [ ] [acosh](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UnaryGeometricAcoshKernel.cu)
- [ ] [asin](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UnaryGeometricAsinKernel.cu)
- [ ] [asinh](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UnaryGeometricAsinhKernel.cu)
- [ ] [tan](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UnaryGeometricTanKernel.cu)
- [ ] [tanh](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/UnaryGeometricTanhKernel.cu)
Some may simply require a small tolerance increase in the tests, but for larger differences we need to figure out which implementation is correct: thrust or the jiterator.
Note that `gpu_kernel` implementations use `thrust` while `jitted_gpu_kernel` calls the complex math implementations we vendor from `libc++`. These can be found in [`llvm_complex.cpp`](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cuda/llvm_complex.cpp).
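A rough way to eyeball the per-op discrepancy is to compare the CUDA results against the CPU results as a reference point (the CPU path is not necessarily ground truth either; the sample points below are arbitrary):

```python
import torch

# Requires a CUDA build; prints the max absolute difference per op for complex64 inputs.
x = torch.randn(1 << 16, dtype=torch.complex64) * 5
for op in (torch.acos, torch.acosh, torch.asin, torch.asinh, torch.tan, torch.tanh):
    cpu = op(x)
    gpu = op(x.cuda()).cpu()
    print(op.__name__, (cpu - gpu).abs().max().item())
```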
cc @ngimel @mruberry @ezyang @soumith @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
2,686 | 100,838 |
Dynamo infers different return type vs. eager for `torch.ops.aten`
|
good first issue, triaged, oncall: pt2, module: decompositions
|
### 🐛 Describe the bug
```python
import torch
from torch._dynamo.backends.common import aot_autograd
from torch._functorch.aot_autograd import make_boxed_compiler
import torch._dynamo as dynamo
@make_boxed_compiler
def my_aot_autograd_backend(gm: torch.fx.GraphModule, _example_inputs):
    last_node = list(gm.graph.nodes)[-2]
    print("dynamo return dtype", last_node.meta["val"].dtype)
    return gm


my_backend = aot_autograd(fw_compiler=my_aot_autograd_backend)


def compile(model, example_args):
    with torch.no_grad():
        torch._dynamo.reset()
        dynamo.optimize(my_backend, nopython=True)(model)(*example_args)


def run(model, example_args):
    print(model.__class__.__name__)
    print(
        f"eager return dtype",
        model.forward(*example_args).dtype,
    )
    compile(model, example_args)
    print()


class BaddbmmDifferentDtypesModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input, batch1, batch2):
        return torch.ops.aten.baddbmm(input, batch1, batch2)


run(
    BaddbmmDifferentDtypesModule(),
    (
        torch.randint(size=(3, 4, 5), low=0, high=10),
        torch.rand(3, 4, 6),
        torch.rand(3, 6, 5),
    ),
)


class FullModuleInt3D(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self):
        return torch.ops.aten.full([2, 3, 4], 5)


run(FullModuleInt3D(), ())


class ThresholdBackward1dIntModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, grad, input):
        return torch.ops.aten.threshold_backward(grad, input, 1)


run(
    ThresholdBackward1dIntModule(),
    (torch.randint(0, 10, (4,)), torch.randint(0, 8, (4,))),
)


class ThresholdBackward2dIntModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, grad, input):
        return torch.ops.aten.threshold_backward(grad, input, 0.5)


run(
    ThresholdBackward2dIntModule(),
    (torch.randint(0, 10, (4, 5)), torch.randint(0, 8, (4, 5))),
)


class ThresholdBackward3dIntModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, grad, input):
        return torch.ops.aten.threshold_backward(grad, input, 1)


run(
    ThresholdBackward2dIntModule(),
    (torch.randint(0, 10, (4, 5, 6)), torch.randint(0, 8, (4, 5, 6))),
)
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @SherlockNoMad @powderluv, @ramiro050
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230428+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.6
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 5083.3979
CPU min MHz: 2200.0000
BogoMIPS: 6787.52
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230428+cu117
[pip3] torchaudio==2.0.0.dev20230313+cu117
[pip3] torchvision==0.16.0.dev20230428+cu117
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230428+cu117 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230313+cu117 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230428+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
| 6 |
2,687 | 100,821 |
add new private operator copy-on-write torch._lazy_clone
|
module: internals, triaged, open source, ciflow/trunk, topic: not user facing, ciflow/mps, ciflow/inductor, no-stale
|
add new private operator copy-on-write torch._lazy_clone
Summary:
This is an early access operator with some limitations. It will more
eagerly copy than desired because we haven't completed our audit
partitioning data pointer access between const_data_ptr and
mutable_data_ptr.
The idea is to add some logging when a copy is triggered to help
prioritize which operations will be audited and fixed.
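A usage sketch based on the summary above (hypothetical and unverified; it assumes the operator is exposed as `torch._lazy_clone(tensor)` and that mutation triggers materialization as described):

```python
import torch

a = torch.randn(4, 4)
b = torch._lazy_clone(a)  # copy-on-write clone; no data is copied yet
b.add_(1)                 # first mutable data-pointer access materializes b's own copy
# If the copy-on-write behaves as described, `a` is left unchanged here.
```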
Test Plan: Added a new unit test.
---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/100821).
* __->__ #100821
* #100820
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 9 |
2,688 | 100,820 |
implement a function to materialize a copy-on-write storage
|
module: internals, triaged, open source, ciflow/trunk, topic: not user facing, ciflow/mps
|
implement a function to materialize a copy-on-write storage
Summary:
This is triggered if the storage is copy-on-write whenever mutable
access to a DataPtr is given.
Test Plan: 100% code coverage
---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/100820).
* #100821
* __->__ #100820
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 3 |
2,689 | 100,812 |
hf_LongFormer failing eval with inductor and dynamic shapes
|
triaged, oncall: pt2, module: dynamic shapes, module: inductor
|
### 🐛 Describe the bug
Here is the error:
```
2023-05-06T11:09:15.3987736Z cuda eval hf_Longformer WARNING:common:fp64 golden ref were not generated for hf_Longformer. Setting accuracy check to cosine
2023-05-06T11:09:23.9126068Z [2023-05-06 11:09:23,911] torch._dynamo.variables.torch: [WARNING] Calling <built-in method div of type object at 0x7f6ee83e7ec0> on only torch.SymInt arguments is not yet supported.
2023-05-06T11:09:23.9127806Z To support this behavior, we need to allow const-propping tensors that store symint data.
2023-05-06T11:09:23.9130519Z For now, dynamo will explicitly graph break when it encounters user code with this behavior.
2023-05-06T11:09:23.9130774Z
2023-05-06T11:09:29.3972191Z ERROR:common:backend='inductor' raised:
2023-05-06T11:09:29.3972744Z LoweringException: AttributeError: 'View' object has no attribute 'get_stride'
2023-05-06T11:09:29.3973066Z target: aten.sym_stride
2023-05-06T11:09:29.3973300Z args[0]: TensorBox(
2023-05-06T11:09:29.3973508Z View(
2023-05-06T11:09:29.3973680Z View(
2023-05-06T11:09:29.3980329Z PermuteView(data=PermuteView(data=View(
2023-05-06T11:09:29.3980772Z StorageBox(
2023-05-06T11:09:29.3981578Z Pointwise(
2023-05-06T11:09:29.3982239Z 'cuda',
2023-05-06T11:09:29.3982597Z torch.float16,
2023-05-06T11:09:29.3982979Z def inner_fn(index):
2023-05-06T11:09:29.3983372Z i0, i1, i2 = index
2023-05-06T11:09:29.3983722Z tmp0 = ops.load(buf1, i2 + 768 * i1 + 768 * i0 * s0)
2023-05-06T11:09:29.3984197Z tmp1 = ops.load(arg1_1, i2)
2023-05-06T11:09:29.3984537Z tmp2 = tmp0 + tmp1
2023-05-06T11:09:29.3984997Z tmp3 = ops.constant(8.0, torch.float16)
2023-05-06T11:09:29.3985422Z tmp4 = tmp2 / tmp3
2023-05-06T11:09:29.3985786Z return tmp4
2023-05-06T11:09:29.3986166Z ,
2023-05-06T11:09:29.3986453Z ranges=[4096, s0, 768],
2023-05-06T11:09:29.3986872Z origin_node=div,
2023-05-06T11:09:29.3987238Z origins={add, div}
2023-05-06T11:09:29.3987584Z )
2023-05-06T11:09:29.3987943Z ),
2023-05-06T11:09:29.3988239Z size=(4096, s0, 12, 64),
2023-05-06T11:09:29.3988695Z reindex=lambda i0, i1, i2, i3: [i0, i1, 64*i2 + i3],
2023-05-06T11:09:29.3989110Z origins={add, div, view_6}
2023-05-06T11:09:29.3989558Z ), dims=[1, 0, 2, 3]), dims=[0, 2, 1, 3]),
2023-05-06T11:09:29.3989890Z size=(12*s0, 4096, 64),
2023-05-06T11:09:29.3990546Z reindex=lambda i0, i1, i2: [ModularIndexing(i0, 12, s0), ModularIndexing(i0, 1, 12), i1, i2],
2023-05-06T11:09:29.3991025Z origins={view_8}
2023-05-06T11:09:29.3991405Z ),
2023-05-06T11:09:29.3991743Z size=(12*s0, 8, 512, 64),
2023-05-06T11:09:29.3992205Z reindex=lambda i0, i1, i2, i3: [i0, 512*i1 + i2, i3],
2023-05-06T11:09:29.3992593Z origins={view_10}
2023-05-06T11:09:29.3992932Z )
2023-05-06T11:09:29.3993256Z )
2023-05-06T11:09:29.3993505Z args[1]: 1
2023-05-06T11:09:29.3993712Z
2023-05-06T11:09:29.3993723Z
2023-05-06T11:09:29.3994041Z You can suppress this exception and fall back to eager by setting:
2023-05-06T11:09:29.3994494Z import torch._dynamo
2023-05-06T11:09:29.3994998Z torch._dynamo.config.suppress_errors = True
2023-05-06T11:09:29.3995420Z Traceback (most recent call last):
2023-05-06T11:09:29.3996020Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy
2023-05-06T11:09:29.3996946Z new_result = optimized_model_iter_fn(model_copy, example_inputs)
2023-05-06T11:09:29.3998022Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn
2023-05-06T11:09:29.3998548Z return fn(*args, **kwargs)
2023-05-06T11:09:29.3999141Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations
2023-05-06T11:09:29.3999774Z self.model_iter_fn(mod, inputs, collect_outputs=False)
2023-05-06T11:09:29.4000427Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass
2023-05-06T11:09:29.4000965Z return mod(*inputs)
2023-05-06T11:09:29.4001847Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
2023-05-06T11:09:29.4002802Z return self._call_impl(*args, **kwargs)
2023-05-06T11:09:29.4003672Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
2023-05-06T11:09:29.4004335Z return forward_call(*args, **kwargs)
2023-05-06T11:09:29.4005286Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward
2023-05-06T11:09:29.4005907Z outputs = self.longformer(
2023-05-06T11:09:29.4006783Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
2023-05-06T11:09:29.4007391Z return self._call_impl(*args, **kwargs)
2023-05-06T11:09:29.4008313Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
2023-05-06T11:09:29.4008981Z return forward_call(*args, **kwargs)
2023-05-06T11:09:29.4009571Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward
2023-05-06T11:09:29.4010216Z encoder_outputs = self.encoder(
2023-05-06T11:09:29.4010872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
2023-05-06T11:09:29.4011242Z return self._call_impl(*args, **kwargs)
2023-05-06T11:09:29.4011745Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
2023-05-06T11:09:29.4012152Z return forward_call(*args, **kwargs)
2023-05-06T11:09:29.4012789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward
2023-05-06T11:09:29.4013218Z is_global_attn = is_index_global_attn.flatten().any().item()
2023-05-06T11:09:29.4013834Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1326, in <resume in forward>
2023-05-06T11:09:29.4014220Z layer_outputs = layer_module(
2023-05-06T11:09:29.4014726Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
2023-05-06T11:09:29.4015096Z return self._call_impl(*args, **kwargs)
2023-05-06T11:09:29.4015595Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
2023-05-06T11:09:29.4015951Z return forward_call(*args, **kwargs)
2023-05-06T11:09:29.4016589Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors
2023-05-06T11:09:29.4016975Z return callback(frame, cache_size, hooks, frame_state)
2023-05-06T11:09:29.4017495Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame
2023-05-06T11:09:29.4017889Z result = inner_convert(frame, cache_size, hooks, frame_state)
2023-05-06T11:09:29.4018410Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn
2023-05-06T11:09:29.4018744Z return fn(*args, **kwargs)
2023-05-06T11:09:29.4019250Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert
2023-05-06T11:09:29.4019604Z return _compile(
2023-05-06T11:09:29.4020068Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-06T11:09:29.4020426Z r = func(*args, **kwargs)
2023-05-06T11:09:29.4020908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile
2023-05-06T11:09:29.4021290Z out_code = transform_code_object(code, transform)
2023-05-06T11:09:29.4021861Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
2023-05-06T11:09:29.4022449Z transformations(instructions, code_options)
2023-05-06T11:09:29.4022980Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform
2023-05-06T11:09:29.4023467Z tracer.run()
2023-05-06T11:09:29.4023933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run
2023-05-06T11:09:29.4024259Z super().run()
2023-05-06T11:09:29.4024725Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run
2023-05-06T11:09:29.4025036Z and self.step()
2023-05-06T11:09:29.4025509Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step
2023-05-06T11:09:29.4025861Z getattr(self, inst.opname)(inst)
2023-05-06T11:09:29.4026514Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE
2023-05-06T11:09:29.4026880Z self.output.compile_subgraph(
2023-05-06T11:09:29.4027408Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph
2023-05-06T11:09:29.4027833Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
2023-05-06T11:09:29.4028191Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
2023-05-06T11:09:29.4028490Z return func(*args, **kwds)
2023-05-06T11:09:29.4029009Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph
2023-05-06T11:09:29.4029406Z compiled_fn = self.call_user_compiler(gm)
2023-05-06T11:09:29.4029896Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-06T11:09:29.4030301Z r = func(*args, **kwargs)
2023-05-06T11:09:29.4030823Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler
2023-05-06T11:09:29.4031273Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
2023-05-06T11:09:29.4031845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler
2023-05-06T11:09:29.4032243Z compiled_fn = compiler_fn(gm, self.example_inputs())
2023-05-06T11:09:29.4032790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper
2023-05-06T11:09:29.4033158Z compiled_gm = compiler_fn(gm, example_inputs)
2023-05-06T11:09:29.4033672Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor
2023-05-06T11:09:29.4034027Z return compile_fx(*args, **kwargs)
2023-05-06T11:09:29.4034517Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx
2023-05-06T11:09:29.4034859Z return aot_autograd(
2023-05-06T11:09:29.4035357Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn
2023-05-06T11:09:29.4035754Z cg = aot_module_simplified(gm, example_inputs, **kwargs)
2023-05-06T11:09:29.4036304Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified
2023-05-06T11:09:29.4036979Z compiled_fn = create_aot_dispatcher_function(
2023-05-06T11:09:29.4037541Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-06T11:09:29.4037864Z r = func(*args, **kwargs)
2023-05-06T11:09:29.4038408Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function
2023-05-06T11:09:29.4038882Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
2023-05-06T11:09:29.4039476Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe
2023-05-06T11:09:29.4040078Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
2023-05-06T11:09:29.4040747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base
2023-05-06T11:09:29.4041194Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
2023-05-06T11:09:29.4041775Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base
2023-05-06T11:09:29.4042192Z compiled_fw = compiler(fw_module, adjusted_flat_args)
2023-05-06T11:09:29.4042845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-06T11:09:29.4043183Z r = func(*args, **kwargs)
2023-05-06T11:09:29.4043674Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base
2023-05-06T11:09:29.4044102Z return inner_compile(
2023-05-06T11:09:29.4044610Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper
2023-05-06T11:09:29.4045003Z inner_compiled_fn = compiler_fn(gm, example_inputs)
2023-05-06T11:09:29.4045515Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner
2023-05-06T11:09:29.4045829Z return fn(*args, **kwargs)
2023-05-06T11:09:29.4046141Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
2023-05-06T11:09:29.4046442Z return func(*args, **kwds)
2023-05-06T11:09:29.4046928Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 210, in compile_fx_inner
2023-05-06T11:09:29.4047283Z graph.run(*example_inputs)
2023-05-06T11:09:29.4047765Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper
2023-05-06T11:09:29.4048099Z r = func(*args, **kwargs)
2023-05-06T11:09:29.4048547Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 249, in run
2023-05-06T11:09:29.4048878Z return super().run(*args)
2023-05-06T11:09:29.4049342Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run
2023-05-06T11:09:29.4049673Z self.env[node] = self.run_node(node)
2023-05-06T11:09:29.4050196Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 488, in run_node
2023-05-06T11:09:29.4050539Z result = super().run_node(n)
2023-05-06T11:09:29.4051006Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node
2023-05-06T11:09:29.4051392Z return getattr(self, n.op)(n.target, args, kwargs)
2023-05-06T11:09:29.4051908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 392, in call_function
2023-05-06T11:09:29.4052321Z raise LoweringException(e, target, args, kwargs).with_traceback(
2023-05-06T11:09:29.4052851Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 389, in call_function
2023-05-06T11:09:29.4053212Z out = lowerings[target](*args, **kwargs)
2023-05-06T11:09:29.4053705Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 228, in wrapped
2023-05-06T11:09:29.4054050Z out = decomp_fn(*args, **kwargs)
2023-05-06T11:09:29.4054550Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 4036, in sym_stride
2023-05-06T11:09:29.4054888Z return a.get_stride()[dim]
2023-05-06T11:09:29.4055413Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3823, in __getattr__
2023-05-06T11:09:29.4055739Z fn = getattr(self.data, name)
2023-05-06T11:09:29.4056167Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
2023-05-06T11:09:29.4056799Z LoweringException: AttributeError: 'View' object has no attribute 'get_stride'
2023-05-06T11:09:29.4057098Z target: aten.sym_stride
2023-05-06T11:09:29.4057329Z args[0]: TensorBox(
2023-05-06T11:09:29.4057536Z View(
2023-05-06T11:09:29.4057712Z View(
2023-05-06T11:09:29.4057964Z PermuteView(data=PermuteView(data=View(
2023-05-06T11:09:29.4058222Z StorageBox(
2023-05-06T11:09:29.4058418Z Pointwise(
2023-05-06T11:09:29.4058673Z 'cuda',
2023-05-06T11:09:29.4058887Z torch.float16,
2023-05-06T11:09:29.4059105Z def inner_fn(index):
2023-05-06T11:09:29.4059335Z i0, i1, i2 = index
2023-05-06T11:09:29.4059693Z tmp0 = ops.load(buf1, i2 + 768 * i1 + 768 * i0 * s0)
2023-05-06T11:09:29.4059987Z tmp1 = ops.load(arg1_1, i2)
2023-05-06T11:09:29.4060260Z tmp2 = tmp0 + tmp1
2023-05-06T11:09:29.4060562Z tmp3 = ops.constant(8.0, torch.float16)
2023-05-06T11:09:29.4060973Z tmp4 = tmp2 / tmp3
2023-05-06T11:09:29.4061212Z return tmp4
2023-05-06T11:09:29.4061441Z ,
2023-05-06T11:09:29.4061658Z ranges=[4096, s0, 768],
2023-05-06T11:09:29.4061877Z origin_node=div,
2023-05-06T11:09:29.4062197Z origins={add, div}
2023-05-06T11:09:29.4062544Z )
2023-05-06T11:09:29.4062834Z ),
2023-05-06T11:09:29.4063103Z size=(4096, s0, 12, 64),
2023-05-06T11:09:29.4063560Z reindex=lambda i0, i1, i2, i3: [i0, i1, 64*i2 + i3],
2023-05-06T11:09:29.4063818Z origins={add, div, view_6}
2023-05-06T11:09:29.4064068Z ), dims=[1, 0, 2, 3]), dims=[0, 2, 1, 3]),
2023-05-06T11:09:29.4064317Z size=(12*s0, 4096, 64),
2023-05-06T11:09:29.4064631Z reindex=lambda i0, i1, i2: [ModularIndexing(i0, 12, s0), ModularIndexing(i0, 1, 12), i1, i2],
2023-05-06T11:09:29.4064941Z origins={view_8}
2023-05-06T11:09:29.4065145Z ),
2023-05-06T11:09:29.4065336Z size=(12*s0, 8, 512, 64),
2023-05-06T11:09:29.4065597Z reindex=lambda i0, i1, i2, i3: [i0, 512*i1 + i2, i3],
2023-05-06T11:09:29.4065847Z origins={view_10}
2023-05-06T11:09:29.4066048Z )
2023-05-06T11:09:29.4066214Z )
2023-05-06T11:09:29.4066398Z args[1]: 1
2023-05-06T11:09:29.4066524Z
2023-05-06T11:09:29.4066531Z
2023-05-06T11:09:29.4066697Z You can suppress this exception and fall back to eager by setting:
2023-05-06T11:09:29.4066975Z import torch._dynamo
2023-05-06T11:09:29.4067255Z torch._dynamo.config.suppress_errors = True
2023-05-06T11:09:29.4067433Z
2023-05-06T11:09:29.4067600Z TorchDynamo optimized model failed to run because of following error
2023-05-06T11:09:29.4067877Z fail_to_run
```
full logs https://ossci-raw-job-status.s3.amazonaws.com/log/13280812757
Possibly related to https://github.com/pytorch/pytorch/pull/100115
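For orientation, the IR dump above corresponds, in eager terms, to a stride query on a chain of views; a rough restatement (with the dynamic size `s0` fixed to 2 purely for illustration — that value is an assumption, not taken from the benchmark) looks like:
```python
# Rough eager restatement of the View/PermuteView chain in the error above;
# s0 = 2 is an arbitrary stand-in for the dynamic dimension.
import torch

s0 = 2
t = torch.randn(4096, s0, 768)                  # the Pointwise buffer, ranges=[4096, s0, 768]
t = t.view(4096, s0, 12, 64)                    # view_6
t = t.permute(1, 0, 2, 3).permute(0, 2, 1, 3)   # the two PermuteViews
t = t.reshape(12 * s0, 4096, 64)                # view_8
t = t.reshape(12 * s0, 8, 512, 64)              # view_10
print(t.stride(1))                              # the aten.sym_stride(..., 1) that fails to lower
```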
cc @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @soumith
### Versions
master
| 0 |
2,690 | 100,807 |
[torch.compile] returns output with WRONG SHAPE after `cat_slice_cat`
|
triaged, inductor_pattern_match
|
### 🐛 Describe the bug
`torch.compile` returns output with WRONG SHAPE after `cat_slice_cat`
```py
import torch
torch.manual_seed(420)
width = 16
height = 16
channels = 3
batch_size = 2
x = torch.randn(batch_size, channels, height, width)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
cat_output = torch.cat([x, x[:, :5]], dim=1)
final_output = torch.cat([cat_output, cat_output[:, :-3]], dim=1)
return final_output
func = Model()
with torch.no_grad():
func.train(False)
jit_func = torch.compile(func)
res1 = func(x) # without jit
res2 = jit_func(x)
print(res1.shape) # torch.Size([2, 9, 16, 16])
print(res2.shape) # torch.Size([2, 6, 16, 16])
torch.testing.assert_allclose(res1, res2)
# AssertionError: The values for attribute 'shape' do not match: torch.Size([2, 9, 16, 16]) != torch.Size([2, 6, 16, 16]).
```
I found that this model triggers the `cat_slice_cat` optimization, which appears to be the cause.
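For reference, the expected channel count follows from plain eager shape arithmetic (same shapes as in the snippet above):
```python
# Eager shape arithmetic for the snippet above: x has 3 channels, so x[:, :5]
# is clamped to 3 channels -> first cat gives 3 + 3 = 6; cat_output[:, :-3]
# keeps 3 of those 6 -> second cat gives 6 + 3 = 9 channels.
import torch

x = torch.randn(2, 3, 16, 16)
cat_output = torch.cat([x, x[:, :5]], dim=1)                        # (2, 6, 16, 16)
final_output = torch.cat([cat_output, cat_output[:, :-3]], dim=1)   # (2, 9, 16, 16)
print(cat_output.shape, final_output.shape)
```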
### Versions
<details>
<summary>Click to expand</summary>
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230503+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230503+cu118
[pip3] torchaudio==2.1.0.dev20230503+cu118
[pip3] torchvision==0.16.0.dev20230503+cu118
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230503+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230503+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230503+cu118 pypi_0 pypi
```
</details>
| 0 |
2,691 | 100,804 |
Wrong type for `get_lr` inside lr_scheduler.pyi
|
module: optimizer, module: typing, triaged, actionable, module: LrScheduler
|
### 🐛 Describe the bug
`lr_scheduler.pyi` declares `def get_lr(self) -> float: ...`, but the implementation returns `List[float]`:
https://github.com/pytorch/pytorch/blob/31f311a816c026bbfca622d6121d6a7fab44260d/torch/optim/lr_scheduler.pyi#L19
https://github.com/pytorch/pytorch/blob/31f311a816c026bbfca622d6121d6a7fab44260d/torch/optim/lr_scheduler.py#L394
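A minimal sketch of the corrected annotation, assuming the stub should simply mirror the implementation's return type (class name and body abbreviated here; only the signature matters):
```python
# torch/optim/lr_scheduler.pyi (sketch): get_lr returns one LR per param group
from typing import List

class _LRScheduler:
    def get_lr(self) -> List[float]: ...
```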
### Versions
Current master branch
cc @vincentqb @jbschlosser @albanD @janeyx99 @ezyang @malfet @rgommers @xuzhao9 @gramster
| 2 |
2,692 | 100,801 |
There is a performance drop because we have not yet implemented the batching rule for aten::native_dropout_backward
|
triaged, actionable, module: functorch
|
### 🐛 Describe the bug
This occurs on Google Colab when using vmap. Unlike #97425, it does not involve torch.compile.
Example: https://colab.research.google.com/drive/1g7pqIrVJkr8uB0wb9nR18Ak_rHO01xO0?usp=sharing
```
/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py:303: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::native_dropout_backward. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py:303: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::_thnn_fused_lstm_cell_backward. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
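The Colab link above has the full model; a minimal sketch of the kind of call that can hit this fallback — per-sample gradients via `vmap` over a module containing dropout — might look like the following (the model and shapes are assumptions for illustration, not the notebook's):
```python
# Minimal per-sample-gradient sketch; the dropout backward inside grad() is the
# op the warning says has no batching rule yet, so vmap takes the slow fallback.
import torch
from torch.func import functional_call, grad, vmap

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Dropout(p=0.5))
model.train()  # dropout must be active for native_dropout_backward to appear
params = dict(model.named_parameters())

def loss_fn(params, x):
    return functional_call(model, params, (x,)).sum()

per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0))(params, torch.randn(4, 8))
```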
### Versions
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 5 2023, 14:15:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchdata==0.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 1 |
2,693 | 100,796 |
[Quant][pt2e] Failed to run pt2e flow on LLaMA
|
oncall: quantization, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
When trying to enable the pt2e flow on transformer models, taking `LLaMA` as an example, we hit the issue that these models cannot produce an FX graph via the `dynamo.export` API.
To reproduce this issue: please download the model from [here](https://huggingface.co/decapoda-research/llama-7b-hf), rename the downloaded folder from `llama-7b-hf` to `llama-7b`, and then run the script below.
```
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM, LlamaTokenizer
import torch._dynamo as torchdynamo
import torch._inductor
import copy
prompt_pool = {
"32": "Once upon a time, there existed a little girl who liked to have adventures. She wanted to go to places and meet new people, and have fun",
"512": "It is done, and submitted. You can play 'Survival of the Tastiest' on Android, and on the web. Playing on the web works, but you have to simulate multiple touch for table moving and that can be a bit confusing. There is a lot I'd like to talk about. I will go through every topic, insted of making the typical what went right/wrong list. Concept Working over the theme was probably one of the hardest tasks which I had to face. Originally, I had an idea of what kind of game I wanted to develop, gameplay wise - something with a lot of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident that I could fit any theme around it. In the end, the problem with a theme like 'Evolution' in a game is that evolution is unassisted. It happens through several seemingly random mutations over time, with the most apt permutation surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game? In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it's not evolution anymore - it's the equivalent of intelligent design, the fable invented by creationists to combat the idea of evolution. Being agnostic and a Pastafarian, that's not something that rubbed me the right way. Hence, my biggest dillema when deciding what to create was not with what I wanted to create, but with what I did not. I didn't want to create an 'intelligent design' simulator and wrongly call it evolution. This is a problem, of course, every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I'd say the only real solution was through the use of artificial selection, somehow. So far, I have not seen any entry using this at its core gameplay. Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out. My initial idea was to create something where humanity tried to evolve to a next level",
"1024": "It is done, and submitted. You can play 'Survival of the Tastiest' on Android, and on the web. Playing on the web works, but you have to simulate multiple touch for table moving and that can be a bit confusing. There is a lot I'd like to talk about. I will go through every topic, insted of making the typical what went right/wrong list. Concept Working over the theme was probably one of the hardest tasks which I had to face. Originally, I had an idea of what kind of game I wanted to develop, gameplay wise - something with a lot of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident that I could fit any theme around it. In the end, the problem with a theme like 'Evolution' in a game is that evolution is unassisted. It happens through several seemingly random mutations over time, with the most apt permutation surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game? In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it's not evolution anymore - it's the equivalent of intelligent design, the fable invented by creationists to combat the idea of evolution. Being agnostic and a Pastafarian, that's not something that rubbed me the right way. Hence, my biggest dillema when deciding what to create was not with what I wanted to create, but with what I did not. I didn't want to create an 'intelligent design' simulator and wrongly call it evolution. This is a problem, of course, every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I'd say the only real solution was through the use of artificial selection, somehow. So far, I haven't seen any entry using this at its core gameplay. Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out. My initial idea was to create something where humanity tried to evolve to a next level, but had some kind of foe trying to stop them from doing so. I kind of had this image of human souls flying in space towards a monolith or a space baby (all based in 2001: A Space Odyssey of course) but I couldn't think of compelling (read: serious) mechanics for that. Borgs were my next inspiration, as their whole hypothesis fit pretty well into the evolution theme. But how to make it work? Are you the borg, or fighting the Borg? The third and final idea came to me through my girlfriend, who somehow gave me the idea of making something about the evolution of Pasta. The more I thought about it the more it sounded like it would work, so I decided to go with it. Conversations with my inspiring co-worker Roushey (who also created the 'Mechanical Underdogs' signature logo for my intros) further matured the concept, as it involved into the idea of having individual pieces of pasta flying around and trying to evolve until they became all-powerful. A secondary idea here was that the game would work to explain how the Flying Spaghetti Monster came to exist - by evolving from a normal dinner table. So the idea evolved more or less into this: you are sitting a table. You have your own plate, with is your 'base'. There are 5 other guests at the table, each with their own plate. Your plate can spawn little pieces of pasta. 
You do so by 'ordering' them through a menu. Some pastas are better than others; some are faster, some are stronger. They have varying 'costs', which are debited from your credits (you start with a number of credits). Once spawned, your pastas start flying around. Their instinct is to fly to other plates, in order to conquer them (the objective of the game is having your pasta conquer all the plates on the table). But they are really autonomous, so after being spawned, you have no control over your pasta (think DotA or LoL creeps). Your pasta doesn't like other people's pasta, so if they meet, they shoot sauce at each other until one dies. You get credits for other pastas your own pasta kill. Once a pasta is in vicinity of a plate",
"2017": "It is done, and submitted. You can play 'Survival of the Tastiest' on Android, and on the web. Playing on the web works, but you have to simulate multiple touch for table moving and that can be a bit confusing. There is a lot I'd like to talk about. I will go through every topic, insted of making the typical what went right/wrong list. Concept Working over the theme was probably one of the hardest tasks which I had to face. Originally, I had an idea of what kind of game I wanted to develop, gameplay wise - something with a lot of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident that I could fit any theme around it. In the end, the problem with a theme like 'Evolution' in a game is that evolution is unassisted. It happens through several seemingly random mutations over time, with the most apt permutation surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game? In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it's not evolution anymore - it's the equivalent of intelligent design, the fable invented by creationists to combat the idea of evolution. Being agnostic and a Pastafarian, that's not something that rubbed me the right way. Hence, my biggest dillema when deciding what to create was not with what I wanted to create, but with what I did not. I didn't want to create an 'intelligent design' simulator and wrongly call it evolution. This is a problem, of course, every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I'd say the only real solution was through the use of artificial selection, somehow. So far, I have not seen any entry using this at its core gameplay. Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out. My initial idea was to create something where humanity tried to evolve to a next level but had some kind of foe trying to stop them from doing so. I kind of had this image of human souls flying in space towards a monolith or a space baby (all based in 2001: A Space Odyssey of course) but I couldn't think of compelling (read: serious) mechanics for that. Borgs were my next inspiration, as their whole hypothesis fit pretty well into the evolution theme. But how to make it work? Are you the borg, or fighting the Borg? The third and final idea came to me through my girlfriend, who somehow gave me the idea of making something about the evolution of Pasta. The more I thought about it the more it sounded like it would work, so I decided to go with it. Conversations with my inspiring co-worker Roushey (who also created the 'Mechanical Underdogs' signature logo for my intros) further matured the concept, as it involved into the idea of having individual pieces of pasta flying around and trying to evolve until they became all-powerful. A secondary idea here was that the game would work to explain how the Flying Spaghetti Monster came to exist - by evolving from a normal dinner table. So the idea evolved more or less into this: you are sitting a table. You have your own plate, with is your 'base'. There are 5 other guests at the table, each with their own plate. Your plate can spawn little pieces of pasta. 
You do so by 'ordering' them through a menu. Some pastas are better than others; some are faster, some are stronger. They have varying 'costs', which are debited from your credits (you start with a number of credits). Once spawned, your pastas start flying around. Their instinct is to fly to other plates, in order to conquer them (the objective of the game is having your pasta conquer all the plates on the table). But they are really autonomous, so after being spawned, you have no control over your pasta (think DotA or LoL creeps). Your pasta doesn't like other people's pasta, so if they meet, they shoot sauce at each other until one dies. You get credits for other pastas your own pasta kill. Once a pasta is in the vicinity of a plate, it starts conquering it for its team. It takes around 10 seconds for a plate to be conquered; less if more pasta from the same team are around. If pasta from other team are around, though, they get locked down in their attempt, unable to conquer the plate, until one of them die (think Battlefield's standard 'Conquest' mode). You get points every second for every plate you own. Over time, the concept also evolved to use an Italian bistro as its main scenario. Carlos, Carlos' Bistro's founder and owner Setup No major changes were made from my work setup. I used FDT and Starling creating an Adobe AIR (ActionScript) project, all tools or frameworks I already had some knowledge with. One big change for me was that I livestreamed my work through a twitch.tv account. This was a new thing for me. As recommended by Roushey, I used a program called XSplit and I got to say, it is pretty amazing. It made the livestream pretty effortless and the features are awesome, even for the free version. It was great to have some of my friends watch me, and then interact with them and random people through chat. It was also good knowing that I was also recording a local version of the files, so I could make a timelapse video later. Knowing the video was being recorded also made me a lot more self-conscious about my computer use, as if someone was watching over my shoulder. It made me realize that sometimes I spend too much time in seemingly inane tasks (I ended up wasting the longest time just to get some text alignment the way I wanted - it'll probably drive someone crazy if they watch it) and that I do way too many typos where writing code. I pretty much spend half of the time writing a line and the other half fixing the crazy characters in it. My own stream was probably boring to watch since I was coding for the most time. But livestreaming is one of the cool things to do as a spectator too. It was great seeing other people working - I had a few tabs opened on my second monitor all the time. It's actually a bit sad, because if I could, I could have spent the whole weekend just watching other people working! But I had to do my own work, so I'd only do it once in a while, when resting for a bit. Design Although I wanted some simple, low-fi, high-contrast kind of design, I ended up going with somewhat realistic (vector) art. I think it worked very well, fitting the mood of the game, but I also went overboard. For example: to know the state of a plate (who owns it, who's conquering it and how much time they have left before conquering it, which pasta units are in the queue, etc), you have to look at the plate's bill. The problem I realized when doing some tests is that people never look at the bill! They think it's some kind of prop, so they never actually read its details. 
Plus, if you're zoomed out too much, you can't actually read it, so it's hard to know what's going on with the game until you zoom in to the area of a specific plate. One other solution that didn't turn out to be as perfect as I thought was how to indicate who a plate base belongs to. In the game, that's indicated by the plate's decoration - its color denotes the team owner. But it's something that fits so well into the design that people never realized it, until they were told about it. In the end, the idea of going with a full physical metaphor is one that should be done with care. Things that are very important risk becoming background noise, unless the player knows its importance. Originally, I wanted to avoid any kind of heads-up display in my game. In the end, I ended up adding it at the bottom to indicate your credits and bases owned, as well as the hideous out-of-place-and-still-not-obvious 'Call Waiter' button. But in hindsight, I should have gone with a simple HUD from the start, especially one that indicated each team's colors and general state of the game without the need for zooming in and out. Development Development went fast. But not fast enough. Even though I worked around 32+ hours for this Ludum Dare, the biggest problem that I had to face in the end was overscoping. I had too much planned,",
"2048": "It is done, and submitted. You can play 'Survival of the Tastiest' on Android, and on the web. Playing on the web works, but you have to simulate multiple touch for table moving and that can be a bit confusing. There is a lot I'd like to talk about. I will go through every topic, insted of making the typical what went right/wrong list. Concept Working over the theme was probably one of the hardest tasks which I had to face. Originally, I had an idea of what kind of game I wanted to develop, gameplay wise - something with a lot of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident that I could fit any theme around it. In the end, the problem with a theme like 'Evolution' in a game is that evolution is unassisted. It happens through several seemingly random mutations over time, with the most apt permutation surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game? In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it's not evolution anymore - it's the equivalent of intelligent design, the fable invented by creationists to combat the idea of evolution. Being agnostic and a Pastafarian, that's not something that rubbed me the right way. Hence, my biggest dillema when deciding what to create was not with what I wanted to create, but with what I did not. I didn't want to create an 'intelligent design' simulator and wrongly call it evolution. This is a problem, of course, every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I'd say the only real solution was through the use of artificial selection, somehow. So far, I have not seen any entry using this at its core gameplay. Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out. My initial idea was to create something where humanity tried to evolve to a next level but had some kind of foe trying to stop them from doing so. I kind of had this image of human souls flying in space towards a monolith or a space baby (all based in 2001: A Space Odyssey of course) but I couldn't think of compelling (read: serious) mechanics for that. Borgs were my next inspiration, as their whole hypothesis fit pretty well into the evolution theme. But how to make it work? Are you the borg, or fighting the Borg? The third and final idea came to me through my girlfriend, who somehow gave me the idea of making something about the evolution of Pasta. The more I thought about it the more it sounded like it would work, so I decided to go with it. Conversations with my inspiring co-worker Roushey (who also created the 'Mechanical Underdogs' signature logo for my intros) further matured the concept, as it involved into the idea of having individual pieces of pasta flying around and trying to evolve until they became all-powerful. A secondary idea here was that the game would work to explain how the Flying Spaghetti Monster came to exist - by evolving from a normal dinner table. So the idea evolved more or less into this: you are sitting a table. You have your own plate, with is your 'base'. There are 5 other guests at the table, each with their own plate. Your plate can spawn little pieces of pasta. 
You do so by 'ordering' them through a menu. Some pastas are better than others; some are faster, some are stronger. They have varying 'costs', which are debited from your credits (you start with a number of credits). Once spawned, your pastas start flying around. Their instinct is to fly to other plates, in order to conquer them (the objective of the game is having your pasta conquer all the plates on the table). But they are really autonomous, so after being spawned, you have no control over your pasta (think DotA or LoL creeps). Your pasta doesn't like other people's pasta, so if they meet, they shoot sauce at each other until one dies. You get credits for other pastas your own pasta kill. Once a pasta is in the vicinity of a plate, it starts conquering it for its team. It takes around 10 seconds for a plate to be conquered; less if more pasta from the same team are around. If pasta from other team are around, though, they get locked down in their attempt, unable to conquer the plate, until one of them die (think Battlefield's standard 'Conquest' mode). You get points every second for every plate you own. Over time, the concept also evolved to use an Italian bistro as its main scenario. Carlos, Carlos' Bistro's founder and owner Setup No major changes were made from my work setup. I used FDT and Starling creating an Adobe AIR (ActionScript) project, all tools or frameworks I already had some knowledge with. One big change for me was that I livestreamed my work through a twitch.tv account. This was a new thing for me. As recommended by Roushey, I used a program called XSplit and I got to say, it is pretty amazing. It made the livestream pretty effortless and the features are awesome, even for the free version. It was great to have some of my friends watch me, and then interact with them and random people through chat. It was also good knowing that I was also recording a local version of the files, so I could make a timelapse video later. Knowing the video was being recorded also made me a lot more self-conscious about my computer use, as if someone was watching over my shoulder. It made me realize that sometimes I spend too much time in seemingly inane tasks (I ended up wasting the longest time just to get some text alignment the way I wanted - it'll probably drive someone crazy if they watch it) and that I do way too many typos where writing code. I pretty much spend half of the time writing a line and the other half fixing the crazy characters in it. My own stream was probably boring to watch since I was coding for the most time. But livestreaming is one of the cool things to do as a spectator too. It was great seeing other people working - I had a few tabs opened on my second monitor all the time. It's actually a bit sad, because if I could, I could have spent the whole weekend just watching other people working! But I had to do my own work, so I'd only do it once in a while, when resting for a bit. Design Although I wanted some simple, low-fi, high-contrast kind of design, I ended up going with somewhat realistic (vector) art. I think it worked very well, fitting the mood of the game, but I also went overboard. For example: to know the state of a plate (who owns it, who's conquering it and how much time they have left before conquering it, which pasta units are in the queue, etc), you have to look at the plate's bill. The problem I realized when doing some tests is that people never look at the bill! They think it's some kind of prop, so they never actually read its details. 
Plus, if you're zoomed out too much, you can't actually read it, so it's hard to know what's going on with the game until you zoom in to the area of a specific plate. One other solution that didn't turn out to be as perfect as I thought was how to indicate who a plate base belongs to. In the game, that's indicated by the plate's decoration - its color denotes the team owner. But it's something that fits so well into the design that people never realized it, until they were told about it. In the end, the idea of going with a full physical metaphor is one that should be done with care. Things that are very important risk becoming background noise, unless the player knows its importance. Originally, I wanted to avoid any kind of heads-up display in my game. In the end, I ended up adding it at the bottom to indicate your credits and bases owned, as well as the hideous out-of-place-and-still-not-obvious 'Call Waiter' button. But in hindsight, I should have gone with a simple HUD from the start, especially one that indicated each team's colors and general state of the game without the need for zooming in and out. Development Development went fast. But not fast enough. Even though I worked around 32+ hours for this Ludum Dare, the biggest problem that I had to face in the end was overscoping. I had too much planned, and couldn't get it all done. Content-wise, I had several kinds of pasta planned - Wikipedia is just amazing in that regard,"
}
def test_Llama():
model = LlamaForCausalLM.from_pretrained("llama-7b", low_cpu_mem_usage=True, torch_dtype=torch.float32)
tokenizer = LlamaTokenizer.from_pretrained("llama-7b")
model = model.eval().to(torch.device("cpu")).to(memory_format=torch.channels_last)
prompt = prompt_pool["32"]
batch_size = 1
prompt = [prompt] * batch_size
generate_kwargs = dict(do_sample=False, temperature=0.9, num_beams=4)
with torch.no_grad():
# Test for Dynamo export
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(torch.device("cpu"))
example_inputs = (input_ids, )
model.generate, guards = torchdynamo.export(model.generate, *copy.deepcopy(example_inputs), aten_graph=True)
# # Test for norm dynamo path
# model.generate = torch.compile(model.generate, backend='inductor', dynamic=True)
# input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(torch.device("cpu"))
# _ = model.generate(input_ids, max_new_tokens=32, **generate_kwargs)
return
if __name__ == "__main__":
test_Llama()
```
For details of the error message, please refer to this [gist](https://gist.github.com/leslie-fang-intel/ed30004497125df538747440a230d3bf).
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git3362c1d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.19.5-1.el7.elrepo.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Genuine CPU
Stepping: 10
CPU MHz: 2900.698
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-27
NUMA node1 CPU(s): 28-55
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] torch==2.1.0a0+git3362c1d
[pip3] torchvision==0.15.0a0+5850f37
[conda] mkl 2023.0.0 intel_25398 intel
[conda] mkl-include 2023.0.0 <pip>
[conda] mkl-include 2023.0.0 intel_25398 intel
[conda] mkl-service 2.4.0 py38h3605609_14 intel
[conda] mkl-static 2023.0.0 <pip>
[conda] mkl_fft 1.3.1 py38hcab1719_22 intel
[conda] mkl_random 1.2.2 py38hbf47bc3_22 intel
[conda] mkl_umath 0.1.1 py38hf66a691_32 intel
[conda] numpy 1.23.1 <pip>
[conda] numpy 1.22.3 py38hf0956d0_5 intel
[conda] numpy-base 1.22.3 py38h45c9ace_5 intel
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @SherlockNoMad
| 20 |
2,694 | 100,795 |
Quickstart notebook fails to train properly with ROCm
|
module: rocm, triaged
|
### 🐛 Describe the bug
When running the notebook from [Quickstart](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html) using ROCm with a Radeon RX 6900 XT on Ubuntu Server 22.04, I get 0% accuracy, while switching to the CPU gives the expected ~45%.
Here is a non-notebook reproducer I used:
```python
from typing import Any
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
if (not torch.cuda.is_available()):
print("No CUDA device available")
exit(-1)
device = "cuda"
print(torch.cuda.get_device_properties(device))
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
)
# Download test data from open datasets.
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor(),
)
batch_size = 64
# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataloader:
print(f"Shape of X [N, C, H, W]: {X.shape}")
print(f"Shape of y: {y.shape} {y.dtype}")
break
# Define model
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
print(model)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
def train(dataloader: DataLoader[Any], model: NeuralNetwork, loss_fn: CrossEntropyLoss, optimizer: SGD):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), (batch + 1) * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
epochs = 1
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
test(test_dataloader, model, loss_fn)
print("Done!")
```
Here is the output for CUDA run:
```
_CudaDeviceProperties(name='AMD Radeon RX 6900 XT', major=10, minor=3, total_memory=16368MB, multi_processor_count=40)
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
NeuralNetwork(
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear_relu_stack): Sequential(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): ReLU()
(2): Linear(in_features=512, out_features=512, bias=True)
(3): ReLU()
(4): Linear(in_features=512, out_features=10, bias=True)
)
)
Epoch 1
-------------------------------
loss: 0.022123 [ 64/60000]
loss: 0.000000 [ 6464/60000]
loss: 1.000000 [12864/60000]
loss: 1.000000 [19264/60000]
loss: 0.000000 [25664/60000]
loss: 0.000000 [32064/60000]
loss: 0.000000 [38464/60000]
loss: 0.000000 [44864/60000]
loss: 0.000000 [51264/60000]
loss: 0.249406 [57664/60000]
Test Error:
Accuracy: 0.0%, Avg loss: -0.438122
Done!
```
And here is the output when I switch to CPU (excluding CUDA device logging):
```
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
NeuralNetwork(
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear_relu_stack): Sequential(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): ReLU()
(2): Linear(in_features=512, out_features=512, bias=True)
(3): ReLU()
(4): Linear(in_features=512, out_features=10, bias=True)
)
)
Epoch 1
-------------------------------
loss: 2.296403 [ 64/60000]
loss: 2.294028 [ 6464/60000]
loss: 2.271100 [12864/60000]
loss: 2.277084 [19264/60000]
loss: 2.251947 [25664/60000]
loss: 2.234447 [32064/60000]
loss: 2.227726 [38464/60000]
loss: 2.205537 [44864/60000]
loss: 2.204656 [51264/60000]
loss: 2.163422 [57664/60000]
Test Error:
Accuracy: 45.8%, Avg loss: 2.168922
Done!
```
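A minimal sanity check that could narrow this down (a sketch, assuming the same ROCm device is visible as `cuda`; this was not part of the original reproducer):

```python
# Sketch: compare a single Linear layer on the CPU vs. the ROCm device.
# If this already diverges, the problem is in the GEMM path rather than in the training loop.
import torch

torch.manual_seed(0)
lin = torch.nn.Linear(28 * 28, 512)
x = torch.randn(64, 28 * 28)

cpu_out = lin(x)
gpu_out = lin.to("cuda")(x.to("cuda")).cpu()
print(torch.allclose(cpu_out, gpu_out, atol=1e-4), (cpu_out - gpu_out).abs().max().item())
```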
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230502+rocm5.4.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.4.22803-474e8620
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 6900 XT
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.4.22803
MIOpen runtime version: 2.19.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Stepping: 0
BogoMIPS: 7399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (2 instances)
L1i cache: 128 KiB (2 instances)
L2 cache: 1 MiB (2 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] open-clip-torch==2.19.0
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.1.0.dev20230502+rocm5.4.2
[pip3] torchaudio==2.1.0.dev20230504+rocm5.4.2
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.0.0rc0
[pip3] torchsde==0.2.5
[pip3] torchvision==0.16.0.dev20230504+rocm5.4.2
[conda] Could not collect
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport
| 2 |
2,695 | 100,792 |
inductor cpp wrapper: crash when disable lowmem_dropout
|
triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
inductor cpp wrapper: crash when `lowmem_dropout` is disabled
Reproducing unit test:
```
import torch
import torch._dynamo
import torch._inductor.config as config
dropout = torch.nn.Dropout(p=0.1, inplace=False)
@config.patch(cpp_wrapper=True, lowmem_dropout=False)
@torch._dynamo.optimize("inductor")
def fn(a):
return dropout(a)
x = torch.rand([64, 4, 128, 128])
out = fn(x)
```
Error message:
```
Traceback (most recent call last):
File "test_dropout_cpp.py", line 12, in <module>
out = fn(x)
File "/home/bzheng/anaconda3/envs/torchdynamo/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/eval_frame.py", line 282, in _fn
return fn(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/eval_frame.py", line 435, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/convert_frame.py", line 519, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/convert_frame.py", line 122, in _fn
return fn(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert
return _compile(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/convert_frame.py", line 425, in _compile
out_code = transform_code_object(code, transform)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/convert_frame.py", line 410, in transform
tracer.run()
File "/home/bzheng/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 2010, in run
super().run()
File "/home/bzheng/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 703, in run
and self.step()
File "/home/bzheng/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 663, in step
getattr(self, inst.opname)(inst)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE
self.output.compile_subgraph(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/output_graph.py", line 698, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/home/bzheng/anaconda3/envs/torchdynamo/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/output_graph.py", line 800, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/output_graph.py", line 859, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/output_graph.py", line 855, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/bzheng/workspace/pytorch/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/backends/inductor.py", line 9, in inductor
return compile_fx(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_inductor/compile_fx.py", line 628, in compile_fx
return compile_fx_with_cpp_wrapper(
File "/home/bzheng/workspace/pytorch/torch/_inductor/compile_fx.py", line 533, in compile_fx_with_cpp_wrapper
return compile_fx(
File "/home/bzheng/workspace/pytorch/torch/_inductor/compile_fx.py", line 728, in compile_fx
return aot_autograd(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/backends/common.py", line 56, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/bzheng/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/bzheng/workspace/pytorch/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/bzheng/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base
compiled_fw = compiler(fw_module, adjusted_flat_args)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base
return inner_compile(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/bzheng/workspace/pytorch/torch/_inductor/debug.py", line 220, in inner
return fn(*args, **kwargs)
File "/home/bzheng/anaconda3/envs/torchdynamo/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/bzheng/workspace/pytorch/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner
compiled_fn = graph.compile_to_fn()
File "/home/bzheng/workspace/pytorch/torch/_inductor/graph.py", line 717, in compile_to_fn
return self.compile_to_module().call
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 177, in time_wrapper
r = func(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/torch/_inductor/graph.py", line 695, in compile_to_module
mod = PyCodeCache.load(code, linemap=linemap)
File "/home/bzheng/workspace/pytorch/torch/_inductor/codecache.py", line 706, in load
return cls.load_by_key_path(key, path, linemap)
File "/home/bzheng/workspace/pytorch/torch/_inductor/codecache.py", line 721, in load_by_key_path
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_bzheng/kc/ckckgql4t3stalfakr53hkwhau5nn4ee4m2aeh5arfaszir6iqud.py", line 60, in <module>
module = load_inline(
File "/home/bzheng/workspace/pytorch/torch/utils/cpp_extension.py", line 1433, in load_inline
return _jit_compile(
File "/home/bzheng/workspace/pytorch/torch/utils/cpp_extension.py", line 1508, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/bzheng/workspace/pytorch/torch/utils/cpp_extension.py", line 1623, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/bzheng/workspace/pytorch/torch/utils/cpp_extension.py", line 1910, in _run_ninja_build
raise RuntimeError(message) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Error building extension 'inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll': [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -I/home/bzheng/workspace/tests/-I/home/bzheng/workspace/pytorch/torch/include -I/home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include -I/home/bzheng/workspace/pytorch/torch/include/TH -I/home/bzheng/workspace/pytorch/torch/include/THC -I/home/bzheng/anaconda3/envs/torchdynamo/include/python3.8 -isystem /home/bzheng/workspace/pytorch/torch/include -isystem /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include -isystem /home/bzheng/workspace/pytorch/torch/include/TH -isystem /home/bzheng/workspace/pytorch/torch/include/THC -isystem /home/bzheng/anaconda3/envs/torchdynamo/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -DCPU_CAPABILITY_AVX512 -D C10_USING_CUSTOM_GENERATED_MACROS -c /home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp -o main.o
FAILED: main.o
c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -I/home/bzheng/workspace/tests/-I/home/bzheng/workspace/pytorch/torch/include -I/home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include -I/home/bzheng/workspace/pytorch/torch/include/TH -I/home/bzheng/workspace/pytorch/torch/include/THC -I/home/bzheng/anaconda3/envs/torchdynamo/include/python3.8 -isystem /home/bzheng/workspace/pytorch/torch/include -isystem /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include -isystem /home/bzheng/workspace/pytorch/torch/include/TH -isystem /home/bzheng/workspace/pytorch/torch/include/THC -isystem /home/bzheng/anaconda3/envs/torchdynamo/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -DCPU_CAPABILITY_AVX512 -D C10_USING_CUSTOM_GENERATED_MACROS -c /home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp -o main.o
In file included from /home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:4:
/tmp/torchinductor_bzheng/5b/c5bcubr6yrbvnx73gevjlm24khhax3e2tzjnnvb47oxio6qm462z.h:110: warning: ignoring ‘#pragma unroll ’ [-Wunknown-pragmas]
110 | #pragma unroll
|
/tmp/torchinductor_bzheng/5b/c5bcubr6yrbvnx73gevjlm24khhax3e2tzjnnvb47oxio6qm462z.h:139: warning: ignoring ‘#pragma unroll ’ [-Wunknown-pragmas]
139 | #pragma unroll
|
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp: In function ‘std::vector<at::Tensor> inductor_entry_cpp(const std::vector<at::Tensor>&)’:
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:47:5: error: ‘aten’ was not declared in this scope
47 | aten.bernoulli_(buf1, 0.9)
| ^~~~
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:47:5: note: suggested alternatives:
In file included from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/ir/ir.h:18,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/function_impl.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/method.h:7,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/object.h:6,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/module.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/samplers.h:8,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/datasets.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/all.h:9,
from /home/bzheng/workspace/pytorch/torch/include/torch/extension.h:4,
from /home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:1:
/home/bzheng/workspace/pytorch/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’
353 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/bzheng/workspace/pytorch/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::namespaces::aten’
353 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
In file included from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/function_impl.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/method.h:7,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/object.h:6,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/api/module.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/samplers.h:8,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data/datasets.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/data.h:4,
from /home/bzheng/workspace/pytorch/torch/include/torch/csrc/api/include/torch/all.h:9,
from /home/bzheng/workspace/pytorch/torch/include/torch/extension.h:4,
from /home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:1:
/home/bzheng/workspace/pytorch/torch/include/torch/csrc/jit/ir/ir.h:78:11: note: ‘torch::jit::aten’
78 | namespace aten {
| ^~~~
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:48:56: error: expected primary-expression before ‘,’ token
48 | auto buf3 = at::empty_strided({64L, 4L, 128L, 128L}, {65536L, 16384L, 128L, 1L}, at::device(at::kCPU).dtype(at::kFloat));
| ^
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:48:58: error: expected primary-expression before ‘{’ token
48 | auto buf3 = at::empty_strided({64L, 4L, 128L, 128L}, {65536L, 16384L, 128L, 1L}, at::device(at::kCPU).dtype(at::kFloat));
| ^
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:48:84: error: expected primary-expression before ‘,’ token
48 | auto buf3 = at::empty_strided({64L, 4L, 128L, 128L}, {65536L, 16384L, 128L, 1L}, at::device(at::kCPU).dtype(at::kFloat));
| ^
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:49:83: error: ‘buf3’ was not declared in this scope; did you mean ‘buf1’?
49 | kernel_cpp_1((float*)(arg0_1.data_ptr()), (float*)(buf1.data_ptr()), (float*)(buf3.data_ptr()));
| ^~~~
| buf1
/home/bzheng/.cache/torch_extensions/py38_cpu/inline_extension_c5c7mli22y5hpvkheu57bx7ihqg2ps7kavapfifdyxr56lex4jll/main.cpp:51:17: error: could not convert ‘{buf3}’ from ‘<brace-enclosed initializer list>’ to ‘std::vector<at::Tensor>’
51 | return {buf3};
| ^
| |
| <brace-enclosed initializer list>
ninja: build stopped: subcommand failed.
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
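For comparison, here is a sketch of the same graph with `lowmem_dropout` left at its default (my assumption is that the default decomposition avoids the `aten.bernoulli_` fallback, so this variant should compile; `fn_ok` is just an illustrative name):

```python
# Sketch: same dropout graph, but with lowmem_dropout at its default so dropout is
# decomposed instead of falling back to aten.bernoulli_ inside the C++ wrapper.
import torch
import torch._dynamo
import torch._inductor.config as config

dropout = torch.nn.Dropout(p=0.1, inplace=False)

@config.patch(cpp_wrapper=True)
@torch._dynamo.optimize("inductor")
def fn_ok(a):
    return dropout(a)

print(fn_ok(torch.rand([64, 4, 128, 128])).shape)
```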
### Versions
PyTorch version: 2.1.0a0+git8e69879
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.1.0-1ubuntu1~20.04) 11.1.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:18) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 160
On-line CPU(s) list: 0-159
Thread(s) per core: 2
Core(s) per socket: 40
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2300.000
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
L1d cache: 3.8 MiB
L1i cache: 2.5 MiB
L2 cache: 100 MiB
L3 cache: 120 MiB
NUMA node0 CPU(s): 0-39,80-119
NUMA node1 CPU(s): 40-79,120-159
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.6
[pip3] dalle2-pytorch==1.10.5
[pip3] ema-pytorch==0.1.1
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] intel-extension-for-pytorch==2.0.0+git42d5ba6
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] open-clip-torch==2.16.0
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.1.5
[pip3] torch==2.1.0a0+git675029a
[pip3] torch-fidelity==0.3.0
[pip3] torch-scatter==2.1.1+pt20cpu
[pip3] torch-sparse==0.6.17+pt20cpu
[pip3] torch-struct==0.5
[pip3] torchaudio==2.0.0a0+a8f4e97
[pip3] torchdata==0.7.0a0+f1283eb
[pip3] torchmetrics==0.11.0
[pip3] torchrec-nightly==2023.4.1
[pip3] torchtext==0.15.0a0+46e7eef
[pip3] torchvision==0.15.0a0+98c5815
[pip3] torchx==0.3.0
[pip3] vector-quantize-pytorch==0.10.14
[conda] bert-pytorch 0.0.1a4 <pip>
[conda] clip-anytorch 2.5.2 <pip>
[conda] CoCa-pytorch 0.0.6 <pip>
[conda] dalle2-pytorch 1.10.5 <pip>
[conda] ema-pytorch 0.1.1 <pip>
[conda] functorch 1.14.0a0+b71aa0b <pip>
[conda] intel-extension-for-pytorch 2.0.0+git42d5ba6 <pip>
[conda] intel-extension-for-pytorch 2.1.0+git42d5ba6 <pip>
[conda] mkl 2023.0.0 intel_25398 intel
[conda] mkl-include 2023.0.0 intel_25398 intel
[conda] mkl-include 2022.2.1 <pip>
[conda] mkl-service 2.4.0 py38h3605609_14 intel
[conda] mkl-static 2022.2.1 <pip>
[conda] mkl_fft 1.3.1 py38hcab1719_22 intel
[conda] mkl_random 1.2.2 py38hbf47bc3_22 intel
[conda] mkl_umath 0.1.1 py38hf66a691_32 intel
[conda] numpy 1.22.3 py38hf0956d0_5 intel
[conda] numpy 1.23.1 <pip>
[conda] numpy-base 1.22.3 py38h45c9ace_5 intel
[conda] open-clip-torch 2.16.0 <pip>
[conda] pytorch-transformers 1.2.0 <pip>
[conda] pytorch-warmup 0.1.1 <pip>
[conda] rotary-embedding-torch 0.1.5 <pip>
[conda] torch 2.1.0a0+git675029a <pip>
[conda] torch-fidelity 0.3.0 <pip>
[conda] torch-scatter 2.1.1+pt20cpu <pip>
[conda] torch-sparse 0.6.17+pt20cpu <pip>
[conda] torch-struct 0.5 <pip>
[conda] torchaudio 2.0.0a0+a8f4e97 <pip>
[conda] torchdata 0.7.0a0+f1283eb <pip>
[conda] torchmetrics 0.11.0 <pip>
[conda] torchrec-nightly 2023.4.1 <pip>
[conda] torchtext 0.15.0a0+46e7eef <pip>
[conda] torchvision 0.15.0a0+98c5815 <pip>
[conda] torchx 0.3.0 <pip>
[conda] vector-quantize-pytorch 0.10.14 <pip>
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 0 |
2,696 | 100,790 |
ONNX Opset 16 GridSample Does Not Support 5D Volumetric Input Tensor
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
[#92209](https://github.com/pytorch/pytorch/issues/92209)
I need to use the GridSample operator in my code. I can export it with ONNX in PyTorch 1.12.0 successfully, but inference then fails because 5D input is not supported. I checked the release notes for PyTorch 2.0 and found that, as of now, the latest PyTorch ONNX exporter still does not support 5D input, so I would like to ask:
When will ONNX GridSample support for 5D volumetric input be implemented?
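For illustration, here is a minimal sketch of the 5D case in question (my own example; `VolumetricSampler` and the output file name are made up). The export, or the resulting model at inference time, is expected to fail because GridSample in opset 16 only handles 4D input:

```python
# Sketch: volumetric grid_sample that works in eager mode but is not covered by
# GridSample in ONNX opset 16, which only accepts 4D inputs.
import torch
import torch.nn.functional as F

class VolumetricSampler(torch.nn.Module):
    def forward(self, x, grid):
        # x: (N, C, D, H, W), grid: (N, D_out, H_out, W_out, 3)
        return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(1, 2, 4, 4, 4)
grid = torch.rand(1, 4, 4, 4, 3) * 2 - 1
torch.onnx.export(VolumetricSampler(), (x, grid), "grid_sample_5d.onnx", opset_version=16)
```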
### Alternatives
_No response_
### Additional context
_No response_
| 8 |
2,697 | 100,785 |
compile torch2.0 in debug mode
|
triaged, topic: build
|
### 🐛 Describe the bug
Hello, I want to compile torch 2.0 in debug mode, but I ran into some problems. Here is the build output:
```
[1/1817] Performing build step for 'nccl_external'
FAILED: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build nccl/lib/libnccl_static.a /pytorch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build
/pytorch/build/nccl/lib/libnccl_static.a
cd /pytorch/third_party/nccl/nccl && make -j64 -l64 CXX=/mnt/lustre/share/gcc-10.1.0/bin/c++ CUDA_HOME=/share/cuda-11.2-cudnn8.1
NVCC=/share/cuda-11.2-cudnn8.1/bin/nvcc NVCC_GENCODE=-gencode=arch=compute_80,code=sm_80 BUILDDIR=/pytorch/build/nccl VERBOSE=0 && /anaconda3/envs/torch_compile/bin/cmake -E touch /pytorch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build
make -C src build BUILDDIR=/pytorch/build/nccl
NVCC_GENCODE is -gencode=arch=compute_80,code=sm_80
make[1]: Entering directory `
/pytorch/third_party/nccl/nccl/src'
NVCC_GENCODE is -gencode=arch=compute_80,code=sm_80
make[2]: Entering directory `
/pytorch/third_party/nccl/nccl/src/collectives/device'
In file included from /share/cuda-11.2-cudnn8.1/bin/crt/link.stub:129:
/tmp/tmpxft_00012f69_00000000-4_devlink.fatbin.c:33985860: internal compiler error: Segmentation fault
33985860 | ".text\n");
|
0xc6221f crash_signal
../../gcc/toplev.c:328
0xe9dc53 build_string(int, char const*)
../../gcc/tree.c:2229
0x71134b cp_parser_string_literal
../../gcc/cp/parser.c:4274
0x722151 cp_parser_asm_definition
../../gcc/cp/parser.c:20275
0x722151 cp_parser_block_declaration
../../gcc/cp/parser.c:13513
0x746c82 cp_parser_declaration
../../gcc/cp/parser.c:13433
0x7473a4 cp_parser_translation_unit
../../gcc/cp/parser.c:4734
0x7473a4 c_parse_file()
../../gcc/cp/parser.c:43975
0x80f82b c_common_parse_file()
../../gcc/c-family/c-opts.c:1190
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
make[2]: *** [/pytorch/build/nccl/obj/collectives/device/devlink.o] Error 1
make[2]: Leaving directory `/pytorch/third_party/nccl/nccl/src/collectives/device'
make[1]: *** [/pytorch/build/nccl/obj/collectives/device/colldevice.a] Error 2
make[1]: Leaving directory `
/pytorch/third_party/nccl/nccl/src'
make: *** [src.build] Error 2
ninja: build stopped: subcommand failed.
Building wheel torch-2.1.0a0+gitd4bf76c
-- Building version 2.1.0a0+gitd4bf76c
cmake --build . --target install --config Debug -- -j 1
```
I have tried setting the env variable MAX_JOBS=1, but this problem still occurs. Do you have any suggestions?
### Versions
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.1.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 0 |
2,698 | 100,784 |
[CUDA RPC] Incorrect results of GPU Tensor transferring using RPC when parallelized with other GPU programs
|
oncall: distributed, module: cuda
|
### 🐛 Describe the bug
### Issue Summary
This is a simplified version of [Issue](https://github.com/pytorch/pytorch/issues/100725#issue-1697886172).
I transfer a GPU tensor between two GPUs using PyTorch RPC, and find incorrect results when parallelized with other GPU programs, such as a pure computation task in the backend using PyTorch. When the other GPU task is stopped, the results become right.
This suggests that other GPU programs could be interfering with the CUDA Support RPC, leading to incorrect message transfers.
Could anyone provide some insight or guidance on how to prevent other GPU programs from interfering with the CUDA Support RPC and ensure correct message transfers? Any help or suggestions would be greatly appreciated.
### Steps to Reproduce
1. Run a pure computation task in the background using PyTorch.
```python
import torch
import time
from multiprocessing import Pool, set_start_method
def run_on_single_gpu(device):
a = torch.randn(20000,20000).cuda(device)
b = torch.randn(20000,20000).cuda(device)
ta = a
tb = b
while True:
a = ta
b = tb
a = torch.sin(a)
b = torch.sin(b)
a = torch.cos(a)
b = torch.cos(b)
a = torch.tan(a)
b = torch.tan(b)
a = torch.exp(a)
b = torch.exp(b)
a = torch.log(a)
b = torch.log(b)
b = torch.matmul(a, b)
#time.sleep(0.000005)
if __name__ == '__main__':
set_start_method('spawn')
print('start running')
num_gpus = torch.cuda.device_count()
pool = Pool(processes=num_gpus)
pool.map(run_on_single_gpu, range(num_gpus))
pool.close()
pool.join()
```
2. Run a PyTorch program which transfers GPU tensors using RPC.
```python
import os
import time
from functools import wraps
import torch
import torch.nn as nn
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
from torch.distributed.rpc import RRef
class Identity(nn.Module):
def __init__(self, device):
super(Identity, self).__init__()
self.device = device
def forward(self, x_rref):
out = x_rref.to_here().to(self.device)
# return out.cpu()
return out
class DistP2P(nn.Module):
"""
Assemble two parts as an nn.Module and define pipelining logic
"""
def __init__(self, workers):
super(DistP2P, self).__init__()
# Put the first part of the ResNet50 on workers[0]
self.p1_rref = rpc.remote(
workers[0],
Identity,
args = ("cuda:0",)
)
# Put the second part of the ResNet50 on workers[1]
self.p2_rref = rpc.remote(
workers[1],
Identity,
args = ("cuda:1",)
)
def forward(self, xs):
out_futures = []
x_rref = RRef(xs)
y_rref = self.p1_rref.remote().forward(x_rref)
z_fut = self.p2_rref.rpc_async().forward(y_rref)
out_futures.append(z_fut)
# collect and cat all output tensors into one tensor.
return torch.cat(torch.futures.wait_all(out_futures))
#########################################################
# Run RPC Processes #
#########################################################
def run_master():
# put the two model parts on worker1 and worker2 respectively
p2p = DistP2P(["worker1", "worker2"])
torch.manual_seed(1234)
inputs = torch.randn(1, 12)
for i in range(5):
print(f"Processing batch {i}")
outputs = p2p(inputs)
print(f"outputs: {outputs}")
def run_worker(rank, world_size):
os.environ['MASTER_ADDR'] = '11.218.124.179'
os.environ['MASTER_PORT'] = '29000'
# Higher timeout is added to accommodate for kernel compilation time in case of ROCm.
options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=256, rpc_timeout=300)
if rank == 0:
options.set_device_map("master", {rank: 0})
options.set_device_map("worker1", {rank: 0})
options.set_device_map("worker2", {rank: 1})
rpc.init_rpc(
"master",
rank=rank,
world_size=world_size,
rpc_backend_options=options
)
run_master()
else:
options.set_device_map("master", {rank-1: 0})
options.set_device_map("worker1", {rank-1: 0})
options.set_device_map("worker2", {rank-1: 1})
rpc.init_rpc(
f"worker{rank}",
rank=rank,
world_size=world_size,
rpc_backend_options=options
)
pass
# block until all rpcs finish
rpc.shutdown()
if __name__=="__main__":
world_size = 3
mp.spawn(run_worker, args=(world_size,), nprocs=world_size, join=True)
```
3. Observe the log of the RPC task and note that the output values are inconsistent across iterations even though the transferred GPU tensor is fixed.
<img width="801" alt="image" src="https://user-images.githubusercontent.com/34735292/236594605-aa5ae209-4bbe-4f57-99dc-e7a70c584897.png">
4. Close the pure computation task and only run the RPC task. Observe that the output values become consistent.
<img width="835" alt="image" src="https://user-images.githubusercontent.com/34735292/236594630-ab874480-de37-4e49-b552-1c40abd2d4cc.png">
### Versions
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Tencent tlinux 2.2 (Final) (x86_64)
GCC version: (GCC) 8.3.0
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.17
Python version: 3.8.12 (default, Jun 13 2022, 19:37:57) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-4.14.105-1-tlinux3-0013-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: A100-SXM4-40GB
GPU 1: A100-SXM4-40GB
GPU 2: A100-SXM4-40GB
GPU 3: A100-SXM4-40GB
GPU 4: A100-SXM4-40GB
GPU 5: A100-SXM4-40GB
GPU 6: A100-SXM4-40GB
GPU 7: A100-SXM4-40GB
Nvidia driver version: 450.156.00
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.5.0
/usr/lib64/libcudnn_adv_infer.so.8.5.0
/usr/lib64/libcudnn_adv_train.so.8.5.0
/usr/lib64/libcudnn_cnn_infer.so.8.5.0
/usr/lib64/libcudnn_cnn_train.so.8.5.0
/usr/lib64/libcudnn_ops_infer.so.8.5.0
/usr/lib64/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7K62 48-Core Processor
Stepping: 0
CPU MHz: 3301.340
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5190.52
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchpippy==0.1.0
[pip3] torchvision==0.12.0
[pip3] triton==2.0.0
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @ngimel
| 0 |
2,699 | 100,775 |
[torch.compile] returns NaN for `tensor.mul(big_number).softmax()`
|
triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
`torch.compile` returns NaN for `tensor.mul(big_number).softmax()`
```py
import torch
torch.manual_seed(420)
class Model(torch.nn.Module):
def __init__(self, input_size, hidden_size):
super(Model, self).__init__()
self.query = torch.nn.Linear(input_size, hidden_size)
self.key = torch.nn.Linear(input_size, hidden_size)
self.scale_factor = torch.nn.Parameter(torch.tensor([-1.7197e+14]))
def forward(self, x):
query = self.query(x)
key = self.key(x)
scores = torch.matmul(query, key.transpose(-2, -1))
scores_scaled = scores.mul(self.scale_factor)
t = scores_scaled.softmax(dim=-1)
return t
input_size = 32
hidden_size = 1
x = torch.randn(10, input_size)
func = Model(input_size, hidden_size)
jit_func = torch.compile(func)
res1 = func(x) # without jit
res2 = jit_func(x)
print(res1)
# tensor([[0.], ...)
print(res2)
# tensor([[nan], ...)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230503+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230503+cu118
[pip3] torchaudio==2.1.0.dev20230503+cu118
[pip3] torchvision==0.16.0.dev20230503+cu118
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230503+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230503+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230503+cu118 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 3 |
2,700 | 100,764 |
[MPS] Unary ops yield wrong results if striding is different
|
high priority, triaged, module: regression, module: correctness (silent), module: mps
|
### 🐛 Describe the bug
Consider the following code
```python
x=torch.arange(4.0, device='mps').reshape(2, 2)
y=torch.empty(2, 2, device='mps').t()
print(torch.neg(x, out=y))
```
it will print
```
tensor([[-0., -2.],
[-1., -3.]], device='mps:0')
```
but should have printed
```
tensor([[-0., -1.],
[-2., -3.]], device='mps:0')
```
I guess the error stems from this check:
https://github.com/pytorch/pytorch/blob/c9593bc0e15b661dc90286eeaf503f31a3c9df78/aten/src/ATen/native/mps/operations/UnaryOps.mm#L84-L88
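A possible workaround sketch (my assumption, not a fix for the underlying check): compute the result into a temporary tensor and copy it into the strided output:

```python
# Sketch: avoid writing a unary op directly into a non-contiguous MPS output;
# compute into a temporary tensor and copy the result over instead.
import torch

x = torch.arange(4.0, device="mps").reshape(2, 2)
y = torch.empty(2, 2, device="mps").t()
y.copy_(torch.neg(x))
print(y)
```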
### Versions
2.0/nightly
cc @ezyang @gchanan @zou3519 @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev
| 2 |