Serial Number (int64) | Issue Number (int64) | Title (string) | Labels (string) | Body (string) | Comments (int64) |
---|---|---|---|---|---|
2,801 | 99,866 |
Logs output_code and inductor do not interact as expected
|
module: logging, triaged
|
### 🐛 Describe the bug
As discussed in https://github.com/pytorch/pytorch/issues/94788#issuecomment-1515441713, setting `inductor` does not generate the `output_code` artifact. This goes against the definition that `output_code` is an artifact owned by `inductor`. This issue led to the discussion in https://github.com/pytorch/pytorch/pull/99038 of wanting to log the output folder at two different levels, which signals that there's something a bit off about the whole thing. The better way of controlling this would be to log the output file and the code at two different verbosity levels.
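For reference, a minimal sketch of the two knobs involved, assuming the `torch._logging` interface discussed in #94788 (the exact interaction between them is what this issue is about):
```python
import logging
import torch._logging

# Requesting the artifact directly vs. enabling the inductor logger;
# per this issue, only the former currently emits the generated code.
torch._logging.set_logs(output_code=True)
torch._logging.set_logs(inductor=logging.DEBUG)
# Equivalent env vars: TORCH_LOGS="output_code" / TORCH_LOGS="+inductor"
```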
cc @mlazos @kurtamohler
### Versions
master
| 0 |
2,802 | 99,852 |
Slight numerical divergence between torch.compile and eager; shows up in practice on yolov3
|
triaged, oncall: pt2, module: pt2 accuracy
|
### 🐛 Describe the bug
This repro uses the new style accuracy repro infra from https://github.com/pytorch/pytorch/pull/99834 ; to run it, you will need to patch in the two PRs in that stack. This repro does not reproduce without the actual data.
```
import torch._inductor.overrides
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
from torch.fx.experimental.proxy_tensor import make_fx
# REPLACEABLE COMMENT FOR TESTING PURPOSES
# torch version: 2.0.0a0+gita27bd42
# torch cuda version: 11.4
# torch git version: a27bd42bb9ad39504fdd94ad38a5ad0346f1758b
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2021 NVIDIA Corporation
# Built on Sun_Aug_15_21:14:11_PDT_2021
# Cuda compilation tools, release 11.4, V11.4.120
# Build cuda_11.4.r11.4/compiler.30300941_0
# GPU Hardware Info:
# NVIDIA PG509-210 : 8
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, unsqueeze_1, unsqueeze_3, rsqrt, convolution):
var_mean = torch.ops.aten.var_mean.correction(convolution, [0, 2, 3], correction = 0, keepdim = True)
getitem_1 = var_mean[1]; var_mean = None
sub = torch.ops.aten.sub.Tensor(convolution, getitem_1); convolution = getitem_1 = None
mul = torch.ops.aten.mul.Tensor(sub, rsqrt); sub = rsqrt = None
mul_6 = torch.ops.aten.mul.Tensor(mul, unsqueeze_1); mul = unsqueeze_1 = None
add_4 = torch.ops.aten.add.Tensor(mul_6, unsqueeze_3); mul_6 = unsqueeze_3 = None
gt = torch.ops.aten.gt.Scalar(add_4, 0); add_4 = None
return (gt,)
import torch._dynamo.repro.after_aot
reader = torch._dynamo.repro.after_aot.InputReader(save_dir='/tmp/minifier-20230423')
buf0 = reader.storage('b3e108077e73f8bbdefbd419a1798700731646a1', 256, device=device(type='cuda', index=0))
t0 = reader.tensor(buf0, (64, 1, 1))
buf1 = reader.storage('6ee108072a73f8bb41fbd4197ff98700151646a1', 256, device=device(type='cuda', index=0))
t1 = reader.tensor(buf1, (64, 1, 1))
buf2 = reader.storage('65d17de8e97efedb72fa1a01d7f26cc32f948b59', 256, device=device(type='cuda', index=0))
t2 = reader.tensor(buf2, (1, 64, 1, 1))
buf3 = reader.storage('38c695755b4a84a70b07c240f092e2b293280811', 50331648, device=device(type='cuda', index=0))
t3 = reader.tensor(buf3, (4, 64, 192, 256))
args = [t0, t1, t2, t3]
mod = make_fx(Repro(), tracing_mode='symbolic')(*args)
from torch._inductor.compile_fx import compile_fx_inner
from torch._dynamo.debug_utils import same_two_models
compiled = compile_fx_inner(mod, args)
class AccuracyError(Exception):
pass
if not same_two_models(mod, compiled, args, only_fwd=True):
raise AccuracyError("Bad accuracy detected")
```
Meta employees only: to fetch the test data, run
```
cd /tmp && manifold getr ai_training_ftw/tree/ezyang/minifier-20230423
```
(or drop it into whatever directory you like and modify the `save_dir` in the repro script.)
I'm hesitant to say this is a bug per se, because the accuracy problem is induced by a `> 0` operation, which, if we are unlucky with the particular data we get, will be sensitive to epsilon perturbations. And there are a few places that could cause divergence; notably, it looks like there is an opportunity to introduce a fused multiply-add.
This is a bit annoying for accuracy repro extraction though, because the mask is different, and this difference could get amplified if the network is poorly designed. In the case of yolov3, even when this mask changes, the regular model passes accuracy E2E, so it's hard to say whether it's a problem. What is a good strategy for letting people know that the accuracy problem here is benign?
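As a toy illustration of the failure mode (not the yolov3 data; the epsilon here just stands in for an eager/compiled rounding difference such as an FMA contraction):
```python
import torch

# Values sitting near the threshold flip their `> 0` mask under an
# epsilon-sized perturbation, which then propagates through the network.
x = torch.tensor([1e-8, -1e-8, 0.5])
eps = 2e-8  # stand-in for the compile/eager numerical divergence
print(x > 0)          # tensor([ True, False,  True])
print((x - eps) > 0)  # tensor([False, False,  True])
```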
### Versions
master
cc @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @soumith
| 3 |
2,803 | 99,836 |
NTK notebook calculates wrong object - wrong output dimensions
|
triaged, module: functorch
|
### 📚 The doc issue
The notebook given in https://pytorch.org/functorch/stable/notebooks/neural_tangent_kernels.html does not calculate the NTK - the output depends on the size of the input tensors instead of the size of the network's input. The NTK should be a n_in x n_in x n_out x n_out matrix (https://en.wikipedia.org/wiki/Neural_tangent_kernel), where n_in/n_out are the sizes of the network's inputs/outputs.
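For concreteness, a shape check along the lines described above, using a stand-in MLP (the model and sizes here are assumptions, not the notebook's; the point is only that the empirical NTK should come out as an (n1, n2, n_out, n_out) tensor):
```python
import torch
from torch.func import functional_call, jacrev, vmap

net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3))
params = dict(net.named_parameters())

def f(p, x):
    # single-sample forward; output has shape (n_out,)
    return functional_call(net, p, (x.unsqueeze(0),)).squeeze(0)

x1, x2 = torch.randn(5, 4), torch.randn(7, 4)
j1 = vmap(jacrev(f), (None, 0))(params, x1)  # per-parameter Jacobians, batched over x1
j2 = vmap(jacrev(f), (None, 0))(params, x2)
ntk = sum(torch.einsum('Naf,Mbf->NMab', a.flatten(2), b.flatten(2))
          for a, b in zip(j1.values(), j2.values()))
print(ntk.shape)  # torch.Size([5, 7, 3, 3]) == (n1, n2, n_out, n_out)
```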
### Suggest a potential alternative/fix
_No response_
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 0 |
2,804 | 99,825 |
When backend is nccl, the distribution group type generated by PyTorch 2.0 should be ProcessGroupNCCL, but is ProcessGroup
|
oncall: distributed
|
### 🐛 Describe the bug
Hi, I have discovered an issue. The default process group type generated by PyTorch 2.0 is ProcessGroup, while in PyTorch 1.8 it is ProcessGroupNCCL. By debugging the code, I found that the return value of the following check differs between PyTorch 2.0 and 1.8.
Pytorch 2.0
```
import torch
issubclass(torch._C._distributed_c10d.ProcessGroupNCCL, torch._C._distributed_c10d.ProcessGroup)
False
```
Pytorch 1.8
```
import torch
issubclass(torch._C._distributed_c10d.ProcessGroupNCCL, torch._C._distributed_c10d.ProcessGroup)
True
```
The above issue causes problems in the function `_new_process_group_helper`:
```
if issubclass(type(backend_class), ProcessGroup):
pg = backend_class
break
```
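For anyone hitting the same check, a hedged workaround sketch (these are private APIs and may change): in 2.0 the NCCL implementation is attached to the `ProcessGroup` wrapper rather than being a subclass of it, so it can be fetched explicitly instead of relying on `issubclass`:
```python
import torch
import torch.distributed as dist

# Assumes the usual torchrun/env:// initialization.
dist.init_process_group(backend="nccl")
pg = dist.distributed_c10d._get_default_group()
nccl_backend = pg._get_backend(torch.device("cuda"))  # ProcessGroupNCCL instance
print(type(nccl_backend))
```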
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,805 | 99,821 |
Tracer cannot infer type of Seq2SeqLMOutput
|
oncall: jit
|
### Tracer cannot infer type of Seq2SeqLMOutput
An error occurred while tracing the model with torch.jit.trace.
The error message is:
RuntimeError: Tracer cannot infer type of Seq2SeqLMOutput
Here is my code:
```python
tokenizer = AutoTokenizer.from_pretrained('./outputs/model_files/')
model = AutoModelForSeq2SeqLM.from_pretrained('./outputs/model_files/')
device = torch.device("cpu")
model.to(device)
model.eval()
sample_sentence = "generate some numbers"
encoding = tokenizer(sample_sentence,
padding="max_length",
max_length=5,
return_tensors="pt",
return_attention_mask=True,
truncation=True)
input_ids = encoding.input_ids
attention_mask = encoding.attention_mask
decoder_input_ids = torch.ones(1,1, dtype=torch.int32) * model.config.decoder_start_token_id
traced_model = torch.jit.trace(model, (input_ids,attention_mask,decoder_input_ids),strict=False)
traced_model.save("./model.pt")
```
The full error message is:
```
D:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py:701: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
Traceback (most recent call last):
File "E:\Python\project\Chinese_Chat_T5_Base-main\convertModel.py", line 37, in <module>
traced_model = torch.jit.trace(model, (input_ids,attention_mask,decoder_input_ids),strict=False)
File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 759, in trace
return trace_module(
File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 976, in trace_module
module._c._create_method_from_trace(
RuntimeError: Tracer cannot infer type of Seq2SeqLMOutput(loss=None, logits=tensor([[[-8.0331, -0.6127, 1.7029, ..., -6.0205, -4.9355, -7.5521]]],
grad_fn=<UnsafeViewBackward0>), past_key_values=((tensor([[[[-4.1845e-01, -3.1748e+00, 3.5584e-01, 1.3317e-01, -4.8382e-01,
4.9041e-01, 1.2883e+00, 5.5251e-01, 2.3777e+00, 3.6629e-01,
-2.3793e-01, 1.6337e+00, 9.4133e-01, -1.0904e+00, -2.8644e+00,
-5.2565e-02, 2.9996e-01, -4.1858e-01, -7.8744e-01, -1.7734e+00,
-1.0728e+00, 5.5014e-01, -1.5405e+00, 2.7343e+00, 3.5340e+00,
-1.5999e-02, -7.7990e-01, 4.5489e-01, -2.4964e-01, -2.9343e-01,
7.0564e-01, 9.1929e-01, 3.4561e+00, -6.6381e-01, 8.5702e-01,
6.3156e-01, -7.5711e-01, 1.6548e+00, -8.5602e-01, -9.3094e-01,
9.1188e-02, -8.6472e-01, 6.4054e-01, 4.7034e-01, 3.4763e+00,
-1.0079e+00, 1.2279e-01, 1.5227e+00, 1.6583e-01, 9.4017e-01,
1.5735e+00, 3.4655e-01, -8.0972e-01, 9.2279e-01, 3.1652e-01,
-2.3178e+00, 5.2484e-02, 4.8382e-01, -1.7146e-01, 2.4539e+00,
.......
[-2.7458e-03, -4.8062e-02, -5.2608e-02, ..., -4.8220e-03,
5.0419e-02, 2.8005e-03]]]], grad_fn=<TransposeBackward0>))), decoder_hidden_states=None, decoder_attentions=None, cross_attentions=None, encoder_last_hidden_state=tensor([[[-0.0070, 0.1318, -0.0300, ..., 0.0244, -0.0696, 0.0580],
[-0.0274, 0.0240, -0.0552, ..., -0.0846, -0.0992, 0.0408],
[-0.0647, 0.0068, -0.0779, ..., 0.0064, 0.0316, 0.0111],
[-0.0445, -0.0067, -0.0273, ..., 0.0320, 0.0382, 0.0814],
[-0.0006, 0.0002, 0.0010, ..., -0.0002, 0.0009, -0.0009]]],
grad_fn=<MulBackward0>), encoder_hidden_states=None, encoder_attentions=None)
:Dictionary inputs to traced functions must have consistent type. Found Tensor and Tuple[Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor]]
```
The original model is available at: https://huggingface.co/mxmax/Chinese_Chat_T5_Base
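A commonly suggested workaround sketch for this class of error (an assumption about the transformers config behavior, not a confirmed fix for this particular model): have the model return plain tuples instead of the `Seq2SeqLMOutput` dataclass before tracing.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('./outputs/model_files/')
model = AutoModelForSeq2SeqLM.from_pretrained('./outputs/model_files/')
model.config.return_dict = False  # outputs become tuples of tensors, which the tracer can handle
model.eval()

enc = tokenizer("generate some numbers", padding="max_length", max_length=5,
                return_tensors="pt", return_attention_mask=True, truncation=True)
decoder_input_ids = torch.ones(1, 1, dtype=torch.int32) * model.config.decoder_start_token_id
traced = torch.jit.trace(model, (enc.input_ids, enc.attention_mask, decoder_input_ids), strict=False)
```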
### Versions
torch2.0
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
2,806 | 99,812 |
cuda.is_available() error
|
module: cuda, triaged
|
I am testing PyTorch on a server with an EPYC 7742 and 8×A100 40G GPUs. When I print torch.cuda.is_available() there is no output, and I can't even kill the Python process with Ctrl+C. How can I solve this without updating the driver version? I tried installing other CUDA versions such as 11.1 and 11.0.2, but still no result.
Hope your reply!
Thanks!
- PyTorch:
- How you installed PyTorch: conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
- OS: Ubuntu18.04
- PyTorch version:1.7.1
- Python version:3.8.3
- CUDA/cuDNN version: cuda11.0.3
- GPU models and configuration: 8*A100
- Nvidia Driver:450.51.06
cc @ngimel
| 1 |
2,807 | 99,807 |
AOTAutograd/Inductor file system cache
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
I was playing around with implementing a kernel using torch.compile(dynamic=True) to guarantee low memory usage (and also because I needed an unusual reduction, xor_sum, which wasn't in PyTorch proper.)
```
# Assumption: the snippet relies on `import torch` and on `prims` being bound to
# torch's private prims namespace (which provides xor_sum).
import torch
from torch import _prims as prims
# Use of torch.compile is mandatory for (1) good memory usage
# and (2) xor_sum implementation
@torch.compile(dynamic=True)
def kernel(x):
# The randint calls are carefully written to hit things we
# have lowerings for in inductor. Lack of unsigned 32-bit integer
# is a pain.
a = torch.randint(
-2**31, 2**31,
x.shape, device=x.device, dtype=torch.int32
).abs()
a = ((a % (2**31-1)) + 1).long()
b = torch.randint(
-2**31, 2**31,
x.shape, device=x.device, dtype=torch.int32
).abs().long()
# This is a standard shift-multiply universal hash family
# plus xor sum hash, using Philox to generate random numbers.
# Our Philox RNG is not deterministic across devices so
# don't use this for stable hashing.
#
# This assumes fixed length so you're also obligated to bucket
# by the length of tensor as well
return prims.xor_sum(
(a * x + b).int(),
[0]
)
```
On a warm Triton cache, this still takes a good 10s to compile the kernel the first time in any given process invocation. This is way too slow, especially since once I'm done working on the kernel, it probably will never change. A good file system cache would improve this situation quite a bit.
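A rough timing sketch of the cold-start cost being described (assumes the `kernel` above compiles as written, including the `prims` binding noted in its comments):
```python
import time
import torch

x = torch.randint(0, 2**31 - 1, (1 << 20,), device="cuda")

t0 = time.perf_counter()
kernel(x); torch.cuda.synchronize()
print("first call (dynamo + inductor compile):", time.perf_counter() - t0, "s")

t0 = time.perf_counter()
kernel(x); torch.cuda.synchronize()
print("subsequent call (in-process cache):", time.perf_counter() - t0, "s")
```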
### Versions
master
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 3 |
2,808 | 99,806 |
`cat` gradgrad tests failing
|
module: autograd, triaged, module: edge cases
|
## Issue description
Previously `cat` grad tests were surprisingly skipped here:
https://github.com/pytorch/pytorch/blob/e63c502baa4a6f2109749984be701e722b3b7232/torch/testing/_internal/common_utils.py#L4371-L4372
And the following tests are failing after removing the skip:
```
test/test_ops_gradients.py::TestBwdGradientsCPU::test_fn_gradgrad_cat_cpu_float64
test/test_ops_gradients.py::TestBwdGradientsCPU::test_fn_gradgrad_cat_cpu_complex128
```
```
# RuntimeError: The size of tensor a (25) must match the size of tensor b (0) at non-singleton dimension 0.
```
which is caused by the below input sample:
https://github.com/pytorch/pytorch/blob/6580b160d35a75d5ceebf376d55422376d0c0d2c/torch/testing/_internal/common_methods_invocations.py#L2043
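A standalone sketch of the kind of input involved (an empty tensor concatenated with a non-empty one; whether it hits the exact size-mismatch above depends on the linked OpInfo sample):
```python
import torch
from torch.autograd import gradgradcheck

a = torch.randn(0, 5, dtype=torch.float64, requires_grad=True)  # empty along the cat dim
b = torch.randn(5, 5, dtype=torch.float64, requires_grad=True)
gradgradcheck(lambda x, y: torch.cat([x, y], dim=0), (a, b))
```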
Added these tests to xfail in #99596
## Version
On master
cc @ezyang @gchanan @zou3519 @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 3 |
2,809 | 99,802 |
torch.multinomial() always returns [0] using MPS
|
triaged, module: mps
|
### 🐛 Describe the bug
Using MPS, torch.multinomial() always returns [0], even when index 0 has probability 0.
It works as expected on CPU:
```
import torch
In [1]: x = torch.tensor([0.5, 0.5], dtype=torch.float, device='cpu')
...: print(set(torch.multinomial(x, 1, True).item() for i in range(100)))
{0, 1}
In [2]: x = torch.tensor([0, 0.5], dtype=torch.float, device='cpu')
...: print(set(torch.multinomial(x, 1, True).item() for i in range(100)))
{1}
```
But on MPS, it always returns [0].
```
In [3]: x = torch.tensor([0.5, 0.5], dtype=torch.float, device='mps:1')
...: print(set(torch.multinomial(x, 1, True).item() for i in range(100)))
{0}
In [4]: x = torch.tensor([0, 0.5], dtype=torch.float, device='mps:1')
...: print(set(torch.multinomial(x, 1, True).item() for i in range(100)))
{0}
In [5]: torch.multinomial(x, 1, True)
Out[5]: tensor([0], device='mps:0')
```
I am actually quite new to torch, so please let me know if I misunderstood anything.
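A hedged interim workaround sketch until the MPS kernel is fixed: draw the samples on CPU and move the indices back to the original device.
```python
import torch

x = torch.tensor([0.0, 0.5], dtype=torch.float, device="mps")
idx = torch.multinomial(x.cpu(), 1, True).to(x.device)  # sample on CPU, move indices back
```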
### Versions
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.3 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.22.4
Libc version: N/A
Python version: 3.9.6 (default, Jul 7 2021, 12:22:14) [Clang 12.0.5 (clang-1205.0.22.9)] (64-bit runtime)
Python platform: macOS-12.6.3-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[conda] No relevant packages
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
2,810 | 101,073 |
Windows fatal exception: stack overflow while using pytorch for computing
|
triaged, module: functorch
|
Hi, I am guessing that the overflow is due to jacfwd: if I increase Nx, Ny, Nz I get the error (Windows fatal exception: stack overflow), and Nx, Ny, Nz determine the size of the Jacobian matrix. If Nx and Ny are small, like 10, the code works well.
However, the same code works fine on my friend's computer, which has the same PyTorch version and less memory.
My PyTorch is 2.0.0+cu117, but currently this is only running on CPU.
Thanks, @zou3519
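One hedged thing to try (untested against this exact script; `chunk_size` is an option of `torch.func.jacrev`, while `jacfwd` has no equivalent knob): compute the Jacobian in chunks so fewer rows are materialized at once, then check whether the Windows stack overflow still triggers. The full script follows.
```python
from torch.func import jacrev

# Drop-in alternative to `jacfwd(get_residual)(unknown)` from the script below:
J = jacrev(get_residual, chunk_size=64)(unknown)
```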
```
# -*- coding: utf-8 -*-
"""
"""
import torch
from torch.func import jacfwd
import numpy as np
import matplotlib.pyplot as plt
from timeit import default_timer as timer
#from functorch import jacfwd
#from functorch import jacrev
#from torch.autograd.functional import jacobian
from scipy.sparse.linalg import gmres
def get_capillary(swnew):
capillary = torch.div(1, swnew)
capillary[capillary == float("Inf")] = 200
capillary = 2*swnew
capillary = swnew
return capillary
def get_relaperm (slnew):
k0r1=0.6
L1=1.8
L2=1.8
E1=2.1
E2=2.1
T1=2.3
T2=2.3
krl=torch.div(k0r1*slnew**L1,slnew**L2+E1*(1-slnew)**T1)
krg=torch.div((1-slnew)**L2,(1-slnew)**L2+E2*slnew**T2)
return krl, krg
def get_relaperm_classic (swnew):
s_w_r = 0.2
s_o_r = 0.2
nw = 2
no = 2
krw_ = 1
kro_ = 1
krw = krw_*((swnew-s_w_r)/(1-s_w_r))**nw
kro = kro_*(((1-swnew)-s_o_r)/(1-s_o_r))**no
return krw, kro
def get_residual (unknown):
residual = torch.zeros(Np*Nx*Ny*Nz, requires_grad=False, dtype=torch.float64)
pre_o = unknown[::2]
pre_w = pre_o
sat_w = unknown[1::2]
sat_o = 1 - sat_w
pre_o_old = unknownold[::2]
pre_w_old = pre_o_old
sat_w_old = unknownold[1::2]
sat_o_old = 1 - sat_w_old
poro = poroini*(1+c_r*(pre_o-p_ref)+0.5*(c_r*(pre_o-p_ref))**2)
poroold = poroini*(1+c_r*(pre_o_old-p_ref)+0.5*(c_r*(pre_o_old-p_ref))**2)
Bo = Bo_ref/((1+c_o*(pre_o-p_ref)+0.5*(c_o*(pre_o-p_ref))**2))
Boold = Bo_ref/((1+c_o*(pre_o_old-p_ref)+0.5*(c_o*(pre_o_old-p_ref))**2))
Bw = Bw_ref/((1+c_w*(pre_w-p_ref)+0.5*(c_w*(pre_w-p_ref))**2))
Bwold = Bw_ref/((1+c_w*(pre_w_old-p_ref)+0.5*(c_w*(pre_w_old-p_ref))**2))
miu_o = miu_o_ref*(((1+c_o*(pre_o-p_ref)+0.5*(c_o*(pre_o-p_ref))**2))/((1+(c_o-upsilon_o)*(pre_o-p_ref)+0.5*((c_o-upsilon_o)*(pre_o-p_ref))**2)))
miu_w = miu_w_ref*(((1+c_w*(pre_w-p_ref)+0.5*(c_w*(pre_w-p_ref))**2))/((1+(c_w-upsilon_w)*(pre_w-p_ref)+0.5*((c_w-upsilon_w)*(pre_w-p_ref))**2)))
Accumulation_o = (1/C1)*(sat_o*poro/Bo - sat_o_old*poroold/Boold)*vol
Accumulation_w = (1/C1)*(sat_w*poro/Bw - sat_w_old*poroold/Bwold)*vol
residual[::2] += Accumulation_o
residual[1::2] += Accumulation_w
kro = get_relaperm (sat_w)[1]
krw = get_relaperm (sat_w)[0]
mobi_o = torch.div(kro, (Bo*miu_o))
mobi_w = torch.div(krw, (Bw*miu_w))
oterm = mobi_o/(mobi_o+mobi_w)*qp*dt
wterm = mobi_w/(mobi_o+mobi_w)*qp*dt+qi*dt
residual[::2] += oterm
residual[1::2] += wterm
'''
capillary = get_capillary(sat_l)
pre_l = pre_g - capillary
pre_l = pre_g
gravity_g = rho_g*g
gravity_l = rho_l*g
'''
for i in connection_index:
phi_pre_o = pre_o[connection_a[i]] - pre_o[connection_b[i]]
phi_pre_w = pre_w[connection_a[i]] - pre_w[connection_b[i]]
up_o = connection_a[i] if phi_pre_o >= 0 else connection_b[i]
up_w = connection_a[i] if phi_pre_w >= 0 else connection_b[i]
K_h = 2*K[connection_a[i]]*K[connection_b[i]] / (K[connection_a[i]] + K[connection_b[i]])
Tran_h = K_h*A[i]/d[i]
Tran_o = Tran_h*kro[up_o]/miu_o[up_o]/Bo[up_o]
Tran_w = Tran_h*krw[up_w]/miu_w[up_w]/Bw[up_w]
flux_o = Tran_o*phi_pre_o
flux_w = Tran_w*phi_pre_w
ind_a = 2*connection_a[i]
ind_b = 2*connection_b[i]
residual[ind_a] += C2*dt*flux_o
residual[ind_b] -= C2*dt*flux_o
ind_a += 1
ind_b += 1
residual[ind_a] += C2*dt*flux_w
residual[ind_b] -= C2*dt*flux_w
return residual
if __name__ == '__main__':
C1 = 5.615
C2 = 1.12712e-3
Nx = 25
Ny = 25
Nz = 1
Lx = 500
Ly = 100
Lz = 100
p_ref = 14.7
dx = Lx/Nx
dy = Ly/Ny
dz = Lz/Nz
vol = dx*dy*dz;
K = 100*torch.ones(Nx*Ny*Nz, requires_grad=True, dtype=torch.float64)
dt = 0.1
tf = 1
time = 0
alpha_chop = 0.5
alpha_grow = 2
dt_min = 0.1
Max_iter = 10
Tol_resi = 1e-7
g = 9.80665
Np= 2
c_r = 1e-6
c_o = 1e-4 #oil
c_w = 1e-6 #water
p_ref = 14.7
Bo_ref = 1
Bw_ref = 1
miu_o_ref = 1
miu_w_ref = 1
rho_o_ref = 53
rho_w_ref = 64
poroini = 0.2
upsilon_o = 0
upsilon_w = 0
#Well
qi = torch.zeros(Nx, Ny)
qp = torch.zeros(Nx, Ny)
#wilocax = torch.randint(0, Nx, (1,))
#wilocay = torch.randint(0, Ny, (1,))
wilocax = 0*torch.ones(1, dtype=torch.int64)
wilocay = 0*torch.ones(1, dtype=torch.int64)
qi[wilocax, wilocay] = -100
#wplocax = torch.randint(0, Nx, (1,))
#wplocay = torch.randint(0, Ny, (1,))
wplocax = 4*torch.ones(1,dtype=torch.int64)
wplocay = 0*torch.ones(1,dtype=torch.int64)
qp[wplocax, wplocay] = 10
qi=qi.reshape([-1,])
qp=qp.reshape([-1,])
#grids = torch.arange(0, Nx*Ny*Nz, requires_grad=False, dtype=torch.int32)
#grids = torch.reshape(grids,(Nx,Ny)).t()
connection_x = torch.arange(0, (Nx-1)*Ny*Nz, requires_grad=False, dtype=torch.int32)
connection_y = torch.arange(0, Nx*(Ny-1)*Nz, requires_grad=False, dtype=torch.int32)
connection_z = torch.arange(0, Nx*Ny*(Nz-1), requires_grad=False, dtype=torch.int32)
connection = torch.cat((connection_x, connection_y, connection_z), 0)
A_x = dy*dz*torch.ones(connection_x.size(dim=0), requires_grad=False, dtype=torch.int32)
A_y = dx*dz*torch.ones(connection_y.size(dim=0), requires_grad=False, dtype=torch.int32)
A_z = dx*dy*torch.ones(connection_z.size(dim=0), requires_grad=False, dtype=torch.int32)
A = torch.cat((A_x, A_y, A_z), 0)
d_x = dx*torch.ones(connection_x.size(dim=0), requires_grad=False, dtype=torch.int32)
d_y = dy*torch.ones(connection_y.size(dim=0), requires_grad=False, dtype=torch.int32)
d_z = dz*torch.ones(connection_z.size(dim=0), requires_grad=False, dtype=torch.int32)
d = torch.cat((d_x, d_y, d_z), 0)
connection_x_index = torch.arange(0, (Nx-1)*Ny*Nz, requires_grad=False, dtype=torch.int32)
connection_y_index = torch.arange((Nx-1)*Ny*Nz, (Nx-1)*Ny*Nz+Nx*(Ny-1)*Nz, requires_grad=False, dtype=torch.int32)
connection_z_index = torch.arange((Nx-1)*Ny*Nz+Nx*(Ny-1)*Nz, (Nx-1)*Ny*Nz+Nx*(Ny-1)*Nz+Nx*Ny*(Nz-1), requires_grad=False, dtype=torch.int32)
connection_index = torch.cat((connection_x_index, connection_y_index, connection_z_index), 0)
connection_xa = connection_x + torch.div(connection_x, Nx-1, rounding_mode='trunc')
connection_xb = connection_xa + 1
connection_ya = connection_y + Nx*torch.div(connection_y, Nx*(Ny-1), rounding_mode='trunc')
connection_yb = connection_ya + Nx
connection_za = connection_z
connection_zb = connection_za + Nx*Ny
connection_a = torch.cat((connection_xa, connection_ya, connection_za), 0)
connection_b = torch.cat((connection_xb, connection_yb, connection_zb), 0)
# IC
swnew = 0.3*torch.ones(Nx*Ny*Nz, requires_grad=True, dtype=torch.float64)
ponew = 6000*torch.ones(Nx*Ny*Nz, requires_grad=True, dtype=torch.float64)
pwnew = 6000*torch.ones(Nx*Ny*Nz, requires_grad=True, dtype=torch.float64)
pc = get_capillary(swnew)
unknown = torch.ravel(torch.column_stack((ponew, swnew)))
unknownold = unknown.detach().clone()
ntimestep = 1;
print('PyTorch version is '+torch.__version__)
if torch.cuda.is_available():
print("PyTorch is installed with GPU support.")
print("Number of available GPUs:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
print("PyTorch is installed without GPU support.")
while abs(time - tf) > 1e-8:
niter = 0
start = timer()
r = get_residual(unknown)
end = timer()
print('get_residual timing:', end - start)
start = timer()
J = jacfwd(get_residual)(unknown)
#J = jacobian(get_residual, unknown)
end = timer()
print('Jacfwd timing: ', end - start)
#cr = r.detach().numpy()
#cJ = J.detach().numpy()
#x, exitCode = gmres(Jnew, -rnew)
while True:
update = torch.linalg.solve(J, r)
niter = niter+1;
unknown -= update
#XiaoYuLing = torch.where(unknown[1::2] < 0)
#unknown[1::2][XiaoYuLing] = 0
r = get_residual(unknown)
if (torch.linalg.vector_norm(r, 2) <= Tol_resi):
is_coverged = 1
print (' ')
print ('****************************************************************************************')
print ('From time '+str(time)+' to time '+str(time+dt))
print ('Timestep '+str(ntimestep)+' convergers, here is the report:')
print ('2-Norm of the residual system: '+ str(torch.linalg.vector_norm(r, 2).detach().numpy()))
print ('Number of iterations: '+ str(niter))
print ('****************************************************************************************')
print (' ')
ntimestep = ntimestep + 1
unknownold = unknown.detach().clone()
plt.plot(unknown[::2].detach().numpy())
plt.ylabel('Pressure')
#plt.show()
else:
is_coverged = 0
J = jacfwd(get_residual)(unknown)
if ((niter > Max_iter) or (is_coverged)):
break
if (not is_coverged):
#dt *= alpha_chop
dt = dt if dt >= dt_min else dt_min
else:
time += dt
#dt *= alpha_grow
dt = (tf - time) if (time + dt) > tf else dt
```
Main thread:
Current thread 0x00002934 (most recent call first):
File "d:\dlsim\ai\25x25\main_ow.py", line 328 in <module>
File "C:\Users\wec8371\Anaconda3\envs\AISim\lib\site-packages\spyder_kernels\py3compat.py", line 356 in compat_exec
File "C:\Users\wec8371\Anaconda3\envs\AISim\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 469 in exec_code
File "C:\Users\wec8371\Anaconda3\envs\AISim\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 611 in _exec_file
File "C:\Users\wec8371\Anaconda3\envs\AISim\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 524 in runfile
File "C:\Users\wec8371\AppData\Local\Temp\ipykernel_8860\3764171792.py", line 1 in <module>
Restarting kernel...
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 0 |
2,811 | 99,797 |
Automatic broadcasting for sparse csr tensors
|
module: sparse, triaged
|
### 🚀 The feature, motivation and pitch
Hey, would it be possible to add broadcasting to sparse CSR tensors?
```python
import torch
input = torch.randn(6,6).to_sparse_csr()
bias = torch.tensor([[1,2,0,0,3,2]]).t().to_sparse_csr()
input.to_dense() + bias.to_dense() # working
input + bias.to_dense() # not working
input.to_dense() + bias # not working
input + bias # not working
```
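Until broadcasting lands, a hedged workaround sketch is to materialize the broadcast densely and convert back to CSR:
```python
import torch

input = torch.randn(6, 6).to_sparse_csr()
bias = torch.tensor([[1., 2., 0., 0., 3., 2.]]).t().to_sparse_csr()
# Materialize the broadcast densely, then convert back to CSR.
result = (input.to_dense() + bias.to_dense()).to_sparse_csr()
```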
### Alternatives
While broadcasting is supported for elementwise dense matrix multiplication/addition, it is not supported for sparse (CSR) tensors.
Could that be added?
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 2 |
2,812 | 99,794 |
Apple metal (MPS) device returning incorrect keypoints for YOLOv8 pose estimation model
|
high priority, oncall: binaries, oncall: releng, triaged, module: correctness (silent), module: mps
|
### 🐛 Describe the bug
I'm encountering an issue with YOLOv8 pose estimation model inference on the MPS device. The model returns incorrect keypoints. However, when I use the same method and code on the CPU, the keypoints are correct. I believe the issue may be related to the Apple Metal backend; from my research, it seems to be causing the error. Bounding-box detection seems to be working fine, but there's an issue with calculating the keypoints. Could you please help me solve this problem?
```python
import cv2
import numpy as np
from ultralytics import YOLO
cv2.namedWindow("mps_test",cv2.WINDOW_NORMAL)
image = cv2.imread("pose_test.jpeg")
model = YOLO('yolov8s-pose.pt') # load an official model
results = model(source=image,device="cpu",conf=0.6)
#results = model(source=image,device="mps",conf=0.6)
result_image = results[0].plot()
cv2.imshow("cpu_test", result_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Output With "MPS" (apple metal)
<img width="1168" alt="Screenshot 2023-04-19 at 15 54 42" src="https://user-images.githubusercontent.com/46287166/233779812-87e363e1-b37e-4cbe-b628-5952c3bfc35f.png">
Output With "CPU"
<img width="1212" alt="Screenshot 2023-04-19 at 16 18 57" src="https://user-images.githubusercontent.com/46287166/233779839-1fb70ff8-2bc4-43db-b270-caa495e19921.png">
### Versions
Macbook Air m1 chip
Metal: Version: 306.2.4
XCode:14.3
Macos Ventura:13.0
ultralytics Version: 8.0.81
Python Version: 3.10.8
Torch Version: 2.1.0.dev20230417
Opencv Version: 4.7.0.72
cc @ezyang @gchanan @zou3519 @seemethere @malfet @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev
| 2 |
2,813 | 99,790 |
Cannot compile torch 1.10 in CentOS 7.3
|
triaged
|
### 🐛 Describe the bug
I tried to compile PyTorch 1.10 on CentOS, but after configuring the system environment, I always got compilation errors as follows. I found that the gcc version was incorrect and could not be upgraded on CentOS. How can I solve this?

### Versions
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (conda-forge gcc 12.2.0-19) 12.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 10.2.89
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 440.33.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Stepping: 10
CPU MHz: 3155.250
BogoMIPS: 6400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 1.10.1 py3.9_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.1 py39_cu102 pytorch
[conda] torchvision 0.11.2 py39_cu102 pytorch
| 2 |
2,814 | 99,781 |
2.0.0+cu118 package missing proper libnvrtc-builtins.so.11.8
|
oncall: binaries, module: cpp
|
### 🐛 Describe the bug
Within the following package:
https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.0.0%2Bcu118.zip
`libnvrtc-builtins.so.11.8` is not packaged correctly (under `./lib`):
```
(base) ➜ lib git:(master) ✗ ls -la | grep libnvrtc
-rwxr-xr-x 1 evadne evadne 54417561 Mar 10 00:04 libnvrtc-672ee683.so.11.2
-rwxr-xr-x 1 evadne evadne 7722649 Mar 10 00:04 libnvrtc-builtins-2dc4bf68.so.11.8
```
As a result nvFuser fails sometimes with the following error:
```
CUDA NVRTC
compile error: nvrtc: error: failed to open libnvrtc-builtins.so.11.8.
Make sure that libnvrtc-builtins.so.11.8 is installed correctly.
```
This can be worked around by symlinking / renaming `libnvrtc-builtins-2dc4bf68.so.11.8` to `libnvrtc-builtins.so.11.8`, but in my opinion it should be fixed at the source.
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 531.41
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
BogoMIPS: 8999.76
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages
cc @seemethere @malfet @jbschlosser
| 2 |
2,815 | 99,774 |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides w/ `dynamo.export`, `make_fx` and `functionalize`
|
triaged, module: functionalization, oncall: pt2, module: export
|
### Latest update
This is the most distilled repro.
```python
import torch
import torch._dynamo
import torch.func
from torch.fx.experimental import proxy_tensor
from torch._dispatch.python import enable_python_dispatcher
def func(x, y):
return torch.matmul(x, y)
x = torch.randn(2, 4, 3, 4)
y = torch.randn(2, 4, 4, 3)
with enable_python_dispatcher():
# RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
gm = proxy_tensor.make_fx(torch.func.functionalize(func), tracing_mode="symbolic")(x, y)
```
Below is the original issue post, before further discussion.
### 🐛 Describe the bug
Distilled repro; I would greatly appreciate hints on how to approach/debug this.
```python
import torch
import torch._dynamo
import torch.func
from torch.fx.experimental import proxy_tensor
def func(x, y):
return torch.matmul(x, y.transpose(-1, -2))
x = torch.randn(2, 4, 3, 4)
y = torch.randn(2, 4, 3, 4)
gm, _ = torch._dynamo.export(func, x, y)
gm.print_readable()
gm = proxy_tensor.make_fx(torch.func.functionalize(gm), tracing_mode="symbolic")(x, y)
gm.print_readable()
```
```
Traceback (most recent call last):
File "/home/bowbao/pytorch_dev/torch/fx/graph_module.py", line 271, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/bowbao/pytorch_dev/torch/fx/_symbolic_trace.py", line 756, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/bowbao/pytorch_dev/torch/fx/experimental/proxy_tensor.py", line 433, in call_module
return forward(*args, **kwargs)
File "/home/bowbao/pytorch_dev/torch/fx/_symbolic_trace.py", line 749, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/bowbao/pytorch_dev/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.3", line 7, in forward
matmul = torch.matmul(arg0, transpose); arg0 = transpose = None
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
Call using an FX-traced Module, line 7 of the traced Module's generated forward function:
transpose = arg1.transpose(-1, -2); arg1 = None
matmul = torch.matmul(arg0, transpose); arg0 = transpose = None
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return pytree.tree_unflatten([matmul], self._out_spec)
Traceback (most recent call last):
File "repro_simpler_func_dynamic.py", line 15, in <module>
gm = proxy_tensor.make_fx(torch.func.functionalize(gm), tracing_mode="symbolic")(x, y)
File "/home/bowbao/pytorch_dev/torch/fx/experimental/proxy_tensor.py", line 771, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs))
File "/home/bowbao/pytorch_dev/torch/_dynamo/eval_frame.py", line 252, in _fn
return fn(*args, **kwargs)
File "/home/bowbao/pytorch_dev/torch/fx/experimental/proxy_tensor.py", line 467, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/home/bowbao/pytorch_dev/torch/_dynamo/eval_frame.py", line 252, in _fn
return fn(*args, **kwargs)
File "/home/bowbao/pytorch_dev/torch/fx/_symbolic_trace.py", line 778, in trace
(self.create_arg(fn(*args)),),
File "/home/bowbao/pytorch_dev/torch/fx/experimental/proxy_tensor.py", line 484, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/home/bowbao/pytorch_dev/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/home/bowbao/pytorch_dev/torch/_functorch/eager_transforms.py", line 1600, in wrapped
func_outputs = func(*func_args, **func_kwargs)
File "/home/bowbao/pytorch_dev/torch/fx/graph_module.py", line 662, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/bowbao/pytorch_dev/torch/fx/graph_module.py", line 279, in __call__
raise e.with_traceback(None)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
```
### Versions
Main on e9786149ab71874fad478109de173af6996f7eec
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 12 |
2,816 | 99,770 |
Deformable Convolution export to onnx
|
module: onnx, triaged
|
### Deformable Convs in onnx export
Dear Torch Team,
Is there a possibility that the torch.onnx.export() function could be updated so that it supports the deformable convolution layer that is now supported in the latest opset (version 19) of ONNX?
I believe that adding support for deformable convolutions in torch.onnx.export() would greatly enhance the functionality and versatility of the export possibilities.
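For reference, a minimal sketch of the layer in question, using torchvision's `DeformConv2d` as a stand-in (the export call below is the thing this request asks for; it is expected to fail today):
```python
import torch
from torchvision.ops import DeformConv2d

layer = DeformConv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
offset = torch.randn(1, 2 * 3 * 3, 32, 32)  # 2 * kh * kw offset channels

# Expected to fail today: there is no ONNX symbolic mapping
# torchvision::deform_conv2d to the DeformConv op added in opset 19.
torch.onnx.export(layer, (x, offset), "deform_conv.onnx", opset_version=19)
```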
Nick
| 0 |
2,817 | 99,722 |
cuda 12.0 support request for building pytorch from source code
|
module: build, module: cuda, triaged, enhancement
|
### 🚀 The feature, motivation and pitch
Motivation:
Building from source supports CUDA 12.1, but it does not support CUDA 12.0.
According to the doc https://github.com/pytorch/pytorch#from-source, there is no magma-cuda* package matching CUDA version 12.0 at https://anaconda.org/pytorch/repo: only magma-cuda121 can be found, no magma-cuda120.
Feature: please support CUDA 12.0 for building PyTorch from source.
Thank you!
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere @ngimel
| 3 |
2,818 | 99,715 |
no-duplicate-decl-specifier as an invalid compile flag for CXX in GCC
|
module: build, module: rocm, triaged
|
### 🐛 Describe the bug
[cmake/Dependencies.cmake](https://github.com/pytorch/pytorch/blob/9861ec9785b53e71c9affd7d268ef7073eb1c446/cmake/Dependencies.cmake#L1272) sets `-Wno-duplicate-decl-specifier` for HIP_CXX_FLAGS. At least for GCC 12 this is not a valid compile flag for C++ translation units, causing lots of `cc1plus: warning: command-line option ‘-Wno-duplicate-decl-specifier’ is valid for C/ObjC but not for C++` warnings.
### Versions
9861ec9785b53e71c9affd7d268ef7073eb1c446
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport
| 0 |
2,819 | 99,710 |
pca_lowrank and svd_lowrank broken under automatic mixed precision.
|
module: cuda, triaged, module: half, module: linear algebra, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
`torch.pca_lowrank` and `torch.svd_lowrank` do not work with automatic mixed precision, even if the inputs are 32-bit.
```python
import torch
x = torch.rand(1000, 3, device="cuda")
with torch.cuda.amp.autocast(True):
assert x.dtype is torch.float32
torch.pca_lowrank(x)
```
Trace:
```
Traceback (most recent call last):
File "/home/pbsds/ntnu/ifield/tmp/pca-repro.py", line 6, in <module>
torch.pca_lowrank(x)
File "/home/pbsds/.cache/pypoetry/virtualenvs/ifield-P6Ko3Gy1-py3.10/lib/python3.10/site-packages/torch/_lowrank.py", line 299, in pca_lowrank
return _svd_lowrank(A - C, q, niter=niter, M=None)
File "/home/pbsds/.cache/pypoetry/virtualenvs/ifield-P6Ko3Gy1-py3.10/lib/python3.10/site-packages/torch/_lowrank.py", line 174, in _svd_lowrank
Q = get_approximate_basis(A, q, niter=niter, M=M)
File "/home/pbsds/.cache/pypoetry/virtualenvs/ifield-P6Ko3Gy1-py3.10/lib/python3.10/site-packages/torch/_lowrank.py", line 70, in get_approximate_basis
Q = torch.linalg.qr(matmul(A, R)).Q
RuntimeError: "geqrf_cuda" not implemented for 'Half'
```
Likely the `matmul` in `get_approximate_basis` in `_lowrank.py` downcasts to float16, which is not supported in `torch.linalg.qr`.
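A hedged workaround sketch until the low-rank routines handle autocast: run them with autocast disabled (and on explicitly float32 inputs) so the internal `matmul` is not downcast:
```python
import torch

x = torch.rand(1000, 3, device="cuda")
with torch.cuda.amp.autocast(True):
    with torch.cuda.amp.autocast(enabled=False):
        # Keeps the internal matmul in float32, so geqrf/qr has a supported dtype.
        U, S, V = torch.pca_lowrank(x.float())
```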
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 11.3.0
Clang version: 15.0.7
CMake version: version 3.25.2
Libc version: glibc-2.37
Python version: 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.102-1-MANJARO-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 525.89.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 79%
CPU max MHz: 4600,0000
CPU min MHz: 800,0000
BogoMIPS: 4609,00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] numpy==1.23.3
[pip3] pytorch-lightning==1.9.4
[pip3] pytorch3d==0.7.2
[pip3] torch==1.13.1
[pip3] torchmeta==1.8.0
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.14.1
[pip3] torchviz==0.0.2
[conda] No relevant packages
cc @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 6 |
2,820 | 99,701 |
When converting to ONNX with dynamic_axes, the Reshape op value is always the same as in the static case; dynamic_axes is useless and it can't infer the right shape dynamically
|
module: onnx, triaged
|
### 🐛 Describe the bug

When converting to ONNX with dynamic_axes, the value feeding the Reshape op is always the same as in the static case; dynamic_axes has no effect and the exported model can't infer the right shape dynamically.
The graph is part of a transformer (see the figure above). The Reshape target value never changes: it is taken from the dummy input passed to torch.onnx.export. After converting to ONNX and running inference with an input of a different size, I get this error:
Non-zero status code returned while running Reshape node. Name:'Reshape_375' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{12,7,512}, requested shape:{18,36,128}
The shape {18,36,128} was captured by torch.onnx.export from the dummy input and never changes, so inference with any other input size fails, even though I pass the dynamic_axes flag. I suspect the shape produced by `q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)` is the problem: `tgt_len, bsz, embed_dim = query.shape` obtains the sizes, but their values are inferred from the dummy input's shape during torch.onnx.export, so the model only remembers {18,36,128} rather than dynamic sizes, and inference with shape {12,7,512} raises the error above. This line can't produce a dynamic shape.
What is the reason for these errors? It is very strange, and I have no idea how to solve it.
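For reference, a sketch of how `dynamic_axes` is normally passed (the module, dummy shapes, and axis names here are placeholders, not the real model; whether this particular module exports cleanly is beside the point). Note that `dynamic_axes` only marks graph inputs/outputs as dynamic; any shape the traced code bakes into a Reshape target still has to be derived from the tensors at runtime.
```python
import torch

mha = torch.nn.MultiheadAttention(embed_dim=512, num_heads=8)  # placeholder module
query = torch.randn(18, 2, 512)   # (tgt_len, batch, embed_dim)
key = torch.randn(21, 2, 512)
value = torch.randn(21, 2, 512)
torch.onnx.export(
    mha, (query, key, value), "mha.onnx",
    input_names=["query", "key", "value"], output_names=["out", "attn_weights"],
    dynamic_axes={"query": {0: "tgt_len", 1: "batch"},
                  "key": {0: "src_len", 1: "batch"},
                  "value": {0: "src_len", 1: "batch"},
                  "out": {0: "tgt_len", 1: "batch"}},
)
```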
The code producing this graph is as follows:
```python
def multi_head_attention_forward(
query: Tensor,
key: Tensor,
value: Tensor,
embed_dim_to_check: int,
num_heads: int,
in_proj_weight: Tensor,
in_proj_bias: Optional[Tensor],
bias_k: Optional[Tensor],
bias_v: Optional[Tensor],
add_zero_attn: bool,
dropout_p: float,
out_proj_weight: Tensor,
out_proj_bias: Optional[Tensor],
training: bool = True,
key_padding_mask: Optional[Tensor] = None,
need_weights: bool = True,
attn_mask: Optional[Tensor] = None,
use_separate_proj_weight: bool = False,
q_proj_weight: Optional[Tensor] = None,
k_proj_weight: Optional[Tensor] = None,
v_proj_weight: Optional[Tensor] = None,
static_k: Optional[Tensor] = None,
static_v: Optional[Tensor] = None,
) -> Tuple[Tensor, Optional[Tensor]]:
r"""
Args:
query, key, value: map a query and a set of key-value pairs to an output.
See "Attention Is All You Need" for more details.
embed_dim_to_check: total dimension of the model.
num_heads: parallel attention heads.
in_proj_weight, in_proj_bias: input projection weight and bias.
bias_k, bias_v: bias of the key and value sequences to be added at dim=0.
add_zero_attn: add a new batch of zeros to the key and
value sequences at dim=1.
dropout_p: probability of an element to be zeroed.
out_proj_weight, out_proj_bias: the output projection weight and bias.
training: apply dropout if is ``True``.
key_padding_mask: if provided, specified padding elements in the key will
be ignored by the attention. This is an binary mask. When the value is True,
the corresponding value on the attention layer will be filled with -inf.
need_weights: output attn_output_weights.
attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
the batches while a 3D mask allows to specify a different mask for the entries of each batch.
use_separate_proj_weight: the function accept the proj. weights for query, key,
and value in different forms. If false, in_proj_weight will be used, which is
a combination of q_proj_weight, k_proj_weight, v_proj_weight.
q_proj_weight, k_proj_weight, v_proj_weight, in_proj_bias: input projection weight and bias.
static_k, static_v: static key and value used for attention operators.
"""
tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias)
if has_torch_function(tens_ops):
return handle_torch_function(
multi_head_attention_forward,
tens_ops,
query,
key,
value,
embed_dim_to_check,
num_heads,
in_proj_weight,
in_proj_bias,
bias_k,
bias_v,
add_zero_attn,
dropout_p,
out_proj_weight,
out_proj_bias,
training=training,
key_padding_mask=key_padding_mask,
need_weights=need_weights,
attn_mask=attn_mask,
use_separate_proj_weight=use_separate_proj_weight,
q_proj_weight=q_proj_weight,
k_proj_weight=k_proj_weight,
v_proj_weight=v_proj_weight,
static_k=static_k,
static_v=static_v,
)
# set up shape vars
tgt_len, bsz, embed_dim = query.shape
src_len, _, _ = key.shape
assert embed_dim == embed_dim_to_check, \
f"was expecting embedding dimension of {embed_dim_to_check}, but got {embed_dim}"
if isinstance(embed_dim, torch.Tensor):
# embed_dim can be a tensor when JIT tracing
head_dim = embed_dim.div(num_heads, rounding_mode='trunc')
else:
head_dim = embed_dim // num_heads
assert head_dim * num_heads == embed_dim, f"embed_dim {embed_dim} not divisible by num_heads {num_heads}"
if use_separate_proj_weight:
# allow MHA to have different embedding dimensions when separate projection weights are used
assert key.shape[:2] == value.shape[:2], \
f"key's sequence and batch dims {key.shape[:2]} do not match value's {value.shape[:2]}"
else:
assert key.shape == value.shape, f"key shape {key.shape} does not match value shape {value.shape}"
#
# compute in-projection
#
if not use_separate_proj_weight:
q, k, v = _in_projection_packed(query, key, value, in_proj_weight, in_proj_bias)
else:
assert q_proj_weight is not None, "use_separate_proj_weight is True but q_proj_weight is None"
assert k_proj_weight is not None, "use_separate_proj_weight is True but k_proj_weight is None"
assert v_proj_weight is not None, "use_separate_proj_weight is True but v_proj_weight is None"
if in_proj_bias is None:
b_q = b_k = b_v = None
else:
b_q, b_k, b_v = in_proj_bias.chunk(3)
q, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v)
# prep attention mask
if attn_mask is not None:
if attn_mask.dtype == torch.uint8:
warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
attn_mask = attn_mask.to(torch.bool)
else:
assert attn_mask.is_floating_point() or attn_mask.dtype == torch.bool, \
f"Only float, byte, and bool types are supported for attn_mask, not {attn_mask.dtype}"
# ensure attn_mask's dim is 3
if attn_mask.dim() == 2:
correct_2d_size = (tgt_len, src_len)
if attn_mask.shape != correct_2d_size:
raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
attn_mask = attn_mask.unsqueeze(0)
elif attn_mask.dim() == 3:
correct_3d_size = (bsz * num_heads, tgt_len, src_len)
if attn_mask.shape != correct_3d_size:
raise RuntimeError(f"The shape of the 3D attn_mask is {attn_mask.shape}, but should be {correct_3d_size}.")
else:
raise RuntimeError(f"attn_mask's dimension {attn_mask.dim()} is not supported")
# prep key padding mask
if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
warnings.warn("Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
key_padding_mask = key_padding_mask.to(torch.bool)
# add bias along batch dimension (currently second)
if bias_k is not None and bias_v is not None:
assert static_k is None, "bias cannot be added to static key."
assert static_v is None, "bias cannot be added to static value."
k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
if attn_mask is not None:
attn_mask = pad(attn_mask, (0, 1))
if key_padding_mask is not None:
key_padding_mask = pad(key_padding_mask, (0, 1))
else:
assert bias_k is None
assert bias_v is None
#
# reshape q, k, v for multihead attention and make em batch first
#
q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
if static_k is None:
k = k.contiguous().view(k.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
else:
# TODO finish disentangling control flow so we don't do in-projections when statics are passed
assert static_k.size(0) == bsz * num_heads, \
f"expecting static_k.size(0) of {bsz * num_heads}, but got {static_k.size(0)}"
assert static_k.size(2) == head_dim, \
f"expecting static_k.size(2) of {head_dim}, but got {static_k.size(2)}"
k = static_k
if static_v is None:
v = v.contiguous().view(v.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
else:
# TODO finish disentangling control flow so we don't do in-projections when statics are passed
assert static_v.size(0) == bsz * num_heads, \
f"expecting static_v.size(0) of {bsz * num_heads}, but got {static_v.size(0)}"
assert static_v.size(2) == head_dim, \
f"expecting static_v.size(2) of {head_dim}, but got {static_v.size(2)}"
v = static_v
# add zero attention along batch dimension (now first)
if add_zero_attn:
zero_attn_shape = (bsz * num_heads, 1, head_dim)
k = torch.cat([k, torch.zeros(zero_attn_shape, dtype=k.dtype, device=k.device)], dim=1)
v = torch.cat([v, torch.zeros(zero_attn_shape, dtype=v.dtype, device=v.device)], dim=1)
if attn_mask is not None:
attn_mask = pad(attn_mask, (0, 1))
if key_padding_mask is not None:
key_padding_mask = pad(key_padding_mask, (0, 1))
# update source sequence length after adjustments
src_len = k.size(1)
# merge key padding and attention masks
if key_padding_mask is not None:
assert key_padding_mask.shape == (bsz, src_len), \
f"expecting key_padding_mask shape of {(bsz, src_len)}, but got {key_padding_mask.shape}"
key_padding_mask = key_padding_mask.view(bsz, 1, 1, src_len). \
expand(-1, num_heads, -1, -1).reshape(bsz * num_heads, 1, src_len)
if attn_mask is None:
attn_mask = key_padding_mask
elif attn_mask.dtype == torch.bool:
attn_mask = attn_mask.logical_or(key_padding_mask)
else:
attn_mask = attn_mask.masked_fill(key_padding_mask, float("-inf"))
# convert mask to float
if attn_mask is not None and attn_mask.dtype == torch.bool:
new_attn_mask = torch.zeros_like(attn_mask, dtype=torch.float)
new_attn_mask.masked_fill_(attn_mask, float("-inf"))
attn_mask = new_attn_mask
# adjust dropout probability
if not training:
dropout_p = 0.0
#
# (deep breath) calculate attention and out projection
#
attn_output, attn_output_weights = _scaled_dot_product_attention(q, k, v, attn_mask, dropout_p)
attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
attn_output = linear(attn_output, out_proj_weight, out_proj_bias)
if need_weights:
# average attention weights over heads
attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
return attn_output, attn_output_weights.sum(dim=1) / num_heads
else:
return attn_output, None
```
### Versions
pytorch 1.10.1
CUDA 10.2
@svenstaro @JackDanger @infil00p @eklitzke @soulitzer
| 5 |
2,821 | 99,693 |
WARNING: The shape inference of prim::PadPacked type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
|
oncall: jit, onnx-triaged
|
### 🐛 Describe the bug
WARNING: The shape inference of prim::PadPacked type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
### Versions
CUDA 10.2
pytorch 1.10.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 2 |
2,822 | 99,690 |
GPU training works well, but CPU training does not work
|
module: cpu, triaged, module: fft
|
### 🐛 Describe the bug
model{
self.a=torch.nn.Parameter(torch.rand(10,10))
self.b=torch.nn.Parameter(torch.rand(10,10))
self.c=torch.nn.Parameter(torch.rand(1))
}
forward{
fft
ifft
...
}
If I do `model = model.cuda().cpu()`, then CPU training will work.
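For concreteness, here is a minimal runnable sketch of the pseudocode above; the shapes, the operations between `fft`/`ifft`, and whether this exact sketch reproduces the failure are my assumptions, not verified details:
```python
import torch

class FFTModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.rand(10, 10))
        self.b = torch.nn.Parameter(torch.rand(10, 10))
        self.c = torch.nn.Parameter(torch.rand(1))

    def forward(self, x):
        h = torch.fft.fft(x * self.a + self.b)   # placeholder fft-based computation
        h = torch.fft.ifft(h) * self.c
        return h.real

model = FFTModel()
# model = model.cuda().cpu()  # the round-trip after which CPU training works for me
out = model(torch.rand(10, 10))
out.sum().backward()          # CPU backward through fft/ifft
```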
### Versions
1.10
cuda 11.1
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mruberry @peterbell10
| 5 |
2,823 | 99,689 |
[torch.compile] can't multiply sequence by non-int of type 'float' when enabling `shape_padding`
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
`torch.compile` raises an error that `can't multiply sequence by non-int of type 'float'` when enabling `shape_padding`
```py
import torch
torch.manual_seed(420)
class Model(torch.nn.Module):
def forward(self, x, y, inp):
return torch.add(torch.mm(x, y), inp)
x = torch.randn(3, 4).cuda()
y = torch.randn(4, 5).cuda()
inp = torch.randn(3, 5).cuda()
func = Model()
res1 = func(x, y, inp)
print(res1)
jit_func = torch.compile(func)
res2 = jit_func(x, y, inp)
print(res2)
# TypeError: can't multiply sequence by non-int of type 'float'
# While executing %mm : [#users=1] = call_function[target=torch.mm](args = (%l_x_, %l_y_), kwargs = {})
```
After checking the call stack, I found that this bug is in `should_pad_bench` in the decomposition of `addmm`.
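For completeness, the repro above has shape padding turned on before compiling. I set the `TORCHINDUCTOR_SHAPE_PADDING=1` environment variable; the equivalent config toggle below assumes the attribute is named `shape_padding`:
```py
import torch._inductor.config as inductor_config

# equivalent to running with TORCHINDUCTOR_SHAPE_PADDING=1 in the environment
inductor_config.shape_padding = True
```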
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230419+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230419+cu118
[pip3] torchaudio==2.1.0.dev20230419+cu118
[pip3] torchvision==0.16.0.dev20230419+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] torch 2.1.0.dev20230419+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230419+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230419+cu118 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
2,824 | 99,684 |
In torchelastic support running worker rank 0 on agent rank 0 consistently
|
oncall: distributed, triaged, module: elastic
|
### 🚀 The feature, motivation and pitch
Currently, when launching distributed jobs using torchrun, the Rank 0 worker can land on any arbitrary node. This ask is to add a new rendezvous implementation for which worker rank 0 always runs on agent rank 0.
### Additional context
This will improve observability of the distributed job by making it easy to locate logs for the rank 0 worker
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @dzhulgakov
| 0 |
2,825 | 99,681 |
`torch.ops.aten.empty` is not discoverable from `dir(torch.ops.aten)` until explicitly calling getattr
|
triaged, module: library
|
### 🐛 Describe the bug
Accidentally discovered this when I was trying to retrieve all `OpOverload`s under `torch.ops.aten`. I noticed that `torch.ops.aten.empty` (may or may not be the only case) is not discoverable from `for op in torch.ops.aten`.
However, it does appear there is such an op since `torch.ops.aten.empty` returns a valid object. And after retrieving that through `getattr` explicitly, it appears under `dir(torch.ops.aten)`.
I wonder if this is a bug or by design? And if it's not a bug, what's the recommended way to loop over all ops under `torch.ops.aten`?
```python
>>> import torch
>>> "empty" in torch.ops.aten
False
>>> torch.ops.aten.empty
<OpOverloadPacket(op='aten.empty')>
>>> "empty" in torch.ops.aten
True
```
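For context, the enumeration I was attempting looks roughly like this (assuming `dir()` plus `OpOverloadPacket.overloads()` is a reasonable way to do it; the filtering is only there to skip non-op attributes):
```python
import torch
from torch._ops import OpOverloadPacket

for name in dir(torch.ops.aten):
    packet = getattr(torch.ops.aten, name)
    if not isinstance(packet, OpOverloadPacket):
        continue  # skip non-op attributes on the namespace
    for overload_name in packet.overloads():   # e.g. "default", "out", ...
        op = getattr(packet, overload_name)    # OpOverload
```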
### Versions
Main from 5315317b7bbb13d1b8d91a682cec1fb4dace79e4
cc @anjali411
| 8 |
2,826 | 99,653 |
Conda MacOS installation install pytorch-1.13 rather than 2.0 as of Apr 4th
|
high priority, triage review, oncall: binaries, module: regression
|
### 🐛 Describe the bug
See https://hud.pytorch.org/hud/pytorch/builder/main/1?per_page=50&name_filter=cron%20%2F%20release%20%2F%20mac%20%2F%20conda-py3
Last successful run: https://github.com/pytorch/builder/actions/runs/4619355597/jobs/8168032918
First faulty run: https://github.com/pytorch/builder/actions/runs/4629742465/jobs/8190405098
### Versions
2.0
cc @ezyang @gchanan @zou3519 @seemethere
| 2 |
2,827 | 99,652 |
DistributedDataParallel doesn't work with complex buffers
|
oncall: distributed, module: complex
|
### 🐛 Describe the bug
DistributedDataParallel doesn't work with complex buffers, even when `broadcast_buffers=False`.
```py
import os
import torch
from torch import nn
torch.distributed.init_process_group(backend="nccl")
rank = int(os.environ["LOCAL_RANK"])
device = f"cuda:{rank}"
torch.cuda.set_device(device)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.register_buffer("complex", torch.tensor([1.0 + 1.0j], requires_grad=False))
self.net = nn.Linear(16, 32)
def forward(self, x):
return self.net(x)
if __name__ == "__main__":
model = Net()
model = model.to(device)
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[rank], broadcast_buffers=False
)
```
Throws `RuntimeError: Input Tensor data type is not supported for NCCL process group: ComplexFloat`. But I do not need DDP to sync the buffer--it's a static parameter that I just want moved with the model when I do `.to(device)`. It doesn't have any gradients, it doesn't need any syncing, it will never change during training.
How can I get DDP to ignore this buffer?
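One direction I'm aware of but have not verified (flagging it as an assumption, since it relies on a private, undocumented hook that may not be supported) is telling DDP to skip the buffer by name before wrapping:
```py
# hypothetical workaround using DDP's private ignore-list hook; unverified
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
    model, ["complex"]  # fully-qualified buffer names to ignore
)
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[rank], broadcast_buffers=False
)
```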
### Versions
By the way, `https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py` is missing; it should be `wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py` (master should be main).
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230416+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.10.9 (main, Mar 22 2023, 11:20:39) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7513 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 3552.598
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5200.14
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230416+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 1 |
2,828 | 99,649 |
[torch.compile] raises an error that expanded size doesn't match when enabling `shape_padding`
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
`torch.compile` raises an error that expanded size doesn't match when enabling `shape_padding` by setting `TORCHINDUCTOR_SHAPE_PADDING=1`
```py
import torch
import torch.nn as nn
torch.manual_seed(420)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.functional.linear
self.linear_weight = torch.randn(4, 4).cuda()
self.bias = torch.randn(1, 4).cuda()
def forward(self, x):
x = self.linear(x, self.linear_weight, self.bias)
return x
input_tensor = torch.randn(1, 3, 4).cuda()
func = Model().cuda()
res1 = func(input_tensor)
print(res1)
# tensor([[[-1.2507, 1.2743, 2.1668, 2.3092],
# [ 0.2125, 0.0958, -2.3418, 3.3766],
# [-0.3756, 0.8750, -0.5950, 4.4472]]], device='cuda:0')
jit_func = torch.compile(func)
res2 = jit_func(input_tensor)
# RuntimeError: The expanded size of the tensor (4) must match the existing size (2) at non-singleton dimension 0. Target sizes: [4, 4]. Tensor sizes: [2, 4]
# While executing %linear : [#users=1] = call_function[target=torch._C._nn.linear](args = (%l_x_, %l__self___linear_weight, %l__self___bias), kwargs = {})
```
I think it is likely caused by the decomposition of `addmm`.
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230419+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 6700.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+46672772b4
[pip3] torch==2.1.0.dev20230419+cu118
[pip3] torchaudio==2.1.0.dev20230419+cu118
[pip3] torchvision==0.16.0.dev20230419+cu118
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+46672772b4 pypi_0 pypi
[conda] torch 2.1.0.dev20230419+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230419+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230419+cu118 pypi_0 pypi
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
2,829 | 99,640 |
Ban GradScaler scale from being less than 1
|
triaged, module: amp (automated mixed precision)
|
### 🚀 The feature, motivation and pitch
We should standardize on a way to tell users when a reasonable/good scale for the AMP GradScaler cannot be found.
Context:
The grad scaler is awesome in that it "automatically" finetunes itself to settle on an ideal scale. However! There are instances where there is just no ideal scale for certain parameters, for example the one mentioned in issue #96755, where the scale becomes so tiny that it causes infs and, consequently, nans. In this issue, we were nestled in the unlikely (but clearly not impossible) world where for a certain scale S, the scaled gradients were valid, but the unscaled (g/S) were invalid, leading to weird inconsistent behavior.
Generally, we want to add invariants/guarantees to prevent weird inconsistent behavior and give users faster and clearer signal. Since the purpose of the scale is to scale UP the loss so that the grads do not underflow in fp16, we should set a hard boundary: the scale must be greater than or equal to 1. Note that adding a check may induce a slight perf hit.
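To make the proposal concrete, here is a minimal sketch of the kind of check I have in mind, written as an external helper against the public API rather than the final in-tree form:
```python
import torch

def assert_scale_ge_one(scaler: torch.cuda.amp.GradScaler) -> None:
    # Proposed invariant: the scale exists to scale the loss UP, so a value < 1
    # means no usable loss scale was found and we should fail loudly instead of
    # silently producing infs/nans.
    scale = scaler.get_scale()
    if scale < 1.0:
        raise RuntimeError(
            f"GradScaler scale {scale} dropped below 1.0; "
            "gradients likely overflow even without scaling."
        )
```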
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @ngimel @crcrpar @stas00 @albanD
### Alternatives
Do nothing, let users deal with nans from the perspective of BC + that there might be currently unimagined worlds where users want to handle nans themselves (?)
### Additional context
More context provided in https://github.com/pytorch/pytorch/issues/96755.
| 11 |
2,830 | 99,637 |
Torch hangs at import if tensorflow is imported first
|
oncall: binaries
|
### 🐛 Describe the bug
If I import tensorflow==2.12.0 and then try importing torch==2.0.0, it hangs.
Code to reproduce the error:
```
docker run --rm -it ubuntu:22.04
// once inside
apt-get update && apt-get install python3
apt-get install pip
pip install tensorflow==2.12.0
pip install torch --extra-index-url https://download.pytorch.org/whl/cu118
python3
import tensorflow
import torch
```
Note: the issue persists even if you install the CPU-only torch build, and also on any Ubuntu image 20.04 and above.
### Versions
It says 404 Not Found; however, the version of torch is 2.0.0, and the same thing happens if it is torch-cpu.
cc @seemethere @malfet
| 4 |
2,831 | 99,630 |
Parameterisation of MultivariateNormal distribution using Cholesky decomposition of precision matrix
|
module: distributions, feature, triaged
|
### 🚀 The feature, motivation and pitch
Currently, the `MultivariateNormal` distribution is parameterised based on the Cholesky factor of the covariance matrix, referred to in the code as `scale_tril` (I'm unsure where this naming comes from). While the distribution can be initialised with the full covariance or precision matrix instead, these are simply used to calculate `scale_tril` which is then used as the basis for all calculations, presumably for efficiency and numerical stability.
It is however possible to parameterise the distribution with the Cholesky factor of the precision matrix. One possible implementation is laid out in [this report](https://arxiv.org/pdf/2003.05739.pdf) presented by a contributor to [FrEIA](https://github.com/vislearn/FrEIA), a PyTorch library for invertible NNs. The report argues for better numerical stability than the traditional parameterisation, something which I have confirmed in my experiments (see “Additional context” section below).
I'm happy to work on this request myself. I have a working implementation of the `log_prob` and `sample` methods and am happy to work on expanding this to match the full `Distribution` interface and then integrating this into the existing implementation. One question would be whether this replaces the current functionality, or if this is activatable via a flag.
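To make this concrete, here is a minimal sketch of the `log_prob` computation under a precision-Cholesky parameterisation; the argument names and the free-function form are illustrative, not the proposed `Distribution` API:
```python
import math
import torch

def mvn_log_prob_precision_chol(x, loc, precision_tril):
    # precision = L @ L.T, where L = precision_tril is lower triangular
    diff = x - loc                                               # (..., d)
    y = torch.einsum("...ji,...j->...i", precision_tril, diff)   # L^T (x - loc)
    mahalanobis = (y * y).sum(-1)                                 # (x-loc)^T precision (x-loc)
    half_log_det_precision = precision_tril.diagonal(dim1=-2, dim2=-1).log().sum(-1)
    d = x.shape[-1]
    return -0.5 * (d * math.log(2 * math.pi) + mahalanobis) + half_log_det_precision
```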
### Alternatives
A [PyTorch implementation of a GMM using this parameterisation](https://github.com/vislearn/FrEIA/blob/master/FrEIA/modules/gaussian_mixture.py) exists in [FrEAI](https://github.com/vislearn/FrEIA) but this is...
1. A full Gaussian mixture model, not a single multivariate distribution.
2. Built to be part of an invertible network, which makes the implementation difficult to use for non-invertible cases.
### Additional context
My use case is implementing a [Mixture Density Network](https://publications.aston.ac.uk/id/eprint/373/) to predict the parameters of a Gaussian Mixture Model for high-ish dimensional data (31 dimensions). I find the existing implementation of the `MultivariateNormal` distribution quickly runs into numerical stability issues and returns NaNs when calculating the log probabilities.
cc @fritzo @neerajprad @alicanb @nikitaved
| 3 |
2,832 | 99,625 |
Conda Pytorch set processor affinity to the first physical core after fork
|
high priority, triage review, module: dependency bug, oncall: binaries, triaged, module: mkl, module: third_party, module: intel
|
### 🐛 Describe the bug
This issue may be related to #98836 and #91989.
### Background
The issue arises when using PyTorch with Ray. The `raylet` process is forked from the main process after importing `torch`, then `raylet` uses `execvpe` to create worker processes. But ``taskset -pc `pgrep raylet` `` shows `current affinity list: 0,1`, and worker processes inherit this, so all Ray worker processes use only one physical core, causing a significant performance penalty.
### Investigation
This issue boils down to the `fork` usage, a minimal Python example is
```python
# Env: conda create -n torch -c conda-forge -c pytorch --override-channels python=3.9 pytorch cpuonly
import torch
import time
import os
if os.fork() == 0:
print(f"Child PID: {os.getpid()}")
time.sleep(100)
```
Then `taskset -pc <child pid>` shows bad affinity. Using `strace` like `strace -e sched_setaffinity -f python ...` also clearly shows this.
I further narrowed down this issue with a C only script
```c
// Env: conda create -n mkl -c conda-forge --override-channels mkl mkl-include
// Build: gcc -I ~/miniconda/envs/mkl/include -L ~/miniconda/envs/mkl/lib -o test test-mkl.c -lgomp
// Run: LD_LIBRARY_PATH=~/miniconda/envs/mkl/lib strace -e sched_setaffinity -f ./test
#include <unistd.h>
#include <omp.h>
int main() {
omp_get_max_threads();
if (fork() == 0) {
omp_get_max_threads();
// sleep(100);
}
return 0;
}
```
So it turns out that this may not be an issue specific to PyTorch. Linking in an environment with `libgomp` and without `mkl` will not reproduce this issue, only `mkl` calls `sched_setaffinity`. As `mkl` does not come with source code, I was not able to further investigate.
### Workaround
Multiple ways could workaround this issue
1. Fork before importing torch
```python
import ray; ray.init()
import torch
```
2. Install the PYPI version of PyTorch
3. (not tested) manually reset affinity in the child processes (see the sketch below this list)
4. **Pin llvm-openmp to `14.0.*`** (found this when comparing dependencies with a older normal conda environment)
```
conda create -n torch2 -c conda-forge -c pytorch --override-channels python=3.9 pytorch cpuonly llvm-openmp=14
```
So this may be an issue caused by un-pinned upgraded dependency.
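For reference, a rough sketch of what workaround 3 above would look like (untested, as noted):
```python
import os
import torch  # importing torch first is what triggers the narrowed affinity

if os.fork() == 0:
    # child: explicitly widen the inherited affinity back to all CPUs
    os.sched_setaffinity(0, range(os.cpu_count()))
    print(os.sched_getaffinity(0))
```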
### Versions
The `mkl` conda environment
```
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_kmp_llvm conda-forge
icu 72.1 hcb278e6_0 conda-forge
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libhwloc 2.9.1 hd6dc26d_0 conda-forge
libiconv 1.17 h166bdaf_0 conda-forge
libstdcxx-ng 12.2.0 h46fd767_19 conda-forge
libxml2 2.10.4 hfdac1af_0 conda-forge
libzlib 1.2.13 h166bdaf_4 conda-forge
llvm-openmp 16.0.1 h417c0b6_0 conda-forge
mkl 2022.1.0 h84fe81f_915 conda-forge
mkl-include 2022.1.0 h84fe81f_915 conda-forge
tbb 2021.9.0 hf52228f_0 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
zstd 1.5.2 h3eb15da_6 conda-forge
```
cc @ezyang @gchanan @zou3519 @seemethere @malfet @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 18 |
2,833 | 99,615 |
CUPTI Initialization error
|
module: cuda, triaged, oncall: profiler
|
### 🐛 Describe the bug
CUPTI error prevents CUDA profiling, even on Pytorch example code.
Error message:
```
WARNING:2023-04-20 11:13:16 7278:7278 init.cpp:146] function cbapi->getCuptiStatus() failed with error CUPTI_ERROR_NOT_INITIALIZED (15)
WARNING:2023-04-20 11:13:16 7278:7278 init.cpp:147] CUPTI initialization failed - CUDA profiler activities will be missing
INFO:2023-04-20 11:13:16 7278:7278 init.cpp:149] If you see CUPTI_ERROR_INSUFFICIENT_PRIVILEGES, refer to https://developer.nvidia.com/nvidia-development-tools-solutions-err-nvgpuctrperm-cupti
```
Example code:
```
import torch
import numpy as np
from torch import nn
import torch.autograd.profiler as profiler
class MyModule(nn.Module):
def __init__(self, in_features: int, out_features: int, bias: bool = True):
super(MyModule, self).__init__()
self.linear = nn.Linear(in_features, out_features, bias)
def forward(self, input, mask):
with profiler.record_function("LINEAR PASS"):
out = self.linear(input)
with profiler.record_function("MASK INDICES"):
threshold = out.sum(axis=1).mean().item()
hi_idx = np.argwhere(mask.cpu().numpy() > threshold)
hi_idx = torch.from_numpy(hi_idx).cuda()
return out, hi_idx
model = MyModule(500, 10).cuda()
input = torch.rand(128, 500).cuda()
mask = torch.rand((500, 500, 500), dtype=torch.float).cuda()
# warm-up
model(input, mask)
with profiler.profile(with_stack=True, profile_memory=True) as prof:
out, idx = model(input, mask)
print(prof.key_averages(group_by_stack_n=5).table(sort_by='self_cpu_time_total', row_limit=5))
```
My system:
```
CUDA: 12.1
Driver: 5.30
Pytorch: 2.0
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
```
I have checked that my CUDA installation works by running the `deviceQuery` and `bandwidthTest` tests that are part of the cuda_samples library. I have also made sure the profiler is accessible by non-sudo code (`RmProfilingAdminOnly: 0`).
I have not added the collect_env.py output as I get a 404 Not Found error when trying to download it.
### Versions
None
cc @ngimel @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
| 24 |
2,834 | 99,614 |
Make broadcast_coalesced to a op for processgroup
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
Today, broadcast_coalesced functions similarly to broadcast, like a collective communication operator. It's defined here: https://github.com/pytorch/pytorch/blob/4d8906885e672d088849303cc8e821cff81ba6d9/torch/csrc/distributed/c10d/comm.hpp#L12-L16
Because of the custom device part, we may need our own special logic for broadcast_coalesced, so one way to make broadcast_coalesced more general is to make it a collective communication operator so that users can customize their own broadcast_coalesced.
### Alternatives
broadcast_coalesced is a generic method for cpu and cuda, so an easy way to do this is to register the current implementation as the default operation. broadcast_coalesced internally uses different backends depending on the device.
If users need to implement special broadcast_coalesced for cpu/cuda/custom_device, they can register their implementations to the specific device.
Here is a sample:
First, the broadcast_coalesced operator needs to be declared and registered for all devices, and then it will be called inside ProcessGroup:
``` C++
// torch/csrc/distributed/c10d/Ops.cpp
m.def(
"broadcast_coalesced_(Tensor[] tensors, __torch__.torch.classes.c10d.ProcessGroup process_group, int buffer_size, int rank) -> ()");
void broadcast_coalesced_comm_(
at::TensorList tensors,
const c10::intrusive_ptr<ProcessGroup>& process_group,
int64_t buffer_size,
int64_t rank) {
// use current implementation as a default logic
broadcast_coalesced(process_group, tensors, buffer_size, rank);
return;
}
TORCH_LIBRARY_IMPL(c10d, CompositeExplicitAutograd, m) {
m.impl("broadcast_coalesced_", broadcast_coalesced_comm_);
}
```
```C++
// torch/csrc/distributed/c10d/ProcessGroup.hpp
virtual void broadcast_coalesced(
std::vector<at::Tensor>& tensors,
const BroadcastCoalescedOptions& opts = BroadcastCoalescedOptions()) {
static auto op =
c10::Dispatcher::singleton()
.findSchemaOrThrow("c10d::broadcast_coalesced_", "")
.typed<void(
at::TensorList,
const
c10::intrusive_ptr<::c10d::ProcessGroup>&,
int64_t,
int64_t)>();
return op.call(
tensors,
c10::intrusive_ptr<ProcessGroup>::unsafe_reclaim_from_nonowning(this),
opts.bufferSize,
opts.rank);
}
```
Of course, we need to bind it to Python:
```C++
// torch/csrc/distributed/c10d/init.cpp
.def(
"broadcast_coalesced",
&::c10d::ProcessGroup::broadcast_coalesced,
py::arg("tensors"),
py::arg("opts") = ::c10d::BroadcastCoalescedOptions(),
py::call_guard<py::gil_scoped_release>())
```
```python
# torch/distributed/distributed_c10d.py
def _broadcast_coalesced(group, tensors, buffer_size, src=0):
# maybe we need to retain the original calling function for ProcessGroupNCCL which is not ProcessGroup
# rename original _broadcast_coalesced to _broadcast_coalesced_
if not issubclass(type(group), ProcessGroup):
dist._broadcast_coalesced_(group, tensors, buffer_size, src)
return
opts = BroadcastCoalescedOptions()
opts.bufferSize = buffer_size
opts.rank = src
return group.broadcast_coalesced(tensors, opts)
```
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,835 | 99,590 |
add Half support for layer_norm on CPU
|
module: cpu, open source, module: half, ciflow/trunk, topic: not user facing, ciflow/mps
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #99590
### Testing
Single socket (icx, 32cores):
| shape | fp32 forward (ms) | fp16 forward (ms) | mixed fp32 fp16 forward (ms) | fp32 backward (ms) | fp16 backward (ms) | mixed fp32 fp16 backward (ms) |
| -- | -- | -- | -- | -- | -- | -- |
| (1, 8, 16) | 0.012 | 0.011 | 0.011 | 0.051 | 0.051 | 0.050 |
| (8 ,8, 16) | 0.013 | 0.013 | 0.013 | 0.054 | 0.053 | 0.051 |
| (32, 8, 16) | 0.015 | 0.014 | 0.014 | 0.059 | 0.054 | 0.052 |
| (64, 128, 56, 56) | 1.875 | 0.790 | 1.016 | 12.845 | 7.151 | 6.985 |
| (64, 128, 256, 256) | 50.226 | 25.462 | 35.736 | 328.957 | 179.615 | 175.618 |
Single core (icx):
| shape | fp32 forward (ms) | fp16 forward (ms) | mixed fp32 fp16 forward (ms) | fp32 backward (ms) | fp16 backward (ms) | mixed fp32 fp16 backward (ms) |
| -- | -- | -- | -- | -- | -- | -- |
| (1, 8, 16) | 0.012 | 0.011 | 0.011 | 0.040 | 0.041 | 0.041 |
| (8 ,8, 16) | 0.012 | 0.012 | 0.012 | 0.042 | 0.042 | 0.042 |
| (32, 8, 16) | 0.027 | 0.014 | 0.014 | 0.048 | 0.048 | 0.046 |
| (64, 128, 56, 56) | 58.054 | 11.034 | 17.928 | 108.603 | 48.816 | 50.244 |
| (64, 128, 256, 256) | 1327.758 | 352.394 | 496.994 | 2846.182 | 1224.247 | 1218.422 |
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 10 |
2,836 | 99,584 |
Training Faster R-CNN model with COCO dataset has been consistently unsuccessful.
|
module: dataloader, triaged
|
### 📚 The doc issue
My computer has downloaded the COCO dataset, and now I want to use PyTorch to load the dataset and train a Faster R-CNN object detection model. However, there seems to be a problem with loading the data. Can you help me solve this issue? The following is my code.
```python
import torchvision
import torch
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import CocoDetection
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
import torch.optim as optim
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter
## dataloader
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
train_data_dir = '../data/coco/train2017'
train_ann_file = '../data/coco/annotations/instances_train2017.json'
val_data_dir = '../data/coco/val2017'
val_ann_file = '../data/coco/annotations/instances_val2017.json'
train_dataset = CocoDetection(root=train_data_dir, annFile=train_ann_file, transform=transform)
test_dataset = CocoDetection(root=val_data_dir, annFile=val_ann_file, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True)
val_loader = DataLoader(test_dataset, batch_size=4, shuffle=False)
## model
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
backbone.out_channels = 1280
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),), aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0], output_size=7, sampling_ratio=2)
model = FasterRCNN(backbone, num_classes=80, rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler)
## optimizeer and others
params = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
criterion = nn.SmoothL1Loss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
## train model
model.train()
num_epochs = 10
for epoch in range(num_epochs):
for i, (images, targets) in enumerate(train_loader):
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
optimizer.zero_grad()
losses.backward()
optimizer.step()
if i % 100 == 0:
print(f"Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(train_loader)}], Total Loss: {losses.item()}")
```
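One assumption worth flagging, since I am not sure it is the root cause: batching `CocoDetection` samples usually needs a custom `collate_fn`, because images and annotation lists vary in size and the default collation tries to stack them, roughly:
```python
# guess: pass variable-size samples through untouched instead of stacking them
def detection_collate(batch):
    return tuple(zip(*batch))

train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True,
                          collate_fn=detection_collate)
val_loader = DataLoader(test_dataset, batch_size=4, shuffle=False,
                        collate_fn=detection_collate)
```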
### Suggest a potential alternative/fix
_No response_
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 3 |
2,837 | 99,562 |
lintrunner mypy raises error in numpy
|
module: lint, triaged
|
On my machine, `lintrunner -a` produces the following output:
```
(/raid/rzou/pt/debug-cpu4-env) [0] rzou@devfair0317:/raid/rzou/pt/debug-cpu4 (debug-cpu4) $ lintrunner -a
FLAKE8 success!
CLANGFORMAT success!
MYPY failure
MYPYNOFOLLOW success!
MYPYSTRICT success!
CLANGTIDY success!
TYPEIGNORE success!
NOQA success!
CIRCLECI success!
NATIVEFUNCTIONS success!
NEWLINE success!
CONSTEXPR success!
TABS success!
INCLUDE success!
SPACES success!
ERROR_PRONE_ISINSTANCE success!
PYBIND11_INCLUDE success!
PYPIDEP success!
PYBIND11_SPECIALIZATION success!
EXEC success!
RAWCUDA success!
RAWCUDADEVICE success!
ROOT_LOGGING success!
CMAKE success!
SHELLCHECK success!
ACTIONLINT success!
TESTOWNERS success!
CALL_ONCE success!
ONCE_FLAG success!
WORKFLOWSYNC success!
CUBINCLUDE success!
UFMT success!
COPYRIGHT success!
BAZEL_LINTER success!
LINTRUNNER_VERSION success!
>>> Lint for ../debug-cpu4-env/lib/python3.11/site-packages/numpy/__init__.pyi:
Error (MYPY) [syntax]
Positional-only parameters are only supported in Python 3.8 and greater
633 | def flush(self) -> object: ...
634 | def fileno(self) -> int: ...
635 | def tell(self) -> SupportsIndex: ...
>>> 636 | def seek(self, offset: int, whence: int, /) -> object: ...
637 |
638 |# NOTE: `seek`, `write` and `flush` are technically only required
639 |# for `readwrite`/`write` modes
Successfully applied all patches.
```
The numpy error blocks any other errors from showing up.
| 4 |
2,838 | 99,561 |
Pytorch mobile crashes on Android when loading a custom model
|
module: android, oncall: mobile
|
### 🐛 Describe the bug
```
val assetFilePath = assetFilePath(context, "model.ptl")
val module = LiteModuleLoader.load(assetFilePath)
```
The app crashes with the following messages:
```
12:57:43.179 E type=1400 audit(1682009863.176:17659): avc: denied { search } for pid=21355 comm="linker64" name="tests" dev="dm-49" ino=308 scontext=u:r:untrusted_app:s0:c104,c257,c512,c768 tcontext=u:object_r:shell_test_data_file:s0 tclass=dir permissive=0 SEPF_SM-G990U1_12_0001 audit_filtered
12:57:43.180 E type=1300 audit(1682009863.176:17659): arch=c00000b7 syscall=48 success=no exit=-13 a0=ffffff9c a1=70222dc0e0 a2=4 a3=0 items=1 ppid=21018 pid=21355 auid=4294967295 uid=10360 gid=10360 euid=10360 suid=10360 fsuid=10360 egid=10360 sgid=10360 fsgid=10360 tty=(none) ses=4294967295 comm="linker64" exe="/apex/com.android.runtime/bin/linker64" subj=u:r:untrusted_app:s0:c104,c257,c512,c768 key=(null)
12:57:43.180 E type=1302 audit(1682009863.176:17659): item=0 name="/data/local/tests/product" nametype=UNKNOWN cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
12:57:43.180 E type=1327 audit(1682009863.176:17659): proctitle=2F73797374656D2F62696E2F6C696E6B65723634002F646174612F6170702F7E7E6541565347482D505253754C505A53677369786252413D3D2F6465762E69766B696E2E61636574726163652D577162756B39677A67715A7A725A644F334A44674D673D3D2F626173652E61706B212F6C69622F61726D36342D7638612F6C69
12:57:43.181 D Probability - Cloud gaming: [0.06354574]
12:57:43.182 D Probability - Real time: [0.09131927]
12:57:43.182 D Probability - Non real time: [0.9352345]
12:57:43.182 D 1 sample inference time: 0.540417 msecs
12:57:43.183 D L2 RT 1 sample inference time: 1.311406 msecs
12:57:43.184 D L2 NRT 1 sample inference time: 0.788802 msecs
12:57:43.269 E [21355:21355:20230420,125743.268602:ERROR elf_dynamic_array_reader.h:64] tag not found
12:57:43.287 E [21355:21355:20230420,125743.287130:ERROR file_io_posix.cc:144] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq: Permission denied (13)
12:57:43.308 A Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x30 in tid 21018 (.ivkin.acetrace), pid 21018 (.ivkin.acetrace)
12:57:43.343 D !@ 8,0 r 2432862 69069464 w 1721643 27354320 d 375839 112681504 f 0 0 iot 2306120 0 th 0 0 0 pt 0 inp 0 1 32244.986
12:57:43.343 D !@ Read_top(KB): .ivkin.acetrace(21018) 168104 earchbox:search(18948) 5624 linker64(21355) 5288
12:57:43.344 D !@ Write_top(KB): .ivkin.acetrace(21018) 476 kworker/u16:0(32005) 320 system_server(1651) 120
12:57:43.391 I obtaining output fd from tombstoned, type: kDebuggerdTombstoneProto
12:57:43.396 I received crash request for pid 21018
12:57:43.397 I performing dump of process 21018 (target tid = 21018)
12:57:43.490 D onWifiUsabilityStats - seqNum 4680, isSameBssidAndFreq true
12:57:43.490 I Link Qos Query: 0.038 ms / 210.563 Mbps (526 / 0.014 / 1.296)
12:57:43.518 I [RequestManager.cpp]releaseLocked(): Released ID : 15890720
12:57:43.685 D Probability - Cloud gaming: [0.10355392]
12:57:43.685 D Probability - Real time: [0.21334295]
12:57:43.686 D Probability - Non real time: [0.94348097]
12:57:43.686 D 1 sample inference time: 0.812448 msecs
12:57:43.687 D L2 RT 1 sample inference time: 1.311198 msecs
12:57:43.688 D L2 NRT 1 sample inference time: 1.058854 msecs
12:57:43.759 I 0(delay), 50(swap), 5(freelimit), 0(reentrymode) memory pressure events were skipped after a kill!
12:57:44.004 A *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
12:57:44.004 A Build fingerprint: 'samsung/r9quew/r9q:13/TP1A.220624.014/G990U1UES5EWC2:user/release-keys'
12:57:44.004 A Revision: '10'
12:57:44.004 A ABI: 'arm64'
12:57:44.004 A Processor: '6'
12:57:44.004 A Timestamp: 2023-04-20 12:57:43.403319106-0400
12:57:44.004 A Process uptime: 11s
12:57:44.004 A Cmdline: dev.ivkin.acetrace
12:57:44.004 A pid: 21018, tid: 21018, name: .ivkin.acetrace >>> dev.ivkin.acetrace <<<
12:57:44.004 A uid: 10360
12:57:44.004 A signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0000000000000030
12:57:44.004 A Cause: null pointer dereference
12:57:44.004 A x0 0000007fc15708e0 x1 0000007fc156fbf8 x2 0000000000000001 x3 0000007246b72b70
12:57:44.004 A x4 0000007fc15708e0 x5 000000717ddbfcc4 x6 fefeff6097ec9b0a x7 7f7f7f7fffffff7f
12:57:44.004 A x8 0000000000000030 x9 0000000000000003 x10 000000000000521a x11 0000000000000000
12:57:44.004 A x12 0000000000000000 x13 0000000000000001 x14 ffffffffffffffff x15 00000074594db812
12:57:44.004 A x16 00000070b5bb0f50 x17 0000007459501880 x18 ffffffffffffffe8 x19 0000007fc156fbf8
12:57:44.004 A x20 0000007fc15708e0 x21 000000745fa2a000 x22 0000000000000001 x23 0000007fc15708e0
12:57:44.004 A x24 0000000000000000 x25 0000007fc1570e10 x26 0000007266b9b6d8 x27 0000000000000066
12:57:44.004 A x28 0000000000000000 x29 0000007fc156fbd0
12:57:44.004 A lr 000000717ddc423c sp 0000007fc156fbd0 pc 0000000000000030 pst 0000000000001000
12:57:44.004 A backtrace:
12:57:44.004 A #00 pc 0000000000000030 <unknown>
12:57:44.004 A #01 pc 00000000000e8238 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libc++_shared.so (BuildId: 0da82722c95ec8827f133f0d1abfb38f0e6fc085)
12:57:44.004 A #02 pc 00000000000e7e6c /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libc++_shared.so (BuildId: 0da82722c95ec8827f133f0d1abfb38f0e6fc085)
12:57:44.004 A #03 pc 00000000000e3f50 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libc++_shared.so (BuildId: 0da82722c95ec8827f133f0d1abfb38f0e6fc085)
12:57:44.004 A #04 pc 00000000000e3da4 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libc++_shared.so (__gxx_personality_v0+224) (BuildId: 0da82722c95ec8827f133f0d1abfb38f0e6fc085)
12:57:44.004 A #05 pc 000000000276fbdc /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #06 pc 00000000027700e4 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #07 pc 00000000025d1eac /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > const&)+304) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #08 pc 00000000023bd944 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (torch::jit::mobile::Function::initialize_operators(bool)+1640) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #09 pc 00000000023de728 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (torch::jit::mobile::parseOperators(c10::ivalue::TupleElements&&, unsigned long const&, torch::jit::mobile::Function*)+620) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #10 pc 00000000023cbf18 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #11 pc 00000000023c7188 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #12 pc 00000000023c883c /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (torch::jit::_load_for_mobile(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > const&, c10::optional<c10::Device>, std::__ndk1::unordered_map<std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> >, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> >, std::__ndk1::hash<std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > >, std::__ndk1::equal_to<std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > >, std::__ndk1::allocator<std::__ndk1::pair<std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > const, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > > > >&, unsigned long)+324) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #13 pc 0000000000477688 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (pytorch_jni::PytorchJni::PytorchJni(facebook::jni::alias_ref<_jstring*>, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >, int)+512) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #14 pc 00000000004772b4 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (facebook::jni::basic_strong_ref<facebook::jni::detail::HybridData, facebook::jni::LocalReferenceAllocator> facebook::jni::HybridClass<pytorch_jni::PytorchJni, facebook::jni::detail::BaseHybridClass>::makeCxxInstance<facebook::jni::alias_ref<_jstring*>&, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >&, int&>(facebook::jni::alias_ref<_jstring*>&, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >&, int&)+96) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #15 pc 000000000047714c /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (pytorch_jni::PytorchJni::initHybrid(facebook::jni::alias_ref<_jclass*>, facebook::jni::alias_ref<_jstring*>, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >, int)+56) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #16 pc 00000000004771dc /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk!libpytorch_jni_lite.so (facebook::jni::detail::FunctionWrapper<facebook::jni::basic_strong_ref<facebook::jni::detail::JTypeFor<facebook::jni::detail::HybridData, facebook::jni::JObject, void>::_javaobject*, facebook::jni::LocalReferenceAllocator> (*)(facebook::jni::alias_ref<_jclass*>, facebook::jni::alias_ref<_jstring*>, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >, int), _jclass*, facebook::jni::basic_strong_ref<facebook::jni::detail::JTypeFor<facebook::jni::detail::HybridData, facebook::jni::JObject, void>::_javaobject*, facebook::jni::LocalReferenceAllocator>, facebook::jni::alias_ref<_jstring*>, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >, int>::call(_JNIEnv*, _jobject*, _jstring*, facebook::jni::detail::JTypeFor<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString>, facebook::jni::JObject, void>::_javaobject*, int, facebook::jni::basic_strong_ref<facebook::jni::detail::JTypeFor<facebook::jni::detail::HybridData, facebook::jni::JObject, void>::_javaobject*, facebook::jni::LocalReferenceAllocator> (*)(facebook::jni::alias_ref<_jclass*>, facebook::jni::alias_ref<_jstring*>, facebook::jni::alias_ref<facebook::jni::JMap<facebook::jni::JString, facebook::jni::JString> >, int))+96) (BuildId: c4c23dc7eb04cacc035cb308a69aacf71e383ab5)
12:57:44.004 A #17 pc 000000000043e154 /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+148) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.004 A #18 pc 0000000000209398 /apex/com.android.art/lib64/libart.so (nterp_helper+152) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.004 A #19 pc 00000000003d4d56 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (org.pytorch.LiteNativePeer.<init>+10)
12:57:44.005 A #20 pc 000000000043476c /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+556) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #21 pc 0000000000571a44 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+988) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #22 pc 0000000000212974 /apex/com.android.art/lib64/libart.so (void art::interpreter::ExecuteSwitchImplCpp<false, false>(art::interpreter::SwitchImplContext*)+5284) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #23 pc 00000000004409d8 /apex/com.android.art/lib64/libart.so (ExecuteSwitchImplAsm+8) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #24 pc 00000000003d4c24 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (org.pytorch.LiteModuleLoader.load+0)
12:57:44.005 A #25 pc 0000000000471bd4 /apex/com.android.art/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool) (.__uniq.112435418011751916792819755956732575238.llvm.18358736361643412929)+396) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #26 pc 0000000000471238 /apex/com.android.art/lib64/libart.so (artQuickToInterpreterBridge+1104) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #27 pc 000000000043e288 /apex/com.android.art/lib64/libart.so (art_quick_to_interpreter_bridge+88) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #28 pc 0000000000209398 /apex/com.android.art/lib64/libart.so (nterp_helper+152) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #29 pc 0000000000007d8a /data/data/dev.ivkin.acetrace/code_cache/.overlay/base.apk/classes19.dex (dev.ivkin.acetrace.tracking.PytorchTest.test+178)
12:57:44.005 A #30 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #31 pc 0000000000003956 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (dev.ivkin.acetrace.app.presentation.logic.start.StartScreenKt$StartLayout$3$2.invoke+14)
12:57:44.005 A #32 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #33 pc 000000000000390c /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (dev.ivkin.acetrace.app.presentation.logic.start.StartScreenKt$StartLayout$3$2.invoke+0)
12:57:44.005 A #34 pc 000000000020b074 /apex/com.android.art/lib64/libart.so (nterp_helper+7540) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #35 pc 000000000026b528 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.foundation.ClickableKt$clickable$4$gesture$1$2.invoke-k-4lQ0M+24)
12:57:44.005 A #36 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #37 pc 000000000026b4ce /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.foundation.ClickableKt$clickable$4$gesture$1$2.invoke+14)
12:57:44.005 A #38 pc 000000000020b074 /apex/com.android.art/lib64/libart.so (nterp_helper+7540) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #39 pc 00000000002839b8 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.foundation.gestures.TapGestureDetectorKt$detectTapAndPress$2$1$1.invokeSuspend+300)
12:57:44.005 A #40 pc 0000000002023278 /memfd:jit-cache (deleted) (kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith+312)
12:57:44.005 A #41 pc 00000000020b4558 /memfd:jit-cache (deleted) (kotlinx.coroutines.DispatchedTaskKt.resume+1032)
12:57:44.005 A #42 pc 0000000002070cc8 /memfd:jit-cache (deleted) (kotlinx.coroutines.DispatchedTaskKt.dispatch+728)
12:57:44.005 A #43 pc 00000000020bb918 /memfd:jit-cache (deleted) (kotlinx.coroutines.CancellableContinuationImpl.dispatchResume+200)
12:57:44.005 A #44 pc 00000000020836f8 /memfd:jit-cache (deleted) (kotlinx.coroutines.CancellableContinuationImpl.resumeImpl+408)
12:57:44.005 A #45 pc 000000000020a2b0 /apex/com.android.art/lib64/libart.so (nterp_helper+4016) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #46 pc 0000000000321a6a /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$default+14)
12:57:44.005 A #47 pc 00000000020bc59c /memfd:jit-cache (deleted) (kotlinx.coroutines.CancellableContinuationImpl.resumeWith+204)
12:57:44.005 A #48 pc 000000000020b120 /apex/com.android.art/lib64/libart.so (nterp_helper+7712) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #49 pc 0000000000f1cf72 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.SuspendingPointerInputFilter$PointerEventHandlerCoroutine.offerPointerEvent+66)
12:57:44.005 A #50 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #51 pc 0000000000f1e218 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.SuspendingPointerInputFilter.dispatchPointerEvent+112)
12:57:44.005 A #52 pc 000000000020a958 /apex/com.android.art/lib64/libart.so (nterp_helper+5720) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #53 pc 0000000000f1e5c8 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.SuspendingPointerInputFilter.onPointerEvent-H0pRuoY+56)
12:57:44.005 A #54 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #55 pc 0000000000f16e04 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.Node.dispatchMainEventPass+324)
12:57:44.005 A #56 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #57 pc 0000000000f16db6 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.Node.dispatchMainEventPass+246)
12:57:44.005 A #58 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #59 pc 0000000000f164cc /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.NodeParent.dispatchMainEventPass+88)
12:57:44.005 A #60 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #61 pc 0000000000f158d0 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.HitPathTracker.dispatchChanges+68)
12:57:44.005 A #62 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #63 pc 0000000000f1a402 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.input.pointer.PointerInputEventProcessor.process-BIzXfog+446)
12:57:44.005 A #64 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #65 pc 0000000000f42a94 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.platform.AndroidComposeView.sendMotionEvent-8iAsVTc+140)
12:57:44.005 A #66 pc 000000000020a958 /apex/com.android.art/lib64/libart.so (nterp_helper+5720) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #67 pc 0000000000f4298c /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.platform.AndroidComposeView.handleMotionEvent-8iAsVTc+364)
12:57:44.005 A #68 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #69 pc 0000000000f423b6 /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.compose.ui.platform.AndroidComposeView.dispatchTouchEvent+138)
12:57:44.005 A #70 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #71 pc 00000000001fdada /system/framework/framework.jar (android.view.ViewGroup.dispatchTransformedTouchEvent+118)
12:57:44.005 A #72 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #73 pc 00000000001fd8a6 /system/framework/framework.jar (android.view.ViewGroup.dispatchTouchEvent+2014)
12:57:44.005 A #74 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #75 pc 00000000001fdada /system/framework/framework.jar (android.view.ViewGroup.dispatchTransformedTouchEvent+118)
12:57:44.005 A #76 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #77 pc 00000000001fd8a6 /system/framework/framework.jar (android.view.ViewGroup.dispatchTouchEvent+2014)
12:57:44.005 A #78 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #79 pc 00000000001fdada /system/framework/framework.jar (android.view.ViewGroup.dispatchTransformedTouchEvent+118)
12:57:44.005 A #80 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #81 pc 00000000001fd8a6 /system/framework/framework.jar (android.view.ViewGroup.dispatchTouchEvent+2014)
12:57:44.005 A #82 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #83 pc 00000000001fdada /system/framework/framework.jar (android.view.ViewGroup.dispatchTransformedTouchEvent+118)
12:57:44.005 A #84 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #85 pc 00000000001fd8a6 /system/framework/framework.jar (android.view.ViewGroup.dispatchTouchEvent+2014)
12:57:44.005 A #86 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.005 A #87 pc 00000000001fdada /system/framework/framework.jar (android.view.ViewGroup.dispatchTransformedTouchEvent+118)
12:57:44.005 A #88 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #89 pc 00000000001fd8a6 /system/framework/framework.jar (android.view.ViewGroup.dispatchTouchEvent+2014)
12:57:44.006 A #90 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #91 pc 00000000001fdada /system/framework/framework.jar (android.view.ViewGroup.dispatchTransformedTouchEvent+118)
12:57:44.006 A #92 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #93 pc 00000000001fd8a6 /system/framework/framework.jar (android.view.ViewGroup.dispatchTouchEvent+2014)
12:57:44.006 A #94 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #95 pc 000000000050f0e0 /system/framework/framework.jar (com.android.internal.policy.DecorView.superDispatchTouchEvent+0)
12:57:44.006 A #96 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #97 pc 000000000051d458 /system/framework/framework.jar (com.android.internal.policy.PhoneWindow.superDispatchTouchEvent+4)
12:57:44.006 A #98 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #99 pc 00000000001cbc00 /system/framework/framework.jar (android.app.Activity.dispatchTouchEvent+44)
12:57:44.006 A #100 pc 000000000020b74c /apex/com.android.art/lib64/libart.so (nterp_helper+9292) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #101 pc 00000000002090ec /data/app/~~eAVSGH-PRSuLPZSgsixbRA==/dev.ivkin.acetrace-Wqbuk9gzgqZzrZdO3JDgMg==/base.apk (androidx.appcompat.view.WindowCallbackWrapper.dispatchTouchEvent+4)
12:57:44.006 A #102 pc 000000000020b74c /apex/com.android.art/lib64/libart.so (nterp_helper+9292) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #103 pc 000000000050dbdc /system/framework/framework.jar (com.android.internal.policy.DecorView.dispatchTouchEvent+224)
12:57:44.006 A #104 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #105 pc 0000000000223ad4 /system/framework/framework.jar (android.view.View.dispatchPointerEvent+12)
12:57:44.006 A #106 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #107 pc 000000000020be6c /system/framework/framework.jar (android.view.ViewRootImpl$ViewPostImeInputStage.processPointerEvent+192)
12:57:44.006 A #108 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #109 pc 000000000020baf2 /system/framework/framework.jar (android.view.ViewRootImpl$ViewPostImeInputStage.onProcess+74)
12:57:44.006 A #110 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #111 pc 0000000000208734 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.deliver+52)
12:57:44.006 A #112 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #113 pc 0000000000208850 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.onDeliverToNext+112)
12:57:44.006 A #114 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #115 pc 00000000002087c8 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.forward+0)
12:57:44.006 A #116 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #117 pc 0000000000207bbc /system/framework/framework.jar (android.view.ViewRootImpl$AsyncInputStage.forward+20)
12:57:44.006 A #118 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #119 pc 0000000000208698 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.apply+4)
12:57:44.006 A #120 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #121 pc 0000000000207a86 /system/framework/framework.jar (android.view.ViewRootImpl$AsyncInputStage.apply+14)
12:57:44.006 A #122 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #123 pc 0000000000208744 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.deliver+68)
12:57:44.006 A #124 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #125 pc 0000000000208850 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.onDeliverToNext+112)
12:57:44.006 A #126 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #127 pc 00000000002087c8 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.forward+0)
12:57:44.006 A #128 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #129 pc 0000000000208698 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.apply+4)
12:57:44.006 A #130 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #131 pc 0000000000208744 /system/framework/framework.jar (android.view.ViewRootImpl$InputStage.deliver+68)
12:57:44.006 A #132 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #133 pc 0000000000213cc8 /system/framework/framework.jar (android.view.ViewRootImpl.deliverInputEvent+668)
12:57:44.006 A #134 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #135 pc 0000000000214e1a /system/framework/framework.jar (android.view.ViewRootImpl.doProcessInputEvents+206)
12:57:44.006 A #136 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #137 pc 00000000002158de /system/framework/framework.jar (android.view.ViewRootImpl.enqueueInputEvent+246)
12:57:44.006 A #138 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #139 pc 000000000020ce3a /system/framework/framework.jar (android.view.ViewRootImpl$WindowInputEventReceiver.onInputEvent+238)
12:57:44.006 A #140 pc 000000000020a254 /apex/com.android.art/lib64/libart.so (nterp_helper+3924) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #141 pc 00000000001c5f8a /system/framework/framework.jar (android.view.InputEventReceiver.dispatchInputEvent+18)
12:57:44.006 A #142 pc 000000000043476c /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+556) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #143 pc 00000000004c87e8 /apex/com.android.art/lib64/libart.so (art::JValue art::InvokeVirtualOrInterfaceWithVarArgs<art::ArtMethod*>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, art::ArtMethod*, std::__va_list)+828) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #144 pc 00000000005edcb8 /apex/com.android.art/lib64/libart.so (art::JNI<true>::CallVoidMethodV(_JNIEnv*, _jobject*, _jmethodID*, std::__va_list)+140) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #145 pc 0000000000472ab0 /apex/com.android.art/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallMethodV(char const*, _JNIEnv*, _jobject*, _jclass*, _jmethodID*, std::__va_list, art::Primitive::Type, art::InvokeType) (.__uniq.99033978352804627313491551960229047428)+624) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #146 pc 00000000005c63f0 /apex/com.android.art/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallVoidMethodV(_JNIEnv*, _jobject*, _jmethodID*, std::__va_list) (.__uniq.99033978352804627313491551960229047428.llvm.13572457048004851375)+72) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #147 pc 00000000000c0620 /system/lib64/libandroid_runtime.so (_JNIEnv::CallVoidMethod(_jobject*, _jmethodID*, ...)+124) (BuildId: 0ca3873507c04a77dfdb2555ddd0dfdc)
12:57:44.006 A #148 pc 000000000013190c /system/lib64/libandroid_runtime.so (android::NativeInputEventReceiver::consumeEvents(_JNIEnv*, bool, long, bool*)+380) (BuildId: 0ca3873507c04a77dfdb2555ddd0dfdc)
12:57:44.006 A #149 pc 00000000001316c0 /system/lib64/libandroid_runtime.so (android::NativeInputEventReceiver::handleEvent(int, int, void*)+180) (BuildId: 0ca3873507c04a77dfdb2555ddd0dfdc)
12:57:44.006 A #150 pc 0000000000018028 /system/lib64/libutils.so (android::Looper::pollInner(int)+1064) (BuildId: 97f353c1a350efeb766e1e852854da85)
12:57:44.006 A #151 pc 0000000000017b9c /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+116) (BuildId: 97f353c1a350efeb766e1e852854da85)
12:57:44.006 A #152 pc 00000000001689ec /system/lib64/libandroid_runtime.so (android::android_os_MessageQueue_nativePollOnce(_JNIEnv*, _jobject*, long, int)+48) (BuildId: 0ca3873507c04a77dfdb2555ddd0dfdc)
12:57:44.006 A #153 pc 00000000002ec504 /data/misc/apexdata/com.android.art/dalvik-cache/arm64/boot.oat (art_jni_trampoline+116)
12:57:44.006 A #154 pc 00000000020772ec /memfd:jit-cache (deleted) (android.os.MessageQueue.next+236)
12:57:44.006 A #155 pc 0000000002069f24 /memfd:jit-cache (deleted) (android.os.Looper.loopOnce+180)
12:57:44.006 A #156 pc 0000000000209a9c /apex/com.android.art/lib64/libart.so (nterp_helper+1948) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #157 pc 00000000001feed0 /system/framework/framework.jar (android.os.Looper.loop+164)
12:57:44.006 A #158 pc 0000000000209334 /apex/com.android.art/lib64/libart.so (nterp_helper+52) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.006 A #159 pc 00000000001c7432 /system/framework/framework.jar (android.app.ActivityThread.main+214)
12:57:44.007 A #160 pc 0000000000434a00 /apex/com.android.art/lib64/libart.so (art_quick_invoke_static_stub+576) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #161 pc 0000000000467134 /apex/com.android.art/lib64/libart.so (_jobject* art::InvokeMethod<(art::PointerSize)8>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jobject*, _jobject*, unsigned long)+1960) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #162 pc 0000000000466964 /apex/com.android.art/lib64/libart.so (art::Method_invoke(_JNIEnv*, _jobject*, _jobject*, _jobjectArray*) (.__uniq.165753521025965369065708152063621506277)+48) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #163 pc 00000000002f2148 /data/misc/apexdata/com.android.art/dalvik-cache/arm64/boot.oat (art_jni_trampoline+120)
12:57:44.007 A #164 pc 000000000020a2b0 /apex/com.android.art/lib64/libart.so (nterp_helper+4016) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #165 pc 00000000004fcb62 /system/framework/framework.jar (com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run+22)
12:57:44.007 A #166 pc 0000000000a436b4 /data/misc/apexdata/com.android.art/dalvik-cache/arm64/boot.oat (com.android.internal.os.ZygoteInit.main+3604)
12:57:44.007 A #167 pc 0000000000434a00 /apex/com.android.art/lib64/libart.so (art_quick_invoke_static_stub+576) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #168 pc 000000000057df48 /apex/com.android.art/lib64/libart.so (art::JValue art::InvokeWithVarArgs<_jmethodID*>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jmethodID*, std::__va_list)+900) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #169 pc 00000000005f194c /apex/com.android.art/lib64/libart.so (art::JNI<true>::CallStaticVoidMethodV(_JNIEnv*, _jclass*, _jmethodID*, std::__va_list)+160) (BuildId: 28c5aa8a2e8fc5df069f717d6e94f7fe)
12:57:44.007 A #170 pc 00000000000c1c04 /system/lib64/libandroid_runtime.so (_JNIEnv::CallStaticVoidMethod(_jclass*, _jmethodID*, ...)+124) (BuildId: 0ca3873507c04a77dfdb2555ddd0dfdc)
12:57:44.007 A #171 pc 00000000000ce470 /system/lib64/libandroid_runtime.so (android::AndroidRuntime::start(char const*, android::Vector<android::String8> const&, bool)+856) (BuildId: 0ca3873507c04a77dfdb2555ddd0dfdc)
12:57:44.007 A #172 pc 0000000000002570 /system/bin/app_process64 (main+1304) (BuildId: df8ee709f77c2e3b9fca33b5a3ced970)
12:57:44.007 A #173 pc 000000000004a7d4 /apex/com.android.runtime/lib64/bionic/libc.so (__libc_init+100) (BuildId: ef11d8d2511bfd3cab1588a6cb2014bb)
12:57:44.030 E Tombstone written to: tombstone_15
```
The exact same model file loads fine on iOS.
Here is the model file: https://drive.google.com/file/d/1kpBXy8YB_YReG0FarxdTwSvWubLucPRt/view?usp=share_link
### Versions
```
implementation 'org.pytorch:pytorch_android_lite:1.13.1'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.13.1'
```
| 0 |
2,839 | 99,556 |
torch.func.jacrev fails if model contains full_backward_hook
|
module: autograd, triaged, module: functorch
|
### 🐛 Describe the bug
Hi All,
**TL;DR** :bug: If your model has a `full_backward_hook` registered, computing derivatives with `torch.func` throws a missing `setup_context` error. The backward-hook machinery is implemented via a `torch.autograd.Function`, which in PyTorch 2 must provide a `setup_context` method that it doesn't define by default.
Here's a minimal reproducible example,
```
import torch
from torch import nn, Tensor
from torch.func import vmap, jacrev, functional_call
from typing import Tuple
from collections import defaultdict
class model(nn.Module):
def __init__(self, num_input, num_hidden):
super(model, self).__init__()
self.fc1 = nn.Linear(num_input, num_hidden)
self.fc2 = nn.Linear(num_hidden, 1)
self.act_func = nn.Tanh()
def forward(self, x):
x = self.fc1(x)
x = self.act_func(x)
x = self.fc2(x)
return x
num_samples = 4096
num_input=2
num_hidden=64
device=torch.device("cpu")
state = defaultdict(dict) #equivalent to optim.state
def forward_pre_hook(module: nn.Module, input: Tuple[Tensor]) -> None:
a=input[0]
state[module]['a'] = a
def full_backward_hook(module: nn.Module, grad_input: Tuple[Tensor], grad_output: Tuple[Tensor]) -> None:
e = grad_output[0]
state[module]['e'] = e
#The input
x = torch.randn(num_samples, num_input, device=device)
#Our model
net = model(num_input=num_input,
num_hidden=num_hidden)
#Add the hooks
for mod in net.modules():
if(mod.__class__.__name__ == 'Linear'): #avoids registering hook to nn.Tanh() etc.
mod.register_forward_pre_hook(forward_pre_hook)
mod.register_full_backward_hook(full_backward_hook)
y = net(x) #compute output (works fine)
#Compute trace of Hessian
def calc_hessian_trace(params, x):
def output(params, x):
return functional_call(net, params, x)
_hessian = jacrev(jacrev(output, argnums=(1)), argnums=(1))(params, x)
return _hessian.diagonal(0,-2,-1).sum(-1)
laplacian = vmap(calc_hessian_trace, in_dims=(None, 0))(dict(net.named_parameters()), x) #fails
```
The resultant error (with complete stack trace) is,
```
Traceback (most recent call last):
File "~/Downloads/hooks_issue.py", line 64, in <module>
laplacian = vmap(calc_hessian_trace, in_dims=(None, 0))(dict(net.named_parameters()), x) #fails
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 434, in wrapped
return _flat_vmap(
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 619, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "~/Downloads/hooks_issue.py", line 61, in calc_hessian_trace
_hessian = jacrev(jacrev(output, argnums=(1)), argnums=(1))(params, x)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 489, in wrapper_fn
vjp_out = _vjp_with_argnums(func, *args, argnums=argnums, has_aux=has_aux)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 291, in _vjp_with_argnums
primals_out = func(*primals)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 489, in wrapper_fn
vjp_out = _vjp_with_argnums(func, *args, argnums=argnums, has_aux=has_aux)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 291, in _vjp_with_argnums
primals_out = func(*primals)
File "~/Downloads/hooks_issue.py", line 59, in output
return functional_call(net, params, x)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/_functorch/functional_call.py", line 143, in functional_call
return nn.utils.stateless._functional_call(
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/nn/utils/stateless.py", line 262, in _functional_call
return module(*args, **kwargs)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "~/Downloads/hooks_issue.py", line 19, in forward
x = self.fc1(x)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1536, in _call_impl
args = bw_hook.setup_input_hook(args)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/utils/hooks.py", line 191, in setup_input_hook
res, input_idx = self._apply_on_tensors(fn, args)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/utils/hooks.py", line 170, in _apply_on_tensors
new_tensors = torch.nn.modules._functions.BackwardHookFunction.apply(*tensors)
File "~/anaconda3/envs/pytorch2/lib/python3.10/site-packages/torch/autograd/function.py", line 509, in apply
raise RuntimeError(
RuntimeError: In order to use an autograd.Function with functorch transforms (vmap, grad, jvp, jacrev, ...), it must override the setup_context staticmethod. For more details, please see https://pytorch.org/docs/master/notes/extending.func.html
```
I _think_ this emerges because the full backward hook is implemented via a `torch.autograd.Function`, so when you call any `torch.func` transform it requires a `setup_context` in order to handle the outputs in PyTorch 2.0. In previous versions of PyTorch, `full_backward_hook` methods were skipped entirely, if I recall correctly.
Is there a way to add a `setup_context` to a `full_backward_hook`, or perhaps define a custom `full_backward_hook` with a `setup_context` method itself?
After following through the stack trace, I believe I have found where the error is emerging. In https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/_functions.py, `BackwardHookFunction` is defined as a `torch.autograd.Function` object, but it's in the style of pytorch1.x.
```
class BackwardHookFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, *args):
ctx.mark_non_differentiable(*[arg for arg in args if not arg.requires_grad])
return args
@staticmethod
def backward(ctx, *args):
return args
```
So, I'd assume that changing this function to something like,
```
class BackwardHookFunction(torch.autograd.Function):
@staticmethod
def forward(*args):
return args
@staticmethod
def setup_context(ctx, *args):
ctx.mark_non_differentiable(*[arg for arg in args if not arg.requires_grad])
@staticmethod
def backward(ctx, *args):
return args
```
Might resolve this issue? However, I have no idea if this would mess with other parts of pytorch.
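In the meantime, a possible user-side workaround (a sketch reusing the definitions from my repro above; it obviously means the hook state is not collected during that call) is to keep the handles returned by `register_*` and remove the hooks before calling the transforms:
```
# Workaround sketch: drop the hooks before running the functorch transforms,
# so no BackwardHookFunction is ever applied.
handles = []
for mod in net.modules():
    if isinstance(mod, nn.Linear):
        handles.append(mod.register_forward_pre_hook(forward_pre_hook))
        handles.append(mod.register_full_backward_hook(full_backward_hook))

for h in handles:
    h.remove()

laplacian = vmap(calc_hessian_trace, in_dims=(None, 0))(dict(net.named_parameters()), x)
```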
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.2
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
Stepping: 5
CPU MHz: 3800.000
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 7599.80
Virtualisation: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] functorch==1.13.0a0+8e2f53b
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.0.0 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_3 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.0 py310_cu118 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.0 py310_cu118 pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @Chillee @samdow @kshitij12345 @janeyx99 @soumith
| 0 |
2,840 | 99,558 |
Batching rule not implemented for aten::narrow.Tensor
|
triaged, module: functorch
|
Hi,
I was trying out and benchmarking different single-device Mixture of Experts layer implementations, and I naively assumed vmap could be of help in this task. In some cases a simple vmap-based implementation was faster than my other implementations. However, I quickly came to the conclusion that this implementation is impractical due to its memory requirements. I am not sure that this is the only thing making my implementation memory inefficient, but I presume it is due to the copies. It cannot avoid copies because `.select()` uses Python ints, which are naturally unsupported by vmap, and indexing with a one-dimensional LongTensor or using `.index_select()` creates copies. Some [have proposed](https://github.com/pytorch/functorch/issues/337#issuecomment-994965422) to allow passing tensors to `.select()`, but I think no work is being done towards making that happen - correct me if I am wrong.
I tried to work around this by using `torch.narrow()`, but it seems to be unsupported by vmap:
```
RuntimeError: Batching rule not implemented for aten::narrow.Tensor; the fallback path doesn't work on out= or view ops.
```
Note that dynamic models may become more popular with time, and MoE layers are a very hot research topic recently. Judging by the other linked issue I speculate that JAX already supports this case, so functorch is behind in this aspect. I therefore kindly request `torch.narrow()` support in vmap.
I paste the code for my implementation that reproduces this below. I tested this on pytorch 2.0.0:
```python
import math
import functorch
import torch
import torch.nn as nn
class NarrowVmapExperts(nn.Module):
def __init__(self,
dim,
num_experts,
hidden_dim=None,
activation=nn.GELU):
super().__init__()
self.hidden_dim = dim * 4 if hidden_dim is None else hidden_dim
self.num_experts = num_experts
# assumes homogeneous experts without biases
w1 = torch.zeros(num_experts, dim, hidden_dim)
w2 = torch.zeros(num_experts, hidden_dim, dim)
self.init_(w1)
self.init_(w2)
self.w1 = nn.Parameter(w1)
self.w2 = nn.Parameter(w2)
self.act = activation()
self.batched_expert_forward = functorch.vmap(self.single_expert_forward)
self.batched_forward = functorch.vmap(self.one_sample_forward)
@staticmethod
def init_(t):
dim = t.size(-1)
std = 1 / math.sqrt(dim)
t.uniform_(-std, std)
def single_expert_forward(self, x, expert_index):
# x is of size (dim)
# expert_index is of size () - a scalar
x = self.act(x @ self.w1.narrow(0, expert_index, 1).squeeze())
x = x @ self.w2.narrow(0, expert_index, 1).squeeze()
return x
def one_sample_forward(self, x, expert_indices, expert_scores):
# x is of size (dim)
# expert indices is of size (k)
# expert scores is of size (k)
x = x.expand(expert_indices.size(0), -1)
x = self.batched_expert_forward(x, expert_indices)
x = torch.sum(x * expert_scores.view(-1, 1), dim=0)
return x
def forward(self, x, expert_indices, expert_scores):
return self.batched_forward(x, expert_indices, expert_scores)
class TopKMoE(nn.Module):
def __init__(self, num_experts, in_dim, out_dim, hidden_dim, k=2, noisy_gating=True, experts_class=NarrowVmapExperts):
assert k <= num_experts
super().__init__()
self.num_experts = num_experts
self.in_dim = in_dim
self.out_dim = out_dim
self.hidden_dim = hidden_dim
self.k = k
self.noisy_gating = noisy_gating
# instantiate experts
self.experts = experts_class(dim=in_dim, num_experts=num_experts, hidden_dim=self.hidden_dim)
self.w_g = nn.Parameter(torch.zeros(in_dim, num_experts), requires_grad=True)
self.w_noise = nn.Parameter(torch.zeros(in_dim, num_experts), requires_grad=True)
def gate(self, x):
x_w_g = x @ self.w_g
if self.training and self.noisy_gating:
x_w_noise = x @ self.w_noise
h_x = x_w_g + torch.randn_like(x_w_noise) * nn.functional.softplus(x_w_noise)
top_logits, top_indices = h_x.topk(self.k, dim=-1, sorted=False)
g_x = torch.softmax(top_logits, dim=-1)
else:
top_logits, top_indices = x_w_g.topk(self.k, dim=-1, sorted=False)
g_x = torch.softmax(top_logits, dim=-1)
return g_x, top_indices
def forward(self, x, return_g=False):
# x is of size (batch_size, sequence_length, dim)
orig_size = x.size()
x = x.view(-1, x.size(-1))
g_x, g_indices = self.gate(x)
out = self.experts(x, g_indices, g_x)
out = out.view(orig_size)
if return_g:
return out, g_x.view(*orig_size[:-1], -1)
else:
return out
batch_size = 128
sequence_len = 64
num_experts = 64
dim = 256
hidden_dim = 384
layers = 9
k = 4
# instantiate
moe_layer = TopKMoE(num_experts, dim, dim, hidden_dim, k)
experts = moe_layer.experts
# check output size for a batch of inputs
x = torch.randn(batch_size, sequence_len, dim)
moe_out, g_x = moe_layer(x, return_g=True)
print(f'moe_out.size(): {moe_out.size()}')
print(f'g_x.size(): {g_x.size()}')
```
I also welcome suggestions on how I might implement MoE layers in an alternative, efficient manner - forgive me if, because of my limited knowledge, I missed some other obvious way to do this efficiently.
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 5 |
2,841 | 99,544 |
Cross compile Pytorch for ARM in Bazel
|
module: build, triaged, enhancement, module: arm
|
### 🚀 The feature, motivation and pitch
My company uses Bazel as its build system, and we have both x86 and ARM platforms in our environments. We have a toolchain for cross-compiling from x86 to ARM, so we would like PyTorch to cross-compile to ARM under Bazel.
We have a working prototype that we would like to upstream.
We are filing this feature request to gauge community interest in bazel cross compile support for ARM in pytorch!
Doc outlining a bit more: https://docs.google.com/document/d/1U4cUcfC_IYePL3LRfaqih21gB8I9L2Te_VUvaX9Yia4/edit#heading=h.y26v9c48rho1
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| 3 |
2,842 | 99,573 |
jacfwd becomes slower after updating PyTorch ("We’ve integrated functorch into PyTorch---Documentation")
|
high priority, needs reproduction, triaged, module: functorch
|
Hello, I used to use
```
from functorch import jacfwd
```
and `jacfwd` took about 3.5 s. After the update, I am using
```
from torch.func import jacfwd
```
`jacfwd` now takes almost 6 s for the same computation. Has anyone else noticed this?
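For reference, a rough timing sketch along the lines of what I ran (the network and sizes here are placeholders, not my real workload):
```
# Rough benchmark sketch; the network and sizes are placeholders.
import time
import torch
from torch.func import jacfwd  # previously: from functorch import jacfwd

net = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.Tanh(), torch.nn.Linear(512, 512)
)
x = torch.randn(512)

f = jacfwd(net)
f(x)  # warm-up
start = time.perf_counter()
for _ in range(10):
    f(x)
print("avg seconds per call:", (time.perf_counter() - start) / 10)
```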
cc @ezyang @gchanan @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
2,843 | 99,524 |
Inserting observer bug in FX quantization
|
oncall: quantization, triaged
|
### 🐛 Describe the bug
Following the tutorial on [FX quantization](https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_static.html), when running `prepare_fx` on a floating-point model, PyTorch basically just inserts an observer between two modules. For example, a `Conv2d -> Conv2d` sequence becomes `Conv2d -> Observer -> Conv2d`, and these `Conv2d` modules are then converted to `QuantizedConv2d` modules after running `convert_fx`.
But for some customized model structures this insertion fails. A simple way to reproduce:
```
class M(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3,16,3,1,1)
self.bn1 = nn.BatchNorm2d(16)
self.check = nn.BatchNorm2d(16)
def forward(self, x):
x = self.check(self.bn1(self.conv1(x)))
return x
```
and the graph after running `prepare_fx`:
```
GraphModule(
(activation_post_process_0): HistogramObserver(min_val=inf, max_val=-inf)
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(check): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation_post_process_1): HistogramObserver(min_val=inf, max_val=-inf)
)
```
and the graph after running `convert_fx`:
```
GraphModule(
(conv1): QuantizedConv2d(Reference)(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(check): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
```
As you can see, `conv1` is still a `Reference` module and `check` is still the floating-point version.
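For completeness, the graphs above were produced with the standard FX workflow from the tutorial; a sketch of the calls, reusing `M` from the snippet above (the `"fbgemm"` qconfig string is an assumption about my setup):
```
# Sketch of the prepare/convert calls used to produce the graphs above.
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

model = M().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)
qconfig_mapping = get_default_qconfig_mapping("fbgemm")

prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)  # push calibration data through the observers
quantized = convert_fx(prepared)
print(quantized)
```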
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3574.472
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6200.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 72 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.0
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.9.2
[pip3] torch==1.13.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.1
[pip3] torchsde==0.2.5
[pip3] torchvision==0.14.1
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 6 |
2,844 | 99,515 |
Support polyphase channelizer
|
feature, triaged, module: fft
|
### 🚀 The feature, motivation and pitch
When doing audio or, more generally, 1D time-series processing, one can use `torch.stft()` to convert 1D data into 2D data and then use a 2D model for the actual task. However, `torch.stft()` is a spectral analysis operation. What you really want is a polyphase channelizer, which may look similar but isn't: it efficiently channelizes the data into equally spaced (in frequency), downsampled time-series channels.
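For concreteness, a critically sampled polyphase channelizer can be expressed with existing ops. A naive reference sketch (illustration of the desired computation only; it assumes the prototype filter length is a multiple of the number of channels and ignores commutator/phase-rotation conventions):
```python
import torch

def channelize(x, proto, M):
    """Naive critically-sampled polyphase channelizer (reference sketch)."""
    K = proto.numel() // M                  # taps per polyphase branch
    T = (x.numel() // M) * M
    frames = x[:T].reshape(-1, M)           # commutator: (num_frames, M)
    h = proto[: K * M].reshape(K, M)        # polyphase components of the prototype
    outs = []
    for n in range(K - 1, frames.shape[0]):
        block = frames[n - K + 1 : n + 1]   # most recent K frames, (K, M)
        v = (block * h.flip(0)).sum(dim=0)  # per-branch FIR filtering
        outs.append(torch.fft.fft(v))       # DFT across branches -> channel samples
    return torch.stack(outs)                # (num_frames - K + 1, M), complex
```
A built-in op could vectorize this loop (e.g. via a grouped `conv1d`) and handle oversampled variants, which is exactly what this request is about.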
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @peterbell10
| 4 |
2,845 | 99,509 |
'Illegal instruction (core dumped)' for gpt-j bf16 generation task using greedy search
|
module: crash, module: cpu, triaged, module: intel
|
### 🐛 Describe the bug
GPT-J model bf16 inference crashes for generation task with greedy search.
**Error info:**
`Illegal instruction (core dumped)`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
generate_kwargs = dict(do_sample=False, temperature=0.9)
model_id = "EleutherAI/gpt-j-6B"
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = model.eval()
model = model.to(torch.bfloat16)
# 32 tokens input
prompt = "Once upon a time, there existed a little girl, who liked to have adventures." + \
" She wanted to go to places and meet new people, and have fun."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, max_new_tokens=32, **generate_kwargs)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
### Versions
torch 2.1.0.dev20230418+cpu
cc @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 15 |
2,846 | 99,455 |
Not Preserving Grad For Tensor Created Inside torch.compile
|
triaged, oncall: pt2, module: aotdispatch
|
### 🐛 Describe the bug
```
import torch
def foo():
x = torch.randn(5, 5, requires_grad=True)
y = x + 2
return x, y
x, y = foo()
print(x.requires_grad, y.requires_grad)
# True True
x, y = torch.compile()(foo)()
print(x.requires_grad, y.requires_grad)
# False False
```
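For what it's worth, a user-side workaround sketch is to create the leaf tensor eagerly and only compile the computation on it:
```
# Workaround sketch: create the leaf outside the compiled region so its
# autograd metadata is not dropped.
import torch

def body(x):
    return x + 2

x = torch.randn(5, 5, requires_grad=True)
y = torch.compile()(body)(x)
print(x.requires_grad, y.requires_grad)
# True True
```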
### Versions
master
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 6 |
2,847 | 99,444 |
Print the index and summary of the SampleInput that failed an OpInfo test
|
ciflow/trunk, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #99444
Related to the Reproducible Testing BE project. Goal is to print out the sample input that failed an OpInfo test.
Crazy idea: to avoid requiring widespread changes across tests that use OpInfo sample inputs, return a new special iterator type from `OpInfo.sample_inputs()`, etc. that tracks the most recent item seen. If a test fails later on, print out this info to identify the sample that failed the test.
This solves the problem that the test framework currently has no concept of which sample input is being operated on.
This PR contains the following changes:
* New `TrackedInputIter` that wraps a sample inputs func iterator and tracks the most recent input seen in a `TrackedInput` structure (a rough sketch follows this list)
* The information is stored in a dictionary on the test function itself, mapping `full test ID -> most recent TrackedInput`
* To determine the test function that is being run, we do some stack crawling hackery in `extract_test_fn_and_id()`
* Above applies only when one of the following is called: `OpInfo.sample_inputs()`, `OpInfo.error_inputs()`, `OpInfo.reference_inputs()`, and `OpInfo.conjugate_sample_inputs()`. This could easily be extended to `ModuleInfo`s and the sparse sample input funcs as well
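The tracking wrapper from the first bullet could look roughly like this (a simplified sketch; the names and callback shape are illustrative, not the exact code in this PR):
```python
# Simplified sketch of the tracking iterator; not the exact implementation.
class TrackedInputIter:
    def __init__(self, inner, track_callback):
        self.inner = enumerate(inner)
        self.track_callback = track_callback

    def __iter__(self):
        return self

    def __next__(self):
        idx, item = next(self.inner)
        # Remember the most recent sample so a later failure can report it.
        self.track_callback(idx, item)
        return item
```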
Example output when a sample input causes a failure:
```
======================================================================
ERROR: test_foo_add_cpu_uint8 (__main__.TestFakeTensorCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jbschlosser/branches/reproducible_testing/torch/testing/_internal/common_device_type.py", line 911, in test_wrapper
return test(*args, **kwargs)
File "/home/jbschlosser/branches/reproducible_testing/torch/testing/_internal/common_device_type.py", line 1097, in only_fn
return fn(slf, *args, **kwargs)
File "/home/jbschlosser/branches/reproducible_testing/test/test_ops.py", line 2211, in test_foo
self.fail('Example failure')
AssertionError: Example failure
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jbschlosser/branches/reproducible_testing/torch/testing/_internal/common_utils.py", line 2436, in wrapper
method(*args, **kwargs)
File "/home/jbschlosser/branches/reproducible_testing/torch/testing/_internal/common_device_type.py", line 414, in instantiated_test
result = test(self, **param_kwargs)
File "/home/jbschlosser/branches/reproducible_testing/torch/testing/_internal/common_device_type.py", line 917, in test_wrapper
raise Exception(
Exception: Caused by sample input at index 2: SampleInput(input=Tensor[size=(5, 1), device="cpu", dtype=torch.uint8], args=TensorList[Tensor[size=(5,), device="cpu", dtype=torch.uint8]], kwargs={}, broadcasts_input=True, name='')
To execute this test, run the following from the base repo dir:
python test/test_ops.py -k test_foo_add_cpu_uint8
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
```
This notably doesn't print the actual `SampleInput` values, as that's hard without fully reproducible random sample generation. I went down this path for a while and it seems infeasible without adding an untenable amount of overhead to set the random seed per SampleInput (see https://github.com/pytorch/pytorch/issues/86694#issuecomment-1614943708 for more details). For now, I am settling for at least spitting out the index and some metadata of the `SampleInput`, as it seems better than nothing.
| 4 |
2,848 | 99,438 |
vision_maskrcnn failing on periodic dynamic_aot_eager_torchbench
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
For example https://hud.pytorch.org/pytorch/pytorch/commit/2611fccfed83669a3f1221af0131bb99d3f6dbb1
Command to reproduce:
`PYTHONPATH=$(pwd)/torchbench python benchmarks/dynamo/torchbench.py --accuracy --backend aot_eager --dynamic-shapes --dynamic-batch-only --device cuda --inference --amp --only vision_maskrcnn`
It fails with the error `RecursionError: maximum recursion depth exceeded in __instancecheck__`
### Versions
PyTorch CI
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 1 |
2,849 | 99,433 |
RuntimeError: "replication_pad1d_cuda" not implemented for 'BFloat16'
|
module: cuda, triaged, module: bfloat16
|
### 🐛 Describe the bug
I am using the [diffsptk](https://github.com/sp-nitech/diffsptk) library PQMF functionality, because it has low reconstruction error.
When enabling BFloat16, I get the following error:
```
File "/home/ubuntu/spooky-source-separation/demucs_lightning/demucs/pqmf.py", line 64, in reconstruct
x = self.pqmf.reconstruct(x)
File "/home/ubuntu/spooky-source-separation/demucs_lightning/demucs/pqmf.py", line 32, in reconstruct
x = self.ipqmf(self.interpolate(self.n_subbands * x, dim=-1), keepdim=False)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/diffsptk/core/ipqmf.py", line 99, in forward
x = F.conv1d(self.pad(y), self.filters)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/padding.py", line 335, in forward
return F.pad(input, self.padding, 'replicate')
RuntimeError: "replication_pad1d_cuda" not implemented for 'BFloat16'
```
### Versions
Sorry this link is broken and I googled for it, but I am happy to run it if you provide me with an updated link.
```
In [5]: torch.__version__
Out[5]: '2.0.0+cu117'
```
cc @ngimel
| 0 |
2,850 | 99,432 |
[DTensor] parallelize_module failed with nn.Transformer and the PairwiseParallel plan
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
`parallelize_module` failed with `nn.Transformer` and the `PairwiseParallel` plan, which is unexpected according to [the doc](https://pytorch.org/docs/stable/distributed.tensor.parallel.html#torch.distributed.tensor.parallel.style.PairwiseParallel) of `PairwiseParallel`.
```python
import torch
import torch.nn as nn
from torch.distributed._tensor import DeviceMesh
from torch.distributed.tensor.parallel import (
PairwiseParallel,
parallelize_module,
)
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.distributed._tensor.common_dtensor import (
DTensorTestBase,
NUM_DEVICES,
skip_unless_torch_gpu,
with_comms,
)
class TransformerWrap(nn.Module):
def __init__(self, embed_dim, num_heads, device=None):
super().__init__()
self.transformer = nn.Transformer(
d_model=embed_dim*num_heads, nhead=num_heads, device=device
)
def forward(self, src, tgt):
return self.transformer(src, tgt)
class DistTensorParallelExampleTest(DTensorTestBase):
@with_comms
@skip_unless_torch_gpu
def test_transformer_megatron_e2e(self):
torch.manual_seed(0)
src = torch.rand((10, 16, 128), device=self.device_type)
tgt = torch.rand((20, 16, 128), device=self.device_type)
torch.manual_seed(5)
model_tp = TransformerWrap(embed_dim=16, num_heads=8, device=self.device_type)
# Shard module and initialize optimizer.
device_mesh = DeviceMesh(self.device_type, list(range(NUM_DEVICES)))
parallelize_module(model_tp, device_mesh, PairwiseParallel())
LR = 0.25
optim_tp = torch.optim.SGD(model_tp.parameters(), lr=LR)
output_tp = model_tp(src, tgt)
output_tp.sum().backward()
optim_tp.step()
if __name__ == "__main__":
run_tests()
```
Errors:
[test_tp_transformer.log](https://github.com/pytorch/pytorch/files/11264608/test_tp_transformer.log)
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230417
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-aws-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0.dev20230417
[pip3] torchaudio==2.1.0.dev20230417
[pip3] torchdata==0.7.0.dev20230417
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.16.0.dev20230417
[pip3] torchvision==0.16.0.dev20230417
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.1.0.dev20230417 py3.10_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h778d358_3 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.1.0.dev20230417 py310_cu117 pytorch-nightly
[conda] torchdata 0.7.0.dev20230417 py310 pytorch-nightly
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.16.0.dev20230417 py310 pytorch-nightly
[conda] torchtriton 2.1.0+46672772b4 py310 pytorch-nightly
[conda] torchvision 0.16.0.dev20230417 py310_cu117 pytorch-nightly
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,851 | 99,421 |
Question about GRU(RNN/LSTM) outputs shape
|
module: docs, module: nn, module: rnn, triaged, actionable
|
### 📚 The doc issue
[https://pytorch.org/docs/1.8.1/generated/torch.nn.GRU.html#torch.nn.GRU](https://pytorch.org/docs/1.8.1/generated/torch.nn.GRU.html#torch.nn.GRU)
[https://pytorch.org/docs/1.9.0/generated/torch.nn.GRU.html#torch.nn.GRU](https://pytorch.org/docs/1.9.0/generated/torch.nn.GRU.html#torch.nn.GRU)
Versions of pytorch after 1.9.0 have made some changes to the GRU(RNN/LSTM?) documentation.
The h_n of shape of 1.8.1 is like this:
**(num_layers * num_directions, batch, hidden_size):**
The h_n of shape of 1.9 is like this:
**(D∗num_layers,N,Hout)**
where D is "2 if bidirectional=True otherwise 1"
So I think D is the num_directions of the previous version, then h_n of shape after 1.9.0 is:
**(num_directions * num_layers, batch, hidden_size)**
This is inconsistent with the order of the previous version in the first two dimensions. What should I do if I want to fetch the data of the last layer? Is there something wrong with my understanding?
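For what it's worth, here is how I currently fetch the last layer, assuming the direction index is the faster-varying one as the pre-1.9 docs described; please correct me if this reading is wrong:
```
# Assumes h_n layout is (num_layers * num_directions, batch, hidden_size) with
# the direction index varying fastest, as the pre-1.9 docs described.
import torch

num_layers, D, batch, hidden = 3, 2, 2, 16
gru = torch.nn.GRU(input_size=8, hidden_size=hidden, num_layers=num_layers, bidirectional=True)
out, h_n = gru(torch.randn(5, batch, 8))                  # h_n: (num_layers * D, batch, hidden)

last_layer = h_n.view(num_layers, D, batch, hidden)[-1]   # (D, batch, hidden) - correct?
```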
In addition, I also found that two different descriptions exist at the same time in version 2.0, which is a bit confusing...
[https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html](https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html)
[https://pytorch.org/docs/stable/generated/torch.nn.GRU.html#torch.nn.GRU](https://pytorch.org/docs/stable/generated/torch.nn.GRU.html#torch.nn.GRU)
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @zou3519
| 2 |
2,852 | 99,419 |
Extend TorchInductor to support more backends
|
triaged, oncall: pt2, module: inductor
|
### 🚀 The feature, motivation and pitch
This RFC proposes letting TorchInductor add a new backend at runtime to generate code for a particular device.
### Motivation
Currently, there are two integration levels to add a new backend for PT compiler: at the AtenIR/PrimsIR level as a new backend of Dynamo, and at the Inductor loop IR level as a new codegen backend of the Inductor.
- Integrating a new backend at the AtenIR/PrimsIR level is straightforward and provides more freedom to the new backend to optimize the captured graph. But it might be suboptimal performance if the backend lacks the DL compiler capability because the decomposed operations for a single operation might need more memory and worse data locality compared with the single operation if the decomposed operations can't be fused.
- Integrating at the Inductor loop IR level can significantly simplify the complexity of design and implementation by leveraging the Inductor's fusion capability and other optimizations directly. The new backend just needs to focus on how to generate optimal code for a particular device.
This RFC focuses on the latter one.
### Proposal
There are currently two backends for the Inductor - C++/OpenMP and Triton. And we propose providing a dynamic registration mechanism on the Inductor side for a new backend. The new backend can be out-of-tree and register its codegen for a particular device at runtime.
It can provide more flexibility as a new backend just needs to customize three fundamental structures: `Scheduling`, `Kernel` and `WrapperCodegen`. The sample code could be as follows:
```python
# in-tree/out-of-tree
class ExtensionKernel(Kernel):
pass
# in-tree/out-of-tree
class ExtensionScheduling(Scheduling):
...
def codegen_nodes(self, nodes):
with ExtensionKernel(...):
pass
...
# in-tree/out-of-tree
class ExtensionWrapperCodeGen(WrapperCodeGen):
...
```
The `ExtensionKernel` class and `ExtensionScheduling` class are used to define the operations and scheduling, respectively, for the new backend. The `ExtensionWrapperCodeGen` class generates the Python wrapper code for the backend to glue the kernel. All three classes can be out-of-tree as they can be registered to the Inductor at runtime.
Besides the key structures, we will provide two utility functions to support the runtime registration - `register_scheduling_for_device` and `get_scheduling_for_device`. The two utility functions are used to register and get the backend for a particular device, respectively.
```python
# in-tree
def register_scheduling_for_device(device:str, ext_scheduling: Scheduling):
pass
# in-tree
def get_scheduling_for_device(device:str) -> Scheduling:
pass
```
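To make the registration mechanism concrete, here is a minimal, self-contained sketch of what the device-to-scheduling registry could look like (illustrative only — the names mirror the proposal, not existing Inductor code):
```python
# Minimal sketch of the proposed device -> Scheduling registry (illustrative only).
_scheduling_registry = {}

def register_scheduling_for_device(device: str, ext_scheduling) -> None:
    # An out-of-tree backend calls this at import time, e.g.
    #   register_scheduling_for_device("xpu", ExtensionScheduling)
    _scheduling_registry[device] = ext_scheduling

def get_scheduling_for_device(device: str):
    if device not in _scheduling_registry:
        raise RuntimeError(f"no Inductor scheduling registered for device '{device}'")
    return _scheduling_registry[device]
```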
After that, we just need to add a default path to route to the newly added backend according to the device type.
```python
# in-tree
def create_backend(self, device: torch.device):
...
if device.type == "cpu":
from .codegen.cpp import CppScheduling
return CppScheduling(self)
elif device.type == "cuda":
from .codegen.triton import TritonScheduling
return TritonScheduling(self)
else:
return get_scheduling_for_device(device.type)(self)
```
### Alternatives
Another option is to add a new backend-related code to Inductor directly. Take the backend creation as an example, the code sample could be as follows:
```python
# in-tree
class NewBackendKernel(Kernel):
pass
# in-tree
class NewBackendScheduling(Scheduling):
...
def codegen_nodes(self, nodes):
with NewBackendKernel(...):
pass
...
# in-tree
class NewBackendWrapperCodeGen(WrapperCodeGen):
...
# in-tree
def create_backend(self, device: torch.device):
...
if device.type == "cpu":
from .codegen.cpp import CppScheduling
return CppScheduling(self)
elif device.type == "xpu":
from .codegen.new_device import NewBackendScheduling
return NewBackendScheduling(self)
else:
from .codegen.triton import TritonScheduling
return TritonScheduling(self)
```
Compared to the former option, the latter is more straightforward and simple. But it requires all the new backend-related code to be in-tree; in other words, the new backend must be landed in PyTorch, which is a high bar for supporting other devices.
Currently, the common practice for supporting new devices is a PyTorch extension (out of the PyTorch code tree), where hardware vendors implement all the necessary operations in their own extensions. It may take a long time to finalize the design, mature the feature, and otherwise meet the bar for upstreaming; until the new backend becomes part of PyTorch, it conflicts with this alternative's in-tree requirement.
Besides that, it's worth noting that the CI/CD pipeline for the latter option may be more challenging since new devices are typically not available on AWS.
### Additional context
_No response_
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 5 |
2,853 | 99,414 |
The meta implementation of `index_put` does not do any check
|
triaged, module: meta tensors, release notes: Meta API
|
### 🐛 Describe the bug
For example, the following code breaks in eager but successfully runs under `torch.compile`:
```python
# Before, the following code would not error out
import torch

@torch.compile()
def f(x, y, z):
    x.index_put_((y,), z)
    return x

x = torch.randn(3, 2, 4, device="cuda")
y = torch.arange(2, device="cuda")
z = torch.zeros(4, 2, 4, device="cuda")
f(x, y, z)
```
FWIW, reusing the meta function from `index.Tensor` breaks every subsystem. See https://github.com/pytorch/pytorch/pull/98589. I've seen that most tests special-case `index.Tensor`, so something similar would need to be done for this function.
### Versions
master
cc @ezyang @eellison @bdhirsh @soumith
| 0 |
2,854 | 99,410 |
torch.nn.functional.multilabel_margin_loss cuda lacks checking of "out of bound"
|
module: nn, module: cuda, module: error checking, triaged
|
### 🐛 Describe the bug
Although [#73176](https://github.com/pytorch/pytorch/issues/73176) lists a lot of APIs that lack out-of-bound checking on CUDA, the problem can still be reproduced with torch.nn.functional.multilabel_margin_loss.
```
import torch
results={}
arg_1 = torch.as_tensor([[0.0360, 0.6321, 0.6267, 0.4555]])
print(arg_1)
arg_2 = torch.as_tensor([[-2489, -776, 380, -1566]])
print(arg_2)
arg_3 = "mean"
try:
results["res_cpu"] = torch.nn.functional.multilabel_margin_loss(input=arg_1,target=arg_2,reduction=arg_3,)
except Exception as e:
results["err_cpu"] = "ERROR:"+str(e)
arg_1 = arg_1.clone().cuda()
arg_2 = arg_2.clone().cuda()
try:
results["res_gpu"] = torch.nn.functional.multilabel_margin_loss(input=arg_1,target=arg_2,reduction=arg_3,)
except Exception as e:
results["err_gpu"] = "ERROR:"+str(e)
print(results)
#res: {'err_cpu': "ERROR:argument #2 'target' is out of range", 'res_gpu': tensor(0., device='cuda:0')}
```
It can also still be seen in torch.nn.MultiLabelMarginLoss, even though [#73176](https://github.com/pytorch/pytorch/issues/73176) has fixed this issue.
```
import torch
results={}
arg_class = torch.nn.MultiLabelMarginLoss()
arg_1_0 = torch.as_tensor([[0.3449, 0.5441, 0.7594, 0.5161]])
print(arg_1_0)
arg_1_1 = torch.as_tensor([[-59, 45, 187, 131]])
print(arg_1_1)
arg_1 = [arg_1_0,arg_1_1,]
try:
results["res_cpu"] = arg_class(*arg_1)
except Exception as e:
results["err_cpu"] = "ERROR:"+str(e)
arg_class = arg_class.cuda()
arg_1_0 = arg_1_0.clone().cuda()
arg_1_1 = arg_1_1.clone().cuda()
arg_1 = [arg_1_0,arg_1_1,]
try:
results["res_gpu"] = arg_class(*arg_1)
except Exception as e:
results["err_gpu"] = "ERROR:"+str(e)
print(results)
# results: {'err_cpu': "ERROR:argument #2 'target' is out of range", 'res_gpu': tensor(0., device='cuda:0')}
```
### Versions
pytorch version: 2.0.0+cu118
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ngimel @malfet
| 0 |
2,855 | 99,407 |
Torch.fx.symbolic_trace removes some of the keys from module state_dict
|
triaged, module: fx
|
### 🐛 Describe the bug
I'm doing some graph editing using torch.fx and the timm library.
I already edited resnet18 this way and it was completely successful.
But when using mobilenet I run into a problem because the obtained graph is missing some keys that the original model had.
As a result, the original model fits well (as expected) but the obtained graph does not.
Example
```
from torch import fx
import timm
model = timm.create_model("mobilenetv2_035", num_classes=10)
fx_module = fx.symbolic_trace(model)
print(len(model.state_dict().keys()))
print(len(fx_module.state_dict().keys()))
print(set(model.state_dict().keys()).difference(fx_module.state_dict().keys()))
```
```
314
262
{'blocks.1.0.bn3.num_batches_tracked', 'blocks.3.0.bn3.num_batches_tracked', 'blocks.2.2.bn2.num_batches_tracked', 'blocks.5.1.bn3.num_batches_tracked', 'blocks.3.2.bn1.num_batches_tracked', 'blocks.4.2.bn3.num_batches_tracked', 'blocks.2.0.bn3.num_batches_tracked', 'blocks.5.1.bn2.num_batches_tracked', 'blocks.4.2.bn2.num_batches_tracked', 'blocks.4.1.bn1.num_batches_tracked', 'blocks.3.1.bn3.num_batches_tracked', 'blocks.1.1.bn3.num_batches_tracked', 'bn1.num_batches_tracked', 'blocks.4.2.bn1.num_batches_tracked', 'bn2.num_batches_tracked', 'blocks.4.0.bn1.num_batches_tracked', 'blocks.4.0.bn3.num_batches_tracked', 'blocks.5.1.bn1.num_batches_tracked', 'blocks.4.1.bn2.num_batches_tracked', 'blocks.4.0.bn2.num_batches_tracked', 'blocks.5.0.bn1.num_batches_tracked', 'blocks.5.2.bn1.num_batches_tracked', 'blocks.3.3.bn2.num_batches_tracked', 'blocks.5.2.bn3.num_batches_tracked', 'blocks.2.2.bn1.num_batches_tracked', 'blocks.5.0.bn3.num_batches_tracked', 'blocks.3.1.bn1.num_batches_tracked', 'blocks.2.1.bn1.num_batches_tracked', 'blocks.4.1.bn3.num_batches_tracked', 'blocks.2.2.bn3.num_batches_tracked', 'blocks.0.0.bn2.num_batches_tracked', 'blocks.3.3.bn1.num_batches_tracked', 'blocks.3.2.bn3.num_batches_tracked', 'blocks.0.0.bn1.num_batches_tracked', 'blocks.1.1.bn1.num_batches_tracked', 'blocks.3.2.bn2.num_batches_tracked', 'blocks.3.1.bn2.num_batches_tracked', 'blocks.2.0.bn1.num_batches_tracked', 'blocks.3.0.bn2.num_batches_tracked', 'blocks.2.1.bn3.num_batches_tracked', 'blocks.1.1.bn2.num_batches_tracked', 'blocks.3.0.bn1.num_batches_tracked', 'blocks.6.0.bn3.num_batches_tracked', 'blocks.5.0.bn2.num_batches_tracked', 'blocks.2.0.bn2.num_batches_tracked', 'blocks.6.0.bn1.num_batches_tracked', 'blocks.2.1.bn2.num_batches_tracked', 'blocks.6.0.bn2.num_batches_tracked', 'blocks.1.0.bn2.num_batches_tracked', 'blocks.3.3.bn3.num_batches_tracked', 'blocks.5.2.bn2.num_batches_tracked', 'blocks.1.0.bn1.num_batches_tracked'}
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 7 5800X 8-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3800,0000
CPU min MHz: 2200,0000
BogoMIPS: 7585.45
Virtualization: AMD-V
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 4 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.6.3
[pip3] flamingo-pytorch==0.1.2
[pip3] lion-pytorch==0.0.7
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-ignite==0.4.8
[pip3] pytorch-lightning==1.7.0
[pip3] pytorch-metric-learning==1.6.3
[pip3] segmentation-models-pytorch==0.2.1
[pip3] torch==2.0.0+cu118
[pip3] torch-audiomentations==0.11.0
[pip3] torch-pitch-shift==1.2.2
[pip3] torchaudio==0.9.0
[pip3] torchmetrics==0.10.2
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.1+cu118
[pip3] vit-pytorch==1.2.0
[conda] Could not collect
cc @ezyang @SherlockNoMad @soumith @EikanWang @jgong5 @wenzhe-nrv
| 0 |
2,856 | 99,404 |
FakeTensor lacks support for sparse compressed tensors
|
module: sparse, triaged, module: fakeTensor
|
## Issue description
As in the title.
This lack of support prevents using `OpInfo.error_input_func` for sparse compressed tensors. For instance, `test_python_ref_errors` would fail with `UnsupportedFakeTensorException` on an `ErrorInput` instance that contains a sparse compressed tensor as a sample input (ref: https://github.com/pytorch/pytorch/actions/runs/4723316774/jobs/8379634648).
## Code example
```python
>>> import torch
>>> from torch._subclasses.fake_tensor import FakeTensorMode, FakeTensor
>>> mode = FakeTensorMode()
>>> FakeTensor.from_tensor(torch.tensor([[1, 2], [3, 4]]).to_sparse_csr(), mode)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pearu/git/pytorch/pytorch-sparse-compressed/torch/_subclasses/fake_tensor.py", line 922, in from_tensor
return fake_mode.from_tensor(t)
File "/home/pearu/git/pytorch/pytorch-sparse-compressed/torch/_subclasses/fake_tensor.py", line 1437, in from_tensor
return self.fake_tensor_converter(
File "/home/pearu/git/pytorch/pytorch-sparse-compressed/torch/_subclasses/fake_tensor.py", line 328, in __call__
return self.from_real_tensor(
File "/home/pearu/git/pytorch/pytorch-sparse-compressed/torch/_subclasses/fake_tensor.py", line 292, in from_real_tensor
raise UnsupportedFakeTensorException("meta converter nyi")
torch._subclasses.fake_tensor.UnsupportedFakeTensorException: meta converter nyi
```
## System Info
- PyTorch version: main
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 1 |
2,857 | 99,401 |
Support `cond` branches that reference variables defined in an outer scope
|
triaged, oncall: pt2, module: dynamo, module: export
|
### 🚀 The feature, motivation and pitch
Related: https://github.com/pytorch/pytorch/issues/90469
Attempting to inline a nested function that closes over variables in outer scopes results in an error. For example, the following module cannot be exported:
```
class ModuleClosureReproError(torch.nn.Module):
# error
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(3, 3)
def forward(self, pred, x):
y = x + x
def true_fn(val):
return self.linear(val) * (x + y)
def false_fn(val):
return val * (y - x)
return cond(pred, true_fn, false_fn, [x])
```
### Alternatives
The workaround today is to rewrite such functions to take closed-over variables as additional arguments.
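For reference, a rough sketch of that rewrite (illustrative only; `cond` is the same operator used in the snippet above):
```python
# Workaround sketch: pass closed-over tensors as explicit branch arguments
# instead of capturing them from the enclosing scope.
def forward_workaround(pred, x):
    y = x + x

    def true_fn(val, y):
        return val * (val + y)

    def false_fn(val, y):
        return val * (y - val)

    return cond(pred, true_fn, false_fn, [x, y])
```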
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @soumith @ngimel @desertfire
| 2 |
2,858 | 99,397 |
Internal errors with cuda graph (CUBLAS_STATUS_NOT_INITIALIZED and jit failure)
|
triaged, module: cuda graphs
|
### 🐛 Describe the bug
# Cublas bug
Cuda graphs fail if cublas is not initialized. The code
```
import torch
x=torch.empty([32, 32]).cuda()
with torch.cuda.graph(torch.cuda.CUDAGraph()):
torch.matmul(x,x)
```
fails with:
```
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/graphs.py", line 176, in __exit__
self.cuda_graph.capture_end()
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/graphs.py", line 82, in capture_end
super(CUDAGraph, self).capture_end()
RuntimeError: CUDA error: operation failed due to a previous error during capture
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
The code runs fine if I do any multiplication before creating the graph.
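For completeness, a minimal sketch of that warm-up workaround (one eager matmul before capture initializes the cuBLAS handle):
```python
import torch

x = torch.empty(32, 32, device="cuda")
torch.matmul(x, x)              # eager matmul creates the cuBLAS handle
torch.cuda.synchronize()

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):       # capture now succeeds
    torch.matmul(x, x)
```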
# Jit bug
Compiling jit code while creating a cuda graph breaks jit. The code
```
import torch
@torch.jit.script
def f(x):
return 2*x+1
x=torch.randn([32, 32]).cuda()
g0 = torch.cuda.CUDAGraph()
with torch.cuda.graph(g0):
y = f(x)
g1 = torch.cuda.CUDAGraph()
with torch.cuda.graph(g1):
y = f(x)
```
fails with:
```
a.py:16: UserWarning: FALLBACK path has been taken inside: runCudaFusionGroup. This is an indication that codegen Failed for some reason.
To debug try disable codegen fallback path via setting the env variable `export PYTORCH_NVFUSER_DISABLE=fallback`
(Triggered internally at /opt/pytorch/pytorch/third_party/nvfuser/csrc/manager.cpp:335.)
y= f(x)
Traceback (most recent call last):
File "a.py", line 16, in <module>
y= f(x)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<string>", line 3, in fallback_cuda_fuser
def mul(a : int, b : Tensor) -> Tensor:
return b * a
~~~~~ <--- HERE
def add(a : int, b : Tensor) -> Tensor:
return b + a
RuntimeError: status != cudaStreamCaptureStatus::cudaStreamCaptureStatusInvalidated INTERNAL ASSERT FAILED at "/opt/pytorch/pytorch/c10/cuda/CUDACachingAllocator.cpp":1490, please report a bug to PyTorch.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "a.py", line 16, in <module>
y= f(x)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/graphs.py", line 176, in __exit__
self.cuda_graph.capture_end()
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/graphs.py", line 82, in capture_end
super(CUDAGraph, self).capture_end()
RuntimeError: CUDA error: operation failed due to a previous error during capture
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
The first graph compiles correctly and is perfectly usable; the crash happens when creating the second graph. Again, the crash can be avoided by first calling the method, though it needs the exact same tensor shape and dtype.
### Versions
[Temporarily unable to run, will update later. Using docker image nvcr.io/nvidia/pytorch:23.03-py3 (pytorch version [2.0.0a0+1767026](https://github.com/pytorch/pytorch/commit/1767026)) and running on a A100-80 GB]
cc @mcarilli @ezyang
| 1 |
2,859 | 99,390 |
torch.compile error
|
triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
When I used torch.compile to compile and accelerate this code https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/scripts/txt2img.py#L303, this error occurred.
```
In file included from /tmp/torchinductor_hujiahao1/zt/cztcl2vp5yqlnhofzpqfficjcxgyict6e3xhfdd7sdbkipp4p44x.h:8:0,
from /tmp/torchinductor_hujiahao1/ke/ckeyipvhzljoe53twvjbo23ihq2q26ser2onsutamnzl7ytce5dn.cpp:2:
/home/hujiahao1/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/include/ATen/core/PhiloxRNGEngine.h:199:35: error: enclosing class of constexpr non-static member function ‘float at::philox_engine::uint32_to_uniform_float(uint32_t)’ is not a literal type
C10_HOST_DEVICE constexpr float uint32_to_uniform_float(uint32_t value) {
^
In file included from /tmp/torchinductor_hujiahao1/zt/cztcl2vp5yqlnhofzpqfficjcxgyict6e3xhfdd7sdbkipp4p44x.h:8:0,
from /tmp/torchinductor_hujiahao1/ke/ckeyipvhzljoe53twvjbo23ihq2q26ser2onsutamnzl7ytce5dn.cpp:2:
/home/hujiahao1/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/include/ATen/core/PhiloxRNGEngine.h:68:7: note: ‘at::philox_engine’ is not literal because:
class philox_engine {
^
/home/hujiahao1/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/include/ATen/core/PhiloxRNGEngine.h:68:7: note: ‘at::philox_engine’ is not an aggregate, does not have a trivial default constructor, and has no constexpr constructor that is not a copy or move constructor
/tmp/torchinductor_hujiahao1/ke/ckeyipvhzljoe53twvjbo23ihq2q26ser2onsutamnzl7ytce5dn.cpp: In function ‘void kernel(float*)’:
/tmp/torchinductor_hujiahao1/ke/ckeyipvhzljoe53twvjbo23ihq2q26ser2onsutamnzl7ytce5dn.cpp:13:26: error: ‘simdlen’ is not valid for ‘#pragma omp simd’
#pragma omp simd simdlen(4)
^
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Versions
not found collect_env.py
my env:
torch 2.0.0
cuda 11.8
gcc 5.3
cmake 3.26.3
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
2,860 | 99,372 |
PyTorch 2.0.0 encountered CUDA error: an illegal memory access was encountered
|
module: cuda, triaged, module: multithreading
|
### 🐛 Describe the bug
Running PyTorch 2.0.0 encountered CUDA error: an illegal memory access was encountered.
We wrote a [benchmark](https://github.com/deepjavalibrary/djl-serving/tree/master/benchmark) tool to use pytorch to run inference (See the commands below on how to run).
Specifically, this benchmark tool uses libtorch, and we use JNI to call it from our Java program.
It worked with multi-threading, and we had no problems with previous PyTorch versions. Since recently upgrading to 2.0.0 (libtorch 2.0.0-cu118), we have started seeing errors with multi-threaded inference; it only fails when using 20+ threads.
Steps to reproduce:
1) Run in docker:
```
docker run -it --runtime=nvidia --rm nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04 /bin/bash
```
2) Inside docker:
```
apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
openjdk-11-jdk-headless \
locales \
git \
curl
git clone https://github.com/deepjavalibrary/djl-serving.git
cd djl-serving
export PYTORCH_VERSION=2.0.0
export PYTORCH_NVFUSER_DISABLE=fallback
export PYTORCH_JIT_USE_NNC_NOT_NVFUSER=1
./gradlew benchmark '--args=-e PyTorch -t 20 -c 1200 -s 1,3,224,224 -u djl://ai.djl.pytorch/resnet/0.0.1/traced_resnet18'
```
Error message:
```
root@fc40d69fadbf:/djl-serving# export PYTORCH_VERSION=2.0.0
root@fc40d69fadbf:/djl-serving# export PYTORCH_NVFUSER_DISABLE=fallback
root@fc40d69fadbf:/djl-serving# export PYTORCH_JIT_USE_NNC_NOT_NVFUSER=1
root@fc40d69fadbf:/djl-serving# ./gradlew benchmark '--args=-e PyTorch -t 20 -c 1200 -s 1,3,224,224 -u djl://ai.djl.pytorch/resnet/0.0.1/traced_resnet18'
> Task :benchmark:benchmark
[WARN ] - Override PyTorch version: 2.0.0.
[INFO ] - PyTorch graph executor optimizer is enabled, this may impact your inference latency and throughput. See: https://docs.djl.ai/docs/development/inference_performance_optimization.html#graph-executor-optimization
[INFO ] - Number of inter-op threads is 1
[INFO ] - Number of intra-op threads is 1
[INFO ] - Load PyTorch (2.0.0) in 0.046 ms.
[INFO ] - Running MultithreadedBenchmark on: [gpu(0)].
[INFO ] - Multithreading inference with 20 threads.
Loading: 100% |========================================|
[INFO ] - Model traced_resnet18 loaded in: 2015.853 ms.
[INFO ] - Warmup with 2 iteration ...
[INFO ] - Warmup latency, min: 4.081 ms, max: 1631.479 ms
[ERROR] -
java.util.concurrent.ExecutionException: ai.djl.translate.TranslateException: ai.djl.engine.EngineException: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at ai.djl.benchmark.MultithreadedBenchmark.predict(MultithreadedBenchmark.java:110) [main/:?]
at ai.djl.benchmark.AbstractBenchmark.runBenchmark(AbstractBenchmark.java:125) [main/:?]
at ai.djl.benchmark.Benchmark.main(Benchmark.java:56) [main/:?]
Caused by: ai.djl.translate.TranslateException: ai.djl.engine.EngineException: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
at ai.djl.inference.Predictor.batchPredict(Predictor.java:191) ~[api-0.22.1-SNAPSHOT.jar:?]
at ai.djl.inference.Predictor.predict(Predictor.java:128) ~[api-0.22.1-SNAPSHOT.jar:?]
at ai.djl.benchmark.MultithreadedBenchmark$PredictorCallable.call(MultithreadedBenchmark.java:179) ~[main/:?]
at ai.djl.benchmark.MultithreadedBenchmark$PredictorCallable.call(MultithreadedBenchmark.java:140) ~[main/:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: ai.djl.engine.EngineException: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
at ai.djl.pytorch.jni.PyTorchLibrary.torchTo(Native Method) ~[pytorch-engine-0.22.1-SNAPSHOT.jar:?]
at ai.djl.pytorch.jni.JniUtils.to(JniUtils.java:324) ~[pytorch-engine-0.22.1-SNAPSHOT.jar:?]
at ai.djl.pytorch.engine.PtNDArray.toDevice(PtNDArray.java:165) ~[pytorch-engine-0.22.1-SNAPSHOT.jar:?]
at ai.djl.pytorch.jni.JniUtils.getByteBuffer(JniUtils.java:1635) ~[pytorch-engine-0.22.1-SNAPSHOT.jar:?]
at ai.djl.pytorch.engine.PtNDArray.toByteBuffer(PtNDArray.java:221) ~[pytorch-engine-0.22.1-SNAPSHOT.jar:?]
at ai.djl.benchmark.AbstractBenchmark$BenchmarkTranslator.processOutput(AbstractBenchmark.java:316) ~[main/:?]
at ai.djl.benchmark.AbstractBenchmark$BenchmarkTranslator.processOutput(AbstractBenchmark.java:293) ~[main/:?]
at ai.djl.inference.Predictor.batchPredict(Predictor.java:172) ~[api-0.22.1-SNAPSHOT.jar:?]
at ai.djl.inference.Predictor.predict(Predictor.java:128) ~[api-0.22.1-SNAPSHOT.jar:?]
at ai.djl.benchmark.MultithreadedBenchmark$PredictorCallable.call(MultithreadedBenchmark.java:179) ~[main/:?]
at ai.djl.benchmark.MultithreadedBenchmark$PredictorCallable.call(MultithreadedBenchmark.java:140) ~[main/:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
[ERROR] - Only 0/20 threads finished.
> Task :benchmark:benchmark FAILED
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1081-aws-x86_64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 3099.937
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @ngimel
| 4 |
2,861 | 99,359 |
[c++17] Replace lock_guard with scoped_lock
|
module: internals, triaged
|
@voznesenskym pointed out that `lock_guard` is deprecated in C++17 and we should move to `scoped_lock`.
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 0 |
2,862 | 99,352 |
add `-std=c++20` build-only CI job
|
module: build, module: ci, triaged, module: devx
|
### 🚀 The feature, motivation and pitch
We won't be adopting C++20 any time soon, but we do have users who wish to build with it. We should keep our code at least building with C++20, using whatever CI config is the cheapest to keep it working.
I'm happy to get the code working if someone else can contribute the CI job.
See #99013 #85703 #98917 for some related issues/changes.
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @huydhn @clee2000
| 1 |
2,863 | 99,351 |
we should make semantically meaningless positional arguments positional only in our operator API
|
feature, triaged, actionable, module: python array api, module: python frontend
|
### 🚀 The feature, motivation and pitch
See #99265 for an example of some of the chaos this causes. Positional-only arguments were added in Python 3.8, and we recently dropped support for 3.7, so now is the first opportunity to do this.
Prior art: the Python array API uses positional-only arguments aggressively.
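For context, Python 3.8's positional-only syntax (PEP 570) looks like this — a generic sketch, not an actual torch operator signature:
```python
# Parameters before `/` are positional-only; parameters after `*` are keyword-only.
def add(input, other, /, *, alpha=1):
    return input + alpha * other

add(1, 2)                 # ok
add(1, 2, alpha=3)        # ok
# add(input=1, other=2)   # TypeError: positional-only arguments passed as keywords
```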
Removing this will be BC breaking, so here is a rough idea for how it should be done.
1. add support for doing this in our codegen
2. add a linter to prevent new uses of things like tensor1, tensor2, etc.
3. add a warning that fires when calling existing functions with named arguments
4. remove warning and migrate to position only arguments after a release
@mruberry @rgommers
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @rgommers @pmeier @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi @albanD
| 8 |
2,864 | 99,316 |
torch.linalg.lstsq doc arguments error
|
module: docs, triaged, module: linear algebra
|
### 📚 The doc issue
According to the [doc](https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html), the arguments of torch.linalg.lstsq are `A`, `B`, `rcond`, and `driver`. But when these argument names are used, the code below throws the exception `ERROR:linalg_lstsq() missing 2 required positional argument: "input", "b"`. The arguments are actually named `input`, `b`, `rcond`, and `driver`.
```
import torch
arg_1 = torch.rand([5, 3], dtype=torch.float32)
arg_2 = torch.rand([5, 3], dtype=torch.float32)
results = torch.linalg.lstsq(A=arg_1,B=arg_2,)
print(results)
```
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 4 |
2,865 | 99,311 |
Functorch pytrees with custom iterables
|
triaged, enhancement, module: pytree, module: functorch
|
### 🚀 The feature, motivation and pitch
`torch.func` / `functorch` does support some sort of pytrees similar to JAX. However, it only seems to accept Lists and Dicts as iterable inputs.
Example:
```
import torch.func as FT
import torch
class DotDict(dict):
"""dot.notation access to dictionary attributes"""
__getattr__ = dict.get
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
a = DotDict({
"A": torch.randn(2,2),
"inner": {
"B": torch.randn(2,2),
"C": torch.randn(2,2)
}
})
b = [
torch.randn(2,2),
torch.randn(2,2),
]
def f(x, y):
return (x.A + x.inner["B"] @ x.inner["C"]).sum() + y[0].sum() + y[1].sum()
grad, val = FT.grad_and_value(f, argnums=(0,1))(a, b)
print(grad)
```
This doesn't work.
```
Thing passed to transform API must be Tensor, got <class '__main__.DotDict'>
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 55, in create_differentiable
raise ValueError(f'Thing passed to transform API must be Tensor, '
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/utils/_pytree.py", line 196, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/utils/_pytree.py", line 196, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 57, in _create_differentiable
return tree_map(create_differentiable, inps)
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/_functorch/pytree_hacks.py", line 12, in <listcomp>
[fn_(arg) for arg in flat_args]
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/_functorch/pytree_hacks.py", line 12, in tree_map_
[fn_(arg) for arg in flat_args]
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 1243, in wrapper
tree_map_(partial(_create_differentiable, level=level), diff_args)
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/home/cornelius/Projects/CalibrationBound/test_functorch_pytrees.py", line 30, in <module>
grad, val = FT.grad_and_value(f, argnums=(0,1))(a, b)
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/cornelius/anaconda3/envs/general/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
ValueError: Thing passed to transform API must be Tensor, got <class '__main__.DotDict'>
```
However, if `a` is a plain `dict`, it works fine.
It would be nice if the definition of pytrees were extended to duck-typed iterables. I do not see why the example above shouldn't work.
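One possible workaround is registering the custom container as a pytree node via the private `torch.utils._pytree` helper; the helper's name and exact flatten/unflatten signature are assumptions here and may change between releases:
```python
# Workaround sketch (private API, signature assumed): make DotDict a pytree node.
from torch.utils import _pytree as pytree

pytree._register_pytree_node(
    DotDict,
    lambda d: (list(d.values()), list(d.keys())),     # flatten -> (children, context)
    lambda values, keys: DotDict(zip(keys, values)),  # unflatten(children, context)
)
```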
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
2,866 | 99,310 |
Torch func Documentation for trees
|
module: docs, triaged, module: functorch
|
### 📚 The doc issue
`functorch` / `torch.func` seems to support pytrees similar to `JAX`. However, there seems to be no documentation on that.
For example, this works:
```
import torch.func as FT
import torch
a = {
"A": torch.randn(2,2),
"inner": {
"B": torch.randn(2,2),
"C": torch.randn(2,2)
}
}
b = [
torch.randn(2,2),
torch.randn(2,2),
]
def f(x, y):
return (x["A"] + x["inner"]["B"] @ x["inner"]["C"]).sum() + y[0].sum() + y[1].sum()
grad, val = FT.grad_and_value(f, argnums=(0,1))(a, b)
print(grad)
```
But I couldn't find anything in the docs, nor does there seem to be a `tree_utils` package like in JAX.
### Suggest a potential alternative/fix
It would be useful to clarify what is possible and what isn't.
cc @svekars @carljparker @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 0 |
2,867 | 99,305 |
CI for s390x
|
module: ci, triaged, enhancement
|
It'd be great to have s390x CI for PyTorch. It would help improve s390x support in PyTorch.
It's possible to request an s390x machine for PyTorch CI. The only question is who will manage it: the PyTorch maintainers or us. For me, the first option would be more convenient, but the second one is also possible.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 4 |
2,868 | 99,304 |
Need TransformerEncoder to output attention map
|
oncall: transformer/mha, topic: new features
|
### 🚀 The feature, motivation and pitch
I need to observe the attention strength between each element in a sequence and all the other elements of the same sequence. This is called the attention map in transformer terminology and is one of the important quantities for visualization. I want the output to be of shape num_layers x [batch_size, sequence_length, sequence_length].
The MultiheadAttention layer can return the attention map when `need_weights` is passed in the forward call; however, it is always set to `False` by the forward call of the `TransformerEncoder` layers. It would be better if we could specify `need_weights` in `TransformerEncoder`'s forward call.
## Proposed working changes:
File:
[torch/nn/modules/transformer.py](https://github.com/pytorch/pytorch/blob/148d49260aa29ba6d4c8354a019f64a27503d5b8/torch/nn/modules/transformer.py)
### Before:
```
...
class TransformerEncoder(Module):
...
def forward(
self,
src: Tensor,
mask: Optional[Tensor] = None,
src_key_padding_mask: Optional[Tensor] = None,
is_causal: Optional[bool] = None) -> Tensor:
...
for mod in self.layers:
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
...
return output
...
class TransformerEncoderLayer(Module):
...
x = src
if self.norm_first:
x = x + self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)
x = x + self._ff_block(self.norm2(x))
else:
x = self.norm1(x + self._sa_block(x, src_mask, src_key_padding_mask))
x = self.norm2(x + self._ff_block(x))
return x
# self-attention block
def _sa_block(self, x: Tensor,
attn_mask: Optional[Tensor], key_padding_mask: Optional[Tensor]) -> Tensor:
x = self.self_attn(x, x, x,
attn_mask=attn_mask,
key_padding_mask=key_padding_mask,
need_weights=False)[0]
return self.dropout1(x)
...
```
### After:
```
...
class TransformerEncoder(Module):
...
def forward(
self,
src: Tensor,
mask: Optional[Tensor] = None,
src_key_padding_mask: Optional[Tensor] = None,
is_causal: Optional[bool] = None,
need_weights: Optional[bool] = False) -> Tensor:
...
attn_maps = []
for mod in self.layers:
output, attn_map = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers, need_weights=need_weights)
attn_maps.append(attn_map)
...
return output, attn_maps
...
class TransformerEncoderLayer(Module):
...
x = src
if self.norm_first:
_x, attn_map = self._sa_block(self.norm1(x), src_mask, src_key_padding_mask, need_weights)
x = x + _x
x = x + self._ff_block(self.norm2(x))
else:
_x, attn_map = self._sa_block(x, src_mask, src_key_padding_mask, need_weights)
x = self.norm1(x + _x)
x = self.norm2(x + self._ff_block(x))
return x, attn_map
# self-attention block
def _sa_block(self, x: Tensor,
attn_mask: Optional[Tensor], key_padding_mask: Optional[Tensor], need_weights: Optional[bool] = False) -> Tensor:
x, attn_map = self.self_attn(x, x, x,
attn_mask=attn_mask,
key_padding_mask=key_padding_mask,
need_weights=need_weights)
return self.dropout1(x), attn_map
...
```
### Additional context
Pytorch version: 2.0.0
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 3 |
2,869 | 99,299 |
There is an implementation bug in LTC IrBuilder's MakeSizeMul method.
|
triaged, lazy, module: lazy
|
The default implementation of LTC IrBuilder's MakeSizeMul calls the wrong method. The code in question is listed below:
```c++
static inline NodePtr MakeSizeMul(const Value& a, const Value& b) {
return getIrBuilder()->MakeSizeAdd(a, b);
}
```
It is located in `pytorch/torch/csrc/lazy/core/ir_builder.h`, and the current `main` branch also has this bug.
### Versions
The nightly main branch.
| 1 |
2,870 | 99,297 |
Slicing and indexing support negative steps
|
feature, triaged, module: advanced indexing
|
### 🚀 The feature, motivation and pitch
At present, PyTorch (2.0) is not compatible with the following code and raises the error 'step must be greater than zero':
```
a=torch.arange(12).float()
print(a[::-1])
```
At the same time, this has always been available in NumPy. Is it possible for this basic feature to be added to PyTorch in the near future?
### Alternatives
An alternative to this is using torch.flip(a, dims=[0]), but it would be great if [::-1] could be used on both 0d/1d and multi-dimensional tensors.
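For reference, a small sketch of the torch.flip alternative mentioned above:
```python
import torch

a = torch.arange(12).float()
print(torch.flip(a, dims=[0]))    # NumPy's a[::-1]

m = torch.arange(12).reshape(3, 4)
print(torch.flip(m, dims=[1]))    # NumPy's m[:, ::-1]
```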
### Additional context
_No response_
| 2 |
2,871 | 99,293 |
Automatically set dropout for SDPA depending on training mode / `training` argument
|
enhancement, oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
Hi,
`torch.nn.functional.dropout` has a `training` argument that allows dropout to be enabled/disabled during inference. In a module, it simplifies the code: one can simply pass `training=self.training` instead of `p=self.dropout_p if self.training else 0.0`. It also avoids a Python control flow (which dynamo may not like).
```python
a = torch.rand(20, 30)
b = torch.nn.functional.dropout(a, p=0.5, training=False)
```
It would be great to have a similar option in `torch.nn.functional.scaled_dot_product_attention`, as e.g. `eval()` does not change the behavior (dummy example):
```python
import torch
import torch.nn as nn
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.q = nn.Parameter(torch.rand(1, 64, 32, 64, dtype=torch.float16).to("cuda"))
self.k = nn.Parameter(torch.rand(1, 64, 32, 64, dtype=torch.float16).to("cuda"))
self.v = nn.Parameter(torch.rand(1, 64, 32, 64, dtype=torch.float16).to("cuda"))
def forward(self, x):
res = torch.nn.functional.scaled_dot_product_attention(self.q, self.k, self.v, is_causal=True, dropout_p=0.8) * x
return res
model = MyModel()
model = model.eval()
inp = torch.Tensor([3.]).to("cuda", torch.float16)
with torch.inference_mode():
res = model(inp)
print(res)
"""
prints
tensor([[[[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[1.0000, 0.4944, 0.9648, ..., 0.9160, 0.9844, 1.2383],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[3.4277, 1.6973, 2.8184, ..., 3.3672, 2.6348, 2.9453]],
...
"""
```
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikekgfb
### Alternatives
Adding a controlflow
### Additional context
/
| 2 |
2,872 | 99,287 |
Add `TORCH_ASSERT_ONLY_METHOD_OPERATORS` to functorch codebase
|
triaged, better-engineering, module: functorch
|
### 🚀 The feature, motivation and pitch
`TORCH_ASSERT_ONLY_METHOD_OPERATORS` was introduced to the native codebase a few months ago, and it brings us a significant speed-up in terms of incremental compilation because the codebase doesn't need to be re-compiled every time an operator is changed or added.
Therefore, I would like to propose adding `TORCH_ASSERT_ONLY_METHOD_OPERATORS` to the functorch codebase.
If this sounds like a good idea, I can propose PRs doing so.
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 3 |
2,873 | 99,278 |
Build error in libstdc++ header stl_algobase.h on riscv
|
module: build, good first issue, triaged
|
### 🐛 Describe the bug
I am currently trying to build PyTorch using the latest master commit (fdbc8625a1a255cc7156cbbe54f681851de96c5f) on riscv. While I have so far encountered one other issue that was clearly caused by compiling for riscv, this error might be a general one:
```c++
[ 78%] Building CXX object test_api/CMakeFiles/test_api.dir/init.cpp.o
In file included from /usr/include/c++/12/memory:63,
from /home/user/git/pytorch/third_party/googletest/googletest/include/gtest/gtest.h:57,
from /home/user/git/pytorch/test/cpp/api/dataloader.cpp:1:
In static member function _static _Tp* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(const _Tp*, const _Tp*, _Tp*) [with _Tp = long unsigned int; bool _IsMove = false]_,
inlined from __OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]_ at /usr/include/c++/12/bits/stl_algobase.h:495:30,
inlined from __OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]_ at /usr/include/c++/12/bits/stl_algobase.h:522:42,
inlined from __OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false; _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<lo
ng unsigned int*, vector<long unsigned int> >]_ at /usr/include/c++/12/bits/stl_algobase.h:529:31,
inlined from __OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long u
nsigned int> >]_ at /usr/include/c++/12/bits/stl_algobase.h:620:7,
inlined from _std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = long unsigned int; _Alloc = std::allocator<long unsigned int>]_ at /usr/include/c++/12
/bits/vector.tcc:244:21:
/usr/include/c++/12/bits/stl_algobase.h:431:30: error: argument 1 null where non-null expected [-Werror=nonnull]
431 | __builtin_memmove(__result, __first, sizeof(_Tp) * _Num);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_algobase.h:431:30: note: in a call to built-in function _void* __builtin_memmove(void*, const void*, long unsigned int)_
[ 78%] Building CXX object test_api/CMakeFiles/test_api.dir/jit.cpp.o
At global scope:
cc1plus: note: unrecognized command-line option _-Wno-aligned-allocation-unavailable_ may have been intended to silence earlier diagnostics
cc1plus: note: unrecognized command-line option _-Wno-unused-private-field_ may have been intended to silence earlier diagnostics
cc1plus: note: unrecognized command-line option _-Wno-invalid-partial-specialization_ may have been intended to silence earlier diagnostics
cc1plus: some warnings being treated as errors
gmake[2]: *** [test_api/CMakeFiles/test_api.dir/build.make:118: test_api/CMakeFiles/test_api.dir/dataloader.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
```
My first guess is that this is an unlucky combination of the libstdc++ version, compiler version, and the `-Werror=nonnull` flag. I will try to set up the same build on an x86 machine, but this might take some time. I will also try to compile using the tagged 2.0.0 version and report if this changes anything.
Note for riscv compilation (In case someone wants to reproduce this exactly):
The third-party lib SLEEF (https://github.com/shibatch/sleef) will only compile with the small fix from https://github.com/shibatch/sleef/pull/448. It is e.g. possible to compile SLEEF separately with the fix included and use `USE_SYSTEM_SLEEF=ON` for compiling PyTorch.
Addition: Kineto also does not build for now, and can be disabled with `USE_KINETO=0`.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux bookworm/sid (riscv64)
GCC version: (Debian 12.2.0-10) 12.2.0
Clang version: 14.0.6
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.10.9 (main, Dec 7 2022, 13:47:07) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-starfive-riscv64-with-glibc2.36
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: riscv64
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Versions of relevant libraries:
[pip3] numpy==1.24.2
[conda] Could not collect
cc @malfet @seemethere
| 5 |
2,874 | 99,272 |
[MPS] Add support for autocast in MPS
|
triaged, open source, module: amp (automated mixed precision), ciflow/trunk, release notes: jit, ciflow/mps
|
Fixes https://github.com/pytorch/pytorch/issues/88415
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 12 |
2,875 | 99,270 |
Remove lr_scheduler.print_lr
|
module: optimizer, triaged, module: LrScheduler
|
### 📚 The doc issue
The purpose of `print_lr` is unclear, and its arguments are undocumented.
Also, the API is counter-intuitive. For example, in a Python API we do not write `print('hello', verbose=False)` to avoid printing something, yet here `verbose` is even a required argument. Instead, the user should put an if condition before calling the print function.
Related thread: https://discuss.pytorch.org/t/how-to-use-print-lr-in-the-lr-scheduler/132420
Source: https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L113-L124
### Suggest a potential alternative/fix
I suggest removing that function since users can still simply get the current LR via `get_last_lr` and design the printing message on their own.
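A minimal sketch of what that user-side replacement could look like:
```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=10)

verbose = True
if verbose:  # a plain Python condition instead of print_lr(verbose, ...)
    print(f"learning rate(s): {scheduler.get_last_lr()}")
```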
cc @vincentqb @jbschlosser @albanD @janeyx99
| 1 |
2,876 | 99,269 |
[MPS] Add lu_factor
|
open source, release notes: mps, ciflow/mps
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #99269
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at d75cde1</samp>
Added MPS support and autograd formulas for LU factorization of tensors. Implemented the `linalg_lu_factor` and `linalg_lu_factor.out` functions for the MPS backend in `LinearAlgebra.mm` and added tests in `test_mps.py`. Added the corresponding dispatch entries in `native_functions.yaml` and the backward and forward formulas in `derivatives.yaml`.
| 5 |
2,877 | 99,268 |
Embedding layer tensor shape
|
needs reproduction, triaged, module: embedding
|
### 🐛 Describe the bug
I use captum.attr.LayerIntegratedGradients to get some insight into the embedding layer. During the process, the embedding layer is called multiple times. On one occasion, it shows a bizarre behavior in terms of the tensor shape that I can hardly explain. In trace mode, I ran these commands:
```
y = x[:1,:1]
y
tensor([[242]], dtype=torch.int32)
self.embedding_matrix
Embedding(12507, 300, padding_idx=0)
z = self.embedding_matrix(y)
z.shape
torch.Size([1600, 20, 300])
type(self.embedding_matrix)
<class 'torch.nn.modules.sparse.Embedding'>
z = self.embedding_matrix(y.long())
z.shape
torch.Size([1600, 20, 300])
```
<img width="367" alt="image" src="https://user-images.githubusercontent.com/30732870/232330519-37aa53f0-77a5-4e7a-8abc-7572457824d4.png">
My expectation was to get a [1,1,300] matrix, and not [1600, 20, 300].
### Versions
Versions:
I have tried both 2.0.0 and 1.9.0, and the same behavior was observed.
| 1 |
2,878 | 99,265 |
the error message of torch.addcmul is wrong
|
module: error checking, triaged
|
### 🐛 Describe the bug
```
import torch
result={}
arg_1 = torch.rand([], dtype=torch.float64)
arg_2 = 0.5
result = torch.addcmul(tensor2=arg_1,value=arg_2,)
print(result)
```
The code above throws the exception `TypeError: addcmul() received an invalid combination of arguments - got unrecognized keyword arguments: tensor2`. But according to the [doc](https://pytorch.org/docs/stable/generated/torch.addcmul.html), `tensor2` is a keyword argument. I think the error message should be `Error:addcmul() missing 2 required positional argument: "input", "tensor1"`
### Versions
pytorch version: 2.0.0+cu118
cc @svekars @carljparker @malfet
| 2 |
2,879 | 99,248 |
tools PYTHONPATH trick in run_test.py does not work reliably
|
triaged, module: testing
|
### 🐛 Describe the bug
Steps to reproduce:
1. python setup.py develop in torchtext
2. Try to run `run_test.py`
Expected result: all imports work
Actual result: the tools/ import fails (and is suppressed). This is because we accidentally pick up the tools/ folder in torchtext first, before our newly added path gets triggered.
I tried to fix this in https://github.com/pytorch/pytorch/pull/99173 but it didn't work for other reasons.
The most reliable fix would be to move these files into torch/_testing so that they are in a proper module.
### Versions
master
| 0 |
2,880 | 99,247 |
Broken link for torch dynamo FAQ in docs
|
triaged, topic: docs
|
### 📚 The doc issue
I was reading the docs and learning about torch dynamo (it's very cool ;)). There was a broken link that redirects the user to a "dead end" page. I created a PR to correct the link in the docs.
#99242
### Suggest a potential alternative/fix
Correct the link as in the PR (maybe?)
| 0 |
2,881 | 99,246 |
Adding MPS support for 3D convolutions
|
triaged, open source, Stale, ciflow/trunk, release notes: mps, ciflow/mps
|
Fixes #77818
- this pull request enables 3D convolutions (forward/backward) for MPS (Apple Silicon) within the same Convolution.mm file as conv2d.
- does not support channel_last (since pytorch doesn't implement channel_last for 3D tensors)
- does not support conv3d_transpose and treats depth-separable convolutions not as normal case (there are no MPS kernels available for either of those so far)
- requires MacOS >=13.2 (Ventura), I'm not sure how to add this specific case to MPSGraphVenturaOps.h
@kulinseth @albanD could you please check whether this is implemented as intended by you and whether we would require additional tests in test_mps.py?
| 20 |
2,882 | 99,230 |
vision_maskrcnn failing in periodic/trunk
|
triaged, bug, oncall: pt2
|
<img width="1668" alt="image" src="https://user-images.githubusercontent.com/4984825/232229096-cabf4a9d-23bc-4b3d-ab8e-ebc85e4b6171.png">
Failures appear to start with #98923.
Note: I also tried and failed to bisect an issue with vision_maskrcnn on inductor earlier this week, because looking at the pt2 dashboard it looked like there was a drop between these revisions. However, when bisecting, I found that all revs were failing, but the error message changed at some point.
Good: 670c5cf96249db28cde757da5a6aa97569760102
Bad: c4f81cb6f49fe69f3a04867f9c445923af69113
cc @ezyang @soumith @msaroufim @ngimel @bdhirsh
| 1 |
2,883 | 99,225 |
Libtorch consumes too much memory as 16225
|
module: cpp, module: memory usage, triaged
|
### 🐛 Describe the bug
I met the same problem as https://github.com/pytorch/pytorch/issues/16255 when using the latest libtorch. I converted resnet152 from torchvision to a TorchScript model. The following is my code:
```c++
#include <torch/torch.h>
#include <torch/script.h>
#include <nvToolsExt.h>
#include <unistd.h>
#include <iostream>
#include <string>
#include <vector>
#include <chrono>
#include <pthread.h>
void run_inference(torch::jit::script::Module& model, std::vector<torch::jit::IValue>& imgs)
{
for (int i = 0; i < 100; ++i)
model.forward(imgs);
torch::cuda::synchronize();
}
int main(int arc, char** argv) {
c10::InferenceMode guard(true);
torch::Device device(torch::kCUDA);
torch::Tensor img = torch::rand({4, 3, 224, 224}).to(device);
std::vector<torch::jit::IValue> inputs;
inputs.push_back(img);
std::string model_path = argv[1];
// torch::jit::script::Module model = torch::jit::load(model_path);
auto model = torch::jit::load(model_path);
model.to(device);
if(torch::cuda::is_available()) {
std::cout << "CUDA now is available ! Inference on GPU." << std::endl;
}
// warm up
run_inference(model, inputs);
nvtxRangePushA("MEASURE");
auto start = std::chrono::high_resolution_clock::now();
run_inference(model, inputs);
auto stop = std::chrono::high_resolution_clock::now();
nvtxRangePop();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
std::cout << "Running time : " << duration.count() << " ms" << std::endl;
}
```
pytorch memory: 740MB
libtorch memory: 2000MB
I was confused...
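For what it's worth, a diagnostic sketch (my addition, in Python for brevity) of how I'd separate what the caching allocator actually holds from what a process-level tool reports; the model path here is hypothetical:
```python
import torch

model = torch.jit.load("resnet152.pt").cuda().eval()   # hypothetical path
x = torch.rand(4, 3, 224, 224, device="cuda")
with torch.inference_mode():
    for _ in range(100):
        model(x)
torch.cuda.synchronize()
# Tensors actually allocated vs. memory cached by the allocator; process-level
# tools additionally count the CUDA context itself.
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")
```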
### Versions
libtorch build-version: 2.0.0+cu118
hardware: RTX 4090Ti
cc @jbschlosser
| 5 |
2,884 | 99,222 |
intermittent inductor segfault in CI (test_ops.py)
|
triaged, bug, oncall: pt2, module: inductor
|
https://github.com/pytorch/pytorch/actions/runs/4705446491/jobs/8346194860
Filing for tracking; this appears to be an intermittent issue in trunk. Confirmed retroactively that it has been showing up at least for the last week or so, but less than once a day, so it's probably going to be hard to find unless you can get some info from the logs.
cc @ezyang @soumith @msaroufim @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 0 |
2,885 | 99,218 |
Sporadic CUDA error in `test_nccl_warn_not_in_group_debug_detail`
|
oncall: distributed
|
Running
```
CUDA_LAUNCH_BLOCKING=1 CUDA_VISIBLE_DEVICES=0,7 numactl -C 2 python test/distributed/test_c10d_nccl.py -k test_nccl_warn_not_in_group_debug_detail --repeat 20
```
fails sporadically with an error like:
```
Exception raised from c10_cuda_check_implementation at /fsx/users/andgu/work/pytorch/c10/cuda/CUDAException.cpp:44 (most recent call first):
```
<details>
<summary> Full stack trace </summary>
```
INFO:torch.testing._internal.common_distributed:Started process 0 with pid 3154296
INFO:torch.testing._internal.common_distributed:Started process 1 with pid 3154297
INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 0
INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 1
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 1
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:2 to store for rank: 1
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:2 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
[E ProcessGroupNCCL.cpp:830] [Rank 0] NCCL watchdog thread terminated with exception: CUDA error: driver shutting down
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /fsx/users/andgu/work/pytorch/c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f5a5cea89ec in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f5a5ce6c65a in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3cc (0x7f5a5cf3d24c in /fsx/users/andgu/work/pytorch/torch/lib/libc10_cuda.so)
frame #3: c10d::ProcessGroupNCCL::WorkNCCL::startedGPUExecutionInternal() const + 0x6c (0x7f5a5ddaf86c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::WorkNCCL::isStarted() + 0x78 (0x7f5a5ddb3388 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #5: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x201 (0x7f5a5ddc15e1 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #6: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x8c (0x7f5a5ddc187c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #8: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #9: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 0] NCCL watchdog thread terminated with exception: CUDA error: driver shutting down
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /fsx/users/andgu/work/pytorch/c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f5a5cea89ec in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f5a5ce6c65a in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3cc (0x7f5a5cf3d24c in /fsx/users/andgu/work/pytorch/torch/lib/libc10_cuda.so)
frame #3: c10d::ProcessGroupNCCL::WorkNCCL::startedGPUExecutionInternal() const + 0x6c (0x7f5a5ddaf86c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::WorkNCCL::isStarted() + 0x78 (0x7f5a5ddb3388 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #5: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x201 (0x7f5a5ddc15e1 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #6: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x8c (0x7f5a5ddc187c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #8: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #9: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154296:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: std::vector<c10::Type::SingletonOrSharedTypePtr<c10::Type>, std::allocator<c10::Type::SingletonOrSharedTypePtr<c10::Type> > >::~vector() + 0 (0x7f5a729b6140 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_python.so)
frame #3: <unknown function> + 0x15d122c (0x7f5a67a3a22c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #4: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x48 (0x7f5a72651178 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x15d1ed4 (0x7f5a67a3aed4 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #6: std::_Sp_counted_ptr_inplace<torch::jit::Operator, std::allocator<torch::jit::Operator>, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x50d (0x7f5a6b20b76d in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x4d9bca8 (0x7f5a6b204ca8 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x468a7 (0x7f5aa414c8a7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #9: on_exit + 0 (0x7f5aa414ca60 in /lib/x86_64-linux-gnu/libc.so.6)
frame #10: <unknown function> + 0x1168d7 (0x561c0ca438d7 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #11: <unknown function> + 0x116903 (0x561c0ca43903 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #12: <unknown function> + 0x116952 (0x561c0ca43952 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #13: PyRun_SimpleStringFlags + 0x4d (0x561c0ca44dae in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #14: <unknown function> + 0x118fdf (0x561c0ca45fdf in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #15: Py_BytesMain + 0x39 (0x561c0cb84729 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #16: __libc_start_main + 0xf3 (0x7f5aa412a083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #17: <unknown function> + 0x1e6995 (0x561c0cb13995 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
SIGABRT(6), PID: 3154296, Thread 3154350:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f5aa421899f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x292ec9 (0x7f5a73582ec9 in /lib/x86_64-linux-gnu/libcuda.so)
frame #4: <unknown function> + 0x34d9ab (0x7f5a7363d9ab in /lib/x86_64-linux-gnu/libcuda.so)
frame #5: <unknown function> + 0x2957f8 (0x7f5a735857f8 in /lib/x86_64-linux-gnu/libcuda.so)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154386:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::fatalSignalHandler(int) + 0x152 (0x7f5a5ceafa62 in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: gsignal + 0xcb (0x7f5aa414900b in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: abort + 0x12b (0x7f5aa4128859 in /lib/x86_64-linux-gnu/libc.so.6)
frame #5: __gnu_cxx::__verbose_terminate_handler() + 0xbc (0x7f5a750ed84a in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0xabf47 (0x7f5a750ebf47 in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0xabf7d (0x7f5a750ebf7d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #8: <unknown function> + 0xabf44 (0x7f5a750ebf44 in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #9: <unknown function> + 0xc5e08a (0x7f5a5dbbf08a in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #11: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #12: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154387:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: epoll_wait + 0x5e (0x7f5aa422546e in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: gloo::transport::tcp::Loop::run() + 0x6a (0x7f5a6d8def2a in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154391:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f5aa4307376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: std::condition_variable::wait(std::unique_lock<std::mutex>&) + 0x9 (0x7f5a751044cb in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #4: c10d::ProcessGroupGloo::runLoop(int) + 0x221 (0x7f5a6b7e0371 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154392:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f5aa4307376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: std::condition_variable::wait(std::unique_lock<std::mutex>&) + 0x9 (0x7f5a751044cb in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #4: c10d::ProcessGroupGloo::runLoop(int) + 0x221 (0x7f5a6b7e0371 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154395:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_timedwait + 0x271 (0x7f5aa43077d1 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0xef (0x7f5a5ddc14cf in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x8c (0x7f5a5ddc187c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
[E ProcessGroupNCCL.cpp:830] [Rank 1] NCCL watchdog thread terminated with exception: CUDA error: driver shutting down
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /fsx/users/andgu/work/pytorch/c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f12c6fb69ec in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f12c6f7a65a in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3cc (0x7f12c704b24c in /fsx/users/andgu/work/pytorch/torch/lib/libc10_cuda.so)
frame #3: c10d::ProcessGroupNCCL::WorkNCCL::startedGPUExecutionInternal() const + 0x6c (0x7f12c7ebd86c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::WorkNCCL::isStarted() + 0x78 (0x7f12c7ec1388 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #5: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x201 (0x7f12c7ecf5e1 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #6: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x8c (0x7f12c7ecf87c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0xc819d (0x7f12df21619d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #8: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #9: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 1] NCCL watchdog thread terminated with exception: CUDA error: driver shutting down
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /fsx/users/andgu/work/pytorch/c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f12c6fb69ec in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f12c6f7a65a in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3cc (0x7f12c704b24c in /fsx/users/andgu/work/pytorch/torch/lib/libc10_cuda.so)
frame #3: c10d::ProcessGroupNCCL::WorkNCCL::startedGPUExecutionInternal() const + 0x6c (0x7f12c7ebd86c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::WorkNCCL::isStarted() + 0x78 (0x7f12c7ec1388 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #5: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x201 (0x7f12c7ecf5e1 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #6: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x8c (0x7f12c7ecf87c in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0xc819d (0x7f12df21619d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #8: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #9: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154396:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: epoll_wait + 0x5e (0x7f5aa422546e in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: gloo::transport::tcp::Loop::run() + 0x6a (0x7f5a6d8def2a in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154397:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f5aa4307376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: std::condition_variable::wait(std::unique_lock<std::mutex>&) + 0x9 (0x7f5a751044cb in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #4: c10d::ProcessGroupGloo::runLoop(int) + 0x221 (0x7f5a6b7e0371 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154398:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f5aa4307376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: std::condition_variable::wait(std::unique_lock<std::mutex>&) + 0x9 (0x7f5a751044cb in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #4: c10d::ProcessGroupGloo::runLoop(int) + 0x221 (0x7f5a6b7e0371 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xc819d (0x7f5a7510819d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154399:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f5aa421899f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x292ec9 (0x7f5a73582ec9 in /lib/x86_64-linux-gnu/libcuda.so)
frame #4: <unknown function> + 0x34d9ab (0x7f5a7363d9ab in /lib/x86_64-linux-gnu/libcuda.so)
frame #5: <unknown function> + 0x2957f8 (0x7f5a735857f8 in /lib/x86_64-linux-gnu/libcuda.so)
frame #6: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154502:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f5aa421899f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x308ab43 (0x7f5a5ffebb43 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154503:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f5aa4307376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: <unknown function> + 0x308a150 (0x7f5a5ffeb150 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154524:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f5aa421899f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x308ab43 (0x7f5a5ffebb43 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154296, Thread 3154560:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f5a5ceaf51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f5aa430c420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f5aa4307376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: <unknown function> + 0x308a150 (0x7f5a5ffeb150 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x8609 (0x7f5aa4300609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: clone + 0x43 (0x7f5aa4225133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154297:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: c10::impl::OperatorEntry::getKernelForDispatchKey(c10::DispatchKey) const + 0x38 (0x7f12d1ae9088 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #3: c10::impl::OperatorEntry::computeDispatchTableEntryWithDebug(c10::Dispatcher const&, c10::DispatchKey) const + 0xb8 (0x7f12d1aea338 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #4: c10::impl::OperatorEntry::computeDispatchTableEntry(c10::Dispatcher const&, c10::DispatchKey) const + 0xd (0x7f12d1aea55d in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: c10::impl::OperatorEntry::updateDispatchTableEntry_(c10::Dispatcher const&, c10::DispatchKey) + 0xa9 (0x7f12d1aea619 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #6: c10::impl::OperatorEntry::updateDispatchTable_(c10::Dispatcher const&, c10::DispatchKey) + 0x311 (0x7f12d1aeaa21 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #7: c10::impl::OperatorEntry::deregisterKernel_(c10::Dispatcher const&, c10::optional<c10::DispatchKey>, std::_List_iterator<c10::impl::AnnotatedKernel>) + 0x405 (0x7f12d1af04f5 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #8: c10::Dispatcher::deregisterImpl_(c10::OperatorHandle const&, c10::OperatorName const&, c10::optional<c10::DispatchKey>, std::_List_iterator<c10::impl::AnnotatedKernel>) + 0x52 (0x7f12d1adddd2 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0xe38abd (0x7f12c7ea7abd in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0x468a7 (0x7f130e25a8a7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #11: on_exit + 0 (0x7f130e25aa60 in /lib/x86_64-linux-gnu/libc.so.6)
frame #12: <unknown function> + 0x1168d7 (0x55bd6ef718d7 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #13: <unknown function> + 0x116903 (0x55bd6ef71903 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #14: <unknown function> + 0x116952 (0x55bd6ef71952 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #15: PyRun_SimpleStringFlags + 0x4d (0x55bd6ef72dae in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #16: <unknown function> + 0x118fdf (0x55bd6ef73fdf in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #17: Py_BytesMain + 0x39 (0x55bd6f0b2729 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
frame #18: __libc_start_main + 0xf3 (0x7f130e238083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #19: <unknown function> + 0x1e6995 (0x55bd6f041995 in /fsx/users/andgu/conda/envs/pytorch/bin/python)
SIGABRT(6), PID: 3154297, Thread 3154351:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f130e32699f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x292ec9 (0x7f12dd690ec9 in /lib/x86_64-linux-gnu/libcuda.so)
frame #4: <unknown function> + 0x34d9ab (0x7f12dd74b9ab in /lib/x86_64-linux-gnu/libcuda.so)
frame #5: <unknown function> + 0x2957f8 (0x7f12dd6937f8 in /lib/x86_64-linux-gnu/libcuda.so)
frame #6: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154389:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::fatalSignalHandler(int) + 0x152 (0x7f12c6fbda62 in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: gsignal + 0xcb (0x7f130e25700b in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: abort + 0x12b (0x7f130e236859 in /lib/x86_64-linux-gnu/libc.so.6)
frame #5: __gnu_cxx::__verbose_terminate_handler() + 0xbc (0x7f12df1fb84a in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0xabf47 (0x7f12df1f9f47 in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0xabf7d (0x7f12df1f9f7d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #8: <unknown function> + 0xabf44 (0x7f12df1f9f44 in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #9: <unknown function> + 0xc5e08a (0x7f12c7ccd08a in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0xc819d (0x7f12df21619d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #11: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #12: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154390:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: epoll_wait + 0x5e (0x7f130e33346e in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: gloo::transport::tcp::Loop::run() + 0x6a (0x7f12d79ecf2a in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0xc819d (0x7f12df21619d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154393:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f130e415376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: std::condition_variable::wait(std::unique_lock<std::mutex>&) + 0x9 (0x7f12df2124cb in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #4: c10d::ProcessGroupGloo::runLoop(int) + 0x221 (0x7f12d58ee371 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xc819d (0x7f12df21619d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154394:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f130e415376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: std::condition_variable::wait(std::unique_lock<std::mutex>&) + 0x9 (0x7f12df2124cb in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #4: c10d::ProcessGroupGloo::runLoop(int) + 0x221 (0x7f12d58ee371 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xc819d (0x7f12df21619d in /fsx/users/andgu/conda/envs/pytorch/bin/../lib/libstdc++.so.6)
frame #6: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154400:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f130e32699f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x292ec9 (0x7f12dd690ec9 in /lib/x86_64-linux-gnu/libcuda.so)
frame #4: <unknown function> + 0x34d9ab (0x7f12dd74b9ab in /lib/x86_64-linux-gnu/libcuda.so)
frame #5: <unknown function> + 0x2957f8 (0x7f12dd6937f8 in /lib/x86_64-linux-gnu/libcuda.so)
frame #6: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154434:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f130e32699f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x292ec9 (0x7f12dd690ec9 in /lib/x86_64-linux-gnu/libcuda.so)
frame #4: <unknown function> + 0x34d9ab (0x7f12dd74b9ab in /lib/x86_64-linux-gnu/libcuda.so)
frame #5: <unknown function> + 0x2957f8 (0x7f12dd6937f8 in /lib/x86_64-linux-gnu/libcuda.so)
frame #6: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154525:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: __poll + 0x4f (0x7f130e32699f in /lib/x86_64-linux-gnu/libc.so.6)
frame #3: <unknown function> + 0x308ab43 (0x7f12ca0f9b43 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 3154297, Thread 3154559:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f12c6fbd51b in /fsx/users/andgu/work/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14420 (0x7f130e41a420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #2: pthread_cond_wait + 0x216 (0x7f130e415376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #3: <unknown function> + 0x308a150 (0x7f12ca0f9150 in /fsx/users/andgu/work/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x8609 (0x7f130e40e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: clone + 0x43 (0x7f130e333133 in /lib/x86_64-linux-gnu/libc.so.6)
F
======================================================================
FAIL: test_nccl_warn_not_in_group_debug_detail (__main__.CommTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/fsx/users/andgu/work/pytorch/torch/testing/_internal/common_distributed.py", line 541, in wrapper
self._join_processes(fn)
File "/fsx/users/andgu/work/pytorch/torch/testing/_internal/common_distributed.py", line 760, in _join_processes
self._check_return_codes(elapsed_time)
File "/fsx/users/andgu/work/pytorch/torch/testing/_internal/common_distributed.py", line 835, in _check_return_codes
self.assertEqual(
File "/fsx/users/andgu/work/pytorch/torch/testing/_internal/common_utils.py", line 3031, in assertEqual
raise error_metas[0].to_error(
AssertionError: Scalars are not equal!
Expected 0 but got -6.
Absolute difference: 6
Relative difference: inf
Expected zero exit code but got -6 for pid: 3154296
----------------------------------------------------------------------
Ran 1 test in 16.997s
FAILED (failures=1)
```
</details>
Note that the `numactl -C 2` helps increase the failure rate.
Possibly, this is a duplicate of https://github.com/pytorch/pytorch/issues/90848 but needs an expert to confirm.
Thanks @Aidyn-A for flagging this!
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 1 |
2,886 | 99,201 |
opacus_cifar10 fails in dynamo due to hooks
|
triaged, bug, oncall: pt2, module: dynamo
|
After adding support for hooks on allowed modules via graph-breaks, opacus_cifar10 crashes during dynamo tracing with
`python benchmarks/dynamo/torchbench.py --only opacus_cifar10 --backend aot_eager --accuracy --inference`
```
/scratch/whc/work/pytorch/torch/_dynamo/bytecode_analysis.py(190)stacksize_analysis()
-> assert low >= 0
```
It looks like the crash is happening while tracing some exception handling code, relating to hooks on GradSampleModule in opacus.
cc @ezyang @soumith @msaroufim @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 2 |
2,887 | 99,200 |
Unused `import torch` followed by `cuml.NearestNeighbors` leads to nondeterministic segfault (during Python process exit?)
|
needs reproduction, module: crash, triaged
|
### 🐛 Describe the bug
I've also filed this at https://github.com/rapidsai/cuml/issues/5343, since it appears there is some interaction between an (unused) `import torch` and a `cuml.NearestNeighbors` call. There are also more details (including instructions and templates for launching an EC2 GPU node that repros the issue) at [ryan-williams/torch-cuml-gpu-segfault](https://github.com/ryan-williams/torch-cuml-gpu-segfault).
Below is a summary of the repro; I've run into it on AWS `p3.2xlarge` instances, with a variety of "Deep Learning" AMIs published by Amazon and NVIDIA, and using Pytorch 1.12.1 and 1.13.1.
`neighbors.py`:
```python
import numpy as np
np.random.seed(123)
X = np.random.random((10, 2))
# This unused import, when executed before the cuml block below, leads to a segfault
# (seemingly during Python process cleanup) on ≈10% of runs.
import torch
# Adapted from scanpy.neighbors.compute_neighbors_rapids:
# https://github.com/scverse/scanpy/blob/1.8.2/scanpy/neighbors/__init__.py#L318-L338
from cuml.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=X.shape[1], metric='euclidean')
X_contiguous = np.ascontiguousarray(X, dtype=np.float32)
nn.fit(X_contiguous)
# This line also required for segfault
knn_indices, knn_distances = nn.kneighbors(X_contiguous)
# This print always occurs, segfault happens later (during Python process exit?)
print('Done!')
```
Exercising this code repeatedly exits 139 (segmentation fault) about ≈10% of the time:
```bash
n=30
for i in `seq $n`; do
python neighbors.py && echo "$i/$n: ✅" || echo "$i/$n: ❌ ($?)"
done
# Done!
# 1/30: ✅
# Done!
# 2/30: ✅
# Done!
# 3/30: ✅
# Done!
# 4/30: ✅
# Done!
# 5/30: ✅
# Done!
# Segmentation fault (core dumped)
# 6/30: ❌ (139)
# Done!
# 7/30: ✅
# Done!
# 8/30: ✅
# Done!
# 9/30: ✅
# Done!
# 10/30: ✅
# Done!
# 11/30: ✅
# Done!
# 12/30: ✅
# Done!
# 13/30: ✅
# Done!
# 14/30: ✅
# Done!
# 15/30: ✅
# Done!
# 16/30: ✅
# Done!
# Segmentation fault (core dumped)
# 17/30: ❌ (139)
# Done!
# 18/30: ✅
# Done!
# 19/30: ✅
# Done!
# Segmentation fault (core dumped)
# 20/30: ❌ (139)
# Done!
# 21/30: ✅
# Done!
# 22/30: ✅
# Done!
# Segmentation fault (core dumped)
# 23/30: ❌ (139)
# Done!
# 24/30: ✅
# Done!
# Segmentation fault (core dumped)
# 25/30: ❌ (139)
# Done!
# 26/30: ✅
# Done!
# 27/30: ✅
# Done!
# 28/30: ✅
# Done!
# 29/30: ✅
# Done!
# 30/30: ✅
```
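One debugging addition I would suggest (my suggestion, not part of the original repro): enable `faulthandler` at the top of `neighbors.py` so a crash dumps the Python stack. It may or may not help if the segfault happens during interpreter teardown:
```python
# Put at the very top of neighbors.py (debugging aid, my suggestion).
import faulthandler
faulthandler.enable()  # dumps the Python stack on SIGSEGV and similar signals
```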
`environment.yml`:
```yml
channels:
- pytorch
- rapidsai
- nvidia
- conda-forge
dependencies:
- python==3.9.13
- cuml==22.12.00[build=cuda11_py39*]
- numpy==1.23.5
- pytorch==1.13.1
```
<details><summary><code>nvidia-smi</code></summary>
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:1E.0 Off | 0 |
| N/A 32C P0 48W / 300W | 2323MiB / 16384MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 43189 C python 2321MiB |
+-----------------------------------------------------------------------------+
```
</details>
I originally hit this issue with CUDA 11.6, `cuml==22.06.01`, and Torch 1.12.1, but the repro persists on CUDA 11.7, `cuml==22.12.00`, and Torch 1.13.1, as documented here.
<details><summary>Here's <code>nvidia-smi</code> from a CUDA 11.6 / Driver Version 510.47.03 AMI where I first saw the error</summary>
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:1E.0 Off | 0 |
| N/A 26C P0 24W / 300W | 0MiB / 16384MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
</details>
### Versions
Here are two environments where I see the issue:
<details><summary><code>python collect_env.py</code> (Pytorch version 1.13.1 / cudatoolkit 11.7.0)</summary>
```
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.26
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 450.142.00
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2697.660
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.03
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.1
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.7.0 hd8887f6_10 nvidia
[conda] mkl 2023.0.0 h6d00ec8_25399
[conda] numpy 1.23.5 py39h3d75532_0 conda-forge
[conda] pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
```
</details>
<details><summary><code>python collect_env.py</code> (Pytorch version 1.12.1 / cudatoolkit 11.6.0)</summary>
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:25:59) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 3000.000
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.01
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] numpyro==0.11.0
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.12.1
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.1.0.post1
[pip3] torch-scatter==2.1.0
[pip3] torch-sparse==0.6.16
[pip3] torchmetrics==0.10.1
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.6.0 habf752d_9 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.22.4 py39hc58783e_0 conda-forge
[conda] numpyro 0.11.0 pypi_0 pypi
[conda] pyg 2.1.0 py39_torch_1.12.0_cu116 pyg
[conda] pytorch 1.12.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cluster 1.6.0 py39_torch_1.12.0_cu116 pyg
[conda] pytorch-lightning 1.7.7 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-scatter 2.1.0 py39_torch_1.12.0_cu116 pyg
[conda] pytorch-sparse 0.6.16 py39_torch_1.12.0_cu116 pyg
[conda] torchmetrics 0.10.1 pypi_0 pypi
```
</details>
| 2 |
2,888 | 99,176 |
Add Debug builds for python with pydebug
|
triaged, module: devx
|
Requested by @albanD:
## The Ask
We need to very seriously consider adding a pydebug CI build. We have major bugs like https://github.com/pytorch/pytorch/pull/91684#issuecomment-1504062950 that keep going unnoticed. Dynamo is especially at risk, and it is the cornerstone of PT2.
## pydebug
pydebug is, basically, a version of CPython with internal asserts enabled. We rely on a lot of low-level details in CPython, so we break these asserts regularly for a variety of reasons.
## Installation
More details: https://devguide.python.org/getting-started/setup-building/#unix
Depending on the environment, maybe you will have a package for it. Otherwise you might need to build CPython from source (which we already do anyways) with an extra flag.
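As a sketch (my own), a CI step could also assert that the interpreter it got really is a debug build before running the suite:
```python
import sys
import sysconfig

# Py_DEBUG is set for --with-pydebug builds; sys.gettotalrefcount only exists
# on debug interpreters, so either check works as a guard.
is_pydebug = bool(sysconfig.get_config_var("Py_DEBUG")) or hasattr(sys, "gettotalrefcount")
assert is_pydebug, "expected a CPython pydebug build for this CI job"
```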
## Jobs added
Re the config that should be built, any one config would be fine. `linux cpu-only` may be best since it's the cheapest.
cc @kit1980 @huydhn @clee2000
| 3 |
2,889 | 99,160 |
Run ChatRWKV on MBP(intel CPU)+eGPU[rx6800 16G], returns a very big number -9223372036854775808, looks like overflow
|
triaged, module: mps
|
### 🐛 Describe the bug
Running [ChatRWKV](https://github.com/BlinkDL/ChatRWKV) using 'mps' returns a very big number that looks like an overflow.
MBP(intel CPU, not M1/M2), with eGPU[rx6800 16G]
pytorch==2.0.0
It can load the model, but when calculating the 1st token it gets a very big number `-9223372036854775808` and returns an error.
error log:
```
Bob: hi
len(tokens), tokens[]: 8 [26845, 27, 14260, 187, 187, 2422, 547, 27]
Alice:
token, out: -9223372036854775808 tensor([ -5.4473, -25.4831, -6.7508, ..., -7.4079, -6.1975, -4.1842],
device='mps:0')
len(tokens), tokens[]: 1 [-9223372036854775808]
Traceback (most recent call last):
File "/Users/ppt/Github/ChatRWKV/v2/chat.py", line 474, in <module>
on_message(msg)
File "/Users/ppt/Github/ChatRWKV/v2/chat.py", line 387, in on_message
out = run_rnn([token], newline_adj=newline_adj)
File "/Users/ppt/Github/ChatRWKV/v2/chat.py", line 163, in run_rnn
out, model_state = model.forward(tokens[:CHUNK_LEN], model_state)
File "/Users/ppt/Github/ChatRWKV/v2/../rwkv_pip_package/src/rwkv/model.py", line 563, in forward
x = w['emb.weight'][tokens if seq_mode else tokens[0]]
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
(ChatRWKV) ppt@pptdeMacBook-Pro v2 %
```
I tried rolling back to a previous version of pytorch (1.12.1/1.13.1); it emits a warning msg and falls back to CPU, but it returns a correct number:
```
Bob: 你好
len(tokens), tokens[]: 10 [26845, 27, 209, 24553, 34439, 187, 187, 2422, 547, 27]
Alice:/Users/ppt/Github/ChatRWKV/v2/../rwkv_pip_package/src/rwkv/utils.py:68: UserWarning: The operator 'aten::sort.values_stable' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
sorted_ids = torch.argsort(probs)
token, out: 27 tensor([ -5.4373, -25.4622, -6.7405, ..., -7.3974, -6.1903, -4.1751],
device='mps:0')
len(tokens), tokens[]: 1 [27]
:
token, out: 24553 tensor([-1.0000e+09, -2.1039e+01, -1.5952e+00, ..., -1.3116e+00,
-6.4459e-01, -1.3537e-01], device='mps:0')
len(tokens), tokens[]: 1 [24553]
你
token, out: 34439 tensor([-1.0000e+09, -2.2532e+01, -6.4443e+00, ..., -7.8328e+00,
-7.3615e+00, -5.4704e+00], device='mps:0')
len(tokens), tokens[]: 1 [34439]
好
token, out: 30676 tensor([-1.0000e+09, -1.7130e+01, 1.8807e+00, ..., -9.7331e-01,
-2.4166e-01, 1.1678e-01], device='mps:0')
len(tokens), tokens[]: 1 [30676]
token, out: 221 tensor([-1.0000e+09, -3.8395e+01, -1.6493e+01, ..., -2.1061e+01,
-2.0698e+01, -1.8379e+01], device='mps:0')
len(tokens), tokens[]: 1 [221]
啊
token, out: 26680 tensor([-1.0000e+09, -1.5628e+01, 3.8878e+00, ..., 5.0439e-01,
1.1729e+00, 2.6680e+00], device='mps:0')
len(tokens), tokens[]: 1 [26680]
!
token, out: 15367 tensor([-1.0000e+09, -1.8554e+01, 4.5207e-01, ..., -1.0022e+00,
-3.3723e-01, 1.5153e+00], device='mps:0')
```
https://github.com/BlinkDL/ChatRWKV/issues/93
### Versions
(ChatRWKV) ppt@pptdeMacBook-Pro ChatRWKV % python ./collect_env.py
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2.1 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 (main, Mar 21 2023, 13:41:39) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i5-1038NG7 CPU @ 2.00GHz
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchvision==0.15.1
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchaudio 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
(ChatRWKV) ppt@pptdeMacBook-Pro ChatRWKV %
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
2,890 | 99,155 |
TORCH_COMPILE_ABLATE envvar
|
low priority, triaged, enhancement, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
I propose we add this envvar. Here's what it does:
* `TORCH_COMPILE_ABLATE=inductor`: stop using inductor in all torch.compile calls, if it was being used
* `TORCH_COMPILE_ABLATE=aot`: stop using AOTAutograd in all torch.compile calls, if it was being used. This implies inductor
* anything else people want to add. For example, we could ablate optimizations, etc
You can use this to easily pinpoint if a particular component is causing a problem.
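A rough sketch of how the envvar could be honored (the name and the mapping to existing debug backends are my assumptions, not an implemented API):
```python
import os
import torch

def compile_with_ablation(model, **kwargs):
    # Hypothetical wrapper: map the proposed envvar onto existing backends.
    ablate = os.environ.get("TORCH_COMPILE_ABLATE", "")
    if ablate == "aot":
        kwargs["backend"] = "eager"      # drop AOTAutograd (and hence inductor)
    elif ablate == "inductor":
        kwargs["backend"] = "aot_eager"  # keep AOTAutograd, skip inductor codegen
    return torch.compile(model, **kwargs)
```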
Bikeshedding the name is also OK; another proposal is `TORCH_COMPILE_DISABLE`.
### Versions
master
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @soumith @ngimel @desertfire
| 4 |
2,891 | 99,149 |
Spectral Normalization can not be applied to Conv{1,2,3}d
|
module: nn, triaged, needs research
|
### 🐛 Describe the bug
I would like to raise a concern about the spectral_norm parameterization.
I strongly believe that the Spectral-Normalization parameterization introduced several versions ago does not work for Conv{1,2,3}d layers.
The reason is that reshaping the weight into a 2D matrix is not enough.
An easy fix could be obtained by rescaling the parameterized weights by a factor of 1/(k1*k2)**0.5, where k1, k2 are the spatial dimensions of the kernel filters. However, note that this should only solve the problem for the stride=1 case.
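A minimal sketch of that proposed rescaling (my suggestion only, valid for stride=1 and bias=False, since then the layer is linear in its weight):
```python
import torch
from torch.nn.utils.parametrizations import spectral_norm

conv = spectral_norm(torch.nn.Conv2d(200, 200, 7, padding=7 // 2, bias=False),
                     n_power_iterations=40)
k1, k2 = conv.kernel_size

def rescaled_conv(x):
    # Dividing the output by sqrt(k1 * k2) is equivalent to rescaling the
    # parameterized weight, because the layer has no bias.
    return conv(x) / (k1 * k2) ** 0.5
```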
By running the following code
```
import torch
from torch.nn.utils.parametrizations import spectral_norm
original_conv = torch.nn.Conv2d(200, 200, 7, padding=7//2, bias=False)
lip_conv = spectral_norm(original_conv, n_power_iterations=40)
with torch.no_grad():
# Initializing original_conv weight to higher values
for p in original_conv.parameters():
p += 10*torch.rand(200, 200, 7, 7)
# Estimating the Lipschitz constant on several inputs
lip_cons = 0.
for _ in range(100):
dim = torch.randint(7, 100, [1]).item()
x = 100*torch.rand(1, 200, dim, dim)
lip_cons = max(lip_cons, lip_conv(x).norm(2)/x.norm(2))
print(f'Lipschitz constant of lip_conv is greater than 1: {lip_cons}')
# Power method applied to the weights of the lip_conv
w = lip_conv.weight
x = torch.randn(1, 200, 250, 250)
for _ in range(40):
x = torch.nn.functional.conv2d(x, w, padding=7//2,)
sigma = x.norm(2)
x /= sigma
print(f'Estimated Lipschitz constant of lip_conv: {sigma} ~ 7')
print()
```
you should obtain something like
```
Lipschitz constant of lip_conv is greater than 1: 5.880033016204834
Estimated Lipschitz constant of lip_conv: 6.902995586395264 ~ 7
```
Is there something I am doing wrong? If not, I would like to ask the developers to note clearly in the documentation that this feature is not valid for convolutional layers. This would avoid misuse of this implementation.
Thanks for your time,
Fabio
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.4 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-76-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-DGXS-32GB
GPU 1: Tesla V100-DGXS-32GB
GPU 2: Tesla V100-DGXS-32GB
GPU 3: Tesla V100-DGXS-32GB
Nvidia driver version: 515.65.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1485.012
CPU max MHz: 3600,0000
CPU min MHz: 1200,0000
BogoMIPS: 4397.23
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.0.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_3 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.0 py310_cu117 pytorch
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
2,892 | 99,147 |
`torch.sparse.sum` backward fails when reducing over dense dimensions.
|
module: sparse, module: autograd, triaged
|
### 🐛 Describe the bug
As per title. To reproduce:
```python
In [1]: import torch
In [2]: def make_args(x, dim):
...: x_g = x.clone().requires_grad_(True)
...: y_g = torch.sparse.sum(x_g, dim=dim)
...: return x_g, y_g
...:
In [3]: idx = torch.tensor([[0, 0, 0], [0, 1, 2]])
In [4]: val = torch.rand(3, 5, 5)
In [5]: x = torch.sparse_coo_tensor(idx, val, (5, 5, 5, 5))
In [6]: x_g, y_g = make_args(x, -1)
In [7]: torch.autograd.grad(y_g, x_g, torch.ones(*y_g.shape).to_sparse(y_g.sparse_dim()))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [7], in <cell line: 1>()
----> 1 torch.autograd.grad(y_g, x_g, torch.ones(*y_g.shape).to_sparse(y_g.sparse_dim()))
File ~/git/Quansight/pytorch/torch/autograd/__init__.py:319, in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, is_grads_batched, materialize_grads)
317 result = _vmap_internals._vmap(vjp, 0, 0, allow_none_pass_through=True)(grad_outputs_)
318 else:
--> 319 result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
320 t_outputs, grad_outputs_, retain_graph, create_graph, t_inputs,
321 allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass
322 if materialize_grads:
323 result = tuple(output if output is not None else torch.zeros_like(input, requires_grad=True)
324 for (output, input) in zip(result, t_inputs))
RuntimeError: The expanded size of the tensor (3) must match the existing size (25) at non-singleton dimension 0. Target sizes: [3, 5, 5]. Tensor sizes: [25, 5, 1]
```
No issues like that when reducing over sparse dimensions.
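As a hypothetical sanity check, the analogous call over a sparse dimension (reusing `make_args` and `x` from the snippet above) goes through:
```python
# dim 0 is one of the two sparse dimensions of x
x_g, y_g = make_args(x, 0)
grad_out = torch.ones(*y_g.shape).to_sparse(y_g.sparse_dim())
torch.autograd.grad(y_g, x_g, grad_out)
```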
### Versions
Current master.
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer @ezyang @albanD @zou3519 @gqchen @soulitzer @Lezcano @Varal7
| 2 |
2,893 | 99,143 |
No documentation to show how to implement aten::view for custom backend
|
module: cpp-extensions, module: docs, triaged
|
### 📚 The doc issue
The original code is:
```py
x = torch.empty([1024], device='privateuseone:0')
y = x.view([2, -1]) # raises an error: aten::view is missing for this backend
```
Then I get following errors:
```txt
NotImplementedError: Could not run 'aten::view' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::view' is only available for ..
```
According to the interface declarations in the PyTorch source code, the extension looks like this:
```cpp
static at::Tensor __view(c10::DispatchKeySet ks, const at::Tensor & self, c10::SymIntArrayRef size) {
return at::_ops::view::redispatch(ks, self, size);
}
TORCH_LIBRARY_IMPL(aten, Antares, m) {
m.impl("view", __view);
}
```
However, it results in an infinite recursive call of this function and ends with a stack overflow.
I don't think `x.view([2, -1])` should really require the user to define its own implementation. If such a definition is a must, what documentation can I refer to in order to implement it correctly?
### Suggest a potential alternative/fix
A documentation example of how to implement a custom `aten::view`, or any simpler solution to the reshape problem above.
cc @malfet @zou3519 @svekars @carljparker
| 0 |
2,894 | 99,142 |
More Nested Tensor Functionality (layer_norm, cross_entropy / log_softmax&nll_loss)
|
triaged, module: nestedtensor, topic: new features
|
### 🚀 The feature, motivation and pitch
I am working on Graphs. Right now I have a model running that takes a subgraph and does some predictions.
To improve throughput I want to batch multiple subgraphs of different sizes together.
Padding them to the same size does not work in my case: I use an aggregation operation where I don't want to aggregate the padded neighbours, and masking the padded neighbours is not possible.
I tried modifying my model to support nested tensors as input, which somewhat worked, but I had to cut out some unsupported operations, specifically layer_norm.
Also currently there are no supported loss functions, so a cross_entropy or nll_loss (and log_softmax) that supports nested tensors would be a big usability upgrade.
Also, some error messages related to nested tensors point to [https://github.com/pytorch/nestedtensor](https://github.com/pytorch/nestedtensor), which I suspect is no longer correct since nested tensors were moved into core.
### Alternatives
I tried implementing layer_norm myself using the currently supported nested ops, but was not successful.
The issue is the "a/sqrt(b)" calculation, which I did not get to work without a .pow() or element-wise division of two nested tensors.
For the loss function I can work around it by unbinding and stacking the output nested tensors, but this is very ugly.
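A rough sketch of that workaround (the shapes and the 10-class setup are made up for illustration):
```python
import torch
import torch.nn.functional as F

# Two "subgraphs" with 3 and 5 nodes, 10 output classes each.
nt_logits = torch.nested.nested_tensor([torch.randn(3, 10), torch.randn(5, 10)])
targets = [torch.randint(10, (3,)), torch.randint(10, (5,))]

# Unbind the nested tensor back into per-graph tensors and concatenate them
# so a regular dense loss can be applied.
flat_logits = torch.cat(nt_logits.unbind(), dim=0)
flat_targets = torch.cat(targets, dim=0)
loss = F.cross_entropy(flat_logits, flat_targets)
```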
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg
| 1 |
2,895 | 99,140 |
Why does nn.Upsample/F.interpolate followed by nn.InstanceNorm2d report the error "Unsupported: ONNX export of instance_norm for unknown channel size."?
|
module: onnx, triaged, module: norms and normalization
|
### 🐛 Describe the bug
Hi~
When I run **python main.py**, the error is
`torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of instance_norm for unknown channel size. [Caused by the value 'input.199 defined in (%input.199 : Float(*, *, *, *, strides=[262144, 1024, 32, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%input.195, %middle.1.weight, %middle.1.bias), scope: __main__.InpaintGenerator::/torch.nn.modules.container.Sequential::middle/torch.nn.modules.conv.Conv2d::middle.1`
I have already tried:
1. Changing the opset from 9 to 17
2. Different versions of PyTorch (1.9, 1.10, 1.12, 1.13, 2.0)
But after removing the IN layers, it works. So how can this be solved?
Thanks~
By the way, the code of the network architecture is from https://github.com/tsingqguo/misf.
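To help isolate the problem, here is a much smaller sketch of the pattern that I suspect triggers it (interpolating to an input-dependent size before an InstanceNorm2d); I have not verified that this standalone snippet fails in exactly the same way:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = nn.InstanceNorm2d(256, track_running_stats=False)

    def forward(self, x):
        # Resize to a size computed from the input shape, as KPN.forward does below.
        x = F.interpolate(x, size=(x.shape[-2] // 2, x.shape[-1] // 2), mode='nearest')
        return self.norm(x)

torch.onnx.export(Tiny().eval(), torch.randn(1, 256, 64, 64), 'tiny_in.onnx',
                  opset_version=14, input_names=['x'], output_names=['y'])
```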
**Code File 1: ./main.py**
import torch
import torch.nn as nn
from kpn.network import KernelConv
import kpn.utils as kpn_utils
import numpy as np
class BaseNetwork(nn.Module):
def __init__(self):
super(BaseNetwork, self).__init__()
def init_weights(self, init_type='normal', gain=0.02):
def init_func(m):
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
if init_type == 'normal':
nn.init.normal_(m.weight.data, 0.0, gain)
elif init_type == 'xavier':
nn.init.xavier_normal_(m.weight.data, gain=gain)
elif init_type == 'kaiming':
nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
elif init_type == 'orthogonal':
nn.init.orthogonal_(m.weight.data, gain=gain)
if hasattr(m, 'bias') and m.bias is not None:
nn.init.constant_(m.bias.data, 0.0)
elif classname.find('BatchNorm2d') != -1:
nn.init.normal_(m.weight.data, 1.0, gain)
nn.init.constant_(m.bias.data, 0.0)
self.apply(init_func)
class InpaintGenerator(BaseNetwork):
def __init__(self, config=None, residual_blocks=8, init_weights=True):
super(InpaintGenerator, self).__init__()
# self.filter_type = config.FILTER_TYPE
# self.kernel_size = config.kernel_size
self.encoder0 = nn.Sequential(
nn.ReflectionPad2d(3),
nn.Conv2d(in_channels=4, out_channels=64, kernel_size=7, padding=0),
nn.InstanceNorm2d(64, track_running_stats=False),
nn.ReLU(True)
)
self.encoder1 = nn.Sequential(
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(128, track_running_stats=False),
nn.ReLU(True)
)
self.encoder2 = nn.Sequential(
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(256, track_running_stats=False),
nn.ReLU(True)
)
blocks = []
for _ in range(residual_blocks):
block = ResnetBlock(256, 2)
blocks.append(block)
self.middle = nn.Sequential(*blocks)
self.middle = nn.Sequential(
# nn.Conv2d(256, 256, 3, 1, 1),
# nn.InstanceNorm2d(256)
nn.ReflectionPad2d(2),
spectral_norm(nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=0, dilation=2, bias=not False), False),
nn.InstanceNorm2d(256),
nn.ReLU(True),
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(128, track_running_stats=False),
nn.ReLU(True),
nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(64, track_running_stats=False),
nn.ReLU(True),
nn.ReflectionPad2d(3),
nn.Conv2d(in_channels=64, out_channels=3, kernel_size=7, padding=0),
)
self.kernel_pred = KernelConv(kernel_size=[3], sep_conv=False, core_bias=False)
self.kpn_model = kpn_utils.create_generator()
if init_weights:
self.init_weights()
def forward(self, x):
inputs = x.clone()
x = self.encoder0(x) # 64*256*256
x = self.encoder1(x) # 128*128*128
kernels, kernels_img = self.kpn_model(inputs, x)
x = self.encoder2(x) # 256*64*64
x = self.kernel_pred(x, kernels, white_level=1.0, rate=1)
x = self.middle(x) # 256*64*64
x = self.decoder(x) # 3*256*256
x = self.kernel_pred(x, kernels_img, white_level=1.0, rate=1)
x = (torch.tanh(x) + 1) / 2
return x
def save_feature(self, x, name):
x = x.cpu().numpy()
np.save('./result/{}'.format(name), x)
class ResnetBlock(nn.Module):
def __init__(self, dim, dilation=1, use_spectral_norm=False):
super(ResnetBlock, self).__init__()
self.conv_block = nn.Sequential(
nn.ReflectionPad2d(dilation),
spectral_norm(nn.Conv2d(in_channels=dim, out_channels=dim, kernel_size=3, padding=0, dilation=dilation, bias=not use_spectral_norm), use_spectral_norm),
nn.InstanceNorm2d(dim),
nn.ReLU(True),
nn.ReflectionPad2d(1),
spectral_norm(nn.Conv2d(in_channels=dim, out_channels=dim, kernel_size=3, padding=0, dilation=1, bias=not use_spectral_norm), use_spectral_norm),
nn.InstanceNorm2d(dim),
)
def forward(self, x):
out = x + self.conv_block(x)
return out
def spectral_norm(module, mode=True):
if mode:
return nn.utils.spectral_norm(module)
return module
if __name__ == '__main__':
import pdb
x = torch.randn(1, 4, 128, 128)
model = InpaintGenerator()
model.eval()
torch.set_grad_enabled(False)
torch.onnx.export(
model, x, 'test.onnx',
opset_version=14, # 9 ~ 14
input_names=['x'],
output_names=['output'],
)
**Code File 2: ./kpn/utils.py**
import json
import math
import os
import cv2
import numpy as np
import skimage
import torch
import kpn.network as network
# ----------------------------------------
# Network
# ----------------------------------------
def create_generator():
generator = network.KPN()
return generator
def load_dict(process_net, pretrained_net):
# Get the dict from pre-trained network
pretrained_dict = pretrained_net
# Get the dict from processing network
process_dict = process_net.state_dict()
# Delete the extra keys of pretrained_dict that do not belong to process_dict
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in process_dict}
# Update process_dict using pretrained_dict
process_dict.update(pretrained_dict)
# Load the updated dict to processing network
process_net.load_state_dict(process_dict)
return process_net
def save_model(config, iteration, generator, t=''):
model_name = '{}_KPN_bs_{}_{}.pth'.format(iteration, config.BATCH_SIZE, t)
save_model_path = os.path.join(config.kpn_model_save_path)
if not os.path.exists(save_model_path):
os.mkdir(save_model_path)
save_model_path = os.path.join(save_model_path, model_name)
if len(config.GPU) > 1:
torch.save(generator.module.state_dict(), save_model_path)
print('mul_gpu_The trained model is successfully saved at iteration {}'.format(iteration))
else:
torch.save(generator.state_dict(), save_model_path)
print('The trained model is successfully saved at iteration {}'.format(iteration))
# ----------------------------------------
# Validation and Sample at training
# ----------------------------------------
def save_sample_png(sample_folder, sample_name, img_list, name_list, pixel_max_cnt = 255, height = -1, width = -1):
if not os.path.exists(sample_folder):
os.mkdir(sample_folder)
# Save image one-by-one
for i in range(len(img_list)):
img = img_list[i]
# Recover normalization
img = img * 255.0
# Process img_copy and do not destroy the data of img
#print(img.size())
img_copy = img.clone().data.permute(0, 2, 3, 1).cpu().numpy()
img_copy = np.clip(img_copy, 0, pixel_max_cnt)
img_copy = img_copy.astype(np.uint8)[0, :, :, :]
img_copy = cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB)
if (height != -1) and (width != -1):
img_copy = cv2.resize(img_copy, (width, height))
# Save to certain path
save_img_name = sample_name + '_' + name_list[i] + '.png'
save_img_path = os.path.join(sample_folder, save_img_name)
aa = img_copy[img_copy > 255]
b = img_copy[img_copy < 0]
cv2.imwrite(save_img_path, img_copy)
return img_copy
def save_sample_png_test(sample_folder, sample_name, img_list, name_list, pixel_max_cnt = 255):
# Save image one-by-one
for i in range(len(img_list)):
img = img_list[i]
# Recover normalization
img = img * 255.0
# Process img_copy and do not destroy the data of img
img_copy = img.clone().data.permute(0, 2, 3, 1).cpu().numpy()
img_copy = np.clip(img_copy, 0, pixel_max_cnt)
img_copy = img_copy.astype(np.uint8)[0, :, :, :]
img_copy = img_copy.astype(np.float32)
img_copy = cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB)
# Save to certain path
save_img_name = sample_name + '_' + name_list[i] + '.png'
save_img_path = os.path.join(sample_folder, save_img_name)
cv2.imwrite(save_img_path, img_copy)
def recover_process(img, height = -1, width = -1):
img = img * 255.0
img_copy = img.clone().data.permute(0, 2, 3, 1).cpu().numpy()
img_copy = np.clip(img_copy, 0, 255)
img_copy = img_copy.astype(np.uint8)[0, :, :, :]
img_copy = img_copy.astype(np.float32)
img_copy = cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB)
if (height != -1) and (width != -1):
img_copy = cv2.resize(img_copy, (width, height))
return img_copy
def psnr(pred, target):
#print(pred.shape)
#print(target.shape)
mse = np.mean( (pred - target) ** 2 )
if mse == 0:
return 100
PIXEL_MAX = 255.0
return 20 * math.log10(PIXEL_MAX / math.sqrt(mse))
def grey_psnr(pred, target, pixel_max_cnt = 255):
pred = torch.sum(pred, dim = 0)
target = torch.sum(target, dim = 0)
mse = torch.mul(target - pred, target - pred)
rmse_avg = (torch.mean(mse).item()) ** 0.5
p = 20 * np.log10(pixel_max_cnt * 3 / rmse_avg)
return p
def ssim(pred, target):
pred = pred.clone().data.permute(0, 2, 3, 1).cpu().numpy()
target = target.clone().data.permute(0, 2, 3, 1).cpu().numpy()
target = target[0]
pred = pred[0]
ssim = skimage.measure.compare_ssim(target, pred, multichannel = True)
return ssim
# ----------------------------------------
# PATH processing
# ----------------------------------------
def check_path(path):
if not os.path.exists(path):
os.makedirs(path)
def savetxt(name, loss_log):
np_loss_log = np.array(loss_log)
np.savetxt(name, np_loss_log)
#rain100H/L / SPA
def get_files(path):
if path is None:
return []
with open(path, 'r') as j:
f_list = json.load(j)
return f_list
def get_jpgs(path):
# read a folder, return the image name
ret = []
for root, dirs, files in os.walk(path):
for filespath in files:
ret.append(filespath)
return ret
def get_last_2paths(path):
# read a folder, return the image name
ret = []
for root, dirs, files in os.walk(path):
for filespath in files:
if filespath[-4:] == '.png':
wholepath = os.path.join(root, filespath)
last_2paths = os.path.join(wholepath.split('/')[-2], wholepath.split('/')[-1])
ret.append(last_2paths)
return ret
def text_readlines(filename):
# Try to read a txt file and return a list.Return [] if there was a mistake.
try:
file = open(filename, 'r')
except IOError:
error = []
return error
content = file.readlines()
# This for loop deletes the EOF (like \n)
for i in range(len(content)):
content[i] = content[i][:len(content[i])-1]
file.close()
return content
def text_save(content, filename, mode = 'a'):
# save a list to a txt
# Try to save a list variable in txt file.
file = open(filename, mode)
for i in range(len(content)):
file.write(str(content[i]))
file.close()
**Code File 3: ./kpn/network.py**
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
# ----------------------------------------
# Initialize the networks
# ----------------------------------------
def weights_init(net, init_type = 'normal', init_gain = 0.02):
def init_func(m):
classname = m.__class__.__name__
if hasattr(m, 'weight') and classname.find('Conv') != -1:
if init_type == 'normal':
torch.nn.init.normal_(m.weight.data, 0.0, init_gain)
elif init_type == 'xavier':
torch.nn.init.xavier_normal_(m.weight.data, gain = init_gain)
elif init_type == 'kaiming':
torch.nn.init.kaiming_normal_(m.weight.data, a = 0, mode = 'fan_in')
elif init_type == 'orthogonal':
torch.nn.init.orthogonal_(m.weight.data, gain = init_gain)
else:
raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
elif classname.find('BatchNorm2d') != -1:
torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
torch.nn.init.constant_(m.bias.data, 0.0)
# apply the initialization function <init_func>
print('initialize network with %s type' % init_type)
net.apply(init_func)
# ----------------------------------------
# Kernel Prediction Network (KPN)
# ----------------------------------------
class Basic(nn.Module):
def __init__(self, in_ch, out_ch, g=16, channel_att=False, spatial_att=False):
super(Basic, self).__init__()
self.channel_att = channel_att
self.spatial_att = spatial_att
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=in_ch, out_channels=out_ch, kernel_size=3, stride=1, padding=1),
# nn.BatchNorm2d(out_ch),
nn.ReLU(),
nn.Conv2d(in_channels=out_ch, out_channels=out_ch, kernel_size=3, stride=1, padding=1),
# nn.BatchNorm2d(out_ch),
nn.ReLU(),
nn.Conv2d(in_channels=out_ch, out_channels=out_ch, kernel_size=3, stride=1, padding=1),
# nn.BatchNorm2d(out_ch),
nn.ReLU()
)
if channel_att:
self.att_c = nn.Sequential(
nn.Conv2d(2*out_ch, out_ch//g, 1, 1, 0),
nn.ReLU(),
nn.Conv2d(out_ch//g, out_ch, 1, 1, 0),
nn.Sigmoid()
)
if spatial_att:
self.att_s = nn.Sequential(
nn.Conv2d(in_channels=2, out_channels=1, kernel_size=7, stride=1, padding=3),
nn.Sigmoid()
)
def forward(self, data):
"""
Forward function.
:param data:
:return: tensor
"""
fm = self.conv1(data)
if self.channel_att:
# fm_pool = F.adaptive_avg_pool2d(fm, (1, 1)) + F.adaptive_max_pool2d(fm, (1, 1))
fm_pool = torch.cat([F.adaptive_avg_pool2d(fm, (1, 1)), F.adaptive_max_pool2d(fm, (1, 1))], dim=1)
att = self.att_c(fm_pool)
fm = fm * att
if self.spatial_att:
fm_pool = torch.cat([torch.mean(fm, dim=1, keepdim=True), torch.max(fm, dim=1, keepdim=True)[0]], dim=1)
att = self.att_s(fm_pool)
fm = fm * att
return fm
class KPN(nn.Module):
def __init__(self, kernel_size=[3], sep_conv=False, channel_att=False, spatial_att=False, upMode='bilinear', core_bias=False):
super(KPN, self).__init__()
self.upMode = upMode
self.core_bias = core_bias
self.kernel_size = kernel_size
in_channel = 4
out_channel = 64 * (self.kernel_size[0] ** 2)
self.conv1 = Basic(in_channel, 64, channel_att=False, spatial_att=False) # 256*256
self.conv2 = Basic(64, 128, channel_att=False, spatial_att=False) # 128*128
self.conv3 = Basic(128 + 128, 256, channel_att=False, spatial_att=False) # 64*64
self.conv4 = Basic(256, 512, channel_att=False, spatial_att=False)
self.conv7 = Basic(256 + 512, 256, channel_att=channel_att, spatial_att=spatial_att)
self.conv8 = Basic(256 + 256, 128, channel_att=channel_att, spatial_att=spatial_att)
self.conv9 = Basic(128 + 64, 64, channel_att=channel_att, spatial_att=spatial_att)
self.kernels = nn.Conv2d(256, out_channel, 1, 1, 0)
out_channel_img = 3 * (self.kernel_size[0] ** 2)
self.core_img = nn.Conv2d(64, out_channel_img, 1, 1, 0)
self.kernel_pred = KernelConv(kernel_size, sep_conv, self.core_bias)
self.conv_final = nn.Conv2d(in_channels=12, out_channels=3, kernel_size=3, stride=1, padding=1)
self.iteration = 0
def forward(self, data_with_est, x):
conv1 = self.conv1(data_with_est) #64*256*256
conv2 = self.conv2(F.avg_pool2d(conv1, kernel_size=2, stride=2)) # 128*128*128
conv2 = torch.cat([conv2, x], dim=1)
conv3 = self.conv3(F.avg_pool2d(conv2, kernel_size=2, stride=2)) # 256*64*64
kernels = self.kernels(conv3)
kernels = kernels.unsqueeze(dim=0)
# kernels = F.interpolate(input=kernels, size=(256*9, 64, 64), mode='nearest')
kernels = F.interpolate(input=kernels, size=(256*9, data_with_est.shape[-1]//4, data_with_est.shape[-2]//4), mode='nearest')
kernels = kernels.squeeze(dim=0)
conv4 = self.conv4(conv3)
conv7 = self.conv7(torch.cat([conv3, conv4], dim=1))
conv8 = self.conv8(torch.cat([conv2, F.interpolate(conv7, scale_factor=2, mode=self.upMode)], dim=1))
conv9 = self.conv9(torch.cat([conv1, F.interpolate(conv8, scale_factor=2, mode=self.upMode)], dim=1))
core_img = self.core_img(conv9)
return kernels, core_img
class KernelConv(nn.Module):
"""
the class of computing prediction
"""
def __init__(self, kernel_size=[5], sep_conv=False, core_bias=False):
super(KernelConv, self).__init__()
self.kernel_size = sorted(kernel_size)
self.sep_conv = sep_conv
self.core_bias = core_bias
def _sep_conv_core(self, core, batch_size, N, color, height, width):
"""
convert the sep_conv core to conv2d core
2p --> p^2
:param core: shape: batch*(N*2*K)*height*width
:return:
"""
kernel_total = sum(self.kernel_size)
core = core.view(batch_size, N, -1, color, height, width)
if not self.core_bias:
core_1, core_2 = torch.split(core, kernel_total, dim=2)
else:
core_1, core_2, core_3 = torch.split(core, kernel_total, dim=2)
# output core
core_out = {}
cur = 0
for K in self.kernel_size:
t1 = core_1[:, :, cur:cur + K, ...].view(batch_size, N, K, 1, 3, height, width)
t2 = core_2[:, :, cur:cur + K, ...].view(batch_size, N, 1, K, 3, height, width)
core_out[K] = torch.einsum('ijklno,ijlmno->ijkmno', [t1, t2]).view(batch_size, N, K * K, color, height, width)
cur += K
# it is a dict
return core_out, None if not self.core_bias else core_3.squeeze()
def _convert_dict(self, core, batch_size, N, color, height, width):
"""
make sure the core to be a dict, generally, only one kind of kernel size is suitable for the func.
:param core: shape: batch_size*(N*K*K)*height*width
:return: core_out, a dict
"""
core_out = {}
core = core.view(batch_size, N, -1, color, height, width)
core_out[self.kernel_size[0]] = core[:, :, 0:self.kernel_size[0]**2, ...]
bias = None if not self.core_bias else core[:, :, -1, ...]
return core_out, bias
def forward(self, frames, core, white_level=1.0, rate=1):
"""
compute the pred image according to core and frames
:param frames: [batch_size, N, 3, height, width]
:param core: [batch_size, N, dict(kernel), 3, height, width]
:return:
"""
if len(frames.size()) == 5:
batch_size, N, color, height, width = frames.size()
else:
batch_size, N, height, width = frames.size()
color = 1
frames = frames.view(batch_size, N, color, height, width)
if self.sep_conv:
core, bias = self._sep_conv_core(core, batch_size, N, color, height, width)
else:
core, bias = self._convert_dict(core, batch_size, N, color, height, width)
img_stack = []
pred_img = []
kernel = self.kernel_size[::-1]
for index, K in enumerate(kernel):
if not img_stack:
padding_num = (K//2) * rate
frame_pad = F.pad(frames, [padding_num, padding_num, padding_num, padding_num])
for i in range(0, K):
for j in range(0, K):
img_stack.append(frame_pad[..., i*rate:i*rate + height, j*rate:j*rate + width])
img_stack = torch.stack(img_stack, dim=2)
else:
k_diff = (kernel[index - 1] - kernel[index]) // 2
img_stack = img_stack[:, :, k_diff:-k_diff, ...]
# print('img_stack:', img_stack.size())
pred_img.append(torch.sum(
core[K].mul(img_stack), dim=2, keepdim=False
))
pred_img = torch.stack(pred_img, dim=0)
# print('pred_stack:', pred_img.size())
pred_img_i = torch.mean(pred_img, dim=0, keepdim=False)
#print("pred_img_i", pred_img_i.size())
# N = 1
pred_img_i = pred_img_i.squeeze(2)
#print("pred_img_i", pred_img_i.size())
# if bias is permitted
if self.core_bias:
if bias is None:
raise ValueError('The bias should not be None.')
pred_img_i += bias
# print('white_level', white_level.size())
pred_img_i = pred_img_i / white_level
#pred_img = torch.mean(pred_img_i, dim=1, keepdim=True)
# print('pred_img:', pred_img.size())
# print('pred_img_i:', pred_img_i.size())
return pred_img_i
class LossFunc(nn.Module):
"""
loss function of KPN
"""
def __init__(self, coeff_basic=1.0, coeff_anneal=1.0, gradient_L1=True, alpha=0.9998, beta=100):
super(LossFunc, self).__init__()
self.coeff_basic = coeff_basic
self.coeff_anneal = coeff_anneal
self.loss_basic = LossBasic(gradient_L1)
self.loss_anneal = LossAnneal(alpha, beta)
def forward(self, pred_img_i, pred_img, ground_truth, global_step):
"""
forward function of loss_func
:param frames: frame_1 ~ frame_N, shape: [batch, N, 3, height, width]
:param core: a dict coverted by ......
:param ground_truth: shape [batch, 3, height, width]
:param global_step: int
:return: loss
"""
return self.coeff_basic * self.loss_basic(pred_img, ground_truth), self.coeff_anneal * self.loss_anneal(global_step, pred_img_i, ground_truth)
class LossBasic(nn.Module):
"""
Basic loss function.
"""
def __init__(self, gradient_L1=True):
super(LossBasic, self).__init__()
self.l1_loss = nn.L1Loss()
self.l2_loss = nn.MSELoss()
self.gradient = TensorGradient(gradient_L1)
def forward(self, pred, ground_truth):
return self.l2_loss(pred, ground_truth) + \
self.l1_loss(self.gradient(pred), self.gradient(ground_truth))
class LossAnneal(nn.Module):
"""
anneal loss function
"""
def __init__(self, alpha=0.9998, beta=100):
super(LossAnneal, self).__init__()
self.global_step = 0
self.loss_func = LossBasic(gradient_L1=True)
self.alpha = alpha
self.beta = beta
def forward(self, global_step, pred_i, ground_truth):
"""
:param global_step: int
:param pred_i: [batch_size, N, 3, height, width]
:param ground_truth: [batch_size, 3, height, width]
:return:
"""
loss = 0
for i in range(pred_i.size(1)):
loss += self.loss_func(pred_i[:, i, ...], ground_truth)
loss /= pred_i.size(1)
return self.beta * self.alpha ** global_step * loss
class TensorGradient(nn.Module):
"""
the gradient of tensor
"""
def __init__(self, L1=True):
super(TensorGradient, self).__init__()
self.L1 = L1
def forward(self, img):
w, h = img.size(-2), img.size(-1)
l = F.pad(img, [1, 0, 0, 0])
r = F.pad(img, [0, 1, 0, 0])
u = F.pad(img, [0, 0, 1, 0])
d = F.pad(img, [0, 0, 0, 1])
if self.L1:
return torch.abs((l - r)[..., 0:w, 0:h]) + torch.abs((u - d)[..., 0:w, 0:h])
else:
return torch.sqrt(
torch.pow((l - r)[..., 0:w, 0:h], 2) + torch.pow((u - d)[..., 0:w, 0:h], 2)
)
if __name__ == '__main__':
kpn = KPN().cuda()
a = torch.randn(4, 3, 224, 224).cuda()
b = kpn(a, a)
print(b.shape)
### Versions
pip install opencv==4.7.0.72
pip install numpy==1.24.2
pip install skimage==0.20.0
torch 1.13.0+cu117
torchvision 0.14.0+cu117
| 1 |
2,896 | 99,138 |
torch.cuda.is_available() returns False
|
module: docs, module: cuda, triaged
|
### 🐛 Describe the bug

### Versions

GPU 0: Quadro K620 (UUID: GPU-56622fc6-a9b3-5594-4c6d-b0b231be656a)
python version:3.8.16
cuda version:cuda10.2
cudnn version:8.7.0

torch version:

cc @svekars @carljparker @ngimel
| 4 |
2,897 | 99,126 |
DISABLED test_fake_crossref_backward_no_amp_index_fill_cuda_float32 (__main__.TestFakeTensorCUDA)
|
triaged, module: flaky-tests, skipped, module: unknown, oncall: pt2, module: fakeTensor
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_fake_crossref_backward_no_amp_index_fill_cuda_float32&suite=TestFakeTensorCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/12736953983).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 8 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_fake_crossref_backward_no_amp_index_fill_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_ops.py`
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
2,898 | 99,107 |
Invalid Reference to Class
|
oncall: quantization, triaged, topic: docs
|
### 📚 The doc issue
In this file, https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/_learnable_fake_quantize.py#L11
`For literature references, please see the class _LearnableFakeQuantizePerTensorOp.`
The implementation of the class `_LearnableFakeQuantizePerTensorOp` is missing from the repo.
### Suggest a potential alternative/fix
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
2,899 | 99,082 |
Look into test coverage for `UntypedStorage`
|
module: typing, module: tests, triaged
|
In #94503, there was a mypy error when trying to use `_StorageBase.__setitem__`. Since we don't currently have this mypy error in the main branch, this suggests that `UntypedStorage.__setitem__` is not getting tested anywhere, and a test should be added for it. There might be other tests that `UntypedStorage` is lacking as well
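A minimal sketch of what such a test could look like (the test name and placement are just guesses, not an existing test):
```python
import torch

def test_untyped_storage_setitem():
    # UntypedStorage holds raw bytes, so indexing reads/writes single byte values.
    s = torch.UntypedStorage(4)
    s[0] = 7
    assert s[0] == 7
    s[1:3] = 0
    assert s[1] == 0 and s[2] == 0
```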
cc @ezyang @malfet @rgommers @xuzhao9 @gramster @mruberry
| 0 |
2,900 | 99,042 |
Memory allocation issues in distributions.multivariate_normal.MultivariateNormal
|
module: distributions, triaged
|
### 🐛 Describe the bug
I'm trying to recreate tensorflow's `MultivariateNormalDiagWithSoftplusScale` so I went ahead and did this
```
dist = MultivariateNormal(mu.squeeze(), torch.diag_embed(F.softplus(sig.squeeze())), validate_args=False)
sample = dist.rsample()
```
The `mu` and `sig` are just 1D vectors after the squeeze (+batch dim). The sample is used for loss calculation. When I try to train my network like this I get this error
```
Traceback (most recent call last):
File "/home/name123/UE/fastreid-UE/tools/train_net.py", line 51, in <module>
launch(
File "/home/name123/UE/fastreid-UE/./fastreid/engine/launch.py", line 71, in launch
main_func(*args)
File "/home/name123/UE/fastreid-UE/tools/train_net.py", line 45, in main
return trainer.train()
File "/home/name123/UE/fastreid-UE/./fastreid/engine/defaults.py", line 348, in train
super().train(self.start_epoch, self.max_epoch, self.iters_per_epoch)
File "/home/name123/UE/fastreid-UE/./fastreid/engine/train_loop.py", line 145, in train
self.run_step()
File "/home/name123/UE/fastreid-UE/./fastreid/engine/defaults.py", line 357, in run_step
self._trainer.run_step()
File "/home/name123/UE/fastreid-UE/./fastreid/engine/train_loop.py", line 347, in run_step
self.grad_scaler.scale(losses).backward()
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 5.93 GiB total capacity; 4.77 GiB already allocated; 127.00 MiB free; 4.89 GiB reserved in total by PyTorch)
```
If I switch to `sample()`, it works, but I do need the gradient. It looks to me as if there is a problem allocating the memory because, as you can see, there is enough memory to go around; it just has to take it (I already confirmed it is not a fragmentation issue, since I can allocate more memory in a different scenario).
I do get a very similar error if I enable `validate_args`:
```
Traceback (most recent call last):
File "/home/name123/UE/fastreid-UE/tools/train_net.py", line 51, in <module>
launch(
File "/home/name123/UE/fastreid-UE/./fastreid/engine/launch.py", line 71, in launch
main_func(*args)
File "/home/name123/UE/fastreid-UE/tools/train_net.py", line 45, in main
return trainer.train()
File "/home/name123/UE/fastreid-UE/./fastreid/engine/defaults.py", line 348, in train
super().train(self.start_epoch, self.max_epoch, self.iters_per_epoch)
File "/home/name123/UE/fastreid-UE/./fastreid/engine/train_loop.py", line 145, in train
self.run_step()
File "/home/name123/UE/fastreid-UE/./fastreid/engine/defaults.py", line 357, in run_step
self._trainer.run_step()
File "/home/name123/UE/fastreid-UE/./fastreid/engine/train_loop.py", line 343, in run_step
loss_dict = self.model(data)
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/name123/UE/fastreid-UE/./fastreid/modeling/meta_arch/baseline_DNet.py", line 147, in forward
outputs = self.heads(features, targets)
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/name123/UE/fastreid-UE/./fastreid/modeling/heads/embedding_head_DNet.py", line 138, in forward
dist = MultivariateNormal(mu.squeeze(), torch.diag_embed(F.softplus(sig.squeeze())))
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/distributions/multivariate_normal.py", line 150, in __init__
super(MultivariateNormal, self).__init__(batch_shape, event_shape, validate_args=validate_args)
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/distributions/distribution.py", line 54, in __init__
valid = constraint.check(value)
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/distributions/constraints.py", line 509, in check
sym_check = super().check(value)
File "/home/name123/anaconda3/envs/UE/lib/python3.9/site-packages/torch/distributions/constraints.py", line 490, in check
return torch.isclose(value, value.mT, atol=1e-6).all(-2).all(-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 5.93 GiB total capacity; 4.89 GiB already allocated; 127.00 MiB free; 5.02 GiB reserved in total by PyTorch)
```
Now, for my personal use case, I can work around this by just manually doing the reparametrization trick:
```
eps = torch.empty(mu.shape, dtype=mu.dtype, device=mu.device).normal_()
sample = mu + eps * torch.sqrt(F.softplus(sig))
```
The calculation of `eps` is copied from the actual implementation of `MultivariateNormal`.
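For completeness, an equivalent diagonal Gaussian can also be built without materializing the full covariance matrix at all, which keeps `rsample()` differentiable while avoiding the large allocation (the shapes below are made up):
```python
import torch
import torch.nn.functional as F
from torch.distributions import Independent, Normal

mu = torch.randn(8, 512, requires_grad=True)   # made-up batch/feature sizes
sig = torch.randn(8, 512, requires_grad=True)

# Same diagonal-covariance Gaussian, but O(D) memory instead of the O(D^2)
# matrix produced by torch.diag_embed.
dist = Independent(Normal(mu, torch.sqrt(F.softplus(sig))), 1)
sample = dist.rsample()
```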
I thought that maybe this is related to #71149, but I don't know. Unfortunately, I can't easily test this on cpu since the framework I'm working with (fastreid) isn't exactly bug-free and doesn't let me work on cpu.
Since the only literal difference between working and not working is the gradient during `rsample` (`sample` is just `rsample` without the gradient, after all), I think this has to be the problem. But as I said, I am a beginner and also don't have the time to investigate further.
### Versions
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Linux Mint 19.3 Tricia (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.16 (main, Jan 11 2023, 16:05:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.0.0-32-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: GeForce GTX 1060 6GB
Nvidia driver version: 460.39
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 94
Model name: Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
Stepping: 3
CPU MHz: 932.174
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 6624.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 habf752d_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
[conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge
[conda] mkl_random 1.2.2 py39hde0f152_0 conda-forge
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 1.13.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] tensorflow 2.4.1 mkl_py39h4683426_0
[conda] tensorflow-base 2.4.1 mkl_py39h43e0292_0
[conda] torchaudio 0.13.1 py39_cu116 pytorch
[conda] torchvision 0.14.1 py39_cu116 pytorch
cc @fritzo @neerajprad @alicanb @nikitaved
| 2 |