Serial Number (int64, 1-6k) | Issue Number (int64, 75.6k-112k) | Title (string, 3-357 chars) | Labels (string, 3-241 chars) | Body (string, 9-74.5k chars) | Comments (int64, 0-867) |
---|---|---|---|---|---|
5,701 | 77,552 |
Functional Jacobian does not work with Torchdiffeq
|
module: autograd, triaged
|
### 🐛 Describe the bug
I am following the implementation in #49171 to obtain the Jacobian with respect to the model parameters. My model, however, is a Neural ODE defined following torchdiffeq. I observe that the Jacobian is all zeros when I wrap my neural network in an ODE solve. Please point out whether this is expected behaviour of `autograd.functional.jacobian`, or whether I have misunderstood something. Thanks a lot!
```
import time
import os
import argparse
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd.functional import jacobian
from torch.nn.utils import _stateless
from torchdiffeq import odeint_adjoint as odeint
class ODEfunc(nn.Module):
    def __init__(self, dim, nhidden):
        super(ODEfunc, self).__init__()
        self.elu = nn.ELU(inplace=True)
        self.fc1 = nn.Linear(dim, nhidden)
        self.fc2 = nn.Linear(nhidden, nhidden)
        self.fc3 = nn.Linear(nhidden, dim)
        self.nfe = 0

    def forward(self, t, x):
        self.nfe += 1
        out = self.fc1(x)
        out = self.elu(out)
        out = self.fc2(out)
        out = self.elu(out)
        out = self.fc3(out)
        return out

class ODEBlock(nn.Module):
    def __init__(self, odefunc, t):
        super(ODEBlock, self).__init__()
        self.odefunc = odefunc
        if len(t) == 2:
            self.integration_times = torch.tensor([t[0], t[1]]).float()
        else:
            self.integration_times = t.float()

    def forward(self, x):
        out = odeint(self.odefunc, x, self.integration_times, rtol=1e-3, atol=1e-3)
        return out

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # added: `device` was not defined in the original snippet
model = ODEBlock(ODEfunc(1, 20), torch.linspace(0, 10, 101)).to(device)
P = 0.01
K = 1
tspan = torch.linspace(0, 10, 101)
func = lambda x: K * P * torch.exp(x) / (K + P * torch.exp(x) - P)
logistic_data = []
for i in tspan:
    logistic_data.append([func(i).item()])
z0 = torch.tensor(logistic_data[:-1]).float().to(device)

def functional_loss(*functional_params):
    out: torch.Tensor = _stateless.functional_call(model, {n: p for n, p in zip(names, functional_params)}, z0)
    return out

names = list(n for n, _ in model.named_parameters())
temp = jacobian(functional_loss, tuple(model.parameters()))
print(temp)
```
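One way to narrow this down (a sketch building on the snippet above, not a confirmed diagnosis): `odeint_adjoint` computes parameter gradients through its own adjoint pass over the module's registered parameters, so parameters substituted via `_stateless.functional_call` may never be connected to the output in the autograd graph. Comparing against the direct solver, which differentiates through the solver's operations, would show whether that is the cause.
```
# Connectivity check: reuse the definitions above, but solve with the non-adjoint solver.
from torchdiffeq import odeint as odeint_direct

class ODEBlockDirect(ODEBlock):
    def forward(self, x):
        return odeint_direct(self.odefunc, x, self.integration_times, rtol=1e-3, atol=1e-3)

model_direct = ODEBlockDirect(ODEfunc(1, 20), torch.linspace(0, 10, 101)).to(device)
names_direct = [n for n, _ in model_direct.named_parameters()]

def functional_loss_direct(*functional_params):
    return _stateless.functional_call(
        model_direct, {n: p for n, p in zip(names_direct, functional_params)}, z0
    )

# If this Jacobian is non-zero while the adjoint version is all zeros, the adjoint
# solver's parameter handling is the place to look.
print(jacobian(functional_loss_direct, tuple(model_direct.parameters())))
```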
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.17.1
Libc version: N/A
Python version: 3.8.12 (default, Oct 12 2021, 06:23:56) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] backpack-for-pytorch==1.5.0
[pip3] functorch==0.1.0
[pip3] numpy==1.22.3
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.11.0
[pip3] torch-optimizer==0.1.0
[pip3] torchaudio==0.11.0
[pip3] torchdiffeq==0.2.2
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.12.0
[conda] backpack-for-pytorch 1.5.0 pypi_0 pypi
[conda] blas 1.0 mkl defaults
[conda] ffmpeg 4.3 h0a44026_0 pytorch
[conda] functorch 0.1.0 pypi_0 pypi
[conda] mkl 2021.4.0 hecd8cb5_637 defaults
[conda] mkl-service 2.4.0 py38h9ed2024_0 defaults
[conda] mkl_fft 1.3.1 py38h4ab4a9b_0 defaults
[conda] mkl_random 1.2.2 py38hb2f4e1b_0 defaults
[conda] numpy 1.22.3 pypi_0 pypi
[conda] numpy-base 1.21.2 py38he0bd621_0 defaults
[conda] pytorch 1.11.0 py3.8_0 pytorch
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] torch-optimizer 0.1.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py38_cpu pytorch
[conda] torchdiffeq 0.2.2 pypi_0 pypi
[conda] torchmetrics 0.7.3 pypi_0 pypi
[conda] torchvision 0.12.0 py38_cpu pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,702 | 77,548 |
lintrunner doesn't give good error message suggesting lintrunner init
|
module: lint, triaged
|
### 🐛 Describe the bug
```
Error (CLANGTIDY) Linter failed
Linter failed. This a bug, please file an issue against the linter
maintainer.
CONTEXT:
Linter command failed with non-zero exit code.
STDERR:
Traceback (most recent call last):
File "tools/linter/adapters/clangtidy_linter.py", line 138, in <module>
] + clang_search_dirs()
File "tools/linter/adapters/clangtidy_linter.py", line 106, in
clang_search_dirs
result = subprocess.run(
File "/scratch/ezyang/pytorch-scratch2-env/lib/python3.8/subprocess.py",
line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['clang', '-E', '-x', 'c++', '-',
'-v']' returned non-zero exit status 1.
STDOUT:
```
cc @suo
I have to say this is a very unhelpful error message, lol. I think the problem is that I didn't run `lintrunner init`.
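The adapter could catch this failure and point at the likely remedy; a rough sketch of the idea (the function name comes from the traceback above, the message wording is invented):
```python
import subprocess
import sys

def clang_search_dirs():
    # Hypothetical guard; the real adapter parses `clang -E -x c++ - -v` output for include dirs.
    try:
        result = subprocess.run(
            ["clang", "-E", "-x", "c++", "-", "-v"],
            stdin=subprocess.DEVNULL,
            capture_output=True,
            check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError) as err:
        # Surface an actionable hint instead of a raw traceback.
        print(
            "CLANGTIDY: failed to query clang for its search directories "
            f"({err}). If you have not set up the linters yet, run `lintrunner init` first.",
            file=sys.stderr,
        )
        sys.exit(1)
    return result.stderr.decode().splitlines()
```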
### Versions
master
| 1 |
5,703 | 77,546 |
Build check for AVX512 fails with AMD CPU and march=native
|
module: build, triaged, module: vectorization
|
### 🐛 Describe the bug
```
Performing C SOURCE FILE Test C_HAS_AVX512_1 failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_02713 && [1/2] Building C object CMakeFiles/cmTC_02713.dir/src.c.o
FAILED: CMakeFiles/cmTC_02713.dir/src.c.o
/usr/bin/cc -DC_HAS_AVX512_1 -march=native -mtune=native -O3 -pipe -Wno-deprecated-declarations -Wno-maybe-uninitialized -std=gnu11 -o CMakeFiles/cmTC_02713.dir/src.c.o -c ./build/CMakeFiles/CMakeTmp/src.c
./build/CMakeFiles/CMakeTmp/src.c: In function 'main':
./build/CMakeFiles/CMakeTmp/src.c:6:13: warning: AVX512F vector return without AVX512F enabled changes the ABI [-Wpsabi]
6 | __m512i a = _mm512_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
| ^
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/immintrin.h:59,
from ./build/CMakeFiles/CMakeTmp/src.c:2:
/usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512bwintrin.h:3088:1: error: inlining failed in call to 'always_inline' '_mm512_cmp_epi8_mask': target specific option mismatch
3088 | _mm512_cmp_epi8_mask (__m512i __X, __m512i __Y, const int __P)
| ^~~~~~~~~~~~~~~~~~~~
./build/CMakeFiles/CMakeTmp/src.c:15:31: note: called from here
15 | __mmask64 equality_mask = _mm512_cmp_epi8_mask(a, b, _MM_CMPINT_EQ);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/immintrin.h:49:
/usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:121:1: error: inlining failed in call to 'always_inline' '_mm512_set_epi8': target specific option mismatch
121 | _mm512_set_epi8 (char __q63, char __q62, char __q61, char __q60,
| ^~~~~~~~~~~~~~~
./build/CMakeFiles/CMakeTmp/src.c:6:17: note: called from here
6 | __m512i a = _mm512_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
8 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
9 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
10 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
11 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
12 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
13 | 0, 0, 0, 0, 0, 0, 0, 0);
| ~~~~~~~~~~~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.
Source file was:
#include <immintrin.h>
int main()
{
__m512i a = _mm512_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0);
__m512i b = a;
__mmask64 equality_mask = _mm512_cmp_epi8_mask(a, b, _MM_CMPINT_EQ);
return 0;
}
Performing C SOURCE FILE Test CXX_HAS_AVX512_1 failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_d12b0 && [1/2] Building C object CMakeFiles/cmTC_d12b0.dir/src.c.o
FAILED: CMakeFiles/cmTC_d12b0.dir/src.c.o
/usr/bin/cc -DCXX_HAS_AVX512_1 -march=native -mtune=native -O3 -pipe -Wno-deprecated-declarations -Wno-maybe-uninitialized -std=gnu11 -o CMakeFiles/cmTC_d12b0.dir/src.c.o -c ./build/CMakeFiles/CMakeTmp/src.c
./build/CMakeFiles/CMakeTmp/src.c: In function 'main':
./build/CMakeFiles/CMakeTmp/src.c:6:13: warning: AVX512F vector return without AVX512F enabled changes the ABI [-Wpsabi]
6 | __m512i a = _mm512_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
| ^
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/immintrin.h:59,
from ./build/CMakeFiles/CMakeTmp/src.c:2:
/usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512bwintrin.h:3088:1: error: inlining failed in call to 'always_inline' '_mm512_cmp_epi8_mask': target specific option mismatch
3088 | _mm512_cmp_epi8_mask (__m512i __X, __m512i __Y, const int __P)
| ^~~~~~~~~~~~~~~~~~~~
./build/CMakeFiles/CMakeTmp/src.c:15:31: note: called from here
15 | __mmask64 equality_mask = _mm512_cmp_epi8_mask(a, b, _MM_CMPINT_EQ);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/immintrin.h:49:
/usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:121:1: error: inlining failed in call to 'always_inline' '_mm512_set_epi8': target specific option mismatch
121 | _mm512_set_epi8 (char __q63, char __q62, char __q61, char __q60,
| ^~~~~~~~~~~~~~~
./build/CMakeFiles/CMakeTmp/src.c:6:17: note: called from here
6 | __m512i a = _mm512_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
8 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
9 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
10 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
11 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
12 | 0, 0, 0, 0, 0, 0, 0, 0,
| ~~~~~~~~~~~~~~~~~~~~~~~
13 | 0, 0, 0, 0, 0, 0, 0, 0);
| ~~~~~~~~~~~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.
Source file was:
#include <immintrin.h>
int main()
{
__m512i a = _mm512_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0);
__m512i b = a;
__mmask64 equality_mask = _mm512_cmp_epi8_mask(a, b, _MM_CMPINT_EQ);
return 0;
}
Performing C SOURCE FILE Test BLAS_F2C_DOUBLE_WORKS failed with the following compile output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_47eb1 && [1/2] Building C object CMakeFiles/cmTC_47eb1.dir/src.c.o
[2/2] Linking C executable cmTC_47eb1
...and run output:
Return value: 1
Source file was:
#include <stdlib.h>
#include <stdio.h>
float x[4] = { 1, 2, 3, 4 };
float y[4] = { .1, .01, .001, .0001 };
int four = 4;
int one = 1;
extern double sdot_();
int main() {
int i;
double r = sdot_(&four, x, &one, y, &one);
exit((float)r != (float).1234);
}
Determining if the function sbgemm_ exists failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_fd0ef && [1/2] Building C object CMakeFiles/cmTC_fd0ef.dir/CheckFunctionExists.c.o
[2/2] Linking C executable cmTC_fd0ef
FAILED: cmTC_fd0ef
: && /usr/bin/cc -march=native -mtune=native -O3 -pipe -Wno-deprecated-declarations -Wno-maybe-uninitialized -DCHECK_FUNCTION_EXISTS=sbgemm_ -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -rdynamic CMakeFiles/cmTC_fd0ef.dir/CheckFunctionExists.c.o -o cmTC_fd0ef /usr/lib/libopenblas.so && :
/usr/bin/ld: CMakeFiles/cmTC_fd0ef.dir/CheckFunctionExists.c.o: in function `main':
CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `sbgemm_'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
Performing C SOURCE FILE Test NNPACK_ARCH_IS_X86_32 failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_22689 && [1/2] Building C object CMakeFiles/cmTC_22689.dir/src.c.o
FAILED: CMakeFiles/cmTC_22689.dir/src.c.o
/usr/bin/cc -DNNPACK_ARCH_IS_X86_32 -march=native -mtune=native -O3 -pipe -Wno-deprecated-declarations -Wno-maybe-uninitialized -std=gnu11 -o CMakeFiles/cmTC_22689.dir/src.c.o -c ./build/CMakeFiles/CMakeTmp/src.c
./build/CMakeFiles/CMakeTmp/src.c:3:10: error: #error AVX only on x86_64
3 | #error AVX only on x86_64
| ^~~~~
ninja: build stopped: subcommand failed.
Source file was:
#if ! (defined(__i386) || defined(_M_IX86))
#error AVX only on x86_64
#endif
int main() {
return 0;
}
Performing C++ SOURCE FILE Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_37d75 && [1/2] Building CXX object CMakeFiles/cmTC_37d75.dir/src.cxx.o
FAILED: CMakeFiles/cmTC_37d75.dir/src.cxx.o
/usr/bin/c++ -DHAVE_CXX_FLAG_WSHORTEN_64_TO_32 -march=native -mtune=native -O3 -pipe -fno-plt -Wno-deprecated-declarations -Wno-maybe-uninitialized -Wno-array-bounds -Wno-uninitialized -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -std=c++11 -Wall -Wextra -Wshadow -Wsuggest-override -pedantic -pedantic-errors -Wshorten-64-to-32 -Wshorten-64-to-32 -std=gnu++14 -o CMakeFiles/cmTC_37d75.dir/src.cxx.o -c ./build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command-line option '-Wshorten-64-to-32'
c++: error: unrecognized command-line option '-Wshorten-64-to-32'
ninja: build stopped: subcommand failed.
Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test HAVE_CXX_FLAG_WD654 failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_0b897 && [1/2] Building CXX object CMakeFiles/cmTC_0b897.dir/src.cxx.o
FAILED: CMakeFiles/cmTC_0b897.dir/src.cxx.o
/usr/bin/c++ -DHAVE_CXX_FLAG_WD654 -march=native -mtune=native -O3 -pipe -fno-plt -Wno-deprecated-declarations -Wno-maybe-uninitialized -Wno-array-bounds -Wno-uninitialized -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -std=c++11 -Wall -Wextra -Wshadow -Wsuggest-override -pedantic -pedantic-errors -fstrict-aliasing -Wno-deprecated-declarations -Wstrict-aliasing -wd654 -wd654 -std=gnu++14 -o CMakeFiles/cmTC_0b897.dir/src.cxx.o -c ./build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command-line option '-wd654'
c++: error: unrecognized command-line option '-wd654'
ninja: build stopped: subcommand failed.
Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test HAVE_CXX_FLAG_WTHREAD_SAFETY failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_34f90 && [1/2] Building CXX object CMakeFiles/cmTC_34f90.dir/src.cxx.o
FAILED: CMakeFiles/cmTC_34f90.dir/src.cxx.o
/usr/bin/c++ -DHAVE_CXX_FLAG_WTHREAD_SAFETY -march=native -mtune=native -O3 -pipe -fno-plt -Wno-deprecated-declarations -Wno-maybe-uninitialized -Wno-array-bounds -Wno-uninitialized -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -std=c++11 -Wall -Wextra -Wshadow -Wsuggest-override -pedantic -pedantic-errors -fstrict-aliasing -Wno-deprecated-declarations -Wstrict-aliasing -Wthread-safety -Wthread-safety -std=gnu++14 -o CMakeFiles/cmTC_34f90.dir/src.cxx.o -c ./build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command-line option '-Wthread-safety'
c++: error: unrecognized command-line option '-Wthread-safety'
ninja: build stopped: subcommand failed.
Source file was:
int main() { return 0; }
Determining if the prototype magma_get_sgeqrf_nb exists for MAGMA_V2 failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_e0004 && [1/2] Building C object CMakeFiles/cmTC_e0004.dir/CheckPrototypeDefinition.c.o
FAILED: CMakeFiles/cmTC_e0004.dir/CheckPrototypeDefinition.c.o
/usr/bin/cc -march=native -mtune=native -O3 -pipe -Wno-deprecated-declarations -Wno-maybe-uninitialized -fopenmp -DNDEBUG -fPIE -std=gnu11 -o CMakeFiles/cmTC_e0004.dir/CheckPrototypeDefinition.c.o -c ./build/CMakeFiles/CMakeTmp/CheckPrototypeDefinition.c
./build/CMakeFiles/CMakeTmp/CheckPrototypeDefinition.c:1:10: fatal error: magma.h: No such file or directory
1 | #include <magma.h>
| ^~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
#include <magma.h>
static void cmakeRequireSymbol(int dummy, ...) {
(void) dummy;
}
static void checkSymbol(void) {
#ifndef magma_get_sgeqrf_nb
cmakeRequireSymbol(0, &magma_get_sgeqrf_nb);
#endif
}
magma_int_t magma_get_sgeqrf_nb( magma_int_t m, magma_int_t n ) {
return 0;
}
#ifdef __CLASSIC_C__
int main() {
int ac;
char*av[];
#else
int main(int ac, char *av[]) {
#endif
checkSymbol();
if (ac > 1000) {
return *av[0];
}
return 0;
}
Determining if the strtod_l exist failed with the following output:
Change Dir: ./build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/ninja cmTC_d0839 && [1/2] Building C object CMakeFiles/cmTC_d0839.dir/CheckSymbolExists.c.o
FAILED: CMakeFiles/cmTC_d0839.dir/CheckSymbolExists.c.o
/usr/bin/cc -march=native -mtune=native -O3 -pipe -Wno-deprecated-declarations -Wno-maybe-uninitialized -fopenmp -DNDEBUG -fPIE -std=gnu11 -o CMakeFiles/cmTC_d0839.dir/CheckSymbolExists.c.o -c ./build/CMakeFiles/CMakeTmp/CheckSymbolExists.c
./build/CMakeFiles/CMakeTmp/CheckSymbolExists.c: In function 'main':
./build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:8:19: error: 'strtod_l' undeclared (first use in this function); did you mean 'strtoull'?
8 | return ((int*)(&strtod_l))[argc];
| ^~~~~~~~
| strtoull
./build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:8:19: note: each undeclared identifier is reported only once for each function it appears in
ninja: build stopped: subcommand failed.
File ./build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
/* */
#include <stdlib.h>
int main(int argc, char** argv)
{
(void)argv;
#ifndef strtod_l
return ((int*)(&strtod_l))[argc];
#else
(void)argc;
return 0;
#endif
}
```
### Versions
Pytorch: git master 1.11.1
GCC: 12.1 - supports AVX512
cc @malfet @seemethere
| 0 |
5,704 | 77,544 |
Modernize LoggingTensorMode
|
triaged, module: __torch_dispatch__
|
### 🐛 Describe the bug
Now that @samdow has updated the dispatch mode API, we should make LoggingMode use it too.
### Versions
master
cc @Chillee @ezyang @zou3519 @albanD @samdow
| 1 |
5,705 | 77,538 |
Failed to run on iOS - Couldn't find an operator for `aten::conv1d`
|
triaged, module: ios
|
### 🐛 Describe the bug
Hi, I'm testing one of my projects with `libtorch 1.11.0` for the iOS simulator and I'm facing the following issue.
```
Caused by:
Internal torch error:
Couldn't find an operator for aten::conv1d(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups) -> Tensor. Do you have to update a set of hardcoded JIT ops?
The above operation failed shape propagation in this context:
/home/aalvarez/Projects/main/apps/lmrescore/trace-gpt2.py(32): forward
/home/aalvarez/.virtualenvs/lmrescore-u7Iy2esc-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py(1092): _slow_forward
/home/aalvarez/.virtualenvs/lmrescore-u7Iy2esc-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py(1108): _call_impl
/home/aalvarez/.virtualenvs/lmrescore-u7Iy2esc-py3.10/lib/python3.10/site-packages/torch/jit/_trace.py(939): trace_module
/home/aalvarez/.virtualenvs/lmrescore-u7Iy2esc-py3.10/lib/python3.10/site-packages/torch/jit/_trace.py(735): trace
/home/aalvarez/Projects/main/apps/lmrescore/trace-gpt2.py(45): <module>
Serialized File "code/__torch__.py", line 17
transformer = model0.transformer
position_ids = self.position_ids
_0 = ops.prim.NumToTensor(torch.size(input_ids, 1))
~~~~~~~~~~ <--- HERE
_1 = int(_0)
position_ids0 = torch.slice(torch.unsqueeze(position_ids, 0), 1, 0, _1)
```
The same works for `libtorch 1.10.2`.
I believe the compilation is OK because I see the same exported symbols for both versions (1.10.2 and 1.11.0) regarding `conv1d`.
Here is the symbol listing from `libtorch_cpu.a` for v1.11.0.
```
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:autocast_mode.cpp.o: U at::_ops::conv1d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:autocast_mode.cpp.o: 00000000000178d6 T at::autocast::WrapFunction_<(at::autocast::CastPolicy)0, (c10::DeviceType)0, at::Tensor (at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long), &(at::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long> >::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:autocast_mode.cpp.o: 0000000000002034 T at::autocast::WrapFunction_<(at::autocast::CastPolicy)0, (c10::DeviceType)1, at::Tensor (at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long), &(at::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long> >::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Convolution.cpp.o: 0000000000002556 T at::native::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Convolution.cpp.o: 0000000000001048 T at::native::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d110 b guard variable for at::_ops::conv1d_padding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d0f8 b guard variable for at::_ops::conv1d_padding::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d0b0 b guard variable for at::_ops::conv1d::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d098 b guard variable for at::_ops::conv1d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 0000000000007c9e T at::_ops::conv1d_padding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 0000000000007b5a T at::_ops::conv1d_padding::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 00000000000076da T at::_ops::conv1d::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000000758c T at::_ops::conv1d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000000767c t at::_ops::create_conv1d_typed_handle()
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 0000000000007c4a t at::_ops::create_conv1d_padding_typed_handle()
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d100 b at::_ops::conv1d_padding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d0e8 b at::_ops::conv1d_padding::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d0a0 b at::_ops::conv1d::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000010d088 b at::_ops::conv1d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)::op
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:RegisterCompositeImplicitAutograd.cpp.o: 0000000000001f07 t at::(anonymous namespace)::(anonymous namespace)::wrapper__conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:RegisterCompositeImplicitAutograd.cpp.o: 0000000000002045 t at::(anonymous namespace)::(anonymous namespace)::wrapper_padding_conv1d_padding(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:RegisterCompositeImplicitAutograd.cpp.o: 0000000000002010 T at::compositeimplicitautograd::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:RegisterCompositeImplicitAutograd.cpp.o: 0000000000001ed2 T at::compositeimplicitautograd::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:RegisterCompositeImplicitAutograd.cpp.o: U at::native::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::basic_string_view<char>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:RegisterCompositeImplicitAutograd.cpp.o: U at::native::conv1d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:helper.cpp.o: 0000000000003424 T torch::jit::is_conv1d_module(torch::jit::Match const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, torch::jit::Value*, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, torch::jit::Value*> > > const&)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:insert_observers.cpp.o: U torch::jit::is_conv1d_module(torch::jit::Match const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, torch::jit::Value*, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, torch::jit::Value*> > > const&)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:external_functions.cpp.o: U at::_ops::conv1d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:external_functions.cpp.o: 0000000000003f20 T _nnc_aten_conv1d
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:external_functions.cpp.o: 000000000000226b T _nnc_aten_quantized_conv1d
```
Essentially, `at::_ops::conv1d::call` is found
```
libtorch-1.11.0-ios-fat/lib/libtorch_cpu.a:Operators_3.cpp.o: 000000000000758c T at::_ops::conv1d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)
```
Also, during linking I `-force_load` these libraries:
```bash
external/libtorch_cpu_ios_fl/lib/libtorch.a \
external/libtorch_cpu_ios_fl/lib/libtorch_cpu.a \
external/libtorch_cpu_ios_fl/lib/libc10.a \
external/libtorch_cpu_ios_fl/lib/libclog.a \
external/libtorch_cpu_ios_fl/lib/libcpuinfo.a \
external/libtorch_cpu_ios_fl/lib/libpthreadpool.a \
external/libtorch_cpu_ios_fl/lib/libpytorch_qnnpack.a \
external/libtorch_cpu_ios_fl/lib/libXNNPACK.a \
...
-force_load \
external/libtorch_cpu_ios_fl/lib/libtorch.a \
-force_load \
external/libtorch_cpu_ios_fl/lib/libtorch_cpu.a \
-force_load \
external/libtorch_cpu_ios_fl/lib/libc10.a \
-force_load \
external/libtorch_cpu_ios_fl/lib/libpytorch_qnnpack.a \
-force_load \
external/libtorch_cpu_ios_fl/lib/libXNNPACK.a \
```
The compilation was done with the following configuration:
```json
"cacheVariables": {
"CMAKE_BUILD_TYPE": "MinSizeRel",
"CMAKE_CXX_COMPILER_LAUNCHER": {
"type": "FILEPATH",
"value": "/Users/xdev/.cargo/bin/sccache"
},
"CMAKE_C_COMPILER_LAUNCHER": {
"type": "FILEPATH",
"value": "/Users/xdev/.cargo/bin/sccache"
},
"CMAKE_TOOLCHAIN_FILE": {
"type": "FILEPATH",
"value": "${sourceDir}/cmake/iOS.cmake"
},
"CMAKE_THREAD_LIBS_INIT": "-lpthread",
"CMAKE_HAVE_THREADS_LIBRARY": "1",
"CMAKE_USE_PTHREADS_INIT": "1",
"PYTHON_EXECUTABLE": {
"type": "FILEPATH",
"value": "python3"
},
"CMAKE_CXX_FLAGS": "-fobjc-arc",
"TRACING_BASED": false,
"BUILD_BINARY": false,
"BUILD_CUSTOM_PROTOBUF": false,
"BUILD_LITE_INTERPRETER": false,
"BUILD_PYTHON": false,
"BUILD_SHARED_LIBS": false,
"BUILD_TEST": false,
"USE_CUDA": false,
"USE_GFLAGS": false,
"USE_LEVELDB": false,
"USE_LITE_INTERPRETER_PROFILER": false,
"USE_LMDB": false,
"USE_MKLDNN": false,
"USE_MPI": false,
"USE_NNPACK": false,
"USE_NUMPY": false,
"USE_OPENCV": false,
"IOS_PLATFORM": "SIMULATOR",
"IOS_ARCH": "x86_64",
"CMAKE_CXX_FLAGS": "-fobjc-arc -mios-simulator-version-min=15.0",
"CMAKE_C_FLAGS": "-fobjc-arc -mios-simulator-version-min=15.0"
}
```
### Versions
```zsh
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 12.3 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.12 (main, May 8 2022, 18:05:47) [Clang 13.1.6 (clang-1316.0.21.2)] (64-bit runtime)
Python platform: macOS-12.3-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[conda] Could not collect
```
| 3 |
5,706 | 77,537 |
batch Kronecker product
|
triaged, enhancement
|
### 🚀 The feature, motivation and pitch
I have two tensors that are batches of matrices:
```
x = torch.randn(100,10,10)
y = torch.randn(100,2,2)
```
I want to compute the Kronecker product batch-wise, i.e. for each pair of matrices, not the Kronecker product of the whole tensors. `torch.kron(x, y)` gives me a tensor of size (10000, 20, 20), but I want an output of size (100, 20, 20) that contains the Kronecker product of each pair of matrices. Is there any way to do so?
Something like `torch.kron(x, y, start_dim=1)` is what I tried, but it does not seem to be implemented.
Thanks
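For what it's worth, a batched Kronecker product can already be expressed with broadcasting; a minimal sketch (not an official API):
```
import torch

def batched_kron(a, b):
    # a: (B, m, n), b: (B, p, q) -> (B, m*p, n*q); kron applied per batch element
    B, m, n = a.shape
    _, p, q = b.shape
    return (a.unsqueeze(-1).unsqueeze(-3) * b.unsqueeze(-2).unsqueeze(-4)).reshape(B, m * p, n * q)

x = torch.randn(100, 10, 10)
y = torch.randn(100, 2, 2)
out = batched_kron(x, y)
print(out.shape)                                        # torch.Size([100, 20, 20])
print(torch.allclose(out[0], torch.kron(x[0], y[0])))   # matches the per-matrix kron
```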
### Alternatives
A parameter `start_dim` that tells from which dimension we want to perform the Kronecker product.
### Additional context
_No response_
| 5 |
5,707 | 77,529 |
softmarginloss should use `log1p` and has an incorrect out= behaviour.
|
module: loss, triaged
|
### 🐛 Describe the bug
For more details see https://github.com/pytorch/pytorch/pull/77486#discussion_r873056795 and https://github.com/pytorch/pytorch/pull/77486#discussion_r873056847.
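For context, the numerical point behind the `log1p` part (a small illustration, not the actual loss implementation): soft margin loss evaluates `log(1 + exp(-y * x))`, and `log1p` keeps precision when the exponential term is tiny.
```python
import torch

x = torch.tensor([20.0])                 # large positive margin -> exp(-x) ~ 2.06e-9
naive = torch.log(1 + torch.exp(-x))     # 1 + 2.06e-9 rounds to 1.0 in float32 -> loss is exactly 0
stable = torch.log1p(torch.exp(-x))      # keeps the ~2.06e-9 contribution
print(naive.item(), stable.item())
```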
### Versions
master
| 0 |
5,708 | 77,527 |
CUDA: Illegal memory access in `torch.linalg.solve()`
|
high priority, module: cuda, triaged, module: linear algebra
|
### 🐛 Describe the bug
Hi,
My program randomly terminates both during training and validation and I strongly suspect that it is due to torch.linalg.solve(), although I have not been able to reproduce the bug in a simple script with a for loop.
Here's a snippet for convenience (ignore the edge case where the rows of x would be linearly dependent, as this does not happen in the actual function):
```python
import torch
device = torch.device("cuda")
x = torch.randn(64, 81, 9, 5, device=device)
y = torch.randn(64, 81, 9, 1, device=device)
A = x.transpose(-1, -2) @ x
B = x.transpose(-1, -2) @ y
b = torch.linalg.solve(A, B)
```
The following error is consistently returned after enough epochs:
```
CUDA runtime error: an illegal memory access was encountered (700) in apply_lu_factor_batched_magma at /opt/conda/conda-bld/pytorch_1646756402876/work/aten/src/ATen/native/cuda/BatchLinearAlgebra.cpp:1910
CUDA runtime error: an illegal memory access was encountered (700) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda113_1619629459349/work/interface_cuda/interface.cpp:944
CUDA runtime error: an illegal memory access was encountered (700) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda113_1619629459349/work/interface_cuda/interface.cpp:945
CUDA runtime error: an illegal memory access was encountered (700) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda113_1619629459349/work/interface_cuda/interface.cpp:946
[...]
File "x.py", line 164, in g
b = torch.linalg.solve(A, B)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
[...]
File "x.py", line 164, in g
b = torch.linalg.solve(A, B)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1646756402876/work/c10/cuda/CUDACachingAllocator.cpp:1230 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f3e10d9d1bd in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1f037 (0x7f3e433ea037 in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x23a (0x7f3e433ee3ea in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x2ecdd8 (0x7f3e93de3dd8 in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #4: c10::TensorImpl::release_resources() + 0x175 (0x7f3e10d83fb5 in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #5: <unknown function> + 0x1db769 (0x7f3e93cd2769 in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x4c6c8c (0x7f3e93fbdc8c in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #7: THPVariable_subclass_dealloc(_object*) + 0x292 (0x7f3e93fbdf92 in /workspace/miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x129bcb (0x5555ecf2cbcb in /workspace/miniconda3/bin/python)
frame #9: <unknown function> + 0x2429aa (0x5555ed0459aa in /workspace/miniconda3/bin/python)
frame #10: <unknown function> + 0x129d5b (0x5555ecf2cd5b in /workspace/miniconda3/bin/python)
frame #11: <unknown function> + 0x194655 (0x5555ecf97655 in /workspace/miniconda3/bin/python)
frame #12: <unknown function> + 0x129bcb (0x5555ecf2cbcb in /workspace/miniconda3/bin/python)
frame #13: <unknown function> + 0x2429aa (0x5555ed0459aa in /workspace/miniconda3/bin/python)
frame #14: <unknown function> + 0x129d5b (0x5555ecf2cd5b in /workspace/miniconda3/bin/python)
frame #15: <unknown function> + 0x194655 (0x5555ecf97655 in /workspace/miniconda3/bin/python)
frame #16: <unknown function> + 0x129bcb (0x5555ecf2cbcb in /workspace/miniconda3/bin/python)
frame #17: <unknown function> + 0x2429aa (0x5555ed0459aa in /workspace/miniconda3/bin/python)
frame #18: <unknown function> + 0x129d5b (0x5555ecf2cd5b in /workspace/miniconda3/bin/python)
frame #19: <unknown function> + 0x12a950 (0x5555ecf2d950 in /workspace/miniconda3/bin/python)
frame #20: <unknown function> + 0x13a9dd (0x5555ecf3d9dd in /workspace/miniconda3/bin/python)
frame #21: _PyGC_CollectNoFail + 0x35 (0x5555ed05d705 in /workspace/miniconda3/bin/python)
frame #22: <unknown function> + 0x2744ba (0x5555ed0774ba in /workspace/miniconda3/bin/python)
frame #23: Py_FinalizeEx + 0x186 (0x5555ed0777a6 in /workspace/miniconda3/bin/python)
frame #24: Py_RunMain + 0x10c (0x5555ed07ce8c in /workspace/miniconda3/bin/python)
frame #25: Py_BytesMain + 0x39 (0x5555ed07d309 in /workspace/miniconda3/bin/python)
frame #26: __libc_start_main + 0xf3 (0x7f3ec6e2a0b3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #27: <unknown function> + 0x2010a0 (0x5555ed0040a0 in /workspace/miniconda3/bin/python)
```
This happened both on single GPU and multi GPU (DDP) settings.
Am I better off using something like this even though this is not recommended?
```python
b = A.inverse() @ B
```
If you have any pointers on what I should be doing instead, I'd appreciate it :)
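As an aside, one way to avoid forming the normal equations at all is to hand the least-squares problem to `torch.linalg.lstsq` directly (a sketch; mathematically equivalent for full-rank `x`, though not a confirmed workaround for the illegal memory access):
```python
import torch

device = torch.device("cuda")
x = torch.randn(64, 81, 9, 5, device=device)
y = torch.randn(64, 81, 9, 1, device=device)

# Equivalent to solving (x^T x) b = (x^T y), but without forming A and B explicitly.
b = torch.linalg.lstsq(x, y).solution   # shape (64, 81, 5, 1)
```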
Best regards,
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.6.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_2
[conda] numpy-base 1.21.5 py39hf524024_2
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-lightning 1.6.3 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchvision 0.12.0 py39_cu113 pytorch
```
cc @ezyang @gchanan @zou3519 @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 12 |
5,709 | 77,523 |
DDP window TCP bug [socket.cpp:558] [c10d] The client socket has failed to
|
oncall: distributed
|
### 🐛 Describe the bug
Hi,
I have two Windows boxes, each with a single GPU. Both can communicate and I have disabled the firewall, etc.
Both hosts have A records in the DNS server, and while troubleshooting I also added entries to both hosts' hosts files.
Master node (IP address 192.168.254.230):
```python
torch.distributed.init_process_group(
    backend="gloo",
    init_method="tcp://localhost:54321",  # also tried 192.168.254.230 and the actual hostnames (win11, win11.mydomain_name)
    world_size=2,
    rank=0)
```
Second node (connecting to the master at 192.168.254.230):
```python
torch.distributed.init_process_group(
    backend="gloo",
    init_method="tcp://192.168.254.230:54321",  # also tried the actual hostnames (win11, win11.mydomain_name)
    world_size=2,
    rank=1)
```
I also tried using the os.environ variables, etc. In all cases I see the error below on the client node (i.e. rank 1), and neither the master nor the client ever returns from the init call.
Meanwhile, I ran Wireshark and I see a TCP session and handshake on the same port 54321.
```
[W C:\cb\pytorch_1000000000000\work\torch\csrc\distributed\c10d\socket.cpp:558] [c10d] The client socket has failed to connect to [win11.vmwarelab.edu]:54321 (system error: 10049 - The requested address is not valid in its context.).
```
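One thing that might be worth ruling out (a guess based on the snippets above, not a confirmed fix): rank 0 initializes the rendezvous with `tcp://localhost:54321` while rank 1 targets `192.168.254.230:54321`, so the two ranks are not pointing at the same address string. Using the same reachable address on both ranks would look like:
```python
import torch.distributed as dist

# Same init_method on both hosts; only the rank differs (rank=0 on the master, rank=1 on the second box).
dist.init_process_group(
    backend="gloo",
    init_method="tcp://192.168.254.230:54321",
    world_size=2,
    rank=0,
)
```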
```
>>> import torch
>>> print(torch.__version__)
1.11.0
```
If you need the Wireshark PCAP I can send it, but I see that the last packet from the master is an ACK.
PCAP screenshot:
<img width="839" alt="image" src="https://user-images.githubusercontent.com/11797329/168544258-d191881d-ce10-4b87-a4d9-f6d3041d7874.png">
### Versions
1.11.0
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 10 |
5,710 | 77,515 |
Inplace Bool API + `sum` will trigger INTERNAL ASSERT FAILED
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
In-place boolean comparison APIs such as `eq_`, `le_`, `ne_`, `ge_`, and `gt_`, followed by `sum`, trigger an INTERNAL ASSERT FAILED:
```python
import torch
input = torch.randint(-4, 8, [4, 3], dtype=torch.int64)
other = torch.rand([4, 3], dtype=torch.float64, requires_grad=True)
input.eq_(other)
print(input)
input.sum()
# tensor([[0, 0, 0],
# [0, 0, 0],
# [0, 0, 0],
# [0, 0, 0]], grad_fn=<EqBackward1>)
#
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
By the way, `index_fill_` can also trigger this bug:
```python
import torch
input = torch.tensor(True, dtype=torch.bool)
index = torch.tensor(0, dtype=torch.int64)
dim = 0
value = torch.rand([], dtype=torch.float64, requires_grad=True)
input.index_fill_(dim, index, value).sum()
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
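A possible workaround in the meantime (a sketch, based on the observation that the failing result carries a `grad_fn` on an integer tensor): detach the floating-point operand so autograd does not attach a backward node to the non-differentiable result.
```python
import torch

input = torch.randint(-4, 8, [4, 3], dtype=torch.int64)
other = torch.rand([4, 3], dtype=torch.float64, requires_grad=True)
input.eq_(other.detach())   # no EqBackward node is attached to the int64 tensor
print(input.sum())          # no assert
```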
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,711 | 77,514 |
`max_pool1d` can succeed when padding is negative for tensor requiring grad
|
module: nn, triaged, actionable, module: pooling, module: edge cases
|
### 🐛 Describe the bug
`max_pool1d` can succeed when padding is negative for a tensor requiring grad:
```python
import torch
input = torch.rand([20, 16, 50], dtype=torch.float32, requires_grad=True)
kernel_size = 3
stride = 2
padding = -1
torch.nn.functional.max_pool1d(input, kernel_size, stride=stride, padding=padding)
# work
```
```python
import torch
input = torch.rand([20, 16, 50], dtype=torch.float32, requires_grad=False)
kernel_size = 3
stride = 2
padding = -1
torch.nn.functional.max_pool1d(input, kernel_size, stride=stride, padding=padding)
# RuntimeError: max_pool1d() padding must be non-negative, but got -1
```
### Versions
pytorch: 1.11.0
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 2 |
5,712 | 77,478 |
Standalone unittests for checkpoint_wrapper
|
oncall: distributed, module: bootcamp, triaged, better-engineering, module: fsdp
|
### 🚀 The feature, motivation and pitch
New features to checkpoint_wrapper are being added such as in https://github.com/pytorch/pytorch/pull/77224, but the file is missing standalone unittests and is instead mostly tested through FSDP (`test_fsdp_checkpoint` and `test_fsdp_state_dict`). To ensure more robust support, we should have standalone unittests independent of FSDP.
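For illustration, a minimal standalone test might look roughly like the following (the import path and default wrapper behaviour are assumptions based on the current code, not a settled design):
```python
import torch
import torch.nn as nn
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import checkpoint_wrapper
from torch.testing._internal.common_utils import TestCase, run_tests

class TestCheckpointWrapper(TestCase):
    def test_forward_matches_and_grads_flow(self):
        torch.manual_seed(0)
        module = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
        wrapped = checkpoint_wrapper(module)  # wraps the same underlying parameters

        x = torch.randn(4, 8, requires_grad=True)
        self.assertEqual(module(x), wrapped(x))  # checkpointing should not change the output

        wrapped(x).sum().backward()
        for p in wrapped.parameters():
            self.assertIsNotNone(p.grad)

if __name__ == "__main__":
    run_tests()
```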
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,713 | 77,463 |
conda CPU installation for LTS fails with UnsatisfiableError
|
triaged, module: lts
|
### 🐛 Describe the bug
In a totally empty conda environment on Linux, copy pasting the installation command from [here](https://pytorch.org/get-started/locally):
```sh
conda install pytorch torchvision torchaudio cpuonly -c pytorch-lts
```
Output:
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: | Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package typing conflicts for:
pytorch -> typing_extensions -> typing[version='>=3.6.2|>=3.7.4']
pytorch -> typing
Package pytorch conflicts for:torchvision -> pytorch[version='*|*|1.10|1.10|1.8.1|1.8.2|>=1.11.0,<1.12.0a0|>=1.10.2,<1.11.0a0|1.10.0.*|>=1.8.0|>=1.8.0|1.7.1.*|1.3.1.*|1.2.0.*|1.1.*|>=0.4|>=0.3',build='cpu*|cuda*|cuda*|cpu*|cuda*|cpu*']torchvision -> pytorch-cpu -> pytorch[version='1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.0|1.10.1|1.10.1|1.10.1|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.11.0|1.9.1|1.9.1|1.9.1|1.9.1|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.6.0|1.6.0|1.6.0|1.6|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.11.0|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.2|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.10.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.1|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.9.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.8.0|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.7.1|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0|1.6.0',build='cuda92py36h7ecc001_1|cuda100py36hd82b6f9_1|cuda102py36h8620ce9_1|cuda101py36h42dc283_1|cuda92py37hc3ec645_1|cuda102py37h4d98c68_1|cuda100py37h50b9e00_1|cuda101py37h7589291_1|cuda92py38hb6ed0dd_1|cuda100py38h679e3f5_1|cuda102py39h09d0254_1|cuda92py39hde86683_1|cuda100py39h2b73809_1|cuda102py38h9f8c3ab_1|cuda92py36h7ecc001_1|cuda102py36h8620ce9_1|cuda100py36hd82b6f9_1|cuda92py37hc3ec645_1|cuda102py37h4d98c68_1|cuda100py37h50b9e00_1|cuda101py38h2499a06_1|cuda92py39hde86683_1|cuda102py39h09d0254_1|cuda100py39h2b73809_1|cuda92py38hb6ed0dd_1|cuda100py38h679e3f5_1|cuda101py36h42dc283_1|cuda101py39h41d04a9_1|cuda102py36hf4eb8d7_0|cuda102py38h540557e_0|cuda102py39hf89b2ab_0|cuda110py36h7ef7e1d_0|cuda110py38h65e529b_0|cuda111py38he2736ed_1|cuda112py39h716d6ff_1|cuda111py37h50e976f_1|cuda102py38hf03d9dd_1|cuda112py39hbeb36f3_1|cuda110py38hc2289b8_1|cuda112py36h755b813_1|cuda112py37hcb91bf2_1|cuda111py36hc5445e6_1|cuda110py37h00edf66_1|cuda102py37h92fd811_1|cuda110py39hd6acddb_1|cuda112py38h4f2a933_3|cuda112py36h36e649e_3|cuda111py36h8a2106e_3|cuda111py37he371307_3|cuda112py37h3bec1eb_3|cuda111py38h2f85826_3|cuda102py37h98b7ee3_3|cuda102py39h2fcd037_3|cuda110py36he570edd_3|cuda110py38hf84197b_0|cuda112py37h3bec1eb_0|cuda112py38h4f2a933_0|cuda102py37h98b7ee3_0|cuda110py39h5cf7045_0|cuda111py39hb4a4491_0|cuda112py37haf94430_1|cuda102py38h17946ce_1|cuda111py39h7295ad4_1|cuda102py39h06ffc54_1|cuda111py38h9575ccd_1|cuda110py37h7b7832c_1|cuda102py39hfe0cb5b_0|cuda112py38h6425f36_0|cuda112py37hc1ee5ce_0|cuda111py39h930882a_0|cuda110py39he47eb21_0|cuda112py38h6425f36_0|cuda111py39h930882a_0|cuda110py39he47eb21_0|cuda102py39hfe0cb5b_0|cuda111py37hc0ce48b_1|cuda110py38hf0a79ac_1|cuda111py38hc64aeea_1|cuda102py39hfe0cb5b_1|cuda111py39h930882a_1|cuda102py39hfe0cb5b_0|cuda112py38habe9d5a_1|cuda102py310hdf4a2db_1|cuda110py38h386aa8f_1|cuda110py37h0def887_1|cu
da111py37hdb2541a_1|cuda102py37haad9b4f_1|cuda110py310hfdf97d1_1|cuda111py39h9f128c5_1|cuda112py39ha0cca9b_1|cuda111py38h2d04dd0_1|cuda110py39h0a9da28_1|cuda112py310h51fe464_1|cuda111py310h385535d_1|cpu_py37hf1c21f6_1|cpu_py39h714fb45_1|cpu_py38h36eccb8_1|cpu_py36h63cae03_1|cpu_py37hf1c21f6_1|cpu_py36h63cae03_2|cpu_py39h714fb45_2|cpu_py36h2ecc29a_0|cpu_py39h0fbb4fb_0|cpu_py38hd248515_1|cpu_py37hd5260e0_2|cpu_py38h91ab35c_2|cpu_py39hfbcbfe4_2|cpu_py38h91ab35c_3|cpu_py36h95c28ec_3|cpu_py39hfbcbfe4_0|cpu_py37hd5260e0_0|cpu_py36h95c28ec_0|cpu_py36h3564fbe_1|cpu_py38hfb3baa6_1|cpu_py37hff829bd_1|cpu_py39h818de69_2|cpu_py37hf3cc979_3|cpu_py36ha8b20dc_3|cpu_py38h1ee18c8_3|cpu_py38h1ee18c8_0|cpu_py39h5e9ed0b_0|cpu_py39h5e9ed0b_0|cpu_py37h76afcab_1|cpu_py39h5e9ed0b_1|cpu_py37hd754017_0|cpu_py39h5d22d69_1|cpu_py310h75c9ab6_1|cpu_py38h39c826d_1|cpu_py37h14e09b7_1|cpu_py310h2272b30_0|cpu_py39h7613f69_0|cpu_py38hde1b6bc_0|cpu_py38hb2150b6_1|cpu_py38hb2150b6_0|cpu_py37h76afcab_0|cpu_py37h76afcab_0|cpu_py38hb2150b6_0|cpu_py39hc70245e_1|cpu_py38h7c5583f_1|cpu_py37h2761dfd_1|cpu_py37hf3cc979_0|cpu_py39hc5866cc_0|cpu_py39hc5866cc_3|cpu_py38h4bbe6ce_2|cpu_py37hb06efa0_2|cpu_py36h1c7b8ea_2|cpu_py39h818de69_1|cpu_py38h91ab35c_0|cpu_py39hfbcbfe4_3|cpu_py37hd5260e0_3|cpu_py36h95c28ec_2|cpu_py39h0fbb4fb_1|cpu_py37ha70c682_1|cpu_py36h2d15a6b_1|cpu_py38he614459_0|cpu_py37hafa7651_0|cpu_py37hf1c21f6_2|cpu_py38h36eccb8_2|cpu_py38h36eccb8_1|cpu_py36h63cae03_1|cuda102py39hbbcd3cb_1|cuda112py37hfcfbd4c_1|cuda102py38hfdb21e3_1|cuda110py38hade5236_0|cuda112py39h4de5995_1|cuda110py39he47eb21_1|cuda102py38h9fb240c_1|cuda110py37h4121e64_1|cuda112py38h6425f36_1|cuda112py37hc1ee5ce_1|cuda102py37hc804c4d_1|cuda102py38h9fb240c_0|cuda102py37hc804c4d_0|cuda110py37h4121e64_0|cuda110py38hf0a79ac_0|cuda111py37hc0ce48b_0|cuda111py38hc64aeea_0|cuda112py37hc1ee5ce_0|cuda112py39h4de5995_0|cuda110py37h4121e64_0|cuda110py38hf0a79ac_0|cuda111py37hc0ce48b_0|cuda111py38hc64aeea_0|cuda112py39h4de5995_0|cuda102py37hc804c4d_0|cuda102py38h9fb240c_0|cuda102py37h689c94d_1|cuda111py37h07fa5b8_1|cuda110py39h423d6c6_1|cuda110py38h68479e5_1|cuda112py39h3ad47f5_1|cuda112py38had345c2_1|cuda112py39h4e14dd4_0|cuda110py37h4a33b93_0|cuda102py38ha031fbe_0|cuda111py38h2f85826_0|cuda111py37he371307_0|cuda102py39h2fcd037_0|cuda110py39h5cf7045_3|cuda110py38hf84197b_3|cuda110py37h4a33b93_3|cuda102py38ha031fbe_3|cuda102py36h3d4679f_3|cuda111py39hb4a4491_3|cuda112py39h4e14dd4_3|cuda112py38h3d13190_1|cuda111py37h78388d7_1|cuda102py39h9bf10ef_1|cuda111py38h5169e65_1|cuda110py36h768fbb7_1|cuda102py36he3537ca_1|cuda111py39hc274426_1|cuda112py37h946b90b_1|cuda112py36h5fea6e2_1|cuda111py36h3cb1cac_1|cuda111py39h37e5b68_1|cuda112py38h3bc52bc_1|cuda110py39hbc72f07_0|cuda110py37h5fb8b0b_0|cuda102py37h4454d97_0|cuda102py38h9f8c3ab_1|cuda101py37h7589291_1|cuda101py39h41d04a9_1|cuda101py38h2499a06_1']torchaudio -> pytorch[version='1.8.1|1.8.2']
pytorch
Package nccl conflicts for:
pytorch -> nccl[version='<2|>=2.10.3.1,<3.0a0|>=2.11.4.1,<3.0a0|>=2.12.10.1,<3.0a0|>=2.12.7.1,<3.0a0|>=2.8.4.1,<3.0a0|>=2.7.8.1,<3.0a0']
torchvision -> pytorch[version='>=1.11.0,<1.12.0a0'] -> nccl[version='<2|>=2.10.3.1,<3.0a0|>=2.11.4.1,<3.0a0|>=2.12.10.1,<3.0a0|>=2.12.7.1,<3.0a0|>=2.8.4.1,<3.0a0|>=2.7.8.1,<3.0a0']
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.31=0
- feature:|@/linux-64::__glibc==2.31=0
- torchvision -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
Your installed version is: 2.31
Note that strict channel priority may have removed packages required for satisfiability.
```
Perhaps relevant:
```
$ conda info
active environment : lts-scratch
active env location : /home/azureuser/miniconda3/envs/lts-scratch
shell level : 1
user config file : /home/azureuser/.condarc
populated config files : /home/azureuser/.condarc
conda version : 4.12.0
conda-build version : 3.21.4
python version : 3.9.10.final.0
virtual packages : __linux=5.4.0=0
__glibc=2.31=0
__unix=0=0
__archspec=1=x86_64
base environment : /home/azureuser/miniconda3 (writable)
conda av data dir : /home/azureuser/miniconda3/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/azureuser/miniconda3/pkgs
/home/azureuser/.conda/pkgs
envs directories : /home/azureuser/miniconda3/envs
/home/azureuser/.conda/envs
platform : linux-64
user-agent : conda/4.12.0 requests/2.27.1 CPython/3.9.10 Linux/5.4.0-1041-azure ubuntu/20.04.2 glibc/2.31
UID:GID : 1000:1000
netrc file : None
offline mode : False
```
| 5 |
5,714 | 77,454 |
More clarity in doc for `torch.cuda.Event.record`?
|
module: docs, triaged
|
### 📚 The doc issue
Hi, the documentation for `torch.cuda.Event.record` describes it as "Records the event in a given stream." For someone who is new to CUDA Events, this description is not very clear to me.
https://pytorch.org/docs/stable/generated/torch.cuda.Event.html#torch.cuda.Event.record
### Suggest a potential alternative/fix
Could we add more details to this description so it is clear to a n00b what "record" actually does?
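For example, even a short snippet would make the semantics concrete; a sketch of typical usage (requires a CUDA device): `record()` enqueues a marker into the given stream after all work submitted to that stream so far, which can then be synchronized on or timed against.
```python
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
stream = torch.cuda.current_stream()

start.record(stream)             # marker placed after work already queued on this stream
x = torch.randn(1024, 1024, device="cuda")
y = x @ x                        # some GPU work between the two markers
end.record(stream)               # second marker placed after the matmul

end.synchronize()                # host blocks until everything before `end` has finished
print(start.elapsed_time(end))   # elapsed milliseconds between the two recorded points
```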
cc @svekars @holly1238
| 0 |
5,715 | 77,439 |
FSDP: Mixed precision should not cast ignored buffers
|
oncall: distributed, triaged, module: fsdp
|
### π Describe the bug
Currently, FSDP mixed precision casts all buffers, but if the user uses the `ignore_modules` API, then these buffers (and parameters) should not be cast.
This currently works for ignored parameters, but buffers are still cast. We should also add testing to ensure params and buffers are not cast if they are part of ignored modules.
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @pietern @rohan-varma @SciPioneer
| 0 |
5,716 | 77,415 |
Suboptimal error message - nn.Linear with double argument
|
module: error checking, triaged
|
### π Describe the bug
Following code fails with error message below:
```python
from torch import nn
net = nn.Linear(10, 10)
t = torch.tensor([1.]*10, dtype=torch.double)
net(t)
```
Error:
```
Traceback (most recent call last):
File "~/Library/Application Support/JetBrains/PyCharm2022.1/scratches/ntuple.py", line 15, in <module>
net(t)
File "~/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "~/venv/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Double but found Float
```
There are two things I don't get with this message:
1. I'm passing in a dtype=torch.double tensor, but the message complains inversely that it *expects* Double and found Float. This must be due to the order of operations inside nn.Linear, but as a user I am first and foremost responsible for the type of my inputs, so saying "expects Float, found Double" would make me discover my bugs faster.
2. Why does it say `scalar type`? weight, bias and input are all non-scalars.
In my opinion it would make sense to say explicitly what had which type in this mismatch. Something along the lines of: "nn.Linear of type `torch.float` expects input of type `torch.float`, but input of type `torch.double` was given."
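For completeness, a minimal sketch of the two fixes a user ends up applying today (cast the input or cast the module), which is what a clearer message should point towards:
```python
import torch
from torch import nn

net = nn.Linear(10, 10)
t = torch.tensor([1.] * 10, dtype=torch.double)

out = net(t.float())    # option 1: cast the input to the module's float32 dtype
out = net.double()(t)   # option 2: cast the module's weight and bias to float64
```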
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 11.6.5 (x86_64)
GCC version: Could not collect
Clang version: 12.0.5 (clang-1205.0.22.9)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.12 (main, Mar 26 2022, 15:52:10) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-11.6.5-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.942
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] torch==1.11.0
[conda] blas 1.0 mkl https://repo.anaconda.com/pkgs/main
[conda] mkl 2021.2.0 hecd8cb5_269 https://repo.anaconda.com/pkgs/main
[conda] mkl-service 2.3.0 py38h9ed2024_1 https://repo.anaconda.com/pkgs/main
[conda] mkl_fft 1.3.0 py38h4a7008c_2 https://repo.anaconda.com/pkgs/main
[conda] mkl_random 1.2.1 py38hb2f4e1b_2 https://repo.anaconda.com/pkgs/main
[conda] numpy 1.22.2 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchvision 0.10.0 pypi_0 pypi
| 3 |
5,717 | 77,413 |
Process hangs after calling conv2d() in pytorch 1.11.0 with CUDA 11.3
|
module: cudnn, module: convolution, triaged, module: deadlock
|
### π Describe the bug
When running the following code, the python process hangs and becomes unresponsive to Ctrl-C, forcing me to manually kill the process in the task manager.
Monitoring the resources shows the memory for data is allocated, but no GPU activity is monitored.
```python
import torch
#from hanging_threads import start_monitoring
#start_monitoring()
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([4, 1, 1, 262656], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(1, 16, kernel_size=[1, 513], padding=[0, 0], stride=[1, 16], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
```
Using the hanging_threads module to get information on what code causes the problem yields the following:
```
---------- Thread 22344 "MainThread" hangs ----------
File "D:\AI wobble\RAVE\example.py", line 11, in <module>
out = net(data)
File "D:\Anaconda3\envs\rave\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Anaconda3\envs\rave\lib\site-packages\torch\nn\modules\conv.py", line 447, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Anaconda3\envs\rave\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
```
Running the same code on a python 3.7 environment with pytorch 1.8.1 cuda10.2 cudnn7.0 (see below) runs fine; execution finishes after a few seconds.
Hardware details: NVIDIA RTX2060
### Versions
This is the environment where the problem occurs:
```
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: N/A
Python version: 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19043-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 512.59
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] pytorch-lightning==1.6.1
[pip3] torch==1.11.0
[pip3] torchmetrics==0.8.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2022.0.0 haa95532_115
[conda] numpy 1.20.3 pypi_0 pypi
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-lightning 1.6.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.8.2 pypi_0 pypi
```
This is an environment where the problem does not occur:
```
PyTorch version: 1.8.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: N/A
Python version: 3.7.10 (default, Feb 26 2021, 13:06:18) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 512.59
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] pytorch-lightning==1.6.1
[pip3] torch==1.8.1
[pip3] torchaudio==0.8.1
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.9.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 h74a9793_1
[conda] mkl 2021.2.0 haa95532_296
[conda] mkl-service 2.3.0 py37h2bbff1b_1
[conda] mkl_fft 1.3.0 py37h277e83a_2
[conda] mkl_random 1.2.1 py37hf11a4ad_2
[conda] numpy 1.20.3 pypi_0 pypi
[conda] numpy-base 1.20.2 py37hc2deb75_0
[conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7_0 pytorch
[conda] pytorch-lightning 1.6.1 pypi_0 pypi
[conda] torchaudio 0.8.1 py37 pytorch
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchvision 0.9.1 py37_cu102 pytorch
```
cc @csarofeen @ptrblck @xwang233 @ngimel
| 0 |
5,718 | 77,411 |
Allow force building with/without AVX
|
module: build, triaged
|
### π The feature, motivation and pitch
Currently it seems impossible to force a non-AVX build on an AVX-available machine, or vice versa. This poses a problem on packaging a non-AVX version of pytorch on Arch Linux ([FS#73012](https://bugs.archlinux.org/task/73012)), since it involves building the package for non-AVX machines on a machine with AVX.
I'd like to suggest adding some CMake options for force building with/without AVX or specifying the desired target CPU features.
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| 0 |
5,719 | 77,410 |
torch.onnx.export does not track Tensor.data.size() for dynamic axes
|
module: onnx, triaged, onnx-triaged, release notes: onnx
|
### π Describe the bug
While I was implementing dynamic batch support for [nanodet](https://github.com/RangiLyu/nanodet), one of the outputs' batch sizes turned out to be statically defined. After some log reading, I found out that `Tensor.data.size()` is not tracked for dynamic axes. After changing it to `Tensor.shape` it was fixed.
I don't know if this is how it is supposed to work as there is no documentation about this.
Here is an example code that demonstrates the bug
``` python
import torch
class ExampleDataSize(torch.nn.Module):
def forward(self, a):
s = a.data.size()
t = 0
for i in range(4):
t += s[i]
return t
class ExampleShape(torch.nn.Module):
def forward(self, a):
s = a.shape
t = 0
for i in range(4):
t += s[i]
return t
data_size_ex = ExampleDataSize()
shape_ex = ExampleShape()
dummy_input = torch.autograd.Variable(
torch.randn(1, 3, 64, 128)
)
torch.onnx.export(
data_size_ex,
dummy_input,
"data_size_ex.onnx",
verbose=True,
keep_initializers_as_inputs=True,
opset_version=11,
input_names=["data"],
dynamic_axes={
"data": [0, 1, 2, 3]
},
output_names=["output"],
)
torch.onnx.export(
shape_ex,
dummy_input,
"shape_ex.onnx",
verbose=True,
keep_initializers_as_inputs=True,
opset_version=11,
input_names=["data"],
dynamic_axes={
"data": [0, 1, 2, 3]
},
output_names=["output"],
)
```
This is the output of the above code
```
/usr/lib/python3.10/site-packages/torch/onnx/utils.py:1329: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input data
warnings.warn("No names were found for specified dynamic axes of provided input."
graph():
%output : Long(requires_grad=0, device=cpu) = onnx::Constant[value={196}]() # /tmp/bug/bug.py:7:0
return (%output)
graph(%data : Float(*, *, *, *, strides=[24576, 8192, 128, 1], requires_grad=0, device=cpu)):
%1 : Long(4, strides=[1], device=cpu) = onnx::Shape(%data) # /tmp/bug/bug.py:14:0
%2 : Long(device=cpu) = onnx::Constant[value={0}]() # /tmp/bug/bug.py:14:0
%3 : Long(device=cpu) = onnx::Gather[axis=0](%1, %2) # /tmp/bug/bug.py:14:0
%4 : Long(4, strides=[1], device=cpu) = onnx::Shape(%data) # /tmp/bug/bug.py:14:0
%5 : Long(device=cpu) = onnx::Constant[value={1}]() # /tmp/bug/bug.py:14:0
%6 : Long(device=cpu) = onnx::Gather[axis=0](%4, %5) # /tmp/bug/bug.py:14:0
%7 : Long(4, strides=[1], device=cpu) = onnx::Shape(%data) # /tmp/bug/bug.py:14:0
%8 : Long(device=cpu) = onnx::Constant[value={2}]() # /tmp/bug/bug.py:14:0
%9 : Long(device=cpu) = onnx::Gather[axis=0](%7, %8) # /tmp/bug/bug.py:14:0
%10 : Long(4, strides=[1], device=cpu) = onnx::Shape(%data) # /tmp/bug/bug.py:14:0
%11 : Long(device=cpu) = onnx::Constant[value={3}]() # /tmp/bug/bug.py:14:0
%12 : Long(device=cpu) = onnx::Gather[axis=0](%10, %11) # /tmp/bug/bug.py:14:0
%13 : Long(requires_grad=0, device=cpu) = onnx::Constant[value={0}]()
%14 : Long(requires_grad=0, device=cpu) = onnx::Add(%3, %13) # /tmp/bug/bug.py:16:0
%15 : Long(requires_grad=0, device=cpu) = onnx::Add(%14, %6) # /tmp/bug/bug.py:16:0
%16 : Long(requires_grad=0, device=cpu) = onnx::Add(%15, %9) # /tmp/bug/bug.py:16:0
%output : Long(requires_grad=0, device=cpu) = onnx::Add(%16, %12) # /tmp/bug/bug.py:16:0
return (%output)
```
As you can see Tensor.data.size() returns a constant node, while Tensor.shape returns the correct onnx.
### Versions
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 11.2.0
Clang version: 13.0.1
CMake version: version 3.22.3
Libc version: glibc-2.35
Python version: 3.10.2 (main, Jan 15 2022, 19:56:27) [GCC 11.1.0] (64-bit runtime)
Python platform: Linux-5.15.28-1-MANJARO-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 510.54
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.3.2
/usr/lib/libcudnn_adv_infer.so.8.3.2
/usr/lib/libcudnn_adv_train.so.8.3.2
/usr/lib/libcudnn_cnn_infer.so.8.3.2
/usr/lib/libcudnn_cnn_train.so.8.3.2
/usr/lib/libcudnn_ops_infer.so.8.3.2
/usr/lib/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0rc5
[pip3] torchvision==0.12.0a0+b3c8b20
| 1 |
5,720 | 77,397 |
Large numerical inconsistency for `torch.einsum` on RTX30 series GPU.
|
module: cuda, triaged, module: tf32
|
### π Describe the bug
```python
import torch
print(torch.__version__)
a = torch.rand(1, 8192*10, 3) * 100
b = torch.rand(1, 3, 3)
c_1 = torch.einsum('c...i,cji->c...j', a, b)
c_2 = torch.einsum('c...i,cji->c...j', a.cuda(), b.cuda())
c_3 = (a.cuda()[...,None,:] * b.cuda()[:,None,...]).sum(-1)
print((c_1 - c_2.cpu()).abs().max(), (c_1 - c_3.cpu()).abs().max())
```
Output:
```
1.11.0+cu113
tensor(0.0833), tensor(3.0518e-05)
```
The difference between c_1 and c_2 is ridiculously large. However, when I rewrote cuda einsum to `( _ * _ ).sum(-1)`, the result is normal.
I tested this short script on RTX3090, RTX3060, and they both show similar problematic results.
I also tested on RTX2070 and Tesla T4 (Colab); they do not have this issue.
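The RTX 30 series are Ampere GPUs, so this looks like it could be the TF32 default used for float32 matmuls on Ampere (the RTX 2070 and T4 have no TF32, which would explain why they are unaffected). A quick check of this assumption is to disable TF32 and see whether the discrepancy drops to ordinary float32 rounding error:
```python
import torch

torch.backends.cuda.matmul.allow_tf32 = False  # assumption: this einsum hits the cuBLAS matmul path
a = torch.rand(1, 8192 * 10, 3) * 100
b = torch.rand(1, 3, 3)
c_1 = torch.einsum('c...i,cji->c...j', a, b)
c_2 = torch.einsum('c...i,cji->c...j', a.cuda(), b.cuda())
print((c_1 - c_2.cpu()).abs().max())  # expected to be ~1e-5 if TF32 was the cause
```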
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.11.0-40-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.4.48
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.82.00
cuDNN version: Probably one of the following:
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0+cu113
[pip3] torch-ema==0.3
[pip3] torch-scatter==2.0.9
[pip3] torch-tb-profiler==0.4.0
[pip3] torchvision==0.12.0
[conda] cudatoolkit 11.1.1 h6406543_8 conda-forge
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.11.0+cu113 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-scatter 2.0.9 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @ngimel @zasdfgbnm @ptrblck
| 4 |
5,721 | 77,380 |
microbenchmark-style tests
|
module: tests, triaged
|
### π The feature, motivation and pitch
We can add a set of tests that are based lists of IR, like the microbenchmarks in torchbench.
Reason - a lot of the microbenchmarks have revealed non-performance issues - so it might be good to add these into the pytorch tests to catch failures, instead of waiting for torchbench failures.
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry
| 0 |
5,722 | 77,374 |
[distributed] c10d crashing on assert
|
oncall: distributed
|
### π Describe the bug
Finally got a simple script that reproduces the pt-1.11/c10d crash on assert and/or exit - which on the JeanZay HPC most of the time leads to core dumps. The script is totally unrelated to BigScience but the C traceback looks similar.
```
$ CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.run --nproc_per_node=1 --master_addr='127.0.0.1' --master_port=9901 test.py
[...]
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 279, in fetch_sub_module
assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
AssertionError: {'id': 6, 'status': 'INFLIGHT', 'numel': 0, 'ds_numel': 64, 'shape': (0,), 'ds_shape': (64,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {116}}
10%|βββββββββββ | 1/10 [00:00<00:01, 4.79it/s]
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: driver shutting down
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from query at ../aten/src/ATen/cuda/CUDAEvent.h:95 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f51ab1577d2 in /home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::finishedGPUExecutionInternal() const + 0x11a (0x7f518b4d79ea in /home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #2: c10d::ProcessGroupNCCL::WorkNCCL::isCompleted() + 0x50 (0x7f518b4d9fd0 in /home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x145 (0x7f518b4db265 in /home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #4: <unknown function> + 0xc9039 (0x7f51ca9cd039 in /mnt/nvme0/anaconda3/envs/py38-pt111/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x94947 (0x7f51cdd19947 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x7f51cdda9a44 in /lib/x86_64-linux-gnu/libc.so.6)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 654206) of binary: /home/stas/anaconda3/envs/py38-pt111/bin/python
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/distributed/run.py", line 728, in <module>
main()
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
test.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-05-11_18:54:22
host : localhost
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 654206)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 654206
```
I attached the 2 needed files:
[test.txt](https://github.com/pytorch/pytorch/files/8682579/test.txt)
[ds_config.txt](https://github.com/pytorch/pytorch/files/8682580/ds_config.txt)
please rename those upon saving to:
```
mv ds_config.txt ds_config.json
mv test.txt test.py
```
Here are the right SHAs - otherwise the initial problem will be fixed shortly and there will be no assert. Please install in the following order:
```
pip install deepspeed transformers # gets the deps right quickly
pip install git+https://github.com/microsoft/DeepSpeed.git@50893458d66f27289934dfcc8fd0d1e02a8dcbd7
pip install git+https://github.com/huggingface/transformers.git@d4cc56227e31593a4df4c3fc5c83f34e87c1f6b4
```
Then just:
```
$ CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.run --nproc_per_node=1 --master_addr='127.0.0.1' --master_port=9901 test.py
```
Clearly there is some bad interaction happening between deepspeed, which uses pytorch CUDA extensions and pytorch c10d.
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.4 (Ootpa) (x86_64)
GCC version: (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.18.0-305.40.2.el8_4.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 11.4.152
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.2
[pip3] torch==1.11.0+cu115
[pip3] torchaudio==0.11.0+cu115
[pip3] torchvision==0.12.0+cu115
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.22.2 pypi_0 pypi
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.11.0+cu115 pypi_0 pypi
[conda] torchaudio 0.11.0+cu115 pypi_0 pypi
[conda] torchvision 0.12.0+cu115 pypi_0 pypi
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 4 |
5,723 | 77,354 |
outputs_[i]->uses().empty()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755853042/work/torch/csrc/jit/ir/ir.cpp":1314, please report a bug to PyTorch.
|
oncall: jit
|
### π Describe the bug
When I use torch.jit.script on the class, the line `p = pt_2d[b, :, pix2pt]` triggers the error described in the title. The snippet:
```python
import math
import torch
import torch.nn as nn
class DifferentiableRasterizer(nn.Module):
def __init__(self,face,height,width,block_size=32):
super(DifferentiableRasterizer, self).__init__()
self.block_size = block_size
self.width = width
self.height = height
self.width_exp = int(math.ceil(float(width)/float(self.block_size)))*self.block_size
self.height_exp = int(math.ceil(float(height)/float(self.block_size)))*self.block_size
self.face = face
self.index_buf = torch.full((self.height_exp, self.width_exp), face.shape[1], dtype=torch.long).to(face.device)
self.face_index = torch.LongTensor(range(0,self.face.shape[1])).to(face.device)
self.x_grid = torch.tensor(range(0,self.width)).unsqueeze(0).to(face.device)
self.y_grid = torch.tensor(range(0,self.height)).unsqueeze(1).to(face.device)
self.x_grid_block = torch.tensor(range(0,self.block_size)).unsqueeze(0).unsqueeze(2).to(face.device)
self.y_grid_block = torch.tensor(range(0,self.block_size)).unsqueeze(1).unsqueeze(2).to(face.device)
def forward(self,pt_2d,color,pt_3d,normal,R,T):
ftiny = 1.17e-35
inf_value = 3.40e+35
lower_inf_value = 3.40e+34
batch, vnum, pnum = pt_2d.shape
cnum = color.shape[1]
image = torch.zeros(batch,cnum,self.height,self.width,device=pt_2d.device)
mask = torch.zeros(batch,self.height,self.width,device=pt_2d.device)
for b in range(batch):
with torch.no_grad():
norm_cul = torch.sum((pt_3d[b,:,self.face[0, :]] + (R[b,:,:].t()@T[b,:, :])) * normal[b,:,:],0) < 0
depth_cul = torch.min(pt_2d[b,2,self.face], 0)[0] > 0
if torch.sum(norm_cul * depth_cul).item()==0:
continue
face_red = self.face[:, norm_cul * depth_cul]
face_index_red = self.face_index[norm_cul * depth_cul]
num = face_red.shape[1]
self.index_buf[:] = num
p = pt_2d[b, :, face_red]
pz_min,_ = torch.min(p[2,:,:],0)
px_min,_ = torch.min(p[0,:,:].int(),0)
px_max,_ = torch.max(p[0,:,:].int(),0)
py_min,_= torch.min(p[1, :, :].int(), 0)
py_max,_ = torch.max(p[1,:,:].int(),0)
x_min,_ = torch.min(px_min,0)
x_max,_ = torch.max(px_max,0)
y_min,_ = torch.min(py_min,0)
y_max,_ = torch.max(py_max,0)
range_x_min = max(x_min.item()-x_min.item()%self.block_size,0)
range_y_min = max(y_min.item() - y_min.item() % self.block_size, 0)
range_x_max = min(x_max.item(), self.width_exp)
range_y_max = min(y_max.item(), self.height_exp)
det = ((p[1, 1, :] - p[1, 2, :]) * (p[0, 0, :] - p[0, 2, :]) + (p[0, 2, :] - p[0, 1, :]) * (p[1, 0, :] - p[1, 2, :])).unsqueeze(0).unsqueeze(0)
det = det.sign()*torch.clamp(det.abs(),min=ftiny)
inv_det = 1/det
l0_x = (p[1, 1, :] - p[1, 2, :]) * inv_det
l0_y = (p[0, 2, :] - p[0, 1, :]) * inv_det
l0_c = -l0_x*p[0, 2, :] - l0_y *p[1, 2, :]
l1_x = (p[1, 2, :] - p[1, 0, :]) * inv_det
l1_y = (p[0, 0, :] - p[0, 2, :]) * inv_det
l1_c = -l1_x*p[0, 2, :] - l1_y *p[1, 2, :]
l2_x = -l0_x - l1_x
l2_y = -l0_y - l1_y
l2_c = 1-l0_c-l1_c
p = p.unsqueeze(1).unsqueeze(1)
D_x = p[2, :, :, 0, :] * l0_x + p[2, :, :, 1, :] * l1_x + p[2, :, :, 2, :] * l2_x
D_y = p[2, :, :, 0, :] * l0_y + p[2, :, :, 1, :] * l1_y + p[2, :, :, 2, :] * l2_y
D_c = (p[2,:,:,0,:]*l0_c + p[2,:,:,1,:]*l1_c + p[2,:,:,2,:]*l2_c)
for i in range(int(range_y_min),int(range_y_max),int(self.block_size)):
D_yc = D_y * (float(i)+self.y_grid_block) + D_c
l0_yc = l0_y * (float(i) + self.y_grid_block) + l0_c
l1_yc = l1_y * (float(i) + self.y_grid_block) + l1_c
l2_yc = l2_y * (float(i) + self.y_grid_block) + l2_c
for k in range(int(range_x_min),int(range_x_max),int(self.block_size)):
target = (px_max>=k)*(px_min<k+self.block_size)*(py_max>=i)*(py_min<i+self.block_size)
face_ct = torch.sum(target, 0, dtype=torch.long)
if face_ct == 0:
continue
kxg = float(-k)-self.x_grid_block
M = (l0_yc[:,:,target] >= (l0_x[:,:,target]* kxg)) * (l1_yc[:,:,target] >= (l1_x[:,:,target]* kxg)) * (l2_yc[:,:,target] >= (l2_x[:,:,target]* kxg))
vis_ct = torch.max(torch.sum(M, 2)).item()
if vis_ct==1:
vis, idx = torch.max(M, 2)
self.index_buf[i:i + self.block_size, k:k + self.block_size][vis] = (self.face_index[0:target.shape[0]][target])[idx[vis]]
elif vis_ct>1:
D = M.bitwise_not().float() * inf_value + D_x[:,:,target] * (float(k)+self.x_grid_block) + D_yc[:,:,target]
D[D!=D]=inf_value
depth, idx = torch.min(D,2)
vis = depth< lower_inf_value
self.index_buf[i:i+self.block_size, k:k+self.block_size][vis] = (self.face_index[0:target.shape[0]][target])[idx[vis]]
mask_ = (self.index_buf[0:self.height,0:self.width]!=num).float()
index_buf_tmp = self.index_buf[0:self.height,0:self.width]
index_buf_tmp[index_buf_tmp==num] = 0
pix2pt = self.face[:,face_index_red[index_buf_tmp]]
p = pt_2d[b, :, pix2pt]
face = torch.randint(1,100, (3, 20))
traced_script_module = torch.jit.script(DifferentiableRasterizer(face,224,224,32))
```
The error occurs in the last line of `forward`: `p = pt_2d[b, :, pix2pt]`.
### Versions
Versions of relevant libraries:
[pip3] facenet-pytorch==2.5.2
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.1
[pip3] numpydoc==1.1.0
[pip3] pytorch3d==0.6.1
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] numpy 1.20.1 py38h93e21f0_0
[conda] numpy-base 1.20.1 py38h7d8b39e_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch 1.11.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.6.1 py38_cu102_pyt1110 pytorch3 d-nightly
[conda] torchaudio 0.11.0 py38_cu102 pytorch
[conda] torchvision 0.12.0 py38_cu102 pytorch
| 0 |
5,724 | 77,342 |
DISABLED test_ddp_profiling_autograd_profiler (__main__.TestDistBackendWithSpawn)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_ddp_profiling_autograd_profiler%2C%20TestDistBackendWithSpawn) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6399576647).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 4 green.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 4 |
5,725 | 77,340 |
Disable issue doesn't disable multiple dtypes correctly
|
module: ci, triaged
|
### π Describe the bug
I wanted to disable `test_ref_small_input_prod_cuda` and `test_ref_small_input__masked_prod_cuda`, but creating a disabling issue for them didn't actually stop the tests from running (there are int8, int16, and int32 variants for both). I ended up having to create 6 separate issues.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 1 |
5,726 | 77,332 |
wrong overload resolved for `torch.mul(x, 4)` in `__torch_dispatch__`
|
triaged, module: __torch_dispatch__
|
### π Describe the bug
minimal repro
```python
import torch
class ElementwiseMulScalarIntModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return torch.mul(x, 4)
class TrivTensor(torch.Tensor):
@staticmethod
def __new__(cls, elem, *, requires_grad=None):
if requires_grad is None:
return super().__new__(cls, elem)
else:
return cls._make_subclass(cls, elem, requires_grad)
@classmethod
def __torch_dispatch__(cls, func, _types, args=(), kwargs=None):
print(func)
for arg in args:
if isinstance(arg, cls): continue
print(arg, type(arg))
ElementwiseMulScalarIntModule()(TrivTensor(torch.randint(10, (3, 4))))
> aten.mul.Tensor
> 4 <class 'int'>
```
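For reference, a small hedged check of what I'd expect instead: the Scalar overload does exist in the op registry, so presumably `aten.mul.Scalar` (not `aten.mul.Tensor`) is what `__torch_dispatch__` should receive for a Python int argument:
```python
import torch

print(torch.ops.aten.mul.Scalar._schema)  # schema of the Scalar overload
print(torch.ops.aten.mul.Tensor._schema)  # schema of the overload reported above
```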
might be related to https://github.com/pytorch/pytorch/issues/77223
cc @Chillee @ezyang @zou3519 @albanD @samdow @silvasean @cathyzhyi @powderluv
### Versions
PyTorch version: 1.12.0.dev20220511+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.21.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 510.39.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0.dev20220511+cpu
[pip3] torchvision==0.13.0.dev20220511+cpu
[conda] No relevant packages
| 3 |
5,727 | 77,317 |
DISABLED test_DistributedDataParallel (__main__.TestDistBackendWithSpawn)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_DistributedDataParallel%2C%20TestDistBackendWithSpawn) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6396724783).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 4 green.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 6 |
5,728 | 77,284 |
non-reentrant checkpointing uses same memory as non-checkpointed code
|
high priority, oncall: distributed, module: checkpoint, triaged
|
### π Describe the bug
The newly added [checkpoint_without_reentrant](https://github.com/pytorch/pytorch/blob/master/torch/utils/checkpoint.py#L309) never clears the `storage` list, so it uses the same memory as non-checkpointed code.
Changing `unpack` to the following should fix it:
```
def unpack(x):
nonlocal storage
unpack_counter = 0
if len(storage) == 0:
def inner_pack(inner):
nonlocal unpack_counter
storage[unpack_counter] = inner
unpack_counter += 1
return None
def inner_unpack(packed):
raise RuntimeError("You are calling backwards on a tensor that is never exposed. Please open an issue.")
# Stash the surrounding rng state, and mimic the state that was
# present at this time during forward. Restore the surrounding state
# when we're done.
rng_devices = []
if preserve_rng_state and had_cuda_in_fwd:
rng_devices = fwd_gpu_devices
with torch.random.fork_rng(devices=rng_devices, enabled=preserve_rng_state):
if preserve_rng_state:
torch.set_rng_state(fwd_cpu_state)
if had_cuda_in_fwd:
set_device_states(fwd_gpu_devices, fwd_gpu_states)
with torch.enable_grad(), torch.cuda.amp.autocast(had_autocast_in_fwd):
with torch.autograd.graph.saved_tensors_hooks(inner_pack, inner_unpack):
_unused = function(*args, **kwargs)
return storage.pop(x)
```
### Versions
Versions
```
Collecting environment information...
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1063-azure-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.4.152
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.82.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.11.0+cu113
[pip3] torchviz==0.0.2
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.11.0+cu113 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,729 | 77,265 |
Subclasses with unwrapping `__torch_dispatch__` impls as parameters
|
module: nn, triaged, module: __torch_dispatch__, tensor subclass
|
## Issue description
As pointed out by @another-pjohnson [here](https://github.com/pytorch/pytorch/pull/73459#issuecomment-1121690504), if a tensor subclass implements `__torch_dispatch__` that doesn't re-wrap to return a subclass instance, the Parameter version of the instance will be of the unwrapped type rather than the subclass type.
```python
import torch
from torch.utils._pytree import tree_map
class NonRewrappingTensor(torch.Tensor):
@staticmethod
def __new__(
cls, t: torch.Tensor
):
r = super(NonRewrappingTensor, cls)._make_wrapper_subclass(cls, t.shape, dtype=t.dtype, requires_grad=t.requires_grad, device=t.device)
return r
def __init__(self, t) -> None:
self.tensor: torch.Tensor = t
__torch_function__ = torch._C._disabled_torch_function_impl
@classmethod
def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
def unwrap(e) -> torch.Tensor:
if isinstance(e, NonRewrappingTensor):
t = e.tensor
return t
else:
return e
r = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
# Return an unwrapped tensor no longer of original subclass type.
return r
t = NonRewrappingTensor(torch.randn(3))
param = torch.nn.Parameter(t)
print(type(param)) # torch.Tensor, should be NonRewrappingTensor
```
This happens because `Parameter.__new__` relies on a call to `detach()`, calling the subclass's `__torch_dispatch__` impl.
https://github.com/pytorch/pytorch/blob/3d0e6f169c6745a548ca3e71af9a719ba67dff20/torch/nn/parameter.py#L40
What should be done here to avoid confusion? A few options:
1. Explicitly enforce that any tensor subclass usable as a Parameter be closed under *all* ops (i.e. its `__torch_dispatch__` impl must always re-wrap). I think this is a non-starter, as e.g. the `SparseTensor` example [here](https://github.com/pytorch/pytorch/blob/3c2e0dc6574e1ddc1dda16067200d5b491a18fa7/torch/testing/_internal/common_subclass.py#L99-L154) does not satisfy this and we still want it usable as a Parameter.
2. Explicitly enforce that any tensor subclass usable as a Parameter be closed under some subset of ops. At the very least, this should include `detach()` (I think this should suffice as it's the only op the new parameter mechanism relies on, but open to discussion). This puts the responsibility to specifically handle `detach()` on the subclass writer, but it's possible for us to throw a nice error in `Parameter.__new__` to inform subclass writers if the `detach()` constraint is violated.
```python
@classmethod
def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
def unwrap(e) -> torch.Tensor:
if isinstance(e, NonRewrappingTensor):
t = e.tensor
return t
else:
return e
r = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
# Specifically handle detach() so this is usable as a Parameter.
if func is torch.ops.aten.detach.default:
return NonRewrappingTensor(r)
return r
```
3. Leave as-is; working as intended. Are there any valid use cases for using a tensor subclass as parameter and having `detach()` behavior returning the unwrapped type or is this just a problem waiting to happen for subclass writers that need unwrapping behavior in their `__torch_dispatch__` impl?
4. Some other mechanism to avoid invoking `__torch_dispatch__` impl when detaching in `Parameter.__new__`?
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @Chillee @ezyang @zou3519 @samdow
| 2 |
5,730 | 77,256 |
CrossEntropyLoss computes SoftMax always across the second dimension
|
module: nn, module: loss, triaged, actionable
|
### π The feature, motivation and pitch
When working with recurrent neural networks, such as `torch.nn.RNN`, the shape of the output (the logits) of that network is `NxLxD` where `N` is the batch size, `L` is the sequence length and `D` is the feature dimension. When using one-hot encodings for our targets in the same dimensions, e.g., for text processing, we want to use categorical cross-entropy loss on top of softmax activation (i.e. `torch.nn.CrossEntropyLoss`).
According to the documentation, `torch.nn.CrossEntropyLoss` **always** performs softmax on the second dimension of the given logits. In our case, however, we need to compute softmax across the **third** dimension. Currently, it is required to `permute(0,2,1)` both the logits and the targets to artificially turn the third dimension into the second (see the sketch below).
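For illustration, a minimal sketch of this workaround (shapes and names are made up; `D` is the number of classes):
```python
import torch
import torch.nn.functional as F

N, L, D = 4, 7, 10
logits = torch.randn(N, L, D)                  # RNN output: batch x sequence x classes
targets = torch.randint(0, D, (N, L))          # class index per time step
target_probs = F.one_hot(targets, D).float()   # one-hot targets, also N x L x D

# cross_entropy expects the class dimension second, so both tensors must be permuted
loss = F.cross_entropy(logits.permute(0, 2, 1), target_probs.permute(0, 2, 1))
```
With the requested `dim` argument the two `permute` calls would disappear.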
The requested feature would be a `dim` parameter in the constructor of the `torch.nn.CrossEntropyLoss` that enables to select the dimension over which softmax is computed. Similarly, for `torch.nn.functional.cross_entropy`, a `dim` parameter would be required.
### Alternatives
I guess, a quick-and-dirty implementation would simply apply the `permute` internally before passing the data to the C implementation.
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 5 |
5,731 | 77,253 |
lintrunner not working
|
module: lint, triaged
|
### π Describe the bug
On this commit https://github.com/pytorch/pytorch/commit/a98bb228552beacc173bdf93a150dd611b65707e running lintrunner locally did not catch black problems, even though there are problems.
```
(base) ezyang-mbp:pytorch ezyang$ lintrunner -a -vv
[2022-05-11T14:37:44Z DEBUG lintrunner::lint_config] Found linters: {"SHELLCHECK", "FLAKE8", "ACTIONLINT", "NOQA", "TESTOWNERS", "SPACES", "INCLUDE", "MYPYSTRICT", "CIRCLECI", "EXEC", "MYPY", "PYPIDEP", "NEWLINE", "TYPEIGNORE", "CUBINCLUDE", "CMAKE", "CLANGTIDY", "NATIVEFUNCTIONS", "BLACK", "TABS", "RAWCUDA", "CLANGFORMAT"}
[2022-05-11T14:37:44Z DEBUG lintrunner] Running linters: ["FLAKE8", "CLANGFORMAT", "MYPY", "MYPYSTRICT", "CLANGTIDY", "TYPEIGNORE", "NOQA", "CIRCLECI", "NATIVEFUNCTIONS", "NEWLINE", "SPACES", "TABS", "INCLUDE", "PYPIDEP", "EXEC", "CUBINCLUDE", "RAWCUDA", "CMAKE", "SHELLCHECK", "ACTIONLINT", "TESTOWNERS", "BLACK"]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linting commit diff files: {".lintrunner.toml", "torch/_prims/executor.py", "torch/_prims/utils.py"}
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linting working tree diff files: {}
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linting files: [/Users/ezyang/Dev/pytorch/.lintrunner.toml, /Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'CLANGFORMAT' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'MYPYSTRICT' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'FLAKE8' matched files: [/Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'TYPEIGNORE' matched files: [/Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'CLANGTIDY' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'MYPY' matched files: [/Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'CIRCLECI' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'NOQA' matched files: [/Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'NEWLINE' matched files: [/Users/ezyang/Dev/pytorch/.lintrunner.toml, /Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'SPACES' matched files: [/Users/ezyang/Dev/pytorch/.lintrunner.toml, /Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'TABS' matched files: [/Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'NATIVEFUNCTIONS' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'PYPIDEP' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'CUBINCLUDE' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'EXEC' matched files: [/Users/ezyang/Dev/pytorch/.lintrunner.toml]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'TESTOWNERS' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'BLACK' matched files: [/Users/ezyang/Dev/pytorch/torch/_prims/executor.py, /Users/ezyang/Dev/pytorch/torch/_prims/utils.py]
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'RAWCUDA' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'CMAKE' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'INCLUDE' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'SHELLCHECK' matched files: []
[2022-05-11T14:37:44Z TRACE lintrunner::log_utils] Linter 'ACTIONLINT' matched files: []
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter EXEC: python3 tools/linter/adapters/exec_linter.py -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmpfNRg5O
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter NEWLINE: python3 tools/linter/adapters/newlines_linter.py -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmpAgigCj
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter BLACK: python3 tools/linter/adapters/black_linter.py -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmpqSQYeZ
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter SPACES: python3 tools/linter/adapters/grep_linter.py --pattern=[[:blank:]]$ --linter-name=SPACES --error-name=trailing spaces --replace-pattern=s/[[:blank:]]+$// --error-description=This line has trailing spaces; please remove them. -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmppKEWNe
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter TYPEIGNORE: python3 tools/linter/adapters/grep_linter.py --pattern=# type:\s*ignore([^\[]|$) --linter-name=TYPEIGNORE --error-name=unqualified type: ignore --error-description=This line has an unqualified `type: ignore`; please convert it to `type: ignore[xxxx]` -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmp8f64VB
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter MYPY: python3 tools/linter/adapters/mypy_linter.py --config=mypy.ini -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmpTcB6bU
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter NOQA: python3 tools/linter/adapters/grep_linter.py --pattern=# noqa([^:]|$) --linter-name=NOQA --error-name=unqualified noqa --error-description=This line has an unqualified `noqa`; please convert it to `noqa: XXXX` -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmpRYnvNx
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter FLAKE8: python3 tools/linter/adapters/flake8_linter.py -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmpQuOP1F
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Running linter TABS: python3 tools/linter/adapters/grep_linter.py --pattern= --linter-name=TABS --error-name=saw some tabs --replace-pattern=s/\t/ / --error-description=This line has tabs; please replace them with spaces. -- @/var/folders/11/bcmcs8d57q7dxbysb4w_h1ym0000gn/T/.tmppZR2f5
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter EXEC took: 103.265458ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter NEWLINE took: 105.674791ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter SPACES took: 120.552875ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter TABS took: 118.828291ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter TYPEIGNORE took: 122.363458ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter NOQA took: 122.022291ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter BLACK took: 316.047708ms
[2022-05-11T14:37:44Z DEBUG lintrunner::linter] Linter FLAKE8 took: 463.736833ms
[2022-05-11T14:38:02Z DEBUG lintrunner::linter] Linter MYPY took: 18.497567833s
ok No lint issues.
```
Also you probably should have some way of reporting the lintrunner version
cc @suo
### Versions
master
| 4 |
5,732 | 77,251 |
The codegen unconditionally generates code even when it is not going to be used
|
triaged, module: codegen
|
The codegen today generate cpp and header for all the dispatch keys listed in https://github.com/pytorch/pytorch/blob/2083b16f68963e563c6fbf6bf1084e78a7f66139/torchgen/model.py#L144-L163
We should be able to skip all the cuda-related ones when doing a CPU-only build and skip the mkldnn ones when it is not available.
This would speed up codegen for such builds.
cc @ezyang @bhosmer @bdhirsh
| 1 |
5,733 | 77,241 |
libtorch1.8 torch::sigmoid is wrong
|
module: cpp, triaged, module: jetson
|
### π Describe the bug
When I use libtorch 1.8 on NVIDIA Jetson AGX, I find that the torch::sigmoid() function gives a wrong result. When I use libtorch 1.7, the result is correct!
```
#include <torch/torch.h>
#include<torch/csrc/autograd/function.h>
#include <iostream>
using namespace std;
int main(){
// at::Tensor input = torch::tensor(5.5);
at::Tensor input = torch::ones({1, 2, 152, 152});
at::Tensor output = input.sigmoid_();
cout << output.sum() << endl;
return 0;
}
```
the result:
libtorch1.8
```
27271.1
[ CPUFloatType{} ]
```
libtorch1.7
```
33780.8
[ CPUFloatType{} ]
```
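For reference, a quick sanity check of the expected value: the input has 1*2*152*152 = 46208 elements, each mapped to sigmoid(1) β 0.7311, so the sum should be about 33781 - i.e. the libtorch 1.7 result is the correct one. The same computation in Python:
```python
import torch

x = torch.ones(1, 2, 152, 152)
print(x.numel())          # 46208
print(x.sigmoid().sum())  # ~33780.8, matching the libtorch 1.7 output
```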
### Versions
libtorch1.8 for NVIDIA Jetson AGX
cc @jbschlosser @ptrblck @puririshi98
| 0 |
5,734 | 77,233 |
`tensordot` does not check the dtype of empty tensors
|
triaged, module: linear algebra
|
### π Describe the bug
`tensordot` does not check the dtype of empty tensors
```python
import torch
a = torch.rand([3, 2, 1], dtype=torch.float64)
b = torch.rand([1, 2, 0], dtype=torch.complex128)
torch.tensordot(a, b)
# tensor([], size=(3, 0), dtype=torch.float64)
```
By contrast,
```python
import torch
a = torch.rand([3, 2, 1], dtype=torch.float64)
b = torch.rand([1, 2, 1], dtype=torch.complex128)
torch.tensordot(a, b)
# RuntimeError: expected scalar type Double but found ComplexDouble
```
Interestingly, when the tensors are 2d, it will raise the error even when `b` is empty
```python
import torch
a = torch.rand([2, 1], dtype=torch.float64)
b = torch.rand([1, 0], dtype=torch.complex128)
torch.tensordot(a, b)
# RuntimeError: expected scalar type Double but found ComplexDouble
```
### Versions
pytorch: 1.11.0
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
5,735 | 77,232 |
Write decomposition conditionals in a way that leads to simpler shape expressions
|
triaged, module: primTorch
|
### π The feature, motivation and pitch
Take code like
```
assert x.numel() != 0
```
vs.
```
assert all([i > 0 for i in x.shape])
```
Both of these are mathematically equivalent. But the second one will be more friendly to a dynamic-shapes tracing model that guards on 0/1. It doesn't lead to fewer recompiles, but it could lead to simpler guard expressions.
cc: @jansel @ezyang
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @mruberry @ngimel
| 5 |
5,736 | 77,231 |
`torch.scatter_add` will succeed when the `index` is a complex tensor
|
triaged, module: scatter & gather ops
|
### π Describe the bug
`torch.scatter_add` will succeed when the `index` is a complex tensor without any elements.
```python
import torch
input_tensor = torch.rand([10, 5], dtype=torch.float64)
index_tensor = torch.rand([13, 0], dtype=torch.complex128)
src_tensor = torch.rand([10, 2], dtype=torch.float64)
def fn(input, index, src):
dim = -1
fn_res = torch.scatter_add(input, dim, index, src, )
return fn_res
fn(input_tensor, index_tensor, src_tensor)
# tensor([[0.9610, 0.2629, 0.8555, 0.7965, 0.3472],
# [0.7140, 0.6187, 0.4872, 0.3589, 0.7170],
# [0.3184, 0.3303, 0.8061, 0.6865, 0.5176],
# [0.6451, 0.1152, 0.4974, 0.0535, 0.0350],
# [0.1497, 0.7439, 0.7563, 0.8654, 0.6401],
# [0.1090, 0.9057, 0.2156, 0.3272, 0.6849],
# [0.8402, 0.4956, 0.4937, 0.9882, 0.1275],
# [0.0889, 0.8429, 0.3421, 0.1373, 0.1697],
# [0.1318, 0.0984, 0.1662, 0.4122, 0.1132],
# [0.9094, 0.2276, 0.8924, 0.3781, 0.7588]], dtype=torch.float64)
```
However, when trying to compute the gradient
```python
import torch
from torch.autograd import gradcheck
input_tensor = torch.rand([10, 5], dtype=torch.float64, requires_grad=True)
index_tensor = torch.rand([13, 0], dtype=torch.complex128, requires_grad=True)
src_tensor = torch.rand([10, 2], dtype=torch.float64, requires_grad=True)
def fn(input, index, src):
dim = -1
fn_res = torch.scatter_add(input, dim, index, src, )
return fn_res
gradcheck(fn, (input_tensor, index_tensor, src_tensor))
# RuntimeError: Function ScatterAddBackward0 returned an invalid gradient at index 1 - got [13, 0] but expected shape compatible with [10, 2]
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 10 |
5,737 | 77,230 |
fast `gradcheck` fails when outputs that do not require grad precede outputs that do
|
module: autograd, triaged, module: linear algebra, actionable
|
### π Describe the bug
`gradcheck` fails for `torch.linalg.slogdet` when `fast_mode=True`
```python
import torch
from torch.autograd import gradcheck
A = torch.rand([1, 1], dtype=torch.float64, requires_grad=True)
gradcheck(torch.linalg.slogdet, (A), fast_mode=True)
```
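For context on the title, a small sketch of the output structure involved: `slogdet` returns `(sign, logabsdet)`, and for a real input the leading `sign` output is presumably the non-differentiable one that precedes the differentiable `logabsdet`:
```python
import torch

A = torch.rand([1, 1], dtype=torch.float64, requires_grad=True)
sign, logabsdet = torch.linalg.slogdet(A)
print(sign.requires_grad, logabsdet.requires_grad)  # which outputs participate in autograd
```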
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @jianyuh @mruberry @walterddr @IvanYashchuk @xwang233
| 7 |
5,738 | 77,223 |
torch.ops.aten.ceil(1.5) returns Tensor rather than scalar
|
triaged, module: type promotion
|
### π Describe the bug
```
torch.ops.aten.ceil(1.5)
```
gives `tensor(2., dtype=torch.float64)`
However, the result is expected to be a scalar since the input is a scalar.
Moreover, when using this operator in torchscript
```
import torch
@torch.jit.script
def forward(self, lhs, rhs):
return torch.ops.aten.ceil(float(lhs-rhs))
print(forward.graph)
```
The torchscript IR shows
```
graph(%self : Tensor,
%lhs.1 : Tensor,
%rhs.1 : Tensor):
%5 : int = prim::Constant[value=1]()
%6 : Tensor = aten::sub(%lhs.1, %rhs.1, %5) # /usr/local/google/home/cathyzhyi/items/test/test.py:5:35
%8 : float = aten::Float(%6) # /usr/local/google/home/cathyzhyi/items/test/test.py:5:29
%9 : int = aten::ceil(%8) # /usr/local/google/home/cathyzhyi/items/test/test.py:5:9
return (%9)
```
It seems the output is supposed to be an int scalar, not a Tensor of dtype `torch.float64`.
### Versions
Collecting environment information...
PyTorch version: 1.12.0.dev20220510+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 11.2.0-16+build1) 11.2.0
Clang version: 13.0.1-3+build2
CMake version: version 3.22.1
Libc version: glibc-2.33
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.15-1rodete2-amd64-x86_64-with-glibc2.33
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0.dev20220510+cpu
[pip3] torchvision==0.13.0.dev20220510+cpu
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.12.0.dev20220510+cpu pypi_0 pypi
[conda] torchvision 0.13.0.dev20220510+cpu pypi_0 pypi
cc @nairbv @mruberry
| 1 |
5,739 | 77,216 |
[primTorch] Reduction references don't return views consistent with their original operators
|
triaged, module: primTorch
|
This occurs because the reduction references model the keepdim arg as a broadcast following the reduction. This makes a lot of sense but causes view inconsistency. For example:
```
a = torch.randn(2, 2)
result = torch.amin(a, keepdim=True, dim=0)
print(result._is_view())
: False
result = refs.amin(a, keepdim=True, dim=0)
print(result._is_view())
: True
```
We should think about a way to resolve this -- making the operations consistent is one option, but maybe primTorch can have a different view behavior in this case.
cc @ezyang @mruberry @ngimel
| 4 |
5,740 | 77,200 |
[RFC] Allow device override during Tensor unpickling without torch.load
|
module: pickle, module: serialization, triaged, enhancement
|
### π The feature, motivation and pitch
Enable the behavior of `map_location` argument of `torch.load` when unpickling a tensor directly.
This is a recurring problem with torch.distributed and its reliance on pickle. Currently, object collectives(1) that send around tensors produce undefined behavior, as a tensor can potentially surface on the wrong device or on an invalid device.
To address this we should offer a context manager that overrides the device tensors are placed at:
```python
data: BytesIO = ...
pickle.dump(torch.rand(1, device="cuda:0"), data)
data.seek(0)
with pickle_device_override("cuda:1"):
    t = pickle.load(data)
assert str(t.device) == "cuda:1"
```
(1) https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast_object_list
### Alternatives
Tensors could use python 3.8 support for OOB buffers: https://docs.python.org/3/library/pickle.html#out-of-band-buffers
This is a much more desirable approach as users that need special care for tensor location during pickling and unpickling could do it with custom Picklers without PT having to introduce a new special purpose hook.
One nice property of this alternative is that it would allow object collectives / rpc to send tensors around without copying them to the CPU first.
### Additional context
This is a known issue for PTD: https://github.com/pytorch/pytorch/issues/76865
cc @mruberry
| 1 |
5,741 | 93,753 |
Don't populate f_locals to check guards
|
triaged, enhancement, oncall: pt2, module: dynamo
|
Currently, TorchDynamo calls [PyFrame_FastToLocalsWithError](https://github.com/pytorch/torchdynamo/blob/7aeb1d4516a1b792cfb04807f6ea9faa14461147/torchdynamo/_eval_frame.c#L323) ([docs](https://bugs.python.org/file32537/c_api_frame.patch)).
Internally in cpython, this converts between the "frame fast variables" (a flat C array of PyObject*) and `frame->f_locals` (a python dictionary).
`frame->f_locals` starts as NULL, and is only populated by this function.
This works, but is higher overhead than it needs to be.
We should rewrite the [generated guards](https://github.com/pytorch/torchdynamo/blob/7aeb1d4516a1b792cfb04807f6ea9faa14461147/torchdynamo/guards.py#L350) to use the "fast variables" data structure directly.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 3 |
5,742 | 77,184 |
Undefined symbol error when compiling and loading C++ extension
|
module: cpp-extensions, module: cpp, triaged
|
### 🐛 Describe the bug
We are trying to use a custom C++ extension in our project, and it successfully compiles and runs on one Linux environment but fails with an undefined symbol error in another environment. We are JIT-compiling this extension using the `cpp_extension.load` API. For this particular extension there are no CUDA source files, only a single C++ source file. Here is the relevant part of the traceback in the failing environment:
```
Using /home/jovyan/.cache/torch_extensions as PyTorch extensions root...
Emitting ninja build file /home/jovyan/.cache/torch_extensions/segmented_lookup_cpp/build.ninja...
Building extension module segmented_lookup_cpp...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] c++ segmented_lookup.o -shared -ltorch_cpu -L/home/jovyan/ColBERT-4-fast_search/env-gpu/lib/python3.7/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o segmented_lookup_cpp.so
Loading extension module segmented_lookup_cpp...
Traceback (most recent call last):
...
File "/home/jovyan/ColBERT-4-fast_search/env-gpu/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1092, in load
keep_intermediates=keep_intermediates)
File "/home/jovyan/ColBERT-4-fast_search/env-gpu/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1318, in _jit_compile
return _import_module_from_library(name, build_directory, is_python_module)
File "/home/jovyan/ColBERT-4-fast_search/env-gpu/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1701, in _import_module_from_library
module = importlib.util.module_from_spec(spec)
ImportError: /home/jovyan/.cache/torch_extensions/segmented_lookup_cpp/segmented_lookup_cpp.so: undefined symbol: _ZNK2at6Tensor5dtypeEv
```
Things we have tried include explicitly specifying `extra_ldflags=["-ltorch_cpu"]` (as suggested by https://github.com/pytorch/pytorch/issues/60341) and clearing the `torch_extension` cache (as suggested by https://github.com/pytorch/pytorch/issues/68905), but neither of these has worked.
We suspect this issue is due to some compiler version mismatch (note that we use GCC 5.4 in the successful environment and GCC 9.3 in the failing environment) but we're not sure if there is a way to verify this without downgrading the GCC version. Any suggestions for debugging this further?
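For what it's worth, the missing symbol demangles (via `c++filt _ZNK2at6Tensor5dtypeEv`) to `at::Tensor::dtype() const`, which usually points at a compiler/ABI mismatch rather than a missing library. A quick check we have been considering (a debugging sketch, not something we have confirmed):
```python
import torch

# Whether the installed torch binaries were built with the C++11 ABI; compare this
# against the default ABI of the local g++ that compiles the extension.
print(torch.compiled_with_cxx11_abi())
# Full build configuration, including the compiler and flags torch was built with.
print(torch.__config__.show())
```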
### Versions
This is the environment where the extension successfully runs:
```
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.6 LTS (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Clang version: Could not collect
CMake version: version 3.5.1
Python version: 3.7 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.0.194
GPU models and configuration:
GPU 0: TITAN V
GPU 1: TITAN V
GPU 2: TITAN V
GPU 3: TITAN V
Nvidia driver version: 460.39
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.3
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.3
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.3
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.3
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.3
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.3
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.9.0
[pip3] torchaudio==0.9.0a0+33b2469
[pip3] torchvision==0.10.0
[conda] blas 2.113 mkl conda-forge
[conda] blas-devel 3.9.0 13_linux64_mkl conda-forge
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 13_linux64_mkl conda-forge
[conda] libcblas 3.9.0 13_linux64_mkl conda-forge
[conda] liblapack 3.9.0 13_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 13_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h8d4b97c_803 conda-forge
[conda] mkl-devel 2022.0.1 ha770c72_804 conda-forge
[conda] mkl-include 2022.0.1 h8d4b97c_803 conda-forge
[conda] numpy 1.21.5 py37hf2998dd_0 conda-forge
[conda] pytorch 1.9.0 py3.7_cuda11.1_cudnn8.0.5_0 pytorch
[conda] torchaudio 0.9.0 py37 pytorch
[conda] torchvision 0.10.0 py37_cu111 pytorch
```
And this is the environment where loading the extension results in an error:
```
PyTorch version: 1.9.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.10
Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:21) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GRID V100D-32C
GPU 1: GRID V100D-32C
GPU 2: GRID V100D-32C
Nvidia driver version: 460.32.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.9.0+cu111
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] numpy 1.21.6 py37h976b520_0 conda-forge
[conda] torch 1.9.0+cu111 pypi_0 pypi
```
cc @malfet @zou3519 @jbschlosser
| 0 |
5,743 | 77,176 |
Improve the overall design of MPSGraphCache
|
triaged, enhancement, module: backend, module: mps
|
### 🐛 Describe the bug
Currently the MPSGraphCache uses string keys to retrieve graphs, which can be error-prone. Storing the key string in the cached graph itself can alleviate some of these issues and handles situations where hash collisions happen. This issue tracks improving the design of MPSGraphCache.
```
struct MPSGraphCache
{
typedef MPSCachedGraph * (^CreateCachedGraphBlock)();
struct CacheEntry {
CacheEntry(std::string key, MPSCachedGraph *cachedGraph) : cachedGraph_(cachedGraph), key_(key) {}
MPSCachedGraph* cachedGraph_ = nullptr;
std::string key_ = nullptr;
};
public:
static MPSGraphCache* getInstance() {
if(_instance_cache == nullptr) {
_instance_cache = new MPSGraphCache();
}
return _instance_cache;
}
....
```
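A language-agnostic sketch of the direction this could take (illustrative only, not the actual MPS C++ API): keep the full key inside the cached entry and verify it on lookup, so hash collisions degrade to cache misses instead of returning the wrong graph.
```python
class CachedGraph:
    def __init__(self, key, graph):
        self.key = key            # full key stored alongside the graph
        self.graph = graph

class GraphCache:
    def __init__(self):
        self._entries = {}        # hash(key) -> CachedGraph

    def lookup(self, key):
        entry = self._entries.get(hash(key))
        if entry is not None and entry.key != key:
            return None           # collision detected: treat as a miss
        return entry

    def insert(self, key, graph):
        self._entries[hash(key)] = CachedGraph(key, graph)
```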
### Versions
Python version: 3.8.11 (default, Aug 6 2021, 08:56:27) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] torch==1.12.0a0+gitcfd4b64
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-include 2022.0.0 hecd8cb5_105
[conda] mkl-service 2.4.0 py38h9ed2024_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] nomkl 3.0 0
[conda] numpy 1.21.2 py38h0fa1045_0
[conda] numpy-base 1.21.2 py38hbbe2e76_0
[conda] torch 1.12.0a0+git64704d5 pypi_0 pypi
cc @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
5,744 | 77,171 |
Allow users to express fused matmul/bias/relu
|
triaged, enhancement, needs research, module: python frontend
|
### 🚀 The feature, motivation and pitch
We should let users have the benefit of `_addmm_activation` (fused matmul/bias/relu; also matmul/bias/gelu if/when that is fixed in cublasLt) from #74490 in their models. Some ideas on how we might do this:
- add a LinearReLU module that calls addmm + relu in training and just _addmm_activation in inference
- add a backward & documentation for _addmm_activation and make it public
- both?
- something else?
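A rough sketch of the first option (my own, with assumptions: `bias=True`, 2D inputs, and that the private op `_addmm_activation` added in #74490 keeps its current signature; none of this is a public or settled API):
```python
import torch
from torch import nn

class LinearReLU(nn.Linear):
    def forward(self, x):
        if self.training:
            # decomposed path, so autograd keeps working as usual
            return torch.relu(torch.addmm(self.bias, x, self.weight.t()))
        # fused matmul + bias + relu epilogue for inference
        return torch.ops.aten._addmm_activation(self.bias, x, self.weight.t())
```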
CC @ngimel
### Alternatives
- Do nothing and rely on nvfuser; doesn't help eager mode
- Do nothing and let users implement their own solution if they want it
### Additional context
_No response_
| 3 |
5,745 | 77,170 |
Move the MPSGuardImpl to inherit from NoOpDeviceGuardImpl
|
triaged, module: backend, module: mps
|
### 🐛 Describe the bug
Currently the MPSGuardImpl is inheriting from:
```
struct TORCH_API MPSGuardImpl final : public c10::impl::DeviceGuardImplInterface {
static constexpr DeviceType static_type = DeviceType::MPS;
```
This could be moved to NoOpDeviceGuardImpl. This issue is tracking that cleanup.
### Versions
```
Python version: 3.8.11 (default, Aug 6 2021, 08:56:27) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] torch==1.12.0a0+gitcfd4b64
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-include 2022.0.0 hecd8cb5_105
[conda] mkl-service 2.4.0 py38h9ed2024_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] nomkl 3.0 0
[conda] numpy 1.21.2 py38h0fa1045_0
[conda] numpy-base 1.21.2 py38hbbe2e76_0
[conda] torch 1.12.0a0+git64704d5 pypi_0 pypi
```
| 0 |
5,746 | 77,167 |
nn.functional.pad accepts bool values but raises internal assert when converted to JIT
|
oncall: jit
|
### 🐛 Describe the bug
While working on Longformer in `transformers`, I came across this comment: https://github.com/huggingface/transformers/issues/13126#issuecomment-993645323. An incorrect implementation passes a bool as `value` into `nn.functional.pad`, which normally works. However, it raises an internal assert when used with `torch.jit`:
Example:
```python
import torch
from torch import nn
class MyModule(nn.Module):
def forward(self, inputs):
return nn.functional.pad(
inputs, (0, inputs.size(1) + 1), value=False # works if value=0
)
```
`torch.jit.trace_module(MyModule(), {"forward": torch.zeros(3, 4)})` results in
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/Users/patrick/Projects/open-source/transformers/notebooks/torch-jit-pad.ipynb Cell 3' in <cell line: 1>()
----> 1 torch.jit.trace_module(MyModule(), {"forward": torch.zeros(3, 4)})
File ~/.pyenv-x86/versions/transformers-x86/lib/python3.9/site-packages/torch/jit/_trace.py:958, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
954 argument_names = get_callable_argument_names(func)
956 example_inputs = make_tuple(example_inputs)
--> 958 module._c._create_method_from_trace(
959 method_name,
960 func,
961 example_inputs,
962 var_lookup_fn,
963 strict,
964 _force_outplace,
965 argument_names,
966 )
967 check_trace_method = module._c._get_method(method_name)
969 # Check the trace against new traces created from user-specified inputs
RuntimeError: 0INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::constant_pad_nd but it isn't a special case. Argument types: Tensor, int[], bool,
Candidates:
aten::constant_pad_nd(Tensor self, int[] pad, Scalar value=0) -> (Tensor)
```
`torch.jit.script(MyModule())` results in
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/Users/patrick/Projects/open-source/transformers/notebooks/torch-jit-pad.ipynb Cell 3' in <cell line: 1>()
----> 1 torch.jit.script(MyModule())
File ~/.pyenv/versions/torch/lib/python3.9/site-packages/torch/jit/_script.py:1265, in script(obj, optimize, _frames_up, _rcb, example_inputs)
1263 if isinstance(obj, torch.nn.Module):
1264 obj = call_prepare_scriptable_func(obj)
-> 1265 return torch.jit._recursive.create_script_module(
1266 obj, torch.jit._recursive.infer_methods_to_compile
1267 )
1269 if isinstance(obj, dict):
1270 return create_script_dict(obj)
File ~/.pyenv/versions/torch/lib/python3.9/site-packages/torch/jit/_recursive.py:454, in create_script_module(nn_module, stubs_fn, share_types, is_tracing)
452 if not is_tracing:
453 AttributeTypeIsSupportedChecker().check(nn_module)
--> 454 return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File ~/.pyenv/versions/torch/lib/python3.9/site-packages/torch/jit/_recursive.py:520, in create_script_module_impl(nn_module, concrete_type, stubs_fn)
518 # Compile methods if necessary
519 if concrete_type not in concrete_type_store.methods_compiled:
--> 520 create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
521 # Create hooks after methods to ensure no name collisions between hooks and methods.
522 # If done before, hooks can overshadow methods that aren't exported.
523 create_hooks_from_stubs(concrete_type, hook_stubs, pre_hook_stubs)
File ~/.pyenv/versions/torch/lib/python3.9/site-packages/torch/jit/_recursive.py:371, in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
368 property_defs = [p.def_ for p in property_stubs]
369 property_rcbs = [p.resolution_callback for p in property_stubs]
--> 371 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
_pad(Tensor input, int[] pad, str mode="constant", float value=0.) -> (Tensor):
Expected a value of type 'float' for argument 'value' but instead found type 'bool'.
:
File "/var/folders/qm/r50_76fn105g_18z8w81058c0000gn/T/ipykernel_98988/2039179918.py", line 3
def forward(self, inputs):
return nn.functional.pad(
~~~~~~~~~~~~~~~~~ <--- HERE
inputs, (0, inputs.size(1) + 1), value=False # works if value=0
)
```
Is this considered a user error, or should it be addressed? Maybe we could at least improve the error message for `torch.jit.trace_module`.
Best,
Patrick
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.3)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.11 (main, May 4 2022, 09:48:34) [Clang 13.1.6 (clang-1316.0.21.2.3)] (64-bit runtime)
Python platform: macOS-12.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
| 1 |
5,747 | 77,166 |
torch.cholesky has been deprecated in favour of torch.linalg.cholesky. However, torch.cholesky_inverse remains as is. It should also be moved to torch.linalg
|
triaged, module: linear algebra
|
### 🐛 Describe the bug
Not sure where this should be filed. But I noticed that, in the most recent version of pytorch `1.11.0+cu102`, when using `torch.cholesky`, it says it's deprecated in favour of `torch.linalg.cholesky` and will be phased out in future versions. However, the complementary function `torch.cholesky_inverse` still runs with no deprecation warning. Indeed, there's currently no `torch.linalg.cholesky_inverse`, nor does it seem like there are any such plans. If you saw fit to move `cholesky` to `torch.linalg`, then it seems only right that `cholesky_inverse` should join it there.
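A minimal illustration of the asymmetry (my own example):
```python
import torch

A = torch.randn(3, 3)
A = A @ A.T + 3 * torch.eye(3)      # make A positive definite
L = torch.cholesky(A)               # warns: deprecated in favour of torch.linalg.cholesky
Ainv = torch.cholesky_inverse(L)    # no warning, and no torch.linalg.cholesky_inverse exists
```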
### Versions
Running PyTorch 1.11.0+cu102 on WSL.
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 3 |
5,748 | 77,165 |
Automate cleanup of header includes
|
module: internals, triaged
|
Many files currently include unnecessary headers, or include non-specific headers like `ATen.h` or `Functions.h` instead of the specific per-operator header e.g. `ATen/ops/sum.h`. It would be nice to have tools which can clean these up automatically. A fully automatic solution would also greatly simplify syncing header changes with meta-internal.
Currently there are configs for running `include-what-you-use` in `tools/iwyu` but this still requires a lot of manual tweaks,
- sometimes need to manually remove system-specific headers like `<bits/foo.h>` instead of `<foo>` for C++ stdlib
- it isn't aware of build configurations e.g. `ATen/ops` includes need to be guarded with `#ifdef AT_PER_OPERATOR_HEADERS`
- it can't be run on files in `ATen/native/cpu/` since these files aren't built by the build system directly, so it can't determine build flags
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @albanD
| 2 |
5,749 | 77,161 |
SymInt shouldn't be in dynamic_type.h
|
module: cpp, triaged
|
### 🐛 Describe the bug
We don't expect symbolic ints to ever show up in mobile, so they shouldn't be taking up space in the bitset. cc @jbschlosser @Krovatkin
### Versions
master
| 0 |
5,750 | 77,159 |
A somewhat cryptic error message (for newcomers) - "Cannot re-initialize CUDA in forked subprocess" - report and suggestion for a possible solution
|
module: dataloader, triaged
|
### 🐛 Describe the bug
I've seen multiple individuals and groups use num_workers=0 in DataLoader and significantly under-utilize their GPUs because of the following error message, which appeared cryptic (to newcomers) when they used num_workers bigger than 0:
"Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method"
The error message was gone when they switched to num_workers=0
Using the "spawn" start method did not solve the problem.
I analyzed the problem and found that, inside their custom Dataset implementation, the final step transfers the tensor to the GPU.
(Which could make sense from the point of view of non-experts, as it removes the need to do this as a separate step before feeding the tensor to a model.)
I was wondering - perhaps there should be a warning or an exception when a non-CPU tensor is created inside a worker process spawned by DataLoader?
Or maybe an exception when a non-cpu tensor is being created inside a Dataset in general?
I can try to investigate and propose a PR if you think it's a good direction.
A short snippet to reproduce the problem:
```python
import torch
from torch.utils.data import DataLoader, Dataset
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm
class SomeDataset(Dataset):
def __len__(self):
return 100
def __getitem__(self, idx):
assert idx>=0 and idx<len(self)
t = torch.rand(3, 400,400, device=torch.device('cuda'))
return t
class SomeNet(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
def forward(self, x):
x = self.conv1(x)
return x
if __name__=='__main__':
ds = SomeDataset()
dl = DataLoader(ds, batch_size=8, num_workers=4)
model = SomeNet().cuda()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
for s in tqdm(iter(dl), total=len(dl)):
optimizer.zero_grad()
pred = model(s)
pred.mean().backward()
optimizer.step()
```
the output:
```bash
Traceback (most recent call last):
File "/some/path/reproduce_dataloader_cuda_tensors_problem.py", line 36, in <module>
for s in tqdm(iter(dl), total=len(dl)):
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
data = self._next_data()
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
return self._process_data(data)
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
data.reraise()
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/_utils.py", line 457, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/some/path/reproduce_dataloader_cuda_tensors_problem.py", line 16, in __getitem__
t = torch.rand(3, 400,400, device=torch.device('cuda'))
File "/some/path/to/download/miniconda3/envs/bio/lib/python3.7/site-packages/torch/cuda/__init__.py", line 207, in _lazy_init
"Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
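For reference, the usual fix is to keep the Dataset on the CPU and move batches to the GPU in the main process (a sketch of that fix, not part of the proposal above):
```python
import torch
from torch.utils.data import DataLoader, Dataset

class CpuDataset(Dataset):
    def __len__(self):
        return 100

    def __getitem__(self, idx):
        return torch.rand(3, 400, 400)   # CPU tensor; worker processes never touch CUDA

if __name__ == '__main__':
    dl = DataLoader(CpuDataset(), batch_size=8, num_workers=4, pin_memory=True)
    for s in dl:
        s = s.cuda(non_blocking=True)    # the transfer happens in the main process
```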
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-167-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
Nvidia driver version: 460.106.00
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.782
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.6.0
[pip3] torch==1.11.0
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.12.0
[conda] mypy 0.782 pypi_0 pypi
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.21.5 pypi_0 pypi
[conda] pytorch-lightning 1.6.0 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchmetrics 0.7.3 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
```
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 15 |
5,751 | 77,155 |
FSDP: ability to ignore parameters
|
high priority, triage review, oncall: distributed, module: fsdp
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/issues/75255 implemented the ability to ignore FSDP parameters at the module level, i.e. by passing an `ignored_modules` list to the FSDP API.
However, we've also seen the use case where users don't want to shard particular parameters of a module but still want the rest of the module's parameters sharded. To support use cases such as these, we'd like to be able to ignore parameters individually rather than coupling this to all the parameters of a module.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 5 |
5,752 | 77,154 |
Distributed Weighted Sampler.
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
We have a `DistributedSampler` and we have a `WeightedRandomSampler`, but we don't have a distributed weighted sampler to be used in, say, Distributed Data Parallel training with weighted sampling.
### Alternatives
There is no real alternative, unless we hack our way into the weighted sampler, which is essentially my pitch for the implementation as well.
### Additional context
Here is how I imagine this implementation could look.
```
import math
import os
import torch
from torch.utils.data import Sampler

class DistributedWeightedSampler(Sampler):
"""
A class for distributed data sampling with weights.
.. note::
For this to work correctly, global seed must be set to be the same across
all devices.
:param weights: A list of weights to sample with.
:type weights: list
:param num_samples: Number of samples in the dataset.
:type num_samples: int
:param replacement: Do we sample with or without replacement.
:type replacement: bool
:param num_replicas: Number of processes running training.
:type num_replicas: int
:param rank: Current device number.
:type rank: int
"""
def __init__(
self,
weights: list,
num_samples: int = None,
replacement: bool = True,
num_replicas: int = None,
):
if num_replicas is None:
num_replicas = torch.cuda.device_count()
self.num_replicas = num_replicas
self.num_samples_per_replica = int(
math.ceil(len(weights) * 1.0 / self.num_replicas)
)
self.total_num_samples = self.num_samples_per_replica * self.num_replicas
self.weights = weights
self.replacement = replacement
def __iter__(self):
"""
Produces mini sample list for current rank.
:returns: A generator of samples.
:rtype: Generator
"""
        rank = int(os.environ["LOCAL_RANK"])  # LOCAL_RANK is a string; convert before comparing
if rank >= self.num_replicas or rank < 0:
raise ValueError(
"Invalid rank {}, rank should be in "
"the interval [0, {}]".format(rank, self.num_replicas - 1)
)
weights = self.weights.copy()
# add extra samples to make it evenly divisible
weights += weights[: (self.total_num_samples) - len(weights)]
if not len(weights) == self.total_num_samples:
raise RuntimeError(
"There is a distributed sampler error. Num weights: {}, total size: {}".format(
                    len(weights), self.total_num_samples
)
)
# subsample for this rank
weights = weights[rank : self.total_num_samples : self.num_replicas]
weights_used = [0] * self.total_num_samples
weights_used[rank : self.total_num_samples : self.num_replicas] = weights
return iter(
torch.multinomial(
input=torch.as_tensor(weights_used, dtype=torch.double),
num_samples=self.num_samples_per_replica,
replacement=self.replacement,
).tolist()
)
def __len__(self):
return self.num_samples_per_replica
```
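A hypothetical usage sketch (it assumes one process per visible GPU and that `LOCAL_RANK` is set by the launcher, e.g. torchrun, as required by the `__iter__` above):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 8),
                        torch.cat([torch.zeros(800), torch.ones(200)]))
weights = [1.0] * 800 + [10.0] * 200          # oversample the rare class
sampler = DistributedWeightedSampler(weights)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```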
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 2 |
5,753 | 77,144 |
Add Tensor compare support for MPS backend
|
triaged, module: backend, module: testing, module: mps
|
### 🐛 Describe the bug
Currently the MPS backend doesn't support all the operations needed for assertEqual, so tensors on `mps` are mapped to the CPU and the comparison is performed there. This issue tracks enabling all the necessary operations for the MPS backend so that the comparison no longer falls back to the CPU. The issue was identified [here.](https://github.com/pytorch/pytorch/pull/76725/files#r866238060)
### Versions
Python version: 3.8.11 (default, Aug 6 2021, 08:56:27) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] torch==1.12.0a0+gitcfd4b64
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-include 2022.0.0 hecd8cb5_105
[conda] mkl-service 2.4.0 py38h9ed2024_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] nomkl 3.0 0
[conda] numpy 1.21.2 py38h0fa1045_0
[conda] numpy-base 1.21.2 py38hbbe2e76_0
[conda] torch 1.12.0a0+gitcfd4b64 pypi_0 pypi
| 4 |
5,754 | 77,141 |
[FSDP] `ignored_modules` follow-ups
|
oncall: distributed, module: fsdp
|
This issue is to track a few follow-ups regarding `ignored_modules`.
1. Users may want to ignore specific parameters or buffers within a module. How should we modify the API to accommodate this?
2. What should happen if a user passes a module into `ignored_modules` that contains a submodule that is an `FSDP` instance?
- The current implementation ignores it without warning, but alternatives are to issue a warning or to error.
3. How should `ignored_modules` interact with shared parameters and buffers? For example, what should happen if a parameter is in a module in `ignored_modules` but also in a module that is not in `ignored_modules`? What if that latter module is an `FSDP` instance and already flattened its parameters?
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 1 |
5,755 | 77,140 |
torch.randperm uses too much CPU but is not efficient
|
module: performance, triaged, module: random, module: multithreading
|
### 🐛 Describe the bug
I found that when running torch.randperm on the CPU, it uses multiple threads by default, but is actually slower than using only one thread.
A reproduction code is as follows:
```python
import torch
from tqdm import tqdm
N_iter = 10000
for _ in tqdm(range(N_iter)):
idxs = torch.randperm(65536)
```
Using the default OMP_NUM_THREADS, the CPU usage is ~3800% and many subthreads are spawned:
```
python test_cpu.py
100%|██████████| 10000/10000 [00:18<00:00, 539.00it/s]
```
With OMP_NUM_THREADS=1, the CPU usage is ~100% and it is faster:
```
OMP_NUM_THREADS=1 python test_cpu.py
100%|██████████| 10000/10000 [00:12<00:00, 770.86it/s]
```
Although I can avoid this problem by setting OMP_NUM_THREADS or torch.set_num_threads() globally, I don't want to force a global limit on the number of threads, because other code in the same process may genuinely need multiple threads.
Is there a better way to avoid this unnecessary use of multiple threads?
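One possible workaround (a sketch, not an official recommendation) is to save and restore the global thread count around the call instead of limiting it process-wide:
```python
import torch

def randperm_single_threaded(n):
    prev = torch.get_num_threads()
    torch.set_num_threads(1)
    try:
        return torch.randperm(n)
    finally:
        torch.set_num_threads(prev)   # restore for code that really needs multiple threads

idxs = randperm_single_threaded(65536)
```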
### Versions
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.20.3
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-151-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
GPU 2: Tesla V100-PCIE-16GB
GPU 3: Tesla V100-PCIE-16GB
GPU 4: Tesla V100-PCIE-16GB
GPU 5: Tesla V100-PCIE-16GB
GPU 6: Tesla V100-PCIE-16GB
GPU 7: Tesla V100-PCIE-16GB
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.10.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.10.0
[pip3] torchsearchsorted==1.1
[pip3] torchvision==0.11.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] cudatoolkit-dev 11.4.0 h5e8e339_5 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.21.5 py38he7a7128_1
[conda] numpy-base 1.21.5 py38hf524024_1
[conda] pytorch 1.10.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.10.0 py38_cu102 pytorch
[conda] torchsearchsorted 1.1 pypi_0 pypi
[conda] torchvision 0.11.0 py38_cu102 pytorch
cc @VitalyFedyunin @ngimel @pbelevich
| 1 |
5,756 | 77,118 |
__name__ on OpOverload should not contain period
|
triaged, module: __torch_dispatch__
|
### 🐛 Describe the bug
Conventionally, `__name__` refers to the name of a Python object without any namespacing. However, OpOverloads as retrieved from `torch.ops.aten.abs.default` have the `__name__` `abs.default`, which violates this convention.
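A minimal illustration of the behavior described above:
```python
import torch

op = torch.ops.aten.abs.default   # an OpOverload
print(op.__name__)                # 'abs.default' -- includes the overload name, period and all
```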
cc @Chillee @ezyang @zou3519 @albanD @samdow @anjali411
### Versions
master
| 2 |
5,757 | 77,113 |
broadcast_object_list with GPU tensors can lead to deadlock on PyTorch CI machines
|
high priority, triage review, oncall: distributed, module: fsdp
|
### 🐛 Describe the bug
See for example: https://github.com/pytorch/pytorch/runs/6357736633?check_suite_focus=true
Here, we are using `broadcast_object_list` on a dict with GPU tensors, and running into deadlocks which cause the process to hang and eventually terminate. This issue is not reproducible on V100 / A100 HW architecture and only appears to manifest itself on M60 PyTorch CI machines. https://github.com/pytorch/pytorch/issues/76865 runs into a similar deadlock.
### Versions
main
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 0 |
5,758 | 77,067 |
Unable to continue adding modules to `nn.Sequential` after using `del` method
|
module: nn, triaged, enhancement
|
### 🐛 Describe the bug
When the module deleted by the `del` method is not the last module of a `torch.nn.Sequential`, the remaining indices become discontinuous and modules appended afterwards are not added correctly (they overwrite the existing entry at the reused index).
```python
from torch import nn
net = nn.Sequential(
nn.Linear(20, 10),
nn.ReLU(),
nn.Linear(10, 5)
)
del net[1]
print(net)
```
Output:
```python
Sequential(
(0): Linear(in_features=20, out_features=10, bias=True)
(2): Linear(in_features=10, out_features=5, bias=True)
)
```
It can be seen that the indices are no longer consecutive. Continuing to add modules:
```python
net.append(nn.ReLU())
net.append(nn.Sigmoid())
print(net)
```
Output:
```python
Sequential(
(0): Linear(in_features=20, out_features=10, bias=True)
(2): Sigmoid()
)
```
This does not match the expected result.
---
If the last module is removed, other modules can be added correctly later:
```python
from torch import nn
net = nn.Sequential(
nn.Linear(20, 10),
nn.ReLU(),
nn.Linear(10, 5)
)
del net[2]
net.append(nn.ReLU())
net.append(nn.Sigmoid())
print(net)
```
Output:
```python
Sequential(
(0): Linear(in_features=20, out_features=10, bias=True)
(1): ReLU()
(2): ReLU()
(3): Sigmoid()
)
```
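A workaround sketch (not an official API): rebuild the `Sequential` after the deletion so the remaining modules are renumbered `0..n-1`, after which `append` behaves as expected.
```python
from torch import nn

net = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 5))
del net[1]
net = nn.Sequential(*net)   # iterating a Sequential yields its modules in order
net.append(nn.ReLU())
net.append(nn.Sigmoid())
print(net)                  # (0) Linear, (1) Linear, (2) ReLU, (3) Sigmoid
```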
### Versions
Pytorch Version: 1.11.0
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 3 |
5,759 | 77,053 |
Incorrect documentation in ``gumbel_softmax`` function.
|
module: docs, module: nn, triaged
|
### 📚 The doc issue
[Gumbel-Softmax documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html) states that the ``logits`` argument should be unnormalized. However, this is not how the formula is defined in the [original article](https://arxiv.org/abs/1611.01144), but apparently it's how it is coded in the accompanying notebook. The issue was referenced in this [PyTorch forum post](https://discuss.pytorch.org/t/gumbel-softmax-function-logits/22797).
The problem is that unnormalized log-probabilities may be too large relative to the values introduced by the random samples from the Gumbel distribution, which defeats the purpose of the layer. This in turn leads to poor training performance.
### Suggest a potential alternative/fix
Two possible fixes (a caller-side sketch of the first follows the list):
1. Change the documentation to state that the ``logits`` should be normalized.
2. Normalize the logits inside the function using ``log_softmax``.
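A caller-side sketch of the first fix (my own example): normalize the scores with ``log_softmax`` before passing them to ``gumbel_softmax`` so the Gumbel noise is on a comparable scale.
```python
import torch
import torch.nn.functional as F

scores = 50 * torch.randn(4, 10)            # large, unnormalized scores
logits = F.log_softmax(scores, dim=-1)      # normalized log-probabilities
sample = F.gumbel_softmax(logits, tau=1.0, hard=True, dim=-1)
```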
cc @svekars @holly1238 @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 1 |
5,760 | 77,052 |
Building from source results in broken __version__
|
oncall: binaries, module: build, triaged
|
### 🐛 Describe the bug
I'm doing
```bash
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git submodule update --init --recursive --jobs 0
# I'm not trying to build main, just latest release
git checkout tags/v1.11.0 -b v1.11.0-branch
# I've previously installed all the dependencies
python setup.py install --prefix ~/.local
```
and I'm seeing
```
Building wheel torch-1.11.0a0+gitbc2c6ed
-- Building version 1.11.0a0+gitbc2c6ed
```
so even though I'm building v1.11.0, the `__version__` is all screwed up.
I did this for v1.10.0 and I'm trying again with v1.11.0 because maybe I did something wrong?
This becomes a pretty big problem when I need to install packages that require torch like sentence_transformers. The `torch>=1.6.0` dependency of sentence_transformers sees `1.10.0a0+git71f889c` and doesn't know what the heck it is, so it starts downloading torch, like I didn't go to all the trouble of installing CUDNN and building the package from source lol
```
Collecting torch>=1.6.0
Downloading torch-1.11.0-cp38-cp38-manylinux1_x86_64.whl (750.6 MB)
   ╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.9/750.6 MB 5.4 MB/s eta 0:02:16
ERROR: Operation cancelled by user
```
I've also tried to checkout the tag/<release> without creating a new branch, just in a detached HEAD state. Still gives the <version>a0+git934ac58 tag when building.
Really prefer not to pass `--no-deps` when installing packages that require torch then `pip show` so I can install the dependencies by hand. What am I doing wrong?
### Versions
Collecting environment information...
PyTorch version: 1.10.0a0+git71f889c
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.4.152
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 470.82.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.761
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.10.0a0+git71f889c
[pip3] torch-geometric==2.0.3
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.12
[conda] Could not collect
cc @ezyang @seemethere @malfet
| 6 |
5,761 | 77,050 |
ENORMOUS OVERHEAD from mp.get_context('spawn')
|
module: performance, module: multiprocessing, triaged, module: POWER
|
### 🐛 Describe the bug
The bug is that using the pool from:
```
import torch.multiprocessing as mp # use spawn context
mp = mp.get_context('spawn')
```
**incurs ENORMOUS latency (10x what is normal)**, which for example,
isn't present when simply using:
```
import torch.multiprocessing as mp # no spawn context
```
Here is an example script that demonstrates the issue:
```
import time
import random
import math
import os
#import multiprocessing as mp
import torch.multiprocessing as mp # mp wrapper for pytorch compatibility!
mp = mp.get_context('spawn') # use get_context() rather than mp.set_start_method()!
# Simulate exponentially distributed worker latency
def laggy_worker(i):
laziness = random.random()*5
laziness = math.exp(laziness)
print(f"starting laggy worker, duration: {laziness}, pid: {os.getpid()}", flush=True)
time.sleep(laziness)
return laziness
# A helper class to circumvent exponentially distributed latency
class FastPool:
def __init__(self, *args, drop_rate=0.2, **kwd_args):
self.drop_rate = drop_rate
self.pool_args = args
self.pool_kwd_args = kwd_args
def map(self, fun, iter_):
iter_ = list(iter_)
result = []
pool_start = time.time()
with mp.Pool(*self.pool_args, **self.pool_kwd_args) as p:
print(f'took {time.time()-pool_start} seconds to start the pool!')
for i, res in enumerate(p.imap_unordered(fun, iter_)):
result.append(res)
print(f"finished laggy worker, reported duration: {res}, actual_duration: {time.time()-pool_start}, id: {i}", flush=True)
if i > (1-self.drop_rate)*len(iter_):
break_start = time.time()
break
print(f'took {time.time()-break_start} seconds to close pool')
return result
if __name__=='__main__':
pool = FastPool(drop_rate=0.2)
for i in range(5):
start = time.time()
result = pool.map(laggy_worker, range(42))
print('num results: ', len(result))
print('duration: ', time.time()-start)
```
====================================================================================
As long as you run this script with the line: ``` mp = mp.get_context('spawn') ``` uncommented you will see output like this:
====================================================================================
[screenshot: output with the spawn context -- reported durations differ greatly from actual durations]
====================================================================================
If, however, you comment out that line (using the default pytorch mp module), OR if you use the builtin python module, you will notice that _reported duration matches actual duration almost exactly_:
====================================================================================
[screenshot: output without the spawn context -- reported durations match actual durations almost exactly]
====================================================================================
P.S. original script was a toy example for a different purpose, but it is short and sweet + demonstrates the issue well.
### Versions
pytorch version: 1.10.2
Python version: 3.8.10
Architecture: IBM Power9
cc @VitalyFedyunin @ngimel
| 5 |
5,762 | 77,049 |
Peak GPU-memory usage extremely huge when sorting with torch.sort
|
module: cuda, module: memory usage, triaged, module: sorting and selection
|
### 🐛 Describe the bug
As mentioned in the title, when sorting huge tensors on a GPU device, the peak GPU memory usage is extremely high.
For example:
```python
tensor = torch.rand(400_000_000, device='cuda:0')
print(torch.cuda.memory_summary())
torch.sort(tensor)
print(torch.cuda.memory_summary())
```
The first summary (i.e. after allocating this huge tensor):
```
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 1526 MB | 1526 MB | 1526 MB | 0 B |
| from large pool | 1526 MB | 1526 MB | 1526 MB | 0 B |
| from small pool | 0 MB | 0 MB | 0 MB | 0 B |
|---------------------------------------------------------------------------|
| Active memory | 1526 MB | 1526 MB | 1526 MB | 0 B |
| from large pool | 1526 MB | 1526 MB | 1526 MB | 0 B |
| from small pool | 0 MB | 0 MB | 0 MB | 0 B |
|---------------------------------------------------------------------------|
| GPU reserved memory | 1526 MB | 1526 MB | 1526 MB | 0 B |
| from large pool | 1526 MB | 1526 MB | 1526 MB | 0 B |
| from small pool | 0 MB | 0 MB | 0 MB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 0 B | 0 B | 0 B | 0 B |
| from large pool | 0 B | 0 B | 0 B | 0 B |
| from small pool | 0 B | 0 B | 0 B | 0 B |
|---------------------------------------------------------------------------|
| Allocations | 1 | 1 | 1 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Active allocs | 1 | 1 | 1 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 1 | 1 | 1 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 0 | 0 | 0 | 0 |
| from large pool | 0 | 0 | 0 | 0 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Oversize allocations | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Oversize GPU segments | 0 | 0 | 0 | 0 |
|===========================================================================|
```
and then the second summary after sorting:
```
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 1526 MB | 18380 MB | 18448 MB | 16922 MB |
| from large pool | 1526 MB | 18380 MB | 18448 MB | 16922 MB |
| from small pool | 0 MB | 0 MB | 0 MB | 0 MB |
|---------------------------------------------------------------------------|
| Active memory | 1526 MB | 18380 MB | 18448 MB | 16922 MB |
| from large pool | 1526 MB | 18380 MB | 18448 MB | 16922 MB |
| from small pool | 0 MB | 0 MB | 0 MB | 0 MB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 18380 MB | 18380 MB | 18380 MB | 0 B |
| from large pool | 18380 MB | 18380 MB | 18380 MB | 0 B |
| from small pool | 0 MB | 0 MB | 0 MB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 0 B | 1457 MB | 1457 MB | 1457 MB |
| from large pool | 0 B | 1457 MB | 1457 MB | 1457 MB |
| from small pool | 0 B | 0 MB | 0 MB | 0 MB |
|---------------------------------------------------------------------------|
| Allocations | 1 | 7 | 8 | 7 |
| from large pool | 1 | 7 | 8 | 7 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Active allocs | 1 | 7 | 8 | 7 |
| from large pool | 1 | 7 | 8 | 7 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 7 | 7 | 7 | 0 |
| from large pool | 7 | 7 | 7 | 0 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 0 | 1 | 1 | 1 |
| from large pool | 0 | 1 | 1 | 1 |
| from small pool | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Oversize allocations | 0 | 0 | 0 | 0 |
|---------------------------------------------------------------------------|
| Oversize GPU segments | 0 | 0 | 0 | 0 |
|===========================================================================|
```
Notice the ~18GB peak memory usage in the second summary for sorting a tensor that occupies ~1.5GB.
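A more compact way to observe the same peak (a measurement sketch, same workload as above):
```python
import torch

tensor = torch.rand(400_000_000, device='cuda:0')
torch.cuda.reset_peak_memory_stats()
torch.sort(tensor)
print(torch.cuda.max_memory_allocated() / 2**20, "MiB peak during sort")
```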
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.13.4
Libc version: glibc-2.28
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.19.0-19-amd64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 3090
Nvidia driver version: 460.73.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.22.3 pypi_0 pypi
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.12.0 py38_cu113 pytorch
cc @ngimel
| 7 |
5,763 | 77,047 |
Stop calling sizes/numel/dim/is_contiguous on undefined tensors
|
module: internals, triaged
|
### 🐛 Describe the bug
In https://github.com/pytorch/pytorch/pull/77036 I accidentally made these calls error out, and it caused a large number of test failures
https://gist.github.com/ezyang/d7eb3b8440268bd8bac8df2f66eaa39b
but they shouldn't be called in the first place
### Versions
master
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 0 |
5,764 | 77,046 |
torch.stack test_conj_view and test_neg_view are failing after 77043
|
triaged, module: complex, module: viewing and reshaping, module: primTorch
|
See https://github.com/pytorch/pytorch/pull/77043, which extended the sample inputs for stack and added xfails for these two tests.
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @ngimel
| 0 |
5,765 | 77,027 |
Kill use of TensorImpl::ShareExternalPointer in torch/csrc/jit/tensorexpr/external_functions.cpp
|
oncall: jit
|
### 🐛 Describe the bug
This is a legacy caffe2 method and should not be used.
### Versions
master
| 0 |
5,766 | 77,016 |
Where is fx2trt fx to tensorrt tool?
|
triaged, module: fx
|
### 📚 The doc issue
I found a PR branch:
https://github.com/jerryzh168/pytorch/tree/fb09fd4ab4ba618db148f9dfc035be589efb9355/torch/fx/experimental/fx2trt
which contains the fx2trt tool. Where did it end up in the mainstream PyTorch code?
### Suggest a potential alternative/fix
_No response_
| 0 |
5,767 | 76,972 |
Display EC2 information
|
module: ci, triaged
|
Please see parent issue for further instructions #71563
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,768 | 76,966 |
[bug] `NATIVE` and `OMP` `parallel_for` implementations are inconsistent.
|
triaged, module: openmp, module: multithreading
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/72710 and parallel CIs uncovered that.
Simply build with `ATEN_THREADING=NATIVE python setup.py develop` and run `pytest -sv test/test_sparse.py -k index_select_par`.
Error messages could be inspected here: https://github.com/pytorch/pytorch/runs/6324324431?check_suite_focus=true.
These tests pass when built with `ATEN_THREADING=OMP`.
I was not able to test `TBB` as it is currently impossible to compile due to https://github.com/pytorch/pytorch/issues/76953.
The issue is in how backends decide on chunk size and the number of threads. I will submit a fix for that soon.
### Versions
Current master.
| 0 |
5,769 | 76,962 |
DISABLED test_comprehensive_linalg_ldl_factor_ex_cuda (__main__.TestDecompCUDA)
|
triaged, module: linear algebra, skipped
|
Platforms: linux
Same as #76961, but disabling test_comprehensive_linalg_ldl_factor_ex_cuda_complex128 and test_comprehensive_linalg_ldl_factor_ex_cuda_complex64
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 1 |
5,770 | 76,960 |
When using Rsqrt, the output of the 1/x process is very likely to have nan/inf
|
needs reproduction, triaged
|
### 🐛 Describe the bug
When using Rsqrt, the output of the 1/x computation is very likely to contain nan/inf. In some cases the operator's output shows no nan/inf, while for other inputs it does.
**For example:**
structure
```
[{'name': 'Slice', 'params': {'tensor_space': 4, 'begin': [0, 9, 1, 1], 'size': [50, 7, 15, 1]}}, {'name': 'ReduceMax', 'params': {'keep_dims': True, 'dim': 1, 'tensor_space': 4}}, {'name': 'Rsqrt', 'params': {}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 15, 'out_features': 200}}, {'name': 'Softmax'}]
[{'name': 'ReduceMean', 'params': {'keep_dims': True, 'dim': 3, 'tensor_space': 4}}, {'name': 'ReduceMean', 'params': {'keep_dims': False, 'dim': 1, 'tensor_space': 4}}, {'name': 'Flatten', 'params': {}}, {'name': 'Rsqrt', 'params': {}}, {'name': 'Dense', 'params': {'in_features': 16, 'out_features': 200}}, {'name': 'Softmax'}]
[{'name': 'ReduceSum', 'params': {'keep_dims': True, 'dim': 1, 'tensor_space': 4}}, {'name': 'Rsqrt', 'params': {}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 48, 'out_features': 200}}, {'name': 'Softmax'}]
```
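For context, a minimal illustration (my own) of how Rsqrt produces inf/nan once intermediate values hit zero or go negative:
```python
import torch

x = torch.tensor([4.0, 0.0, -1.0])
print(torch.rsqrt(x))   # tensor([0.5000, inf, nan])
```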
### Versions
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-74-generic x86_64)
Pytorch1.8
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
NVIDIA-SMI 460.80 Driver Version: 460.80 CUDA Version: 11.2
| 0 |
5,771 | 76,959 |
When using Lambda, the output of the 1/x process is very likely to have nan/inf
|
needs reproduction, triaged
|
### 🐛 Describe the bug
When using Lambda, the output of the 1/x computation is very likely to contain nan/inf. In some cases the operator's output shows no nan/inf, while for other inputs it does. This generally happens when the lambda expression carried by the Lambda operator is itself risky (the operator does not impose any constraints on its expression).
**For example:**
structure
`[{'name': 'ReduceMax', 'params': {'keep_dims': True, 'dim': 1, 'tensor_space': 4}}, {'name': 'Upsample2d', 'params': {'scale_factor': 2, 'mode': 'nearest'}}, {'name': 'Lambda', 'params': {'output_shape': [2, 32, 3], 'function': 'lambda x: 1 / x**0.5 + 1.0'}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 192, 'out_features': 200}}, {'name': 'Softmax'}]`
### Versions
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-74-generic x86_64)
Pytorch1.8
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
NVIDIA-SMI 460.80 Driver Version: 460.80 CUDA Version: 11.2
| 0 |
5,772 | 76,956 |
GlobalAvgPool2d causes the inconsistency of output between frameworks
|
needs reproduction, triaged
|
### 🐛 Describe the bug
Models containing GlobalAvgPool2d produce inconsistent output across frameworks, and merely changing the input may change the nature of this inconsistency.
(Inconsistency: when testing the three frameworks, "inconsistency" here refers to comparing the output of the GlobalAvgPool2d layer with the other two frameworks and computing the absolute values of the differences.)
**structure:**
`[{'name': 'MaxPool2d', 'params': {'pool_size': [3, 2], 'stride': [4, 3], 'padding': 'same'}}, {'name': 'GlobalMaxPool2d', 'params': {}}, {'name': 'Dense', 'params': {'in_features': 1, 'out_features': 10}}, {'name': 'Softmax'}]`
### Versions
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-74-generic x86_64)
Pytorch1.8
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
NVIDIA-SMI 460.80 Driver Version: 460.80 CUDA Version: 11.2
| 0 |
5,773 | 76,955 |
GaussianNoise causes the inconsistency of output between frameworks
|
needs reproduction, triaged
|
### π Describe the bug
Models containing GaussianNoise produce inconsistent outputs across frameworks, and merely changing the input can change whether the inconsistency appears.
(Inconsistency: when testing the three frameworks, the output of the GaussianNoise layer is compared with the corresponding output of the other two frameworks and the absolute difference is computed.)
**structure:**
```
[{'name': 'GaussianNoise', 'params': {'stddev': 0.0050545493115078935}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 784, 'out_features': 10}}, {'name': 'Softmax'}]
[{'name': 'StridedSlice', 'params': {'tensor_space': 4, 'begin': [0, 1, 1, 0], 'end': [25, 22, 10, 1], 'stride': [1, 1, 1, 1]}}, {'name': 'GaussianNoise', 'params': {'stddev': 0.0031597719267489033}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 189, 'out_features': 10}}, {'name': 'Softmax'}]
```
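As a hypothetical illustration (not from the original report) of why a noise layer is hard to compare element-wise across frameworks: such a layer is commonly implemented as `x + stddev * randn_like(x)`, and independent RNGs produce different draws even for identical inputs:
```
import torch

x = torch.zeros(2, 3)
stddev = 0.0050545493115078935
out_a = x + stddev * torch.randn_like(x)  # "framework A"
out_b = x + stddev * torch.randn_like(x)  # "framework B"
print((out_a - out_b).abs().max())  # almost surely non-zero
```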
### Versions
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-74-generic x86_64)
Pytorch1.8
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
NVIDIA-SMI 460.80 Driver Version: 460.80 CUDA Version: 11.2
| 1 |
5,774 | 76,954 |
ReduceSum causes the inconsistency of output between frameworks
|
needs reproduction, triaged
|
### π Describe the bug
Models containing ReduceSum produce inconsistent outputs across frameworks, and merely changing the input can change whether the inconsistency appears.
(Inconsistency: when testing the three frameworks, the output of the ReduceSum layer is compared with the corresponding output of the other two frameworks and the absolute difference is computed.)
**structure:**
```
[{'name': 'Transpose', 'params': {'output_shape': [0, 2, 3, 1], 'tensor_space': 4}}, {'name': 'ReduceSum', 'params': {'keep_dims': False, 'dim': 1, 'tensor_space': 4}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 28, 'out_features': 10}}, {'name': 'Softmax'}]
[{'name': 'ReduceSum', 'params': {'keep_dims': True, 'dim': 1, 'tensor_space': 4}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 28, 'out_features': 10}}, {'name': 'Softmax'}]
[{'name': 'LeakyReLU', 'params': {'alpha': 0.08195527217568771}}, {'name': 'Transpose', 'params': {'output_shape': [0, 3, 1, 2], 'tensor_space': 4}}, {'name': 'ReduceSum', 'params': {'keep_dims': True, 'dim': 2, 'tensor_space': 4}}, {'name': 'Flatten'}, {'name': 'Dense', 'params': {'in_features': 28, 'out_features': 10}}, {'name': 'Softmax'}]
```
### Versions
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-74-generic x86_64)
Pytorch1.8
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
NVIDIA-SMI 460.80 Driver Version: 460.80 CUDA Version: 11.2
| 1 |
5,775 | 76,944 |
primTorch references don't handle scalar x scalar inputs correctly
|
triaged, module: type promotion, module: primTorch
|
Because of https://github.com/pytorch/pytorch/issues/76801. Testing for these is currently disabled.
cc @nairbv @mruberry @ezyang @ngimel
| 0 |
5,776 | 76,936 |
Private API for accessing all "internal" attributes on Tensors
|
module: internals, triaged
|
### π Describe the bug
c10::TensorImpl has a bunch of internal properties which are not directly accessible from Python. This can make debugging certain types of problems more difficult; e.g., you suspect the dispatch key set isn't set correctly but you can't just print it out directly. There should be a private, unstable API for querying this information (I don't think we should allow mutating the info) that can be used to assist in debugging.
### Versions
master
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 0 |
5,777 | 76,933 |
NVFuser opinfos - check for CudaFusionGroup in the graph
|
triaged, module: nvfuser
|
### π The feature, motivation and pitch
If an NVFuser opinfo fails but the JIT graph doesn't contain a CudaFusionGroup node, then we should append this information to the error, since it indicates that the problem is likely with JIT, not with NVFuser. This would make debugging these types of errors faster.
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,778 | 76,927 |
Can't pickle model torch._C._distributed_c10d.ProcessGroupNCCL' object
|
oncall: distributed, module: c10d, module: ddp
|
### π Describe the bug
I can't pickle a model. I have a line where I'm effectively doing something like
```
from copy import deepcopy

import torch


class DataParallelModel(torch.nn.Module):
    def __init__(self, model, group):
        super().__init__()
        # deepcopy fails here: something reachable from `model` holds a ProcessGroupNCCL
        self.shadow_model = deepcopy(model)
        for param in self.shadow_model.parameters():
            param.detach_()
        # ... other stuff
```
Keep getting the following error trace:
```
File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
y[deepcopy(key, memo)] = deepcopy(value, memo) File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)state = deepcopy(state, memo)
File "/opt/conda/lib/python3.8/copy.py", line 296, in _reconstruct
File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
value = deepcopy(value, memo)
File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/lib/python3.8/copy.py", line 161, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
rv = reductor(4)
TypeError: cannot pickle 'torch._C._distributed_c10d.ProcessGroupNCCL' object
state = deepcopy(state, memo)
File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/lib/python3.8/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/lib/python3.8/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle 'torch._C._distributed_c10d.ProcessGroupNCCL' object
```
### Versions
PyTorch version: 1.12.0a0+bd13bc6
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+bd13bc6
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h1d589f8_2 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+bd13bc6 pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 1 |
5,779 | 76,924 |
[RFC] Upstream current implementation of ssd_offload from fairscale FSDP to Torch Distributed FSDP
|
oncall: distributed, module: fsdp
|
### π The feature, motivation and pitch
# Goal
Upstream the currently experimental ssd_offload feature from Fairscale FSDP to torch distributed FSDP.
ssd_offload is a capability in similar FSDP-like systems (for example zero-infinity by Microsoft)
# API changes:
Logically, from the outside ssd_offload looks very similar to cpu_offload, so merging the options probably makes sense (even though in the implementation they will look quite different).
fsdp constructor: cpu_offload argument -> offload_config
torch.distributed.fsdp.CPUOffload -> ... OffloadConfig
```
from typing import Optional

class OffloadConfig:
    offload_type: Optional[str] = None  # only "cpu" or "ssd" supported for now; maybe make this an enum
    offload_directory: Optional[str] = None  # directory where tensors offloaded to disk reside (only valid with offload_type="ssd"); in the future this will probably be extended to support some sort of wildcards (e.g. a single machine with N GPUs and N SSDs may want one dedicated SSD per GPU)
    offload_params: bool = False
```
# Parts of Implementation
1) Efficient Chunked Disk I/O functions
2) Tensor Subclass implementation that exposes Tensors that can migrate back and forth between CPU and disk (SsdTensorHandle and SsdParameter). This also includes some optimizations for pickling (torch.save-ing) SsdTensorHandles (and its descendants) allowing them to serialize to disk and back w/out having multiple entire copies of the tensor in memory at a time.
3) Implementation of Flattened Parameter (SsdFlatParameter/SsdFlatParameterView) and ancillary code to support them. Current implementation uses non-chainable derivative of pytorch parameterization I've coined propertization (turning an nn.Module attribute into a property -- necessary for updating views from the flattened tensor due to .data modification) I'm also looking into another possible alternative that would utilize a transparently wrapped SsdFlatParameterView object (similar to wrapt, but much simpler).
### Alternatives
_No response_
### Additional context
# Future Works/Desires
* Creating asynchronous disk I/O calls that can be called by a prefetcher/offloader to pipeline disk accesses.
* Simplifying SsdTensorHandle by removing .data overriding within FSDP
* Benchmarking + Profiling both time and memory usage of ssd_offloaded tensors when used in realistic models.
* Optimizations to improve performance.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 3 |
5,780 | 76,920 |
Avoid Self-loops on Module Creation
|
module: nn, triaged
|
### π The feature, motivation and pitch
I have two models `Model1` and `Model2`.
`Model2` will be a sub-module of `Model1`.
I need to use some parameters and methods of `Model1` in `Model2`, thus I want to link `Model2` to its father `Model1`.
```python
import torch
class Model1(torch.nn.Module):
def __init__(self):
torch.nn.Module.__init__(self)
self.submodel = Model2()
self.submodel.link(self)
class Model2(torch.nn.Module):
def link(self, model):
self.supmodel = model
model1 = Model1()
model1.eval()
```
Writing it this way creates a self-loop and raises a maximum recursion error.
I know that I can write a third module containing both models to avoid this error, but directly supporting the above implementation would make model creation easier.
Is it possible to support this convention?
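A common workaround (a sketch of my own, not part of the original request) is to hold the parent through a weak reference, so it is never registered as a submodule and module traversal (e.g. `.eval()`) never follows the cycle:
```python
import weakref

import torch


class Model2(torch.nn.Module):
    def link(self, model):
        # A weakref.ref is not an nn.Module, so nn.Module.__setattr__ stores it
        # as a plain attribute instead of registering a (cyclic) submodule.
        self._supmodel_ref = weakref.ref(model)

    @property
    def supmodel(self):
        return self._supmodel_ref()


class Model1(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.submodel = Model2()
        self.submodel.link(self)


model1 = Model1()
model1.eval()  # no recursion error
print(model1.submodel.supmodel is model1)  # True
```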
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 3 |
5,781 | 76,913 |
Pytorch return TCPStore( RuntimeError: Connection reset by peer)
|
oncall: distributed, module: c10d
|
I am trying to run the Cosmic Tagger PyTorch benchmark. It runs fine up to 256 nodes (1024 ranks). However, when I try to run on a higher number of nodes, 384 nodes (1536 ranks), it only runs fine occasionally; most of the time it fails.
## Issue description
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
Error executing job with overrides: ['data.data_directory=/lus/grand/projects/Polaris/CosmicTagger/data/', 'run.minibatch_size=1536', 'run.id=at-2022-05-05_384nodes', 'run.iterations=10', 'data.downsample=0']
Traceback (most recent call last):
File "/home/ranand/bare_metal_Cosmic_tagger/CosmicTagger/bin/exec.py", line 254, in main
s = exec(cfg)
File "/home/ranand/bare_metal_Cosmic_tagger/CosmicTagger/bin/exec.py", line 47, in __init__
self.train()
File "/home/ranand/bare_metal_Cosmic_tagger/CosmicTagger/bin/exec.py", line 100, in train
self.make_trainer()
File "/home/ranand/bare_metal_Cosmic_tagger/CosmicTagger/bin/exec.py", line 181, in make_trainer
self.trainer = distributed_trainer.distributed_trainer(self.args)
File "/home/ranand/bare_metal_Cosmic_tagger/CosmicTagger/src/utils/torch/distributed_trainer.py", line 139, in __init__
torch.distributed.init_process_group(
File "//home/ranand/bare_metal_Cosmic_tagger/pytorch/torch/distributed/distributed_c10d.py", line 595, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "//home/ranand/bare_metal_Cosmic_tagger/pytorch/torch/distributed/rendezvous.py", line 259, in _env_rendezvous_handler
store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
File "//home/ranand/bare_metal_Cosmic_tagger/pytorch/torch/distributed/rendezvous.py", line 191, in _create_c10d_store
return TCPStore(
RuntimeError: Connection reset by peer
```
Software installed
Cudatoolkit/11.4
gcc-11.2
pytorch version 1.12.0
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 7 |
5,782 | 76,906 |
torch.nn.functional.linear sometimes incorrectly accepts arguments of the different type
|
module: nn, triaged, module: type promotion
|
```
In [14]: a=torch.randn(10,10,10,device="cuda", dtype=torch.half)
In [15]: w=torch.randn(10,10, device="cuda", dtype=torch.half)
In [16]: b=torch.randn(10,10, device="cuda")
In [17]: torch.nn.functional.linear(a.transpose(1,2), w, b).dtype
Out[17]: torch.float16
In [18]: torch.nn.functional.linear(w, b).dtype
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-23a27726c7de> in <module>
----> 1 torch.nn.functional.linear(w, b).dtype
RuntimeError: expected scalar type Half but found Float
```
We should disallow this case always.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @nairbv
| 0 |
5,783 | 76,891 |
Unwanted behavior with some in-place operations on CPU
|
triaged, module: viewing and reshaping
|
### π Describe the bug
When doing in-place operations on a CPU such as subtracting a column from all columns of a tensor, the following behavior occurs:
```python
import torch
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
a -= a[:, 0, None]
print(a) # tensor([[0., 2., 3.], [0., 5., 6.]])
```
It seems like the operation is processed sequentially, which is why a zero tensor is subtracted from the last two columns.
However, when performing the same operation with a tensor of size (2, 16), the first column is "correctly" subtracted from all columns:
```python
import torch
b = torch.ones((2, 16))
b -= b[:, 0, None]
print(torch.count_nonzero(b)) # tensor(0)
assert torch.count_nonzero(b) == 0
```
For a tensor of size (2, 17), the first 16 columns are processed "correctly", with the last one again being processed with the already updated version of the first column:
```python
import torch
c = torch.ones((2, 17))
c -= c[:, 0, None]
print(torch.count_nonzero(c[:, :16])) # tensor(0)
print(c[:, 16]) # tensor([1., 1.])
```
When running the same code on a GPU, the first version of the first column is always subtracted from all columns.
NumPy (on a CPU) also has the same behavior as PyTorch on a GPU.
It would be nice to get a warning when attempting to run such operations on the CPU, or otherwise to adopt the NumPy behavior and always broadcast the original slice of the tensor.
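For reference, a minimal workaround sketch (mine, not from the report above): materializing the slice before the in-place subtraction gives the broadcast-of-the-original behavior on the CPU as well:
```python
import torch

a = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
a -= a[:, 0, None].clone()  # clone() snapshots the first column before it is modified
print(a)  # tensor([[0., 1., 2.], [0., 1., 2.]])
```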
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: openSUSE Leap 15.3 (x86_64)
GCC version: (SUSE Linux) 7.5.0
Clang version: 11.0.1
CMake version: version 3.17.0
Libc version: glibc-2.31
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.3.18-150300.59.54-default-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-fft 1.3.1 pypi_0 pypi
[conda] mkl-random 1.2.2 pypi_0 pypi
[conda] mkl-service 2.4.0 pypi_0 pypi
[conda] mkl_fft 1.3.1 py310h2b4bcf5_1 conda-forge
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.21.5 pypi_0 pypi
[conda] numpy-base 1.21.5 py310h9585f30_2
[conda] pytorch 1.11.0 py3.10_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
| 1 |
5,784 | 76,887 |
Multiprocessing DataLoader hangs on exception inside iterator when using a simple queue and a producer thread
|
module: multiprocessing, module: dataloader, triaged
|
### π Describe the bug
I'm using a thread to read data from a pipe which can only be accessed by a single thread (hence the quirky setup). So the thread reads data continuously from the stream, and inserts the data frames into a multiprocessing SimpleQueue as they become available.
Then I have an `IterableDataset` that has access to this queue, and tries to read the data frame and then do some "heavy" processing on the data. I use a multiprocessing `DataLoader`.
However, if something raises an exception inside a dataset instance during dataloader iteration, the **Python main process hangs after printing the exception rather than exiting as I would expect**. I think this is a bug, unless you can spot a multiprocessing mistake in my code example below.
I stumbled upon this ticket but I don't know if it's related: https://github.com/pytorch/pytorch/issues/48666
### Minimal example
```python
import threading
import time
import torch
import torch.multiprocessing as mp
from torch.utils.data import IterableDataset, DataLoader
class ProducerThread(threading.Thread):
def __init__(self):
super().__init__()
self.queue = mp.SimpleQueue()
def run(self):
while True:
data = '...'
time.sleep(0.01)
self.queue.put(data)
class Dataset(IterableDataset):
def __init__(self, queue):
self.queue = queue
def __iter__(self):
while True:
data = self.queue.get()
data = self.process(data)
yield data
def process(self, data):
raise RuntimeError()
if __name__ == '__main__':
batch_size = 4
num_workers = 3
producer = ProducerThread()
dataset = Dataset(producer.queue)
dataloader = DataLoader(dataset, batch_size, num_workers=num_workers)
producer.start()
for batch in dataloader:
pass
```
### Versions
Collecting environment information...
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.2.1 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Feb 15 2022, 14:18:12) [Clang 13.0.0 (clang-1300.0.27.3)] (64-bit runtime)
Python platform: macOS-12.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.10.2
[pip3] torchvision==0.11.3
[conda] Could not collect
cc @VitalyFedyunin @SsnL @ejguan @NivekT
| 1 |
5,785 | 76,885 |
EmbeddingBag: Does CUDA calculate error in EmbeddingBag forward when include_last_offset=True ?
|
module: nn, triaged, module: embedding
|
### π Describe the bug
Hi, @ezyang @jianyuh
I am a user of PyTorch and want to ask for some help with EmbeddingBag.
We found that the CUDA forward output of EmbeddingBag is wrong (not equal to the CPU output) when include_last_offset=True.
Here is our test case, you can run it directly. The mode is sum.
```
import torch
from torch import nn as nn
device = 'cuda:0'
cpu_device = 'cpu'
weight_elem = 10
weight_feature_size = 4
include_last_offset = True
embedding_sum = nn.EmbeddingBag(weight_elem, weight_feature_size, mode='sum', scale_grad_by_freq=False, include_last_offset=include_last_offset)
embedding_sum.weight.data = torch.Tensor([[-0.1117, -0.4966, 0.1631, -0.8817],
[ 0.0539, 0.6684, -0.0597, -0.4675],
[-0.2153, 0.8840, -0.7584, -0.3689],
[-0.3424, -1.4020, 0.3206, -1.0219],
[ 0.7988, -0.0923, -0.7049, -1.6024],
[ 0.2891, 0.4899, -0.3853, -0.7120],
[ 0.7667, 0.0190, 0.0220, 1.1532],
[-0.3393, 0.1559, 0.8966, -0.2968],
[-0.6857, -0.0496, -1.2485, -0.8509],
[-0.7690, -1.5606, -0.5309, 0.2178]]).data
print("sum cpu")
input = torch.Tensor([1, 2, 4, 5, 4, 3, 2, 9], device=cpu_device).long()
offsets = torch.Tensor([0, 3, 6], device=cpu_device).long()
output = embedding_sum(input, offsets)
embedding_sum.zero_grad()
print("sum cuda")
input_cuda = input.to(device)
offsets_cuda = offsets.to(device)
embedding_sum = embedding_sum.to(device)
output_cuda = embedding_sum(input_cuda, offsets_cuda)
embedding_sum.zero_grad()
print('embedding_sum weight = ', embedding_sum.weight.cpu().data)
print('cpu output = ', output)
print('cuda output = ', output_cuda.to("cpu"))
```
We get the following output:
```
cpu output = tensor([[ 0.6374, 1.4601, -1.5230, -2.4388],
[0.7455, -1.0044, -0.7696, -3.3363]], grad_fn=<EmbeddingBagBackward0>)
cuda output = tensor([[ 0.6374, 1.4601, -1.5230, -2.4388],
[-0.2388, -1.6810, -2.0589, -3.4874]], grad_fn=<ToCopyBackward0>)
```
**Watch out: the second bag value of cuda output is not equal to cpu output.**
```
input [1, 2, 4, 5, 4, 3, 2, 9]
offsets [0, 3, 6]
```
For CUDA, the second bag's entries are: 5 4 3 2 9, while for cpu, they are 5 4 3. They are different.
```
idx value
5 [ 0.2891, 0.4899, -0.3853, -0.7120],
4 [ 0.7988, -0.0923, -0.7049, -1.6024],
3 [-0.3424, -1.4020, 0.3206, -1.0219],
2 [-0.2153, 0.8840, -0.7584, -0.3689],
9 [-0.7690, -1.5606, -0.5309, 0.2178],
Output = -0.2388, -1.6810, -2.0589, -3.4874
```
**Thus, we wonder how to understand the doc's description of include_last_offset, and which implementation is right?**
https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html?highlight=embeddingbag#torch.nn.EmbeddingBag
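For reference, here is my reading of the docs as a plain-Python sketch (an interpretation, not an authoritative statement of the intended semantics): with `include_last_offset=True` the last offset marks the end of the last bag, so trailing indices are ignored, which matches the CPU result above.
```
import torch

input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 3, 6])
# include_last_offset=True: offsets has B+1 entries and the last entry is the
# end of the final bag, so only input[0:3] and input[3:6] form bags.
bags = [input[offsets[i]:offsets[i + 1]].tolist() for i in range(len(offsets) - 1)]
print(bags)  # [[1, 2, 4], [5, 4, 3]]
```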
### Version
My Env:
Torch: build from master, 36420b5e8cce9c783903bbc210ed7f2b6535ebf5
CUDA: 11.2
Device: A100
Ubuntu 20.04
Python: 3.8.12
Thank you.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 7 |
5,786 | 76,881 |
RecursionError when running torch.jit.script inside JitTestCase
|
oncall: jit
|
### π Describe the bug
```python
import torch
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.jit_utils import JitTestCase
class TestCudaFuser(JitTestCase):
def test_binary_ops(self):
for _ in range(1000):
def t(x: torch.Tensor, y: torch.Tensor, z: torch.Tensor):
o = torch.lt(x, y)
o = o + z
return o
t_jit = torch.jit.script(t)
if __name__ == '__main__':
run_tests()
```
Error message:
```
======================================================================
ERROR: test_binary_ops (__main__.TestCudaFuser)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1800, in wrapper
method(*args, **kwargs)
File "/home/gaoxiang/misc/jit-script-running.py", line 14, in test_binary_ops
t_jit = torch.jit.script(t)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 1323, in script
fn = torch._C._jit_script_compile(
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 261, in emitFunctionHook
self._compared_saved_loaded(func)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 241, in _compared_saved_loaded
imported = torch.jit.load(buffer2)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_serialization.py", line 169, in load
return wrap_cpp_module(cpp_module)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_recursive.py", line 865, in wrap_cpp_module
return torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 593, in _construct
script_module = RecursiveScriptModule(cpp_module)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 573, in __init__
super(RecursiveScriptModule, self).__init__()
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 272, in init_then_script
original_init(self, *args, **kwargs)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 475, in __init__
super(ScriptModule, self).__init__()
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 260, in __init__
self.training = True
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 768, in __setattr__
return super(RecursiveScriptModule, self).__setattr__(attr, value)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 500, in __setattr__
return super(ScriptModule, self).__setattr__(attr, value)
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1217, in __setattr__
if isinstance(value, Parameter):
File "/home/gaoxiang/.local/lib/python3.10/site-packages/torch/nn/parameter.py", line 11, in __instancecheck__
isinstance(instance, torch.Tensor) and getattr(instance, '_is_param', False))
RecursionError: maximum recursion depth exceeded while calling a Python object
----------------------------------------------------------------------
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0a0+git3b5093d
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 11.2.0
Clang version: 13.0.1
CMake version: version 3.23.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.17.4-arch1-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 510.60.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.3.3
/usr/lib/libcudnn_adv_infer.so.8.3.3
/usr/lib/libcudnn_adv_train.so.8.3.3
/usr/lib/libcudnn_cnn_infer.so.8.3.3
/usr/lib/libcudnn_cnn_train.so.8.3.3
/usr/lib/libcudnn_ops_infer.so.8.3.3
/usr/lib/libcudnn_ops_train.so.8.3.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0a0+gitee4c004
[pip3] torch-ucc==1.0.0
[pip3] torchani==2.2
[pip3] torchvision==0.2.2.post3
[conda] Could not collect
```
| 0 |
5,787 | 76,871 |
PrimTorch binary refs do not handle CUDA + CPU scalar tensors correctly
|
triaged, module: primTorch
|
### π Describe the bug
```
import torch
import torch._refs as ref
print(torch.add(torch.randn(2, device='cuda'), torch.randn(())))
print(ref.add(torch.randn(2, device='cuda'), torch.randn(())))
```
gives
```
tensor([0.7217, 0.5864], device='cuda:0')
Traceback (most recent call last):
File "a.py", line 5, in <module>
print(ref.add(torch.randn(2, device='cuda'), torch.randn(())))
File "/private/home/ezyang/pytorch-tmp/torch/_refs/__init__.py", line 716, in add
result = prims.add(a, b)
File "/private/home/ezyang/pytorch-tmp/torch/_prims/__init__.py", line 173, in _prim
meta(*args, **kwargs)
File "/private/home/ezyang/pytorch-tmp/torch/_prims/__init__.py", line 202, in _elementwise_meta
utils.check_same_device(*args, allow_cpu_scalar_tensors=True)
File "/private/home/ezyang/pytorch-tmp/torch/_prims/utils.py", line 314, in check_same_device
raise RuntimeError(msg)
RuntimeError: Tensor on device cpu is not on the expected device cuda:0!
```
The CPU scalar tensor behavior in stock ATen is intentional and we need to replicate it.
### Versions
master
cc @ezyang @mruberry @ngimel
| 3 |
5,788 | 76,865 |
Object-base collectives create tensors at unexpected devices
|
high priority, oncall: distributed, better-engineering, module: c10d
|
### π Describe the bug
torch.distributed.gather_object (and similar object based collectives) rely on default Tensor pickling behavior of deserializing tensors to their original devices.
I ran into this issue when using ShardedTensor::gather as it leads to all processes occupying all GPUs.
This causes deadlocks on test/distributed/_shard/checkpoint/test_file_system_checkpoint.py -- test_load_rowwise_to_colwise and test_load_with_different_shard_plan.
Those deadlocks only happens on the VMs used by PyTorch CI (M60).
This behavior is quite surprising, as the docs sort of imply that gathered tensors would all land on `torch.cuda.current_device()`.
The following script will print this on a 4 GPU host:
`[0] :: cuda:0 cuda:1 cuda:2 cuda:3`
```python
import random
import torch.multiprocessing as mp
import torch
import torch.distributed as dist
import traceback
def run(rank):
dst = 0
obj = { "x": torch.rand(10).cuda(rank) }
gather_list = [None] * dist.get_world_size()
dist.gather_object(
obj,
gather_list if rank == dst else None,
dst=dst)
if rank == dst:
devices = " ".join(str(l["x"].device) for l in gather_list)
print(f"[{rank}] :: {devices}")
def worker(rank, init_method, world_size):
init_comms(init_method, rank, world_size)
try:
run(rank)
except BaseException as e:
print(f"[{rank}] :: raised {e}")
traceback.print_last()
destroy_comms()
def init_pg(init_method, rank, world_size):
torch.distributed.init_process_group(
backend="nccl",
world_size=world_size,
rank=rank,
init_method = init_method
)
torch.cuda.set_device(rank)
def init_comms(file_name, rank, world_size):
init_pg(file_name, rank, world_size=world_size)
def destroy_comms():
dist.barrier()
dist.destroy_process_group()
if __name__ == "__main__":
print("main")
port = random.randint(10000, 20000)
init_method = f"tcp://localhost:{port}"
world_size = torch.cuda.device_count()
mp.spawn(
fn=worker,
args=(init_method,world_size),
nprocs=world_size,
join=True,
)
```
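As a smaller illustration of the underlying pickling behavior (a sketch of my own, assuming a machine with at least two GPUs): `torch.load` without `map_location` restores a tensor onto the device it was saved from, regardless of the current device in the loading process.
```python
import io

import torch

t = torch.rand(2, device="cuda:1")
buf = io.BytesIO()
torch.save(t, buf)
buf.seek(0)
print(torch.load(buf).device)  # cuda:1
```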
### Versions
main
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 3 |
5,789 | 76,856 |
Feature requests for optimizer overlapping
|
high priority, triage review, oncall: distributed, module: ddp, module: fsdp
|
### π The feature, motivation and pitch
After chatting with users who are interested in using optimizer overlap with DDP (and eventually FSDP) have a couple of feature requests:
1. Support an argument `set_grads_to_None`, as users would like to simply pass in this flag to make their calls to optimizer.step() functionally a no-op, instead of manually having to either do this themselves on their gradients or remove their optimizer step, and
2. Add a getter to expose the fully initialized fused optimizer on the DDP / FSDP module. This is required for things like learning rate schedulers that take the optimizer as a ctor argument, for use with things such as torchrec's `KeyedOptimizer` which wraps a vanilla torch.optim.Optimizer, and for additional usability such as optimizer checkpointing.
3. Compatibility with LR schedulers.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 1 |
5,790 | 76,853 |
Inconsistent results between Pow and Float Pow with their numpy references for complex types
|
triaged, module: complex, module: NaNs and Infs
|
### π Describe the bug
Added numpy references to Pow and Float_Pow OpInfos. For `torch.complex64` and `torch.complex128`, the test_reference_numerics (small, large, extremal) tests were skipped due to errors of the form: `Greatest absolute difference: nan at index`.
Followed from working on these issues: #76483, #74279
### Versions
Not version specific
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 2 |
5,791 | 76,844 |
Unify torch.ops argument parsing code with PythonArgParser
|
triaged, module: __torch_dispatch__, module: python frontend
|
### π Describe the bug
They are implementations of the same thing; we should use the same codepath for each. Relevant for https://github.com/pytorch/pytorch/pull/76835 where scalar arguments were not accepted into wrapped tensors.
### Versions
master
cc @Chillee @ezyang @zou3519 @albanD @samdow
| 0 |
5,792 | 76,838 |
Windows CUDA TTS tracking task
|
module: ci, triaged
|
Our goal for CI TTS started at 3hrs this half and was not far from there at the start of 2022. However, now we are noticing some pretty large regressions where our Windows jobs have a TTS of 4hrs (per shard!). At peak times, the TTS goes higher due to stressed queuing times.
Actionable steps:
- Increase our pool of available Windows runners --> if feasible, this would be the ideal solution.
- Aggressively shard --> this is fine but there is an overhead we'd be eating @clee2000
- Remove Windows CUDA job from PRs --> this would not be ideal as Windows CUDA build signal is valuable to have on pull requests. We used to have smoke tests to not test so much on Windows GPU machines, but those were not worth maintaining as the process of maintaining them was mostly manual.
- Be able to delineate TTS on the job level so we can confirm results in our metrics (compared to spot checks)
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 18 |
5,793 | 76,827 |
TYPEIGNORE lint run locally disagrees with CI
|
module: lint, triaged, module: macos
|
### π Describe the bug
https://github.com/pytorch/pytorch/pull/76735 is failing TYPEIGNORE lint https://github.com/pytorch/pytorch/runs/6291285337?check_suite_focus=true but when I check out the package and run it locally the TYPEIGNORE lint passes. This is on OSX. cc @malfet @albanD @suo
### Versions
master
| 0 |
5,794 | 76,807 |
There is a bug with latest stable torch version and the following Nightly versions related to `optimize_for_mobile`
|
oncall: mobile
|
### π Describe the bug
There is a bug in `optimize_for_mobile` with the latest stable torch version and subsequent nightly versions. Please look at this [discussion](https://discuss.pytorch.org/t/torch-utils-mobile-optimizer-optimize-for-mobile-is-resulting-different-output-than-torch-model-and-jit-model/150183) for full details.
**The short problem description:** the model gives different results after `optimize_for_mobile`.
The problem started with stable torch 1.11.0 and the nightly versions that followed; it works correctly with earlier torch versions.
Here is the minimal code snippet to reproduce the error:
```
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile
model = torchvision.models.resnet50(pretrained=True)
model.eval()
out1 = model(torch.ones([1, 3, 224, 224], dtype=torch.float))
# Note: the original snippet omitted these two definitions; this is the usual
# recipe suggested by the imports (torch.jit.trace + optimize_for_mobile).
traced_script_module = torch.jit.trace(model, torch.ones([1, 3, 224, 224], dtype=torch.float))
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
out2 = traced_script_module(torch.ones([1, 3, 224, 224], dtype=torch.float))
out3 = traced_script_module_optimized(torch.ones([1, 3, 224, 224], dtype=torch.float))
```
and results:
```
tensor([[-3.0806e-01, 7.9845e-02, -1.1900e+00, -1.4837e+00, -5.1359e-01,...]])
tensor([[-3.0806e-01, 7.9845e-02, -1.1900e+00, -1.4837e+00, -5.1359e-01,...]])
tensor([[-5.2194e+00, 1.4939e+00, -2.2703e+00, -5.4212e+00, -2.0551e+00,...]])
```
### Versions
stable torch version 1.11.0 and the following Nightly versions.
| 5 |
5,795 | 76,806 |
torch.Tensor.__rdiv__ long x scalar float type promotion is incorrect
|
triaged, module: type promotion, module: primTorch
|
```
a = torch.tensor((1, 2, 3))
b = torch.tensor((1.), dtype=torch.double)
torch.Tensor.__rdiv__(a, b)
: tensor([1.0000, 0.5000, 0.3333])
```
The result should have dtype torch.double:
```
b / a
: tensor([1.0000, 0.5000, 0.3333], dtype=torch.float64)
```
If we pursue this we should consider it in the context of https://github.com/pytorch/pytorch/issues/74616. I think type promotion works as expected in scenarios where the dunder is expected to be used:
```
1 / torch.tensor((1.), dtype=torch.double)
tensor(1., dtype=torch.float64)
```
cc @nairbv @mruberry @ezyang @ngimel
| 0 |
5,796 | 76,804 |
torch.add bool x bool allows integer alpha, inconsistent with other dtype type checking
|
triaged, module: primTorch
|
torch.add has an alpha kwarg which must be a scalar. Typically this kwarg must have a Python type that is weakly lower than the corresponding Python type of operation's computation dtype. For example, if adding two integer tensors alpha cannot be a float:
```
a = torch.tensor((1, 2, 3))
b = torch.tensor((4, 5, 6))
torch.add(a, b, alpha=.5)
: RuntimeError: For integral input tensors, argument alpha must not be a floating point number.
```
However for some reason we support integer alpha when adding bool tensors:
```
torch.add(torch.tensor((True, False, True, False)), torch.tensor((True, False, False, True)), alpha=1)
: tensor([ True, False, True, True])
```
The integer alpha works fine, as does specifying a boolean alpha:
```
torch.add(torch.tensor((True, False, True, False)), torch.tensor((True, False, False, True)), alpha=False)
: tensor([ True, False, True, False])
torch.add(torch.tensor((True, False, True, False)), torch.tensor((True, False, False, True)), alpha=True)
tensor([ True, False, True, True])
```
It's just inconsistent with our type checking for torch.add.
cc @ezyang @mruberry @ngimel
| 0 |
5,797 | 76,798 |
`gradcheck` for `torch.solve` may trigger INTERNAL ASSERT FAILED
|
needs reproduction, module: autograd, triaged
|
### π Describe the bug
`gradcheck` for `torch.solve` may trigger INTERNAL ASSERT FAILED randomly.
```python
import torch
input = torch.rand([0, 2, 5, 4], dtype=torch.float64, requires_grad=True)
A = torch.rand([1, 5, 5], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(torch.solve, (input, A))
# RuntimeError: falseINTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/aten/src/ATen/native/LinearAlgebraUtils.h":328, please report a bug to PyTorch. linalg.solve: (Batch element 0): Argument 1998362383 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
### Versions
pytorch: 1.11.0
cc @ezyang @gchanan @zou3519 @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 7 |
5,798 | 76,786 |
`cumprod, prod` will backward fail if `dtype` argument is different than the dtype of input tensor
|
module: autograd, triaged, module: complex, complex_autograd
|
### π Describe the bug
`cumprod, prod` will backward fail if `dtype` argument is different than the dtype of input tensor
```python
import torch
def fn(input):
dim = -1
dtype = torch.complex128
fn_res = torch.cumprod(input, dim, dtype=dtype)
return fn_res
input = torch.rand([5, 5, 5], dtype=torch.float64, requires_grad=True)
fn(input).sum().backward()
# RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
```python
import torch
input = torch.rand([3, 3], dtype=torch.float64, requires_grad=True)
def fn(input):
dtype = torch.complex64
fn_res = torch.prod(input, dtype=dtype, )
return fn_res
res = fn(input)
res.backward()
# RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @anjali411 @dylanbespalko @mruberry
| 0 |
5,799 | 76,785 |
`addr, baddmm, dist, l1_loss` will backward fail when input tensors have different dtypes
|
module: autograd, triaged, module: complex, actionable, complex_autograd
|
### π Describe the bug
`addr, baddmm, dist, l1_loss` will backward fail when input tensors have different dtypes
```python
import torch
input = torch.rand([3, 2], dtype=torch.float64, requires_grad=True)
vec1 = torch.rand([3], dtype=torch.float64, requires_grad=True)
vec2 = torch.rand([2], dtype=torch.complex128, requires_grad=True)
res = torch.addr(input, vec1, vec2)
res2 = res.sum()
res2.backward()
# RuntimeError: expected scalar type ComplexDouble but found Double
```
```python
import torch
results = dict()
input = torch.rand([10, 3, 5], dtype=torch.float64, requires_grad=True)
batch1 = torch.rand([10, 3, 4], dtype=torch.complex128, requires_grad=True)
batch2 = torch.rand([10, 4, 5], dtype=torch.complex128, requires_grad=True)
res = torch.baddbmm(input, batch1, batch2)
res2 = res.sum()
res2.backward()
# RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
```python
import torch
input = torch.rand([5, 3], dtype=torch.complex128, requires_grad=True)
other = torch.rand([5, 3], dtype=torch.float64, requires_grad=True)
res = torch.dist(input, other)
res.sum().backward()
# RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
```python
import torch
input = torch.rand([3, 5], dtype=torch.complex128, requires_grad=True)
target = torch.rand([3, 5], dtype=torch.float64, requires_grad=True)
res = torch.nn.functional.l1_loss(input, target)
res.backward()
# RuntimeError: result type ComplexDouble can't be cast to the desired output type Double
```
All of them succeed in the forward pass but fail in the backward pass. However, some other APIs backward successfully even if the input tensors have different dtypes, for example
```python
import torch
input = torch.rand([5, 3], dtype=torch.complex128, requires_grad=True)
other = torch.rand([5, 3], dtype=torch.float64, requires_grad=True)
res = torch.nn.functional.poisson_nll_loss(input, other)
res.sum().backward()
# succeed
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @anjali411 @dylanbespalko @mruberry
| 3 |
5,800 | 76,783 |
`gradcheck` fails for `torch.trace`
|
module: autograd, triaged, actionable
|
### π Describe the bug
`gradcheck` fails for `torch.trace` when `dim1 > dim0 + 1`
```python
import torch
input = torch.rand([5, 3], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(torch.trace, (input))
# torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
# numerical:tensor([[1.0000],
# [0.0000],
# [0.0000],
# [0.0000],
# [1.0000],
# [0.0000],
# [0.0000],
# [0.0000],
# [1.0000],
# [0.0000],
# [0.0000],
# [0.0000],
# [0.0000],
# [0.0000],
# [0.0000]], dtype=torch.float64)
# analytical:tensor([[1.],
# [0.],
# [0.],
# [0.],
# [1.],
# [0.],
# [0.],
# [0.],
# [1.],
# [0.],
# [0.],
# [0.],
# [1.],
# [0.],
# [0.]], dtype=torch.float64)
```
I think the `numerical:tensor` is right, since `dim0 = 3` and there are 3 elements on the diagonal. Plus, when `dim1 = 4`, it will succeed.
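For reference, a small sketch (my own check, not from the issue) of the expected gradient: d(trace(A))/dA has ones only at the min(rows, cols) diagonal positions.
```python
import torch

m, n = 5, 3
expected = torch.zeros(m, n)
idx = torch.arange(min(m, n))
expected[idx, idx] = 1.0
print(expected.flatten())
# tensor([1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
# This matches the "numerical" Jacobian column printed by gradcheck above.
```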
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |