Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
5,601 | 78,413 |
torch.angle differs from np.angle for -0.
|
triaged, module: numpy, module: primTorch
|
### 🐛 Describe the bug
Discovered in https://github.com/pytorch/pytorch/pull/78349
```python
>>> import torch
>>> import numpy as np
>>> t = torch.tensor(-0.)
>>> torch.angle(t)
tensor(0.)
>>> np.angle(t.numpy())
3.141592653589793
```
cc: @mruberry
### Versions
master
cc @mruberry @rgommers @ezyang @ngimel
| 2 |
5,602 | 93,756 |
Torchdynamo for Deepspeed and FSDP
|
triaged, enhancement, module: fsdp, oncall: pt2
|
At AWS we are working with customers to enable gigantic transformer model training on EC2. Furthermore, we are trying to leverage compiler techniques to optimize PyTorch workloads due to their wide adoption. For example, we recently open-sourced [RAF](https://github.com/awslabs/raf), a deep learning compiler for training, which shows promising training acceleration for a set of models, and we have verified that it works with TorchDynamo. On the other hand, we do see a gap in converting PyTorch programs to the IR we are using, especially when it comes to complex strategies such as distributed training.
One example is [Deepspeed](https://github.com/microsoft/DeepSpeed). It implements data parallelism (ZeRO) on top of PyTorch, introducing sharding for optimizer states, gradients and parameters. (FSDP is another approach to ZeRO.) The idea itself is pretty straightforward, but when trying to convert the DeepSpeed implementation to RAF via lazy tensor, it doesn't work well. For instance, the [NaN check for gradients](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/stage3.py#L2461) breaks the graph and results in performance degradation; it doesn't capture CUDA stream usage; etc.
In my understanding, those issues could potentially be resolved via TorchDynamo, as it has the capability to extract structure and call information from Python bytecode, in addition to the lazy-tensor tracing mechanism at a higher level.
So we'd like to get suggestions from the TorchDynamo community: what do you think about supporting such scenarios in TorchDynamo? Specifically, 1) how should TorchDynamo support ZeRO tracing? 2) is that likely to land in TorchDynamo sooner or later? Given our current roadmap and goals, we at AWS would be interested in collaborating along this direction if possible.
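(For context, a minimal sketch of the kind of data-dependent gradient check mentioned above; the helper below is illustrative rather than DeepSpeed's actual code. Tracing-based backends must break the captured graph, or fall back, when the Python `if` inspects tensor data.)
```python
import torch

def has_nan_grads(params) -> bool:
    # Illustrative stand-in for DeepSpeed's gradient NaN check: the Python-level
    # branch on tensor data forces tracers (lazy tensor, TorchDynamo) to
    # materialize the value, breaking the captured graph at this point.
    for p in params:
        if p.grad is not None and torch.isnan(p.grad).any().item():
            return True
    return False

model = torch.nn.Linear(4, 4)
model(torch.randn(2, 4)).sum().backward()
if has_nan_grads(model.parameters()):
    print("skipping optimizer step")
```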
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @comaniac @szhengac @yidawang @mli @hzfan @zachzzc
| 2 |
5,603 | 78,367 |
Split up and reorganize RPC tests
|
oncall: distributed, triaged, better-engineering, module: rpc
|
## Issue description
Right now all RPC tests are in https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/distributed/rpc/rpc_test.py, which is a 6500+ line file containing tests for `CudaRpcTest`, `FaultyAgentRpcTest`, `RpcTest`, `RpcTestCommon`, `TensorPipeAgentRpcTest`, and `TensorPipeAgentCudaRpcTest`. These classes are then imported by various files such as https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/distributed/rpc_utils.py. It would be helpful to separate the tests so it is easier to know what you are adding to. (I encountered this when I accidentally added to RpcTest and the test began running internally, which I did not want.)
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @jjlilley @mrzzd
| 2 |
5,604 | 78,346 |
`gradcheck` fails for `torch.distribution.transform` APIs in forward mode
|
module: distributions, module: autograd, triaged, module: forward ad
|
### 🐛 Describe the bug
`gradcheck` fails for `torch.distribution.transform` APIs in forward mode when `cache_size` is not 0.
Take `AbsTransform` as an example,
```python
import torch
def get_fn():
    cache_size = 1
    arg_class = torch.distributions.transforms.AbsTransform(cache_size=cache_size)
    def fn(input):
        fn_res = arg_class(input)
        return fn_res
    return fn

fn = get_fn()
input = torch.rand([3], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(fn, (input,), check_forward_ad=True)
# torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
# numerical:tensor([[0., 0., 0.],
# [0., 0., 0.],
# [0., 0., 0.]], dtype=torch.float64)
# analytical:tensor([[1., 0., 0.],
# [0., 1., 0.],
# [0., 0., 1.]], dtype=torch.float64)
```
Other APIs like `SigmoidTransform`, `TanhTransform`, and `SoftmaxTransform` also have this issue.
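(Per the report above, the same check passes when caching is disabled; a minimal sketch of that control case, stated as an assumption based on the description rather than verified here:)
```python
import torch

t = torch.distributions.transforms.AbsTransform(cache_size=0)  # caching disabled
x = torch.rand(3, dtype=torch.float64, requires_grad=True)
# Reported to pass with cache_size=0, unlike the cache_size=1 case above.
torch.autograd.gradcheck(lambda v: t(v), (x,), check_forward_ad=True)
```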
### Versions
pytorch: 1.11.0
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @albanD @zou3519 @gqchen @pearu @soulitzer @Lezcano @Varal7
| 0 |
5,605 | 78,332 |
TRACK: integral + floating inputs to an op with floating requiring grad result in INTERNAL_ASSERT
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
This issue tracks the reports where integral + floating inputs to an op, with the floating input requiring grad, result in an INTERNAL_ASSERT (a minimal repro sketch follows the list):
- tensordot #77517
- addmv #77814
- mv #77814
- bilinear #78087
- matmul #78141
- mm #78141
- baddmm #78143
- index_fill #78443
- layer_norm #78444
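(A minimal sketch of the tracked pattern, modeled on the `mm`/`matmul` repros in #78141; the `tensordot` shapes here are illustrative, see #77517 for the original report.)
```python
import torch

a = torch.randint(0, 2, (2, 3), dtype=torch.int8)              # integral input
b = torch.rand(3, 4, dtype=torch.float32, requires_grad=True)  # floating input requiring grad
# Mixing the two is reported to hit the INTERNAL_ASSERT tracked above.
torch.tensordot(a, b, dims=1)
```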
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 2 |
5,606 | 78,274 |
Memory allocation errors when attempting to initialize a large number of small feed-forward networks in RAM with shared memory despite having enough memory
|
module: memory usage, triaged
|
### 🐛 Describe the bug
Hello,
I am attempting to initialize and allocate space for ~10,000 small, two hidden layer mlps with shared memory in RAM. Here is how the models are created:
```
import torch
import psutil
import torch.nn as nn
class AntNN(nn.Module):
    def __init__(self, input_dim=60, hidden_size=256, action_dim=8, init_type='xavier_uniform'):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_dim, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, action_dim)
        )

    def forward(self, obs):
        return self.layers(obs)

def model_factory(device, hidden_size=128, init_type='xavier_uniform', share_memory=True):
    model = AntNN(hidden_size=hidden_size, init_type=init_type).to(device)
    if share_memory:
        model.share_memory()
    return model

if __name__ == '__main__':
    num_policies = 10000
    mlps = []
    device = torch.device('cpu')
    for _ in range(num_policies):
        mlp = model_factory(device, 128, share_memory=True)
        mlps.append(mlp)
        print(f'RAM Memory % used: {psutil.virtual_memory()[2]}')
```
I’m keeping track of the total RAM usage and this is the last statement that printed before the error:
> RAM Memory % used: 56.8
So I clearly have more than enough memory available. Here is the “cannot allocate memory” error:
> Traceback (most recent call last):
File "/home/user/map-elites/testing.py", line 164, in <module>
mlp = model_factory(device, 128, share_memory=True)
File "/home/user/map-elites/models/ant_model.py", line 11, in model_factory
model.to(device).share_memory()
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1805, in share_memory
return self._apply(lambda t: t.share_memory_())
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
module._apply(fn)
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
module._apply(fn)
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/nn/modules/module.py", line 601, in _apply
param_applied = fn(param)
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1805, in <lambda>
return self._apply(lambda t: t.share_memory_())
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/_tensor.py", line 482, in share_memory_
self.storage().share_memory_()
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/storage.py", line 480, in share_memory_
self._storage.share_memory_()
File "/home/user/miniconda3/envs/map-elites/lib/python3.8/site-packages/torch/storage.py", line 160, in share_memory_
self._share_filename_()
RuntimeError: unable to mmap 65600 bytes from file </torch_6936_3098808190_62696>: Cannot allocate memory (12)
Process finished with exit code 1
With share_memory=False this works just fine, but for my application it is critical that these tensors exist in shared memory because they are modified by different processes. Is this a bug or a fundamental limitation of how shared memory in PyTorch works? Is there any way to get around this problem?
Also, I noticed that PyTorch’s default sharing system file_descriptor opens a lot of file descriptors, and that I might be reaching my system’s soft (or hard) open files limit. So I tried increasing the number from 1024 → 1,000,000 and was able to make it to
> Num mlps: 6966 / 10,000
RAM Memory % used: 62.0
Before running into the mmap error. I tried playing around with different values for the max number of open file descriptors allowed by the system and couldn’t get past that number, so I don’t think it’s bottlenecked by the number of allowed file descriptors anymore. I also tried
```torch.multiprocessing.set_sharing_strategy('file_system')```
since it seems to keep track of fewer file descriptors per tensor, but this didn't help either.
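(For completeness, a sketch of raising the per-process soft limit on open files from inside the script; it cannot exceed the hard limit, which only the OS/administrator can raise.)
```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f'open-file limits: soft={soft}, hard={hard}')
# Raise the soft limit as far as the hard limit allows.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```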
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.5 | packaged by conda-forge | (default, Jul 31 2020, 02:39:48) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] libblas 3.8.0 16_mkl conda-forge
[conda] libcblas 3.8.0 16_mkl conda-forge
[conda] liblapack 3.8.0 16_mkl conda-forge
[conda] liblapacke 3.8.0 16_mkl conda-forge
[conda] mkl 2020.1 217
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.1.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
| 4 |
5,607 | 78,262 |
Request for adding the possibility for training on sparse tensors
|
module: sparse, triaged
|
### 🚀 The feature, motivation and pitch
I'm working with tensors in which most cells are 0's and a few are 1's, say `100` 1's in a `50^3` tensor of 0's.
I tried [`Tensor.to_sparse`](https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html#torch-tensor-to-sparse) to turn my tensors into sparse ones.
The problem is that I can't forward them through a neural network for training; I get the error message: `RuntimeError: Input type (torch.cuda.sparse.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same`.
It would be great if training on sparse tensors were possible too.
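(In the meantime, a minimal sketch of the densify-before-the-layer workaround; the layer type and sizes are illustrative, assuming a CUDA conv layer like the one that raised the error above.)
```python
import torch
import torch.nn as nn

conv = nn.Conv3d(1, 8, kernel_size=3).cuda()
dense = torch.zeros(1, 1, 50, 50, 50, device='cuda')
dense[0, 0, 10, 20, 30] = 1.0
sparse = dense.to_sparse()          # sparse COO copy of the mostly-zero input
out = conv(sparse.to_dense())       # dense-weight layers currently reject sparse inputs
```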
Thank you
### Alternatives
_No response_
### Additional context
_No response_
cc @nikitaved @pearu @cpuhrsch @amjames
| 3 |
5,608 | 78,261 |
pytorch-android-lite use its own libfbjni.so, which is not compatible with any other version at all..
|
oncall: binaries, triaged, module: android
|
### 🐛 Describe the bug
I wanted to have react-native and pytorch lite together in one project.
I already wasted two full working days on that, only to find the root problem (well, there are two).
The first problem is that strange "0.0.3" version of fbjni, which NEVER existed (and never should). If you go to the fbjni page, you will also see there was NEVER a release of 0.0.3. Yet someone uploaded it to Maven, and enough projects reference it.
0.2.2 should be the version where react-native >=0.65 and pytorch(lite) work together, if pytorch lite hadn't changed the library itself and thereby broken everything else, as you cannot say which .so it should use; just "pickFirst" (wow at Android Gradle for that).
Just make a small demo pytorch android lite project and then change your implementation to this (in build.gradle):
```groovy
implementation 'com.facebook.fbjni:fbjni:0.2.2' // (not java-only, really that one)
implementation ('org.pytorch:pytorch_android_lite:1.12') {
    exclude module: 'fbjni'
    exclude module: 'fbjni-java-only'
    exclude module: 'soloader'
    exclude module: 'nativeloader'
}
```
Also don't forget to say which .so it should pick:
```groovy
android {
    packagingOptions {
        pickFirst "**/*.so"
    }
}
```
Try to start your demo project now and it will just crash with:
`cannot locate symbol "_Unwind_Resume" referenced by "/data/app/com.planradar.demoML-IK4F5CYrCuaaR7Csu8YHzQ==/lib/arm64/libpytorch_jni_lite.so"`
Then just REMOVE the exclude parts and also remove the import of fbjni 0.2.2. Then build your app again and it works again.
Basically your dependency says it needs fbjni:0.2.2 (which is true), but you somehow change the library itself, resulting in a bigger .so file.
The problem is that when used together with react-native, it always picks the "wrong" one, i.e. the one with 170 kB. React-native starts fine, but pytorch just crashes.
I replaced the .so file manually (apktool and signing) and then both react-native and pytorch start fine.
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.26
Python version: 3.7.5 (default, Feb 23 2021, 13:22:40) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-74-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.3
[conda] Could not collect
PS.: I tested with pytorch android lite 1.11 and 1.12
cc @ezyang @seemethere @malfet
| 1 |
5,609 | 78,260 |
[CI] Detect when tests are no longer running from CI
|
module: ci, triaged
|
### 🐛 Describe the bug
A few days ago (as of 5/25), the mac CI tests stopped running as a result of a yml change that broke GHA parsing (see https://github.com/pytorch/pytorch/pull/78000).
The scary part is that we only noticed it days later, and there was no automatic mechanism warning us that the test was missing. We should have such a mechanism.
How we can detect when this happens next time:
1. We could warn when the number of test cases run for a commit has dropped significantly even when there are no pending jobs
2. We could warn when the number of workflow jobs run for a commit has dropped from the previous workflow run
How should we warn?
- Send an email to OSS CI oncall
- Auto create an issue and tag module: ci and high-priority
- Configure an alert that looks different from our other alerts
### Versions
CI
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,610 | 78,255 |
Floating point exception in _conv_depthwise2d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
Floating point exception in `_conv_depthwise2d`.
### Example to reproduce
```python
import torch
torch.cuda.init()
gpu_dev = torch.device('cuda')
tensor_0 = torch.full((4, 1, 3, 3,), 1, dtype=torch.float64, requires_grad=False, device=gpu_dev)
tensor_1 = torch.full((4, 1, 3, 3,), 0.5, dtype=torch.float64, requires_grad=False, device=gpu_dev)
intarrayref_2 = [1, 1]
tensor_3 = torch.full((4,), 1, dtype=torch.float64, requires_grad=False, device=gpu_dev)
intarrayref_4 = [0, 0]
intarrayref_5 = [0, 0]
intarrayref_6 = [0, 0]
torch._C._nn._conv_depthwise2d(tensor_0, tensor_1, intarrayref_2, tensor_3, intarrayref_4, intarrayref_5, intarrayref_6)
```
### Result
```floating point exception```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using IvySyn, a fuzz testing tool which is currently being developed at Secure Systems Labs at Brown University.
### Versions
PyTorch version: 1.11.0a0+git1efeb37
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 4.3.21300-5bbc51d8
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-14-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: Lexa XT [Radeon PRO WX 3100]
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 40321.30
MIOpen runtime version: 2.12.0
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0a0+gitfdec0bf
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.11.0a0+gitcb6fe03 pypi_0 pypi
| 1 |
5,611 | 78,253 |
Any plan to add Noam scheduling?
|
triaged, module: LrScheduler
|
### 🚀 The feature, motivation and pitch
I found Noam scheduling (from the "Attention Is All You Need" paper) important for training transformers.
But it's not included in torch.
Do you have plans to add it, or would you like others to add it?
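(For reference, a minimal sketch of Noam scheduling on top of the existing `LambdaLR`, using `lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)`; `d_model` and `warmup_steps` are illustrative values.)
```python
import torch

d_model, warmup_steps = 512, 4000
model = torch.nn.Linear(d_model, d_model)
optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)

def noam(step: int) -> float:
    step = max(step, 1)  # avoid 0 ** -0.5 on the first call
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# With base lr=1.0, the effective learning rate equals noam(step).
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam)
```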
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
5,612 | 78,249 |
`max_unpool2d` is not deterministic
|
module: numerical-stability, module: nn, triaged, module: determinism, module: pooling
|
### 🐛 Describe the bug
`max_unpool2d` is not deterministic
```python
import torch
def test():
    kernel_size = 2
    stride = 2
    unpool = torch.nn.MaxUnpool2d(kernel_size, stride=stride)
    input0_tensor = torch.rand([1, 1, 2, 2], dtype=torch.float32)
    input1_tensor = torch.randint(-2, 8, [1, 1, 2, 2], dtype=torch.int64)
    input0 = input0_tensor.clone()
    input1 = input1_tensor.clone()
    try:
        res1 = unpool(input0, input1)
    except Exception:
        res1 = torch.tensor(0)
    input0 = input0_tensor.clone()
    input1 = input1_tensor.clone()
    try:
        res2 = unpool(input0, input1)
    except Exception:
        res2 = torch.tensor(0)
    if torch.allclose(res1, res2) == False:
        print(input0_tensor)
        print(input1_tensor)
        print(res1)
        print(res2)
        return True
    else:
        return False

for _ in range(1000):
    if test():
        break
# tensor([[[[0.8725, 0.3154],
# [0.7132, 0.8304]]]])
# tensor([[[[1, 6],
# [1, 5]]]])
# tensor([[[[0.0000, 0.7132, 0.0000, 0.0000],
# [0.0000, 0.8304, 0.3154, 0.0000],
# [0.0000, 0.0000, 0.0000, 0.0000],
# [0.0000, 0.0000, 0.0000, 0.0000]]]])
# tensor([[[[0.0000, 0.8725, 0.0000, 0.0000],
# [0.0000, 0.8304, 0.3154, 0.0000],
# [0.0000, 0.0000, 0.0000, 0.0000],
# [0.0000, 0.0000, 0.0000, 0.0000]]]])
```
### Versions
pytorch: 1.11.0
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @kurtamohler
| 3 |
5,613 | 78,248 |
USE_NATIVE_ARCH flag causes nvcc build failure due to "'arch=native': expected a number"
|
module: build, triaged
|
### 🐛 Describe the bug
I'm attempting to build pytorch wheels on a Jetson AGX Xavier DevKit running Ubuntu 18.04.
The script below works fine when the `USE_NATIVE_ARCH` option is not set, and fails when `USE_NATIVE_ARCH=1`.
<details>
<summary>
Script (click to expand)
</summary>
```bash
#!/bin/bash
set -e
set -o pipefail
python3.10 -m venv wheel_builder
source wheel_builder/bin/activate
PYTORCH_VERSION="1.11.0"
cd "${HOME}"
git clone --depth 1 --recursive --branch "v$PYTORCH_VERSION" https://github.com/pytorch/pytorch.git
cd pytorch
export BUILD_TEST=0
export USE_NCCL=0
export USE_DISTRIBUTED=0
export USE_QNNPACK=0
export USE_PYTORCH_QNNPACK=0
export TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"
export USE_NUMPY=1
export USE_CUDNN=1
export USE_NATIVE_ARCH=1
export PYTORCH_BUILD_VERSION="$PYTORCH_VERSION"
export PYTORCH_BUILD_NUMBER=1
export CC=$(which gcc-8)
export CXX=$(which g++-8)
pip install numpy==1.22.3
pip install wheel
pip install -r requirements.txt
pip install scikit-build
pip install ninja
python setup.py bdist_wheel
```
</details>
The configuration and CMake outputs can be found here: [Summaries.txt](https://github.com/pytorch/pytorch/files/8768732/Summaries.txt)
The build target that fails is
`[2888/3297] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/CrossKernel.cu.o`. It fails because `nvcc` is provided the `-march=native` option, yet doesn't seem to understand or expect that option. `-march` is a known gcc flag and works fine for native binaries but may not be a supported option for nvcc.
It does seem that nvcc has an `arch` option in some contexts, probably referring to CUDA compute capabilities. But `native` isn't an accepted value.
Error output:
```
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/CrossKernel.cu.o/usr/local/cuda-10.2/bin/nvcc -DAT_PER_OPERATOR_HEADERS -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DUSE_CUDA -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/cudnn_frontend/include -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -Iinclude -I../torch/csrc/distributed -I../aten/src/THC -I../aten/src/ATen/cuda -Icaffe2/aten/src -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -I../c10/cuda/../.. -I../c10/.. -I../torch/csrc/api -I../torch/csrc/api/include -isystem=../third_party/protobuf/src -isystem=../third_party/XNNPACK/include -isystem=../cmake/../third_party/eigen -isystem=/usr/include/python3.10 -isystem=/home/aruw/jetson-setup/asset-builds/wheel_builder/lib/python3.10/site-packages/numpy/core/include -isystem=../cmake/../third_party/pybind11/include -isystem=../cmake/../third_party/cub -isystem=/usr/local/cuda-10.2/include -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_72,code=sm_72 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -Xcompiler=-fPIC -march=native -D__NEON__ -DTH_HAVE_THREAD -Xcompiler=-Wall,-Wextra,-Wno-unused-parameter,-Wno-unused-variable,-Wno-unused-function,-Wno-unused-result,-Wno-unused-local-typedefs,-Wno-missing-field-initializers,-Wno-write-strings,-Wno-unknown-pragmas,-Wno-type-limits,-Wno-array-bounds,-Wno-unknown-pragmas,-Wno-sign-compare,-Wno-strict-overflow,-Wno-strict-aliasing,-Wno-error=deprecated-declarations,-Wno-missing-braces,-Wno-maybe-uninitialized -DTORCH_CUDA_BUILD_MAIN_LIB -Xcompiler -pthread -std=c++14 -x cu -c ../aten/src/ATen/native/cuda/CrossKernel.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/CrossKernel.cu.o && /usr/local/cuda-10.2/bin/nvcc -DAT_PER_OPERATOR_HEADERS -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DUSE_CUDA -DUSE_EXTERNAL_MZCRC-D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/cudnn_frontend/include -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -Iinclude -I../torch/csrc/distributed -I../aten/src/THC -I../aten/src/ATen/cuda -Icaffe2/aten/src -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -I../c10/cuda/../.. -I../c10/.. 
-I../torch/csrc/api -I../torch/csrc/api/include -isystem=../third_party/protobuf/src -isystem=../third_party/XNNPACK/include -isystem=../cmake/../third_party/eigen -isystem=/usr/include/python3.10 -isystem=/home/aruw/jetson-setup/asset-builds/wheel_builder/lib/python3.10/site-packages/numpy/core/include -isystem=../cmake/../third_party/pybind11/include -isystem=../cmake/../third_party/cub -isystem=/usr/local/cuda-10.2/include -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_72,code=sm_72 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -Xcompiler=-fPIC -march=native -D__NEON__ -DTH_HAVE_THREAD -Xcompiler=-Wall,-Wextra,-Wno-unused-parameter,-Wno-unused-variable,-Wno-unused-function,-Wno-unused-result,-Wno-unused-local-typedefs,-Wno-missing-field-initializers,-Wno-write-strings,-Wno-unknown-pragmas,-Wno-type-limits,-Wno-array-bounds,-Wno-unknown-pragmas,-Wno-sign-compare,-Wno-strict-overflow,-Wno-strict-aliasing,-Wno-error=deprecated-declarations,-Wno-missing-braces,-Wno-maybe-uninitialized -DTORCH_CUDA_BUILD_MAIN_LIB -Xcompiler -pthread -std=c++14 -x cu -M ../aten/src/ATen/native/cuda/CrossKernel.cu -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/CrossKernel.cu.o -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/CrossKernel.cu.o.d
nvcc fatal : 'arch=native': expected a number
```
I would expect the `USE_NATIVE_ARCH` option to work for CUDA-enabled builds.
### Versions
Some of the below data is unhelpful because it attempts to use `nvidia-smi`, which isn't available on the Jetson line. However, CUDA 10.2 is available and known to work with other wheels we have built.
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (aarch64)
GCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.13.2
Libc version: glibc-2.27
Python version: 3.10.4 (main, Apr 9 2022, 21:27:52) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.9.253-tegra-aarch64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: 10.2.300
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[conda] Could not collect
```
```
aruw@jetson-aruw-lambda:~/jetson-setup/asset-builds$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
```
cc @malfet @seemethere
| 1 |
5,614 | 78,210 |
Performance with MPS on AMD GPUs is worse than CPU
|
module: performance, triaged, module: mps
|
### 🐛 Describe the bug
I tried running some experiments on the RX5300M 4GB GPU and everything seems to work correctly. The problem is that the performance is worse than on the CPU of the same Mac.
To reproduce, just clone the tests in this repo `https://github.com/lucadiliello/pytorch-apple-silicon-benchmarks` and run either
```bash
python tests/transformers_sequence_classification.py --device mps --pre_trained_name bert-base-cased --mode inference --steps 100 --sequence_length 128 --batch_size 16
```
or
```bash
python tests/transformers_sequence_classification.py --device cpu --pre_trained_name bert-base-cased --mode inference --steps 100 --sequence_length 128 --batch_size 16
```
While the CPU took `143s`, with the MPS backend the test completed in `228s`. I'm sure the GPU was being used because I constantly monitored the usage with `Activity Monitor`.
### Versions
PyTorch version: 1.13.0.dev20220524
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.12 (default, Oct 12 2021, 06:23:56) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220524
[pip3] torchaudio==0.12.0.dev20220524
[pip3] torchvision==0.13.0.dev20220524
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.13.0.dev20220524 pypi_0 pypi
[conda] torchaudio 0.12.0.dev20220524 pypi_0 pypi
[conda] torchvision 0.13.0.dev20220524 pypi_0 pypi
cc @VitalyFedyunin @ngimel
| 8 |
5,615 | 78,205 |
DISABLED test_complex_half_reference_testing_as_strided_scatter_cuda_complex32 (__main__.TestCommonCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_complex_half_reference_testing_as_strided_scatter_cuda_complex32%2C%20TestCommonCUDA)).
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
| 1 |
5,616 | 78,201 |
nn.Sequential causes fx.replace_pattern to not find any match.
|
triaged, module: fx
|
### 🐛 Describe the bug
Having nn.Sequential in the model code (e.g., the torchvision resnet50 model has it) causes fx.replace_pattern to not find any matches. However, it does find a match if nn.Sequential is removed. Overall, I am looking for a way to match all Bottleneck patterns in the resnet50 model.
The following code results in no matches.
```
import torch
import torch.nn as nn
from torchvision import models
from torch.fx import symbolic_trace, replace_pattern
class replacement_pattern(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x
# no matching pattern found with Sequential
mod = nn.Sequential(models.resnet.Bottleneck(2048, 512))
# The following works
#mod = models.resnet.Bottleneck(2048, 512)
traced = symbolic_trace(mod)
pattern = models.resnet.Bottleneck(2048, 512)
rep_pattern = replacement_pattern()
matches = replace_pattern(traced, pattern, rep_pattern)
print(f"matches: {matches}")
"""
matches: []
"""
```
I think the reason is the "target" comparison in https://github.com/pytorch/pytorch/blob/master/torch/fx/subgraph_rewriter.py#L59, and with nn.Sequential the targets are prefixed differently (e.g. `0.conv1` vs `conv1`). See below.
```
traced.graph.print_tabular()
"""
opcode name target args kwargs
------------- --------- ----------------------- ----------------- --------
placeholder input_1 input () {}
call_module _0_conv1 0.conv1 (input_1,) {}
call_module _0_bn1 0.bn1 (_0_conv1,) {}
call_module _0_relu 0.relu (_0_bn1,) {}
call_module _0_conv2 0.conv2 (_0_relu,) {}
call_module _0_bn2 0.bn2 (_0_conv2,) {}
call_module _0_relu_1 0.relu (_0_bn2,) {}
call_module _0_conv3 0.conv3 (_0_relu_1,) {}
call_module _0_bn3 0.bn3 (_0_conv3,) {}
call_function add <built-in function add> (_0_bn3, input_1) {}
call_module _0_relu_2 0.relu (add,) {}
output output output (_0_relu_2,) {}
"""
```
```
symbolic_trace(pattern).graph.print_tabular()
"""
opcode name target args kwargs
------------- ------ ----------------------- --------- --------
placeholder x x () {}
call_module conv1 conv1 (x,) {}
call_module bn1 bn1 (conv1,) {}
call_module relu relu (bn1,) {}
call_module conv2 conv2 (relu,) {}
call_module bn2 bn2 (conv2,) {}
call_module relu_1 relu (bn2,) {}
call_module conv3 conv3 (relu_1,) {}
call_module bn3 bn3 (conv3,) {}
call_function add <built-in function add> (bn3, x) {}
call_module relu_2 relu (add,) {}
output output output (relu_2,) {}
"""
```
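(Not a fix for the target-prefix mismatch itself, but a module-level alternative for locating every Bottleneck regardless of `nn.Sequential` nesting; a sketch, assuming matching by module type is acceptable for the use case.)
```python
from torchvision import models

model = models.resnet50()
# Collect the qualified names of all Bottleneck submodules, however deeply
# they are nested inside nn.Sequential containers.
bottlenecks = [name for name, m in model.named_modules()
               if isinstance(m, models.resnet.Bottleneck)]
print(len(bottlenecks))  # 16 for resnet50
```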
### Versions
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.8.12 (default, Sep 10 2021, 00:16:05) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0+cu113
[pip3] torchaudio==0.11.0+cu113
[pip3] torchvision==0.12.0+cu113
[conda] Could not collect
| 0 |
5,617 | 78,185 |
test_to (__main__.TestTorch) fails with multiple gpus
|
module: multi-gpu, triaged, actionable
|
### 🐛 Describe the bug
The code below:
https://github.com/pytorch/pytorch/blob/664bb4de490198f1d68b16b21407090daa995ba8/test/test_torch.py#L7645
sets the device as device 1 if multiple gpus are found. It compares against a tensor on device 0 and fails. This test is part of the single gpu tests, and should not be using distributed setup.
```
======================================================================
FAIL: test_to (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_torch.py", line 7662, in test_to
self._test_to_with_layout(torch.sparse_csr)
File "test_torch.py", line 7654, in _test_to_with_layout
self.assertEqual(b.device, a.to(cuda, non_blocking=non_blocking).device)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2258, in assertEqual
msg=msg,
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1086, in assert_equal
raise error_metas[0].to_error()
AssertionError: Object comparison failed: device(type='cuda', index=1) != device(type='cuda', index=0)
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 14.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-5.1.1 22114 5cba46feb6af367b1cafaa183ec42dbfb8207b14)
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-99-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] torch==1.12.0a0+git0bf2bb0
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.20.3 pypi_0 pypi
[conda] torch 1.12.0a0+git0bf2bb0 pypi_0 pypi
| 1 |
5,618 | 78,172 |
Allow specifying pickle module for torch.package
|
enhancement, oncall: package/deploy, imported
|
### 🚀 The feature, motivation and pitch
I'm working on a service that is to receive files created with `torch.package` to instantiate `torch.nn.Module`s created by a different service. Some of the modules use, somewhere in their structure, lambda functions and other constructs that are not supported by `pickle`, but are supported by the `dill` package.
Seeing as how the `torch.save` and `torch.load` functions both support specifying a `pickle_module` to use, I was wondering if it was possible to support specifying it when instantiating `torch.package.PackageExporter` and `torch.package.PackageImporter`, so I could take advantage of the dependency bundling they provide?
### Alternatives
I can use `torch.save` and `torch.load` with `dill` as the `pickle_module`, though that in and of itself doesn't bundle any dependencies needed to instantiate the module later on, so I'd need both services to have the classes and methods, and updates on one side won't be reflected on the other.
I could also copy the source code of `torch.package`, make the necessary adjustments so that it uses `dill`, and use that code instead, but I'd have to keep it up to date with any changes made in this repository, and again would need this code to exist in both services.
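(A sketch of the first alternative above, using `dill` as the `pickle_module` with `torch.save`/`torch.load`; as noted, this serializes the objects but does not bundle their source dependencies.)
```python
import dill
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Tanh())
torch.save(model, 'model.pt', pickle_module=dill)
restored = torch.load('model.pt', pickle_module=dill)
```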
### Additional context
_No response_
| 0 |
5,619 | 78,170 |
[chalf] reference_testing: low quality test for fast growing ops
|
triaged, module: complex, module: half
|
### 🐛 Describe the bug
In PR https://github.com/pytorch/pytorch/pull/77640:
Since the range of chalf is much smaller than that of cfloat, we get `inf`s easily (e.g. with `pow`, `exp`), so we cast the `cfloat` reference back to `chalf`.
However, this might mask an actual issue, as we don't control the percentage of inputs that will be valid. The correct approach would be to sample inputs that are valid given the range of `chalf`.
One approach would be to add extra metadata to OpInfo.
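(To illustrate the range gap, using the real half type for brevity; the `finfo` values are exact and the cast overflows to `inf` as with any IEEE-754 narrowing.)
```python
import torch

print(torch.finfo(torch.float16).max)   # 65504.0
print(torch.finfo(torch.float32).max)   # ~3.4e38
# exp(12) ~ 1.6e5 already exceeds the half-precision maximum, so a float32
# reference result cast back to half becomes inf.
print(torch.exp(torch.tensor(12.0)).to(torch.float16))  # tensor(inf, dtype=torch.float16)
```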
cc: @ngimel @mruberry @anjali411
### Versions
master
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
| 4 |
5,620 | 78,159 |
[Optimizer Overlap] Parameter group support
|
oncall: distributed, triaged, module: ddp
|
### 🚀 The feature, motivation and pitch
We should investigate and fix any gaps in parameter group support in DDP's support for overlapped optimizers. Currently, we do not have unit tests for optimizer overlap with parameter groups.
We will probably scope this to specifying parameter groups at construction time only, and not support the `add_param_group` API. The reason is that these functional optimizers are supposed to be mostly transparent to the user; the user does not directly construct or access them, and they work through DDP communication hook internals.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,621 | 78,158 |
[Optimizer Overlap] Proper checkpointing support
|
oncall: distributed, triaged, module: ddp
|
### 🚀 The feature, motivation and pitch
Currently, optimizer overlap with DDP does not support saving / loading checkpoints. The optimizer is not readily accessible from DDP, and thus there is no way to save / load the optimizer's `state_dict`.
Even if we could access the optimizer from DDP, it is a functional optimizer and not a regular torch.optim.Optimizer, and thus the regular `state_dict` APIs cannot be called.
One potential solution is to make these functional optimizer instances inherit from torch.optim.Optimizer so that `state_dict` and `load_state_dict` can be reused out of the box. We would override methods that are unsupported on the functional optimizer but supported by a regular torch.optim.Optimizer (such as `add_param_group`) to ensure the user doesn't attempt to use unsupported behavior with functional optimizers.
This is a feature request from torchrec.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,622 | 78,157 |
[Optimizer Overlap] Custom optimizer registration
|
oncall: distributed, triaged, module: ddp
|
### 🚀 The feature, motivation and pitch
Currently, DDP optimizer overlap only supports a subset of commonly used torch.optim.Optimizers. However, torchrec has the use case of using its own custom-implemented optimizers, so we should add a registry plus an API for adding to it, allowing users to register their own custom optimizers to be overlapped with DDP.
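(A purely hypothetical sketch of what such a registry could look like; none of these names exist in PyTorch today.)
```python
from typing import Callable, Dict, Type

import torch

# Hypothetical registry mapping an optimizer class to a factory for its
# functional, DDP-overlappable counterpart (all names here are illustrative).
_OVERLAPPED_OPTIM_REGISTRY: Dict[Type[torch.optim.Optimizer], Callable] = {}

def register_overlapped_optim(optim_cls, functional_factory):
    _OVERLAPPED_OPTIM_REGISTRY[optim_cls] = functional_factory

# A user-defined optimizer could then be registered like any built-in one.
register_overlapped_optim(torch.optim.SGD,
                          lambda params, **kw: ('functional SGD stub', params, kw))
```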
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,623 | 78,153 |
`pack_sequence` crash
|
triaged, module: edge cases
|
### 🐛 Describe the bug
`pack_sequence` crash
```python
import torch
sequences_0 = torch.rand([1, 16, 86], dtype=torch.float32)
sequences_1 = torch.rand([1, 85, 0], dtype=torch.float16)
sequences_2 = torch.randint(0, 2, [2, 84, 85], dtype=torch.bool)
sequences_3 = torch.randint(-8, 2, [0, 4, 85], dtype=torch.int8)
sequences = [sequences_0,sequences_1,sequences_2,sequences_3,]
enforce_sorted = 0
torch.nn.utils.rnn.pack_sequence(sequences, enforce_sorted=enforce_sorted, )
# Segmentation fault (core dumped)
```
### Versions
pytorch: 1.11.0
| 0 |
5,624 | 78,151 |
`ctc_loss` will backward crash
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
`ctc_loss` will backward crash
First, it will succeed in the forward pass
```python
import torch
log_probs = torch.rand([50, 16, 20], dtype=torch.float32).requires_grad_()
targets = torch.randint(-2, 2, [16, 30], dtype=torch.int64)
input_lengths = torch.randint(-4, 1, [16], dtype=torch.int64)
target_lengths = torch.randint(-4, 4, [16], dtype=torch.int64)
blank = 0
reduction = "mean"
zero_infinity = False
res = torch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=blank, reduction=reduction, zero_infinity=zero_infinity, )
# succeed
```
But it will crash during the backward pass
```python
res.sum().backward()
# Segmentation fault (core dumped)
```
### Versions
pytorch: 1.11.0
Ubuntu: 20.04
Python: 3.9.5
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,625 | 78,143 |
`baddbmm` triggers INTERNAL ASSERT FAILED when input requires grad
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
`baddbmm` triggers INTERNAL ASSERT FAILED when input requires grad
```python
import torch
input_tensor = torch.rand([10, 3, 5], dtype=torch.complex64)
batch1_tensor = torch.randint(-4, 2, [10, 3, 4], dtype=torch.int8)
batch2_tensor = torch.randint(-4, 1, [10, 4, 5], dtype=torch.int8)
input = input_tensor.clone()
batch1 = batch1_tensor.clone()
batch2 = batch2_tensor.clone()
res1 = torch.baddbmm(input, batch1, batch2, )
# Normal Pass
input = input_tensor.clone().requires_grad_()
batch1 = batch1_tensor.clone()
batch2 = batch2_tensor.clone()
res2 = torch.baddbmm(input, batch1, batch2, )
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,626 | 78,141 |
`matmul, mm` triggers INTERNAL ASSERT FAILED when input requires grad
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
`matmul` triggers INTERNAL ASSERT FAILED when input requires grad
```python
import torch
input_tensor = torch.randint(-8, 2, [11, 0, 4], dtype=torch.int8)
other_tensor = torch.rand([4], dtype=torch.float32)
input = input_tensor.clone()
other = other_tensor.clone()
res1 = torch.matmul(input, other, )
# Normal Pass
input = input_tensor.clone()
other = other_tensor.clone().requires_grad_()
res2 = torch.matmul(input, other, )
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
Plus, `mm` also has this issue:
```python
import torch
input_tensor = torch.randint(0, 2, [0, 3], dtype=torch.uint8)
mat2_tensor = torch.rand([3, 2], dtype=torch.float32)
input = input_tensor.clone()
mat2 = mat2_tensor.clone()
res1 = torch.mm(input, mat2, )
input = input_tensor.clone()
mat2 = mat2_tensor.clone().requires_grad_()
res2 = torch.mm(input, mat2, )
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,627 | 78,133 |
Enhancements to AliasDB to handle in-place operations
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/OVERVIEW.md#handling-mutability
`The intention is that if you only mutate the graph through AliasDb, you don't have to think about mutability/aliasing at all in your pass. As we write more passes, the interface to AliasDb will get richer (one example is transforming an in-place operation to its pure equivalent if we can prove it's safe).`
Is the work called out here to enhance AliasDB currently planned or in progress? I've hit a few issues around in-place container mutations (e.g. `aten::_set_item`) and they don't seem to be handled well in TorchScript generally (e.g. parseIR does not support read-back of ops that do not produce a value). Expanding AliasDB to provide more tools to understand/resolve these ops would be helpful.
As an example, this function in torch-tensorrt currently misses in-place container operations as dependencies of an input value. Adding a pass to remove these ops or providing access to the underlying graph in AliasDB to identify dependencies correctly would help resolve these issues.
https://github.com/pytorch/TensorRT/blob/e9e824c0ef0a4704826a390fe7d2ef90272a56b7/core/partitioning/partitioning.cpp#L60
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,628 | 78,131 |
Segfault in _pad_packed_sequence
|
triaged, module: edge cases
|
### 🐛 Describe the bug
Function `torch._pad_packed_sequence` contains a segmentation fault.
### Example to reproduce
```
import torch
data = torch.full([1, 1, 1, 1], -10000, dtype=torch.float16, requires_grad=False)
batch_sizes = torch.full([0], 978, dtype=torch.int64, requires_grad=False)
batch_first = True
padding_value = False
total_length = torch.full([], -9937, dtype=torch.int64, requires_grad=False)
torch._pad_packed_sequence(data, batch_sizes, batch_first, padding_value, total_length)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,629 | 78,130 |
Segfault in _grid_sampler_2d_cpu_fallback
|
triaged, module: edge cases
|
### 🐛 Describe the bug
Function `torch._grid_sampler_2d_cpu_fallback` contains a segmentation fault.
### Example to reproduce
```
import torch
input = torch.full([3, 3, 3, 2], 9490, dtype=torch.float32, requires_grad=False)
grid = torch.full([0, 3, 8, 2, 4, 1], -9545, dtype=torch.float32, requires_grad=False)
interpolation_mode = 8330
padding_mode = 5934
align_corners = False
torch._grid_sampler_2d_cpu_fallback(input, grid, interpolation_mode, padding_mode, align_corners)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,630 | 78,129 |
Segfault in _embedding_bag_forward_only
|
triaged, module: edge cases
|
### 🐛 Describe the bug
Function `torch._embedding_bag_forward_only` contains a segmentation fault.
### Example to reproduce
```
import torch
weight = torch.full([2, 2, 5, 3, 3], -8670, dtype=torch.float64, requires_grad=False)
indices = torch.full([3, 0, 1], -4468, dtype=torch.int32, requires_grad=False)
offsets = torch.full([7, 1, 0], -7226, dtype=torch.int64, requires_grad=False)
scale_grad_by_freq = True
mode = torch.full([], 6318, dtype=torch.int64, requires_grad=False)
sparse = False
per_sample_weights = torch.full([3], -8750, dtype=torch.int64, requires_grad=False)
include_last_offset = False
padding_idx = torch.full([], 6383, dtype=torch.int64, requires_grad=False)
torch._embedding_bag_forward_only(weight, indices, offsets, scale_grad_by_freq, mode,
sparse, per_sample_weights, include_last_offset, padding_idx)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,631 | 78,128 |
Segfault in torch._C._nn.thnn_conv2d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
Function `torch._C._nn.thnn_conv2d` contains a segmentation fault.
### Example to reproduce
```
import torch
tensor_0 = torch.full([3, 3, 3], -4398, dtype=torch.float64, requires_grad=False)
tensor_1 = torch.full([6, 7], -4532, dtype=torch.int32, requires_grad=False)
intarrayref_2 = -10000
tensor_3 = torch.full([3, 3, 3, 6, 7], -2321, dtype=torch.float16, requires_grad=False)
intarrayref_4 = -2807
intarrayref_5 = []
torch._C._nn.thnn_conv2d(tensor_0, tensor_1, intarrayref_2, tensor_3, intarrayref_4, intarrayref_5)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,632 | 78,127 |
Segfault in torch._C._nn.reflection_pad2d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
Function `torch._C._nn.reflection_pad2d` contains a segmentation fault.
### Example to reproduce
```
import torch
tensor_0 = torch.full([6, 5, 7], -8754, dtype=torch.int32, requires_grad=False)
intarrayref_1 = []
torch._C._nn.reflection_pad2d(tensor_0, intarrayref_1)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,633 | 78,126 |
Segfault in max_unpool3d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
The function `torch.max_unpool3d` can be made to crash with a segmentation fault.
### Example to reproduce
```
import torch
tensor_0 = torch.full([], -10000, dtype=torch.int64, requires_grad=False)
tensor_1 = torch.full([7, 7, 7, 4, 4, 4, 7, 7], -8695, dtype=torch.float16, requires_grad=False)
intarrayref_2 = []
intarrayref_3 = 7052
intarrayref_4 = -9995
torch._C._nn.max_unpool3d(tensor_0, tensor_1, intarrayref_2, intarrayref_3, intarrayref_4)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,634 | 78,125 |
Segfault in grid_sampler_3d
|
triaged, module: edge cases
|
### 🐛 Describe the bug
The function `torch.grid_sampler_3d` can be made to crash with a segmentation fault.
### Example to reproduce
```
import torch
input = torch.full([4, 2, 0, 0, 4, 0, 0, 3], -1480, dtype=torch.float64, requires_grad=False)
grid = torch.full([1, 6, 3, 5, 3, 4, 0, 6], -2024, dtype=torch.float64, requires_grad=False)
interpolation_mode = -3278
padding_mode = -1469
align_corners = True
torch.grid_sampler_3d(input, grid, interpolation_mode, padding_mode, align_corners)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,635 | 78,122 |
Segfault in bincount
|
triaged, module: edge cases
|
### 🐛 Describe the bug
The function `torch.bincount` can be made to crash with a segmentation fault.
### Example to reproduce
```
import torch
self = torch.full([3], 2550, dtype=torch.int64, requires_grad=False)
weights = torch.full([3, 1, 3, 0, 0, 0, 1, 1], -4620, dtype=torch.int64, requires_grad=False)
minlength = 9711
torch.bincount(self, weights, minlength)
```
### Result
Segmentation fault
### Expected Behavior
Throwing a Python Exception
### Notes
This bug was discovered using [Atheris](https://github.com/google/atheris).
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 1 |
5,636 | 78,109 |
Hooks don't work when registered on torch.nn.MultiheadAttention.out_proj
|
oncall: transformer/mha
|
### 🐛 Describe the bug
I am not sure whether this issue should be a bug report or a new feature request. The problem is: when you register a hook on `out_proj` under `MultiheadAttention`, it will never be called, which may cause abnormal behavior in tools implemented with hooks, e.g. `torch.nn.utils.prune` (#69353).
This is because the forward of `MultiheadAttention` uses `out_proj.weight` directly inside `torch._native_multi_head_attention` or `F.multi_head_attention_forward` instead of calling the `out_proj` module. (see [#L1113](https://github.com/pytorch/pytorch/blob/cac16e2ee293964033dffa6616f78b68603cd565/torch/nn/modules/activation.py#L1113), [#L1142](https://github.com/pytorch/pytorch/blob/cac16e2ee293964033dffa6616f78b68603cd565/torch/nn/modules/activation.py#L1142), and [#L1153](https://github.com/pytorch/pytorch/blob/cac16e2ee293964033dffa6616f78b68603cd565/torch/nn/modules/activation.py#L1153)) To keep hook behavior consistent, I suggest calling those hooks at the proper places, maybe by passing `out_proj` into `F.multi_head_attention_forward`?
Thanks!
### To Reproduce
```python
import torch
import torch.nn as nn
from functools import partial
test = {}
def hook(name, module, inputs):
global test
test[name] = test.get(name, 0) + 1
class net(nn.Module):
def __init__(self):
super().__init__()
self.attn = nn.MultiheadAttention(100,2)
self.proj = nn.Linear(100, 10)
# register hook
self.attn.register_forward_pre_hook(partial(hook, "attn"))
self.attn.out_proj.register_forward_pre_hook(partial(hook, "attn.out_proj"))
self.proj.register_forward_pre_hook(partial(hook, "proj"))
def forward(self, x):
attn_output, _ = self.attn(x, x, x)
logits = self.proj(attn_output)
return logits
model = net().eval()
x = torch.randn((16, 50, 100))
logits = model(x)
print(test)
```
### Expected behavior
```bash
{'attn': 1, 'attn.out_proj': 1, 'proj': 1}
```
### Real behavior
```bash
{'attn': 1, 'proj': 1}
```
### Versions
Collecting environment information...
PyTorch version: 1.9.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 470.129.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] pytorch-lightning==1.5.9
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.9.1+cu111
[pip3] torch-optimizer==0.3.0
[pip3] torch-stoi==0.1.2
[pip3] torch-tb-profiler==0.2.1
[pip3] torchaudio==0.9.1
[pip3] torchmetrics==0.7.0
[pip3] torchvision==0.10.1+cu111
[conda] cudatoolkit 10.2.89 hfd86e86_1 defaults
[conda] numpy 1.20.3 py38h9894fe3_1 conda-forge
[conda] pytorch-lightning 1.5.9 pypi_0 pypi
[conda] pytorch-qrnn 0.2.1 pypi_0 pypi
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] torch 1.9.1+cu111 pypi_0 pypi
[conda] torch-optimizer 0.3.0 pypi_0 pypi
[conda] torch-stoi 0.1.2 pypi_0 pypi
[conda] torch-tb-profiler 0.2.1 pypi_0 pypi
[conda] torchaudio 0.9.1 pypi_0 pypi
[conda] torchmetrics 0.7.0 pypi_0 pypi
[conda] torchvision 0.10.1+cu111 pypi_0 pypi
cc @jbschlosser @bhosmer @cpuhrsch
| 7 |
5,637 | 78,102 |
[ONNX] Support tensors as scale and zero_point arguments
|
module: onnx, triaged, OSS contribution wanted, onnx-triaged
|
### 🚀 The feature, motivation and pitch
PyTorch supports using tensors as scale and zero_point arguments. Currently in `test_pytorch_onnx_caffe2_quantized.py`, we get
```
RuntimeError: Expected node type 'onnx::Constant' for argument 'scale' of node 'quantize_per_tensor', got 'prim::Param'.
```
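A minimal sketch of the pattern this refers to (hedged: it assumes the tensor-argument overload of `torch.quantize_per_tensor` mentioned above, and the module/variable names are placeholders):
```python
import torch

class TensorQParams(torch.nn.Module):
    def forward(self, x, scale, zero_point):
        # scale and zero_point arrive as tensors (graph inputs), not as
        # constants the exporter can fold, which triggers the error above.
        return torch.quantize_per_tensor(x, scale, zero_point, torch.quint8)

x = torch.randn(2, 3)
scale, zero_point = torch.tensor(0.1), torch.tensor(0, dtype=torch.int64)
# torch.onnx.export(TensorQParams(), (x, scale, zero_point), "m.onnx")  # currently fails as above
```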
Related:
- https://github.com/pytorch/pytorch/pull/77772
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
5,638 | 78,082 |
RFC: Move functorch into pytorch/pytorch
|
triaged
|
We (some folks from the functorch team, @samdow @Chillee) met with some of the PyTorch GitHub first and dev infra teams (@malfet, @osalpekar, @seemethere) to discuss pytorch/pytorch and functorch co-development pains.
In that meeting, we identified the following solution and some stakeholders for out-of-tree development (@suo), pytorch core development (@gchanan), and CI and developer velocity (@suo, @mruberry). We are putting this issue up for discussion and to ask for sign-off from the stakeholders.
## Motivation 1: internal build failures
pytorch/pytorch and pytorch/functorch get synced internally at different cadences. Realistically this means that e.g. a deletion to a c++ function in pytorch/pytorch can cause the pytorch/functorch build to fail until it is fixed, leading to downstream failures.
## Motivation 2: co-development is difficult
functorch's CI breaks almost daily due to changes to pytorch/pytorch. There are a number of reasons for this:
- a pytorch/pytorch c++ function is deleted or signature is updated (see Motivation 1)
- an operator implementation is updated
- a developer is co-developing a feature between pytorch/pytorch and pytorch/functorch and is unable to atomically commit to both repos
## Proposal: move functorch into pytorch/pytorch
Our proposal is to "cut-and-paste" (with adjustments as necessary) functorch from pytorch/functorch into a top-level folder in pytorch/pytorch named functorch.
There are a couple of details here:
- **functorch development**: this happens inside pytorch/pytorch/functorch. There will probably be nothing left in the pytorch/functorch repo after the cut-and-paste.
- **Build and release**: functorch will still be packaged separately from pytorch/pytorch
- **CI**: pytorch/pytorch CI will run both pytorch/pytorch and functorch tests. *Developers are responsible for ensuring that both pass*
Why did we choose this? Our eventual plan is to upstream and integrate functorch into pytorch/pytorch. We believe functorch's {transforms, compilation} pieces should be a core part of pytorch/pytorch. Integration is a much larger project and we're looking for a quick stop-gap solution for the co-development pains, so moving functorch into pytorch/pytorch is a first step towards integration.
## Impact on pytorch/pytorch developers
- **test-cost increase**. functorch build+test takes 1hr on CUDA. If we add new "functorch build+test" jobs to pytorch/pytorch CI, then we'll end up using more resources, but these can be run in parallel to pytorch/pytorch tests.
- **dev-x**. pytorch/pytorch developers are responsible for making sure all functorch build+tests pass. Realistically, what this will look like is that if something fails and the developer needs help, then they should loop in someone from functorch and we will assist.
cc-ing some more folks: @vfdev-5, @kshitij12345, @ezyang, @bdhirsh
| 6 |
5,639 | 78,075 |
torch.multiprocessing.spawn raises PicklingError inside a decorator
|
module: multiprocessing, module: serialization, triaged
|
### 🐛 Describe the bug
I wrote a decorator to simplify the process of launching multiprocessing which takes a function as an argument and calls torch.multiprocessing.spawn to spawn multiple processes that runs the input function.
However, I ran into a rather odd situation. When calling the decorator directly as a function factory (`print_rank_B` in the script below), `torch.multiprocessing.spawn` works as expected and the input function indeed runs multiple times. But when I try to run a function that is decorated by this decorator (`print_rank_C` in the script below), a `PicklingError` is raised.
I understand that this `PicklingError` can sometimes be caused by inconsistent function names (i.e. `__name__`), so I used `wraps` from the `functools` library, but the error persists.
I'm using torch@1.10.0 with Python 3.8.13.
Thanks in advance!
The script to reproduce this error:
```python
import random
from functools import wraps
import torch.multiprocessing
import torch.distributed as dist
# from .utils.distributed import get_rank
def get_rank():
if dist.is_initialized():
return dist.get_rank()
else:
return 0
def _distributed_init(id, world_size, init_method, fn, args):
dist.init_process_group(backend='nccl', init_method=init_method, world_size=world_size, rank=id)
fn(*args)
def distributed(world_size):
if world_size > 1:
def decorator(fn):
@wraps(_distributed_init)
def wrapper(*args):
port = random.randint(10000, 20000)
init_method = f'tcp://localhost:{port}'
torch.multiprocessing.spawn(fn=_distributed_init, args=(world_size, init_method, fn, args), nprocs=world_size)
return wrapper
else:
def decorator(fn):
return fn
return decorator
def run_distributed(world_size, fn, *args):
port = random.randint(10000, 20000)
init_method = f'tcp://localhost:{port}'
if world_size > 1:
torch.multiprocessing.spawn(fn=_distributed_init, args=(world_size, init_method, fn, args), nprocs=world_size)
else:
fn(*args)
def print_rank():
print(get_rank())
def print_rank_A():
run_distributed(4, print_rank)
def print_rank_B():
distributed(4)(print_rank)()
@distributed(4)
def print_rank_C():
print('hi')
if __name__ == '__main__':
print('A:')
print_rank_A()
print('B:')
print_rank_B()
print('C:')
print_rank_C()
```
The output of the script:
```shell
A:
0
3
1
2
B:
0
1
2
3
C:
Traceback (most recent call last):
File "/home/tb5zhh/SUField/sufield/lib/distributed_launch.py", line 68, in <module>
print_rank_C()
File "/home/tb5zhh/SUField/sufield/lib/distributed_launch.py", line 28, in wrapper
torch.multiprocessing.spawn(fn=_distributed_init, args=(world_size, init_method, fn, args), nprocs=world_size)
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes
process.start()
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/home/tb5zhh/.conda/envs/sufield/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function print_rank_C at 0x7f432e3f6700>: it's not the same object as __main__.print_rank_C
```
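A possible workaround sketch (my suggestion as an aside, not part of the report; it reuses the `distributed` decorator from the script above): keep the undecorated function bound at module level under its own name, so that pickling-by-qualified-name resolves to the very same object that `spawn` sends to the child processes.
```python
# Instead of decorating print_rank_C directly, keep the original importable:
def _print_rank_C_impl():
    print('hi')

print_rank_C = distributed(4)(_print_rank_C_impl)

# print_rank_C() now pickles `_print_rank_C_impl`, which is still reachable as
# __main__._print_rank_C_impl, so no PicklingError is raised.
```
(As a side note, the wrapper appears to use `@wraps(_distributed_init)` where `@wraps(fn)` was probably intended, but that is unrelated to the pickling failure.)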
### Versions
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.10.0
[pip3] torchaudio==0.10.0
[pip3] torchvision==0.11.0
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py38h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py38hd3c417c_0 defaults
[conda] mkl_random 1.2.2 py38h51133e4_0 defaults
[conda] numpy 1.21.5 py38he7a7128_2 defaults
[conda] numpy-base 1.21.5 py38hf524024_2 defaults
[conda] pytorch 1.10.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.0 py38_cu113 pytorch
[conda] torchvision 0.11.0 py38_cu113 pytorch
cc @VitalyFedyunin @mruberry
| 1 |
5,640 | 78,071 |
[primTorch] item prim can't return a bool properly
|
triaged, module: primTorch
|
This is because its schema is `item(Tensor a) -> Scalar`, but the actual torch.Tensor.item can return a bool or a scalar. Trying to return a bool from the implementation of the item prim will convert the result to an integer.
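For reference, a quick illustration (a sketch using only the public API) of the eager behavior the prim should match:
```python
import torch
print(type(torch.tensor(True).item()))  # <class 'bool'>
print(type(torch.tensor(1.5).item()))   # <class 'float'>
```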
cc @ezyang @mruberry @ngimel
| 0 |
5,641 | 78,070 |
[primTorch] Meta function for item creates a dummy value
|
triaged, module: primTorch
|
The Meta function for item currently creates a dummy value. This is harmless for now, but we need to improve our modeling of "meta numbers" to indicate that this value is unknown except at runtime.
cc @ezyang @mruberry @ngimel
| 2 |
5,642 | 78,068 |
DISABLED test_init_from_local_shards (__main__.TestShardedTensorFromLocalShards)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_init_from_local_shards%2C%20TestShardedTensorFromLocalShards) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6546960161).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 3 red and 5 green.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @jjlilley @mrzzd
| 2 |
5,643 | 78,067 |
Installation on Jetson target board
|
triaged, module: jetson
|
I'm installing PyTorch on Jetson target boards (NX, AGX, Nano...).
The official PyTorch wheel for Jetson is only for Python 3.6, so I am installing from source.
However, during the installation, the error below occurred.
My environment is :
- board : Jetson Xavier AGX
- JetPack : 4.4
- Cuda: 10.2
- gcc, g++ : 7
- Cmake : 3.18.2
-


How can I solve it? Please help.
Also, how can I build an older version of PyTorch (e.g., 1.8.0) from this source?
cc @ptrblck @puririshi98
| 1 |
5,644 | 78,065 |
Gamma and Related Functions
|
feature, triaged, module: special
|
# Gamma and Related Functions
A brief proposal for providing a complete suite of gamma and related functions as PyTorch operators. Enjoy!
One of a five-part series of special functions issues:
- Gamma and Related Functions (#78065)
- Bessel and Related Functions (#76324)
- Orthogonal Polynomials (#80152)
- Elliptic Functions and Integrals (#80157)
-
## Operators
- [ ] `barnes_g`
- [ ] `beta`
- [ ] `binomial_coefficient`
- [ ] `double_factorial`
- [ ] `factorial`
- [ ] `falling_factorial`
- [ ] `gamma`
- [ ] `log_barnes_g`
- [ ] `log_beta`
- [ ] `log_binomial`
- [ ] `log_double_factorial`
- [ ] `log_factorial`
- [ ] `log_falling_factorial`
- [ ] `log_gamma`
- [ ] `log_gamma_sign`
- [ ] `log_quadruple_factorial`
- [ ] `log_rising_factorial`
- [ ] `log_triple_factorial`
- [ ] `lower_incomplete_gamma`
- [ ] `polygamma`
- [ ] `quadruple_factorial`
- [ ] `reciprocal_gamma`
- [ ] `rising_factorial`
- [ ] `triple_factorial`
- [ ] `upper_incomplete_gamma`
## Documentation
### Beta function
```Python
beta(a, b, other, *, out=None) → Tensor
```
Beta function:
$${\displaystyle \mathrm{B}(a, b) = \int _{0}^{1}t^{a - 1}(1 - t)^{b - 1} dt}.$$
A key property of the beta function is its close relationship to the gamma function:
$${\displaystyle \mathrm{B}(a, b)={\frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)}}.}$$
The closeness of the relationship between the gamma and beta functions is frequently applied in calculus and statistics (e.g., beta and related probability distributions).
The beta function is also closely related to binomial coefficients. When $a$ or $b$ is a positive integer:
$${\displaystyle \mathrm{B}(a, b) = {\dfrac{(a - 1)! (b - 1)!}{(a + b - 1)!}} = \frac{\frac{a + b}{ab}}{\binom{a + b}{a}}.}$$
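A minimal numerical sketch (not part of the proposal itself; it assumes only the existing `torch.lgamma` operator) of how the gamma-function identity above could back a reference implementation for positive real arguments:
```Python
import torch

def beta_ref(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # B(a, b) = exp(ln Γ(a) + ln Γ(b) - ln Γ(a + b)), valid for positive a and b
    return torch.exp(torch.lgamma(a) + torch.lgamma(b) - torch.lgamma(a + b))

print(beta_ref(torch.tensor(2.0), torch.tensor(3.0)))  # B(2, 3) = 1!·2!/4! = 1/12 ≈ 0.0833
```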
### Binomial coefficient
```Python
binomial_coefficient(input, k, *, out=None) → Tensor
```
Binomial coefficient:
$${\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.}$$
### Digamma function
```Python
digamma(z, *, out=None) → Tensor
```
The logarithmic derivative of the gamma function (i.e., the first of the polygamma functions):
$${\displaystyle \psi(z) =
{\frac
{\mathrm {d} }
{\mathrm {d} z}
}
\ln {\big (}\Gamma (z){\big )} =
{\frac
{\Gamma '(z)}
{\Gamma (z)}
}\sim \ln {z}-
{\frac {1}{2z}}.}$$
The digamma function is related to the harmonic numbers by:
$${\displaystyle \psi (n) = H_{n - 1} - \gamma ,}$$
where $H_{0} = 0$ and gamma is the Euler–Mascheroni constant.
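As a quick numerical sanity check of this harmonic-number identity (a sketch assuming only the existing `torch.special.digamma` operator and the fact that $\psi(1) = -\gamma$):
```Python
import torch

n = torch.arange(1, 6, dtype=torch.float64)  # n = 1, ..., 5
# H_{n-1} = psi(n) + gamma = psi(n) - psi(1)
lhs = torch.special.digamma(n) - torch.special.digamma(torch.ones(1, dtype=torch.float64))
# Reference harmonic numbers H_0, ..., H_4 built by direct summation
rhs = torch.cat([torch.zeros(1, dtype=torch.float64),
                 torch.cumsum(1.0 / torch.arange(1, 5, dtype=torch.float64), dim=0)])
print(torch.allclose(lhs, rhs))  # True
```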
### `double_factorial(input, *, out=None) → Tensor`
Double factorial function:
$$\text{input}!!.$$
### `factorial(input, *, out=None) → Tensor`
Factorial function:
$$\text{input}!.$$
The product of all positive integers less than or equal to $n$.
Many notable functions and number sequences are related to the factorials, e.g., binomial coefficients; double, triple, and quadruple factorials; and falling factorials and rising factorials.
### `falling_factorial(input, *, out=None) → Tensor`
Falling factorial function:
$$\begin{aligned}
(x)_{n}=x^{\underline {n}}&=\overbrace {x(x-1)(x-2)\cdots (x-n+1)} ^{n{\text{ terms}}}\\
&=\prod _{k=1}^{n}(x-k+1)=\prod _{k=0}^{n-1}(x-k)\,.
\end{aligned}$$
The falling factorial is extended to real values of n using the gamma function provided x and x + n are real numbers that are not negative integers.
### `gamma(input, *, out=None) → Tensor`
Gamma function:
$$\Gamma(\text{input}).$$
The standard extension of the factorial function to all complex numbers except non-positive integers.
### `log_beta(input, other, *, out=None) → Tensor`
Natural logarithm of the Euler beta function:
$$\ln\text{B}(\text{input}, \text{other}).$$
### `log_double_factorial(input, *, out=None) → Tensor`
Natural logarithm of the double factorial function:
$$\ln(\text{input}!!).$$
### `log_factorial(input, *, out=None) → Tensor`
### `log_falling_factorial(input, *, out=None) → Tensor`
### `log_gamma(input, *, out=None) → Tensor`
Natural logarithm of the gamma function:
$$\log\Gamma(\text{input}).$$
### `log_quadruple_factorial(input, *, out=None) → Tensor`
Natural logarithm of the quadruple factorial function:
$$\ln(\text{input}!!!!).$$
### `log_rising_factorial(input, *, out=None) → Tensor`
### `log_triple_factorial(input, *, out=None) → Tensor`
Natural logarithm of the triple factorial function:
$$\ln(\text{input}!!!).$$
### `lower_incomplete_gamma(input, a, *, out=None) → Tensor`
Lower incomplete gamma function:
$$\gamma(a, \text{input}).$$
Incomplete gamma functions are special functions that arise as solutions to certain integrals. They are defined similarly to the gamma function but with _incomplete_ integral limits. Unlike the gamma function, defined as an integral from zero to infinity, the lower incomplete gamma function is defined as an integral from zero up to a finite upper limit.
### `upper_incomplete_gamma(input, a, *, out=None) → Tensor`
Upper incomplete gamma function:
$$\Gamma(a, \text{input}).$$
Incomplete gamma functions are special functions that arise as solutions to certain integrals. They are defined similarly to the gamma function but with _incomplete_ integral limits. Unlike the gamma function, defined as an integral from zero to infinity, the upper incomplete gamma function is defined as an integral from a finite lower limit to infinity.
### `polygamma(input, n, *, out=None) → Tensor`
Polygamma function:
$$\psi^{(n)}(\text{input}).$$
### `regularized_gamma(input, *, out=None) → Tensor`
Regularized incomplete gamma function $Q(a, z)$.
### `inverse_regularized_gamma(input, *, out=None) → Tensor`
Inverse of the regularized incomplete gamma function.
### `incomplete_beta(input, other, z, *, out=None) → Tensor`
Incomplete Euler beta function $\text{B}_{z}(\text{input}, \text{other})$.
### `regularized_beta(input, *, out=None) → Tensor`
Regularized incomplete beta function $I_{z}(a, b)$.
### `inverse_regularized_beta(input, *, out=None) → Tensor`
Inverse of the regularized incomplete beta function.
### `triple_factorial(input, *, out=None) → Tensor`
Triple factorial, $\text{input}!!!$.
### `quadruple_factorial(input, *, out=None) → Tensor`
Quadruple factorial, $\text{input}!!!!$.
$${\displaystyle (x)_{n}={\frac {\Gamma (x+1)}{\Gamma (x-n+1)}}\,.}$$
### `log_falling_factorial(input, *, out=None) → Tensor`
### `rising_factorial(input, *, out=None) → Tensor`
$$
\begin{aligned}x^{(n)}=x^{\overline {n}}&=\overbrace {x(x+1)(x+2)\cdots (x+n-1)} ^{n{\text{ terms}}}\\
&=\prod _{k=1}^{n}(x+k-1)=\prod _{k=0}^{n-1}(x+k)\,.
\end{aligned}$$
The rising factorial is extended to real values of n using the gamma function provided x and x + n are real numbers that are not negative integers:
$${\displaystyle x^{(n)}={\frac {\Gamma (x+n)}{\Gamma (x)}}\,.}$$
### `log_rising_factorial(input, *, out=None) → Tensor`
### `barnes_g(input, *, out=None) → Tensor`
Barnes G-function $G(z)$.
### `ln_barnes_g(input, *, out=None) → Tensor`
Natural logarithm of the Barnes G-function $\ln G(z)$.
## Notes
* multiple factorial operators are proposed (i.e., double, triple, and quadruple) to simplify differentiation;
* I’m unclear whether it’s possible to implement reentrant functions using the Jiterator;
* `incomplete_gamma` is implemented as `gammainc` and `gammaincc`. I recommend renaming `gammainc` to `incomplete_gamma` for clarity and consistency. I also recommend removing `gammaincc` in favor of `regularized_gamma`.
cc @mruberry @kshitij12345
| 3 |
5,645 | 78,064 |
nn.CosineSimilarity returns value larger than 1
|
module: nn, triaged, module: correctness (silent)
|
### 🐛 Describe the bug
## nn.CosineSimilarity returns value larger than 1
When I was computing cosine similarity, it returned `tensor([1.0000])`. However, the underlying value is slightly larger than 1, which leads to a RuntimeError in BCELoss.
## To reproduce the bug
```
import torch
import torch.nn as nn
t1 = torch.tensor([[1.6965e-02, 0.0000e+00, 1.5725e-02, 0.0000e+00, 9.7518e-03, 4.1566e-03,
2.8437e-03, 1.2394e-03, 0.0000e+00, 4.4327e-02, 6.6013e-02, 2.3693e-02,
1.2146e-02, 9.4390e-03, 0.0000e+00, 2.4374e-02, 0.0000e+00, 0.0000e+00,
9.9630e-04, 8.2091e-03, 8.6477e-05, 0.0000e+00, 1.2825e-02, 0.0000e+00,
1.5316e-03, 0.0000e+00, 4.4526e-02, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 9.7517e-04, 3.3356e-02, 8.4023e-08, 5.8102e-04, 0.0000e+00,
2.3170e-02, 0.0000e+00, 0.0000e+00, 7.8518e-03, 0.0000e+00, 1.9662e-02,
7.7019e-05, 1.7013e-02, 4.0341e-02, 3.7943e-03, 2.0059e-02, 1.6905e-02,
0.0000e+00, 0.0000e+00, 3.3092e-02, 0.0000e+00, 2.0570e-04, 6.7327e-03,
0.0000e+00, 0.0000e+00, 8.5911e-04, 0.0000e+00, 0.0000e+00, 1.9356e-02,
0.0000e+00, 0.0000e+00, 0.0000e+00, 3.9724e-02]])
t2 = torch.tensor([[1.6965e-02, 0.0000e+00, 1.5725e-02, 0.0000e+00, 9.7522e-03, 4.1569e-03,
2.8436e-03, 1.2394e-03, 0.0000e+00, 4.4329e-02, 6.6014e-02, 2.3694e-02,
1.2146e-02, 9.4390e-03, 0.0000e+00, 2.4375e-02, 0.0000e+00, 0.0000e+00,
9.9659e-04, 8.2090e-03, 8.6500e-05, 0.0000e+00, 1.2826e-02, 0.0000e+00,
1.5317e-03, 0.0000e+00, 4.4532e-02, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 9.7523e-04, 3.3357e-02, 8.3033e-08, 5.8104e-04, 0.0000e+00,
2.3171e-02, 0.0000e+00, 0.0000e+00, 7.8521e-03, 0.0000e+00, 1.9662e-02,
7.7023e-05, 1.7013e-02, 4.0342e-02, 3.7944e-03, 2.0059e-02, 1.6906e-02,
0.0000e+00, 0.0000e+00, 3.3093e-02, 0.0000e+00, 2.0572e-04, 6.7329e-03,
0.0000e+00, 0.0000e+00, 8.5914e-04, 0.0000e+00, 0.0000e+00, 1.9357e-02,
0.0000e+00, 0.0000e+00, 0.0000e+00, 3.9725e-02]])
print(t1.size(), t2.size())
pred = nn.CosineSimilarity(dim=1, eps=1e-8)
cos = pred(t1, t2)
print(cos, cos>1)
criterion = nn.BCELoss()
res = criterion(cos, torch.ones(1))
print(res)
```
It produces the RuntimeError:
```
torch.Size([1, 64]) torch.Size([1, 64])
tensor([1.0000]) tensor([True])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-52-beebcac695ff> in <module>
3 print(cos, cos>1)
4 criterion = nn.BCELoss()
----> 5 res = criterion(cos, torch.ones(1))
6 print(res)
~/anaconda3/envs/patent-isic/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/patent-isic/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
601
602 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 603 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
604
605
~/anaconda3/envs/patent-isic/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
2913 weight = weight.expand(new_size)
2914
-> 2915 return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
2916
2917
RuntimeError: all elements of input should be between 0 and 1
```
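A hedged workaround sketch (my suggestion, continuing the repro above rather than quoting the report): the cosine similarity of near-identical vectors can exceed 1 by a few ULPs of floating-point rounding, so clamping into the [0, 1] range that BCELoss accepts avoids the error.
```
cos = pred(t1, t2).clamp(0.0, 1.0)  # clip tiny floating-point overshoot past 1
res = criterion(cos, torch.ones(1))
print(res)  # finite loss instead of a RuntimeError
```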
### Versions
Collecting environment information...
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 11.6 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0
CMake version: version 3.15.5
Libc version: N/A
Python version: 3.7.5 (default, Oct 25 2019, 10:52:18) [Clang 4.0.1 (tags/RELEASE_401/final)] (64-bit runtime)
Python platform: Darwin-20.6.0-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.10.2
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.0.4
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.13
[pip3] torch-spline-conv==1.2.1
[conda] blas 1.0 mkl
[conda] mkl 2019.0 pypi_0 pypi
[conda] mkl-service 2.3.0 py37h9ed2024_1
[conda] mkl_fft 1.3.0 py37h4a7008c_2
[conda] mkl_random 1.2.1 py37hb2f4e1b_2
[conda] numpy 1.17.3 pypi_0 pypi
[conda] numpy-base 1.21.5 py37h3b1a694_1
[conda] pyg 2.0.4 py37_torch_1.10.0_cpu pyg
[conda] pytorch 1.10.2 cpu_py37h903acac_0
[conda] pytorch-cluster 1.6.0 py37_torch_1.10.0_cpu pyg
[conda] pytorch-scatter 2.0.9 py37_torch_1.10.0_cpu pyg
[conda] pytorch-sparse 0.6.13 py37_torch_1.10.0_cpu pyg
[conda] pytorch-spline-conv 1.2.1 py37_torch_1.10.0_cpu pyg
[conda] torch 1.4.0 pypi_0 pypi
[conda] torch-geometric 2.0.5 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 14 |
5,646 | 78,063 |
Adam is 30% slower than SGD on Apple Metal.
|
module: performance, module: optimizer, triaged, module: mps
|
### 🐛 Describe the bug
I am benchmarking Apple silicon on PyTorch, and Adam is 30% slower than SGD.
<img width="1126" alt="image" src="https://user-images.githubusercontent.com/18441985/169709366-b415c24d-7a37-4cce-94ec-6c68e0a7d1dd.png">
You can check my experiments on this wandb workspace:
https://wandb.ai/capecape/M1_TF_vs_PT?workspace=user-capecape
I am on latest Pytorch 22/05 nightly.
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220522
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:42:03) [Clang 12.0.1 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.13.0.dev20220522
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.13.0.dev20220522 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
cc @VitalyFedyunin @ngimel @vincentqb @jbschlosser @albanD
| 9 |
5,647 | 78,061 |
Python memory allocator called without holding the GIL when running torchrun under Python debug version
|
triaged, oncall: r2p
|
### 🐛 Describe the bug
I built Python 3.8.13 myself with pydebug, and used PyTorch 1.11.0.
First start with any script:
```python
# hello.py
print('Hello')
```
Then run `torchrun` with
```bash
torchrun --node_rank=0 --nproc_per_node=1 --nnodes=1 --master_addr=127.0.0.1 --master_port=1234 hello.py
```
This will throw the following error:
```
Fatal Python error: Python memory allocator called without holding the GIL
Python runtime state: initialized
Current thread 0x00007f41936513c0 (most recent call first):
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 31 in get_all
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 53 in synchronize
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 647 in _share_and_gather
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 610 in _assign_worker_ranks
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125 in wrapper
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 541 in _rendezvous
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125 in wrapper
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 678 in _initialize_workers
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125 in wrapper
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 844 in _invoke_run
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 709 in run
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125 in wrapper
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 236 in launch_agent
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131 in __call__
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/run.py", line 715 in run
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/run.py", line 724 in main
File "/mnt/efs/gq/conda-env-gq-dbg/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345 in wrapper
File "/mnt/efs/gq/conda-env-gq-dbg/bin/torchrun", line 8 in <module>
Aborted (core dumped)
```
Probably related to #26475.
### Versions
`collect_env.py` output goes as follows:
```
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.21.0
Libc version: glibc-2.27
Python version: 3.8.13 (tags/v3.8.13-dirty:ea673213dd, May 22 2022, 12:36:27) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1055-aws-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: Tesla M60
GPU 1: Tesla M60
GPU 2: Tesla M60
GPU 3: Tesla M60
Nvidia driver version: 450.142.00
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
```
I built my Python 3.8.13 within a conda environment that has a default Python 3.8 installation:
```
conda create -n py38-dbg python=3.8
```
with the following commands:
```bash
git clone https://github.com/python/cpython
cd cpython
git checkout v3.8.13
mkdir debug
cd debug
CPPFLAGS=-I$CONDA_PREFIX/include LDFLAGS="-L$CONDA_PREFIX/lib -Wl,-rpath,$CONDA_PREFIX/lib" ../configure --with-pydebug --prefix=$CONDA_PREFIX
make -j12
make install
```
I also installed `numpy` and `scipy`.
| 1 |
5,648 | 78,053 |
toleranceOverride should override atol and rtol even when explicitly specified in a test
|
module: tests, triaged, module: primTorch
|
This would allow a test's default atol/rtol to be set tighter but still allow operators to set the atol/rtol manually.
In primTorch this would let us set tighter atol and rtol for the reference consistency tests.
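A hedged sketch of the desired behavior (assuming the `toleranceOverride`/`tol` helpers from `torch.testing._internal.common_device_type`; everything inside the test body is a placeholder):
```python
import torch
from torch.testing._internal.common_device_type import toleranceOverride, tol

@toleranceOverride({torch.float32: tol(atol=1e-5, rtol=1e-5)})
def test_reference_consistency(self, device, dtype, op):  # inside a device-generic TestCase
    actual, expected = torch.ones(3), torch.ones(3)
    # Today the explicit atol/rtol below win over the decorator; the request is
    # for the decorator's values to take precedence instead.
    self.assertEqual(actual, expected, atol=1e-2, rtol=1e-2)
```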
cc @mruberry @ezyang @ngimel
| 0 |
5,649 | 78,050 |
RFC: [primTorch] Stride-agnostic Operator Semantics
|
triaged, module: python frontend, module: primTorch
|
Originally the primTorch project was targeting stride consistency for reference implementations with PyTorch's eager mode. This has proved to be an issue for several reasons:
### 1) PyTorch eager's striding is inconsistent.
See https://github.com/pytorch/pytorch/issues/77731 and https://github.com/pytorch/pytorch/issues/77553 for some examples. @ngimel has fixed several of these issues on CUDA, too. See https://github.com/pytorch/pytorch/pull/77610 and https://github.com/pytorch/pytorch/pull/77585.
These issues suggest that stride consistency must not be important enough to strive for it in PyTorch eager today, nor do our users seem particularly affected by inconsistent striding when it does occur.
### 2) Our elementwise striding behavior is not commutative or associative.
Non-commutative or non-associative properties (our type promotion is also non-associative!) are often a pain, because they mean that different valid reference implementations for an operator must work around the non-commutativity or non-associativity. For example, a valid decomposition of clamp (when both min_value and max_value are specified) is:
```
minimum(maximum(a, min_value), max_value)
```
However, the application of two elementwise binary operators (`maximum` and `minimum`) is not, in general, equivalent in its output striding to the application of a single elementwise ternary operator (`clamp`)! We work around the type promotion discrepancy by wrapping every ref in the appropriate type promotion wrapper, but we don't think it's reasonable to develop an analogous striding wrapper. Thus, if we enforce strict stride consistency, we will be limited in how we naturally write references, and any elementwise ternary operator will require a corresponding prim as a design limitation.
### 3) Operations that sometimes return a view and sometimes return a new tensor are difficult to model, and we tell users not to rely on this behavior for all but the simplest cases.
The [reshape documentation ](https://pytorch.org/docs/master/generated/torch.reshape.html?highlight=reshape#torch.reshape) directs the user: "you should not depend on [its] copying vs. viewing behavior." On the other hand, [contiguous](https://pytorch.org/docs/master/generated/torch.Tensor.contiguous.html?highlight=contiguous#torch.Tensor.contiguous) has no such warning.
When tracing, capturing view semantics today depends on careful stride analysis, so if we want to represent views correctly we need a very high fidelity with stride consistency.
### Proposal
Given that users don't seem to be demanding absolute stride consistency today, we want to let users write reasonable decompositions without worrying too much about strides, and we'd prefer to model views and striding behavior more simply when tracing, we propose changing our operator semantics to be stride agnostic. @zou3519 has long advocated for reshape always returning a copy from the user's perspective (his full proposal is a more nuanced copy-on-write idea that would preserve reshape's performance), and this proposal would require that work be done, too, to ensure consistency between PyTorch eager and traces. It would require the same behavior from contiguous as well.
It may seem like there's a middle-ground approach where we model contiguity, for instance, and so don't have to modify the contiguous operation, but we don't think there is. Any property we model will have to be "closed" -- that is, determining the property will depend on inputs having it or not -- and our operators are not "closed" w.r.t. to contiguity. We could possibly model additional properties, like "denseness" and whether something is "permuted," but these schemes seem complicated and for little benefit.
cc @ezyang @mruberry @ngimel
| 13 |
5,650 | 78,047 |
DDP multi host with single GPU each.
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
Folks,
I have two hosts, and each host has a single GPU, I'm using an example.
https://github.com/sudomaze/ttorch/blob/main/examples/ddp/run.py
if I use the master node
rank 0 , world_size 2
worker
rank 1, world_size 2
if I use (master start training loop but worker never connected)
rank 0 , world_size 1
rank 1 , world_size 1
Stack trace for case one. Note that master goes and waits only if world_size 2.
```gpu10:17709:17709 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
gpu10:17709:17709 [0] NCCL INFO NET/Socket : Using [0]eth0:172.16.80.231<0>
gpu10:17709:17709 [0] NCCL INFO Using network Socket
gpu10:17709:17729 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
gpu10:17709:17729 [0] NCCL INFO Channel 00 : 0[b000] -> 1[6000] [receive] via NET/Socket/0
gpu10:17709:17729 [0] NCCL INFO Channel 01 : 0[b000] -> 1[6000] [receive] via NET/Socket/0
gpu10:17709:17729 [0] NCCL INFO Channel 00 : 1[6000] -> 0[b000] [send] via NET/Socket/0
gpu10:17709:17729 [0] NCCL INFO Channel 01 : 1[6000] -> 0[b000] [send] via NET/Socket/0
gpu10:17709:17729 [0] NCCL INFO Connected all rings
gpu10:17709:17729 [0] NCCL INFO Connected all trees
gpu10:17709:17729 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
gpu10:17709:17729 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
gpu10:17709:17729 [0] NCCL INFO comm 0x7f7f2c002fc0 rank 1 nranks 2 cudaDev 0 busId 6000 - Init COMPLETE
barrier released
Traceback (most recent call last):
File "/root/git/dtc_latest/dtc/ddp_test/ddp_sample.worker.py", line 235, in <module>
init_process(1, world_size, run)
File "/root/git/dtc_latest/dtc/ddp_test/ddp_sample.worker.py", line 227, in init_process
fn(rank, world_size)
File "/root/git/dtc_latest/dtc/ddp_test/ddp_sample.worker.py", line 158, in run
torch.cuda.set_device(rank)
File "/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py", line 313, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stack trace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
Master node.
```
win11:15538:15538 [0] NCCL INFO Bootstrap : Using eth0:192.168.254.205<0>
win11:15538:15538 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
win11:15538:15538 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
win11:15538:15538 [0] NCCL INFO NET/Socket : Using [0]eth0:192.168.254.205<0>
win11:15538:15538 [0] NCCL INFO Using network Socket
NCCL version 2.10.3+cuda11.5
win11:15538:15568 [0] NCCL INFO Channel 00/02 : 0 1
win11:15538:15568 [0] NCCL INFO Channel 01/02 : 0 1
win11:15538:15568 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1
win11:15538:15568 [0] NCCL INFO Channel 00 : 1[6000] -> 0[b000] [receive] via NET/Socket/0
win11:15538:15568 [0] NCCL INFO Channel 01 : 1[6000] -> 0[b000] [receive] via NET/Socket/0
win11:15538:15568 [0] NCCL INFO Channel 00 : 0[b000] -> 1[6000] [send] via NET/Socket/0
win11:15538:15568 [0] NCCL INFO Channel 01 : 0[b000] -> 1[6000] [send] via NET/Socket/0
win11:15538:15568 [0] NCCL INFO Connected all rings
win11:15538:15568 [0] NCCL INFO Connected all trees
win11:15538:15568 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
win11:15538:15568 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
win11:15538:15568 [0] NCCL INFO comm 0x7fa13c002fb0 rank 0 nranks 2 cudaDev 0 busId b000 - Init COMPLETE
win11:15538:15538 [0] NCCL INFO Launch mode Parallel
releasing
2 64
win11:15538:15570 [0] include/socket.h:423 NCCL WARN Net : Connection closed by remote peer 172.16.80.231<36406>
win11:15538:15570 [0] NCCL INFO transport/net_socket.cc:414 -> 2
win11:15538:15570 [0] NCCL INFO include/net.h:28 -> 2
win11:15538:15570 [0] NCCL INFO transport/net.cc:459 -> 2
win11:15538:15570 [0] NCCL INFO proxy.cc:351 -> 2
win11:15538:15570 [0] NCCL INFO proxy.cc:452 -> 2 [Proxy Thread]
```
I managed to narrow it down a bit.
This is the case:
```
master
rank 0 local rank set 0 world size 2
device = "cuda:0"
model.to(device)
DDP(model, device_ids=[0], output_device=0)
worker
device = "cuda:0"
model.to(device)
DDP(model, device_ids=[0], output_device=0)
```
```
RuntimeError: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stack trace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
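A hedged sketch of the device selection I would expect here (my reading of the "invalid device ordinal" error, not from the report): with one GPU per host, the global rank is 0 or 1 but the local device index is always 0, so the device should come from the local rank rather than the global rank.
```
import os
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes dist.init_process_group(...) has already run, as in the linked example
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # 0 on every single-GPU host
torch.cuda.set_device(local_rank)
model = nn.Linear(10, 10).cuda(local_rank)          # placeholder model
ddp_model = DDP(model, device_ids=[local_rank], output_device=local_rank)
```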
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu115
Is debug build: False
CUDA used to build PyTorch: 11.5
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 512.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0+cu115
[pip3] torchaudio==0.11.0+cu115
[pip3] torchvision==0.12.0+cu115
[conda] Could not collect
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 3 |
5,651 | 78,045 |
[FSDP] .modules() return original modules instead of FSDP prefixed modules
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP destructively recreates a module and changes the module paths, but users still expect to get the original module paths in their application; override the `.modules()` API to return the original modules with clean paths. Similar to https://github.com/pytorch/pytorch/pull/74333, which overrides the named_parameters() API to return named parameters with clean paths.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 3 |
5,652 | 78,044 |
FFT operators are not supported on MPS device
|
high priority, triaged, module: complex, module: fft, topic: new features, module: mps
|
### 🐛 Describe the bug
# Extra comments
The error happens regardless of whether `PYTORCH_ENABLE_MPS_FALLBACK=1` env var is set or not.
# Code to reproduce:
```python
import torch
x = torch.randn(1, 16000, device="mps")
y = torch.fft.rfft(x)
y_abs = y.abs()
```
# Script output message:
```
$ python test2.py
test2.py:4: UserWarning: The operator 'aten::_fft_r2c' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
y = torch.fft.rfft(x)
libc++abi: terminating with uncaught exception of type c10::TypeError: Trying to convert ComplexFloat to the MPS backend but it does not have support for that dtype.
Exception raised from getMPSDataType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:124 (most recent call first):
frame #0: at::native::mps::getMPSDataType(c10::ScalarType) + 452 (0x113dc690c in libtorch_cpu.dylib)
frame #1: at::native::mps::mpsGraphRankedPlaceHolder(MPSGraph*, at::Tensor const&) + 60 (0x113dc838c in libtorch_cpu.dylib)
frame #2: invocation function for block in at::native::mps::unary_op(at::Tensor const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, MPSGraphTensor* (MPSGraph*, MPSGraphTensor*) block_pointer) + 96 (0x113e33d84 in libtorch_cpu.dylib)
frame #3: invocation function for block in at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, at::native::mps::MPSCachedGraph* () block_pointer) + 216 (0x113dd0dec in libtorch_cpu.dylib)
frame #4: _dispatch_client_callout + 20 (0x1984d01b4 in libdispatch.dylib)
frame #5: _dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x1984df414 in libdispatch.dylib)
frame #6: at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, at::native::mps::MPSCachedGraph* () block_pointer) + 160 (0x113dc8e90 in libtorch_cpu.dylib)
frame #7: at::native::mps::unary_op(at::Tensor const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, MPSGraphTensor* (MPSGraph*, MPSGraphTensor*) block_pointer) + 984 (0x113e33b70 in libtorch_cpu.dylib)
frame #8: at::native::abs_out_mps(at::Tensor const&, at::Tensor&) + 76 (0x113e351a4 in libtorch_cpu.dylib)
frame #9: at::_ops::abs_out::call(at::Tensor const&, at::Tensor&) + 280 (0x110f268e0 in libtorch_cpu.dylib)
frame #10: at::native::abs(at::Tensor const&) + 232 (0x110aadcd4 in libtorch_cpu.dylib)
frame #11: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&), &(torch::autograd::VariableType::(anonymous namespace)::abs(c10::DispatchKeySet, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) + 968 (0x1125fa24c in libtorch_cpu.dylib)
frame #12: at::_ops::abs::call(at::Tensor const&) + 264 (0x110f25c84 in libtorch_cpu.dylib)
frame #13: torch::autograd::THPVariable_abs(_object*, _object*) + 188 (0x106de8904 in libtorch_python.dylib)
frame #14: method_vectorcall_NOARGS + 136 (0x104e5ee4c in python3.8)
frame #15: call_function + 128 (0x104f859a0 in python3.8)
frame #16: _PyEval_EvalFrameDefault + 19988 (0x104f7c91c in python3.8)
frame #17: _PyEval_EvalCodeWithName + 616 (0x104f75cc8 in python3.8)
frame #18: pyrun_file + 280 (0x104fea0d0 in python3.8)
frame #19: pyrun_simple_file + 448 (0x104fe980c in python3.8)
frame #20: PyRun_SimpleFileExFlags + 120 (0x104fe95ec in python3.8)
frame #21: pymain_run_file + 444 (0x105011040 in python3.8)
frame #22: pymain_run_python + 328 (0x10501040c in python3.8)
frame #23: Py_RunMain + 40 (0x105010268 in python3.8)
frame #24: pymain_main + 52 (0x105011c50 in python3.8)
frame #25: main + 56 (0x104e25a08 in python3.8)
frame #26: start + 520 (0x10534508c in dyld)
[1] 17751 abort python test2.py
```
### Versions
# Env information
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0.dev20220521
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.22.4
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:13:39) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] k2==1.15.1.dev20220520+cpu.torch1.12.0.dev20220519
[pip3] numpy==1.22.3
[pip3] torch==1.13.0.dev20220521
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] k2 1.15.1.dev20220520+cpu.torch1.12.0.dev20220519 pypi_0 pypi
[conda] numpy 1.22.3 py38h25ab29e_0
[conda] numpy-base 1.22.3 py38h974a1f5_0
[conda] torch 1.13.0.dev20220521 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @peterbell10 @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 45 |
5,653 | 78,034 |
Error occurs when compiling source code with BUILD_CAFFE2=ON
|
module: build, caffe2, triaged
|
### 🐛 Describe the bug
Compiling the source code fails.
# compile command:
TORCH_CUDA_ARCH_LIST="7.5" ONNX_NAMESPACE="torch_tmp" USE_NUMPY=0 USE_DISTRIBUTED=OFF USE_NCCL=OFF _GLIBCXX_USE_CXX11_ABI=0 REL_WITH_DEB_INFO=1 BUILD_CAFFE2=ON python setup.py install
## Error output
<img width="1529" alt="image" src="https://user-images.githubusercontent.com/13358476/169635520-26b97a21-23e3-40f7-80bc-967f85d3049a.png">
<img width="1530" alt="image" src="https://user-images.githubusercontent.com/13358476/169635531-74175742-2b89-4245-be63-c43ebebc9ade.png">
### Versions
## envs:
---------------------- -------------------
astunparse 1.6.3
brotlipy 0.7.0
certifi 2021.10.8
cffi 1.15.0
charset-normalizer 2.0.4
conda 4.12.0
conda-package-handling 1.8.1
cryptography 37.0.1
future 0.18.2
idna 3.3
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
numpy 1.21.5
pip 21.2.4
pycosat 0.6.3
pycparser 2.21
pyOpenSSL 22.0.0
PySocks 1.7.1
PyYAML 6.0
requests 2.27.1
ruamel-yaml-conda 0.15.100
setuptools 61.2.0
six 1.16.0
torch 1.11.0a0+gitbc2c6ed
tqdm 4.64.0
typing_extensions 4.1.1
urllib3 1.26.9
wheel 0.37.1
cc @malfet @seemethere
| 1 |
5,654 | 78,018 |
Three memory copies of every dataloader cpu tensor
|
module: multiprocessing, module: dataloader, module: cuda, triaged, enhancement
|
### 🐛 Describe the bug
(from slack discussion with @albanD)
Is it possible to both share_memory and pin_memory for a tensor, for dataloading across shared memory with zero copies?
```
>>> x.share_memory_().pin_memory().is_shared()
False
>>> x.pin_memory().share_memory_().is_pinned()
False
```
It seems like the answer is no, since cudaHostAlloc doesn't seem to be compatible with shared memory, which is a shame because there are implicitly three full copies of every tensor generated by a dataset:
- dataset generates an example
- dataloader collates it
- dataloader copies it into shared memory to share with main process
- memory pinning thread copies into pinned memory before transfer to device
How might we reduce these extra copies? The H100 generation is going to start being impossible to feed.
(sort of related to https://pytorch.slack.com/archives/C3PDTEV8E/p1652536474661579 and https://pytorch.slack.com/archives/C3PDTEV8E/p1652844740547679)
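For concreteness, a minimal sketch of the configuration in which all of the copies listed above occur (the dataset, sizes, and worker count are made up for illustration; a CUDA device is assumed for the final transfer):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    dataset = TensorDataset(torch.randn(1024, 3, 224, 224))
    loader = DataLoader(
        dataset,
        batch_size=32,
        num_workers=4,    # each worker collates a batch, then copies it into shared memory
        pin_memory=True,  # the pinning thread copies the batch again into page-locked memory
    )
    for (batch,) in loader:
        batch = batch.to("cuda", non_blocking=True)  # H2D transfer reads from the pinned copy
        break

if __name__ == "__main__":
    main()
```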
### Versions
Not relevant, but I'm on PyTorch 1.11.0 and on CUDA 11.4, with 8x V100s or A100s.
cc @VitalyFedyunin @SsnL @ejguan @NivekT @ngimel
| 3 |
5,655 | 77,981 |
Override sym_sizes to create LTC IR for SymIntNode
|
triaged, lazy
| null | 1 |
5,656 | 77,973 |
forward-mode support for "logically composite" operators
|
triaged, module: derivatives, module: forward ad
|
Today, in PyTorch, when we add forward-mode AD support for an operator, there are times when the forward-mode AD formula recomputes intermediate values used in the operator implementation. There is currently no way to share intermediate values between the operator implementation and forward-mode AD formula, especially if an operator implementation is some fused kernel.
Recomputation is sometimes not a problem; @Chillee's work on recomputation in AOTAutograd shows that when paired with a fuser, recomputation can be faster than sharing intermediate values due to avoiding relatively expensive memory transfers. However, PyTorch's many forward-mode AD formulas are not fused kernels (yet, at least :)).
This issue is an ask to investigate if we should reconsider how we are writing forward-mode AD formulas to avoid recomputation.
## Pitch
Let's consider a hypothetical operator. Assume that there is some fused CPU and CUDA kernel for it.
```
def mulmul_reference(a, b, c):
mul1 = a * b
mul2 = mul1 * c
return mul2
def mulmul(a, b, c):
return some_fast_kernel(a, b, c)
```
To add a forward-mode AD formula for it, today we would write it like the following:
```
def mulmul_jvp(a, a_t, b, b_t, c, c_t):
mul1 = a * b
mul1_t = a_t * b + a * b_t
mul2_t = mul1 * c_t + mul1_t * c
return mul2_t
```
Note that at some point, `mulmul` and `mulmul_jvp` both compute mul1.
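For reference, a minimal sketch of how a caller drives forward-mode AD through such a composite today via the public `torch.autograd.forward_ad` API (the two chained multiplications stand in for `mulmul`):
```python
import torch
import torch.autograd.forward_ad as fwAD

a, b, c = torch.randn(3), torch.randn(3), torch.randn(3)
a_tangent = torch.ones(3)  # direction for the JVP with respect to a

with fwAD.dual_level():
    a_dual = fwAD.make_dual(a, a_tangent)
    out = (a_dual * b) * c                  # each mul dispatches its own jvp formula
    primal, tangent = fwAD.unpack_dual(out)

print(primal, tangent)
```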
### Proposal
Instead of writing `mulmul_jvp` we would implement `mulmul` like the following:
```
def mulmul(a, b, c):
if any_is_dual_tensor(a, b, c):
return mulmul_reference(a, b, c)
return some_fast_kernel(a, b, c)
```
This gives us forward-mode AD support for mulmul AND it has the nice property that `mul1` does not get computed twice.
CC'ing folks who have been working on forward-mode AD-related things
cc @soulitzer @albanD @Lezcano @samdow
| 1 |
5,657 | 77,967 |
Inference Tensors should not be allowed to hold `grad_fn`
|
triaged, inference mode
|
### 🐛 Describe the bug
Here's a sneaky script that overrides only the backward implementation for a PyTorch op. It should not be used, but I'm posting it to share what seems like a bug in the Inference Tensor implementation (as per the title and the comments in the code below).
```py
import torch
op = torch.ops.aten.sin.default
class MyFunc(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
with torch.inference_mode():
out = op(input)
# returns an inference tensor with `grad_fn`.
# out.detach() also returns an inference tensor with a `grad_fn`
return out
# returns a non inference tensor with a `grad_fn` set
return out.clone()
@staticmethod
def backward(ctx, grad_output):
input, = ctx.saved_tensors
return grad_output + input
mylib = torch.library.Library("aten", "IMPL")
mylib.impl("sin", MyFunc.apply, "AutogradCPU")
a = torch.zeros(10, requires_grad=True)
out = torch.sin(a)
print(out, out.sum(), torch.is_inference_mode_enabled(), torch.is_grad_enabled())
# errors out when we return an inference tensor since out.sum() magically returns a tensor with no `grad_fn`.
out.sum().backward()
print(out, a.grad)
```
### Versions
master
| 0 |
5,658 | 77,963 |
`logaddexp2` fails to backward
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
`logaddexp2` fails to backward
```python
import torch
input = torch.randint(0, 2, [1], dtype=torch.bool)
other = torch.rand([3], dtype=torch.float32, requires_grad=True)
res = torch.logaddexp2(input, other)
res.sum().backward()
# Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
```
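A possible workaround (an assumption on my part, not an official fix) is to cast the bool operand to the floating-point dtype before the call, which avoids the bool subtraction in the backward formula:
```python
import torch

input = torch.randint(0, 2, [1], dtype=torch.bool)
other = torch.rand([3], dtype=torch.float32, requires_grad=True)
# Casting the bool operand up front lets the backward pass run
res = torch.logaddexp2(input.to(other.dtype), other)
res.sum().backward()
print(other.grad)
```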
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,659 | 77,962 |
Operating on boolean torch tensor and numpy array casts to `uint8`
|
triaged, module: numpy, module: type promotion, module: boolean tensor
|
### 🐛 Describe the bug
Sometimes one accidentally mixes `torch` and `numpy` arrays in the code. When this happens, `numpy` generally raises `TypeError`, while `torch` usually handles these situations gracefully. However, for boolean operations `torch` recasts to `uint8`, which leads to unexpected error messages down the road
```python
import numpy as np
import torch
x = torch.tensor(True) | np.array(True)
print(x)
```
outputs `tensor(1, dtype=torch.uint8)` with `dtype` `uint8`. Expected output would be `tensor(True)` with boolean dtype.
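Until the promotion rule changes, a small sketch of a workaround: converting the NumPy operand to a tensor explicitly keeps the result boolean.
```python
import numpy as np
import torch

x = torch.tensor(True) | torch.from_numpy(np.array(True))
print(x)        # tensor(True)
print(x.dtype)  # torch.bool
```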
### Versions
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.941
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
cc @mruberry @rgommers @nairbv
| 1 |
5,660 | 77,961 |
Exporting the operator isinstance to ONNX opset version 13 is not supported
|
module: onnx, triaged, onnx-needs-info
|
### 🚀 The feature, motivation and pitch
Is it possible to add support for the operator `isinstance` in the ONNX opset please?
I cannot see any workaround to deal with my problem.
Thanks
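Since the post does not include a repro, here is a *hypothetical* minimal example of the kind of scripted model that can surface this error (the `Union` input type keeps the `isinstance` check in the TorchScript graph; the original model's details may differ):
```python
from typing import List, Union

import torch

class Model(torch.nn.Module):
    def forward(self, x: Union[torch.Tensor, List[torch.Tensor]]):
        # The isinstance refinement stays in the scripted graph and has no ONNX mapping
        if isinstance(x, list):
            x = torch.stack(x)
        return x * 2

scripted = torch.jit.script(Model())
torch.onnx.export(scripted, (torch.randn(2, 3),), "isinstance_repro.onnx", opset_version=13)
```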
### Alternatives
_No response_
### Additional context
_No response_
| 8 |
5,661 | 77,955 |
NaN tensor values problem for GTX16xx users (no problem on other devices)
|
module: cudnn, triaged
|
### 🐛 Describe the bug
I use [yolov5](https://github.com/ultralytics/yolov5) to test with the demo dataset (coco128) and found that box and obj are nan. Also, no detections appear on the validation images. This only happens on GTX 1660 Ti devices (GPU mode); when I use the CPU, or train on Google Colab (Tesla K80) / an RTX 2070, everything works fine.

The command used for training is
`python train.py`
There are issues here also discussing the same problem.
- https://github.com/pytorch/pytorch/issues/58123
- https://github.com/openai/glide-text2im/issues/31
- https://discuss.pytorch.org/t/half-precision-convolution-cause-nan-in-forward-pass/117358/3
- https://github.com/pytorch/pytorch/issues/69449
- https://github.com/ultralytics/yolov5/issues/5815
However, I have tried PyTorch built with CUDA 11.5 (whose cuDNN version is 8.3.0 > 8.2.2), and I also tried **downloading cuDNN from NVIDIA and copying the DLL files into the relevant folder in torch/lib**, but the problem persists.
Another workaround is to downgrade to PyTorch built with CUDA 10.2 (tested, and it works), but this is currently not feasible as CUDA 10.2 PyTorch builds are no longer available for Windows.
Environment
- Windows 10 10.0.19044.1706
- YOLOv5-6.1 (version 6.1)
- Nvidia GTX 1660 TI, 6 GB
- Python3.9
- cudatoolkit-11.3.1
- pytorch-1.11.0-py3.9_cuda11.3_cudnn8_0
- (also tried pytorch-1.11.0-py3.9_cuda11.5_cudnn8_0)
- (with dependencies installed correctly)
cc @csarofeen @ptrblck @xwang233
| 1 |
5,662 | 77,951 |
`topk` returns different results with the same input twice in cuda
|
module: cuda, triaged, module: python frontend
|
### 🐛 Describe the bug
`topk` can return different results when called repeatedly with the same input on CUDA (in the repro below, the first call disagrees with the subsequent ones)
```python
import torch
input_tensor = torch.tensor(0.4105, dtype=torch.float64)
k = 0
dim = 0
largest = True
sorted = True
input = input_tensor.clone().detach().to('cuda')
res1 = torch.topk(input, k, dim=dim, largest=largest, sorted=sorted, )
input = input_tensor.clone().detach().to('cuda')
res2 = torch.topk(input, k, dim=dim, largest=largest, sorted=sorted, )
input = input_tensor.clone().detach().to('cuda')
res3 = torch.topk(input, k, dim=dim, largest=largest, sorted=sorted, )
print(res1)
print(res2)
print(res3)
# torch.return_types.topk(
# values=tensor(0., device='cuda:0', dtype=torch.float64),
# indices=tensor(0, device='cuda:0'))
# torch.return_types.topk(
# values=tensor(0.4105, device='cuda:0', dtype=torch.float64),
# indices=tensor(0, device='cuda:0'))
# torch.return_types.topk(
# values=tensor(0.4105, device='cuda:0', dtype=torch.float64),
# indices=tensor(0, device='cuda:0'))
```
### Versions
pytorch: 1.11.0
cudakit: 11.3
GPU 0: NVIDIA GeForce RTX 3080 Ti
cc @ngimel
| 0 |
5,663 | 77,946 |
[failing test] test_foreach::test_binary_op_scalarlist_fastpath
|
triaged
|
### 🐛 Describe the bug
The following tests are failing in PR https://github.com/pytorch/pytorch/pull/77524
* `test_binary_op_scalarlist_fastpath__foreach_div_cuda_float16`
* `test_binary_op_scalarlist_fastpath__foreach_mul_cuda_float16`
Log
<details>
```
======================================================================
[9744](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9745)
FAIL [0.144s]: test_binary_op_scalarlist_fastpath__foreach_mul_cuda_float16 (__main__.TestForeachCUDA)
[9745](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9746)
----------------------------------------------------------------------
[9746](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9747)
Traceback (most recent call last):
[9747](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9748)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
[9748](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9749)
method(*args, **kwargs)
[9749](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9750)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
[9750](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9751)
result = test(self, **param_kwargs)
[9751](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9752)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 786, in test_wrapper
[9752](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9753)
return test(*args, **kwargs)
[9753](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9754)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 821, in dep_fn
[9754](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9755)
return fn(slf, *args, **kwargs)
[9755](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9756)
File "test_foreach.py", line 243, in test_binary_op_scalarlist_fastpath
[9756](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9757)
self._test_binary_op_scalarlist(device, dtype, op, N, scalarlist, True, disable_fastpath)
[9757](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9758)
File "test_foreach.py", line 216, in _test_binary_op_scalarlist
[9758](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9759)
self._binary_test(dtype, op, ref, inputs, is_fastpath, is_inplace=False)
[9759](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9760)
File "test_foreach.py", line 111, in _binary_test
[9760](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9761)
actual = op(inputs, self.is_cuda, is_fastpath)
[9761](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9762)
File "test_foreach.py", line 83, in __call__
[9762](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9763)
assert e.count == self.n_expected_cudaLaunchKernels
[9763](https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9764)
AssertionError
```
</details>
CI Ref:
* https://github.com/pytorch/pytorch/runs/6500788273?check_suite_focus=true#step:9:8642
* https://github.com/pytorch/pytorch/runs/6512197886?check_suite_focus=true#step:9:9722
NOTE: The tests passed locally on a build with `cuda-11.3`.
I will skip them in the PR.
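For reference, a minimal sketch of how such a skip could look (assumed; the actual PR may use PyTorch's own device/dtype-aware skip decorators instead):
```python
import unittest

class TestForeachFastpathFP16(unittest.TestCase):
    @unittest.skip("cudaLaunchKernel count assertion is flaky for the CUDA float16 fastpath")
    def test_binary_op_scalarlist_fastpath__foreach_div_cuda_float16(self):
        pass

if __name__ == "__main__":
    unittest.main()
```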
### Versions
master
cc: @ngimel @crcrpar @anjali411
| 5 |
5,664 | 77,939 |
Fails to compile with GCC 12.1.0
|
module: build, triaged
|
### 🐛 Describe the bug
I followed the instructions to compile from source within a conda environment on Arch Linux. The compilation fails with the following error:
>
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:188:10: note: ‘__Y’ was declared here
> 188 | __m512 __Y = __Y;
> | ^~~
> In function ‘__m512i _mm512_cvtps_epi32(__m512)’,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:331:47:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:14044:52: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 14044 | return (__m512i) __builtin_ia32_cvtps2dq512_mask ((__v16sf) __A,
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
> 14045 | (__v16si)
> | ~~~~~~~~~
> 14046 | _mm512_undefined_epi32 (),
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~
> 14047 | (__mmask16) -1,
> | ~~~~~~~~~~~~~~~
> 14048 | _MM_FROUND_CUR_DIRECTION);
> | ~~~~~~~~~~~~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
> 206 | __m512i __Y = __Y;
> | ^~~
> In function ‘__m512i _mm512_permutexvar_epi32(__m512i, __m512i)’,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:353:45:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:7027:53: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 7027 | return (__m512i) __builtin_ia32_permvarsi512_mask ((__v16si) __Y,
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
> 7028 | (__v16si) __X,
> | ~~~~~~~~~~~~~~
> 7029 | (__v16si)
> | ~~~~~~~~~
> 7030 | _mm512_undefined_epi32 (),
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~
> 7031 | (__mmask16) -1);
> | ~~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
> 206 | __m512i __Y = __Y;
> | ^~~
> In function ‘__m128i _mm512_extracti32x4_epi32(__m512i, int)’,
> inlined from ‘__m128i _mm512_castsi512_si128(__m512i)’ at /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:15829:10,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:373:25:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:6045:53: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 6045 | return (__m128i) __builtin_ia32_extracti32x4_mask ((__v16si) __A,
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
> 6046 | __imm,
> | ~~~~~~
> 6047 | (__v4si)
> | ~~~~~~~~
> 6048 | _mm_undefined_si128 (),
> | ~~~~~~~~~~~~~~~~~~~~~~~
> 6049 | (__mmask8) -1);
> | ~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/emmintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 8; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/emmintrin.h:788:11: note: ‘__Y’ was declared here
> 788 | __m128i __Y = __Y;
> | ^~~
> In function ‘__m512 _mm512_cvtepi32_ps(__m512i)’,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:268:34:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:14148:10: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 14148 | return (__m512) __builtin_ia32_cvtdq2ps512_mask ((__v16si) __A,
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 14149 | (__v16sf)
> | ~~~~~~~~~
> 14150 | _mm512_undefined_ps (),
> | ~~~~~~~~~~~~~~~~~~~~~~~
> 14151 | (__mmask16) -1,
> | ~~~~~~~~~~~~~~~
> 14152 | _MM_FROUND_CUR_DIRECTION);
> | ~~~~~~~~~~~~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:188:10: note: ‘__Y’ was declared here
> 188 | __m512 __Y = __Y;
> | ^~~
> In function ‘__m512i _mm512_cvtps_epi32(__m512)’,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:331:47:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:14044:52: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 14044 | return (__m512i) __builtin_ia32_cvtps2dq512_mask ((__v16sf) __A,
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
> 14045 | (__v16si)
> | ~~~~~~~~~
> 14046 | _mm512_undefined_epi32 (),
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~
> 14047 | (__mmask16) -1,
> | ~~~~~~~~~~~~~~~
> 14048 | _MM_FROUND_CUR_DIRECTION);
> | ~~~~~~~~~~~~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
> 206 | __m512i __Y = __Y;
> | ^~~
> In function ‘__m512i _mm512_permutexvar_epi32(__m512i, __m512i)’,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:353:45:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:7027:53: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 7027 | return (__m512i) __builtin_ia32_permvarsi512_mask ((__v16si) __Y,
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
> 7028 | (__v16si) __X,
> | ~~~~~~~~~~~~~~
> 7029 | (__v16si)
> | ~~~~~~~~~
> 7030 | _mm512_undefined_epi32 (),
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~
> 7031 | (__mmask16) -1);
> | ~~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
> 206 | __m512i __Y = __Y;
> | ^~~
> In function ‘__m128i _mm512_extracti32x4_epi32(__m512i, int)’,
> inlined from ‘__m128i _mm512_castsi512_si128(__m512i)’ at /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:15829:10,
> inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’ at /home/elf/brego/src/pytorch/third_party/fbgemm/src/QuantUtilsAvx512.cc:369:25:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/avx512fintrin.h:6045:53: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
> 6045 | return (__m128i) __builtin_ia32_extracti32x4_mask ((__v16si) __A,
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
> 6046 | __imm,
> | ~~~~~~
> 6047 | (__v4si)
> | ~~~~~~~~
> 6048 | _mm_undefined_si128 (),
> | ~~~~~~~~~~~~~~~~~~~~~~~
> 6049 | (__mmask8) -1);
> | ~~~~~~~~~~~~~~
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/emmintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/emmintrin.h:788:11: note: ‘__Y’ was declared here
> 788 | __m128i __Y = __Y;
> | ^~~
> cc1plus: all warnings being treated as errors
> ninja: build stopped: subcommand failed.
>
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.17.7-arch1-2-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 anaconda
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 py39h7a5d4dd_0
[conda] numpy-base 1.22.3 py39hb8be1f0_0
cc @malfet @seemethere
| 12 |
5,665 | 77,901 |
Heap corruption in slow_conv_transpose3d
|
module: convolution, triaged, module: edge cases
|
### 🐛 Describe the bug
Heap corruption in `slow_conv_transpose3d`.
### Example to reproduce
```python
import torch
tensor_0 = torch.empty((3, 3, 3, 3, 3,), dtype=torch.float32)
tensor_1 = torch.empty((3, 3, 3, 3, 3,), dtype=torch.float32)
intarrayref_2 = [2, 2, 2]
tensor_3 = torch.full((3,), 0.5, dtype=torch.float64, requires_grad=False)
intarrayref_4 = [1, 1, 1]
intarrayref_5 = [1, 1, 1]
intarrayref_6 = [-1250999896764, -1250999896764, -1250999896764]
intarrayref_7 = [1250999896764, 1250999896764, 1250999896764]
torch._C._nn.slow_conv_transpose3d(tensor_0, tensor_1, intarrayref_2, tensor_3, intarrayref_4, intarrayref_5, intarrayref_6, intarrayref_7)
```
### Result
```malloc_consolidate(): invalid chunk size```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using IvySyn, a fuzz testing tool which is currently being developed at Secure Systems Labs at Brown University.
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,666 | 77,900 |
Floating point exception in slow_conv3d
|
module: convolution, triaged, module: edge cases
|
### 🐛 Describe the bug
Floating point exception in `slow_conv3d`.
### Example to reproduce
```python
import torch
tensor_0 = torch.full((0, 0, 9, 15, 0, 0, 0, 0, 0, 8,), 1, dtype=torch.float64, requires_grad=False)
tensor_1 = torch.full((0, 0, 9, 15, 0, 0, 0, 0, 0, 8, 0, 11, 10, 3, 0,), 0.5, dtype=torch.float64, requires_grad=False)
intarrayref_2 = [-8608480567731124087, -8608480567731124087, -8608480567731124087]
tensor_3 = torch.full((), 1, dtype=torch.int64, requires_grad=False)
intarrayref_4 = [-8608480567731124087, -8608480567731124087, -8608480567731124087]
intarrayref_5 = [-8608480567731124087, -8608480567731124087, -8608480567731124087]
torch._C._nn.slow_conv3d(tensor_0, tensor_1, intarrayref_2, tensor_3, intarrayref_4, intarrayref_5)
```
### Result
```floating point exception```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using IvySyn, a fuzz testing tool which is currently being developed at Secure Systems Labs at Brown University.
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,667 | 77,899 |
Floating point exception in native_channel_shuffle
|
triaged, release notes: python_frontend
|
### 🐛 Describe the bug
Floating point exception in `native_channel_shuffle`.
### Example to reproduce
```python
import torch
self = torch.full((0, 0, 9, 15, 0, 0, 0, 0, 0, 8,), 1, dtype=torch.int64, requires_grad=False)
groups = 0
torch.native_channel_shuffle(self, groups)
```
### Result
```floating point exception```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using IvySyn, a fuzz testing tool which is currently being developed at Secure Systems Labs at Brown University.
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,668 | 77,894 |
Floating point exception in channel_shuffle
|
triaged, release notes: python_frontend
|
### 🐛 Describe the bug
Floating point exception in `channel_shuffle`.
### Example to reproduce
```python
import torch
self = torch.empty((0, 3, 8, 0, 6,), dtype=torch.float64)
groups = 1
torch.channel_shuffle(self, groups)
```
### Result
```floating point exception```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using IvySyn, a fuzz testing tool which is currently being developed at Secure Systems Labs at Brown University.
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
| 0 |
5,669 | 77,893 |
Segmentation fault in _remove_batch_dim
|
needs reproduction, triaged, module: vmap
|
### 🐛 Describe the bug
Segmentation fault in `_remove_batch_dim`.
### Example to reproduce
```python
import torch
self = torch.full((5, 5, 5, 5, 5,), 1, dtype=torch.float64, requires_grad=False)
level = 0
batch_size = 0
out_dim = 1250999896764
torch._remove_batch_dim(self, level, batch_size, out_dim)
```
### Result
```segmentation fault```
### Expected behavior
Graceful termination or a RuntimeError to be thrown.
### Note
This bug was discovered using IvySyn, a fuzz testing tool which is currently being developed at Secure Systems Labs at Brown University.
### Versions
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.0-11-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.11.0 py39_cpu pytorch
[conda] torchvision 0.12.0 py39_cpu pytorch
cc @zou3519
| 1 |
5,670 | 77,880 |
Make the appropriate backend `DimensionNode` visible to LTC core
|
triaged, lazy
|
### 🚀 The feature, motivation and pitch
With the current implementation of `DimensionNode` in LTC, each backend has its own implementation ([PyTorch/XLA](https://github.com/pytorch/xla/pull/3558), [TorchScript](https://github.com/pytorch/pytorch/blob/master/torch/csrc/lazy/ts_backend/dynamic_ir.h#L46-L56)).
At the same time, shape inference builds on LTC core classes, which leads to this [PR](https://github.com/pytorch/pytorch/pull/77830) failing to build the shape inference implementation for `expand.SymInt`. Please make the appropriate backend `DimensionNode` visible to LTC core.
**Solution** alternatives based on offline chat with @Krovatkin.
* IR core to access the correct backend implementation
* Use multiple inheritance
CC @wconstab @JackCaoG
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
5,671 | 77,869 |
Throw warning if python optimise flags are enabled
|
triaged, module: python frontend
|
### 🐛 Describe the bug
Currently PyTorch does not throw any obvious warnings / errors when the PYTHONOPTIMIZE (-O and -OO) flags are used. [#76619, #76034, #76659, #60953] This **might** imply that the behaviour is consistent whether the flags are enabled or disabled.
However, this is not true. Currently, assertions are used for input checks and for raising error messages.
Examples (run first with `python`, then run again with `python -O` or `python -OO`):
```python
import torch
m = torch.nn.Softmax2d()
input = torch.randn(2, 3, 12, 13, 15)
output = m(input)
```
Correct behaviour will throw: `AssertionError: Softmax2d requires a 4D tensor as input`
Incorrect behaviour with `-O`: Input will be silently accepted.
Similarly:
```python
import torch.nn as nn
att = nn.MultiheadAttention(6, 5, kdim=2,vdim=2)
```
Correct behaviour: `AssertionError: embed_dim must be divisible by num_heads`
Incorrect behaviour with `-O`: Input will be silently accepted.
```python
rnn = nn.RNNCell(10, 20)
input = torch.ones((6, 3, 10, 5))
hx = torch.randn(3, 20)
output = []
for i in range(6):
hx = rnn(input[i], hx)
```
Correct behaviour with helpful error message: `AssertionError: RNNCell: Expected input to be 1-D or 2-D but received 3-D tensor`
Incorrect behaviour with not so useful error message: `RuntimeError: input has inconsistent input_size: got 3 expected 10`
>Using the `-O` flag in Python is, after all, the user's responsibility, and the user should be aware of the potential problems of using it with PyTorch or anything else. However, if it is known that assertions are used in PyTorch to raise errors, this could be communicated to the user to avoid any confusion.
Someone using this flag who is unaware of this behaviour will potentially miss a few errors (leading to confusing results) or will not get meaningful error messages.
Perhaps important errors that currently sit behind asserts could be accompanied by (or replaced with) an exception, or a warning could be shown to the user when torch is imported with this flag enabled.
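As a sketch of that suggestion (a hypothetical helper, not PyTorch's actual implementation), a check that survives `-O` raises an exception explicitly instead of relying on `assert`:
```python
import torch

def check_softmax2d_input(input: torch.Tensor) -> None:
    # Unlike an assert, this check is not stripped when running under `python -O`
    if input.dim() != 4:
        raise RuntimeError("Softmax2d requires a 4D tensor as input")

check_softmax2d_input(torch.randn(2, 3, 12, 13))        # passes
# check_softmax2d_input(torch.randn(2, 3, 12, 13, 15))  # raises RuntimeError
```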
### Versions
PyTorch version: 1.12.0a0+git4d527cd
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.2 (default, Mar 26 2021, 21:58:27) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.3.0-46-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1050
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] torch==1.12.0a0+gitffd9608
[pip3] torchvision==0.13.0a0+970ba35
| 2 |
5,672 | 77,844 |
Conv2D with large different number of input and output channels gives a CUDNN_STATUS_INTERNAL_ERROR
|
module: cudnn, triaged
|
### 🐛 Describe the bug
I'm trying to perform a convolution just to increase the dimensionality of the input tensor from `512*32` (i.e. 16384) dimensions to `512*512` (i.e. 262144) dimensions, and I got the following message with a code snippet which indeed reproduces the error:
```
Exception has occurred: RuntimeError
cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([4, 16384, 1, 1], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(16384, 262144, kernel_size=[1, 1], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
ConvolutionParams
data_type = CUDNN_DATA_FLOAT
padding = [0, 0, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0x55eae4955610
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 4, 16384, 1, 1,
strideA = 16384, 1, 1, 1,
output: TensorDescriptor 0x55eae494f370
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 4, 262144, 1, 1,
strideA = 262144, 1, 1, 1,
weight: FilterDescriptor 0x55eae4955520
type = CUDNN_DATA_FLOAT
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 262144, 16384, 1, 1,
Pointer addresses:
input: 0x7f5103540000
output: 0x7f55ae000000
weight: 0x7f4cf0000000
File "/scratch/arturao/GANSketching22/train_hypernet.py", line 159, in training_loop
pred_weights = trainer.gan_model.hyper_net.weight_predictor(feature)
File "/scratch/arturao/GANSketching22/train_hypernet.py", line 240, in <module>
training_loop()
```
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:57:06) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.11.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.19.5 pypi_0 pypi
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.6.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py38_cu113 pytorch
[conda] torchvision 0.7.0 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233
| 0 |
5,673 | 77,842 |
ONNX export of CumSum produces different data type
|
module: onnx, triaged, onnx-triaged
|
### 🐛 Describe the bug
When running a model with ```torch.cumsum(..., dtype=None)``` where dtype is set to None, an integer data type gets promoted to ```torch.int64```. However, if the same model gets exported to ONNX, the dtype does not get promoted. See the following example:
```python
import torch
import onnx
class Model(torch.nn.Module):
def forward(self, x):
return x.cumsum(0, dtype=None), x.cumsum(0, dtype=torch.int32), x.cumsum(0, dtype=torch.float)
input = torch.rand(1, 2, 3).to(torch.int32)
model = Model()
output = model(input)
torch.onnx.export(model, input, 'error.onnx', opset_version=11)
omodel = onnx.load('error.onnx')
lookup = {
onnx.TensorProto.BOOL: 'onnx.TensorProto.BOOL',
onnx.TensorProto.DOUBLE: 'onnx.TensorProto.DOUBLE',
onnx.TensorProto.FLOAT16: 'onnx.TensorProto.FLOAT16',
onnx.TensorProto.FLOAT: 'onnx.TensorProto.FLOAT',
onnx.TensorProto.INT8: 'onnx.TensorProto.INT8',
onnx.TensorProto.INT16: 'onnx.TensorProto.INT16',
onnx.TensorProto.INT32: 'onnx.TensorProto.INT32',
onnx.TensorProto.INT64: 'onnx.TensorProto.INT64',
onnx.TensorProto.UINT8: 'onnx.TensorProto.UINT8',
onnx.TensorProto.UINT16: 'onnx.TensorProto.UINT16',
onnx.TensorProto.UINT32: 'onnx.TensorProto.UINT32',
onnx.TensorProto.UINT64: 'onnx.TensorProto.UINT64'
}
print('PyTorch Output DTypes: {}'.format(tuple(o.dtype for o in output)))
print('ONNX Output DTypes: {}'.format(
tuple(lookup.get(o.type.tensor_type.elem_type) for o in omodel.graph.output))
)
```
Output is:
```
PyTorch Output DTypes: (torch.int64, torch.int32, torch.float32)
ONNX Output DTypes: ('onnx.TensorProto.INT32', 'onnx.TensorProto.INT32', 'onnx.TensorProto.FLOAT')
```
As you can see, in the ```dtype=None``` case, PyTorch uses INT64 while ONNX uses INT32.
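A possible workaround (an assumption, not verified against every opset) is to make the promotion explicit, since the repro above shows that an explicitly passed `dtype` is exported consistently:
```python
import torch

class Model(torch.nn.Module):
    def forward(self, x):
        # Spelling out the int64 promotion keeps eager and exported outputs in sync
        return x.cumsum(0, dtype=torch.int64)

torch.onnx.export(Model(), torch.rand(1, 2, 3).to(torch.int32),
                  'cumsum_explicit.onnx', opset_version=11)
```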
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.3.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.17
Python version: 3.7.12 (default, Feb 6 2022, 20:29:18) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
Is CUDA available: False
CUDA runtime version: 11.3.109
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
| 1 |
5,674 | 77,840 |
Legacy model format is not supported on mobile
|
oncall: mobile
|
### 🐛 Describe the bug
```python
import torch
m = torch.jit.load('model.pt')
```
This fails with the following error:
RuntimeError: Legacy model format is not supported on mobile.
## Environment:
- torch version: 1.11.0
The model was traced with torch 1.9. When I load the model with torch 1.9, it works. If I load it with torch 1.11, it fails.

What does the error mean? How can I solve this problem? Thanks
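One workaround that may help (an assumption on my side, since the root cause isn't clear from the post alone): re-save the checkpoint from the environment that can still read it, so that newer releases receive the current zip-based serialization format.
```python
import torch

# Run this under torch 1.9, which can still load the old checkpoint
m = torch.jit.load('model.pt')
torch.jit.save(m, 'model_resaved.pt')  # re-saved in the current serialization format
```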
### Versions
Model traced by torch 1.9.0.
Expected to be loadable with torch 1.11.
| 0 |
5,675 | 77,839 |
BUG: reference count leak when using `THPLayout_New` and `THPMemoryFormat_New` (static analyzer reports)
|
module: memory usage, triaged, module: python frontend
|
Hint 1: In function `THPLayout_New` and `THPMemoryFormat_New`, the `tp_alloc` method is used to allocate the memory for the PyObject to be returned. If the default allocator function is inherited, the function call will return a new reference.
Hint 2: Function `PyModule_AddObject` will steal a reference to the third argument only when the return value is zero.
---
* In function `initializeLayouts`, the trigger path provided by our analyzer is as follows. (Internal Report ID: e2a925)
1. A new reference is returned from `THPLayout_New` and assigned to `strided_layout`. (refcnt = 1)
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_layouts.cpp#L25
2. Increase the refcnt. (refcnt = 2)
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_layouts.cpp#L26
3. An exception is thrown without decreasing the refcnt.
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_layouts.cpp#L28
---
* In lambda expression `add_memory_format` in function `initializeMemoryFormats`, the trigger path provided by our analyzer is as follows. (Internal Report ID: 09d8df)
1. A new reference is returned from `THPMemoryFormat_New` and assigned to `memory_format`. (refcnt = 1)
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_memoryformats.cpp#L32
2. Increase the refcnt. (refcnt = 2)
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_memoryformats.cpp#L33
3. Decrease the refcnt. (refcnt = 1)
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_memoryformats.cpp#L35
4. An exception is thrown without decreasing the refcnt.
https://github.com/pytorch/pytorch/blob/d1bb420fb1087d680b7ad526536e14acbc2f18bf/torch/csrc/utils/tensor_memoryformats.cpp#L36
cc @ezyang @gchanan @zou3519
| 1 |
5,676 | 77,838 |
Sporadic convolution error with dilation=0
|
module: convolution, triaged
|
### 🐛 Describe the bug
When creating an `nn.ConvXd` with `dilation=0` and running a forward pass, you will receive an error:
```
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py](https://localhost:8080/#) in forward(self, input)
445
446 def forward(self, input: Tensor) -> Tensor:
--> 447 return self._conv_forward(input, self.weight, self.bias)
448
449 class Conv3d(_ConvNd):
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py](https://localhost:8080/#) in _conv_forward(self, input, weight, bias)
442 _pair(0), self.dilation, self.groups)
443 return F.conv2d(input, weight, bias, self.stride,
--> 444 self.padding, self.dilation, self.groups)
445
446 def forward(self, input: Tensor) -> Tensor:
RuntimeError: could not create a descriptor for a dilated convolution forward propagation primitive
```
However, if you run the convolution with `dilation=1` without restarting the Python kernel and then rerun the original code, the error no longer appears.
## Steps to repro:
1. Run either a `jupyter notebook` or an `ipython` instance with the following code
```python
import torch
from torch import nn
c = nn.Conv2d(2, 3, 5, bias=False, dilation=0)
c.weight = nn.Parameter(torch.ones_like(c.weight)) # Optional
x = torch.ones(2, 2, 42, 42)
c(x)
```
2. Once you receive an error, modify the cell above (if in jupyter), or re-run the `c = ...` line (if in `ipython`)
```python
# ...
c = nn.Conv2d(2, 3, 5, bias=False, dilation=1)
# ...
```
3. After rerunning the code and receiving no error, change the convolution instantiation back to the original. There will be no error:
```python
# ...
c = nn.Conv2d(2, 3, 5, bias=False, dilation=0)
# ...
```
## Expected behavior
Either (1) throw an error whenever `dilation=0`, even if its descriptors were already created in the backend, or (2) don't throw an error at all and allow "experimentation" with `dilation=0`.
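In the meantime, a minimal user-side guard in the spirit of option (1) could look like the sketch below (`make_conv` is just an illustrative helper, not a PyTorch API):
```python
from torch import nn

def make_conv(in_ch, out_ch, kernel_size, dilation):
    # Reject dilation < 1 up front, since nn.Conv2d itself does not.
    if dilation < 1:
        raise ValueError(f"dilation must be >= 1, got {dilation}")
    return nn.Conv2d(in_ch, out_ch, kernel_size, bias=False, dilation=dilation)
```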
### Versions
```shell
Collecting environment information...
PyTorch version: 1.12.0a0+gitd40a240
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.21.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1070
GPU 1: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 470.74
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.0
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.0
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.950
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] torch==1.12.0a0+gitd40a240
[pip3] torchvision==0.12.0a0+dcf5dc8
[conda] magma-cuda113 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h8d4b97c_729 conda-forge
[conda] mkl-include 2021.4.0 h8d4b97c_729 conda-forge
[conda] numpy 1.20.0 pypi_0 pypi
[conda] torch 1.12.0a0+git365c03f dev_0 <develop>
[conda] torchvision 0.12.0a0+dcf5dc8 dev_0 <develop>
```
| 0 |
5,677 | 77,837 |
TorchScript attempts to compile dead branch of torch.jit.is_scripting
|
oncall: jit
|
### 🐛 Describe the bug
Consider the following code. When scripting, `torch.jit.is_scripting()` evaluates to `True`, so the `else` branch in `get_context` is dead. Running this code still produces an exception.
```python
import torch
meow = 1239
def get_context():
    if torch.jit.is_scripting():
        print('hi')
    else:
        global meow
        meow = 9123
        return meow

if __name__ == '__main__':
    x = torch.jit.script(get_context)
    print(x.code)
```
Output:
```python
torch.jit.frontend.UnsupportedNodeError: global variables aren't supported:
File "/mnt/data/lmnt/code/tacotron/src/lmnt/tacotron/context.py", line 9
print('hi')
else:
global meow
~~~~~~ <--- HERE
meow = 9123
return meow
```
This is the output of the script after removing the `global meow` line:
```python
def get_context() -> NoneType:
print("hi")
return None
```
Note how the dead branch is eliminated *after* scripting. Instead, the dead branch should be eliminated before attempting to compile it.
There are workarounds to the current behavior (e.g. move the else clause into a separate function with a `@torch.jit.unused` annotation) but they aren't ergonomic and often ruin the code flow.
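For reference, below is a rough sketch of that workaround. It assumes `@torch.jit.unused` treats a free function the same way it treats a method (only the signature is compiled, never the body; `@torch.jit.ignore` is the closely related alternative), and it tightens the return annotations so both branches of the scripted function agree; the exact decorator and annotation details may need adjusting.
```python
import torch

meow = 1239

@torch.jit.unused
def _eager_update() -> int:
    # Eager-only path; TorchScript never compiles this body.
    global meow
    meow = 9123
    return meow

def get_context() -> int:
    if torch.jit.is_scripting():
        print('hi')
        return 0
    return _eager_update()

scripted = torch.jit.script(get_context)
```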
### Versions
Any version of PyTorch
| 2 |
5,678 | 77,821 |
cannot convert to channels last format for conv2d conv3d hybrid model
|
module: convolution, triaged
|
### 🐛 Describe the bug
A hybrid model mixing `Conv2d` and `Conv3d` layers cannot be converted to a channels-last memory format with a single model-level `.to(memory_format=...)` call, because the two layers need different formats (`channels_last` for the rank-4 weights vs. `channels_last_3d` for the rank-5 weights).
```
import torch.nn as nn
import torch
cpu_device = torch.device("cpu")
class Hybrid_model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(2, 4, kernel_size=3, stride=1, padding=1, bias=False)
        self.layer2 = nn.Conv3d(4, 8, kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, inputs):
        x = self.layer1(inputs)
        x = torch.reshape(x, (x.size(0), x.size(1), x.size(2), 16, 2))
        x = self.layer2(x)
        return x

if __name__ == "__main__":
    test_model = Hybrid_model().to(memory_format=torch.channels_last)
    x = torch.randn([3, 2, 32, 32], dtype=torch.float, requires_grad=True)
    y = test_model(x)
```
```
python test_conv2dconv3d_hybrid_model.py
Traceback (most recent call last):
File "test_conv2dconv3d_hybrid_model.py", line 21, in <module>
test_model = Hybrid_model().to(memory_format=torch.channels_last)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 899, in to
return self._apply(convert)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
module._apply(fn)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
param_applied = fn(param)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 896, in convert
non_blocking, memory_format=convert_to_format)
RuntimeError: required rank 4 tensor to use channels_last format
```
Even if we use `test_model = Hybrid_model().to(memory_format=torch.channels_last_3d)`, it still reports an error, as shown below:
```
python test_conv2dconv3d_hybrid_model.py
Traceback (most recent call last):
File "test_conv2dconv3d_hybrid_model.py", line 21, in <module>
test_model = Hybrid_model().to(memory_format=torch.channels_last_3d)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 899, in to
return self._apply(convert)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
module._apply(fn)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
param_applied = fn(param)
File "/home/gta/miniconda3/envs/xxx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 896, in convert
non_blocking, memory_format=convert_to_format)
RuntimeError: required rank 5 tensor to use channels_last_3d format
```
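A possible user-side workaround (just a sketch, not a fix for the underlying limitation) is to convert each convolution sub-module separately, so the memory format matches the rank of each layer's weight:
```python
test_model = Hybrid_model()
test_model.layer1.to(memory_format=torch.channels_last)     # 4-D weight -> channels_last
test_model.layer2.to(memory_format=torch.channels_last_3d)  # 5-D weight -> channels_last_3d
```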
### Versions
Collecting environment information...
PyTorch version: 1.10.0a0+gitcb9f926
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.54+prerelease2927-x86_64-with-debian-bullseye-sid
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.0a0+gitcb9f926
[conda] mkl 2022.0.1 h06a4308_117 defaults
[conda] mkl-include 2022.0.1 h06a4308_117 defaults
[conda] numpy 1.21.2 py37hd8d4704_0 defaults
[conda] numpy-base 1.21.2 py37h2b8c604_0 defaults
[conda] torch 1.10.0a0+gitcb9f926 pypi_0 pypi
| 4 |
5,679 | 77,818 |
torch.nn.Conv3D on MPS backend
|
triaged, topic: new features, module: mps
|
### 🐛 Describe the bug
Using `Conv3d` on the MPS backend, as in this sample code:
```python
import torch
x = torch.randn(1, 10, 10, 10, device="mps")
c = torch.nn.Conv3d(1, 1, 3, device="mps")
c(x)
```
the Python process is aborted with this error:
```
❯ python test_mps.py
/AppleInternal/Library/BuildRoots/560148d7-a559-11ec-8c96-4add460b61a6/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/Operations/MPSGraphConvolutionOps.mm:346: failed assertion `sourceTensor rank must be 4'
fish: Job 1, 'python test_mps.py' terminated by signal SIGABRT (Abort)
```
This is the full report:
```
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: Python [48029]
Path: /opt/homebrew/*/Python.framework/Versions/3.9/Resources/Python.app/Contents/MacOS/Python
Identifier: org.python.python
Version: 3.9.12 (3.9.12)
Code Type: ARM-64 (Native)
Parent Process: fish [22773]
Responsible: iTerm2 [74184]
User ID: 501
Date/Time: 2022-05-18 22:44:20.6603 -0300
OS Version: macOS 12.3.1 (21E258)
Report Version: 12
Anonymous UUID: 37559CD1-BBA5-46C4-92C9-52CBDEDD1C5E
Sleep/Wake UUID: 0D6A0AE4-267E-4E25-8625-6F3AD3D4E3F6
Time Awake Since Boot: 74000 seconds
Time Since Wake: 4829 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: cache queue
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Application Specific Information:
/AppleInternal/Library/BuildRoots/560148d7-a559-11ec-8c96-4add460b61a6/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/Operations/MPSGraphConvolutionOps.mm:346: failed assertion `sourceTensor rank must be 4'
Thread 0 Crashed:: Dispatch queue: cache queue
0 libsystem_kernel.dylib 0x1af814db8 __pthread_kill + 8
1 libsystem_pthread.dylib 0x1af849ee0 pthread_kill + 288
2 libsystem_c.dylib 0x1af784340 abort + 168
3 libsystem_c.dylib 0x1af783754 __assert_rtn + 272
4 Metal 0x1b82207a8 MTLReportFailure.cold.1 + 56
5 Metal 0x1b820a2c4 MTLReportFailure + 480
6 MetalPerformanceShadersGraph 0x212fd1f88 0x212e7f000 + 1388424
7 libtorch_cpu.dylib 0x10e9a9b3c invocation function for block in at::native::_mps_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long) + 540
8 libtorch_cpu.dylib 0x10e99f4e0 invocation function for block in at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, at::native::mps::MPSCachedGraph* () block_pointer) + 216
9 libdispatch.dylib 0x1af6861c8 _dispatch_client_callout + 20
10 libdispatch.dylib 0x1af695414 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libtorch_cpu.dylib 0x10e997634 at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, at::native::mps::MPSCachedGraph* () block_pointer) + 160
12 libtorch_cpu.dylib 0x10e9a8a9c at::native::_mps_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long) + 3040
13 libtorch_cpu.dylib 0x10bd064c0 at::_ops::_mps_convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long) + 360
14 libtorch_cpu.dylib 0x10b528d94 at::native::_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long, bool, bool, bool, bool) + 12668
15 libtorch_cpu.dylib 0x10bb62074 at::_ops::_convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long, bool, bool, bool, bool) + 436
16 libtorch_cpu.dylib 0x10b51e390 at::native::convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long) + 288
17 libtorch_cpu.dylib 0x10bb617a4 at::_ops::convolution::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long) + 188
18 libtorch_cpu.dylib 0x10d1a83ec c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long), &(torch::autograd::VariableType::(anonymous namespace)::convolution(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long) + 1984
19 libtorch_cpu.dylib 0x10bb61400 at::_ops::convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long) + 376
20 libtorch_cpu.dylib 0x10b51cd14 at::native::conv3d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long) + 620
21 libtorch_cpu.dylib 0x10bf68288 at::_ops::conv3d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long) + 360
22 libtorch_python.dylib 0x102350da8 torch::autograd::THPVariable_conv3d(_object*, _object*, _object*) + 1092
23 Python 0x10085e8ec cfunction_call + 60
24 Python 0x10080f758 _PyObject_MakeTpCall + 132
25 Python 0x100901294 _PyEval_EvalFrameDefault + 28164
26 Python 0x10081029c _PyFunction_Vectorcall + 184
27 Python 0x100812cf0 method_vectorcall + 124
28 Python 0x1009000d4 _PyEval_EvalFrameDefault + 23620
29 Python 0x10081029c _PyFunction_Vectorcall + 184
30 Python 0x100812d94 method_vectorcall + 288
31 Python 0x1008faf50 _PyEval_EvalFrameDefault + 2752
32 Python 0x1008f9490 _PyEval_EvalCode + 452
33 Python 0x100810340 _PyFunction_Vectorcall + 348
34 Python 0x10080f9d8 _PyObject_FastCallDictTstate + 96
35 Python 0x1008821b8 slot_tp_call + 188
36 Python 0x10080f758 _PyObject_MakeTpCall + 132
37 Python 0x100900280 _PyEval_EvalFrameDefault + 24048
38 Python 0x1008f9490 _PyEval_EvalCode + 452
39 Python 0x100950f20 run_eval_code_obj + 136
40 Python 0x100950e50 run_mod + 112
41 Python 0x10094e464 pyrun_file + 168
42 Python 0x10094dd84 pyrun_simple_file + 252
43 Python 0x10094dc48 PyRun_SimpleFileExFlags + 80
44 Python 0x10096e7ec pymain_run_file + 320
45 Python 0x10096df2c Py_RunMain + 876
46 Python 0x10096f404 Py_BytesMain + 40
47 dyld 0x100295088 start + 516
Thread 1:: Dispatch queue: com.apple.root.utility-qos
0 libsystem_kernel.dylib 0x1af80c8d0 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x1af80cd40 mach_msg + 76
2 libsystem_notify.dylib 0x1b23d2d18 notify_register_check + 1112
3 libsystem_asl.dylib 0x1b49ba944 _asl_notify_open + 140
4 libsystem_asl.dylib 0x1b49b9398 asl_open + 44
5 libDiagnosticMessagesClient.dylib 0x1b62f4f90 msgtracer_client + 52
6 libDiagnosticMessagesClient.dylib 0x1b62f5224 msgtracer_vlog_with_keys_skip_nulls + 428
7 libDiagnosticMessagesClient.dylib 0x1b62f506c msgtracer_log_with_keys + 36
8 Metal 0x1b8148824 __createContextTelemetryDataWithQueueLabelAndCallstack_block_invoke + 920
9 libdispatch.dylib 0x1af684604 _dispatch_call_block_and_release + 32
10 libdispatch.dylib 0x1af6861c8 _dispatch_client_callout + 20
11 libdispatch.dylib 0x1af697a04 _dispatch_root_queue_drain + 680
12 libdispatch.dylib 0x1af698104 _dispatch_worker_thread2 + 164
13 libsystem_pthread.dylib 0x1af846324 _pthread_wqthread + 228
14 libsystem_pthread.dylib 0x1af845080 start_wqthread + 8
Thread 2:: Dispatch queue: com.Metal.CommandQueueDispatch
0 libsystem_kernel.dylib 0x1af80c8d0 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x1af80cd40 mach_msg + 76
2 IOKit 0x1b22fd8a8 io_connect_method + 512
3 IOKit 0x1b22fd640 IOConnectCallMethod + 176
4 IOGPU 0x1ca2d03dc IOGPUCommandQueueSubmitCommandBuffers + 144
5 IOGPU 0x1ca2c2ff8 -[IOGPUMetalCommandQueue _submitCommandBuffers:count:] + 820
6 IOGPU 0x1ca2c2c98 -[IOGPUMetalCommandQueue submitCommandBuffers:count:] + 88
7 Metal 0x1b815de0c -[_MTLCommandQueue _submitAvailableCommandBuffers] + 684
8 libdispatch.dylib 0x1af6861c8 _dispatch_client_callout + 20
9 libdispatch.dylib 0x1af689670 _dispatch_continuation_pop + 500
10 libdispatch.dylib 0x1af69c8e0 _dispatch_source_invoke + 1596
11 libdispatch.dylib 0x1af68d784 _dispatch_lane_serial_drain + 376
12 libdispatch.dylib 0x1af68e404 _dispatch_lane_invoke + 392
13 libdispatch.dylib 0x1af698c98 _dispatch_workloop_worker_thread + 648
14 libsystem_pthread.dylib 0x1af846360 _pthread_wqthread + 288
15 libsystem_pthread.dylib 0x1af845080 start_wqthread + 8
Thread 3:
0 libsystem_kernel.dylib 0x1af810a2c __gettimeofday + 12
1 libsystem_c.dylib 0x1af712768 gettimeofday + 72
2 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
3 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
4 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 4:
0 libsystem_kernel.dylib 0x1af80cf2c mach_absolute_time + 108
1 libsystem_kernel.dylib 0x1af80e980 __commpage_gettimeofday_internal + 44
2 libsystem_c.dylib 0x1af712758 gettimeofday + 56
3 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
4 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
5 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 5:
0 libsystem_kernel.dylib 0x1af80cf2c mach_absolute_time + 108
1 libsystem_kernel.dylib 0x1af80e980 __commpage_gettimeofday_internal + 44
2 libsystem_c.dylib 0x1af712758 gettimeofday + 56
3 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
4 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
5 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 6:
0 libopenblas64_.0.dylib 0x1067a0e8c blas_thread_server + 252
1 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
2 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
3 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 7:
0 libsystem_kernel.dylib 0x1af80e9b0 __commpage_gettimeofday_internal + 92
1 libsystem_kernel.dylib 0x1af80e980 __commpage_gettimeofday_internal + 44
2 libsystem_c.dylib 0x1af712758 gettimeofday + 56
3 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
4 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
5 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 8:
0 libsystem_kernel.dylib 0x1af80e9b0 __commpage_gettimeofday_internal + 92
1 libsystem_kernel.dylib 0x1af80e980 __commpage_gettimeofday_internal + 44
2 libsystem_c.dylib 0x1af712758 gettimeofday + 56
3 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
4 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
5 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 9:
0 libsystem_kernel.dylib 0x1af80cf2c mach_absolute_time + 108
1 libsystem_kernel.dylib 0x1af80e980 __commpage_gettimeofday_internal + 44
2 libsystem_c.dylib 0x1af712758 gettimeofday + 56
3 libopenblas64_.0.dylib 0x1067a0ea4 blas_thread_server + 276
4 libsystem_pthread.dylib 0x1af84a26c _pthread_start + 148
5 libsystem_pthread.dylib 0x1af84508c thread_start + 8
Thread 10:
0 libsystem_pthread.dylib 0x1af845078 start_wqthread + 0
Thread 11:
0 libsystem_pthread.dylib 0x1af845078 start_wqthread + 0
Thread 12:
0 libsystem_pthread.dylib 0x1af845078 start_wqthread + 0
Thread 0 crashed with ARM Thread State (64-bit):
x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000
x4: 0x0000000000000000 x5: 0x000000000000001a x6: 0x0000000000000000 x7: 0x0000000000000001
x8: 0xefad173ca1fc99d9 x9: 0xefad173da1cc1c59 x10: 0xcccccccccccccccd x11: 0x000000000000000a
x12: 0x0000000000000000 x13: 0x0000000000000033 x14: 0x0000020000011000 x15: 0x0000000207ff5430
x16: 0x0000000000000148 x17: 0x00000002096b7640 x18: 0x0000000000000000 x19: 0x0000000000000006
x20: 0x0000000100308580 x21: 0x0000000000000103 x22: 0x0000000100308660 x23: 0xffffffffffffffff
x24: 0x00000002057ae000 x25: 0x000000020598e000 x26: 0x00000001b823b482 x27: 0x0000600003ba4cf0
x28: 0x0000000000000005 fp: 0x000000016fbc4e30 lr: 0x00000001af849ee0
sp: 0x000000016fbc4e10 pc: 0x00000001af814db8 cpsr: 0x40001000
far: 0x0000000109ad8000 esr: 0x56000080 Address size fault
Binary Images:
0x1af80b000 - 0x1af842fff libsystem_kernel.dylib (*) <1d7b3b8e-75a1-34ea-aa52-9f7c23155c55> /usr/lib/system/libsystem_kernel.dylib
0x1af843000 - 0x1af84ffff libsystem_pthread.dylib (*) <cee8bc77-6923-34d9-89a3-6f8f7279605e> /usr/lib/system/libsystem_pthread.dylib
0x1af70a000 - 0x1af78bfff libsystem_c.dylib (*) <fd566a15-42d8-314a-a99a-b59237ddf5bc> /usr/lib/system/libsystem_c.dylib
0x1b813e000 - 0x1b827ffff com.apple.Metal (261.13) <da3fd743-76d9-351c-9bb0-17027a76b6df> /System/Library/Frameworks/Metal.framework/Versions/A/Metal
0x212e7f000 - 0x21383dfff com.apple.MetalPerformanceShadersGraph (1.0) <92c3cdaa-7900-3e11-b107-3c1fb787fa04> /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
0x10b294000 - 0x10ff6bfff libtorch_cpu.dylib (*) <7d4fb975-9f75-386e-80ff-70f320c9ef36> /Users/USER/*/libtorch_cpu.dylib
0x1af682000 - 0x1af6c8fff libdispatch.dylib (*) <dc048e3b-e023-3d17-afe5-4ff3dc625608> /usr/lib/system/libdispatch.dylib
0x101f9c000 - 0x102bb3fff libtorch_python.dylib (*) <ef946875-fb71-3128-b8ba-2a9b843a02af> /Users/USER/*/libtorch_python.dylib
0x1007b0000 - 0x100a73fff org.python.python (3.9.12, (c) 2001-2021 Python Software Foundation.) <07c6f0c1-bf5f-3eb1-87a3-9ea5c96126e5> /opt/homebrew/*/Python.framework/Versions/3.9/Python
0x100290000 - 0x1002effff dyld (*) <fbb89662-e6f2-3434-b542-f75185ac5e74> /usr/lib/dyld
0x1b23d1000 - 0x1b23e0fff libsystem_notify.dylib (*) <5ff2da89-8a88-34bb-aa68-ba9c5d24e639> /usr/lib/system/libsystem_notify.dylib
0x1b49b8000 - 0x1b49cffff libsystem_asl.dylib (*) <6b2f4a2f-2c36-3d5d-87f0-9b6bbae5560c> /usr/lib/system/libsystem_asl.dylib
0x1b62f3000 - 0x1b62f5fff libDiagnosticMessagesClient.dylib (*) <28ab09cf-5f67-3012-bc5b-1387122c2c81> /usr/lib/libDiagnosticMessagesClient.dylib
0x1b22fa000 - 0x1b23d0fff com.apple.framework.IOKit (2.0.2) <1d3a63f1-1f43-3ead-9815-fa086cbeda27> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x1ca2bc000 - 0x1ca2e9fff com.apple.IOGPU (35.29) <51396341-67fd-333d-a043-99baada3f29d> /System/Library/PrivateFrameworks/IOGPU.framework/Versions/A/IOGPU
0x106638000 - 0x10787ffff libopenblas64_.0.dylib (*) <0672674f-0fbf-3245-8fec-358e42c77be2> /Users/USER/*/libopenblas64_.0.dylib
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=925.4M resident=0K(0%) swapped_out_or_unallocated=925.4M(100%)
Writable regions: Total=1.9G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=1.9G(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 256K 1
Kernel Alloc Once 32K 1
MALLOC 280.2M 42
MALLOC guard page 96K 5
MALLOC_MEDIUM (reserved) 960.0M 8 reserved VM address space (unallocated)
MALLOC_NANO (reserved) 384.0M 1 reserved VM address space (unallocated)
STACK GUARD 208K 13
Stack 22.4M 13
VM_ALLOCATE 118.0M 93
VM_ALLOCATE (reserved) 160.0M 1 reserved VM address space (unallocated)
__AUTH 1482K 92
__AUTH_CONST 6219K 212
__DATA 9296K 254
__DATA_CONST 8913K 265
__DATA_DIRTY 342K 83
__FONT_DATA 4K 1
__LINKEDIT 644.7M 57
__OBJC_CONST 607K 72
__OBJC_RO 82.9M 1
__OBJC_RW 3152K 1
__TEXT 280.7M 275
__UNICODE 592K 1
dyld private memory 1024K 1
mapped file 40.8M 7
shared memory 624K 7
=========== ======= =======
TOTAL 2.9G 1507
TOTAL, minus reserved VM space 1.5G 1507
-----------
Full Report
-----------
{"app_name":"Python","timestamp":"2022-05-18 22:44:20.00 -0300","app_version":"3.9.12","slice_uuid":"f237681b-772a-3109-bcee-605ef4cfa3b2","build_version":"3.9.12","platform":1,"bundleID":"org.python.python","share_with_app_devs":0,"is_first_party":0,"bug_type":"309","os_version":"macOS 12.3.1 (21E258)","incident_id":"9DE0A56B-93FA-438A-9D5F-20655743C1A4","name":"Python"}
{
"uptime" : 74000,
"procLaunch" : "2022-05-18 22:44:20.3202 -0300",
"procRole" : "Unspecified",
"version" : 2,
"userID" : 501,
"deployVersion" : 210,
"modelCode" : "MacBookAir10,1",
"procStartAbsTime" : 1778018614835,
"coalitionID" : 881,
"osVersion" : {
"train" : "macOS 12.3.1",
"build" : "21E258",
"releaseType" : "User"
},
"captureTime" : "2022-05-18 22:44:20.6603 -0300",
"incident" : "9DE0A56B-93FA-438A-9D5F-20655743C1A4",
"bug_type" : "309",
"pid" : 48029,
"procExitAbsTime" : 1778026745864,
"translated" : false,
"cpuType" : "ARM-64",
"procName" : "Python",
"procPath" : "\/opt\/homebrew\/*\/Python.framework\/Versions\/3.9\/Resources\/Python.app\/Contents\/MacOS\/Python",
"bundleInfo" : {"CFBundleShortVersionString":"3.9.12","CFBundleVersion":"3.9.12","CFBundleIdentifier":"org.python.python"},
"storeInfo" : {"deviceIdentifierForVendor":"8609997B-9AB4-5133-A7E8-6B9CC49395B0","thirdParty":true},
"parentProc" : "fish",
"parentPid" : 22773,
"coalitionName" : "com.googlecode.iterm2",
"crashReporterKey" : "37559CD1-BBA5-46C4-92C9-52CBDEDD1C5E",
"responsiblePid" : 74184,
"responsibleProc" : "iTerm2",
"wakeTime" : 4829,
"sleepWakeUUID" : "0D6A0AE4-267E-4E25-8625-6F3AD3D4E3F6",
"sip" : "enabled",
"isCorpse" : 1,
"exception" : {"codes":"0x0000000000000000, 0x0000000000000000","rawCodes":[0,0],"type":"EXC_CRASH","signal":"SIGABRT"},
"asi" : {"libsystem_c.dylib":["\/AppleInternal\/Library\/BuildRoots\/560148d7-a559-11ec-8c96-4add460b61a6\/Library\/Caches\/com.apple.xbs\/Sources\/MetalPerformanceShadersGraph\/mpsgraph\/MetalPerformanceShadersGraph\/Core\/Files\/Operations\/MPSGraphConvolutionOps.mm:346: failed assertion `sourceTensor rank must be 4'"]},
"extMods" : {"caller":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"system":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"targeted":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"warnings":0},
"faultingThread" : 0,
"threads" : [{"triggered":true,"id":1594697,"threadState":{"x":[{"value":0},{"value":0},{"value":0},{"value":0},{"value":0},{"value":26},{"value":0},{"value":1},{"value":17270485695218883033},{"value":17270485699510672473},{"value":14757395258967641293},{"value":10},{"value":0},{"value":51},{"value":2199023325184},{"value":8724108336,"symbolLocation":0,"symbol":"OBJC_CLASS_$___NSCFString"},{"value":328},{"value":8747972160},{"value":0},{"value":6},{"value":4298147200,"symbolLocation":0,"symbol":"_main_thread"},{"value":259},{"value":4298147424,"symbolLocation":224,"symbol":"_main_thread"},{"value":18446744073709551615},{"value":8681873408,"symbolLocation":3968,"symbol":"StructLocks"},{"value":8683839488},{"value":7384315010,"symbolLocation":8023,"symbol":"_MTLRequestHashToString(MTLUINT256_t)::hexChars"},{"value":105553178807536},{"value":5}],"flavor":"ARM_THREAD_STATE64","lr":{"value":7239671520},"cpsr":{"value":1073745920},"fp":{"value":6169579056},"sp":{"value":6169579024},"esr":{"value":1442840704,"description":" Address size fault"},"pc":{"value":7239454136,"matchesCrashFrame":1},"far":{"value":4457332736}},"queue":"cache queue","frames":[{"imageOffset":40376,"symbol":"__pthread_kill","symbolLocation":8,"imageIndex":0},{"imageOffset":28384,"symbol":"pthread_kill","symbolLocation":288,"imageIndex":1},{"imageOffset":500544,"symbol":"abort","symbolLocation":168,"imageIndex":2},{"imageOffset":497492,"symbol":"__assert_rtn","symbolLocation":272,"imageIndex":2},{"imageOffset":927656,"symbol":"MTLReportFailure.cold.1","symbolLocation":56,"imageIndex":3},{"imageOffset":836292,"symbol":"MTLReportFailure","symbolLocation":480,"imageIndex":3},{"imageOffset":1388424,"imageIndex":4},{"imageOffset":57760572,"symbol":"invocation function for block in at::native::_mps_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)","symbolLocation":540,"imageIndex":5},{"imageOffset":57717984,"symbol":"invocation function for block in at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, at::native::mps::MPSCachedGraph* () block_pointer)","symbolLocation":216,"imageIndex":5},{"imageOffset":16840,"symbol":"_dispatch_client_callout","symbolLocation":20,"imageIndex":6},{"imageOffset":78868,"symbol":"_dispatch_lane_barrier_sync_invoke_and_complete","symbolLocation":56,"imageIndex":6},{"imageOffset":57685556,"symbol":"at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, at::native::mps::MPSCachedGraph* () block_pointer)","symbolLocation":160,"imageIndex":5},{"imageOffset":57756316,"symbol":"at::native::_mps_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)","symbolLocation":3040,"imageIndex":5},{"imageOffset":10953920,"symbol":"at::_ops::_mps_convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)","symbolLocation":360,"imageIndex":5},{"imageOffset":2706836,"symbol":"at::native::_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long, 
bool, bool, bool, bool)","symbolLocation":12668,"imageIndex":5},{"imageOffset":9232500,"symbol":"at::_ops::_convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long, bool, bool, bool, bool)","symbolLocation":436,"imageIndex":5},{"imageOffset":2663312,"symbol":"at::native::convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long)","symbolLocation":288,"imageIndex":5},{"imageOffset":9230244,"symbol":"at::_ops::convolution::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long)","symbolLocation":188,"imageIndex":5},{"imageOffset":32588780,"symbol":"c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long), &(torch::autograd::VariableType::(anonymous namespace)::convolution(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long)","symbolLocation":1984,"imageIndex":5},{"imageOffset":9229312,"symbol":"at::_ops::convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, bool, c10::ArrayRef<long long>, long long)","symbolLocation":376,"imageIndex":5},{"imageOffset":2657556,"symbol":"at::native::conv3d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)","symbolLocation":620,"imageIndex":5},{"imageOffset":13451912,"symbol":"at::_ops::conv3d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long long>, c10::ArrayRef<long long>, c10::ArrayRef<long long>, long long)","symbolLocation":360,"imageIndex":5},{"imageOffset":3886504,"symbol":"torch::autograd::THPVariable_conv3d(_object*, _object*, 
_object*)","symbolLocation":1092,"imageIndex":7},{"imageOffset":714988,"symbol":"cfunction_call","symbolLocation":60,"imageIndex":8},{"imageOffset":391000,"symbol":"_PyObject_MakeTpCall","symbolLocation":132,"imageIndex":8},{"imageOffset":1381012,"symbol":"_PyEval_EvalFrameDefault","symbolLocation":28164,"imageIndex":8},{"imageOffset":393884,"symbol":"_PyFunction_Vectorcall","symbolLocation":184,"imageIndex":8},{"imageOffset":404720,"symbol":"method_vectorcall","symbolLocation":124,"imageIndex":8},{"imageOffset":1376468,"symbol":"_PyEval_EvalFrameDefault","symbolLocation":23620,"imageIndex":8},{"imageOffset":393884,"symbol":"_PyFunction_Vectorcall","symbolLocation":184,"imageIndex":8},{"imageOffset":404884,"symbol":"method_vectorcall","symbolLocation":288,"imageIndex":8},{"imageOffset":1355600,"symbol":"_PyEval_EvalFrameDefault","symbolLocation":2752,"imageIndex":8},{"imageOffset":1348752,"symbol":"_PyEval_EvalCode","symbolLocation":452,"imageIndex":8},{"imageOffset":394048,"symbol":"_PyFunction_Vectorcall","symbolLocation":348,"imageIndex":8},{"imageOffset":391640,"symbol":"_PyObject_FastCallDictTstate","symbolLocation":96,"imageIndex":8},{"imageOffset":860600,"symbol":"slot_tp_call","symbolLocation":188,"imageIndex":8},{"imageOffset":391000,"symbol":"_PyObject_MakeTpCall","symbolLocation":132,"imageIndex":8},{"imageOffset":1376896,"symbol":"_PyEval_EvalFrameDefault","symbolLocation":24048,"imageIndex":8},{"imageOffset":1348752,"symbol":"_PyEval_EvalCode","symbolLocation":452,"imageIndex":8},{"imageOffset":1707808,"symbol":"run_eval_code_obj","symbolLocation":136,"imageIndex":8},{"imageOffset":1707600,"symbol":"run_mod","symbolLocation":112,"imageIndex":8},{"imageOffset":1696868,"symbol":"pyrun_file","symbolLocation":168,"imageIndex":8},{"imageOffset":1695108,"symbol":"pyrun_simple_file","symbolLocation":252,"imageIndex":8},{"imageOffset":1694792,"symbol":"PyRun_SimpleFileExFlags","symbolLocation":80,"imageIndex":8},{"imageOffset":1828844,"symbol":"pymain_run_file","symbolLocation":320,"imageIndex":8},{"imageOffset":1826604,"symbol":"Py_RunMain","symbolLocation":876,"imageIndex":8},{"imageOffset":1831940,"symbol":"Py_BytesMain","symbolLocation":40,"imageIndex":8},{"imageOffset":20616,"symbol":"start","symbolLocation":516,"imageIndex":9}]},{"id":1594698,"queue":"com.apple.root.utility-qos","frames":[{"imageOffset":6352,"symbol":"mach_msg_trap","symbolLocation":8,"imageIndex":0},{"imageOffset":7488,"symbol":"mach_msg","symbolLocation":76,"imageIndex":0},{"imageOffset":7448,"symbol":"notify_register_check","symbolLocation":1112,"imageIndex":10},{"imageOffset":10564,"symbol":"_asl_notify_open","symbolLocation":140,"imageIndex":11},{"imageOffset":5016,"symbol":"asl_open","symbolLocation":44,"imageIndex":11},{"imageOffset":8080,"symbol":"msgtracer_client","symbolLocation":52,"imageIndex":12},{"imageOffset":8740,"symbol":"msgtracer_vlog_with_keys_skip_nulls","symbolLocation":428,"imageIndex":12},{"imageOffset":8300,"symbol":"msgtracer_log_with_keys","symbolLocation":36,"imageIndex":12},{"imageOffset":43044,"symbol":"__createContextTelemetryDataWithQueueLabelAndCallstack_block_invoke","symbolLocation":920,"imageIndex":3},{"imageOffset":9732,"symbol":"_dispatch_call_block_and_release","symbolLocation":32,"imageIndex":6},{"imageOffset":16840,"symbol":"_dispatch_client_callout","symbolLocation":20,"imageIndex":6},{"imageOffset":88580,"symbol":"_dispatch_root_queue_drain","symbolLocation":680,"imageIndex":6},{"imageOffset":90372,"symbol":"_dispatch_worker_thread2","symbolLocation":164,"imageIndex":6},
{"imageOffset":13092,"symbol":"_pthread_wqthread","symbolLocation":228,"imageIndex":1},{"imageOffset":8320,"symbol":"start_wqthread","symbolLocation":8,"imageIndex":1}]},{"id":1594699,"queue":"com.Metal.CommandQueueDispatch","frames":[{"imageOffset":6352,"symbol":"mach_msg_trap","symbolLocation":8,"imageIndex":0},{"imageOffset":7488,"symbol":"mach_msg","symbolLocation":76,"imageIndex":0},{"imageOffset":14504,"symbol":"io_connect_method","symbolLocation":512,"imageIndex":13},{"imageOffset":13888,"symbol":"IOConnectCallMethod","symbolLocation":176,"imageIndex":13},{"imageOffset":82908,"symbol":"IOGPUCommandQueueSubmitCommandBuffers","symbolLocation":144,"imageIndex":14},{"imageOffset":28664,"symbol":"-[IOGPUMetalCommandQueue _submitCommandBuffers:count:]","symbolLocation":820,"imageIndex":14},{"imageOffset":27800,"symbol":"-[IOGPUMetalCommandQueue submitCommandBuffers:count:]","symbolLocation":88,"imageIndex":14},{"imageOffset":130572,"symbol":"-[_MTLCommandQueue _submitAvailableCommandBuffers]","symbolLocation":684,"imageIndex":3},{"imageOffset":16840,"symbol":"_dispatch_client_callout","symbolLocation":20,"imageIndex":6},{"imageOffset":30320,"symbol":"_dispatch_continuation_pop","symbolLocation":500,"imageIndex":6},{"imageOffset":108768,"symbol":"_dispatch_source_invoke","symbolLocation":1596,"imageIndex":6},{"imageOffset":46980,"symbol":"_dispatch_lane_serial_drain","symbolLocation":376,"imageIndex":6},{"imageOffset":50180,"symbol":"_dispatch_lane_invoke","symbolLocation":392,"imageIndex":6},{"imageOffset":93336,"symbol":"_dispatch_workloop_worker_thread","symbolLocation":648,"imageIndex":6},{"imageOffset":13152,"symbol":"_pthread_wqthread","symbolLocation":288,"imageIndex":1},{"imageOffset":8320,"symbol":"start_wqthread","symbolLocation":8,"imageIndex":1}]},{"id":1594700,"frames":[{"imageOffset":23084,"symbol":"__gettimeofday","symbolLocation":12,"imageIndex":0},{"imageOffset":34664,"symbol":"gettimeofday","symbolLocation":72,"imageIndex":2},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594701,"frames":[{"imageOffset":7980,"symbol":"mach_absolute_time","symbolLocation":108,"imageIndex":0},{"imageOffset":14720,"symbol":"__commpage_gettimeofday_internal","symbolLocation":44,"imageIndex":0},{"imageOffset":34648,"symbol":"gettimeofday","symbolLocation":56,"imageIndex":2},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594702,"frames":[{"imageOffset":7980,"symbol":"mach_absolute_time","symbolLocation":108,"imageIndex":0},{"imageOffset":14720,"symbol":"__commpage_gettimeofday_internal","symbolLocation":44,"imageIndex":0},{"imageOffset":34648,"symbol":"gettimeofday","symbolLocation":56,"imageIndex":2},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594703,"frames":[{"imageOffset":1478284,"symbol":"blas_thread_server","symbolLocation":252,"imageIndex":15},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29
292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594704,"frames":[{"imageOffset":14768,"symbol":"__commpage_gettimeofday_internal","symbolLocation":92,"imageIndex":0},{"imageOffset":14720,"symbol":"__commpage_gettimeofday_internal","symbolLocation":44,"imageIndex":0},{"imageOffset":34648,"symbol":"gettimeofday","symbolLocation":56,"imageIndex":2},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594705,"frames":[{"imageOffset":14768,"symbol":"__commpage_gettimeofday_internal","symbolLocation":92,"imageIndex":0},{"imageOffset":14720,"symbol":"__commpage_gettimeofday_internal","symbolLocation":44,"imageIndex":0},{"imageOffset":34648,"symbol":"gettimeofday","symbolLocation":56,"imageIndex":2},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594706,"frames":[{"imageOffset":7980,"symbol":"mach_absolute_time","symbolLocation":108,"imageIndex":0},{"imageOffset":14720,"symbol":"__commpage_gettimeofday_internal","symbolLocation":44,"imageIndex":0},{"imageOffset":34648,"symbol":"gettimeofday","symbolLocation":56,"imageIndex":2},{"imageOffset":1478308,"symbol":"blas_thread_server","symbolLocation":276,"imageIndex":15},{"imageOffset":29292,"symbol":"_pthread_start","symbolLocation":148,"imageIndex":1},{"imageOffset":8332,"symbol":"thread_start","symbolLocation":8,"imageIndex":1}]},{"id":1594707,"frames":[{"imageOffset":8312,"symbol":"start_wqthread","symbolLocation":0,"imageIndex":1}]},{"id":1594708,"frames":[{"imageOffset":8312,"symbol":"start_wqthread","symbolLocation":0,"imageIndex":1}]},{"id":1594709,"frames":[{"imageOffset":8312,"symbol":"start_wqthread","symbolLocation":0,"imageIndex":1}]}],
"usedImages" : [
{
"source" : "P",
"arch" : "arm64e",
"base" : 7239413760,
"size" : 229376,
"uuid" : "1d7b3b8e-75a1-34ea-aa52-9f7c23155c55",
"path" : "\/usr\/lib\/system\/libsystem_kernel.dylib",
"name" : "libsystem_kernel.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7239643136,
"size" : 53248,
"uuid" : "cee8bc77-6923-34d9-89a3-6f8f7279605e",
"path" : "\/usr\/lib\/system\/libsystem_pthread.dylib",
"name" : "libsystem_pthread.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7238361088,
"size" : 532480,
"uuid" : "fd566a15-42d8-314a-a99a-b59237ddf5bc",
"path" : "\/usr\/lib\/system\/libsystem_c.dylib",
"name" : "libsystem_c.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7383277568,
"CFBundleShortVersionString" : "261.13",
"CFBundleIdentifier" : "com.apple.Metal",
"size" : 1318912,
"uuid" : "da3fd743-76d9-351c-9bb0-17027a76b6df",
"path" : "\/System\/Library\/Frameworks\/Metal.framework\/Versions\/A\/Metal",
"name" : "Metal",
"CFBundleVersion" : "261.13"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 8907124736,
"CFBundleShortVersionString" : "1.0",
"CFBundleIdentifier" : "com.apple.MetalPerformanceShadersGraph",
"size" : 10219520,
"uuid" : "92c3cdaa-7900-3e11-b107-3c1fb787fa04",
"path" : "\/System\/Library\/Frameworks\/MetalPerformanceShadersGraph.framework\/Versions\/A\/MetalPerformanceShadersGraph",
"name" : "MetalPerformanceShadersGraph",
"CFBundleVersion" : "1"
},
{
"source" : "P",
"arch" : "arm64",
"base" : 4482220032,
"size" : 80576512,
"uuid" : "7d4fb975-9f75-386e-80ff-70f320c9ef36",
"path" : "\/Users\/USER\/*\/libtorch_cpu.dylib",
"name" : "libtorch_cpu.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7237804032,
"size" : 290816,
"uuid" : "dc048e3b-e023-3d17-afe5-4ff3dc625608",
"path" : "\/usr\/lib\/system\/libdispatch.dylib",
"name" : "libdispatch.dylib"
},
{
"source" : "P",
"arch" : "arm64",
"base" : 4328112128,
"size" : 12681216,
"uuid" : "ef946875-fb71-3128-b8ba-2a9b843a02af",
"path" : "\/Users\/USER\/*\/libtorch_python.dylib",
"name" : "libtorch_python.dylib"
},
{
"source" : "P",
"arch" : "arm64",
"base" : 4303028224,
"CFBundleShortVersionString" : "3.9.12, (c) 2001-2021 Python Software Foundation.",
"CFBundleIdentifier" : "org.python.python",
"size" : 2899968,
"uuid" : "07c6f0c1-bf5f-3eb1-87a3-9ea5c96126e5",
"path" : "\/opt\/homebrew\/*\/Python.framework\/Versions\/3.9\/Python",
"name" : "Python",
"CFBundleVersion" : "3.9.12"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 4297654272,
"size" : 393216,
"uuid" : "fbb89662-e6f2-3434-b542-f75185ac5e74",
"path" : "\/usr\/lib\/dyld",
"name" : "dyld"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7285313536,
"size" : 65536,
"uuid" : "5ff2da89-8a88-34bb-aa68-ba9c5d24e639",
"path" : "\/usr\/lib\/system\/libsystem_notify.dylib",
"name" : "libsystem_notify.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7325057024,
"size" : 98304,
"uuid" : "6b2f4a2f-2c36-3d5d-87f0-9b6bbae5560c",
"path" : "\/usr\/lib\/system\/libsystem_asl.dylib",
"name" : "libsystem_asl.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7351513088,
"size" : 12288,
"uuid" : "28ab09cf-5f67-3012-bc5b-1387122c2c81",
"path" : "\/usr\/lib\/libDiagnosticMessagesClient.dylib",
"name" : "libDiagnosticMessagesClient.dylib"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7284432896,
"CFBundleShortVersionString" : "2.0.2",
"CFBundleIdentifier" : "com.apple.framework.IOKit",
"size" : 880640,
"uuid" : "1d3a63f1-1f43-3ead-9815-fa086cbeda27",
"path" : "\/System\/Library\/Frameworks\/IOKit.framework\/Versions\/A\/IOKit",
"name" : "IOKit"
},
{
"source" : "P",
"arch" : "arm64e",
"base" : 7686832128,
"CFBundleShortVersionString" : "35.29",
"CFBundleIdentifier" : "com.apple.IOGPU",
"size" : 188416,
"uuid" : "51396341-67fd-333d-a043-99baada3f29d",
"path" : "\/System\/Library\/PrivateFrameworks\/IOGPU.framework\/Versions\/A\/IOGPU",
"name" : "IOGPU",
"CFBundleVersion" : "35.29"
},
{
"source" : "P",
"arch" : "arm64",
"base" : 4402151424,
"size" : 19169280,
"uuid" : "0672674f-0fbf-3245-8fec-358e42c77be2",
"path" : "\/Users\/USER\/*\/libopenblas64_.0.dylib",
"name" : "libopenblas64_.0.dylib"
}
],
"sharedCache" : {
"base" : 7236354048,
"size" : 3136438272,
"uuid" : "1df3dfc1-141a-35d0-a4e5-f1e113894c6e"
},
"vmSummary" : "ReadOnly portion of Libraries: Total=925.4M resident=0K(0%) swapped_out_or_unallocated=925.4M(100%)\nWritable regions: Total=1.9G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=1.9G(100%)\n\n VIRTUAL REGION \nREGION TYPE SIZE COUNT (non-coalesced) \n=========== ======= ======= \nActivity Tracing 256K 1 \nKernel Alloc Once 32K 1 \nMALLOC 280.2M 42 \nMALLOC guard page 96K 5 \nMALLOC_MEDIUM (reserved) 960.0M 8 reserved VM address space (unallocated)\nMALLOC_NANO (reserved) 384.0M 1 reserved VM address space (unallocated)\nSTACK GUARD 208K 13 \nStack 22.4M 13 \nVM_ALLOCATE 118.0M 93 \nVM_ALLOCATE (reserved) 160.0M 1 reserved VM address space (unallocated)\n__AUTH 1482K 92 \n__AUTH_CONST 6219K 212 \n__DATA 9296K 254 \n__DATA_CONST 8913K 265 \n__DATA_DIRTY 342K 83 \n__FONT_DATA 4K 1 \n__LINKEDIT 644.7M 57 \n__OBJC_CONST 607K 72 \n__OBJC_RO 82.9M 1 \n__OBJC_RW 3152K 1 \n__TEXT 280.7M 275 \n__UNICODE 592K 1 \ndyld private memory 1024K 1 \nmapped file 40.8M 7 \nshared memory 624K 7 \n=========== ======= ======= \nTOTAL 2.9G 1507 \nTOTAL, minus reserved VM space 1.5G 1507 \n",
"legacyInfo" : {
"threadTriggered" : {
"queue" : "cache queue"
}
},
"trialInfo" : {
"rollouts" : [
{
"rolloutId" : "607844aa04477260f58a8077",
"factorPackIds" : {
"SIRI_MORPHUN_ASSETS" : "6103050cbfe6dc472e1c982a"
},
"deploymentId" : 240000066
},
{
"rolloutId" : "61301e3a61217b3110231469",
"factorPackIds" : {
"SIRI_FIND_MY_CONFIGURATION_FILES" : "6216ae152a40e71046e16225"
},
"deploymentId" : 240000016
}
],
"experiments" : [
]
}
}
Model: MacBookAir10,1, BootROM 7459.101.3, proc 8:4:4 processors, 8 GB, SMC
Graphics: Apple M1, Apple M1, Built-In
Display: Color LCD, 2560 x 1600 Retina, Main, MirrorOff, Online
Memory Module: LPDDR4
AirPort: Wi-Fi, wl0: Feb 8 2022 01:44:45 version 18.60.21.0.7.8.126 FWID 01-1cdae627
Bluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
USB Device: USB31Bus
USB Device: USB31Bus
Thunderbolt Bus: MacBook Air, Apple Inc.
Thunderbolt Bus: MacBook Air, Apple Inc.
```
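Until `Conv3d` is implemented for MPS, one stop-gap (a sketch, assuming a CPU round-trip is acceptable for this layer) is to run the 3-D convolution on the CPU and move only the result back; setting `PYTORCH_ENABLE_MPS_FALLBACK=1` before importing torch is a related alternative.
```python
import torch

x = torch.randn(1, 10, 10, 10, device="mps")
c = torch.nn.Conv3d(1, 1, 3)      # keep the module (and its weights) on the CPU
y = c(x.cpu()).to("mps")          # move only the result back to the MPS device
```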
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0.dev20220518
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.3)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.12 (main, May 8 2022, 17:57:49) [Clang 13.1.6 (clang-1316.0.21.2)] (64-bit runtime)
Python platform: macOS-12.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.1
[pip3] torch==1.12.0.dev20220518
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 16 |
5,680 | 77,814 |
`addmv, mv` will trigger INTERNAL ASSERT FAILED when the input requires grad
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
`addmv` will trigger an INTERNAL ASSERT FAILED error when the input requires grad:
```python
import torch
def addmv(input):
    mat = torch.randint(-1, 1, [2, 3], dtype=torch.int8)
    vec = torch.randint(-1, 1, [3], dtype=torch.int8)
    return torch.addmv(input, mat, vec)
input_normal = torch.rand([2], dtype=torch.float32)
addmv(input_normal)
print('normal pass')
# normal pass
input_with_grad = input_normal.clone().requires_grad_()
addmv(input_with_grad)
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
`mv` will trigger an INTERNAL ASSERT FAILED error when the input requires grad:
```python
import torch
vec = torch.randint(0, 8, [10], dtype=torch.uint8)
tensor = torch.rand([0, 10], dtype=torch.complex64, requires_grad=True)
torch.mv(tensor, vec)
# RuntimeError: isDifferentiableType(variable.scalar_type()) INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1654585902583/work/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
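A workaround sketch building on the first snippet above: cast the integer operands to the input's floating-point dtype so autograd only ever sees differentiable tensors (`addmv_float` is just an illustrative name):
```python
def addmv_float(input):
    mat = torch.randint(-1, 1, [2, 3], dtype=torch.int8).to(input.dtype)
    vec = torch.randint(-1, 1, [3], dtype=torch.int8).to(input.dtype)
    return torch.addmv(input, mat, vec)

addmv_float(input_with_grad)  # no internal assert; differentiable w.r.t. input
```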
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,681 | 77,812 |
FSDP should work for model outputs that are dataclasses
|
oncall: distributed, triaged, pt_distributed_rampup, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP currently supports dict/OrderedDict, tuple, and list outputs of tensors, but we've seen use cases that also want to output `dataclass` instances from their model. This currently breaks in `_apply_to_tensors` when registering backward hooks; we should add support for dataclasses.
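Roughly, the request is for the output traversal to treat dataclasses like dicts. A self-contained illustration of the idea (not the actual FSDP `_apply_to_tensors` code, and assuming the dataclass fields are all init-able so `dataclasses.replace` can rebuild the instance):
```python
import dataclasses
import torch

def apply_to_tensors(fn, obj):
    if torch.is_tensor(obj):
        return fn(obj)
    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
        # Recurse into every field, then rebuild the dataclass instance.
        changes = {f.name: apply_to_tensors(fn, getattr(obj, f.name))
                   for f in dataclasses.fields(obj)}
        return dataclasses.replace(obj, **changes)
    if isinstance(obj, (list, tuple)):
        return type(obj)(apply_to_tensors(fn, o) for o in obj)
    if isinstance(obj, dict):
        return {k: apply_to_tensors(fn, v) for k, v in obj.items()}
    return obj
```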
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,682 | 77,808 |
`Could not start gRPC server` flakiness in XLA tests
|
triaged, module: xla
|
For some examples, see [here](https://hud.pytorch.org/failure/RuntimeError%3A%20tensorflow%2Fcompiler%2Fxla%2Fxla_client%2Fxrt_local_service.cc%3A56%20%3A%20Check%20failed%3A%20tensorflow%3A%3ANewServer(server_def%2C%20%26server_)%20%3D%3D%20%3A%3Atensorflow%3A%3AStatus%3A%3AOK()%20(UNKNOWN%3A%20Could%20not%20start%20gRPC%20server%20vs.%20OK)).
Can we add some retries or something to this test?
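One shape such a retry could take (purely a sketch; the decorator below is hypothetical, not an existing test utility):
```python
import functools
import time

def retry_on_grpc_clash(times=3, delay=1.0):
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except RuntimeError as e:
                    if "Could not start gRPC server" not in str(e) or attempt == times - 1:
                        raise
                    time.sleep(delay)  # back off and retry once the port frees up
        return wrapper
    return deco
```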
cc @bdhirsh
| 10 |
5,683 | 77,801 |
`torch.utils.benchmark.examples.blas_compare` cannot be parsed by the Python 3.7 runtime
|
triaged, module: benchmark
|
### 🐛 Describe the bug
An attempt to run `python3 -c "import torch.utils.benchmark.examples.blas_compare"` fails with:
```
File "/opt/conda/lib/python3.7/site-packages/torch/utils/benchmark/examples/blas_compare.py", line 35, in <module>
_WORKER_POOL: queue.Queue[Tuple[str, str, int]] = queue.Queue()
TypeError: 'type' object is not subscriptable
```
See [this](https://github.com/pytorch/pytorch/runs/6493890839?check_suite_focus=true) for an example.
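For context, the usual fix for this class of failure on Python 3.7 is to avoid subscripting `queue.Queue` at runtime, e.g. by quoting the annotation (or adding `from __future__ import annotations` at the top of the file):
```python
import queue
from typing import Tuple

# The quoted annotation is never evaluated at runtime, so Python 3.7 accepts it.
_WORKER_POOL: "queue.Queue[Tuple[str, str, int]]" = queue.Queue()
```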
### Versions
nightly
| 0 |
5,684 | 77,764 |
General MPS op coverage tracking issue
|
feature, triaged, module: mps
|
### This issue is to have a centralized place to list and track work on adding support for new ops in the MPS backend.
[**MPS operators coverage matrix**](https://qqaatw.github.io/pytorch-mps-ops-coverage/) - The matrix covers most of the supported operators but is not exhaustive. Before you comment below, please take a look at this matrix to make sure the operator you're requesting has not been implemented in nightly. More details can be found on the [readme](https://github.com/qqaatw/pytorch-mps-ops-coverage).
There are a very large number of operators in PyTorch, so not all of them are implemented yet for the MPS backend, which is still in the prototype phase. We will be prioritizing adding new operators based on user feedback. If possible, please also provide a link to the network or use case where the op is used.
If you want to work on adding support for such an op, feel free to comment below to get assigned one. Please avoid picking up an op that is already being worked on or that already has a PR associated with it.
[Link to the wiki for details](https://github.com/pytorch/pytorch/wiki/MPS-Backend) on how to add these ops and example PRs.
**Good First Issue:**
Below is a list of ops that are good starting points for adding operations to the MPS backend. Please consider picking them up.
- [ ] `nn.Conv3D`
- [ ] `aten::_weight_norm_interface`
- [ ] `aten::max_unpool2d`
- [ ] `aten::cummin.out`, `aten::cummax.out`
- [ ] `aten::upsample_linear1d.out`
- [ ] `aten::lerp.Scalar_out`
- [ ] `aten::renorm`
**Not categorized:**
These are the ops which have not yet been picked up and need an MPS implementation.
- [ ] `aten::slow_conv3d_forward`
- [ ] `aten::_ctc_loss`
- [ ] `aten::avg_pool3d.out`
- [ ] `aten::linalg_qr.out`
- [ ] `aten::multilabel_margin_loss_forward`
- [ ] `aten::unique_dim`
- [ ] `aten::_sample_dirichlet`
- [ ] `aten::_fft_r2c`
- [ ] `aten::upsample_bicubic2d.out`
- [ ] `aten::linalg_inv_out_helper`
- [ ] `aten::bucketize`
- [ ] `aten::_embedding_bag`
- [ ] `aten::_standard_gamma`
- [ ] `aten::_upsample_bicubic2d_aa.out`
- [ ] `aten::_symeig_helper`
- [ ] `aten::linalg_matrix_exp`
- [ ] `aten::_nested_tensor_from_mask`
- [ ] `aten::randperm.generator_out`
- [ ] `aten::_fused_sdp_choice`
- [ ] `aten::linalg_cholesky_ex`
- [ ] `aten::scatter_reduce.two_out`
- [ ] `aten::kthvalue.values`
- [ ] `aten::_linalg_solve_ex.result`
- [ ] `aten::grid_sampler_2d_backward`
**WIP:**
- [ ] `max_pool3d` https://github.com/pytorch/pytorch/pull/102148
- [ ] `aten::kl_div_backward` (not needed)
**Implemented Ops:**
Ops that have MPS backend implementations.
See [**MPS operators coverage matrix**](https://qqaatw.github.io/pytorch-mps-ops-coverage/) and the [readme](https://github.com/qqaatw/pytorch-mps-ops-coverage) for more details.
<details>
<summary>deprecated list</summary>
- [x] `aten::histc` #96652
- [x] `pow.Scalar_out` (@qqaatw )
- [x] `aten::log_sigmoid_forward` (@qqaatw )
- [x] `aten::fmax.out` (@qqaatw )
- [x] `aten::roll` https://github.com/pytorch/pytorch/pull/95168
- [x] `aten::hardsigmoid` (@qqaatw )
- [x] `aten::logit` (@qqaatw )
- [x] `linalg_solve_triangular`
- [x] `aten::sort.values_stable` https://github.com/pytorch/pytorch/issues/86750
- [x] `aten::remainder.Tensor_out` https://github.com/pytorch/pytorch/issues/86806
- [x] `aten::hardswish` https://github.com/pytorch/pytorch/issues/86807
- [x] `aten::nansum` https://github.com/pytorch/pytorch/issues/86809
- [x] `aten::fmod.Tensor_out` https://github.com/pytorch/pytorch/issues/86810
- [x] `aten::range` https://github.com/pytorch/pytorch/issues/86990
- [x] `aten::argsort` https://github.com/pytorch/pytorch/issues/86991
- [x] `aten::repeat_interleave` https://github.com/pytorch/pytorch/issues/87219
- [x] `aten::median` https://github.com/pytorch/pytorch/issues/87220
- [x] `aten::trace` https://github.com/pytorch/pytorch/issues/87221
- [x] `aten::im2col` (falling back to CPU as it's mostly used in preprocessing layers)
- [x] `aten::_cdist_forward` https://github.com/pytorch/pytorch/pull/91643
- [x] `aten::native_group_norm_backward` (Implemented by @malfet )
- [x] `aten::grid_sampler_2d` (https://github.com/pytorch/pytorch/pull/94273)
- [x] `aten::upsample_nearest1d_backward.grad_input`
- [x] `aten::upsample_nearest1d.out`
- [x] `aten::repeat_interleave.self_int`
- [x] `aten::nan_to_num.out`
- [x] `aten::unique_consecutive` https://github.com/pytorch/pytorch/pull/88532
- [x] `torch.bincount` https://github.com/pytorch/pytorch/pull/91267
- [x] `aten::_unique2` https://github.com/pytorch/pytorch/pull/88532
- [x] `aten::unfold` https://github.com/pytorch/pytorch/pull/91266
- [x] `aten::triangular_solve.X` https://github.com/pytorch/pytorch/pull/94345
- [x] `aten::nonzero` https://github.com/pytorch/pytorch/pull/91616
- [x] `aten::_index_put_impl_` (https://github.com/pytorch/pytorch/pull/85672)
- [x] `aten::amax.out` (#79682)
- [X] `aten::_slow_conv2d_forward` (https://github.com/pytorch/pytorch/pull/86303)
- [x] `aten::eye.m_out` (https://github.com/pytorch/pytorch/pull/78408)
- [x] `aten::multinomial` (https://github.com/pytorch/pytorch/pull/80760 )
- [x] `aten::flip` (#80214)
- [x] `aten::equal` https://github.com/pytorch/pytorch/pull/80195
- [x] `aten::_local_scalar_dense`
- [x] `aten::l1_loss_backward.grad_input` (#80010)
- [x] `aten::glu.out` (#79866)
- [x] ` aten::linspace.out` https://github.com/pytorch/pytorch/pull/78570
- [x] `aten::arange.out` https://github.com/pytorch/pytorch/pull/78789
- [x] `aten::adaptive_max_pool2d` https://github.com/pytorch/pytorch/pull/78410
- [x] `aten::count_nonzero.dim_IntList`
- [x] `aten::softplus.out` (https://github.com/pytorch/pytorch/pull/78930)
- [x] `aten::index_add.out` https://github.com/pytorch/pytorch/pull/79935
- [x] `aten::normal` (#80297)
- [x] `aten::native_layer_norm_backward` https://github.com/pytorch/pytorch/pull/79189
- [x] `aten::logical_and.out` (#80216)
- [x] `aten::frac.out` (https://github.com/pytorch/pytorch/pull/86625)
- [x] `aten::masked_select` https://github.com/pytorch/pytorch/pull/85818
- [x] `aten::softplus_backward.grad_input` (#79873)
- [x] `aten::slow_conv_transpose2d.out` (@malfet could be due to incompatibility with torchvision)
- [x] `aten::signbit.out` (https://github.com/pytorch/pytorch/pull/87214)
- [X] `aten::cumsum.out` (https://github.com/pytorch/pytorch/pull/88319)
- [X] `aten::cumprod.out`
- [X] `aten::expm1.out` (https://github.com/pytorch/pytorch/pull/87147)
- [x] `aten::bitwise_xor.Tensor_out` (https://github.com/pytorch/pytorch/pull/82307)
- [x] `aten::bitwise_and.Tensor_out` (https://github.com/pytorch/pytorch/pull/82307)
- [x] `aten::bitwise_or.Tensor_out` (https://github.com/pytorch/pytorch/pull/82307)
- [x] `aten::index.Tensor` (https://github.com/pytorch/pytorch/pull/82507)
- [x] `aten::index.Tensor_out` (https://github.com/pytorch/pytorch/pull/82507)
</details>
**Ops not supported by MPS:**
Ops that will require either the CPU fallback system or a custom Metal kernel.
- [ ] `aten::lgamma.out`
- [ ] `aten::linalg_householder_product`
| 867 |
5,685 | 77,742 |
strange behaviour in torch.div
|
high priority, triaged
|
### 🐛 Describe the bug
The following code
```
import torch
x = torch.tensor([1e+308, 0., 0., 0., 0., 0., 0.], dtype=torch.float64)
y = torch.tensor([0.5, 1., 1., 1., 1., 1., 1.], dtype=torch.float16)
torch.div(x, y, rounding_mode='floor')
```
gives `tensor([inf, 0., 0., 0., 0., 0., 0.], dtype=torch.float64)`, as I expect. However, appending another `0` to `x` and another `1` to `y` seems to change the behaviour of div:
```
x = torch.tensor([1e+308, 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64)
y = torch.tensor([0.5, 1., 1., 1., 1., 1., 1., 1.], dtype=torch.float16)
torch.div(x, y, rounding_mode='floor')
```
gives `tensor([nan, 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64)`.
Why does the first element of the result change from `inf` to `nan` just because I made `x` and `y` longer?
### Versions
1.11.0
cc @ezyang @gchanan @zou3519
| 2 |
5,686 | 77,738 |
net_observer_reporter_print.h missing
|
module: build, triaged
|
### 🐛 Describe the bug
```
FAILED: binaries/CMakeFiles/convert_and_benchmark.dir/convert_and_benchmark.cc.o
/usr/bin/ccache /usr/bin/c++ -DGFLAGS_IS_A_DLL=0 -DGLOG_CUSTOM_PREFIX_SUPPORT -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -I./pytorch/build/aten/src -I./pytorch/aten/src -I./pytorch/build -I./pytorch -I./pytorch/third_party/onnx -I./pytorch/build/third_party/onnx -I./pytorch/third_party/foxi -I./pytorch/build/third_party/foxi -I./pytorch/torch/csrc/api -I./pytorch/torch/csrc/api/include -I./pytorch/c10/.. -isystem ./pytorch/third_party/gemmlowp -isystem ./pytorch/third_party/neon2sse -isystem ./pytorch/third_party/XNNPACK/include -isystem /usr/include/opencv4 -isystem ./pytorch/cmake/../third_party/eigen -isystem ./pytorch/third_party/ideep/include -isystem ./pytorch/third_party/ideep/mkl-dnn/include -march=native -mtune=native -O3 -pipe -fno-plt -Wno-deprecated-declarations -Wno-maybe-uninitialized -Wno-array-bounds -Wno-uninitialized -Wno-error=nonnull -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -fPIE -DTH_HAVE_THREAD -std=gnu++14 -MD -MT binaries/CMakeFiles/convert_and_benchmark.dir/convert_and_benchmark.cc.o -MF binaries/CMakeFiles/convert_and_benchmark.dir/convert_and_benchmark.cc.o.d -o binaries/CMakeFiles/convert_and_benchmark.dir/convert_and_benchmark.cc.o -c ./pytorch/binaries/convert_and_benchmark.cc
./pytorch/binaries/convert_and_benchmark.cc:34:10: fatal error: observers/net_observer_reporter_print.h: No such file or directory
34 | #include <observers/net_observer_reporter_print.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.23.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, May 14 2022, 05:21:19) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-5.17.8-1-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
cc @malfet @seemethere
| 0 |
5,687 | 77,737 |
torchrun leads to `ModuleNotFoundError: No module named 'tensorboard'`, but python -m torch.distributed.launch is ok
|
triaged, oncall: r2p
|
### 🐛 Describe the bug
When I tried to use torchrun to launch the job
```
torchrun --nproc_per_node=4 --master_port=12346 train_ours.py
```
It reports `ModuleNotFoundError: No module named 'tensorboard'`, even though tensorboard is installed.
```
[stderr] from torch.utils.tensorboard import SummaryWriter
[stderr] File "/home/aiscuser/.local/lib/python3.7/site-packages/wrapt/importer.py", line 176, in _exec_module
[stderr] self.loader.exec_module(module)
[stderr] File "/opt/miniconda/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py", line 1, in <module>
[stderr] import tensorboard
[stderr]ModuleNotFoundError: No module named 'tensorboard'
```
With the old launch command, everything works fine:
`python -m torch.distributed.launch --nproc_per_node=4 --master_port=12346 train_ours.py`
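This looks like an interpreter/environment mismatch (the traceback mixes `/opt/miniconda/...` and `~/.local/...` site-packages). A quick diagnostic sketch to compare what the two launchers actually run:
```python
import shutil
import sys

print("interpreter:", sys.executable)
print("torchrun on PATH:", shutil.which("torchrun"))

try:
    import tensorboard
    print("tensorboard found at:", tensorboard.__file__)
except ModuleNotFoundError:
    print("tensorboard is not importable from this interpreter")
```
If the two launchers resolve to different Python installations, installing tensorboard into the interpreter that owns `torchrun` should make the error go away.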
### Versions
```
torch 1.10.1+cu111
torchaudio 0.10.1+rocm4.1
torchvision 0.11.2+cu111
tensorboard 2.9.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorboardX 2.5
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 4 |
5,688 | 77,736 |
TimeSeriesDataset retrieve columns
|
feature, triaged
|
### 🚀 The feature, motivation and pitch
Right now it is neither clear nor easy to retrieve column names from the torch tensors after transforming data into a PyTorch Forecasting TimeSeriesDataSet. Being able to do so is crucial for using the library and interpreting results in a straightforward way.
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
5,689 | 77,733 |
Adding Vulkan Support
|
triaged, module: vulkan
|
### 🐛 Describe the bug
To reproduce:
```
cd PYTORCH_ROOT
USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python setup.py install
```
Getting the error:
```
Could NOT find Shaderc (missing: GOOGLE_SHADERC_INCLUDE_DIRS GOOGLE_SHADERC_LIBRARIES)
CMake Error at cmake/VulkanDependencies.cmake:117 (message):
  USE_VULKAN: Shaderc not found in VULKAN_SDK
```
However, the Vulkan SDK does contain a folder named Shaderc, but it does not provide GOOGLE_SHADERC_INCLUDE_DIRS or GOOGLE_SHADERC_LIBRARIES.

### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.19.6
Libc version: glibc-2.27
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.27
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[conda] blas 1.0 mkl
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2
| 4 |
5,690 | 77,731 |
complex abs strides are wrong on empty tensors and tensors with 1 dimension
|
triaged, module: complex, module: primTorch
|
This happens because complex abs uses `empty_like`, which does not create contiguous strides for empty tensors or tensors with one dimension.
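A quick sketch for observing the reported behavior (no expected strides are asserted here, since the point of the issue is that they currently come out wrong):
```python
import torch

for t in (
    torch.randn(0, dtype=torch.complex64),        # empty
    torch.randn(4, dtype=torch.complex64)[::2],   # 1-D, non-contiguous
):
    print(t.shape, "input strides:", t.stride(), "abs strides:", torch.abs(t).stride())
```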
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @ngimel
| 2 |
5,691 | 77,724 |
FSDP: enhanced shared parameter support
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
As of 1.12, FSDP has only limited shared-parameter support: shared parameters must be part of the same FSDP unit, and users cannot share parameters if their respective modules end up wrapped into different FSDP units. Creating this issue to track full support.
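An illustrative sketch of the unsupported case (module names are made up): the same `nn.Linear` is shared by two submodules that an auto-wrap policy may place into different FSDP units.
```python
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, shared_proj):
        super().__init__()
        self.proj = shared_proj        # same nn.Linear instance shared across blocks
        self.ffn = nn.Linear(16, 16)

    def forward(self, x):
        return self.ffn(self.proj(x))

shared = nn.Linear(16, 16)
model = nn.Sequential(Block(shared), Block(shared))
# FSDP(model, auto_wrap_policy=...)  # if each Block becomes its own FSDP unit,
#                                    # the shared weight cannot be flattened into both
```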
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 3 |
5,692 | 77,675 |
PrimTorch refs do not match argument naming with their PyTorch counterparts
|
triaged, module: primTorch
|
### 🐛 Describe the bug
```
>>> import torch
>>> torch.nn.functional.elu(input=torch.randn(2))
tensor([-0.3709, 0.5241])
>>> import torch._refs.nn.functional
>>> torch._refs.nn.functional.elu(input=torch.randn(2))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/raid/ezyang/pytorch-scratch2/torch/_prims/wrappers.py", line 93, in _fn
bound = sig.bind(*args, **kwargs)
File "/scratch/ezyang/pytorch-scratch2-env/lib/python3.8/inspect.py", line 3025, in bind
return self._bind(args, kwargs)
File "/scratch/ezyang/pytorch-scratch2-env/lib/python3.8/inspect.py", line 2940, in _bind
raise TypeError(msg) from None
TypeError: missing a required argument: 'a'
```
Unfortunately, because nothing in PyTorch is positional-only, the argument names are part of the public API.
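One possible shape of a fix, sketched as a hypothetical decorator that aliases the torch-facing keyword names onto the ref's parameter names before binding (not an existing wrapper in `torch._prims`):
```python
import functools

def alias_kwargs(**aliases):                 # e.g. @alias_kwargs(input="a")
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for public_name, ref_name in aliases.items():
                if public_name in kwargs:
                    kwargs[ref_name] = kwargs.pop(public_name)
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```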
### Versions
master
cc @ezyang @mruberry @ngimel
| 1 |
5,693 | 77,668 |
Extend BC test to test for __torch_function__ overridability
|
triaged, module: __torch_function__
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/pull/77477#discussion_r874982910
### Versions
master
cc @hameerabbasi @rgommers @peterbell10
| 0 |
5,694 | 77,659 |
PrimTorch decomps for random functions
|
triaged, module: random, ezyang's list, module: primTorch
|
I'd like to see primtorch decompositions that map all randomness to a single function (perhaps aten.rand()).
The snag: I want the same answer as eager mode. Same as eager mode means I can check correctness. It is also a better experience for users.
The ones I am seeing in torchbench models are:
- [ ] dropout (already exists in torch._decomp, but different(?) answer than eager)
- [ ] randn
- [ ] randperm
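For reference, a sketch of what a dropout decomposition written purely in terms of a single random primitive could look like; this is the textbook inverted-dropout formulation and is not guaranteed to be bitwise-identical to eager, which is exactly the snag described above:
```python
import torch

def dropout_decomp(x, p: float, train: bool):
    if not train or p == 0.0:
        return x
    if p == 1.0:
        return torch.zeros_like(x)
    mask = torch.rand_like(x) >= p      # the only source of randomness: one rand call
    return x * mask / (1.0 - p)
```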
cc @pbelevich @ezyang @mruberry @ngimel @Lezcano @peterbell10 @Chillee
| 18 |
5,695 | 77,646 |
Werror=nonnull in dataloader.cpp (part of tests)
|
module: dataloader, triaged
|
### 🐛 Describe the bug
**FAILED: test_api/CMakeFiles/test_api.dir/dataloader.cpp.o**
```
/usr/bin/ccache /usr/bin/c++ -DGFLAGS_IS_A_DLL=0 -DGLOG_CUSTOM_PREFIX_SUPPORT -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -I./pytorch/build/aten/src -I./pytorch/aten/src -I./pytorch/build -I./pytorch -I./pytorch/cmake/../third_party/benchmark/include -I./pytorch/third_party/onnx -I./pytorch/build/third_party/onnx -I./pytorch/third_party/foxi -I./pytorch/build/third_party/foxi -I./pytorch/build/caffe2/../aten/src -I./pytorch/torch/csrc/api -I./pytorch/torch/csrc/api/include -I./pytorch/c10/.. -isystem ./pytorch/cmake/../third_party/googletest/googlemock/include -isystem ./pytorch/cmake/../third_party/googletest/googletest/include -isystem ./pytorch/third_party/gemmlowp -isystem ./pytorch/third_party/neon2sse -isystem ./pytorch/third_party/XNNPACK/include -isystem /usr/include/opencv4 -isystem ./pytorch/cmake/../third_party/eigen -isystem ./pytorch/third_party/ideep/include -isystem ./pytorch/third_party/ideep/mkl-dnn/include -isystem ./pytorch/third_party/googletest/googletest/include -isystem ./pytorch/third_party/googletest/googletest -march=native -mtune=native -O3 -pipe -fno-plt -Wno-deprecated-declarations -Wno-maybe-uninitialized -Wno-array-bounds -Wno-uninitialized -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIE -DTH_HAVE_THREAD -Wno-unused-variable -Wno-maybe-uninitialized -Wno-unused-but-set-parameter -std=gnu++14 -MD -MT test_api/CMakeFiles/test_api.dir/dataloader.cpp.o -MF test_api/CMakeFiles/test_api.dir/dataloader.cpp.o.d -o test_api/CMakeFiles/test_api.dir/dataloader.cpp.o -c ./pytorch/test/cpp/api/dataloader.cpp
In file included from /usr/include/c++/12.1.0/memory:63,
                 from ./pytorch/third_party/googletest/googletest/include/gtest/gtest.h:57,
                 from ./pytorch/test/cpp/api/dataloader.cpp:1:
In static member function 'static _Tp* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(const _Tp*, const _Tp*, _Tp*) [with _Tp = long unsigned int; bool _IsMove = false]',
    inlined from '_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]' at /usr/include/c++/12.1.0/bits/stl_algobase.h:495:30,
    inlined from '_OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const long unsigned int*; _OI = long unsigned int*]' at /usr/include/c++/12.1.0/bits/stl_algobase.h:522:42,
    inlined from '_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false; _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long unsigned int> >]' at /usr/include/c++/12.1.0/bits/stl_algobase.h:529:31,
    inlined from '_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const long unsigned int*, vector<long unsigned int> >; _OI = __gnu_cxx::__normal_iterator<long unsigned int*, vector<long unsigned int> >]' at /usr/include/c++/12.1.0/bits/stl_algobase.h:620:7,
    inlined from 'std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = long unsigned int; _Alloc = std::allocator<long unsigned int>]' at /usr/include/c++/12.1.0/bits/vector.tcc:244:21:
/usr/include/c++/12.1.0/bits/stl_algobase.h:431:30: error: argument 1 null where non-null expected [-Werror=nonnull]
  431 | __builtin_memmove(__result, __first, sizeof(_Tp) * _Num);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/12.1.0/bits/stl_algobase.h:431:30: note: in a call to built-in function 'void* __builtin_memmove(void*, const void*, long unsigned int)'
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.23.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, May 14 2022, 05:21:19) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-5.17.8-1-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
GPU: _Radeon RX 460_: No interest in CUDA or ROCm
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 3 |
5,696 | 77,614 |
PyTorch fails to build on gcc 12 due to gloo
|
module: build, triaged, module: third_party
|
### 🐛 Describe the bug
```
/home/gaoxiang/pytorch-nvfuser-master/third_party/gloo/gloo/transport/tcp/device.cc:151:39: error: implicit instantiation of undefined template 'std::array<char, 64>'
std::array<char, HOST_NAME_MAX> hostname;
```
Adding `#include <array>` to that file should fix the problem, but it looks like the `gloo` submodule hasn't been updated in a long time, so I don't know how this issue should be fixed.
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.23.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.17.6-arch1-1-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 510.68.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.4.0
/usr/lib/libcudnn_adv_infer.so.8.4.0
/usr/lib/libcudnn_adv_train.so.8.4.0
/usr/lib/libcudnn_cnn_infer.so.8.4.0
/usr/lib/libcudnn_cnn_train.so.8.4.0
/usr/lib/libcudnn_ops_infer.so.8.4.0
/usr/lib/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch-ucc==1.0.0
[pip3] torchani==2.2
[pip3] torchvision==0.2.2.post3
[conda] Could not collect
```
cc @malfet @seemethere
| 2 |
5,697 | 77,589 |
How to handle __module__ attribute for Public API bindings
|
module: tests, triaged
|
While working on the NN onboarding lab (corresponding closed PR: #77425), after registering the functional version of the new module in `torch/nn/functional.py`, the test `pytest test/test_public_bindings.py` fails with:
```Bash
Full list:
# torch.nn.functional.bias:
- Is public: it is an attribute that does not start with `_` on a module that does not have `__all__` defined
- Does NOT look public: because its `__module__` attribute (`torch._C._nn`) is not within the torch library or does not start with the submodule where it is defined (`torch.nn.functional`)
- You can do either of these two things to fix this problem:
- To make it NOT public: either define a `__all__` for `torch.nn.functional` or add a `_` at the beginning of the name
- To make it look public: make sure the `__module__` is properly set and points to a submodule of `torch.nn.functional`
```
I defined the functional version analogously to the linear module:
```Python
bias = _add_docstr(
torch._C._nn.bias,
r"""
bias(input, bias) -> Tensor
Adds a bias vector to the last dimension of the input tensor
Shape:
    - Input: :math:`(*, num\_features)` where `*` means any number of
additional dimensions, including none
- Bias: :math:`(num\_features)` or :math:`()`
- Output: :math:`(*, num\_features)` where `*` means any number of
additional dimensions, including none, same shape as Input
""")
```
I added the function `bias` to the allowlist in `test/allowlist_for_publicAPI.json`, in the list for `"torch.nn.functional"`.
However, reading the test function, it says that no new functions should be added to this list. If I instead define `bias` as above and then set `bias.__module__ = 'torch.nn.functional'`, this does indeed work.
Is that the correct solution?
Would it be a nicer API if there were a function analogous to `_add_docstr` that also set the `__module__` attribute when setting the docstring?
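A sketch of the kind of helper being suggested (hypothetical, not an existing API):
```python
def _add_docstr_with_module(builtin_fn, docstr, module_name):
    fn = _add_docstr(builtin_fn, docstr)   # existing helper used throughout torch.nn.functional
    fn.__module__ = module_name            # e.g. "torch.nn.functional"
    return fn

# bias = _add_docstr_with_module(torch._C._nn.bias, r"""...""", "torch.nn.functional")
```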
cc @mruberry
| 1 |
5,698 | 77,583 |
FSDP: test mixed precision with checkpoint
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
When training with mixed precision, we should ensure checkpoints are taken with parameters and buffers in full precision so that they can be loaded for full-precision retraining. We currently cast params and buffers to their full-precision dtype before taking a checkpoint, but this should be validated via unit tests.
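A rough sketch of the kind of assertion such a unit test could make (FSDP wrapping, process-group setup, and the exact `MixedPrecision` config are omitted and assumed):
```python
import torch

def _check_full_precision_state_dict(fsdp_model):
    for name, tensor in fsdp_model.state_dict().items():
        if tensor.is_floating_point():
            # params/buffers saved from a bf16/fp16 run should still be saved as fp32
            assert tensor.dtype == torch.float32, f"{name} saved as {tensor.dtype}"
```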
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,699 | 77,576 |
`stateless.functional_call` doesn't work with `nn.DataParallel`
|
module: nn, triaged, module: data parallel, actionable
|
### 🐛 Describe the bug
Encountered in https://github.com/pytorch/pytorch/pull/77137#issuecomment-1125694907
```
import torch
import torch.nn as nn
import torch.nn.utils.stateless as stateless
class Foo(nn.Module):
def __init__(self):
super().__init__()
self.weight = nn.Parameter(torch.ones(5))
def forward(self, x):
return self.weight + x
mod = Foo().cuda()
mod = nn.DataParallel(mod, [0, 1])
print(stateless.functional_call(mod, {'module.weight': torch.zeros(5, device='cuda')}, (torch.ones(2, 5, device='cuda'),)))
```
errors with
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
```
### Versions
Nightly
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 3 |
5,700 | 77,558 |
Investigate sharded gradscaler OOM on CPU workloads
|
oncall: distributed, module: fsdp
|
### 🐛 Describe the bug
The sharded grad scaler for FSDP + mixed precision has landed: https://github.com/pytorch/pytorch/pull/76918. However, we're currently seeing an OOM on workloads with CPU offload. We should investigate and fix this OOM.
### Versions
main
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
| 0 |