Serial Number (int64) | Issue Number (int64) | Title (string) | Labels (string) | Body (string) | Comments (int64) |
---|---|---|---|---|---|
5,301 | 80,903 |
make_fx doesn't work with truly dynamic argument functions (e.g. fx.Interpreter)
|
triaged, module: fx
|
### 🐛 Describe the bug
sample which breaks:
```python
make_fx(lambda *args: torch.cat(args))(torch.randn(2), torch.randn(2))
```
Fails with
```
File "test/test_dynamo_cudagraphs.py", line 145, in cudagraphs
make_fx(lambda *args: torch.cat(args))([torch.randn(2), torch.randn(2)])
File "/raid/ezyang/pytorch-scratch2/torch/fx/experimental/proxy_tensor.py", line 281, in wrapped
t = dispatch_trace(wrap_key(f, args), tracer=fx_tracer, concrete_args=tuple(phs))
File "/raid/ezyang/pytorch-scratch2/torch/fx/experimental/proxy_tensor.py", line 177, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/raid/ezyang/torchdynamo/torchdynamo/eval_frame.py", line 88, in _fn
return fn(*args, **kwargs)
File "/raid/ezyang/pytorch-scratch2/torch/fx/_symbolic_trace.py", line 667, in trace
fn, args = self.create_args_for_root(
File "/raid/ezyang/pytorch-scratch2/torch/fx/_symbolic_trace.py", line 524, in create_args_for_root
raise RuntimeError(
RuntimeError: Tracing expected 0 arguments but got 1 concrete arguments
```
The problem is that FX knows how to see through varargs if there is an inner function it's wrapping that has a true argument list, but it cannot deal with truly dynamic lists. This is fine for vanilla FX, as FX is unwilling to burn in the number of tensors in the list, but for AOTAutograd/ProxyTensor we should burn in the list count and trace anyway.
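For reference, a hand-written fixed-arity wrapper (a sketch, assuming exactly two tensor inputs) traces fine, which is essentially the "burn in the argument count" behavior we want for AOTAutograd/ProxyTensor:
```python
# Sketch only: wrap the varargs function in a fixed-arity lambda so the
# tracer sees a concrete argument list (assumes exactly two tensor inputs).
import torch
from torch.fx.experimental.proxy_tensor import make_fx

args = (torch.randn(2), torch.randn(2))
gm = make_fx(lambda a, b: torch.cat((a, b)))(*args)
print(gm.graph)
```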
cc @ezyang @Chillee @zdevito @jamesr66a
### Versions
master
| 11 |
5,302 | 80,875 |
slow test infra cannot handle nested suites
|
module: ci, triaged
|
### 🐛 Describe the bug
We currently categorize a selection of slow test cases in our repo to run on a particular config. We query for tests slower than a certain threshold and put these tests into a json every night. The slow tests json generated by our infra looks like: https://github.com/pytorch/test-infra/blob/generated-stats/stats/slow-tests.json.
The code in our CI that parses the JSON to detect whether or not to run a test is https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_utils.py#L1544. Note that we match a test name exactly against the keys of the JSON.
This is almost always correct, except that not all test suite names are prefixed with `__main__`, which we assume in our query here: https://github.com/pytorch/test-infra/blob/main/torchci/rockset/commons/__sql/slow_tests.sql#L57.
### Solution
Let's change the logic in the test selection side to not care about `__main__` when figuring out if a test is a match.
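A rough sketch of what the relaxed matching could look like (illustrative only; the helper name and the exact key format of the slow-tests JSON are assumptions):
```python
# Sketch: accept both "suite.test" and "__main__.suite.test" forms when
# checking whether a test appears in the slow-tests JSON.
def is_slow_test(test_name: str, suite_name: str, slow_tests: dict) -> bool:
    key = f"{suite_name}.{test_name}"
    return key in slow_tests or f"__main__.{key}" in slow_tests
```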
### Versions
CI related
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,303 | 80,874 |
C++ extensions inject a bunch of compilation flags
|
oncall: binaries, module: cpp-extensions, triaged, better-engineering
|
### 🐛 Describe the bug
Discovered by @vfdev-5
This is possibly intentional, but I've noticed that C++ extensions, while building on linux, inject a bunch of compilation flags. For example, in functorch, we do not specify either of ['-g -NDEBUG'](https://github.com/pytorch/functorch/blob/b22d52bb15276cd919815cde01a16d4b3d8f798e/setup.py#L84-L88), but those appear in [our build logs](https://github.com/pytorch/functorch/runs/7140654131?check_suite_focus=true).
The end result of `-g` is that our Linux binaries are a lot larger than they need to be (the debug symbols are tens of MB), while our Mac and Windows binaries are very small (< 1 MB). I would expect the domain libraries to have the same issue.
## The diagnosis
In C++ extensions we adopt [certain flags from distutils](https://github.com/pytorch/pytorch/blob/f7678055033045688ae5916c8df72f5107d86a4a/torch/utils/cpp_extension.py#L558).
These flags come from Python's built-in sysconfig module (https://github.com/python/cpython/blob/ec5e253556875640b1ac514e85c545346ac3f1e0/setup.py#L476), which contains a list of flags used to compile Python. These include -g, among other things.
This is probably intentional (I wrote these lines by copy-pasting them from distutils) but I don't have a good understanding of why these flags are necessary or if only some of them are.
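For reference, a hedged workaround sketch for an extension's `setup.py` (placeholder names; not functorch's actual fix) that overrides the inherited debug flag:
```python
# Sketch: pass extra_compile_args explicitly so that the -g inherited from
# distutils/sysconfig is overridden by a later -g0 (no debug symbols).
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="my_ext",  # placeholder package name
    ext_modules=[
        CppExtension(
            name="my_ext._C",
            sources=["csrc/my_ext.cpp"],        # placeholder source file
            extra_compile_args=["-O3", "-g0"],  # later flags win over the injected -g
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```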
### Versions
main
cc @ezyang @seemethere @malfet @zou3519
| 2 |
5,304 | 80,867 |
[BE] Refactor FSDP Unit Tests
|
triaged, better-engineering, module: fsdp
|
## Motivation & Goals
1. In H2, we plan to land new FSDP features such as non-recursive wrapping and multiple parameter group support within one `FlatParameter`. Since these features correspond to distinct code paths, the core unit tests need to parameterize over them. This means up to a 4x increase in time-to-signal (TTS), which may grow with further features (e.g. native XLA support).
**Goal 1: Cut down the existing TTS while maintaining feature coverage**.
2. The test files have become fragmented. Despite the existence of `common_fsdp.py`, test files often define their own models and training loops, leading to redundancy, and the methods that do exist in `common_fsdp.py` are not well-documented, leading to confusion.
**Goal 2: Refactor common models and boilerplate to `common_fsdp.py`.**
3. For the non-recursive wrapping code path, the model construction differs from the existing recursive wrapping code path. As a result, many of the existing tests cannot be directly adapted to test the non-recursive path by simply changing a single `FullyShardedDataParallel` constructor call.
**Goal 3: Refactor model construction to enable simpler testing for the non-recursive wrapping path.**
## Status Quo
On the AI AWS cluster with 2 A100 GPUs, the current TTS is approximately **4967.06 seconds = 82.78 minutes = 1.38 hours**[0]. PyTorch Dev Infra has asked to have our multi-GPU tests run in < 75 minutes in CI, which uses P60 GPUs.
The largest contributors to the TTS are `test_fsdp_core.py` (2191.89 seconds = 36.53 minutes) and `test_fsdp_mixed_precision.py` (1074.42 seconds = 17.91 minutes), representing almost 2/3 of the total. Since these test the core FSDP runtime, newly-added code paths will target these tests.
[0] This is a point estimate from running each test file once and excludes recent changes to the test files (my stack was rebased on a commit from 6/21).
## Approach
I will proceed with a series of PRs. The order will be to address Goal 3 -> Goal 1 -> Goal 2.
- For Goal 3, I will introduce a common interface `FSDPTestModel`.
https://github.com/pytorch/pytorch/pull/80873
- For Goal 1, I will use `self.subTest()` with `dist.barrier()` to avoid the expensive process spawn and `dist.init_process_group()` for each parameterization (see the sketch after this list).
https://github.com/pytorch/pytorch/pull/80908
https://github.com/pytorch/pytorch/pull/80915
https://blog.ganssle.io/articles/2020/04/subtests-in-python.html
> There are, however, occasionally situations where the subTest form factor offers some advantages even in parameterization. For example, if you have a number of tests you'd like to perform that have an expensive set-up function that builds or acquires an immutable resource that is used in common by all the subtests:
- For Goal 2, I will perform a deep comb through the existing test suite.
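A minimal sketch of the `subTest()` pattern referenced above (illustrative only; the class and helper names are placeholders, not the actual PR code):
```python
# Sketch: spawn processes and initialize the process group once, then iterate
# over parameterizations inside a single test method via subTest().
import unittest
import torch.distributed as dist

class TestFSDPCoreSketch(unittest.TestCase):  # placeholder test class
    def test_core_parameterized(self):
        # assumes the process group was already initialized once for this class
        for cpu_offload in (False, True):
            with self.subTest(cpu_offload=cpu_offload):
                self._run_core_training(cpu_offload)  # placeholder helper
            dist.barrier()  # keep all ranks in lockstep between subtests
```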
## Related Issues
https://github.com/pytorch/pytorch/issues/80872
https://github.com/pytorch/pytorch/issues/78277
https://github.com/pytorch/pytorch/issues/67288
cc @zhaojuanmao @mrshenli @rohan-varma @ezyang
| 3 |
5,305 | 80,863 |
SummaryWriter add_embedding issue with label_img
|
oncall: visualization
|
### 🐛 Describe the bug
Posting a working `make_sprite` function from `utils/tensorboard/_embedding.py`:
https://github.com/pytorch/pytorch/blob/master/torch/utils/tensorboard/_embedding.py#L24
```python
def make_sprite(label_img, save_path):
from PIL import Image
from io import BytesIO
# this ensures the sprite image has correct dimension as described in
# https://www.tensorflow.org/get_started/embedding_viz
print('label_img.size(0):', label_img.size(0))
nrow = int(math.ceil((label_img.size(0)) ** 0.5))
print('nrow:', nrow)
np_imgs = make_np(label_img)
print('np_imgs:', np_imgs)
# plt.imshow(np.moveaxis(np_imgs[0], 0, 2))
# plt.show()
arranged_img_CHW = make_grid(np_imgs, ncols=nrow)
print('arranged_img_CHW:', arranged_img_CHW)
print('arranged_img_CHW.shape:', arranged_img_CHW.shape)
# plt.imshow(np.moveaxis(arranged_img_CHW, 0, 2))
# plt.show()
# augment images so that #images equals nrow*nrow
arranged_augment_square_HWC = np.zeros((arranged_img_CHW.shape[2], arranged_img_CHW.shape[2], 3))
print('arranged_augment_square_HWC:', arranged_augment_square_HWC)
print('arranged_augment_square_HWC.shape:', arranged_augment_square_HWC.shape)
arranged_img_HWC = arranged_img_CHW.transpose(1, 2, 0) # chw -> hwc
print('arranged_img_HWC:', arranged_img_HWC)
print('arranged_img_HWC.shape:', arranged_img_HWC.shape)
# plt.imshow(arranged_img_HWC)
# plt.show()
arranged_augment_square_HWC[:arranged_img_HWC.shape[0], :, :] = arranged_img_HWC
print('arranged_augment_square_HWC:', arranged_augment_square_HWC)
print('arranged_augment_square_HWC.shape:', arranged_augment_square_HWC.shape)
plt.imshow(arranged_img_HWC)
plt.show()
# transformed_img = np.uint8((arranged_augment_square_HWC * 255).clip(0, 255))
# print(transformed_img)
# print(transformed_img.shape)
im = Image.fromarray(arranged_img_HWC)
# im = Image.fromarray(arranged_augment_square_HWC)
with BytesIO() as buf:
im.save(buf, format="PNG")
im_bytes = buf.getvalue()
fs = tf.io.gfile.get_filesystem(save_path)
# print(im_bytes)
fs.write(fs.join(save_path, 'sprite.png'), im_bytes, binary_mode=True)
```
Commenting out this line creates a correct sprite and TensorBoard displays it correctly; it seems the line is not required here. Please clarify what it should be doing:
```python
arranged_augment_square_HWC[:arranged_img_HWC.shape[0], :, :] = arranged_img_HWC
```
Along with this change, which is not required because I already have correct RGB integers from 0 to 255:
```python
im = Image.fromarray(np.uint8((arranged_augment_square_HWC * 255).clip(0, 255)))
```
Changed to:
```python
im = Image.fromarray(arranged_img_HWC)
```
From my perspective, additional asserts are required when data is forwarded into the `add_embedding` function, to inform developers about the expected data format inside it. Alternatively, additional `if` statements could check which transformations should and should not be applied in the two lines of code above that I commented out or changed.
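As an illustration of the kind of asserts I mean (a sketch only; the exact expected input format is my assumption):
```python
# Sketch: fail early with a descriptive message instead of silently producing
# a broken sprite, assuming label_img is expected as a float NCHW tensor in [0, 1].
def check_label_img(label_img):
    assert label_img.dim() == 4, (
        f"label_img should be NCHW, got {label_img.dim()} dimensions"
    )
    assert 0.0 <= label_img.min() and label_img.max() <= 1.0, (
        "label_img values should already be scaled to [0, 1]; got range "
        f"[{label_img.min():.3f}, {label_img.max():.3f}]"
    )
```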
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 21.10 (x86_64)
GCC version: (Ubuntu 11.2.0-7ubuntu2) 11.2.0
Clang version: 13.0.0-2
CMake version: version 3.18.4
Libc version: glibc-2.34
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.17.0-051700-generic-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 510.60.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn.so.8.3.3
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.3.3
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.3.3
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.3.3
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.3.3
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.3.3
/usr/local/cuda-11.5/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.3.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] facenet-pytorch==2.5.2
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.0
[pip3] numpydoc==1.1.0
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] facenet-pytorch 2.5.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.0 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchvision 0.12.0 py39_cu113 pytorch
| 7 |
5,306 | 80,861 |
jit.freeze throws RuntimeError: stack_out && stack_out->size() == 1 INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/frozen_conv_folding.cpp":281
|
oncall: jit
|
### 🐛 Describe the bug
When trying to `jit.freeze` my torchscript module, I am encountering an `INTERNAL ASSERT FAILED` error. I would have expected the freezing to simply run without errors. I have cut down my code to this minimal sample:
```python
import torch
from torch import nn
device = 'cuda'
def get_dummy_input():
img_seq = torch.randn(3, 5, 3, 256, 256, device=device)
return img_seq
class MinimalHead(nn.Module):
def __init__(
self,
num_inp=5,
):
super().__init__()
self.num_inp = num_inp
self.inp_heads = nn.ModuleList([
nn.Conv2d(3, 3, 32)
for _ in range(num_inp)
])
def forward(self, inp_seq):
inp_features = 0
for inp_i in range(self.num_inp):
inp_features = inp_features + self.inp_heads[inp_i](inp_seq[:, inp_i])
return inp_features
def main():
head = MinimalHead().to(device)
head = torch.jit.trace(head, get_dummy_input()).eval()
torch.jit.freeze(head)
if __name__ == '__main__':
main()
```
Running the above script results in this error and traceback:
```
Traceback (most recent call last):
File "minimal_freeze_error.py", line 39, in <module>
main()
File "minimal_freeze_error.py", line 35, in main
torch.jit.freeze(head)
File "[...]/torch/jit/_freeze.py", line 119, in freeze
run_frozen_optimizations(out, optimize_numerics, preserved_methods)
File "[...]/torch/jit/_freeze.py", line 167, in run_frozen_optimizations
torch._C._jit_pass_optimize_frozen_graph(mod.graph, optimize_numerics)
RuntimeError: stack_out && stack_out->size() == 1 INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/frozen_conv_folding.cpp":281, please report a bug to PyTorch.
```
Since the error message suggests reporting this, that is what I am doing.
Interestingly, this same script running on my laptop instead of my server results in a smooth run.
Another workaround (not ideal due to existing jit checkpoints) I have currently found is to replace my model's `forward` with the following equivalent code:
```python
def forward(self, inp_seq):
inp_features = [
self.inp_heads[inp_i](inp_seq[:, inp_i])
for inp_i in range(self.num_inp)
]
inp_features = torch.sum(
torch.stack(
inp_features,
dim=0,
),
dim=0,
)
return inp_features
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
GPU 4: Tesla P100-PCIE-16GB
GPU 5: Tesla P100-PCIE-16GB
GPU 6: Tesla P100-PCIE-16GB
GPU 7: Tesla P100-PCIE-16GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.5
[pip3] pytorch-lightning==1.4.9
[pip3] pytorch-metric-learning==0.9.99
[pip3] torch==1.12.0+cu113
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==0.12.0+cu113
[pip3] torchmetrics==0.5.1
[pip3] torchvision==0.13.0+cu113
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 1.19.5 pypi_0 pypi
[conda] pytorch-lightning 1.4.9 pypi_0 pypi
[conda] pytorch-metric-learning 0.9.99 pypi_0 pypi
[conda] torch 1.12.0+cu113 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 0.12.0+cu113 pypi_0 pypi
[conda] torchmetrics 0.5.1 pypi_0 pypi
[conda] torchvision 0.13.0+cu113 pypi_0 pypi
```
| 4 |
5,307 | 80,857 |
Compatibility List
|
oncall: binaries, module: docs
|
### 🚀 The feature, motivation and pitch
Have an easy-to-find compatibility list (e.g. on the previous-versions page). If it has to be in the form of commands and comments (just like the previous-versions page), instead of a proper table, then this would be fine, too.
However, I believe many people won't know which version to use, and a proper tabulation would help.
### Alternatives
Wasting a good night's sleep to find out it doesn't work, after jumping through all the hoops, was not a good alternative.
### Additional context
this is no "help request" or anything, but simply a feature request for a general professional table (or some list of arbitrary commands)
for context:
Even though my pytorch version and "CUDA SDK version" are valid and below the max supported driver version as reported by smi (max = 11.4, installed 11.3) ( `pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3` ) pytorch slams the GC max supported "CUDA version" as 3.7 ... nothing you can actually find out until after the fact based on looking at the installation information
cc @ezyang @seemethere @malfet @svekars @holly1238
| 2 |
5,308 | 80,851 |
[bug][nvfuser] Applying nvfuser to the model leads to runtime error
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
```
Traceback (most recent call last):
File "main.py", line 202, in <module>
all_metrics = trainer.train(args.steps, args.val_steps, args.save_every, args.eval_every)
File "/h/zhengboj/SetGan/set-gan/trainer.py", line 127, in train
d_loss, g_loss, d_aux_losses, g_aux_losses = self.train_step(args_i)
File "/h/zhengboj/SetGan/set-gan/trainer.py", line 189, in train_step
d_base_loss_i, d_aux_losses_i = self._discriminator_step(args)
File "/h/zhengboj/SetGan/set-gan/trainer.py", line 383, in _discriminator_step
aux_losses[loss_fct.name] = loss_fct(self.discriminator, self.generator, candidate_batch, fake_batch, args, "discriminator", reference_batch)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/h/zhengboj/SetGan/set-gan/training_utils.py", line 204, in forward
return self._discriminator_loss(discriminator, generator, real_batch, fake_batch, args, *loss_args, **loss_kwargs)
File "/h/zhengboj/SetGan/set-gan/training_utils.py", line 236, in _discriminator_loss
scaled_gradients = torch.autograd.grad(outputs=self.scaler.scale(disc_interpolates), inputs=interpolates,
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py", line 275, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: 0INTERNAL ASSERT FAILED at "/opt/pytorch/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":602, please report a bug to PyTorch. We don't have an op for aten::cat but it isn't a special case. Argument types: Tensor, int,
Candidates:
aten::cat(Tensor[] tensors, int dim=0) -> (Tensor)
aten::cat.names(Tensor[] tensors, str dim) -> (Tensor)
aten::cat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) -> (Tensor(a!))
aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!))
Generated:
```
I tried to enable nvfuser when training my model but got the above runtime error. I also tried running my code without scripting the model and everything goes fine. It seems that the error is caused by invoking the `torch.cat` operator with a single tensor. However, after checking the source code, I can verify that each `torch.cat` operator is invoked with a list of tensors. Therefore, I am not sure what is causing this issue. Any help is appreciated. Thanks.
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0a0+2c916ef
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+2c916ef
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.12.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h05e7239_0 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+2c916ef pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.12.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
```
FYI, @wangshangsam
| 23 |
5,309 | 80,832 |
[DDP] doesn't support multiple backwards when static_graph=True
|
oncall: distributed, module: ddp
|
### 🐛 Describe the bug
```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
class ToyModel(nn.Module):
def __init__(self, in_dim=10, out_dim=5):
super(ToyModel, self).__init__()
self.dense1 = nn.Linear(in_dim, in_dim)
self.dense2 = nn.Linear(in_dim, out_dim)
def forward(self, x):
x = self.dense1(x)
return self.dense2(x)
dist.init_process_group("nccl")
rank = dist.get_rank()
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank], find_unused_parameters=True)
ddp_model._set_static_graph()
x = torch.randn(5, 10)
y = torch.randn(5, 5).to(rank)
loss_fn = nn.MSELoss()
output = ddp_model(x)
loss = loss_fn(output, y)
loss.backward(retain_graph=True)
output.backward(torch.zeros_like(output))
```
It will report the error:
```
SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7fb37a2ec200> returned NULL without setting an error
```
But if I call the prepare_for_backward before backward (like #47260),
```python
ddp_model.reducer.prepare_for_backward(loss)
loss.backward(retain_graph=True)
ddp_model.reducer.prepare_for_backward(output)
output.backward(torch.zeros_like(output))
```
It works and the output seems ok. But I don't know if this is correct or potentially risky?
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @ezyang @gchanan @zou3519 @bdhirsh @agolynski @mrzzd @xush6528
### Versions
PyTorch 1.10.0
| 0 |
5,310 | 80,829 |
Can torchscript dump backward graph?
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
Can TorchScript dump the backward graph?
I am interested in tracing through both the forward and backward graphs using TorchScript and dumping the IR, for full-graph optimization in a separate framework. Is this currently possible?
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
5,311 | 80,827 |
Inconsistent computation of gradient in MaxUnPooling
|
module: autograd, triaged, module: determinism, actionable, module: correctness (silent)
|
### 🐛 Describe the bug
Hey,
I think there is an inconsistency in how MaxUnpool and its gradient are computed. Here is some example code:
```python
import torch
torch.manual_seed(123)
A = torch.rand(1, 1, 9, 9)
_, I = torch.nn.MaxPool2d(3, 1, return_indices=True)(A)
print("Indices", I)
B = torch.arange(I.numel(), 0, -1).to(torch.float).view(I.shape).detach()
B.requires_grad = True
print("MaxUnPool Input", B)
C = torch.nn.MaxUnpool2d(3, 1)(B, I)
print("MaxUnPool Output", C)
D = C * torch.arange(C.numel()).to(torch.float).view(C.shape)
# now compute the gradient
E = D.sum()
E.backward()
print("MaxUnPool Gradient", B.grad)
```
The output is:
```
Indices tensor([[[[20, 20, 20, 23, 23, 23, 17],
[28, 30, 30, 23, 23, 23, 34],
[28, 30, 30, 23, 23, 23, 43],
[28, 48, 48, 48, 49, 43, 43],
[47, 48, 48, 48, 49, 43, 43],
[47, 48, 48, 48, 49, 70, 70],
[74, 57, 57, 57, 69, 70, 70]]]])
MaxUnPool Input tensor([[[[49., 48., 47., 46., 45., 44., 43.],
[42., 41., 40., 39., 38., 37., 36.],
[35., 34., 33., 32., 31., 30., 29.],
[28., 27., 26., 25., 24., 23., 22.],
[21., 20., 19., 18., 17., 16., 15.],
[14., 13., 12., 11., 10., 9., 8.],
[ 7., 6., 5., 4., 3., 2., 1.]]]], requires_grad=True)
MaxUnPool Output tensor([[[[ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 43.],
[ 0., 0., 47., 0., 0., 37., 0., 0., 0.],
[ 0., 35., 0., 33., 0., 0., 0., 36., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 15., 0.],
[ 0., 0., 14., 18., 17., 0., 0., 0., 0.],
[ 0., 0., 0., 5., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 3., 8., 0.],
[ 0., 0., 7., 0., 0., 0., 0., 0., 0.]]]],
grad_fn=<MaxUnpool2DBackward0>)
MaxUnPool Gradient tensor([[[[20., 20., 20., 23., 23., 23., 17.],
[28., 30., 30., 23., 23., 23., 34.],
[28., 30., 30., 23., 23., 23., 43.],
[28., 48., 48., 48., 49., 43., 43.],
[47., 48., 48., 48., 49., 43., 43.],
[47., 48., 48., 48., 49., 70., 70.],
[74., 57., 57., 57., 69., 70., 70.]]]])
```
In "Indices" we can see that the index 20 is used 3 times.
Now in "MaxUnPool Output" we see that the index 20 does only use the very last value 47 in this case. So I would expect the gradient for these three to be [0, 0, 20].
I put in a multiplication, so that in the gradient we get the actual index where the value was taken from. As we can see, the gradient for these three values is instead [20, 20, 20].
To wrap this up. To my understanding MaxUnPool computes the following during the forward pass:
```c++
for(int batch ...)
for(int channel ...)
for(int index ...)
output[batch, channel, indices[index]] = input[batch, channel, index];
```
However, when we look at the gradient, all three entries get the value propagated, which is inconsistent with the forward pass, where only the last one is used.
This seems inconsistent to me. So I think either the forward pass needs to use a ```+=``` instead of ```=```, or the backward pass needs to propagate the value only to the last occurrence of the index.
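To illustrate the two possible forward semantics on a duplicated index (a standalone sketch, not the actual PyTorch internals):
```python
# Sketch: index 20 receives three values; scatter keeps a single write ("="),
# while scatter_add accumulates ("+="), which is what a matching backward needs.
import torch

values = torch.tensor([49., 48., 47.])
indices = torch.tensor([20, 20, 20])
overwrite = torch.zeros(81).scatter(0, indices, values)       # one value survives at 20
accumulate = torch.zeros(81).scatter_add(0, indices, values)  # 49 + 48 + 47 = 144 at 20
print(overwrite[20], accumulate[20])
```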
Best
### Versions
Collecting environment information...
PyTorch version: 1.12.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.3.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.17
Python version: 3.7.12 (default, Feb 6 2022, 20:29:18) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
Is CUDA available: False
CUDA runtime version: 11.3.109
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.0
[pip3] torchmetrics==0.9.1
[pip3] torchvision==0.13.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mruberry @kurtamohler
| 19 |
5,312 | 80,826 |
Ne op does not behave as expected with NaN
|
high priority, needs reproduction, triaged
|
### 🐛 Describe the bug
When NaN values are given as input, the torch.ne op behaves in a strange manner:
```python
import torch
import numpy as np
x= torch.tensor([ -np.inf, 0.3516, 0.3719, 0.5452, 0.4024, 0.9232, 0.6995, 0.0805, 0.7434,0.1871, 0.3802, 0.5379, 0.1533, np.nan, 0.8519, 0.7572, np.inf, 0.4675, 0.4702, 0.2297, 0.5905, 0.6923, 0.2628, -np.inf, -np.inf, 0.6335, 0.9912,0.9256, 0.0237, 0.4891, np.nan, 0.9731])
x.ne_(45)
# when x.size is greater than or equal to 32, ne op returns 0 in the place of NAN
x= torch.tensor([ -np.inf, 0.9232, 0.6995, 0.0805, 0.7434,0.1871, 0.3802, 0.5379, 0.1533, np.nan, 0.8519, 0.7572, np.inf, 0.4675, 0.4702, 0.2297, 0.5905, 0.6923, 0.2628, -np.inf, -np.inf, 0.6335, 0.9912,0.9256, 0.0237, 0.4891, np.nan, 0.9731])
x.ne_(45)
# when x.size is lesser than 32, ne op returns 1 in the place of NAN
```
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.5
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0+cu113
[pip3] torchaudio==0.11.0+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0+cu113
[conda] Could not collect
cc @ezyang @gchanan @zou3519
| 3 |
5,313 | 80,824 |
When running GPT training with Megatron, the program quits due to torch.distributed.elastic.agent.server.api: Received 1 death signal, shutting down workers
|
oncall: distributed, module: elastic
|
### 🐛 Describe the bug
1. When running GPT training with Megatron, the program quits due to torch.distributed.elastic.agent.server.api: Received 1 death signal, shutting down workers.
2. Code
Megatron-LM GitHub branch master; I changed /Megatron-LM/megatron/tokenizer/bert_tokenization.py and /Megatron-LM/megatron/tokenizer/tokenizer.py for BertTokenizer data preprocessing needs.
[tokenizer.zip](https://github.com/NVIDIA/Megatron-LM/files/9035644/tokenizer.zip)
[bert_tokenization.zip](https://github.com/NVIDIA/Megatron-LM/files/9035645/bert_tokenization.zip)
3. Training data (~103 MB)
[vocab_processed.txt](https://github.com/NVIDIA/Megatron-LM/files/9035635/vocab_processed.txt)
[my-gpt2_test_0704_text_document.zip](https://github.com/NVIDIA/Megatron-LM/files/9035638/my-gpt2_test_0704_text_document.zip)
my-gpt2_test_0704_text_document.bin is ~103 MB, which exceeds the size limit; if you need it, I can send it.
4. bash 0704_gpt_train.sh
[0704_gpt_train.zip](https://github.com/NVIDIA/Megatron-LM/files/9035651/0704_gpt_train.zip)
5. Environment:
Linux: Linux version 4.15.0-167-generic (buildd@lcy02-amd64-045) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04))
[python env.txt](https://github.com/pytorch/pytorch/files/9035632/python.env.txt)
6. Error log:
[0704.log](https://github.com/pytorch/pytorch/files/9035617/0704.log)
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-167-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: Tesla V100S-PCIE-32GB
GPU 1: Tesla V100S-PCIE-32GB
GPU 2: Tesla V100S-PCIE-32GB
GPU 3: Tesla V100S-PCIE-32GB
GPU 4: Tesla V100S-PCIE-32GB
GPU 5: Tesla V100S-PCIE-32GB
GPU 6: Tesla V100S-PCIE-32GB
GPU 7: Tesla V100S-PCIE-32GB
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 10.2.89 h713d32c_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.22.3 py38he7a7128_0 defaults
[conda] numpy-base 1.22.3 py38hf524024_0 defaults
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py38_cu102 pytorch
[conda] torchvision 0.12.0 py38_cu102 pytorch
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,314 | 80,821 |
Add typing support to ModuleList and ModuleDict
|
module: typing, triaged
|
### 🚀 The feature, motivation and pitch
Currently, the containers `nn.ModuleList` and `nn.ModuleDict` are typing-unaware, i.e. given this:
```python
class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.my_modules = nn.ModuleList([nn.Linear(1, 1) for _ in range(10)])
```
`self.my_modules[i]` is treated as `nn.Module`, not as `nn.Linear`. For example, VSCode complains about snippets like `function_that_expects_tensor(self.my_modules[i].weight)`, because it thinks that `.weight` can be both `Tensor` and `nn.Module`.
What I propose:
```python
from collections.abc import MutableSequence
from typing import TypeVar
ModuleListValue = TypeVar('ModuleListValue', bound=nn.Module)
class ModuleList(Module, MutableSequence[ModuleListValue]):
# now, some methods can be typed, e.g.:
def __getitem__(...) -> ModuleListValue:
...
...
```
For `nn.ModuleDict`, it is a bit more complicated, since there are two different patterns:
- (A) `dict`-like: the set of keys is not fixed, all values are modules of the same type (e.g. `nn.Linear`)
- (B) `TypedDict`-like: the set of keys is fixed, the values can be of different types (e.g. `{'linear': nn.Linear(...), 'relu': nn.ReLU}`).
(A) can be implemented similarly to the previous example:
```python
...
class ModuleDict(Module, MutableMapping[str, ModuleDictValue]):
...
```
In fact, this can cover (B) as well in a very limited way (by setting `ModuleDictValue=nn.Module`). It is unclear to me how to implement a fully functioning (B); it looks like we need something like `TypedMutableMapping`, but there is no such thing in typing. So I would start with `MutableMapping` and add a `TypedModuleDict` when it becomes technically possible.
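For illustration, this is how the proposed generic containers would be used (hypothetical; the annotations are valid Python at runtime today, but only become meaningful to a type checker once the containers are generic):
```python
# Hypothetical usage sketch of the proposed generics.
import torch.nn as nn

class Blocks(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.layers: "nn.ModuleList[nn.Linear]" = nn.ModuleList(  # proposed annotation
            nn.Linear(4, 4) for _ in range(3)
        )
        self.named: "nn.ModuleDict[nn.Linear]" = nn.ModuleDict(   # (A)-style dict
            {"head": nn.Linear(4, 2)}
        )

    def forward(self, x):
        # with the proposal, a checker infers nn.Linear here, so .weight is a Tensor
        return self.named["head"](self.layers[0](x))
```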
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @rgommers @xuzhao9 @gramster
| 4 |
5,315 | 80,808 |
The result of doing a dot product between two vectors, using einsum, depends on another unrelated vector
|
triaged, module: numerical-reproducibility
|
### 🐛 Describe the bug
I have two tensors, `x` and `y`. The first has shape `(2, 3, 2)`, the second has shape `(34, 2)`. I use `einsum` to calculate the dot product between each of the six 2-dimensional vectors that lie in the last dimension of `x`, and each of the 34 vectors that lie in the last dimension of `y`. The bug is that the result of the dot product between `x[0, 0]` and `y[0]` changes if we ignore the last vector of `y`, i.e. if we take `y[:33]` instead of `y`. This is undesired behavior (I think).
See here:
```python
from torch import tensor, zeros, einsum
x = zeros((2, 3, 2))
x[0, 0, 0] = 1.0791796445846558
x[0, 0, 1] = 0.30579063296318054
y = zeros((34, 2))
y[0, 0] = -0.14987720549106598
y[0, 1] = 0.9887046217918396
# the following two numbers should be equal, but they are not.
# the expressions differ in that the second one uses y[:33]
a = einsum('dhb,nb->ndh', x, y )[0, 0, 0].item() # =0.14059218764305115
b = einsum('dhb,nb->ndh', x, y[:33])[0, 0, 0].item() # =0.14059217274188995
# returns False
a == b
```
I believe this is a minimal example (at least, a local minimum):
- If I take `x` to be 1d or 2d instead of 3d, the bug does not occur.
- If I take `x` and `y` to have last dimension of size 1, the bug does not occur.
- If I change any of the non-zero entries of `x` and `y` to value zero, the bug does not occur.
- If I do a "manual" dot product instead, like this: `(x[None,...]*y[:33,None,None,:]).sum(3)[0,1,0].item()`, the bug does not occur (and we get the value `0.14059217274188995`, equal to `b` above).
- If I put the tensors on GPU (`.to('cuda')`), the bug does not occur (and again we get the value of `b` above).
- If I use numpy, the bug does not occur (but we get a different value: `0.14059218275815866`).
### Versions
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.5
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0+cu113
[pip3] torchaudio==0.11.0+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0+cu113
[conda] Could not collect
| 5 |
5,316 | 80,805 |
torch.einsum results in segfault
|
high priority, triage review, oncall: binaries, module: crash, module: openmp, module: multithreading
|
### 🐛 Describe the bug
I experience a segfault when running a simple script like:
```python
import torch
As = torch.randn(3, 2, 5)
Bs = torch.randn(3, 5, 4)
torch.einsum("bij,bjk->bik", As, Bs)
```
Running this results in:
```console
$ python test.py
Segmentation fault: 11
```
Even if I add `-X faulthandler` I don't seem to get any kind of stacktrace to help locate the issue. If someone can give me instructions for how to use gdb I can try to get a backtrace.
### Versions
```console
$ python collect_env.py
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.9.13 (main, Jun 18 2022, 21:43:00) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.6.3
[pip3] mypy==0.931
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.0
[pip3] pytorch-lightning==1.6.4
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] segmentation-models-pytorch==0.2.0
[pip3] torch==1.12.0
[pip3] torchmetrics==0.7.2
[pip3] torchvision==0.12.0a0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @seemethere @malfet
| 15 |
5,317 | 80,804 |
`torch.renorm` gives wrong gradient for 0-valued input when `p` is even and `maxnorm=0`.
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
`torch.renorm` gives wrong gradient for 0-valued input when `p` is even and `maxnorm=0`.
```py
import torch
def fn(input):
p = 2
dim = -1
maxnorm = 0
fn_res = torch.renorm(input, p, dim, maxnorm, )
return fn_res
input = torch.tensor([[0.1, 0.], [0., 0.]], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(fn, (input))
```
```
GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]], dtype=torch.float64)
analytical:tensor([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]], dtype=torch.float64)
```
Because `p=2` and `maxnorm=0`, this function should be `f(x) = 0` for every element. Therefore, it should return 0 as the gradient.
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,318 | 80,803 |
`hardshrink` gives wrong gradient for 0 input when `lambd` is 0.
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
`hardshrink` gives wrong gradient for 0-valued input when `lambd` is 0.
```python
import torch
def fn(input):
fn_res = input.hardshrink(lambd=0.0)
return fn_res
input = torch.tensor([0.], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(fn, (input))
```
```
torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[1.]], dtype=torch.float64)
analytical:tensor([[0.]], dtype=torch.float64)
```
Based on the definition of `hardshrink`, it should be `f(x) = x` if `lambd=0`. Thus, it's supposed to return 1 as the gradient when input is 0.
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,319 | 80,776 |
`torch.inverse()` crash in cuda
|
triaged, module: linear algebra, module: correctness (silent)
|
### 🐛 Describe the bug
`tensor.inverse` produces wrong results. The code worked fine before, until one day I moved `import torch` to the end of the imports.
Unfortunately, I can't provide a snippet to reproduce the bug. It may be caused by conflicts with other libraries.
The result that works correctly before:
(screenshot omitted)
After I move `import torch` to the end of the imports, I get:
(screenshot omitted)
It works correctly on CPU and with `torch.pinverse`.
### Versions
`torch.__version__`: 1.11.0+cu113
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 1 |
5,320 | 80,774 |
RPC: Make RRefProxy callable
|
oncall: distributed, enhancement, module: rpc
|
### 🚀 The feature, motivation and pitch
Executing remote callable objects (including `Module`) currently requires explicitly specifying `__call__` in the RPC command.
Consider:
```Python
import os
import torch
from torch.distributed import rpc
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
rpc.init_rpc('worker0', world_size=1, rank=0)
class MyModule(torch.nn.Module):
def forward(self, tensor):
print(tensor)
mod = rpc.remote(0, MyModule)
t = torch.randn(10)
# Works:
mod.rpc_sync().__call__(t)
# TypeError: 'RRefProxy' object is not callable
mod.rpc_sync()(t)
rpc.shutdown()
```
It would be cleaner if users didn't have to explicitly call double-underscore methods.
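For illustration, the behavior I am after can be approximated today with a thin user-side wrapper (a sketch with made-up names, not a proposed implementation of `RRefProxy` itself):
```python
# Sketch: forward plain calls on the proxy to the remote object's __call__.
class CallableProxy:
    def __init__(self, proxy):
        self._proxy = proxy

    def __getattr__(self, name):
        return getattr(self._proxy, name)

    def __call__(self, *args, **kwargs):
        return self._proxy.__call__(*args, **kwargs)

# usage: CallableProxy(mod.rpc_sync())(t)
```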
Cheers.
### Alternatives
None
### Additional context
None
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @jjlilley @mrzzd
| 0 |
5,321 | 80,771 |
Anaconda is not a package manager
|
module: docs, triaged
|
### 📚 The doc issue
The documentation in ["Getting Started"](https://pytorch.org/get-started/locally/#start-locally) states:
> "*Anaconda is our recommended package manager since it installs all dependencies.*"
However, Anaconda is not a package manager, but a distribution of Python that includes the Conda package manager. This misuse confuses users and leads to people unnecessarily installing Anaconda when many would be better off with a Miniforge variant, such as Mambaforge.
### Suggest a potential alternative/fix
Please replace "Anaconda" with "Conda".
cc @svekars @holly1238
| 0 |
5,322 | 80,765 |
Let torch.utils.tensorboard support multiprocessing
|
module: multiprocessing, triaged, module: tensorboard
|
### 🚀 The feature, motivation and pitch
In TensorboardX, [GlobalSummaryWriter](https://github.com/lanpa/tensorboardX/blob/df1944916f3aecd22309217af040f2e705997d9c/tensorboardX/global_writer.py) is implemented. I want to use torch.utils.tensorboard with multiprocessing, but I failed after a simple modification of TensorboardX's code. I believe this feature is very important and not difficult to implement. Thank you very much!
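For context, the pattern I am after looks roughly like this (a sketch of one possible approach that funnels events through a queue to a single writer process; not an existing API):
```python
# Sketch: workers put (tag, value, step) tuples on a queue; one dedicated
# process owns the SummaryWriter and writes the events.
import multiprocessing as mp
from torch.utils.tensorboard import SummaryWriter

def writer_loop(queue, logdir):
    writer = SummaryWriter(logdir)
    while True:
        item = queue.get()
        if item is None:  # sentinel to shut down
            break
        tag, value, step = item
        writer.add_scalar(tag, value, step)
    writer.close()

if __name__ == "__main__":
    q = mp.Queue()
    proc = mp.Process(target=writer_loop, args=(q, "runs/exp"))
    proc.start()
    q.put(("loss", 0.5, 0))
    q.put(None)
    proc.join()
```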
### Alternatives
_No response_
### Additional context
_No response_
cc @VitalyFedyunin
| 2 |
5,323 | 80,762 |
`atan2` fails gradcheck when `other` is a tensor with `int8` dtype
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
`atan2` fails gradcheck when `other` is a tensor with `int8` dtype:
```python
import torch
def fn(input):
other = torch.tensor([[22, 18, 29, 24, 27],
[ 3, 11, 23, 1, 19],
[17, 26, 11, 26, 2],
[22, 11, 21, 23, 29],
[ 7, 30, 24, 15, 10]], dtype=torch.int8)
fn_res = torch.atan2(input, other, )
return fn_res
input = torch.tensor([[ 0.6021, -0.8055, -0.5270, -0.3233, -0.9129]], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(fn, (input))
```
It will fail
```
GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[0.0454, 0.0000, 0.0000, 0.0000, 0.0000, 0.3204, 0.0000, 0.0000, 0.0000,
0.0000, 0.0587, 0.0000, 0.0000, 0.0000, 0.0000, 0.0454, 0.0000, 0.0000,
0.0000, 0.0000, 0.1418, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0554, 0.0000, 0.0000, 0.0000, 0.0000, 0.0904, 0.0000, 0.0000,
0.0000, 0.0000, 0.0384, 0.0000, 0.0000, 0.0000, 0.0000, 0.0904, 0.0000,
0.0000, 0.0000, 0.0000, 0.0333, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0345, 0.0000, 0.0000, 0.0000, 0.0000, 0.0435, 0.0000,
0.0000, 0.0000, 0.0000, 0.0907, 0.0000, 0.0000, 0.0000, 0.0000, 0.0476,
0.0000, 0.0000, 0.0000, 0.0000, 0.0416, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0417, 0.0000, 0.0000, 0.0000, 0.0000, 0.9054,
0.0000, 0.0000, 0.0000, 0.0000, 0.0385, 0.0000, 0.0000, 0.0000, 0.0000,
0.0435, 0.0000, 0.0000, 0.0000, 0.0000, 0.0666, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0370, 0.0000, 0.0000, 0.0000, 0.0000,
0.0525, 0.0000, 0.0000, 0.0000, 0.0000, 0.4138, 0.0000, 0.0000, 0.0000,
0.0000, 0.0344, 0.0000, 0.0000, 0.0000, 0.0000, 0.0992]],
dtype=torch.float64)
analytical:tensor([[-0.7960, 0.0000, 0.0000, 0.0000, 0.0000, 0.3204, 0.0000, 0.0000,
0.0000, 0.0000, 0.5096, 0.0000, 0.0000, 0.0000, 0.0000, -0.7960,
0.0000, 0.0000, 0.0000, 0.0000, 0.1418, 0.0000, 0.0000, 0.0000,
0.0000],
[ 0.0000, 0.2622, 0.0000, 0.0000, 0.0000, 0.0000, 0.0904, 0.0000,
0.0000, 0.0000, 0.0000, -0.2846, 0.0000, 0.0000, 0.0000, 0.0000,
0.0904, 0.0000, 0.0000, 0.0000, 0.0000, -0.2432, 0.0000, 0.0000,
0.0000],
[ 0.0000, 0.0000, 0.3958, 0.0000, 0.0000, 0.0000, 0.0000, 1.3312,
0.0000, 0.0000, 0.0000, 0.0000, 0.0907, 0.0000, 0.0000, 0.0000,
0.0000, -0.2969, 0.0000, 0.0000, 0.0000, 0.0000, 0.3734, 0.0000,
0.0000],
[ 0.0000, 0.0000, 0.0000, 0.3744, 0.0000, 0.0000, 0.0000, 0.0000,
0.9054, 0.0000, 0.0000, 0.0000, 0.0000, -0.2829, 0.0000, 0.0000,
0.0000, 0.0000, 1.3447, 0.0000, 0.0000, 0.0000, 0.0000, -0.4855,
0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, -0.7074, 0.0000, 0.0000, 0.0000,
0.0000, 0.1795, 0.0000, 0.0000, 0.0000, 0.0000, 0.4138, 0.0000,
0.0000, 0.0000, 0.0000, 0.3928, 0.0000, 0.0000, 0.0000, 0.0000,
0.0992]], dtype=torch.float64)
```
But when `other` is `int16` or `int32`, it passes gradcheck:
```python
import torch
def fn(input):
other = torch.tensor([[22, 18, 29, 24, 27],
[ 3, 11, 23, 1, 19],
[17, 26, 11, 26, 2],
[22, 11, 21, 23, 29],
[ 7, 30, 24, 15, 10]], dtype=torch.int16)
fn_res = torch.atan2(input, other, )
return fn_res
input = torch.tensor([[ 0.6021, -0.8055, -0.5270, -0.3233, -0.9129]], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(fn, (input))
# True
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,324 | 80,761 |
`det` returns a wrong gradient for a `1x1` matrix with value 0
|
module: autograd, triaged, module: edge cases
|
### 🐛 Describe the bug
`det` returns a wrong gradient for a `1x1` matrix with value 0:
```python
import torch
input = torch.tensor([[0.]], dtype=torch.float64, requires_grad=True)
torch.det(input).backward()
print(input.grad)
# tensor([[0.]], dtype=torch.float64)
```
The correct gradient should be 1. By contrast, when the value isn't zero, it returns the correct gradient:
```python
import torch
input = torch.tensor([[0.1]], dtype=torch.float64, requires_grad=True)
torch.det(input).backward()
print(input.grad)
# tensor([[1.]], dtype=torch.float64)
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 1 |
5,325 | 80,756 |
[ONNX] RuntimeError: 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/ir.cpp":518
|
oncall: jit, module: onnx, onnx-needs-info
|
### 🐛 Describe the bug
I want to export an ONNX model that contains a dynamic for loop determined by the input tensor.
However, I got the error below.
# Full Error Information
```
Traceback (most recent call last):
File "d:\End2End\test_onnx.py", line 22, in <module>
export(model_script, inputs, 'script.onnx',opset_version=11, example_outputs=output)
File "D:\anaconda\envs\mmlab\lib\site-packages\torch\onnx\__init__.py", line 271, in export
return utils.export(model, args, f, export_params, verbose, training,
File "D:\anaconda\envs\mmlab\lib\site-packages\torch\onnx\utils.py", line 88, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "D:\anaconda\envs\mmlab\lib\site-packages\torch\onnx\utils.py", line 694, in _export
_model_to_graph(model, args, verbose, input_names,
File "D:\anaconda\envs\mmlab\lib\site-packages\torch\onnx\utils.py", line 463, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "D:\anaconda\envs\mmlab\lib\site-packages\torch\onnx\utils.py", line 174, in _optimize_graph
torch._C._jit_pass_lint(graph)
RuntimeError: 0 INTERNAL ASSERT FAILED at "..\\torch\\csrc\\jit\\ir\\ir.cpp":518, please report a bug to PyTorch. 20 not in scope
```
# code to reproduce the bug.
```python
import torch
from torch import nn, jit
from typing import List
from torch.onnx import export
class Model(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv3d(1, 1, 3, 1, 1)
def forward(self, x):
outputs = jit.annotate(List[torch.Tensor], [])
for i in range(x.size(0)):
outputs.append(self.conv(x[i].unsqueeze(0)))
return torch.stack(outputs, 0).squeeze()
inputs = torch.rand((3, 1, 5, 5, 5))
model = Model()
with torch.no_grad():
output = model(inputs)
model_script = jit.script(model)
export(model_script, inputs, 'script.onnx',
opset_version=11, example_outputs=output)
```
### Versions
PyTorch version: 1.8.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 家庭中文版
GCC version: (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0
Clang version: Could not collect
CMake version: version 3.21.1
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:59:08) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 466.81
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.8.1+cu111
[pip3] torchvision==0.9.1+cu111
[conda] numpy 1.22.4 <pip>
[conda] torch 1.8.1+cu111 <pip>
[conda] torchvision 0.9.1+cu111 <pip>
| 1 |
5,326 | 80,753 |
CapabilityBasedPartitioner requires is_node_supported to only return True for CALLABLE_NODE_OPS, but no assertion for this invariant exists
|
triaged, module: fx, oncall: pt2
|
### 🐛 Describe the bug
Steps to reproduce:
1. Write an OperatorSupport that always returns True
2. Try to run the partitioner on a graph
Expected: an error, or it fuses the entire graph
Actual: the resulting graph is malformed
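A hedged sketch of the repro described above (the import paths and the `OperatorSupport` hook signature are from memory and may differ by version):
```python
# Sketch: an OperatorSupport that claims every node (including placeholders
# and outputs) is supported, then run the partitioner on a tiny graph.
import torch
import torch.fx as fx
from torch.fx.passes.infra.partitioner import CapabilityBasedPartitioner
from torch.fx.passes.operator_support import OperatorSupport

class AlwaysSupported(OperatorSupport):
    def is_node_supported(self, submodules, node) -> bool:
        return True  # violates the CALLABLE_NODE_OPS invariant

def f(x):
    return torch.relu(x) + 1

gm = fx.symbolic_trace(f)
partitioner = CapabilityBasedPartitioner(gm, AlwaysSupported())
partitions = partitioner.propose_partitions()
fused = partitioner.fuse_partitions(partitions)  # resulting graph is malformed
```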
@SherlockNoMad
### Versions
master
| 2 |
5,327 | 92,033 |
Unable to use vmap atop torch.distribution functionality
|
high priority, triaged, module: functorch
|
Hello! I'm working on an application that requires computing a neural net's weight Jacobians through a torch.distributions log probability. Minimal example code is shown below:
```python
import torch
from torch.distributions import Independent, Normal
from functorch import make_functional_with_buffers, jacrev, vmap
def compute_fischer_stateless_model(fmodel, params, buffers, input, target):
input = input.unsqueeze(0)
target = target.unsqueeze(0)
pred = fmodel(params, buffers, input)
normal = Independent(Normal(loc=pred, scale=torch.ones_like(pred)), reinterpreted_batch_ndims=1)
log_prob = normal.log_prob(target)
return log_prob
# Instantiate model, inputs, targets, etc.
fmodel, params, buffers = make_functional_with_buffers(model)
ft_compute_jac = jacrev(compute_fischer_stateless_model, argnums=1)
ft_compute_sample_jac = vmap(ft_compute_jac, in_dims=(None, None, None, 0, 0))
jac = ft_compute_sample_jac(fmodel, params, buffers, inputs, targets)
```
Executing my script returns a `RuntimeError` error of the form:
*RuntimeError: vmap: It looks like you're either (1) calling .item() on a Tensor or (2) attempting to use a Tensor in some data-dependent control flow or (3) encountering this error in PyTorch internals. For (1): we don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. For (2): If you're doing some control flow instead, we don't support that yet, please shout over at https://github.com/pytorch/functorch/issues/257 . For (3): please file an issue.*
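One guess on my side (hedged, not a confirmed fix): if the `.item()` comes from distribution argument validation, replacing the `normal = ...` line above with a version that passes `validate_args=False` might sidestep it:
```python
# Sketch: disable argument validation, which internally performs data-dependent
# checks that vmap cannot trace through (an assumption, not a confirmed fix).
normal = Independent(
    Normal(loc=pred, scale=torch.ones_like(pred), validate_args=False),
    reinterpreted_batch_ndims=1,
)
```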
Any help would be appreciated -- thanks in advance for you time!
cc @ezyang @gchanan @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99
| 9 |
5,328 | 80,742 |
Add TorchDynamo as a submodule to Pytorch?
|
module: build, triaged
|
### 🚀 The feature, motivation and pitch
It has been hard to recommend TorchDynamo usage with PyTorch, given that TorchDynamo does not have an official release and users often want to use specific versions of PyTorch (NVIDIA releases containers on a monthly basis from TOT). However, TorchDynamo currently requires using PyTorch TOT and can break when not in sync.
I wanted to propose adding TorchDynamo as a submodule to PyTorch.
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| 22 |
5,329 | 80,738 |
Output for `aten::_native_multi_head_attention` appears inconsistent with entry in `native_functions.yaml`
|
oncall: transformer/mha
|
### 🐛 Describe the bug
The following python code is an invocation of `aten::_native_multi_head_attention` that outputs a `(Tensor, None)` tuple.
```python
import torch
embed_dim = 8
num_heads = 4
bs = 4
sl = 2
qkv = torch.nn.Linear(embed_dim, embed_dim * 3, dtype=torch.float32)
proj = torch.nn.Linear(embed_dim, embed_dim, dtype=torch.float32)
q = torch.randn(bs, sl, embed_dim) * 10
k = torch.randn(bs, sl, embed_dim) * 10
v = torch.randn(bs, sl, embed_dim) * 10
mha = torch.ops.aten._native_multi_head_attention(
q,
k,
v,
embed_dim,
num_heads,
qkv.weight,
qkv.bias,
proj.weight,
proj.bias,
need_weights=False,
average_attn_weights=False,
)
print(mha)
```
The following is an example output.
```
(tensor([[[ 1.6427, 2.0966, 2.4298, 1.6536, 2.9116, -0.6659, 0.0086,
4.0757],
[ 2.0386, 0.8152, -0.8718, 1.7295, 0.9999, -1.8865, -2.7697,
1.9216]],
[[ 4.0717, 0.0476, -0.6383, 3.1022, -2.5480, 2.0922, -4.1062,
-0.5034],
[ 2.3662, 0.3523, -1.0895, 1.9332, 0.3525, 0.4775, -2.1356,
0.4972]],
[[-5.0851, 3.8904, 2.9651, -3.1131, 6.5247, -2.5286, -1.4031,
1.0763],
[-2.5247, 1.5687, -1.5536, 1.0382, 4.8081, -2.2505, 1.6698,
2.1023]],
[[-1.7481, 1.0500, 2.4167, -1.5026, 5.5205, -3.3177, 3.3927,
4.1006],
[-3.4155, 2.5501, 4.6239, -8.3866, 4.6514, -2.5655, 5.8211,
2.1764]]], grad_fn=<NotImplemented>), None)
```
Note that the second value in the returned tuple is `None`. This appears to contradict the entry for `_native_multi_head_attention` in `native_functions.yaml` which indicates that it will always return a tuple of tensors `(Tensor, Tensor)`.
```
- func: _native_multi_head_attention(Tensor query, Tensor key, Tensor value, int embed_dim, int num_head, Tensor qkv_weight, Tensor qkv_bias, Tensor proj_weight, Tensor proj_bias, Tensor? mask=None, bool need_weights=True, bool average_attn_weights=True) -> (Tensor, Tensor)
variants: function
dispatch:
CPU, CUDA, NestedTensorCPU, NestedTensorCUDA: native_multi_head_attention
```
Please let me know if I have misunderstood something regarding this function signature and this is intended behavior.
### Versions
```
PyTorch version: 1.12.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.23.2
Libc version: glibc-2.35
Python version: 3.10.5 (main, Jun 6 2022, 18:49:26) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-5.18.5-arch1-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] torch==1.12.0
[conda] Could not collect
```
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 2 |
5,330 | 80,606 |
[jit.script] jit.script gives non-deterministic results using torch.half
|
oncall: jit, module: nvfuser
|
### 🐛 Describe the bug
`torch.jit.script` gives non-deterministic results with `torch.half`. The result of the first execution of the function differs from that of the second execution, but from the second execution onward the results are identical. Here is the code to reproduce.
```
import math
import torch
@torch.jit.script
def f(x):
return torch.tanh(math.sqrt(2.0 / math.pi) * x)
x = torch.rand((32,), dtype=torch.half).cuda()
res = []
for i in range(5):
res.append(f(x))
for i in range(5):
for j in range(i+1, 5):
print(f"{i} {j}: {(res[i]-res[j]).nonzero().any()}")
```
the result is:
```
0 0: False
0 1: True
0 2: True
0 3: True
0 4: True
1 1: False
1 2: False
1 3: False
1 4: False
2 2: False
2 3: False
2 4: False
3 3: False
3 4: False
4 4: False
```
Please note that the problem does not occur with `torch.float`, and also does not occur without `@torch.jit.script`.
### Versions
Collecting environment information...
PyTorch version: 1.12.0a0+git67ece03
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.5
Libc version: glibc-2.27
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.0.221
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.103.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] torch==1.12.0a0+git67ece03
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] _pytorch_select 0.1 cpu_0 https://10.251.102.1/anaconda/pkgs/main
[conda] blas 1.0 mkl https://10.251.102.1/anaconda/pkgs/main
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 https://10.251.102.1/anaconda/pkgs/main
[conda] libmklml 2019.0.5 h06a4308_0 https://10.251.102.1/anaconda/pkgs/main
[conda] mkl 2020.2 256 https://10.251.102.1/anaconda/pkgs/main
[conda] numpy 1.20.3 py39hdbf815f_1 https://10.251.102.1/anaconda/cloud/conda-forge
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 https://10.251.102.1/anaconda/cloud/pytorch
[conda] pytorch-mutex 1.0 cuda https://10.251.102.1/anaconda/cloud/pytorch
[conda] torch 1.12.0a0+git67ece03 pypi_0 pypi
[conda] torchaudio 0.11.0 py39_cu113 https://10.251.102.1/anaconda/cloud/pytorch
[conda] torchvision 0.12.0 py39_cu113 https://10.251.102.1/anaconda/cloud/pytorch
| 2 |
5,331 | 80,605 |
pad_sequence and pack_sequence should support length zero tensors
|
module: rnn, triaged, enhancement
|
### 🚀 The feature, motivation and pitch
This situation naturally occurs when training on irregularly sampled time series data with a fixed real-time sliding window, i.e. the model receives all observations made within a time interval of, say, 2 hours.
Since the time series is irregular, it can happen that there are no observations in this time-slice, hence the resulting tensor has a 0-length dimension.
## MWE
```python
import torch
from torch.nn.utils.rnn import pack_sequence, pad_sequence
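# lengths are 3, 2, 1, 0, 1, 2 -- note the zero-length tensor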
tensors = [torch.randn(abs(n - 3)) for n in range(6)]
pad_sequence(tensors, batch_first=True, padding_value=float("nan"))
pack_sequence(tensors, enforce_sorted=False)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519
| 0 |
5,332 | 80,595 |
Overlapping Optimizer.step() with DDP backward
|
oncall: distributed, module: optimizer
|
### 🚀 The feature, motivation and pitch
DDP `all_reduce`s parameters' gradients in `buckets`, which means that some parameters get their final gradients earlier than others. This exposes an opportunity for the `Optimizer` to update part of the parameters before `backward` has finished, so overlapping part of `Optimizer.step` with the DDP `backward` is possible and may lead to higher parallelism or even hide some of the `all_reduce` cost.
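For illustration, a minimal sketch of how this could be prototyped today with a DDP communication hook, applying a plain SGD update to a bucket as soon as its all-reduce finishes. This assumes `GradBucket.parameters()`/`gradients()` and future chaining behave as described; it is only a sketch, not a proposed implementation:
```python
import torch.distributed as dist
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

def make_fused_sgd_hook(lr=0.01):
    def hook(process_group, bucket: dist.GradBucket):
        fut = default_hooks.allreduce_hook(process_group, bucket)

        def step_on_bucket(fut):
            # This bucket's averaged gradients are ready: update its parameters
            # now, while the all-reduce of the remaining buckets is still in flight.
            for p, g in zip(bucket.parameters(), bucket.gradients()):
                p.data.add_(g, alpha=-lr)
            return fut.value()  # hand the reduced buffer back to DDP unchanged

        return fut.then(step_on_bucket)
    return hook

# ddp_model.register_comm_hook(state=None, hook=make_fused_sgd_hook(lr=0.01))
```
A real design would of course have to interact with optimizer state and learning-rate schedules, not just a raw SGD update.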
### Alternatives
This re-scheduling may be achieved by compilation optimizations like `LazyTensorCore` and `XLA`.
### Additional context
<img width="1160" alt="截屏2022-06-30 下午3 58 35" src="https://user-images.githubusercontent.com/73142299/176624701-e2ec53da-5240-42e9-8ca2-7c5e71f6b62a.png">
As in the timeline above, if we can start `Optimizer.step` working on those parameters that already have their final gradients, we may save almost all of `Optimizer.step`'s cost.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @vincentqb @jbschlosser @albanD
| 5 |
5,333 | 80,594 |
RuntimeError: DataLoader worker (pid 22822) is killed by signal: Aborted.
|
module: dataloader, triaged
|
### 🐛 Describe the bug
I set num_workers=4 in the DataLoader. The training process runs normally, but this error is reported during the validation phase.
### Versions
torch 1.11.0
torchvision 0.12.0
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 3 |
5,334 | 80,588 |
Semi-reproducible random torch.baddbmm NaNs
|
needs reproduction, triaged, module: NaNs and Infs
|
### 🐛 Describe the bug
The following code snippet appears to cause `torch.baddbmm` to randomly generate NaNs when run *on CPU*
```
for i in range(10000):
out = torch.baddbmm(
torch.zeros([1, 1, 1], dtype=torch.float32),
torch.FloatTensor([[[1]]]),
torch.FloatTensor([[[1]]]),
beta=0,
)
assert not torch.isnan(out).any(), i
# AssertionError: 9886
# (or some other number)
```
Despite running the same calculation each time, it often fails not on the first try, but many tries in.
(Sometimes I need to run the loop several times before it actually encounters a NaN, which seems odd to me.)
I've tried this on two different hardware setups and encountered the same issue.
Hope I'm not just doing something silly!
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.11 (main, Mar 29 2022, 19:08:29) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.18.0-305.28.1.el8_4.x86_64-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: Quadro RTX 8000
Nvidia driver version: 470.57.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.11.0 py3.9_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu102 pytorch
[conda] torchvision 0.12.0 py39_cu102 pytorch
| 9 |
5,335 | 80,580 |
`torch.ops.aten.find` inconsistent with `str.find`
|
module: cpp, triaged, module: sorting and selection
|
### 🐛 Describe the bug
For empty target strings `aten::find` will always return the value of the starting position in the string, even when the given range is invalid.
```
>>> import torch
>>> "example".find("", 100, 0)
-1
>>> torch.ops.aten.find("example", "", 100, 0)
100
```
As far as I can tell the cause of the discrepancy is found in `register_prim_ops.cpp` on [line 1522](https://github.com/pytorch/pytorch/blob/b4e491798c0679ab2e61f36a511484d7b8ecf8d3/torch/csrc/jit/runtime/register_prim_ops.cpp#L1522) where if the target string is empty, it will always search for the target even if the search range is invalid in some way.
Additionally, this can lead to crashes for certain inputs as shown below.
```
>>> import torch
>>> torch.ops.aten.find("example", "a", 100, 200)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/quinn/torch-mlir/mlir_venv/lib/python3.10/site-packages/torch/_ops.py", line 148, in __call__
return self._op(*args, **kwargs or {})
IndexError: basic_string::substr: __pos (which is 100) > this->size() (which is 7)
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220623+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.22.5
Libc version: glibc-2.35
Python version: 3.10.5 (main, Jun 6 2022, 18:49:26) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-5.18.5-arch1-1-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] torch==1.13.0.dev20220623+cpu
[pip3] torch-mlir==20220620.509
[pip3] torchvision==0.14.0.dev20220623+cpu
[conda] Could not collect
cc @jbschlosser
| 1 |
5,336 | 80,577 |
2-dimensional arange
|
triaged, enhancement, module: nestedtensor, module: tensor creation
|
### 🚀 The feature, motivation and pitch
When dealing with batches of variable-length sequences, we often track the start indices, length of sequences, and end indices.
```python
import torch

maximum_sequence_length = 3
number_of_sequences = 3
feature_size = 4  # arbitrary feature dimension for the example
data = torch.randn(maximum_sequence_length * number_of_sequences, feature_size)
sequence_length = torch.tensor([1, 3, 2])
start_idx = torch.tensor([0, 3, 6])
end_idx = torch.tensor([1, 6, 8])
```
When we want to index our `data` tensor, and select only the non-padded/valid data, we need to generate some indices that go from `start_idx` to `end_idx`. Currently, there is no vectorized way to do this, and so we do the following
```python
# Sad, non-vectorized loop :(
idx = torch.cat([torch.arange(start_idx[b], end_idx[b]) for b in range(number_of_sequences)])
print(idx)
# Parenthesis added for readability
>>> [(0, ), (3, 4, 5), (6, 7)]
# With the indices, we can now index our data, free from padding
data[idx]
```
This has applications to filtering and message-passing GNNs. Over many batches, this for-loop concatenation overhead tends to add up. It would be nice to have something vectorized like
```python
torch.arange2d(start_idx: torch.Tensor, end_idx: torch.Tensor) -> torch.Tensor:
"""Given a vector of s_i \in starts and e_i \in ends,
returns [s_1, s_1 + 1, ... e_1, s_2, s_2 + 1, ... e_2, ... e_n]"""
```
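For reference, a sketch of how the requested semantics can be expressed today with existing vectorized ops (correct for non-negative lengths, including empty ranges, though it allocates intermediate index tensors):
```python
import torch

def arange2d(start_idx: torch.Tensor, end_idx: torch.Tensor) -> torch.Tensor:
    lengths = end_idx - start_idx                                 # per-sequence lengths
    cum_ends = torch.cumsum(lengths, dim=0)                       # cumulative output offsets
    group_starts = torch.repeat_interleave(cum_ends - lengths, lengths)
    within = torch.arange(int(lengths.sum())) - group_starts      # 0..len_b-1 per group
    return within + torch.repeat_interleave(start_idx, lengths)

start_idx = torch.tensor([0, 3, 6])
end_idx = torch.tensor([1, 6, 8])
print(arange2d(start_idx, end_idx))  # tensor([0, 3, 4, 5, 6, 7])
```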
### Alternatives
`repeat_interleave` can be used in some edge cases, but does not work in general. Slicing using a tensor would also solve this, i.e.
```python
data[start_idx: end_idx]
```
but this is not currently supported either.
### Additional context
cc @cpuhrsch @gchanan @mruberry
| 4 |
5,337 | 80,574 |
`bmm_sparse_cuda` kernel for `bfloat16`
|
module: sparse, module: cuda, triaged, module: bfloat16
|
### 🚀 The feature, motivation and pitch
The kernel would be useful for sparse attention with `bfloat16` weights
### Alternatives
```python
torch.bmm(
sparse_matrix.float(),
dense_matrix.float()
).to(torch.bfloat16)
```
### Additional context
_No response_
cc @nikitaved @pearu @cpuhrsch @amjames @ngimel
| 0 |
5,338 | 80,561 |
Cannot run scripted BERT_Pytorch
|
oncall: jit
|
### 🐛 Describe the bug
I tried to run the following code, but it fails with error
Traceback (most recent call last):
File "bug.py", line 70, in <module>
main()
File "bug.py", line 65, in main
script_f(*arg_list)
File ".../pytorch/torch/nn/modules/module.py", line 1131, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: vector::_M_range_check: __n (which is 18446744073709551615) >= this->size() (which is 3)
```python
import torch
from functorch import make_fx
from torch.nn.utils.stateless import functional_call
from functorch.compile import ts_compile
from functorch.compile import default_decompositions
import torch.utils._pytree as pytree
import importlib
import gc
def load_model(device, model_name):
module = importlib.import_module(f"torchbenchmark.models.{model_name}")
benchmark_cls = getattr(module, "Model", None)
if not hasattr(benchmark_cls, "name"):
benchmark_cls.name = model_name
batch_size = None
benchmark = benchmark_cls(
test="train", device=device, jit=False, batch_size=batch_size
)
model, example_inputs = benchmark.get_module()
model.eval()
gc.collect()
return device, benchmark.name, model, example_inputs
def trace_model(model, inputs):
def f(params, inp):
out = functional_call(model, params, inp)
# out.sum().backward()
result = 0
if isinstance(out, tuple):
for i in out:
result += i.sum()
else:
result = out.sum()
result.sum().backward()
return [param.grad for param in params.values()]
params = dict(model.named_parameters())
traced_graph = make_fx(f, decomposition_table=default_decompositions)(params, inputs)
return traced_graph, params
def main():
torch._C._jit_override_can_fuse_on_cpu(False)
torch._C._jit_override_can_fuse_on_gpu(False)
torch._C._jit_set_texpr_fuser_enabled(False)
torch._C._jit_set_nvfuser_enabled(True)
torch.manual_seed(1337)
device, name, model, example_inputs = load_model(
"cuda", 'BERT_pytorch'
)
traced_graph, params = trace_model(model, example_inputs)
traced_graph.graph.set_codegen(torch.fx.graph.CodeGen()) # avoid recursive pytree
script_f = ts_compile(traced_graph, (params, example_inputs))
arg_list, spec = pytree.tree_flatten([params, example_inputs])
script_f(*arg_list)
script_f(*arg_list)
script_f(*arg_list)
script_f(*arg_list)
if __name__ == "__main__":
main()
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0a0+git9244547
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-16) 12.1.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.5
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.112
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] botorch==0.6.4
[pip3] flowtorch==0.8
[pip3] functorch==0.3.0a0+6cfb462
[pip3] gpytorch==1.6.0
[pip3] mypy==0.950
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] numpyro==0.9.2
[pip3] pytorch-transformers==1.2.0
[pip3] torch==1.13.0a0+git9244547
[pip3] torch-struct==0.5
[pip3] torchaudio==0.12.0a0+4d2fa19
[pip3] torchdynamo==0.2.0
[pip3] torchfile==0.1.0
[pip3] torchmetrics==0.9.1
[pip3] torchrec-nightly==2022.4.26
[pip3] torchtext==0.13.0a0+d6e3550
[pip3] torchvision==0.14.0a0+a7e4fbd
[pip3] torchx-nightly==2022.6.8
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] botorch 0.6.4 pypi_0 pypi
[conda] flowtorch 0.8 pypi_0 pypi
[conda] functorch 0.3.0a0+6cfb462 dev_0 <develop>
[conda] gpytorch 1.6.0 pypi_0 pypi
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] magma-cuda116 2.6.1 0 pytorch
[conda] mkl 2022.1.0 pypi_0 pypi
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.20.0 pypi_0 pypi
[conda] numpy-base 1.22.3 py38hf524024_0
[conda] numpyro 0.9.2 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] torch 1.13.0a0+git9244547 dev_0 <develop>
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 0.12.0a0+4d2fa19 dev_0 <develop>
[conda] torchdynamo 0.2.0 dev_0 <develop>
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchmetrics 0.9.0 pypi_0 pypi
[conda] torchrec-nightly 2022.4.26 pypi_0 pypi
[conda] torchtext 0.13.0a0+d6e3550 dev_0 <develop>
[conda] torchvision 0.14.0a0+a7e4fbd dev_0 <develop>
[conda] torchx-nightly 2022.6.6 pypi_0 pypi
| 0 |
5,339 | 80,553 |
Nonliner conjugate gradient optimizer + Hager-Zhang line search
|
feature, module: optimizer, triaged, needs research
|
### 🚀 The feature, motivation and pitch
The nonlinear conjugate gradient (CG) method is a good alternative to the L-BFGS optimizer. Features of nonlinear CG:
1. It theoretically converges faster than gradient-descent methods (including most derived variants), and the memory footprint is just slightly higher than that of gradient-descent methods.
2. It is theoretically able to converge to machine precision, which may be important for regression-type problems when using single-precision floats.
3. Though it converges slower than L-BFGS does, it needs much less memory.
4. The specific CG method from Hager and Zhang automatically falls back to the vanilla gradient descent method (but with an auto-determined learning rate) if the conjugate direction does not perform better.
5. No need for configuring the learning rate. The step size (i.e., the learning rate) on the conjugate/gradient direction is determined by an inexact line search.
6. Hager and Zhang also proposed a companion inexact line-search algorithm for the proposed CG method.
I already have a working implementation of Hager and Zhang's nonlinear CG as a derived `torch.optim.Optimizer` class, including the inexact line-search algorithm, and I have been using it for my own research. In terms of tests, I only have two simple cases so far. But at least, if this feature request is approved, we won't have to start from scratch.
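For context, the core update in [1] is the standard CG direction recursion with the Hager-Zhang choice of beta (restated here from memory; the exact safeguarded constants are in the paper):

$$
y_k = g_{k+1} - g_k, \qquad
\beta_k^{N} = \frac{1}{d_k^\top y_k}\left(y_k - 2\, d_k \frac{\lVert y_k\rVert^2}{d_k^\top y_k}\right)^{\top} g_{k+1}, \qquad
d_{k+1} = -g_{k+1} + \bar{\beta}_k\, d_k,
$$

where $\bar{\beta}_k = \max\big(\beta_k^{N},\, -1/(\lVert d_k\rVert \min(\eta, \lVert g_k\rVert))\big)$ with a small constant $\eta$ (0.01 in the paper, if I recall correctly).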
This is directly related to #1359.
**Reference**
[1] W. W. Hager and H. Zhang, “Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent,” ACM Trans. Math. Softw., vol. 32, no. 1, pp. 113–137, Mar. 2006, doi: 10.1145/1132973.1132979.
### Alternatives
_No response_
### Additional context
There are some issues mentioning implementing CG, like #53441 and #17902. However, I think they are more about linear CG, which is a special case of nonlinear CG. And I feel like linear and nonlinear CG solvers usually are implemented separately. So I don't think this feature request will resolve the issues requesting linear CG.
cc @vincentqb @jbschlosser @albanD
| 1 |
5,340 | 80,551 |
NVFuser should extend caching to remove necessity for PrimTorch's executor to Provide Tensor Contiguity Info
|
triaged, module: nvfuser, module: primTorch
|
### 🚀 The feature, motivation and pitch
Currently, PrimTorch's executor provides contiguity and tensor rank size information in the `FusionDefinition` for each fusion group sent to NVFuser. This should be removed in favor of NVFuser receiving this information at runtime and caching the graph appropriately.
Therefore, the only information NVFuser should be given in the tensor definition is the rank (number of dimensions) of each Tensor. @mruberry does this match your understanding?
### Alternatives
_No response_
### Additional context
CC @csarofeen @jjsjann123 @ngimel @IvanYashchuk
cc @ezyang @mruberry @ngimel
| 9 |
5,341 | 80,549 |
Allow parameterization of Layouts
|
module: sparse, feature, triaged, module: python frontend
|
### 🚀 The feature, motivation and pitch
Some layouts can be arbitrarily parameterized. Block sparse layouts, for example require block sizes to completely specify the layout.
If we were to implement a way of parameterizing such layouts, then passing the layout in any context would be fully specified without the need for additional arguments.
### Alternatives
- All interfaces accepting a layout would strictly support a subset of layouts.
- This would make the adoption of any "parameterized" layout more difficult.
- All interfaces accepting a `layout` would have to accept any additional parameters required to perform any possible conversions.
- These interfaces would need to be updated when new layouts are introduced.
- Specification of a parameter with a `layout` option that it does not pertain to would need to be detected and handled (Warning/Error), or ignored (potentially problematic)
### Additional context
A key example (see #80542).
If the `Tensor.to` method were to support layout conversions a `blocksize` parameter would need to be accepted, but ignored for most layouts. If the `blocksize` was instead a parameter attached to the layout itself that interface (and all others accepting a `layout`) could safely be ignorant of the additional details for those layouts. The parameter need only be accessed at the point where the real format conversion function gets involved.
```python
x = torch.Tensor(...)
y = x.to(layout=torch.sparse_bsr(blocksize=(...)))
z = x.to(layout=torch.sparse_coo)
```
vs
```python
x = torch.Tensor(...)
y = x.to(layout=torch.sparse_bsr, blocksize=(...))
z = x.to(layout=torch.sparse_coo, blocksize=(...))
```
The handling of the last case in above would be tedious, and get worse as more formats introduce additional specialty parameters.
cc @nikitaved @pearu @cpuhrsch @amjames
| 4 |
5,342 | 80,541 |
[Prims+NVFuser] Prims with missing NVFuser ops
|
triaged, module: nvfuser, module: primTorch
|
These prims are missing NVFuser implementations. They were found by running https://github.com/pytorch/pytorch/compare/bahuang/nvfuser_e2e with additional decompositions.
- [ ] cat
- [ ] maximum
- [ ] transpose
cc @ezyang @mruberry @ngimel
| 9 |
5,343 | 80,496 |
DDP find_unused_parameters=True does not work for Sparse gradients
|
oncall: distributed
|
### 🐛 Describe the bug
If you have an unused parameter that is a sparse embedding table like below
```
import torch.nn as nn

class Example(nn.Module):
    def __init__(self):
        super().__init__()  # required before assigning submodules
        self.a = nn.Embedding(10, 32, sparse=True)
        self.b = nn.Embedding(10, 32, sparse=True)

    def forward(self, x):
        return self.a(x)
```
and in main, you are using `DDP(model, find_unused_parameters=True)`
then the code above fails with
```
File "/home/ec2-user/anaconda3/envs/elanmark/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Expected sparse gradient to be defined.
```
Switching `self.b` to `sparse=False` fixes the issue. However, in my actual case, each embedding is required under different cases.
It looks like `Reducer::mark_variable_ready_sparse` does not have the same checks as `Reducer::mark_variable_ready_dense`. Instead it just checks `grad.defined()`.
### Versions
Not available. Ask if specific relevant data needed.
torch==1.7.0+cu110
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,344 | 80,494 |
[bug] libtorch bug in nn::MultiheadAttention and nn::Transformer
|
module: cpp, module: nn, triaged, oncall: transformer/mha, module: correctness (silent)
|
### 🐛 Describe the bug
the attn_mask does not work in nn::MultiheadAttention
~~~
#include <torch/torch.h>
namespace nn = torch::nn;
// seq_length x batch_size x feature_size
torch::Tensor x = torch::randn({3,1,4});
torch::Tensor attn_mask = nn::TransformerImpl::generate_square_subsequent_mask(3);
torch::Tensor attn_mask_bool = attn_mask.to(torch::kBool);
nn::MultiheadAttention multihead_attention(4,1);
std::tuple<torch::Tensor, torch::Tensor> output_with_attn_mask = multihead_attention ->forward(x,x,x,{},true,attn_mask_bool);
torch::Tensor attn_output, attn_output_weights;
std::tie(attn_output, attn_output_weights) = output_with_attn_mask; //unpacking tuple into variables
std::cout << attn_output_weights << std::endl;
~~~
~~~
attn_mask:
/*
0 1 1
0 0 1
0 0 0
[ CPUBoolType{3,3} ]
*/
attn_output_weights:
(1,.,.) =
0.1918 0.5302 0.2780
0.2074 0.1919 0.6007
0.1948 0.5092 0.2960
[ CPUFloatType{1,3,3} ]
~~~
* 1. The attn_mask does not affect the attn_output_weights.
* 2. The libtorch API for MultiheadAttention and Transformer does not match the PyTorch API; e.g. the libtorch API has no batch_first param.
* 3. For a more detailed comparison, see https://github.com/walkacross/pytorch-libtorch-API-translation/tree/main/translation/torch.nn/transformer_layers
### Versions
libtorch: 1.11.0+cu113
cc @jbschlosser @albanD @mruberry @walterddr @kshitij12345 @saketh-are @bhosmer @cpuhrsch @erichan1
| 7 |
5,345 | 80,488 |
Negative values still produced by torch.nn.functional.kl_div
|
high priority, module: nn, triaged
|
### 🐛 Describe the bug
## 🐛 Bug
Despite a fix in https://github.com/pytorch/pytorch/issues/32520, kl_div in torch.nn.functional still outputs negative values.
## To Reproduce
Say we have outputs with the same values:
```python
a = torch.tensor([[1,2,3], [5.0, 5.0, 5.0]])
b = torch.tensor([[1,2,3], [5.0, 5.0, 5.0]])
torch.nn.functional.kl_div(torch.nn.functional.log_softmax(a, 1), torch.nn.functional.softmax(b, 1), reduction="none")
tensor([[ 0.0000e+00, 0.0000e+00, -1.9826e-08],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00]])
```
Or, say we have outputs with arbitrarily-chosen different values. https://github.com/pytorch/pytorch/issues/32520 attempts to resolve negative outputs by allowing for a target in log-space, however this has no effect here:
```python
a = torch.tensor([[1,2,3], [5.0, 5.0, 5.0]])
b = torch.tensor([[8,10,6], [5.0, 5.0, 5.0]])
torch.nn.functional.kl_div(torch.nn.functional.log_softmax(a, 1), torch.nn.functional.log_softmax(b, 1), reduction="none", log_target=True)
tensor([[ 0.0310, 1.0962, -0.0593],
[ 0.0000, 0.0000, 0.0000]])
```
or with a log-space target:
```python
a = torch.tensor([[1,2,3], [5.0, 5.0, 5.0]])
b = torch.tensor([[8,10,6], [5.0, 5.0, 5.0]])
torch.nn.functional.kl_div(torch.nn.functional.log_softmax(a, 1), torch.nn.functional.softmax(b, 1), reduction="none")
tensor([[ 0.0310, 1.0962, -0.0593],
[ 0.0000, 0.0000, 0.0000]])
```
## Expected Behavior
Outputs with values >= 0.
### Versions
pytorch==1.12.0 (same behavior in 1.8.2 LTS)
torchvision==0.13.0
cudatoolkit=11.3.1
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 10 |
5,346 | 80,458 |
Revisit OpInfo samples for nn.functional.max_poolNd
|
module: nn, triaged, actionable, module: pooling, module: testing
|
As mentioned [here](https://github.com/pytorch/pytorch/issues/80314#issuecomment-1168426818), the `nn.functional.max_poolNd` OpInfo entries test input combinations extensively, resulting in long test run times. From the times reported [here](https://observablehq.com/d/b95f7b67261147ba), they appear to be within the top 15 longest running tests:
```
TestOps.test_dtypes_nn_functional_max_pool1d_cuda 45.902 sec
TestOps.test_dtypes_nn_functional_max_pool2d_cuda 61.708 sec
TestOps.test_dtypes_nn_functional_max_pool3d_cuda 27.613 sec
```
We should revisit these OpInfo entries and consider reducing the number of samples to bring down these test times.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,347 | 80,439 |
scatter_reduce chosen indices
|
triaged, enhancement, module: scatter & gather ops
|
### 🚀 The feature, motivation and pitch
Can I get the chosen indices from the scatter_reduce function, **like torch.max() -> returns: max, max_indices**?
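A hedged sketch of a workaround until such an output exists: reduce with `amax`, gather the result back, and recover one matching source position per group (ties are broken arbitrarily; `src` and `index` here are made-up example values):
```python
import torch

src = torch.tensor([1., 5., 3., 2., 4.])
index = torch.tensor([0, 0, 1, 1, 1])
out = torch.zeros(2).scatter_reduce(0, index, src, reduce="amax", include_self=False)

# positions whose value equals the reduced max of their group
is_max = src == out.gather(0, index)
pos = torch.arange(src.numel())
max_indices = torch.full((2,), -1, dtype=torch.long).scatter_(0, index[is_max], pos[is_max])
print(out, max_indices)  # tensor([5., 4.]) tensor([1, 4])
```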
### Alternatives
_No response_
### Additional context
_No response_
cc @mikaylagawarecki
| 0 |
5,348 | 80,431 |
CMake Error: File /opt/pytorch/build_variables.bzl does not exist.
|
triaged, module: regression, module: docker
|
### 🐛 Describe the bug
Following #80212, the docker build command failed again today on a freshly cloned repo:
```sh
$ git clone --recursive https://github.com/pytorch/pytorch
$ cd pytorch
$ DOCKER_BUILDKIT=1 docker build . > log.txt 2>&1
```
[log.txt](https://gist.github.com/iago-lito/61358872444050cfbbff05d18ccfea00)
Maybe related: #78542 ?
### Versions
```sh
$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.1.0
Clang version: 13.0.1
CMake version: version 3.23.2
Libc version: glibc-2.35
Python version: 3.10.5 (main, Jun 6 2022, 18:49:26) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 960M
Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.4.0
/usr/lib/libcudnn_adv_infer.so.8.4.0
/usr/lib/libcudnn_adv_train.so.8.4.0
/usr/lib/libcudnn_cnn_infer.so.8.4.0
/usr/lib/libcudnn_cnn_train.so.8.4.0
/usr/lib/libcudnn_ops_infer.so.8.4.0
/usr/lib/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0a0+1665451
[conda] Could not collect
```
| 1 |
5,349 | 80,427 |
Torch fx print line number of each node
|
triaged, module: fx
|
### 🚀 The feature & motivation
It would be very helpful if each node in the graph generated by symbolic_trace included the full path and line number of the operator, similar to torch.onnx.export's verbose output. For example:
%180 : Float(1, 512, 7, 7, strides=[25088, 49, 7, 1], requires_grad=1, device=cpu) = onnx::**Add**(%240, %243) # **C:\ProgramData\Miniconda3\lib\site-packages\torchvision\models\resnet.py:80:0**
One possible use case for such a feature is locating specific functions or attribute queries such as x.size() or x.reshape.
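For what it's worth, a sketch of a possible interim approach, assuming the `record_stack_traces` flag on `torch.fx.Tracer` and the per-node `stack_trace` field behave this way in the version you use:
```python
import torchvision
from torch.fx import Tracer, GraphModule

class LineNumberTracer(Tracer):
    record_stack_traces = True  # ask the tracer to capture user stack frames

model = torchvision.models.resnet18()
tracer = LineNumberTracer()
graph = tracer.trace(model)
gm = GraphModule(tracer.root, graph)

for node in gm.graph.nodes:
    # node.stack_trace holds the file path and line number that created the node (None otherwise)
    print(node.op, node.target, node.stack_trace)
```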
Best regards
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @SherlockNoMad
| 0 |
5,350 | 93,774 |
Guard Failures in T5 Model
|
triaged, bug, oncall: pt2, module: dynamo
|
In addition to the `torch.finfo` issue, which I verified was fixed, several guard failures remain. Are these unique instances, or a hierarchy of the same failure?
The last guard failure points to the most specific place in the code, which deals with the shapes of the input linear matmuls of the multi-head attention block:
```
def project(hidden_states, proj_layer, key_value_states, past_key_value):
"""projects hidden states correctly to key/query states"""
if key_value_states is None:
# self-attn
# (batch_size, n_heads, seq_length, dim_per_head)
hidden_states = shape(proj_layer(hidden_states))
elif past_key_value is None:
# cross-attn
# (batch_size, n_heads, seq_length, dim_per_head)
hidden_states = shape(proj_layer(key_value_states))
if past_key_value is not None:
if key_value_states is None:
# self-attn
# (batch_size, n_heads, key_length, dim_per_head)
hidden_states = torch.cat([past_key_value, hidden_states], dim=2)
else:
# cross-attn
hidden_states = past_key_value
return hidden_states
```
```
torchdynamo hit recompilation cache limit (64) for function '__getitem__' (/opt/pytorch/pytorch/torch/nn/modules/container.py:194), due to the following guard failures: [['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)'], ['idx == 0'], ['___check_obj_id(self, 140428461148432)']]to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo hit recompilation cache limit (64) for function 'forward' (/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py:632), due to the following guard failures: [['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)'], ['___check_obj_id(self, 140428461148288)']]to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo hit recompilation cache limit (64) for function 'forward' (/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py:560), due to the following guard failures: [['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ["tensor 'position_bias' strides mismatch at index 0. expected 16, actual 4194304"], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)'], ['___check_obj_id(self, 140428418168912)']]to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo hit recompilation cache limit (64) for function 'forward' (/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py:437), due to the following guard failures: [['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ["tensor 'position_bias' strides mismatch at index 0. expected 16, actual 262144"], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)'], ['___check_obj_id(self, 140428219191360)']]to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
torchdynamo hit recompilation cache limit (64) for function 'project' (/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py:475), due to the following guard failures: [['___check_obj_id(proj_layer, 140428461148960)'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid'], ['___guarded_code.valid']]to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
```
How to reproduce (the script automatically installs HuggingFace so you need to do nothing to install dependencies):
```
git clone https://github.com/kevinstephano/simple_dl_models.git
cd simple_dl_models
python huggingface_t5.py [--nvprims_nvfuser|--inductor]
```
For small GPUs, you might need to dial down the number of sequences to fit.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 20 |
5,351 | 80,420 |
[DDP] output_device argument appears completely unused
|
oncall: distributed, triaged, better-engineering, module: ddp
|
### 🐛 Describe the bug
In current DDP implementation, the only reference to `self.output_device` is in DDP logging to log construction data: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py#L760.
It does not actually appear used anywhere, possibly we should mark this argument as deprecated / unused.
### Versions
main
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,352 | 80,417 |
[c10d] Async object-based collectives
|
oncall: distributed, triaged, module: c10d
|
### 🚀 The feature, motivation and pitch
Feature request for supporting an async mode for object-based collectives was raised in https://discuss.pytorch.org/t/how-can-i-receive-the-outputs-from-dist-all-gather-object-asynchronously/155004. Creating this issue to track support for all object-based collectives to be async.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,353 | 80,411 |
Tracker: Slow gradcheck failures possibly indicating incorrect gradients
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
For a very long time, most tests are being [skipped](https://ossci-raw-job-status.s3.amazonaws.com/log/7032407678) by slow gradcheck (see https://github.com/pytorch/pytorch/issues/80314#issuecomment-1168074007):
At least the following are failing in slow gradcheck (There are probably a lot more, but we can't see those right now because the test is timing out):
- [ ] test_fn_fwgrad_bwgrad___rmod___cuda_float64 (non-differentiable points encountered when sample inputs are large)
- [ ] test_fn_fwgrad_bwgrad__masked_prod_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad__masked_prod_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_cat_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_cholesky_inverse_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_copysign_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_fft_*
- [ ] test_fn_fwgrad_bwgrad_float_power_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_float_power_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_fmod_cuda_float64 (probably same issue as `__fmod__`)
- [ ] test_fn_fwgrad_bwgrad_linalg_householder_product_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_linalg_lu_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_linalg_lu_factor_ex_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_linalg_matrix_power_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_remainder_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_repeat_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_prod_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_slice_scatter_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_tile_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_pow_cuda_float64
- [ ] test_fn_fwgrad_bwgrad_pow_cuda_complex128
- [ ] test_fn_fwgrad_bwgrad_zero__cuda_complex128
- [ ] test_fn_gradgrad_linalg_lu_factor_cuda_float64
- [ ] test_fn_grad_div_trunc_rounding_cuda_float64
- [ ] test_fn_grad_div_floor_rounding_cuda_float64
See https://ossci-raw-job-status.s3.amazonaws.com/log/7034710626
The failures don't necessarily mean the formulas are wrong; `__rmod__` (and possibly also `fmod`), for example, only fails when the eps perturbation crosses a point of non-differentiability, and when sample inputs are large enough this happens with high probability.
### Versions
main
cc @ezyang @gchanan @zou3519 @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 1 |
5,354 | 80,380 |
Support for learnable p values in an LPPool-like pooling layer
|
module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
Add an LPPool-style layer that can learn its p values. p = inf corresponds to max pooling and p = 1 to average pooling, unlike the existing LPPool, where p = 1 is sum pooling.
The new layer has one p value per feature map, so a separate p can be learned for each feature map.
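A minimal sketch of such a layer (a generalized-mean style pooling with one learnable exponent per channel; the names, initialization, and eps clamp are illustrative choices, not the implementation from the repo linked below):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableLPPool2d(nn.Module):
    """Pooling with one learnable exponent p per feature map:
    p = 1 behaves like average pooling, p -> inf approaches max pooling."""
    def __init__(self, num_channels, kernel_size, stride=None, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.full((num_channels,), 3.0))  # start between avg and max
        self.kernel_size = kernel_size
        self.stride = stride if stride is not None else kernel_size
        self.eps = eps

    def forward(self, x):
        # keep p >= 1 and inputs positive so the fractional powers stay well defined
        p = self.p.clamp(min=1.0).view(1, -1, 1, 1)
        x = x.clamp(min=self.eps).pow(p)
        x = F.avg_pool2d(x, self.kernel_size, stride=self.stride)
        return x.pow(1.0 / p)

pool = LearnableLPPool2d(num_channels=8, kernel_size=2)
print(pool(torch.rand(1, 8, 4, 4)).shape)  # torch.Size([1, 8, 2, 2])
```
Using avg_pool2d inside the power/inverse-power makes p = 1 exactly average pooling, matching the behaviour described above.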
### Alternatives
_No response_
### Additional context
I already tested this layer in my repo https://github.com/joerg-de/AdaptiveLearnableLPPool2d-test where I replaced the pooling layer of some well-known networks.
For shufflenet_v2_x0_5, Acc@1 was around 1.9% better and Acc@5 around 1.5% better, with 1024 additional weights.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,355 | 80,377 |
Modify _add_docstr to also set the correct module for the APIs
|
triaged, better-engineering, actionable, module: python frontend
|
Currently, many public APIs don't have their module set correctly to meet our requirements for public API (https://github.com/pytorch/pytorch/wiki/Public-API-definition-and-documentation). All the ops in `torch.linalg`, `torch.special`, `torch.sparse` etc currently use `_add_docstr` to correctly set the doc for all the functions in these modules. The proposal here is to modify `_add_docstr` to set the correct module.
Once this is done, we can get rid of all the respective entries in this allowlist: https://github.com/pytorch/pytorch/blob/d67ce755ad1439b03d4abcc2d40496b20cfd570e/test/allowlist_for_publicAPI.json#L1786-L1827
cc. @albanD
| 3 |
5,356 | 80,372 |
[BE] Update ProcessGroupWrapper tests to test other collective message
|
oncall: distributed, triaged, better-engineering, module: c10d
|
### 🚀 The feature, motivation and pitch
Context of changes are in https://github.com/pytorch/pytorch/pull/79901.
Since the updated ProcessGroupWrapper message when `TORCH_DISTRIBUTED_DEBUG=DETAIL` is set will now print the op type, tensor shape, device type, etc. of the offending collective, we should also update the tests to check for these error messages. The tests in test_pg_wrapper.py currently only validate the error message for the current rank, but does not validate anything about the other rank's collective.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,357 | 80,349 |
Distributed Store `get` doesn't work well with `add`
|
high priority, triage review, oncall: distributed, module: docs, triaged, module: c10d
|
### 🐛 Describe the bug
While I was working on sharing a distributed seed for DataLoader, internal users encountered a problem where the distributed process hangs when trying to `get` an integer counter from the distributed store (`PrefixStore`).
While I dived deeper on this problem, I found a comment here:
https://github.com/pytorch/pytorch/blob/590d3e5774110e4657dcaa6acdb387ef69e41b47/torch/distributed/distributed_c10d.py#L242-L246
Do we have a plan to remove the legacy store to make sure `get` works with `add`? If not, should we add a note recommending that users call `add(key, 0)` to retrieve the value saved by `store.add`?
https://pytorch.org/docs/stable/distributed.html#torch.distributed.Store.add
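A minimal sketch of the `add(key, 0)` pattern mentioned above (host, port, and world size are placeholder values):
```python
import torch.distributed as dist

store = dist.TCPStore("127.0.0.1", 29500, 1, True)  # host_name, port, world_size, is_master
store.add("counter", 1)               # create / increment the integer counter
value = store.add("counter", 0)       # read the current value without modifying it
print(value)                          # 1
```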
### Versions
fblearner
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @svekars @holly1238
| 4 |
5,358 | 80,338 |
DISABLED test_lobpcg (__main__.TestAutograd)
|
module: autograd, triaged, skipped
|
Platforms: linux
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_lobpcg%2C%20TestAutograd)).
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 3 |
5,359 | 80,337 |
Illegal Memory Access from nonzero method when Tensor is Too Large
|
module: dependency bug, module: crash, triaged, module: edge cases
|
### 🐛 Describe the bug
There is a known issue that the nonzero method fails if the input tensor exceeds INT_MAX (#51871), raising a RuntimeError. However, the nonzero method also fails for tensors that are very large but slightly smaller than INT_MAX, causing an irrecoverable illegal memory access.
To Reproduce:
```
import torch
M = 2121269248
torch.ones(M, device='cuda').nonzero()
torch.cuda.synchronize()
```
Decreasing M by one eliminates the error.
Expected Behavior:
The method either returns the indices of non-zero elements, or raises a RuntimeError that can be caught and recovered from.
Additional Context:
Possibly related to #51872?
### Versions
PyTorch Version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux Server release 7.9 (Maipo) (x86_64)
GCC version: (GCC) 8.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.3
Python platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-redhat-7.9-Maipo
Is CUDA available: True
CUDA runtime version: 11.3.58
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 470.52.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.1.1
[pip3] numpy==1.21.6
[pip3] torch==1.11.0+cu113
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0+cu113
[conda] Could not collect
| 3 |
5,360 | 80,321 |
java.lang.ExceptionInInitializerError at org.pytorch.NativePeer.initHybrid(Native Method)
|
oncall: mobile
|
### 🐛 Describe the bug
I have used the Jitsi_meet plugin on the Flutter side and the PyTorch dependency in the Android module. I ran into a class duplication issue in **com.facebook.fbjni**, so I excluded the Facebook module from the PyTorch dependency. Now, after processing the image from the Android module side, I'm getting the issue below.
**My dependency** :
```
implementation("org.pytorch:pytorch_android:1.10.0") {
    exclude group: 'com.facebook.fbjni'
}
implementation("org.pytorch:pytorch_android_torchvision:1.10.0") {
    exclude group: 'com.facebook.fbjni'
}
```
**My issue** :
```
java.lang.ExceptionInInitializerError
at org.pytorch.NativePeer.initHybrid(Native Method)
at org.pytorch.NativePeer.<init>(NativePeer.java:27)
at org.pytorch.Module.load(Module.java:28)
at org.pytorch.Module.load(Module.java:38)
at com.healthcubed.ezdxlib.bluetoothHandler.RDTClassifier.processRDTImage(Unknown Source:6)
at com.healthcubed.ezdxlib.bluetoothHandler.EzdxDataParser.sendAIResult(Unknown Source:2)
at com.healthcubed.ezdxlib.bluetoothHandler.EzdxDataParser.handleImageData(Unknown Source:147)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService.rawTestDataParser(Unknown Source:130)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService.access$800(Unknown Source:0)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService$ConnectedThread.dispatchBuffer(Unknown Source:42)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService$ConnectedThread.run(Unknown Source:23)
Caused by: java.lang.RuntimeException: SoLoader.init() not yet called
at com.facebook.soloader.SoLoader.assertInitialized(SoLoader.java:781)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:505)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:484)
at com.facebook.jni.HybridData.<clinit>(HybridData.java:23)
at org.pytorch.NativePeer.initHybrid(Native Method)
at org.pytorch.NativePeer.<init>(NativePeer.java:27)
at org.pytorch.Module.load(Module.java:28)
at org.pytorch.Module.load(Module.java:38)
at com.healthcubed.ezdxlib.bluetoothHandler.RDTClassifier.processRDTImage(Unknown Source:6)
at com.healthcubed.ezdxlib.bluetoothHandler.EzdxDataParser.sendAIResult(Unknown Source:2)
at com.healthcubed.ezdxlib.bluetoothHandler.EzdxDataParser.handleImageData(Unknown Source:147)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService.rawTestDataParser(Unknown Source:130)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService.access$800(Unknown Source:0)
at com.healthcubed.ezdxlib.bluetoothHandler.BluetoothClassicService$ConnectedThread.dispatchBuffer(Unknown Source:42)
```
**if anyone has an idea about this issue. Please, post your suggestions and solutions.**
### Versions
org.pytorch:pytorch_android:1.10.0 and org.pytorch:pytorch_android_torchvision:1.10.0
| 0 |
5,361 | 93,772 |
Add support for torch.nn.quantized.modules.FloatFunctional
|
module: nn, triaged, enhancement, oncall: pt2
|
cc @albanD @mruberry @jbschlosser @walterddr @saketh-are @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
5,362 | 80,308 |
CosineAnnealingWarmRestarts with initial warm up and weight decay applied on consecutive cycles without warm up
|
feature, module: optimizer, triaged, needs research
|
### 🚀 The feature, motivation and pitch
So far there is no scheduler in torch.optim, or in open-source implementations, that supports CosineAnnealingWarmRestarts with an initial linear warmup for a specified number of steps, followed by CosineAnnealingWarmRestarts without any further warmup on each restart, which is then followed by weight decay.
### Initial_Warmup_Cosine_Annealing_With_Weight_Decay

### Initial_Warmup_Without_Weight_Decay

### No_Initial_Warmup_With_Weight_Decay

### Alternatives
Alternatives involve the ChainedScheduler paradigm, which is most suitable for mutually exclusive schedulers. To achieve this feature, I followed the high-level design pattern of ChainedScheduler and further simplified the implementation so that it is clean and easier to understand, and above all removed the need to initialize multiple schedulers ahead of time before passing them to the chained scheduler. That can be extra painful in this use case: state must be shared between the two schedulers and the same optimizer has to be passed to both, which are potential pitfalls (among others) if not done correctly.
Final thoughts:
My approach provides a much simpler, modular solution with the flexibility to tweak individual components to generate much more interesting learning-rate schedules, which can come in handy while experimenting and quickly iterating through different variations of the same lr_scheduler.
As an example of how ChainedScheduler would look for the case where two different schedulers need synchronized, updated state to be used by the next scheduler:
```python
>>> warmup_scheduler = WarmUpScheduler(
>>> optimizer,
>>> T_0=20,
>>> T_mul=1,
>>> eta_min=1e-5,
>>> warmup_steps=10,
>>> max_lr=1.0,
>>> gamma=1.0,
>>> )
>>> cosine_warm_restarts_decay = CosineAnealingWarmRestartsWeightDecay(
>>> optimizer,
>>> T_0=20,
>>> T_mul=1,
>>> eta_min=1e-5,
>>> warmup_steps=0,
>>> max_lr=1.0,
>>> gamma=0.9,
>>> )
>>> model = AlexNet(num_classes=2)
>>> optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-1)
>>> scheduler = ChainedScheduler([warmup_scheduler, cosine_warm_restarts_decay], warmup_steps=10)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     optimizer.step()
>>>     scheduler.step()
```
### Additional context
CosineAnnealingWarmRestarts with a linear warm-up during the initial steps (strictly for the number of steps specified early on), followed by weight decay, is an extremely powerful concept and a life saver at times when training deep models, as the learning rate directly affects the time required for a training run, compute resources, and whether the model converges to a local or a global optimum (just to name a few).
cc @vincentqb @jbschlosser @albanD
| 1 |
5,363 | 80,302 |
AttributeError: 'LinearPackedParams' object has no attribute '_modules'
|
needs reproduction, oncall: quantization, module: nn, triaged
|
### 🐛 Describe the bug
I am trying to run the [easyocr](https://github.com/JaidedAI/EasyOCR) library in a Spark UDF and I am getting the following error.
```
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/tmp/ipykernel_4537/530021952.py", line 107, in get_parsed_output
File "/home/centos/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 8839, in apply
return op.apply().__finalize__(self, method="apply")
File "/home/centos/.local/lib/python3.8/site-packages/pandas/core/apply.py", line 727, in apply
return self.apply_standard()
File "/home/centos/.local/lib/python3.8/site-packages/pandas/core/apply.py", line 851, in apply_standard
results, res_index = self.apply_series_generator()
File "/home/centos/.local/lib/python3.8/site-packages/pandas/core/apply.py", line 871, in apply_series_generator
results[i] = results[i].copy(deep=False)
File "/home/centos/.local/lib/python3.8/site-packages/pandas/core/apply.py", line 138, in f
return func(x, *args, **kwargs)
File "/tmp/ipykernel_4537/1144331222.py", line 32, in crop_save
File "/home/centos/.local/lib/python3.8/site-packages/easyocr/easyocr.py", line 400, in readtext
result = self.recognize(img_cv_grey, horizontal_list, free_list,\
File "/home/centos/.local/lib/python3.8/site-packages/easyocr/easyocr.py", line 330, in recognize
result0 = get_text(self.character, imgH, int(max_width), self.recognizer, self.converter, image_list,\
File "/home/centos/.local/lib/python3.8/site-packages/easyocr/recognition.py", line 206, in get_text
result1 = recognizer_predict(recognizer, converter, test_loader,batch_max_length,\
File "/home/centos/.local/lib/python3.8/site-packages/easyocr/recognition.py", line 101, in recognizer_predict
model.eval()
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1751, in eval
return self.train(False)
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1732, in train
module.train(mode)
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1732, in train
module.train(mode)
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1732, in train
module.train(mode)
[Previous line repeated 1 more time]
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1731, in train
for module in self.children():
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1618, in children
for name, module in self.named_children():
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1636, in named_children
for name, module in self._modules.items():
File "/home/centos/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LinearPackedParams' object has no attribute '_modules'
```
I am using Centos 7
Python 3.8.12
torch version - 1.11.0+cu102
torchvision version - 0.12.0+cu102
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.8.12 (default, Jun 25 2022, 20:44:59) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 3 |
5,364 | 80,301 |
Need "valid" and "same" padding mode for convTranspose2d
|
feature, module: nn, module: convolution, triaged, module: padding
|
### 🚀 The feature, motivation and pitch
The "valid" and "same" padding mode have been added for conv2D and it was a heavily requested feature. I request to add the similar feature for convTranspose2D (this feature is already present in tf)
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,365 | 80,296 |
Sort tensors inplace
|
feature, triaged, module: sorting and selection
|
### 🚀 The feature, motivation and pitch
Hello everyone,
I encounter a CUDA out-of-memory error during a sorting operation due to a huge tensor.
The tensor shape is [40000, 2048], so I think that if we could perform the sorting operation in an in-place manner it would be a very nice feature for memory efficiency.
Also, is there any workaround to make it in-place myself without changing the torch version, i.e., 1.5?
Best regards.
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
5,366 | 80,259 |
Cudnn batch norm kernel (batchnorm_bwtr_nhwc_semiPersist) gets blocked by overlapping NCCL all_reduce calls
|
module: dependency bug, module: cudnn, triaged, module: nccl, module: memory format
|
### 🐛 Describe the bug
When cuDNN + channels_last format is enabled under distributed DDP, the cuDNN batch norm backward kernel can be stalled by the overlapping NCCL all_reduce call.
Sorry, I don't have a code snippet for a repro, but here are some screenshots from the torch profiler.
Blocking batch_norm_backward

Neither the aten native implementation nor the NCHW version of cudnn kernel `bn_bw_1C11_singleread_specialized` shows this behavior.
Technically this seems like a cuDNN bug, but I don't know where/how to file a cudnn bug, so filing it here instead. This issue is also reported on https://github.com/NVIDIA/nccl/issues/338 https://github.com/NVIDIA/nccl/issues/661
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0a0+gitbc2c6ed
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 14.0.5 (https://github.com/conda-forge/clangdev-feedstock 0f793d7b2ad6a3c631eb76429f4e4def37c890dd)
CMake version: version 3.19.1
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-64-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.4.120
GPU models and configuration: GPU 0: NVIDIA A100-SXM-80GB
Nvidia driver version: 470.57.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.19.5
[pip3] pytorch3d==0.6.1
[pip3] torch==1.11.0+bc2c6ed.cuda114.cudnn841.se02.ap
[pip3] torch-scatter==2.0.8
[pip3] torch-tb-profiler==0.4.0
[pip3] torchfile==0.1.0
[pip3] torchvision==0.9.0a0+8fb5838
[conda] magma-cuda111 2.5.2 1 pytorch
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.19.5 py38h8246c76_3 conda-forge
[conda] pytorch3d 0.6.1 pypi_0 pypi
[conda] torch 1.11.0+bc2c6ed.cuda114.cudnn841.se02.ap pypi_0 pypi
[conda] torch-scatter 2.0.8 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.9.0a0+8fb5838 pypi_0 pypi
```
cc @csarofeen @ptrblck @xwang233 @VitalyFedyunin @jamesr66a
| 3 |
5,367 | 80,256 |
[complex] dropout and it's variants should support complex tensors
|
feature, module: nn, triaged, module: complex
|
### 🐛 Describe the bug
```python
import torch
torch.nn.functional.dropout(torch.randn(3, 3, dtype=torch.cfloat))
```
Output
```
RuntimeError: "bernoulli_scalar_cpu_" not implemented for 'ComplexFloat'
```
List of Dropout Modules:
* [ ] Dropout
* [ ] Dropout2d
* [ ] Dropout3d
* [ ] AlphaDropout
* [ ] FeatureAlphaDropout
### Versions
master
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @ezyang @anjali411 @dylanbespalko @Lezcano @nikitaved
| 2 |
5,368 | 80,242 |
Write some torch.distributed.nn.* tests for the new dispatcher passable ops
|
oncall: distributed, triaged
|
Write some torch.distributed.nn.* tests for the new dispatcher passable ops. This is a follow up on #79669.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,369 | 80,241 |
Change c10d APIs in ProcessGroup to accept const std::vector<at::Tensor>&
|
oncall: distributed, triaged
|
Change c10d APIs in ProcessGroup to accept `const std::vector<at::Tensor>&` instead of `std::vector<at::Tensor>&`.
This is a follow up on #79669.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,370 | 80,238 |
test_conv_backend tests OOMing in 10.2 slow_gradcheck CI
|
module: nn, module: ci, module: convolution, triaged
|
I have NOT disabled this test yet because it may just be transient. To disable the test, add DISABLED at the front of the issue title.
Platforms: linux
This test is failing on periodic jobs due to running out of memory ([recent examples](http://torch-ci.com/failure/test_conv_backend_cudnn3d_transposed_has_bias_False_strided_False_contiguous_True_cuda%2C%20TestNNDeviceTypeCUDA)).
https://hud.pytorch.org/pytorch/pytorch/commit/bab1ea8592f7308cded5d146e2d921ed2a6702dc
```
2022-06-24T07:26:25.5421038Z RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 7.44 GiB total capacity; 6.88 GiB already allocated; 20.06 MiB free; 6.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @seemethere @malfet @pytorch/pytorch-dev-infra
| 3 |
5,371 | 80,231 |
[Prims+NVFuser] Supports 0-sized inputs
|
triaged, module: nvfuser, module: primTorch
|
A potential fix for this issue is to add a guard to ensure inputs to nvFuser kernels are non-empty tensors, and fall back to eager for empty tensors.
Invoked with: <torch._C._nvfuser.FusionDefinition object at 0x7f144038b430>, <class 'RuntimeError'>, RuntimeError('sizes[i] > 0 INTERNAL ASSERT FAILED at "/fsx/users/bahuang/repos/pytorch_fsx/torch/csrc/jit/codegen/cuda/python_frontend/python_bindings.cpp":225, **please report a bug to PyTorch. Size of 0 is not supported in nvFuser. Expected size > 0.**'), <traceback object at 0x7f1440767500>
cc @ezyang @mruberry @ngimel
| 1 |
5,372 | 80,230 |
[Prims+NVFuser] Aten2Prim refs tracking items
|
triaged, module: nvfuser, module: primTorch
|
- [ ] aten.div https://github.com/pytorch/pytorch/pull/77936
- [ ] aten.amax
- [ ] aten.view
- [ ] aten.new_zero
- [ ] aten.conj_physical https://github.com/pytorch/pytorch/pull/81014
- [ ] aten.type_as
- [ ] aten.rand_like
- [ ] aten.var
cc @ezyang @mruberry @ngimel
| 1 |
5,373 | 80,226 |
Support tensor subclasses as `UninitializedParameter`s
|
module: nn, triaged, enhancement, module: lazy, tensor subclass
|
Tensor subclass support as `Parameter`s within modules was added in #73459. A notable gap is support for lazy modules, which utilize `UninitializedParameter`s. We need a way to specify that the `UninitializedParameter` should become a particular subtype when materialized into a `Parameter`.
https://github.com/pytorch/pytorch/blob/35268bdc2a6f0be613b9fa0b5e3da6ae68ece67f/torch/nn/parameter.py#L92-L110
We can change the above, but this will notably require lazy modules (implementing `LazyModuleMixin`) to change slightly. Don't see any way around this change.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,374 | 80,221 |
OpInfos for torch.ops.aten operations
|
feature, module: tests, triaged
|
### 🚀 The feature, motivation and pitch
We should have OpInfos for torch.ops.aten operations that are not covered by the forward pass of our existing torch.* based OpInfos. For example: at::convolution_backward is not directly invoked by the OpInfo for torch.nn.functional.conv2d, so it falls into this category.
## Motivation
Today, to test coverage for at::convolution_backward, someone must write a "forward + backward" test for their OpInfo.
- Performance: This is inefficient because the backward pass (convolution_backward) is generally 2x more expensive than the forward pass
- DRY: You start seeing subsystems (e.g. AOTAutograd, functorch vmap) having the same logic to run forward+backward tests for their OpInfos. They wouldn't all need to duplicate this logic if the backward operations were already a part of OpInfos.
- Coverage: OpInfo samples may not cover all of the aten operations that are involved in a particular operation. This is because when one writes an OpInfo, they look at the public python PyTorch API and try to come up with some test cases, not the ATen implementation of the operator.
- API mismatch: torch.ops.aten.index_put_ and Tensor.index_put_ actually accept different inputs (torch.ops.aten.index_put_ is more general and used in more situations, but not all of it is tested via OpInfo)
Having aten operators in torch.ops.aten will make it easier for transform and backend writers to gain confidence in their implementations.
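For concreteness, a minimal sketch of the "forward + backward" style test mentioned above (illustrative only, not an actual OpInfo sample):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8, requires_grad=True)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
out = F.conv2d(x, w)       # forward exercises at::convolution
out.sum().backward()       # backward exercises at::convolution_backward
print(x.grad.shape, w.grad.shape)
```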
### Alternatives
n/a
### Additional context
_No response_
cc @mruberry
| 0 |
5,375 | 80,208 |
F.binary_cross_entropy_with_logits unexpected behaviour
|
module: nn, module: loss, triaged
|
I wanted to see whether `F.binary_cross_entropy_with_logits` fails if I pass a non-binary target into it. Instead it gives me an answer without failing, which is odd, and I can't figure out where the numbers are coming from. See the example below (the 3rd row is confusing).
Two questions regarding the code below:
1. Why does this not throw an error because of the 7 in `targets`?
2. Why do I have to manually convert targets to `float`, when `F.cross_entropy` expects a `LongTensor` and not a `FloatTensor`?
```python
import torch
import torch.nn.functional as F
torch.manual_seed(42)
logits = torch.randn((5, 1))
targets = torch.LongTensor([0, 1, 7, 1, 0])[:, None]
log_prob = torch.cat([torch.log1p(-torch.sigmoid(logits)), F.logsigmoid(logits)], dim=-1)
print(F.binary_cross_entropy_with_logits(logits, targets.float(), reduction="none"))
print(-log_prob)
"""
Output:
tensor([[ 0.8756],
[ 0.6308],
[-0.8240],
[ 0.5846],
[ 0.2817]])
tensor([[0.8756, 0.5389],
[0.7596, 0.6308],
[0.8172, 0.5828],
[0.8149, 0.5846],
[0.2817, 1.4045]])
"""
```
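For reference, a quick manual check (a sketch, not an authoritative explanation): plugging the target value 7 directly into the textbook formula -[t * log σ(x) + (1 - t) * log(1 - σ(x))] reproduces roughly the -0.824 in the third row, which suggests the function simply applies the formula to whatever target values it is given.
```python
import torch
import torch.nn.functional as F

torch.manual_seed(42)
logits = torch.randn((5, 1))
x = logits[2, 0]          # the logit paired with the target value 7
t = 7.0
manual = -(t * F.logsigmoid(x) + (1 - t) * torch.log1p(-torch.sigmoid(x)))
print(manual)             # ~ -0.824, matching the puzzling third row
```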
### Versions
Got this error, but my pytorch version is `1.11.0` and running on python 3.9
```
Collecting environment information...
Traceback (most recent call last):
File "collect_env.py", line 492, in <module>
main()
File "collect_env.py", line 475, in main
output = get_pretty_env_info()
File "collect_env.py", line 470, in get_pretty_env_info
return pretty_str(get_env_info())
File "collect_env.py", line 319, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
File "collect_env.py", line 301, in get_pip_packages
out = run_with_pip(sys.executable + ' -mpip')
File "collect_env.py", line 289, in run_with_pip
for line in out.splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,376 | 80,206 |
`soft_margin_loss` gives wrong gradient when `target` with dtype uint8
|
module: autograd, module: nn, module: loss, triaged
|
### 🐛 Describe the bug
`soft_margin_loss` gives a wrong gradient when `target` has dtype uint8
```python
import torch
torch.random.manual_seed(9782)
def get_fn():
    target = torch.tensor([0, 0, 1, 1], dtype=torch.uint8, device='cpu')
    def fn(input):
        fn_res = torch.nn.functional.soft_margin_loss(input, target)
        return fn_res
    return fn
fn = get_fn()
input = torch.tensor([[0.9707, 0.2672, 0.2389, 0.6876]], dtype=torch.float64, device='cpu', requires_grad=True)
try:
    torch.autograd.gradcheck(fn, (input), check_sparse_nnz=False, atol=0.01, rtol=0.01, check_forward_ad=False, check_backward_ad=True, check_batched_grad=False)
except Exception as e:
    print(e)
# Jacobian mismatch for output 0 with respect to input 0,
# numerical:tensor([[ 0.0000],
# [ 0.0000],
# [-0.1101],
# [-0.0836]], dtype=torch.float64)
# analytical:tensor([[-0.0000],
# [-0.0000],
# [-0.2500],
# [-0.2500]], dtype=torch.float64)
```
If `target` has dtype `int16` or `int32`, it will return the correct gradient.
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,377 | 80,204 |
`max_unpool` gives wrong gradient when `indices` has duplicate
|
module: autograd, module: nn, triaged, module: pooling, module: edge cases
|
### 🐛 Describe the bug
The gradient formula of `max_unpool` doesn't take into account that `indices` may contain duplicate elements.
```python
import torch
def get_fn():
    indices = torch.tensor([[[4, 2, 3, 5]]], dtype=torch.int64, device='cuda')
    kernel_size = [1]
    stride = [2]
    def fn(input):
        fn_res = torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=stride,)
        return fn_res
    return fn
fn = get_fn()
input_tensor = torch.tensor([[[0.6418, 0.0359, 0.6088, 0.9408]]], dtype=torch.float64, device='cuda')
print(input_tensor)
# tensor([[[0.6418, 0.0359, 0.6088, 0.9408]]], dtype=torch.float64)
print(fn(input_tensor))
# tensor([[[0.6418, 0.9408, 0.6088, 0.0000, 0.0000, 0.0000, 0.0000]]],
# dtype=torch.float64)
try:
    input = input_tensor.clone().detach().requires_grad_()
    torch.autograd.gradcheck(fn, (input), check_sparse_nnz=False, atol=0.01, rtol=0.01, check_forward_ad=False, check_backward_ad=True, check_batched_grad=False)
except Exception as e:
    print(e)
# Jacobian mismatch for output 0 with respect to input 0,
# numerical:tensor([[1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
# [0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
# [0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000],
# [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
# device='cuda:0', dtype=torch.float64)
# analytical:tensor([[1., 0., 0., 0., 0., 0., 0.],
# [0., 1., 0., 0., 0., 0., 0.],
# [0., 0., 1., 0., 0., 0., 0.],
# [0., 1., 0., 0., 0., 0., 0.]], device='cuda:0', dtype=torch.float64)
```
In the above example, the output of `max_unpool` only has 3 elements but the gradient computed by reverse-mode has 4 elements.
I am not sure whether this is a big issue in practice, since `max_pool` cannot return `indices` with duplicate elements.
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,378 | 80,189 |
[NVFuser] Investigate models without any fusion groups found
|
triaged, module: nvfuser
|
### 🚀 The feature, motivation and pitch
Many of the modules (29%) have no fusion groups found. It's possible that there's actually a bug in the partitioner, because this number seems high.
For those who have access: https://docs.google.com/document/d/1XU48ru9Cj4XFJJF6FTQ2X_QFuzyLroE1oomKGleYfP8/edit?usp=sharing
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,379 | 80,188 |
[NVFuser] Choose partitioner op list based on supported prim decompositions
|
triaged, module: nvfuser
|
### 🚀 The feature, motivation and pitch
Currently the partitioner uses a list of supported ops that was copied from torchscript.
But for the fx -> prims -> nvfuser approach, we want the supported op list to instead be based on the list of ops that can be decomposed to prims. This could be generated by tracing through aten ops and trying to decompose them, and seeing whether or not they can be fully decomposed to prims.
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
5,380 | 80,187 |
[NVFuser] Investigate modules with bad performance relative to eager
|
triaged, module: nvfuser, module: primTorch
|
### 🚀 The feature, motivation and pitch
In particular, `hf_DistilBert_forward_0` has ~30% regression compared to eager.
Status document (for those who have access): https://docs.google.com/document/d/1XU48ru9Cj4XFJJF6FTQ2X_QFuzyLroE1oomKGleYfP8/edit?usp=sharing
cc @ezyang @mruberry @ngimel @jjsjann123 @kevinstephano
| 20 |
5,381 | 80,172 |
Torch.fx: add reporting of the name of a module not found during tracing
|
triaged, module: fx
|
### 🚀 The feature, motivation and pitch
During fx tracing of modules, the function `path_of_module` tries to get the qualified name of submodules. If it is unable to find the module (as a member returned by the root module's `named_modules()` method), it raises a `NameError` that does not report which module caused the issue. This PR adds reporting of the name of that module, making debugging easier.
### Alternatives
N/A, this is just making error messages more useful
### Additional context
One example that would raise this issue is the following torch model:
```
import torch.nn as nn

class ExampleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, (3, 3))

    def forward(self, x):
        x = self.conv1(x)
        x = nn.ReLU()(x)
        return x
```
This model works training and inference, but fails fx tracing due to the initialization of the nn.ReLU() module in the forward pass. In a toy example like above, tracing down this issue can be straightforward, but in a larger, production sized model, a lack of helpful error messages can make this a daunting task.
cc @ezyang @SherlockNoMad
| 0 |
5,382 | 93,770 |
Catch value errors if cell in match_nested_cell is empty
|
triaged, bug, oncall: pt2
|
```
symbolic_convert.py", line 1250, in match_nested_cell
value = cell.cell_contents
ValueError: Cell is empty
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
5,383 | 80,168 |
GEGLU activation
|
module: nn, triaged, enhancement, needs research
|
### 🚀 The feature, motivation and pitch
Seems to be the latest and greatest activation function for transformers.
See https://arxiv.org/abs/2002.05202v1
### Alternatives
One could copy-paste the implementation from https://github.com/pfnet-research/deep-table/blob/237c8be8a405349ce6ab78075234c60d9bfe60b7/deep_table/nn/layers/activation.py
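For reference, a minimal sketch of such a module, following the chunk-into-value-and-gate formulation used in the implementation linked above (the class name and API here are placeholders):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GEGLU(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.proj = nn.Linear(d_in, 2 * d_out)  # single projection, split into value and gate halves

    def forward(self, x):
        value, gate = self.proj(x).chunk(2, dim=-1)
        return value * F.gelu(gate)

layer = GEGLU(16, 32)
out = layer(torch.randn(4, 16))  # -> shape (4, 32)
```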
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
5,384 | 80,167 |
AMP step() enforce synchronization
|
triaged, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
```python
# torch/cuda/amp/grad_scaler.py
def _maybe_opt_step(self, optimizer, optimizer_state, *args, **kwargs):
    retval = None
    if not sum(v.item() for v in optimizer_state["found_inf_per_device"].values()):  # v is on gpu, so .item() forces a host sync
        retval = optimizer.step(*args, **kwargs)
    return retval
```
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu115
Is debug build: False
CUDA used to build PyTorch: 11.5
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.5.119
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0+cu115
[pip3] torch-geometric==2.0.4
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.13
[conda] Could not collect
cc @mcarilli @ptrblck
| 3 |
5,385 | 80,161 |
[RFC] Module specific workflows
|
module: rocm, triaged
|
@jeffdaily asked for a ciflow label to trigger ROCm jobs because `ciflow/trunk` runs a lot of non-rocm things. This is totally reasonable and likely to be a common request from module maintainers, so it's worth designing a process that can scale well.
The problem definition is: module maintainers want to be able to run a set of jobs specific to their module on PRs. Sometimes those jobs are a subset of the `trunk` or `periodic` jobs that we just want PR-time signal on, sometimes they wholly custom to a module (experimental tests, optional tests, etc.).
Here is our proposal. I phrase it specific to ROCm, but it can be read as generic across any module (e.g. onnx, functorch).
- The Infra team will continue to maintain the _core workflows_ (`pull`, `trunk`), etc. as they are today, so no change there.
- This includes ever-stricter standards on time-to-signal and reliability, since the core workflows are run for all PT devs.
- It also includes guarantees that the core workflows will be green; i.e. we will revert if one is broken.
- We define a ROCm-specific GitHub Actions workflow.
- The workflow is scoped to only affect ROCm maintainers (or other interested parties).
- Initially, this means that it will be triggered by `ciflow/rocm` label only (not pull requests or master).
- In the future, one could imagine tracking a mirror of the master branch with custom validation jobs.
- The workflow is maintained by the ROCm maintainers.
- There are no restrictions on what stuff can be put in there. The only caveat is that we need to make sure usage of shared global resources (e.g. the AMD runners) does not affect core workflows.
- The PyTorch Infra team will not maintain the workflow; that is, breakages to the workflow will not be triaged by the infra team and we will not auto-revert PRs based on failures in the workflow.
- Notably, keeping the workflow in sync with ROCm jobs in core workflows is the responsibility of the ROCm maintainers. We will develop linting/syncing infra to help there though.
The initial version of the ROCm workflow will be a strict subset of what we run on `trunk`, but one could imagine putting arbitrary ROCm-specific testing there.
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
| 3 |
5,386 | 80,157 |
Elliptic Functions and Integrals
|
feature, triaged, module: special
|
# Elliptic Functions and Integrals
A brief proposal for providing a complete suite of elliptic functions and integrals as PyTorch operators. Enjoy!
One of a five-part series of special functions issues:
- Bessel and Related Functions (#76324)
- Elliptic Functions and Integrals (#80157)
- Gamma and Related Functions (#78065)
- Orthogonal Polynomials (#80152)
-
## Parameterization
More than any other special functions, elliptic functions and integrals are expressed in various ways. In particular, the parameter is expressed using the modulus $k$, the modular angle $\alpha$, or the parameter $m$.
The following formulas relate them:
$k = \sin \alpha$
$m = k^{2} = \sin^{2} \alpha$
So that the integral of the third kind (for example) may be expressed as either:
$\Pi(n, \phi, k)$
$\Pi(n, \phi, \alpha)$
$\Pi(n, \phi, m)$
This proposal provides `k` and `m` variations denoted by either `_k` or `_m` suffixes.
## API
### Elliptic Integrals (*k*)
- [ ] `complete_elliptic_integral_k_e(input: Tensor, *, out=None) → Tensor`
- [ ] `complete_elliptic_integral_k_k(input: Tensor, *, out=None) → Tensor`
### Elliptic Integrals (*m*)
- [ ] `complete_elliptic_integral_m_e(input: Tensor, *, out=None) → Tensor`
- [ ] `complete_elliptic_integral_m_k(input: Tensor, *, out=None) → Tensor`
### Jacobi Elliptic Functions (*k*)
- [ ] `jacobi_elliptic_k_amplitude(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_cd(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_cn(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_cs(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_dc(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_dn(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_ds(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_nc(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_nd(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_ns(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_sc(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_sd(input: Tensor, k, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_k_sn(input: Tensor, k, *, out=None) → Tensor`
### Jacobi Elliptic Functions (*m*)
- [ ] `jacobi_elliptic_m_amplitude(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_cd(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_cn(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_cs(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_dc(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_dn(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_ds(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_nc(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_nd(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_ns(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_sc(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_sd(input: Tensor, m, *, out=None) → Tensor`
- [ ] `jacobi_elliptic_m_sn(input: Tensor, m, *, out=None) → Tensor`
### Inverse Jacobi Elliptic Functions (*k*)
- [ ] `inverse_jacobi_elliptic_k_cd(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_cn(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_cs(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_dc(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_dn(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_ds(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_nc(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_nd(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_ns(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_sc(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_sd(input: Tensor, k, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_k_sn(input: Tensor, k, *, out=None) → Tensor`
### Inverse Jacobi Elliptic Functions (*m*)
- [ ] `inverse_jacobi_elliptic_m_cd(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_cn(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_cs(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_dc(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_dn(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_ds(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_nc(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_nd(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_ns(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_sc(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_sd(input: Tensor, m, *, out=None) → Tensor`
- [ ] `inverse_jacobi_elliptic_m_sn(input: Tensor, m, *, out=None) → Tensor`
- [ ] `carlson_elliptic_integral_rc(x, y)`
- [ ] `carlson_elliptic_integral_rd(x, y, z)`
- [ ] `carlson_elliptic_integral_rf(x, y, z)`
- [ ] `carlson_elliptic_integral_rg(x, y, z)`
### Jacobi Theta Functions
- [ ] `jacobi_theta_1(x, q)`
- [ ] `jacobi_theta_2(x, q)`
- [ ] `jacobi_theta_3(x, q)`
- [ ] `jacobi_theta_4(x, q)`
### Jacobi Theta Derivatives
- [ ] `jacobi_theta_derivative_1(x, q)`
- [ ] `jacobi_theta_derivative_2(x, q)`
- [ ] `jacobi_theta_derivative_3(x, q)`
- [ ] `jacobi_theta_derivative_4(x, q)`
| 0 |
5,387 | 80,154 |
[primTorch] No _refs support for torch.Tensor.requires_grad.__get__
|
triaged, module: primTorch
|
Not sure what's going on here -- I would expect requires_grad should be a property access like ndim
cc @ezyang @mruberry @ngimel @eellison
| 1 |
5,388 | 80,152 |
Orthogonal Polynomials
|
feature, triaged, module: special
|
# Orthogonal Polynomials
A brief proposal for providing a complete suite of orthogonal polynomials as PyTorch operators. Enjoy!
One of a five-part series of special functions issues:
- Gamma and Related Functions (#78065)
- Bessel and Related Functions (#76324)
- Orthogonal Polynomials (#80152)
- Elliptic Functions and Integrals (#80157)
-
## API
### Chebyshev Polynomials
- [x] `chebyshev_polynomial_t(input: Tensor, n, *, out=None) → Tensor`
Chebyshev polynomial of the first kind $T_{n}\left(\text{input}\right)$.
- [x] `chebyshev_polynomial_u(input: Tensor, n, *, out=None) → Tensor`
Chebyshev polynomial of the second kind $U_{n}\left(\text{input}\right)$.
- [x] `chebyshev_polynomial_v(input: Tensor, n, *, out=None) → Tensor`
Chebyshev polynomial of the third kind $V_{n}\left(\text{input}\right)$.
- [x] `chebyshev_polynomial_w(input: Tensor, n, *, out=None) → Tensor`
Chebyshev polynomial of the fourth kind $W_{n}\left(\text{input}\right)$.
### Shifted Chebyshev Polynomials
- [x] `shifted_chebyshev_polynomial_t(input: Tensor, n, *, out=None) → Tensor`
Shifted Chebyshev polynomial of the first kind $T_{n}^{\ast}\left(\text{input}\right)$.
- [x] `shifted_chebyshev_polynomial_u(input: Tensor, n, *, out=None) → Tensor`
Shifted Chebyshev polynomial of the second kind $U_{n}^{\ast}\left(\text{input}\right)$.
- [x] `shifted_chebyshev_polynomial_v(input: Tensor, n, *, out=None) → Tensor`
Shifted Chebyshev polynomial of the third kind $V_{n}^{\ast}\left(\text{input}\right)$.
- [x] `shifted_chebyshev_polynomial_w(input: Tensor, n, *, out=None) → Tensor`
Shifted Chebyshev polynomial of the fourth kind $W_{n}^{\ast}\left(\text{input}\right)$.
### Hermite Polynomials
- [x] `hermite_polynomial_h(input: Tensor, n, *, out=None) → Tensor`
Physicist’s Hermite polynomial $H_{n}\left(\text{input}\right)$.
- [x] `hermite_polynomial_he(input: Tensor, n, *, out=None) → Tensor`
Probabilist’s Hermite polynomial $He_{n}\left(\text{input}\right)$.
### Charlier Polynomials
- [ ] `charlier_polynomial_c(input: Tensor, a, n, *, out=None) → Tensor`
### Gegenbauer Polynomials
- [ ] `gegenbauer_polynomial_c(input: Tensor, m, n, *, out=None) → Tensor`
Gegenbauer polynomial $C_{n}\left(\text{input}\right)$.
### Jacobi Polynomials
- [ ] `jacobi_polynomial_p(input: Tensor, a, b, n, *, out=None) → Tensor`
Jacobi polynomial $P_{n}\left(\text{input}\right)$.
### Laguerre Polynomials
- [ ] `generalized_laguerre_polynomial_l(input: Tensor, a, n, *, out=None) → Tensor`
- [x] `laguerre_polynomial_l(input: Tensor, n, *, out=None) → Tensor`
### Legendre Polynomials
- [x] `legendre_polynomial_p(input: Tensor, n, *, out=None) → Tensor`
Legendre polynomial $P_{n}\left(\text{input}\right)$.
### Shifted Legendre Polynomials
- [ ] `shifted_legendre_polynomial_p(input: Tensor, n, *, out=None) → Tensor`
Shifted Legendre polynomial $P_{n}^{\ast}\left(\text{input}\right)$.
| 0 |
5,389 | 80,151 |
activation checkpointing with non_reentrant implementation memory leaks
|
high priority, triage review, oncall: distributed, triaged
|
### 🐛 Describe the bug
This is coming from an internal report where using `CheckpointWrapper` with the `CheckpointImpl.NO_REENTRANT` gradually leaks memory, leading to CUDA OOM.
### Versions
main
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |
5,390 | 80,142 |
CPUProfilingAllocator greedy allocation plan generation failed
|
oncall: mobile
|
### 🐛 Describe the bug
Segmentation fault during greedy allocation plan generation using CPUProfilingAllocator. After carefully looking into the code, I found that the root cause is that `free_size_to_offset` doesn't consider tensors with the same size.
I fixed this issue by letting `free_size_to_offset` map a size to a list of offsets. When there is a new tensor allocation with an already-existing size, it is appended to the existing entry's value list.
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.5 (main, Jun 16 2022, 19:48:34) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.56.bsk.10-amd64-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] torch==1.13.0a0+git226a5e8
[conda] Could not collect
| 2 |
5,391 | 80,134 |
[feature request] Add support for a custom DatasetFetcher in DataLoader
|
module: dataloader, triaged, enhancement, module: data
|
### 🚀 The feature, motivation and pitch
I'm working on a time-series dataset, and I stumbled upon a limitation withing pytorch's DataLoader that I'd like to address here.
I have a custom map-style dataset that implements a `__getitem__` and `__len__` methods.
`__getitem__` has a following signature and returns by default features of shape `(seq_length, n_features)` and an integer label.
```python
def __getitem__(self, i: int, seq_length: int = 100):
    # ....
    return features, label
```
Sequence length is fixed by default, but can be changed by passing a `seq_length` argument.
I'm using an LSTM + Transformer model that is able to process an input of variable sequence length.\
BUT I'm not leveraging this feature of my model because pytorch's DataLoader expects all the samples returned by the dataset to be of the same shape.
A traditional approach to solving this problem is **sequence padding**. This doesn't really solve the problem in my opinion (the model still gets a constant sequence length as input) and leads to a range of further questions (which values to use for padding, how to interpolate, etc.).
**What I would like to do instead** is to use a custom DatasetFetcher to create batches that have constant sequence length within a batch, but **varying** sequence length over the dataset.\
So let's say or batch has a shape `batch_size, seq_length, n_features`. (Let batch_size=32, n_features=10)\
Our DataLoader will sample indices for the first batch and then fetch samples with a constant `seq_length=100`, which will result in a batch of shape `32, 100, 10`.\
On the next batch, it will randomly (or by any other rule) pick a new sequence length (let's say 80)
and create a batch of shape `32, 80, 10`.
```python
# Default behaviour
data = [self.dataset[idx] for idx in possibly_batched_index]
# New behaviour
data = [self.dataset.__getitem__(idx, seq_length) for idx in possibly_batched_index]
```
I'm pretty sure it will have a regularization effect.
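For completeness, here is a sketch of the closest workaround I can see with today's DataLoader, using a custom `batch_sampler` that yields `(index, seq_length)` tuples (the names and the tuple convention are my own, not an existing API):
```python
import random
import torch
from torch.utils.data import DataLoader, Dataset, Sampler

class VarLenBatchSampler(Sampler):
    def __init__(self, dataset_len, batch_size, seq_lengths=(80, 100, 120)):
        self.dataset_len = dataset_len
        self.batch_size = batch_size
        self.seq_lengths = seq_lengths

    def __iter__(self):
        indices = torch.randperm(self.dataset_len).tolist()
        for start in range(0, self.dataset_len, self.batch_size):
            seq_length = random.choice(self.seq_lengths)  # one seq_length per batch
            yield [(i, seq_length) for i in indices[start:start + self.batch_size]]

    def __len__(self):
        return (self.dataset_len + self.batch_size - 1) // self.batch_size

class ToyDataset(Dataset):
    def __len__(self):
        return 1000

    def __getitem__(self, item):       # item is the (index, seq_length) tuple from the sampler
        i, seq_length = item
        return torch.randn(seq_length, 10), 0

loader = DataLoader(ToyDataset(), batch_sampler=VarLenBatchSampler(1000, 32))
features, labels = next(iter(loader))  # e.g. features.shape == (32, 100, 10)
```
This works because the default collate only needs samples within a batch to share a shape, but it forces the seq_length decision into the sampler, which is why a custom DatasetFetcher would be cleaner.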
### Alternatives
_No response_
### Additional context
_No response_
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 1 |
5,392 | 80,132 |
Expose more MAGMA backends for solve_triangular
|
triaged, module: linear algebra, module: magma
|
### 🚀 The feature, motivation and pitch
MAGMA has quite a few (at least 3) backends for triangular solvers, and their efficiency varies wildly. We should expose as many backends as we can and do a proper benchmark of all of them to have faster triangular solvers, as triangular solvers are the basis for almost all other linalg algorithms.
This was discovered while running the benchmarks for https://github.com/pytorch/pytorch/pull/80074
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
5,393 | 80,118 |
Allow a user provided "test name - test time" mapping file work with pytorch's test sharding mechanism
|
module: ci, triaged
|
### 🚀 The feature, motivation and pitch
We're running pytorch unit tests with sharding, but we can only do plain test sharding because "test time-based sharding" is only available from pytorch's past CI data, for example, `get_shard_based_on_S3` needs `test_times_file` from S3.
Plain sharding doesn't always work well because some shards finish in 20 minutes while other shards take longer than 3 hours to finish. This happens more frequently as we add more primTorch tests, since they all occur in one test file (suite).
### Alternatives
It's ok that pytorch CI collects their own time data. Can we add some mechanism that allows users to collect and export their own test time data? Can I then save this mapping file somewhere and later reuse this data file to get a better sharding?
We currently run our tests with the below command like the pytorch CI does:
```
CI=1 python test/run_test.py -v --save-xml
```
This outputs CI stats to xml files under `test/test-reports/python-unittest/`, which contain the test name → test time mapping. It would be nice if we could rework the stats parser `upload_test_stats.py` to dump a mapping file and let the "test sharder" `get_shard_based_on_S3` reuse this file.
(impl details: If I run test in 4 shards and there are 4 mapping files dumped, do we take an extra step to merge them or do we allow the "test sharder" to take 4 sharded mapping files?)
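A rough sketch of what such an export step could look like, turning the saved xml reports into a "suite name → seconds" mapping (the report layout and attribute names here are assumptions):
```python
import json
from pathlib import Path
from xml.etree import ElementTree

times = {}
for report in Path("test/test-reports/python-unittest").rglob("*.xml"):
    root = ElementTree.parse(report).getroot()
    for suite in root.iter("testsuite"):
        name = suite.get("name", report.stem)
        times[name] = times.get(name, 0.0) + float(suite.get("time", 0.0))

Path("test-times.json").write_text(json.dumps(times, indent=2))
```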
### Additional context
N/A
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @janeyx99 @suo @ptrblck
| 5 |
5,394 | 80,104 |
Provide error message when thread pool is exhausted in RPC
|
high priority, oncall: distributed, triaged, module: rpc
|
### 🐛 Describe the bug
See issue raised in https://github.com/pytorch/pytorch/issues/80017. We should provide a better error message when we can no longer send requests due to exhaustion of threads in thread pool. Docs: https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions
### Versions
current
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @kwen2501 @jjlilley @mrzzd @H-Huang
| 1 |
5,395 | 80,080 |
Complex support in DDP
|
oncall: distributed, triaged, module: ddp
|
### 🐛 Describe the bug
There are a few issues to support complex tensors in DDP:
1. Currently, passing in complex tensors to DDP does not work because ProcessGroupNCCL doesn't know how to handle the type. @kumpera has a WIP fix in https://github.com/pytorch/pytorch/pull/74039.
2. Conjugated tensors won't work out of the box as they need to be materialized before calling `view_as_real` for correctness.
As an e2e test, we should verify that training a local model with complex inputs and DDP with complex inputs are equivalent, for both conjugated and unconjugated complex tensors.
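A small illustration of the materialization step from point 2 (a sketch, not DDP code):
```python
import torch

t = torch.randn(4, dtype=torch.cfloat)
c = t.conj()                                      # lazy conjugate view, conj bit set
print(c.is_conj())                                # True
real_view = torch.view_as_real(c.resolve_conj())  # materialize before view_as_real
print(real_view.shape)                            # torch.Size([4, 2])
```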
### Versions
main
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,396 | 80,067 |
FakeTensor: Support torch.tensor([FakeTensor, 0])
|
triaged, module: meta tensors
|
### 🚀 The feature, motivation and pitch
Repro:
```
with enable_torch_dispatch_mode(FakeTensorMode(inner=None)):
    torch.tensor([torch.ones([], dtype=torch.int64), 0],)
```
> NotImplementedError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process
`torch.tensor` undergoes a special path that doesn't go through a lot of the typical `__torch_dispatch__`.
### Alternatives
_No response_
### Additional context
This showed up in the `maml` benchmark with torch dynamo.
cc @Chillee @ezyang @zou3519 @albanD @samdow
| 7 |
5,397 | 80,061 |
pow CUDA tensor raised to CPU scalar tensor result can't backward properly
|
module: autograd, triaged, actionable
|
### 🐛 Describe the bug
Here is a short snippet to demonstrate how difficult this could be to trace back.
```
>>> a = torch.tensor(1.0, device='cuda')
>>> b = torch.tensor(2.0, device='cuda')
>>> c = torch.tensor(2.0, device='cpu')
>>> a.requires_grad = True
>>> (a**c).backward()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/amai/.conda/envs/th/lib/python3.9/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/amai/.conda/envs/th/lib/python3.9/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Expected condition, x and y to be on the same device, but condition is on cpu and x and y are on cuda:0 and cuda:0 respectively
>>> (a**b).backward()
```
When running with anomaly mode, it just points to the PowBackward1 as the issue, but there is no explanation of why.
### Versions
1.11.0, 1.13.0. The issue is also present across different environments and computers.
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @gchanan
| 4 |
5,398 | 80,033 |
Support `antialias` option on `torch.interpolate` for ONNX export
|
module: onnx, triaged, onnx-triaged
|
### 🚀 The feature, motivation and pitch
Hi, I'm working on a model that uses `torch.interpolate` for upsampling/downsampling with the `antialias` option, i.e., `torch>=1.11.0`. Antialiasing is necessary before downsampling to remove the upper half of the frequency components.
I would like to be able to export this model with ONNX, but currently the export does not yet support the `antialias` option.
Is it possible to request this addition on ONNX export?
Many thanks before.
### Alternatives
It is also possible to create a custom low-pass filter convolution to perform something similar to the `antialias` option. In this case, ONNX export should be OK. However, it would also be nice to have it supported natively, to stay in sync with the latest functionality of PyTorch.
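A sketch of that alternative, using a simple depthwise 3x3 box blur before `interpolate` (the kernel choice here is an assumption for illustration, not what `antialias=True` actually uses):
```python
import torch
import torch.nn.functional as F

def blur_downsample(x, size):
    c = x.shape[1]
    kernel = torch.full((c, 1, 3, 3), 1.0 / 9.0, dtype=x.dtype, device=x.device)
    x = F.pad(x, (1, 1, 1, 1), mode="reflect")
    x = F.conv2d(x, kernel, groups=c)              # depthwise low-pass filter
    return F.interpolate(x, size=size, mode="bilinear", align_corners=False)

y = blur_downsample(torch.randn(1, 3, 64, 64), size=(32, 32))
print(y.shape)  # torch.Size([1, 3, 32, 32])
```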
### Additional context
ONNX version 1.12.0
| 11 |
5,399 | 80,025 |
`torch.special.gammainc` backward pass with respect to the first argument
|
module: distributions, module: autograd, triaged
|
### 🚀 The feature, motivation and pitch
The current implementation of [`torch.special.gammainc`](https://pytorch.org/docs/stable/special.html#torch.special.gammainc) only supports backward pass with respect to the second argument. The documentation notes that I should open a GitHub issue in case I'm interested in the gradient w.r.t. the first argument.
The `gammainc` function (and the related [`gammaincc`](https://pytorch.org/docs/stable/special.html#torch.special.gammaincc)) is used when computing the CDF and the survival function of the [Gamma](https://en.wikipedia.org/wiki/Gamma_distribution) and [Poisson](https://en.wikipedia.org/wiki/Poisson_distribution) distributions. These are some of the most commonly used distributions for modeling non-negative & count data in statistics.
One use case where we need to backpropagate w.r.t. both arguments of `gammainc` is when modeling time until an event (e.g., predicting time until death in survival analysis or time until an earthquake in temporal point processes). If we model time until the event with a Gamma distribution, computing the likelihood of a partially observed / censored sequence will involve computing the survival function of the Gamma distribution, which relies on `gammainc`. Since we cannot backpropagate, we cannot learn the first parameter of the Gamma distribution.
Here is a minimal example that reproduces the problem.
```python
import torch
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
t = torch.tensor(1.0)
# Probability that the event hasn't occurred until time t
likelihood = torch.special.gammaincc(a, b * t)
likelihood.backward() # raises NotImplementedError
```
This code would be equivalent to `likelihood = 1.0 - torch.distributions.Gamma(a, b).cdf(t)` if `torch.distributions.Gamma` implemented the `cdf` method.
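A possible interim workaround (a rough sketch of my own, approximating the derivative w.r.t. the first argument with central finite differences) is to wrap `gammaincc` in a custom autograd `Function`:
```python
import torch


class GammainccWithAGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, x):
        ctx.save_for_backward(a, x)
        return torch.special.gammaincc(a, x)

    @staticmethod
    def backward(ctx, grad_out):
        a, x = ctx.saved_tensors
        eps = 1e-4
        # Finite-difference approximation of dQ/da (no closed form exposed in torch).
        da = (torch.special.gammaincc(a + eps, x)
              - torch.special.gammaincc(a - eps, x)) / (2 * eps)
        # Exact derivative w.r.t. x: dQ/dx = -x**(a-1) * exp(-x) / Gamma(a).
        dx = -x ** (a - 1) * torch.exp(-x) / torch.exp(torch.lgamma(a))
        return grad_out * da, grad_out * dx


a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
t = torch.tensor(1.0)
GammainccWithAGrad.apply(a, b * t).backward()  # both a.grad and b.grad are populated
```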
### Alternatives
_No response_
### Additional context
This question has already been raised in #67763 but the issue was closed since the OP was actually interested in the derivative w.r.t. the second argument, which was already implemented.
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @albanD @zou3519 @gqchen @pearu @soulitzer @Lezcano @Varal7
| 7 |
5,400 | 80,022 |
memory leaking when doing all_to_all_single communication
|
oncall: distributed, triaged, module: c10d
|
### 🐛 Describe the bug
I observed a memory-leak problem (available CPU memory keeps decreasing) when training an MoE model on (PyTorch 1.10, CUDA 11.3, NCCL 2.10.3). I found that the problem comes from the `all_to_all_single` API. The problem does not occur on (PyTorch 1.9.1, CUDA 11.1, NCCL 2.7.8).
The following script reproduces the problem.
```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = '127.0.0.2'
    os.environ['MASTER_PORT'] = '123457'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)


def demo(rank, world_size):
    setup(rank, world_size)
    x = torch.zeros(int(1e6)).to(rank)
    output = torch.empty_like(x)
    # Repeated all_to_all_single calls are enough to observe the host memory growth.
    while True:
        if dist.get_rank() == 0:
            print('running all2all communication')
        dist.all_to_all_single(output, x)


if __name__ == '__main__':
    world_size = 8
    mp.spawn(
        demo,
        args=(world_size,),
        nprocs=world_size,
        join=True,
    )
```
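One way to quantify the leak (a hedged addition of mine, not part of the original report) is to log the resident set size of each worker inside the loop, e.g. with `psutil`:
```python
import os

import psutil

# Drop-in replacement for the while-loop in the repro above; `dist`, `output`,
# and `x` are the objects defined there.
proc = psutil.Process(os.getpid())
step = 0
while True:
    dist.all_to_all_single(output, x)
    step += 1
    if dist.get_rank() == 0 and step % 500 == 0:
        print(f'step {step}: host RSS {proc.memory_info().rss / 2**20:.1f} MiB')
```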
### Versions (with memory leak)
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.6 LTS (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Clang version: Could not collect
CMake version: version 3.5.2
Libc version: glibc-2.10
Python version: 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 21:08:20) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-99-generic-x86_64-with-debian-stretch-sid
Is CUDA available: True
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 470.86
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] faiss-gpu 1.7.2 py3.7_h28a55e0_0_cuda11.3 pytorch
[conda] libfaiss 1.7.2 hfc2d529_0_cuda11.3 pytorch
[conda] mkl 2021.4.0 h8d4b97c_729 conda-forge
[conda] mkl-service 2.4.0 py37h402132d_0 conda-forge
[conda] mkl_fft 1.3.1 py37h3e078e5_1 conda-forge
[conda] mkl_random 1.2.2 py37h219a48f_0 conda-forge
[conda] numpy 1.21.2 py37h20f2e39_0 defaults
[conda] numpy-base 1.21.2 py37h79a1101_0 defaults
[conda] pytorch 1.10.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
### Versions (without memory leak)
Collecting environment information...
PyTorch version: 1.9.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.4 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.17
Python version: 3.6.13 |Anaconda, Inc.| (default, Jun 4 2021, 14:25:59) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-171-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.9.1+cu111
[conda] numpy 1.19.5 <pip>
[conda] torch 1.9.1+cu111 <pip>
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |