Serial Number (int64, 1–6k) | Issue Number (int64, 75.6k–112k) | Title (string, 3–357 chars) | Labels (string, 3–241 chars) | Body (string, 9–74.5k chars) | Comments (int64, 0–867) |
---|---|---|---|---|---|
4,801 | 84,681 |
JIT will affect the gradient computation of forward mode
|
oncall: jit
|
### 🐛 Describe the bug
JIT affects the gradient computation of forward-mode AD for some APIs.
For example, if we directly call `jit_fn` with forward mode, it will output the correct gradient:
```py
import torch
def fn(input, target):
    weight = None
    reduction = "mean"
    return torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=weight, reduction=reduction)
input = torch.tensor([[1., 1.]], dtype=torch.float32)
target = torch.tensor([[2., 2.]], dtype=torch.float32)
inputs1 = (input.clone().requires_grad_(), target.clone().requires_grad_())
inputs2 = (input.clone().requires_grad_(), target.clone().requires_grad_())
jit_fn = torch.jit.trace(fn, (input, target))
print(torch.autograd.functional.jacobian(jit_fn, inputs2, vectorize=True, strategy='forward-mode'))
# [tensor([[-0.6345, -0.6345]], grad_fn=<ReshapeAliasBackward0>), tensor([[-0.5000, -0.5000]], grad_fn=<ReshapeAliasBackward0>)]
```
However, if we call `jit_fn(*inputs1)` before the forward-mode gradient computation with `inputs2`, the gradient will be all zeros, which is wrong:
```py
import torch
def fn(input, target):
    weight = None
    reduction = "mean"
    return torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=weight, reduction=reduction)
input = torch.tensor([[1., 1.]], dtype=torch.float32)
target = torch.tensor([[2., 2.]], dtype=torch.float32)
inputs1 = (input.clone().requires_grad_(), target.clone().requires_grad_())
inputs2 = (input.clone().requires_grad_(), target.clone().requires_grad_())
jit_fn = torch.jit.trace(fn, (input, target))
jit_fn(*inputs1)
print(torch.autograd.functional.jacobian(jit_fn, inputs2, vectorize=True, strategy='forward-mode'))
# [tensor([[0., 0.]]), tensor([[0., 0.]])]
```
Interestingly, this does not affect the gradient computation in reverse mode.
Many APIs suffer from this problem, such as `lp_pool1d` (a sketch is below).
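For concreteness, a minimal sketch (my own, following the pattern above; the problematic outcome for `lp_pool1d` is as described in this report):
```py
import torch
import torch.nn.functional as F

def fn(input):
    return F.lp_pool1d(input, norm_type=2, kernel_size=2)

input = torch.rand(1, 2, 4)
jit_fn = torch.jit.trace(fn, (input,))
jit_fn(input.clone().requires_grad_())  # the warm-up call that triggers the problem
print(torch.autograd.functional.jacobian(
    jit_fn, (input.clone().requires_grad_(),), vectorize=True, strategy='forward-mode'))
```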
### Versions
pytorch: 1.12.1
| 2 |
4,802 | 84,673 |
Autograd will take `init` module API into account when using `jit`
|
oncall: jit
|
### 🐛 Describe the bug
As mentioned in the docs,
> All the functions in this module (`init`) are intended to be used to initialize neural network parameters, so they all run in :func:`torch.no_grad` mode and will not be taken into account by autograd.
However, autograd will take `init` module API into account when using `jit`.
```py
import torch
def fn(input):
    return torch.nn.init.ones_(input)
input = torch.rand([4])
jit_fn = torch.jit.trace(fn, (input.clone(), ))
fn(input.clone().requires_grad_()) # work
jit_fn(input.clone().requires_grad_())
# RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```
### Versions
pytorch: 1.12.1
| 2 |
4,803 | 84,661 |
[ONNX] Track non-exportable pattern as diagnostics.
|
module: onnx, triaged, onnx-triaged
|
Issue hub for tracking and organizing patterns to be added as diagnostic rules.
- [ ] [#87738](https://github.com/pytorch/pytorch/issues/87738) Not non-exportable. Convert shape inference warning to diagnostic.
| 0 |
4,804 | 84,652 |
Support FP16 with torch._fake_quantize_learnable_per_channel_affine & torch._fake_quantize_learnable_per_tensor_affine
|
oncall: quantization, triaged
|
### 🚀 The feature, motivation and pitch
The PyTorch operators `torch._fake_quantize_learnable_per_channel_affine` and `torch._fake_quantize_learnable_per_tensor_affine` currently only support the `torch.float32` datatype, while the non-learnable scale operators `torch.fake_quantize_per_channel_affine` and `torch.fake_quantize_per_tensor_affine` support more datatypes (e.g. `torch.float16`). As a result, the learnable scale fake quantization operators break pipelines that use CUDA mixed-precision training, and the behavior is inconsistent between the learnable and non-learnable scale fake quantize operators.
The non-learnable scale fake quantize operators originally only supported `torch.float32` (see [this issue ](https://github.com/pytorch/pytorch/issues/50417) that ran into this problem, and [this issue](https://github.com/pytorch/pytorch/issues/42351) which tracked the implementation of more datatypes). I'd like to know if there is a timeline to expand support for extra datatypes for the learnable scale fake quantization operators as well.
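For concreteness, a small sketch of the kind of call that is affected (my own sketch; the float16 support of the non-learnable op is taken from this report, and the exact signature of the private learnable op is an assumption):
```py
import torch

x = torch.randn(4, dtype=torch.float16, device="cuda")
scale = torch.tensor([0.1], device="cuda", requires_grad=True)
zero_point = torch.tensor([0.0], device="cuda", requires_grad=True)

# Works: the non-learnable op accepts float16 input (per this report).
torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)

# Reported to fail: the learnable op only supports float32.
# (Assumed signature: self, scale, zero_point, quant_min, quant_max, grad_factor.)
torch._fake_quantize_learnable_per_tensor_affine(x, scale, zero_point, 0, 255, 1.0)
```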
### Alternatives
_No response_
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 2 |
4,805 | 84,646 |
JIT script calculation/dtype inconsistent depending on operator expression
|
oncall: jit
|
### 🐛 Describe the bug
In the following simple snippet, we expect `sum_terms` to be equal to `one_expression` (both have all entries -1), and that is indeed the case in the python interpreter. With the `@torch.jit.script` decorator, that is no longer the case, indicating a bug with the JIT compiler.
```
import torch
@torch.jit.script  # code works as expected if decorator is removed
def func(a, b):
    term_1 = 1 * (a & b)
    term_2 = -1 * (a != b)
    sum_terms = term_1 + term_2
    one_expression = 1 * (a & b) - 1 * (a != b)
    # workaround:
    # one_expression = (a & b).to(torch.int64) - (a != b).to(torch.int64)
    print(sum_terms == one_expression)
a = torch.zeros(10, dtype=bool, device='cuda')
b = torch.ones(10, dtype=bool, device='cuda')
func(a, b)
```
This might have to do with the inconsistent types produced: `term_1` is of `CUDABoolType{10}` whereas `term_2` is of `CUDALongType{10}`.
Also, in this minimal version I get the following error, but in my original program containing identical code I don't; only the values are wrong (all entries 1 instead of -1). I have yet to figure out why the RuntimeError occurs in only one of the two.
```
Traceback (most recent call last):
File "test.py", line 22, in <module>
func(a, b)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "test.py", line 15, in func
1 * (a & b) - 1 * (a != b),
)
one_expression = 1 * (a & b) - 1 * (a != b)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
print(sum_terms == one_expression)
RuntimeError: Subtraction, the `-` operator, with two bool tensors is not supported. Use the `^` or `logical_xor()` operator instead.
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.0
[pip3] torchvision==0.13.0
[conda] Could not collect
```
| 0 |
4,806 | 84,630 |
torch.nn.functional.interpolate fails on some degenerate shapes, but passes on others
|
triaged, module: interpolation
|
Not sure if this can be counted as a bug or not. If you have an input with an empty batch dimension, everything works fine:
```py
>>> t = torch.rand(0, 3, 16, 16)
>>> interpolate(t, (15, 17))
tensor([], size=(0, 3, 15, 17))
```
If any other dimension is zero, it will throw an error:
```py
>>> t = torch.rand(4, 0, 16, 16)
>>> interpolate(t, (15, 17))
RuntimeError: Non-empty 4D data tensor expected but got a tensor with sizes [4, 0, 16, 16]
>>> t = torch.rand(4, 3, 0, 16)
>>> interpolate(t, (15, 17))
RuntimeError: Input and output sizes should be greater than 0, but got input (H: 0, W: 16) output (H: 15, W: 17)
>>> t = torch.rand(4, 3, 16, 0)
>>> interpolate(t, (15, 17))
RuntimeError: Input and output sizes should be greater than 0, but got input (H: 16, W: 0) output (H: 15, W: 17)
```
The latter two are somewhat understandable. Still, since no interpolation will happen in either case, shouldn't we align the behavior?
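A minimal sketch (not part of the report; `safe_interpolate` is a hypothetical helper) of a user-side guard that aligns the behavior by short-circuiting empty inputs:
```py
import torch
import torch.nn.functional as F

def safe_interpolate(t, size):
    # Hypothetical helper: if the input is empty, skip the kernel and
    # return an empty tensor with the requested spatial size.
    if t.numel() == 0:
        return t.new_empty(*t.shape[:t.dim() - len(size)], *size)
    return F.interpolate(t, size)

print(safe_interpolate(torch.rand(4, 0, 16, 16), (15, 17)).shape)  # torch.Size([4, 0, 15, 17])
print(safe_interpolate(torch.rand(4, 3, 16, 16), (15, 17)).shape)  # torch.Size([4, 3, 15, 17])
```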
| 1 |
4,807 | 84,628 |
INTERNAL ASSERT when the type of argument is not considered in JIT
|
oncall: jit
|
### 🐛 Describe the bug
An INTERNAL ASSERT is raised when the type of an argument is not considered in JIT. For example, an argument may be fed a boolean, but `torch.jit.trace` doesn't consider such cases and emits an INTERNAL ASSERT directly:
```py
import torch
def fn(input):
    return torch.add(input, other=True)
input = torch.tensor([1.])
fn(input) # tensor([2.])
jit_fn = torch.jit.trace(fn, input)
```
```
RuntimeError: 0INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646756402876/work/torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::add but it isn't a special case. Argument types: Tensor, bool, int,
Candidates:
aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> (Tensor)
aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor)
aten::add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> (Tensor(a!))
aten::add.t(t[] a, t[] b) -> (t[])
aten::add.str(str a, str b) -> (str)
aten::add.int(int a, int b) -> (int)
aten::add.complex(complex a, complex b) -> (complex)
aten::add.float(float a, float b) -> (float)
aten::add.int_complex(int a, complex b) -> (complex)
aten::add.complex_int(complex a, int b) -> (complex)
aten::add.float_complex(float a, complex b) -> (complex)
aten::add.complex_float(complex a, float b) -> (complex)
aten::add.int_float(int a, float b) -> (float)
aten::add.float_int(float a, int b) -> (float)
aten::add(Scalar a, Scalar b) -> (Scalar)
...
```
It seems that many APIs suffer from this issue, such as `ne, le, ge, div, mul, ...`
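For example, a minimal sketch (my own, following the pattern above) with `mul`:
```py
import torch

def fn(input):
    return torch.mul(input, other=True)

input = torch.tensor([1.])
fn(input)                            # works in eager
jit_fn = torch.jit.trace(fn, input)  # reported to hit the same INTERNAL ASSERT
```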
### Versions
pytorch: 1.12.1
| 2 |
4,808 | 84,625 |
Beta distribution behaves incorrectly for small parameters
|
module: distributions, triaged, module: edge cases
|
### 🐛 Describe the bug
The `Beta(alpha, alpha)` distribution should converge to `Bernoulli(0.5)` as `alpha -> 0`. However, we get a `Dirac(0.5)`, which is what we expect for `alpha -> infty` (last line, correct).
```python
import torch
print("Torch version:", torch.__version__)
torch.manual_seed(0)
beta = torch.distributions.Beta(1e-3, 1e-3)
print("Sample from Beta(1e-6, 1e-6):", beta.sample())
beta = torch.distributions.Beta(1e3, 1e+3)
print("Sample from Beta(1e+3, 1e+3):", beta.sample())
```
Gives:
```
Torch version: 1.12.1
Sample from Beta(1e-6, 1e-6): tensor(0.5000)
Sample from Beta(1e+3, 1e+3): tensor(0.4966)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.7 (default, Sep 16 2021, 08:50:36) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] numpydoc==1.1.0
[pip3] pytorch-ignite==0.4.9
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 h0a44026_0 pytorch
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-service 2.4.0 py39h9ed2024_0
[conda] mkl_fft 1.3.1 py39h4ab4a9b_0
[conda] mkl_random 1.2.2 py39hb2f4e1b_0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch 1.12.1 py3.9_0 pytorch
[conda] pytorch-ignite 0.4.9 pypi_0 pypi
[conda] torchaudio 0.12.1 py39_cpu pytorch
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.13.1 py39_cpu pytorch
```
cc @fritzo @neerajprad @alicanb @nikitaved
| 1 |
4,809 | 84,620 |
torch.hub.load local model
|
triaged, module: hub
|
### 🐛 Describe the bug
python code:
```
model_name = 'x3d_m'
# model = torch.hub.load('facebookresearch/pytorchvideo', model_name, pretrained=True)
model = torch.hub.load('/Users/kpinfo/.cache/torch/hub/pytorch_vision_master/', model_name, source='local')
```
error:
```
ImportError: cannot import name 'get_model_weights' from 'torchvision.models
```
I have run `model = torch.hub.load('facebookresearch/pytorchvideo', model_name, pretrained=True)` successfully, but sometimes it gets stuck and exits with a timeout. I found that the x3d_m weights have been downloaded to "/Users/kpinfo/.cache/torch/hub/checkpoints/X3D_M.pyth", and I want to load this local model the next time I need the x3d model.
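A minimal sketch of the local-loading path (my assumption: the model lives in the pytorchvideo hub repo, so `source='local'` likely needs to point at that cached repo rather than pytorch_vision_master; the cache directory name follows torch.hub's owner_repo_branch convention and may differ on your machine):
```py
import torch

# Assumed path to the cached pytorchvideo hub repo (not pytorch_vision_master).
repo_dir = '/Users/kpinfo/.cache/torch/hub/facebookresearch_pytorchvideo_main'

# With the checkpoint already present under ~/.cache/torch/hub/checkpoints,
# torch.hub should reuse the cached weights instead of downloading again.
model = torch.hub.load(repo_dir, 'x3d_m', source='local', pretrained=True)
```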
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 10.15.2 (x86_64)
GCC version: Could not collect
Clang version: 11.0.3 (clang-1103.0.32.62)
CMake version: Could not collect
Libc version: N/A
Python version: 3.7.11 (default, Jul 27 2021, 07:03:16) [Clang 10.0.0 ] (64-bit runtime)
Python platform: Darwin-19.2.0-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] pytorchvideo==0.1.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorchvideo 0.1.3 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
cc @nairbv @NicolasHug @vmoens @jdsgomes
| 1 |
4,810 | 84,616 |
Autogenerated out functions are missing at::cpu:: and co bindings
|
triaged, module: codegen, topic: build
|
### 🐛 Describe the bug
Example: look at `build/aten/src/ATen/ops/prod_cpu_dispatch.h`. On my build it looks like
```
TORCH_API at::Tensor prod(const at::Tensor & self, c10::optional<at::ScalarType> dtype=c10::nullopt);
TORCH_API at::Tensor prod(const at::Tensor & self, int64_t dim, bool keepdim=false, c10::optional<at::ScalarType> dtype=c10::nullopt);
TORCH_API at::Tensor & prod_out(at::Tensor & out, const at::Tensor & self, int64_t dim, bool keepdim=false, c10::optional<at::ScalarType> dtype=c10::nullopt);
TORCH_API at::Tensor & prod_outf(const at::Tensor & self, int64_t dim, bool keepdim, c10::optional<at::ScalarType> dtype, at::Tensor & out);
```
However, notice that there are two functional overloads, but only one `prod_out` overload. We're missing the Tensor, ScalarType overload for out.
This is affecting static runtime.
cc @bhosmer @bdhirsh @d1jang @tenpercent
### Versions
master
| 0 |
4,811 | 84,615 |
Serialize the warmed up torchscript module
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
- JIT and TorchScript can provide a huge performance boost for models, but the long warm-up time for large models such as the Stable Diffusion UNet (~90 s) can be a huge blocker for production and even individual use.
- It would be super helpful to have a way of serializing the warmed-up module (even with the limitation that it must be run on specific hardware); a sketch of what exists today is below.
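A minimal sketch of what exists today (my own sketch): `torch.jit.save`/`torch.jit.load` persist the scripted module, but the profiling/fusion state is rebuilt after loading, so the warm-up runs still have to be repeated:
```py
import torch

model = torch.jit.script(torch.nn.Linear(8, 8).eval())
example = torch.randn(1, 8)

for _ in range(3):          # warm-up triggers profiling + fusion
    model(example)

torch.jit.save(model, "model.ts")
reloaded = torch.jit.load("model.ts")
for _ in range(3):          # the warm-up cost is paid again after load
    reloaded(example)
```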
### Alternatives
- Tried `torch._C._jit_set_profiling_executor(False)`, which eliminates the extra recompilations, but it also eliminates the performance gain.
### Additional context
_No response_
| 1 |
4,812 | 93,661 |
Capture scalar outputs / dynamically sized outputs by default, partition graphs for backends that can't handle it
|
triaged, ezyang's list, oncall: pt2, module: dynamic shapes
|
Capturing scalar outputs improves perf e.g. by capturing the optimizer. Now that the fx subgraph capability matcher has landed, it should be easy for backends which do not support scalars to compile subgraphs even with this config turned on. Additionally, having a config option be off by default means its test coverage is extremely limited.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 4 |
4,813 | 84,597 |
Accept SymInts and SymFloats For Scalar Inputs
|
triaged
|
### 🚀 The feature, motivation and pitch
When we are in a tracing context (ProxyTensor/FakeTensor/TorchDynamo), tensor -> scalar conversions can cause graph breaks because the tracing tensors do not have real data associated with them. This can occur with user invocations such as `x.item()` and in implicit Tensor->Scalar conversions when a 0-element tensor is passed in as a `Scalar` argument.
ProxyTensor and FakeTensor do some amount of [constant](https://github.com/pytorch/pytorch/pull/84387) tracking for cases where the value of the 0-element tensor is known statically, but there are other cases where this will occur.
For example, in [adagrad](https://github.com/pytorch/pytorch/blob/master/torch/optim/adagrad.py#L279): `param.addcdiv_(grad, std, value=-clr)`
`addcdiv` has the signature: `addcdiv(Tensor self, Tensor tensor1, Tensor tensor2, *, Scalar value=1) -> Tensor`. When the config capture_scalar_outputs is true, `-clr` will be a 0-element tensor, and the dispatcher call of _local_scalar_dense in converting `-clr` to a scalar will cause a graph break.
If Scalars could contain SymInts/Symfloats, `-clr` could be returned from `_local_scalar_dense` as a SymFloat without graph breaking.
More generally, this might allow us to re-factor the "capture scalar inputs" config to work by tracing SymInts/Symfloats instead of wrapping scalars to tensors, and potentially track other dynamic shape operations that occur from indexing tensor data.
| 0 |
4,814 | 84,593 |
Uneven and/or Dynamically sized collectives
|
good first issue, triaged, module: c10d
|
### 🚀 The feature, motivation and pitch
A recurring question we get is how to handle uneven or dynamic sized collectives.
For example, users want to:
- Scatter tensors of different sizes.
- Dynamically determine broadcast tensor sizes on the source rank.
There are plenty of forum questions on this subject that validate the need for such a new API.
In addition to being a constant source of problems for our users, such APIs are particularly difficult to implement efficiently and correctly when using NCCL. This stems from the fact that shape calculation happens on the CPU but values must transit in CUDA tensors, as NCCL doesn't support CPU tensors.
This device mismatch issue is not trivial and resulted in quite a few bugs in c10d's object collectives.
## The API
Add a `torch.distributed.dynamic` package that supports slower but significantly more flexible collectives that address those concerns.
The overall design of those collectives is that they should not assume perfect uniform knowledge across all ranks.
The module would have variants of broadcast, all_gather, gather, scatter, all_to_all that can handle dynamic and uneven tensor sizes transparently for users.
It's unclear whether we should extend this to reduce collectives as we'd have to deal with the issue of missing data.
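For reference, a minimal sketch of the pad-to-max workaround users typically write today for uneven `all_gather` (the sizes are exchanged first, then the gathered tensors are trimmed); this is illustrative, not the proposed `torch.distributed.dynamic` API:
```py
import torch
import torch.distributed as dist

def all_gather_uneven(t):
    """Gather 1-D tensors of different lengths from all ranks (illustrative sketch)."""
    world = dist.get_world_size()
    # 1) exchange lengths
    local_len = torch.tensor([t.numel()], device=t.device)
    lens = [torch.zeros_like(local_len) for _ in range(world)]
    dist.all_gather(lens, local_len)
    max_len = int(torch.stack(lens).max())
    # 2) pad to the maximum length and gather
    padded = torch.zeros(max_len, dtype=t.dtype, device=t.device)
    padded[: t.numel()] = t
    out = [torch.zeros_like(padded) for _ in range(world)]
    dist.all_gather(out, padded)
    # 3) trim each result back to its true length
    return [o[: int(l)] for o, l in zip(out, lens)]
```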
### Alternatives
Not implement this module and keep suggesting work-arounds to users.
### Additional context
Data-dependent collective shapes is particularly common with models that employ sparse-computation like Mixture-of-Experts.
| 7 |
4,815 | 84,588 |
torch.jit.script IndentationError: unexpected indent
|
oncall: jit
|
### 🐛 Describe the bug
```
import torch
import torchvision
import torchvision.models as models
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

class ModifiedResNet18Model(torch.nn.Module):
    def __init__(self):
        super(ModifiedResNet18Model, self).__init__()
        model = models.resnet18(pretrained=True)
        modules = list(model.children())[:-1]
        model = nn.Sequential(*modules)
        self.features = model
        for param in self.features.parameters():
            param.requires_grad = False
        self.fc = nn.Sequential(
            nn.Dropout(),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(1024, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 4))

    def forward(self, x):
        x = self.features(x)
        x = nn.functional.adaptive_avg_pool2d(x, 1).reshape(x.shape[0], -1)
        x = self.fc(x)
        return x

model = ModifiedResNet18Model()
print(model)
model_scripted = torch.jit.script(model)  # throwing the error
opti_model = optimize_for_mobile(model_scripted)
opti_model._save_for_lite_interpreter("resnet4.ptl")
print('Create Model Success')
```
### Versions
Torch Version: 1.11.0+cu113
TorchVision Version: 0.12.0+cu113
**Error**
Traceback (most recent call last):
File "C:\Users\Visionhealth\Desktop\Experiment_3\onlyforpruning\testtorchscript.py", line 37, in <module>
model_scripted = torch.jit.script(model)
File "C:\Users\Visionhealth\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit\_script.py", line 1265, in script
return torch.jit._recursive.create_script_module(
File "C:\Users\Visionhealth\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit\_recursive.py", line 453, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
File "C:\Users\Visionhealth\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit\_check.py", line 74, in check
init_ast = ast.parse(textwrap.dedent(source_lines))
File "C:\Users\Visionhealth\AppData\Local\Programs\Python\Python310\lib\ast.py", line 50, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 1
def __init__(self):
IndentationError: unexpected indent
| 0 |
4,816 | 84,578 |
module: multiprocessing: SimpleQueue put cannot go beyond 716 items on Windows; there is no error message and the program just blocks and does not move.
|
module: multiprocessing, triaged
|
### 🐛 Describe the bug
# Run the code below: it produces no output or error after a point and simply stops making progress.
# Python 3.9.13 64-bit, Windows 10, torch 1.12.1+cu116.
# The last frame before it hangs is connection.py line 205: self._send_bytes(m[offset:offset + size])
from torch import multiprocessing as mp
ctx = mp.get_context("spawn")
free_queue = ctx.SimpleQueue()
full_queue = ctx.SimpleQueue()
for m in range(1536):
    print("put data index:", m)
    if m == 716:
        print("The program is blocked and does not move at index 716.")
    free_queue.put(m)
print("It never reaches this line.")
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home (Chinese edition)
GCC version: (GCC) 5.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1070
Nvidia driver version: 516.59
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.12.1+cu116
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] Could not collect
cc @VitalyFedyunin
| 0 |
4,817 | 84,573 |
Tensor slice copy across multiple devices fails silently
|
triaged, module: advanced indexing
|
### 🐛 Describe the bug
When trying to update certain elements between two tensors with the source on GPU memory and the target on CPU memory, the .copy_ operation fails silently. Below is an example:
```
import torch as pt
dims = (4,5,)
gpu0 = pt.device(0)
# src and tgt matrices...src on gpu0...target on CPU
src = pt.randn(*dims).to(gpu0)
tgt = pt.zeros(dims)
# mask and idxs
mask = src > 0 # sample function...mask on GPU
idxs = mask.nonzero(as_tuple=True) # idxs on GPU as well
# copy elements
tgt[idxs].copy_(src[idxs]) # does not update tgt
```
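A minimal sketch (my own, not from the report) of an indexed assignment that does update `tgt`; `tgt[idxs]` is advanced indexing and returns a copy, so calling `.copy_` on it never writes back:
```py
import torch as pt

dims = (4, 5)
gpu0 = pt.device(0)
src = pt.randn(*dims).to(gpu0)
tgt = pt.zeros(dims)

mask = src > 0
idxs = mask.nonzero(as_tuple=True)       # indices live on the GPU

idxs_cpu = tuple(i.cpu() for i in idxs)  # move indices to the target's device
tgt[idxs_cpu] = src[idxs].cpu()          # __setitem__ writes into tgt itself
print(tgt)
```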
### Versions
Collecting environment information...
PyTorch version: 1.10.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.17.3
Libc version: glibc-2.25
Python version: 3.6.9 (default, Jun 29 2022, 11:45:57) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-124-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti
Nvidia driver version: 470.141.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.10.0+cu113
[pip3] torch-scatter==2.0.5
[pip3] torchaudio==0.10.0+cu113
[pip3] torchvision==0.11.1+cu113
[conda] blas 1.0 mkl
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] numpy 1.18.1 py37h4f9e942_0
[conda] numpy-base 1.18.1 py37hde5b4d6_1
[conda] numpydoc 0.9.2 py_0
| 13 |
4,818 | 84,565 |
Tensor Subclass that doesn't require grad may wrap a Tensor subclass that requires grad
|
triaged, tensor subclass
|
From a practical perspective, this only happens today in functorch.
## The Problem
We have some composite operations that check if a Tensor requires grad. If it does, then it goes down one path; if it doesn't, then it goes down a "non-differentiable" path.
In functorch, we can have Tensor subclasses that do not require grad that wrap Tensors that do require grad. Unfortunately, the composite operations check if a Tensor requires grad and return false on the Tensor subclass, causing it to go down a non-differentiable path.
See https://github.com/pytorch/pytorch/pull/84137 for example. This problem is related to composite compliance: https://github.com/pytorch/pytorch/issues/69991
cc @ezyang
| 1 |
4,819 | 84,560 |
[optim] asgd : handling of complex params as real params (NaN vs inf)
|
module: optimizer, triaged, module: edge cases
|
### 🐛 Describe the bug
With the patch in https://github.com/pytorch/pytorch/pull/84472:
```python
import torch

def print_grad(grad):
    print("PRINT GRAD HOOK:", grad)
    return grad

a1 = torch.tensor([-0.4329+0.3561j, 0.1633+0.4901j], requires_grad=True)
a1_real = a1.real.clone().detach()
a1_imag = a1.imag.clone().detach()
a1_real.requires_grad_()
a1_imag.requires_grad_()
optimizer_constructor = torch.optim.ASGD

# Attach hook
a1.register_hook(print_grad)
a1_real.register_hook(print_grad)
a1_imag.register_hook(print_grad)

optim1 = optimizer_constructor([a1])
optim2 = optimizer_constructor([a1_real, a1_imag])

for i in range(10):
    print(f"*****{i}****")
    print(a1)
    print(a1_real, a1_imag)
    optim1.zero_grad()
    optim2.zero_grad()
    if i == 0:
        torch.testing.assert_close(a1.grad, None)
        torch.testing.assert_close(a1_real.grad, None)
        torch.testing.assert_close(a1_imag.grad, None)
    else:
        torch.testing.assert_close(a1.grad.real, a1_real.grad, equal_nan=True)
        torch.testing.assert_close(a1.grad.imag, a1_imag.grad, equal_nan=True)
    a2 = torch.complex(a1_real, a1_imag)
    torch.testing.assert_close(a1, a2)
    o = f(a1)   # f is the loss function used in the original test (not shown here)
    o2 = f(a2)
    o.backward()
    print("GRAD:", a1.grad)
    o2.backward()
    print("REAL GRAD", a1.grad.real, a1_real.grad)
    print("IMAG GRAD", a1.grad.imag, a1_imag.grad)
    torch.testing.assert_close(a1.grad.real, a1_real.grad, equal_nan=True)  # Fails here (optimizer shouldn't affect this ideally)!
    torch.testing.assert_close(a1.grad.imag, a1_imag.grad, equal_nan=True)
    optim1.step()
    optim2.step()
```
Fails with
```
AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 2 (50.0%)
Greatest absolute difference: nan at index (1,) (up to 1e-05 allowed)
Greatest relative difference: nan at index (1,) (up to 1.3e-06 allowed)
```
NOTE:
Interestingly, the value printed by the hook for `a1.grad` is different from the value at the next print,
`print("REAL GRAD", a1.grad.real, a1_real.grad)`.
cc: @albanD @soulitzer
### Versions
PR : https://github.com/pytorch/pytorch/pull/84472
cc @vincentqb @jbschlosser @albanD
| 0 |
4,820 | 84,550 |
Pytorch does not recognize GPU in WSL2
|
triaged, module: wsl
|
### 🐛 Describe the bug
After installing PyTorch in Ubuntu 20.04 under WSL2 with
`conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge`
PyTorch does not recognize the GPU:
`python3 -c 'import torch; print(torch.cuda.is_available())'`
returned False.
A similar setup worked in a Windows environment; only WSL2 has the problem.
`nvidia-smi` gave the following output:
Mon Sep 5 14:42:50 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.85.02 Driver Version: 516.94 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 On | N/A |
| 23% 33C P8 11W / 250W | 1281MiB / 11264MiB | 11% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I tried installing and uninstalling the CUDA toolkit and cuDNN following NVIDIA's instructions, which did not help either.
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] pytorch-lightning==1.4.2
[pip3] torch==1.12.1
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==0.12.1
[pip3] torchmetrics==0.6.0
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.2 pypi_0 pypi
[conda] numpy-base 1.19.2 py38h4c65ebe_1
[conda] pytorch 1.12.1 py3.8_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-lightning 1.4.2 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py38_cu116 pytorch
[conda] torchmetrics 0.6.0 pypi_0 pypi
[conda] torchvision 0.13.1 py38_cu116 pytorch
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm
| 4 |
4,821 | 84,545 |
Add nvfuser support for prims.copy_to
|
oncall: jit, triaged, open source, cla signed, release notes: jit, module: nvfuser, module: primTorch, no-stale
|
I use nvFuser's `aliasOutputToInput` here, and since it implicitly adds outputs to the fusion, I need to drop those within Python.
Now we can lower the batch_norm implementation from torch._decomp to nvprims (see `test_batch_norm_forward_nvprims`).
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @kevinstephano @jjsjann123 @ezyang @mruberry @ngimel @Lezcano @fdrocha @peterbell10
| 8 |
4,822 | 84,539 |
list of tensors can't be converted to a torch tensor while list of lists gets easily converted to a pytorch tensor
|
triaged, module: numpy
|
### 🐛 Describe the bug
```
import torch
import numpy as np
a = [torch.tensor([1,2]),torch.tensor([0.4,0.5]),torch.tensor([1,4])]
torch.tensor(np.array(a))
```
This raises a TypeError (screenshot omitted), while the following works correctly:
```
a = [[1,2],[0.4,0.5],[1,4]]
torch.tensor(np.array(a))
```

### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @mruberry @rgommers
| 0 |
4,823 | 84,538 |
OpInfo tests should compare gpu to cpu implementations
|
module: tests, triaged, topic: not user facing
|
I believe that currently we do not have any OpInfo based tests that compare the results of gpu to cpu implementations.
Seems like this would be very useful to have, and shouldn't be too hard to implement.
You could just take the intersection of cpu and gpu dtypes for each OpInfo and compare outputs.
(FYI, I came across a discrepancy between the gpu and cpu implementations of `nn.functional.grid_sample`, which made me think of this.)
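For illustration, a minimal sketch (my own, not an existing test) of the kind of comparison such a test would run, here for `nn.functional.grid_sample`:
```py
import torch
import torch.nn.functional as F

torch.manual_seed(0)
inp = torch.randn(1, 1, 8, 8)
grid = torch.rand(1, 4, 4, 2) * 2 - 1   # normalized sampling locations in [-1, 1]

cpu_out = F.grid_sample(inp, grid, align_corners=False)
gpu_out = F.grid_sample(inp.cuda(), grid.cuda(), align_corners=False)

# An OpInfo-based test would run this comparison for every op, over the
# intersection of CPU and GPU dtypes; this is the kind of check that surfaced
# the grid_sample discrepancy mentioned above.
torch.testing.assert_close(cpu_out, gpu_out.cpu())
```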
cc @mruberry
| 4 |
4,824 | 84,537 |
Minimal example for torch.optim.SparseAdam
|
module: sparse, module: docs, triaged
|
### 📚 The doc issue
Hey,
it is a bit confusing for me and my group in which cases [SparseAdam](https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html) can be used.
There is a line saying
```
Implements lazy version of Adam algorithm suitable for sparse tensors.
```
but it is unclear whether the parameters are allowed to be sparse (with sparse parameters, it throws the error
```
ValueError: Sparse params at indices [0]: SparseAdam requires dense parameter tensors
```
Could you provide more details and a minimal example?
Thank you
### Suggest a potential alternative/fix
Give a minimal example of how to use this optimizer.
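A minimal sketch of the intended usage (my understanding, not official docs): the parameters themselves are dense, and only their gradients are sparse, e.g. with `nn.Embedding(sparse=True)`:
```py
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 16, sparse=True)        # sparse=True -> sparse gradients
opt = torch.optim.SparseAdam(emb.parameters())   # parameters stay dense

idx = torch.tensor([1, 5, 7])
loss = emb(idx).sum()
loss.backward()          # emb.weight.grad is a sparse COO tensor
opt.step()               # lazily updates only the touched rows
```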
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @svekars @holly1238
| 1 |
4,825 | 84,530 |
`tensordot` not working for dtype int32 and lower when there is only 1 element in the given axis
|
triaged, module: linear algebra, actionable, bug
|
### 🐛 Describe the bug
`tensordot` seems to be failing when there is only one element in the given axis and dtype is `int32` or lower. This happens when providing explicit lists of dimensions for `a` and `b`.
```
import torch
x = torch.randint(1, 10, ([1, 2, 3]), dtype=torch.int32)
axis = 0
torch.tensordot(x, x, dims=([axis], [axis]))
```
with the following error:
`RuntimeError: expected scalar type Long but found Int`
But it works completely fine with `int64` and float dtypes. It also works fine for `int32` but only when there are multiple elements in the given axis. For instance, the following works fine:
```
x = torch.randint(1, 10, ([2, 2, 3]), dtype=torch.int32)
axis = 0
torch.tensordot(x, x, dims=([axis], [axis]))
```
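A minimal sketch (my own, not from the report) of the obvious workaround, casting to `int64` before the call:
```py
import torch

x = torch.randint(1, 10, (1, 2, 3), dtype=torch.int32)
out = torch.tensordot(x.long(), x.long(), dims=([0], [0]))  # works; cast back if int32 is needed
print(out.dtype)  # torch.int64
```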
### Versions
```
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
4,826 | 84,529 |
test_prims.py:test_nvfuser_no_args_cuda, memory leak
|
triaged, module: primTorch
|
### 🐛 Describe the bug
```py
2022-09-02T16:37:13.3046215Z ERROR [0.094s]: test_nvfuser_no_args_cuda (__main__.TestPrimsCUDA)
2022-09-02T16:37:13.3046792Z ----------------------------------------------------------------------
2022-09-02T16:37:13.3047015Z Traceback (most recent call last):
2022-09-02T16:37:13.3047567Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1940, in wrapper
2022-09-02T16:37:13.3048068Z method(*args, **kwargs)
2022-09-02T16:37:13.3048458Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1939, in wrapper
2022-09-02T16:37:13.3049178Z with policy():
2022-09-02T16:37:13.3049613Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1530, in __exit__
2022-09-02T16:37:13.3050013Z raise RuntimeError(msg)
2022-09-02T16:37:13.3050584Z RuntimeError: CUDA driver API confirmed a leak in __main__.TestPrimsCUDA.test_nvfuser_no_args_cuda! Caching allocator allocated memory was 2048 and is now reported as 2560 on device 0. CUDA driver allocated memory was 520880128 and is now 522977280.
```
### Versions
The test is added in https://github.com/pytorch/pytorch/pull/84416.
cc @ezyang @mruberry @ngimel
| 0 |
4,827 | 84,524 |
nn.Softmax should not allow default/implicit/unset dim constructor argument
|
module: nn, triaged, needs research, module: deprecation
|
### 🐛 Describe the bug
Originally discussed in https://github.com/pytorch/pytorch/issues/84290#issuecomment-1232133690; more links to related comments / issues are in the cited comment.
Implicit `dim` support in `F.softmax` / `softmin` / `log_softmax` has been deprecated for a few years, so `nn.Softmax` and friends (`nn.LogSoftmax` / `nn.Softmin`) should also remove this support to reduce possible confusion.
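For reference, a short sketch of the explicit form the constructor should require:
```py
import torch
import torch.nn as nn

x = torch.randn(2, 3)
sm_implicit = nn.Softmax()        # currently allowed; dim is guessed at call time with a warning
sm_explicit = nn.Softmax(dim=-1)  # what the constructor should require
print(sm_explicit(x).sum(dim=-1)) # rows sum to 1
```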
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @alband @jerryzh168
### Versions
N/A
| 6 |
4,828 | 84,523 |
Issue with MPS ops leads to make_grid being broken for mps-device Tensors; the whole grid is the 'first' image
|
triaged, module: mps
|
### 🐛 Describe the bug
This is an upstreaming of the following bug in torchvision that I've been asked to do due to the lack of access to MPS hardware:
https://github.com/pytorch/vision/issues/6533
### 🐛 Describe the bug
When calling make_grid on a Tensor or a List of Tensors whose device is 'mps', it returns a grid where all the Tensors are the same.
Simple example:
- create a venv and activate it
- cd into the venv directory
- install the nightlies: `pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu`
- unzip the attached zip file; it contains a python script and two images
- validate the python script and run it
```
from PIL import Image
from torchvision.utils import make_grid
import torchvision.transforms as transforms
import torch
img1 = Image.open('00035.png')
img2 = Image.open('00036.png')
transform = transforms.Compose([
transforms.PILToTensor()
])
t_img1 = transform(img1).to(torch.device("mps"), dtype=torch.float32) / 256.0
t_img2 = transform(img2).to(torch.device("mps"), dtype=torch.float32) / 256.0
grid = make_grid([t_img1, t_img2], nrow=1)
gridImage = transforms.ToPILImage()(grid.cpu());
gridImage.save('mps_grid.png')
t_img1 = transform(img1).to(torch.device("cpu"), dtype=torch.float32) / 256.0
t_img2 = transform(img2).to(torch.device("cpu"), dtype=torch.float32) / 256.0
grid = make_grid([t_img1, t_img2], nrow=1)
gridImage = transforms.ToPILImage()(grid.cpu());
gridImage.save('cpu_grid.png')
```
It generates two images: the cpu_grid image shows the two images vertically stacked, while the mps_grid image has the same image stacked vertically twice.
[make_grid_bug.zip](https://github.com/pytorch/vision/files/9473078/make_grid_bug.zip)
### Versions
PyTorch version: 1.13.0.dev20220901
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.22.4
Libc version: N/A
Python version: 3.10.4 (main, May 10 2022, 03:52:14) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-12.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.13.0.dev20220901
[pip3] torchaudio==0.13.0.dev20220901
[pip3] torchvision==0.14.0.dev20220901
[conda] Could not collect
cc @kulinseth @albanD
| 4 |
4,829 | 84,520 |
MPS backend appears to be limited to 32 bits
|
triaged, module: mps
|
### 🐛 Describe the bug
Create a large job for MPS to work on and we hit a built-in limit that bombs out with:
/AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:705: failed assertion `[MPSTemporaryNDArray initWithDevice:descriptor:] Error: product of dimension sizes > 2**31'
This only occurs on MPS; I can easily run larger jobs on CUDA.
### Versions
Not overly helpful...
Collecting environment information...
Traceback (most recent call last):
File "/Users/ec2-user/Library/Lartis/collect_env.py", line 492, in <module>
main()
File "/Users/ec2-user/Library/Lartis/collect_env.py", line 475, in main
output = get_pretty_env_info()
File "/Users/ec2-user/Library/Lartis/collect_env.py", line 470, in get_pretty_env_info
return pretty_str(get_env_info())
File "/Users/ec2-user/Library/Lartis/collect_env.py", line 319, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
File "/Users/ec2-user/Library/Lartis/collect_env.py", line 301, in get_pip_packages
out = run_with_pip(sys.executable + ' -mpip')
File "/Users/ec2-user/Library/Lartis/collect_env.py", line 289, in run_with_pip
for line in out.splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'
cc @kulinseth @albanD
| 6 |
4,830 | 84,515 |
Torch.FX work with autograd.Function
|
triaged, module: fx, fx
|
### 🚀 The feature, motivation and pitch
Dear fx guys,
Making Torch.FX work with `autograd.Function` is important when transforming or fusing custom-implemented operators; someone else ran into the same problem at https://discuss.pytorch.org/t/how-can-torch-fx-work-with-autograd-function/145922.
For now, Torch.JIT can trace `autograd.Function` like this: `%x : Float(a, b, c, strides=[a, b, c], requires_grad=1, device=cpu) = ^CustomFunction()(...)`.
Torch.FX, on the other hand, tries to call the custom function with proxied `fx.Proxy` parameters, which causes errors.
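For context, a minimal sketch of the workaround people use today (my own sketch, assuming it is acceptable to wrap the Function in a module and treat it as a leaf), so the `autograd.Function` is never called with `fx.Proxy` arguments:
```py
import torch
import torch.fx as fx

class MyFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * 2

class MyFnModule(torch.nn.Module):
    # Thin wrapper so the Function shows up as a single call_module node.
    def forward(self, x):
        return MyFn.apply(x)

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.custom = MyFnModule()

    def forward(self, x):
        return self.custom(x) + 1

class LeafTracer(fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        # Treat the wrapper as opaque, so MyFn.apply never sees fx.Proxy args.
        return isinstance(m, MyFnModule) or super().is_leaf_module(m, module_qualified_name)

net = Net()
graph = LeafTracer().trace(net)
gm = fx.GraphModule(net, graph)
print(gm.code)
```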
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @SherlockNoMad @soumith
| 0 |
4,831 | 84,510 |
[NVFuser] RuntimeError: ref_id_it != replayed_concrete_ids_.vector().end() INTERNAL ASSERT FAILED
|
triaged, module: assert failure, module: nvfuser
|
### 🐛 Describe the bug
```python
# debug_aev_nvfuser_minimal.py
import torch
torch._C._jit_set_nvfuser_single_node_mode(True)
torch._C._debug_set_autodiff_subgraph_inlining(False)
torch.manual_seed(0)
def func(x, y, z):
    return (x + y)**z

func_script = torch.jit.script(func)

x = torch.rand([3, 1, 1, 1, 1], device="cuda").requires_grad_()
y = torch.rand([1, 1, 1, 4], device="cuda")
z = torch.rand([1, 1, 1, 1], device="cuda")

for i in range(10):
    res = func(x, y, z)
    grad = torch.autograd.grad(res, x, torch.ones_like(res))[0]
    res_script = func_script(x, y, z)
    grad_script = torch.autograd.grad(res_script, x, torch.ones_like(res))[0]
    print(f"{i}: max_result_error {(res_script-res).abs().max()}, max_grad_error {(grad_script-grad).abs().max()}")
```
Run with
```
PYTORCH_NVFUSER_DISABLE=fallback PYTORCH_JIT_LOG_LEVEL=">partition:graph_fuser:>>kernel_cache" python debug_aev_nvfuser_minimal.py
```
error message:
```
[DEBUG kernel_cache.cpp:638] GraphCache constructor: 0x7fb774056cc0
[DUMP kernel_cache.cpp:639] GraphCache created for graph
[DUMP kernel_cache.cpp:639] graph(%0 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0),
[DUMP kernel_cache.cpp:639] %1 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0),
[DUMP kernel_cache.cpp:639] %2 : Float(1, 1, 1, 1, strides=[1, 1, 1, 1], requires_grad=0, device=cuda:0),
[DUMP kernel_cache.cpp:639] %3 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0)):
[DUMP kernel_cache.cpp:639] %4 : int[] = prim::Constant[value=[3, 1, 1, 1, 1]]()
[DUMP kernel_cache.cpp:639] %5 : int = prim::Constant[value=1]() # <string>:240:94
[DUMP kernel_cache.cpp:639] %6 : float = prim::Constant[value=0.]() # <string>:240:52
[DUMP kernel_cache.cpp:639] %7 : Bool(1, 1, 1, 1, strides=[1, 1, 1, 1], requires_grad=0, device=cuda:0) = aten::eq(%2, %6) # <string>:240:40
[DUMP kernel_cache.cpp:639] %8 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0) = aten::mul(%3, %2) # <string>:240:98
[DUMP kernel_cache.cpp:639] %9 : Float(1, 1, 1, 1, strides=[1, 1, 1, 1], requires_grad=0, device=cuda:0) = aten::sub(%2, %5, %5) # <string>:240:139
[DUMP kernel_cache.cpp:639] %10 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0) = aten::pow(%1, %9) # <string>:240:123
[DUMP kernel_cache.cpp:639] %11 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0) = aten::mul(%8, %10) # <string>:240:98
[DUMP kernel_cache.cpp:639] %12 : Float(3, 1, 1, 1, 4, strides=[4, 4, 4, 4, 1], requires_grad=0, device=cuda:0) = aten::where(%7, %0, %11) # <string>:240:28
[DUMP kernel_cache.cpp:639] %grad_self.20 : Float(3, 1, 1, 1, 1, strides=[1, 1, 1, 1, 1], requires_grad=0, device=cuda:0) = aten::_grad_sum_to_size(%12, %4) # <string>:13:29
[DUMP kernel_cache.cpp:639] return (%grad_self.20)
[DEBUG kernel_cache.cpp:647] running GraphCache: 0x7fb774056cc0
Traceback (most recent call last):
File "debug_aev_nvfuser_minimal.py", line 23, in <module>
grad_script = torch.autograd.grad(res_script, x, torch.ones_like(res))[0]
File "/home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/autograd/__init__.py", line 294, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: ref_id_it != replayed_concrete_ids_.vector().end() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1662103173222/work/torch/csrc/jit/codegen/cuda/lower_index_compute.cpp":724, please report a bug to PyTorch. Could not find required iter domain in reference replay: bblockIdx.y214{( 1 * ( 1 * 1 ) )}
ref_id_it != replayed_concrete_ids_.vector().end() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1662103173222/work/torch/csrc/jit/codegen/cuda/lower_index_compute.cpp":724, please report a bug to PyTorch. Could not find required iter domain in reference replay: bblockIdx.y214{( 1 * ( 1 * 1 ) )}
Exception raised from constructLoopDomains at /opt/conda/conda-bld/pytorch_1662103173222/work/torch/csrc/jit/codegen/cuda/lower_index_compute.cpp:724 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd528ba9577 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7fd528b77e2c in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::string const&) + 0x3f (0x7fd528ba749f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x2f5c6aa (0x7fd52bb566aa in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #4: <unknown function> + 0x2f5df57 (0x7fd52bb57f57 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #5: <unknown function> + 0x2f5e263 (0x7fd52bb58263 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #6: <unknown function> + 0x2f48370 (0x7fd52bb42370 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #7: <unknown function> + 0x2f4e569 (0x7fd52bb48569 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #8: <unknown function> + 0x2f4e692 (0x7fd52bb48692 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #9: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::BinaryOp const*) + 0x21 (0x7fd52bbd7301 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #10: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::IfThenElse const*) + 0xc0 (0x7fd52bbd6e90 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #11: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::ForLoop const*) + 0xdf (0x7fd52bbd815f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #12: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::IfThenElse const*) + 0xc0 (0x7fd52bbd6e90 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #13: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::ForLoop const*) + 0xdf (0x7fd52bbd815f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #14: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::ForLoop const*) + 0xdf (0x7fd52bbd815f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #15: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::ForLoop const*) + 0xdf (0x7fd52bbd815f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #16: torch::jit::fuser::cuda::IndexLowering::handle(torch::jit::fuser::cuda::kir::ForLoop const*) + 0xdf (0x7fd52bbd815f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #17: torch::jit::fuser::cuda::IndexLowering::generate(std::vector<torch::jit::fuser::cuda::Expr*, std::allocator<torch::jit::fuser::cuda::Expr*> > const&) + 0x27 (0x7fd52bbd6ca7 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #18: torch::jit::fuser::cuda::GpuLower::lower(torch::jit::fuser::cuda::Fusion*, torch::jit::fuser::cuda::DataType) + 0x13c7 (0x7fd52bc276e7 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #19: torch::jit::fuser::cuda::FusionExecutor::compileFusion(torch::jit::fuser::cuda::Fusion*, c10::ArrayRef<c10::IValue> const&, torch::jit::fuser::cuda::LaunchParams const&, torch::jit::fuser::cuda::CompileOptions) + 0xcc1 (0x7fd52baea111 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #20: torch::jit::fuser::cuda::FusionKernelRuntime::runKernelWithInput(c10::ArrayRef<c10::IValue> const&, unsigned long, torch::jit::fuser::cuda::SegmentedGroup*) + 0x591 (0x7fd52bba7421 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #21: torch::jit::fuser::cuda::FusionKernelRuntime::runWithInput(c10::ArrayRef<c10::IValue> const&, unsigned long) + 0x4ff (0x7fd52bba908f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #22: torch::jit::fuser::cuda::FusionExecutorCache::runFusionWithInputs(c10::ArrayRef<c10::IValue> const&) + 0x375 (0x7fd52bbab915 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #23: <unknown function> + 0x2fb1c8f (0x7fd52bbabc8f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #24: <unknown function> + 0x302ffa8 (0x7fd52bc29fa8 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #25: torch::jit::fuser::cuda::runCudaFusionGroup(torch::jit::Node const*, std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x43c (0x7fd52bc2a7fc in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #26: <unknown function> + 0x443fef2 (0x7fd55c8f7ef2 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #27: torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x3f (0x7fd55c8e407f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #28: <unknown function> + 0x441c61a (0x7fd55c8d461a in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #29: <unknown function> + 0x441f4f6 (0x7fd55c8d74f6 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #30: <unknown function> + 0x406051b (0x7fd55c51851b in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #31: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1638 (0x7fd55c511c28 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #32: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x698 (0x7fd55c512798 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #33: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x8b (0x7fd55c509b3b in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #34: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4f (0x7fd56ce42d4f in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #35: <unknown function> + 0xdbbf4 (0x7fd57f9d8bf4 in /home/richard/program/anaconda3/envs/torch_nightly/lib/python3.8/site-packages/torch/lib/../../../../libstdc++.so.6)
frame #36: <unknown function> + 0x8609 (0x7fd5a0383609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #37: clone + 0x43 (0x7fd5a02a8133 in /lib/x86_64-linux-gnu/libc.so.6)
```
cc @ngimel @jjsjann123 @zasdfgbnm
### Versions
the latest pytorch nightly
```
PyTorch version: 1.13.0.dev20220902
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 510.85.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] torch==1.13.0.dev20220902
[pip3] torchani==2.3.dev174+gbe932233.d20220903
[pip3] torchaudio==0.13.0.dev20220902
[pip3] torchvision==0.14.0.dev20220902
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.1 py38h6c91a56_0
[conda] numpy-base 1.23.1 py38ha15fc14_0
[conda] pytorch 1.13.0.dev20220902 py3.8_cuda11.3_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchani 2.3.dev174+gbe932233.d20220903 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220902 py38_cu113 pytorch-nightly
[conda] torchvision 0.14.0.dev20220902 py38_cu113 pytorch-nightly
```
| 6 |
4,832 | 84,495 |
functionalize: Does not compose cleanly with torch.jit.script/torch.jit.trace
|
oncall: jit, module: functionalization
|
### ๐ Describe the bug
```python
import torch
from functorch.experimental import functionalize
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc0 = torch.nn.Linear(12, 17)

    def forward(self, x):
        return self.fc0(x).relu_()
net = Net()
input = torch.randn([1, 12])
# These work.
torch.jit.script(net)
torch.jit.trace(net, input)
# Both of these fail with different errors (see below).
#torch.jit.script(functionalize(net))
#torch.jit.trace(functionalize(net), input)
```
For script the error is:
```
TypeError: module, class, method, function, traceback, frame, or code object was expected, got Net
```
([full traceback](https://gist.github.com/silvasean/6cfa2510a010c4f8b07721b96caa4509))
For trace the error is:
```
RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
```
([full traceback](https://gist.github.com/silvasean/833df0af01bdf489529713d4684e30ca))
### Versions
torch 1.13.0.dev20220830+cpu
cc @bdhirsh @ezyang @soumith
| 3 |
4,833 | 84,491 |
Embedding scale_grad_by_freq should probably shrink by sqrt(count)
|
triaged, module: embedding, oncall: pt2
|
Given the analysis and experiments in "Frequency-aware SGD for Efficient Embedding Learning with Provable Benefits" (https://openreview.net/pdf?id=ibqTBNfJmi), it would seem that it would be better for `scale_grad_by_freq` to divide by sqrt(count) rather than count.
https://github.com/pytorch/pytorch/blob/0a07488ed2c47765e337e290bd138c0e6e459cbd/aten/src/ATen/native/Embedding.cpp#L133
Also, what is the reason for initializing the counter to zeros, then incrementing by 1 [here](https://github.com/pytorch/pytorch/blob/0a07488ed2c47765e337e290bd138c0e6e459cbd/aten/src/ATen/native/Embedding.cpp#L120), rather than initializing to 1?
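For illustration only, a rough sketch of the proposed change in plain PyTorch (shapes and values are made up; the real logic lives in the C++ kernel linked above):
```python
import torch

indices = torch.tensor([0, 1, 1, 2, 2, 2])            # token ids seen in the batch
grad_rows = torch.ones(indices.numel(), 4)            # per-occurrence gradient rows
counts = torch.bincount(indices).to(grad_rows.dtype)  # occurrences of each id

current = grad_rows / counts[indices].unsqueeze(1)          # today: divide by count
proposed = grad_rows / counts[indices].sqrt().unsqueeze(1)  # proposal: divide by sqrt(count)
```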
cc @ezyang @soumith
| 1 |
4,834 | 84,489 |
For PyTorch Nightly, failure when changing MPS device to CPU after PYTORCH_ENABLE_MPS_FALLBACK occurs.
|
triaged, module: mps
|
### ๐ Describe the bug
When trying to generate text with a GPT-2 from the transformers library, I get this error:
NotImplementedError: The operator 'aten::cumsum.out' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
So I activated the environment variable (I set it in the terminal, because setting it inside the script, as in the code below, did not work), but afterwards another error occurs (posted after the code). I should mention that if I use the CPU from the start, generation works without problems.
```python
# Sample code to reproduce the problem
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import os
os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = "1"
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-base')
model = AutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-base').to(device)
inputs = tokenizer('Salut priete', return_tensors='pt').to(device)
generation = model.generate(inputs['input_ids'], max_length = len(inputs['input_ids'][0]) + 10, no_repeat_ngram_size=2, num_beams=5, early_stopping=True, num_return_sequences=1)
```
```
The error message you got, with the full traceback.
Traceback (most recent call last):
File "/Users/alexandrudima/home/Research/test.py", line 15, in <module>
generation = model.generate(inputs['input_ids'], max_length = len(inputs['input_ids'][0]) + 10, no_repeat_ngram_size=2, num_beams=5, early_stopping=True, num_return_sequences=1)[0][len(inputs['input_ids'][0]):]
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/generation_utils.py", line 1386, in generate
return self.beam_search(
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/generation_utils.py", line 2232, in beam_search
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1016, in prepare_inputs_for_generation
position_ids = attention_mask.long().cumsum(-1) - 1
NotImplementedError: The operator 'aten::cumsum.out' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
(ml) alexandrudima@Alex-MacBook Research % PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
tensor([[23640, 344, 3205]], device='mps:0') tensor([23640, 344, 3205], device='mps:0')
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:1016: UserWarning: The operator 'aten::cumsum.out' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1661929883516/work/aten/src/ATen/mps/MPSFallback.mm:11.)
position_ids = attention_mask.long().cumsum(-1) - 1
Traceback (most recent call last):
File "/Users/alexandrudima/home/Research/test.py", line 15, in <module>
generation = model.generate(inputs['input_ids'], max_length = len(inputs['input_ids'][0]) + 10, no_repeat_ngram_size=2, num_beams=5, early_stopping=True, num_return_sequences=1)[0][len(inputs['input_ids'][0]):]
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/generation_utils.py", line 1386, in generate
return self.beam_search(
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/generation_utils.py", line 2253, in beam_search
next_token_scores_processed = logits_processor(input_ids, next_token_scores)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/generation_logits_process.py", line 92, in __call__
scores = processor(input_ids, scores)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.9/site-packages/transformers/generation_logits_process.py", line 333, in __call__
scores[i, banned_tokens] = -float("inf")
RuntimeError: dst_.nbytes() >= dst_byte_offset INTERNAL ASSERT FAILED at "/Users/runner/work/_temp/anaconda/conda-bld/pytorch_1661929883516/work/aten/src/ATen/native/mps/operations/Copy.mm":184, please report a bug to PyTorch.
```
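One possible reason the in-script assignment had no effect (an assumption on my part, not something I confirmed in the source): in the snippet above the variable is set after `import torch`, and the fallback appears to be chosen when torch is first loaded, so the variable would need to be set before the import, or in the shell as was done here. A minimal sketch:
```python
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # set before torch is imported

import torch  # the fallback choice is made when torch loads
```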
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220831
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.5.8
[pip3] torch==1.13.0.dev20220831
[pip3] torchaudio==0.13.0.dev20220831
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.14.0.dev20220831
[conda] numpy 1.22.4 py39h7df2422_0 conda-forge
[conda] pytorch 1.13.0.dev20220831 py3.9_0 pytorch-nightly
[conda] pytorch-lightning 1.5.8 pyhd8ed1ab_0 conda-forge
[conda] torchaudio 0.13.0.dev20220831 py39_cpu pytorch-nightly
[conda] torchmetrics 0.9.3 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.14.0.dev20220831 py39_cpu pytorch-nightly
cc @kulinseth @albanD
| 1 |
4,835 | 84,487 |
A little improvement to torch.nn.ReflectionPad2d
|
triaged, module: padding, oncall: pt2
|
### ๐ The feature, motivation and pitch
A small improvement, so small that I don't think it warrants a separate feature post followed by a design and implementation discussion; you can probably consider this little extension directly.
torch.nn.ReflectionPad2d is limited to padding to a new size of at most double the original size.
I want to propose a new MultipleReflectionPad2d without this limitation.
```
class MultipleReflectionPad2d():
    def __init__(self, out_height, out_width):
        self.out_height = out_height
        self.out_width = out_width

    def __call__(self, image):
        height, width = image.shape[-2:]
        while height < self.out_height or width < self.out_width:
            new_height = self.out_height
            new_width = self.out_width
            if new_height > 2 * height:
                new_height = 2 * height
            if new_width > 2 * width:
                new_width = 2 * width
            padding_top = (new_height - height) // 2
            padding_bottom = new_height - height - padding_top
            padding_left = (new_width - width) // 2
            padding_right = new_width - width - padding_left
            padder = torch.nn.ReflectionPad2d((padding_left, padding_right, padding_top, padding_bottom))
            image = padder(image)
            height, width = image.shape[-2:]
        return image
```
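A minimal usage sketch of the class above (the shapes are made up for illustration):
```python
import torch

pad = MultipleReflectionPad2d(out_height=300, out_width=300)
img = torch.rand(3, 64, 90)   # much smaller than the target, so several reflection passes run
out = pad(img)
print(out.shape)              # torch.Size([3, 300, 300])
```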
### Alternatives
Maybe ReflectionPad2d could be rewritten without this limitation.
### Additional context
_No response_
cc @ezyang @soumith
| 1 |
4,836 | 84,473 |
Install LibTorch by Conan or other C++ package manager
|
module: cpp, triaged, topic: binaries
|
### ๐ The feature, motivation and pitch
At present, there is no detailed guide on how to install LibTorch or set up the environment. The official tutorial does not show where the downloaded library should be placed, or how to write a correct `CMakeLists.txt` file.
### Alternatives
There are some good C++ package managers such as [Conan](https://conan.io), which plays a role similar to `pip` and `conda` for Python. It would be easier to install and use LibTorch if it could be obtained through a C++ package manager such as [Conan](https://conan.io).
### Additional context
_No response_
cc @jbschlosser
| 0 |
4,837 | 84,468 |
[c10d] Support a public API to retrieve default process group
|
oncall: distributed, feature, triaged, pt_distributed_rampup
|
### ๐ The feature, motivation and pitch
- We should provide a public API to retrieve the default process group. As per the comment in https://github.com/pytorch/pytorch/pull/84105#issuecomment-1230519909, there is no documented way to get this, which makes it hard to use APIs that require a process group to be passed in.
Currently, we have `_get_default_group` as a private API; we should consider whether it is okay to make it public and whether we can provide BC guarantees, especially given the work around dispatchable collectives.
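For context, a sketch of what retrieving the group looks like today versus a possible public spelling (the public name below is hypothetical, not an existing function):
```python
import torch.distributed as dist

# Today: reach into a private helper (assumes init_process_group has already been called).
default_pg = dist.distributed_c10d._get_default_group()

# Hypothetical public equivalent being requested:
# default_pg = dist.get_default_group()
```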
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
4,838 | 84,445 |
Strange cuda illegal memory allocation error
|
module: cuda, triaged
|
### ๐ Describe the bug
I am trying to run the following GitHub repository.
https://github.com/neuroailab/EISEN
However, I am receiving a strange Cuda illegal memory allocation error.

It does not seem to be the code, since the repo author states that it runs fine on his machine. It would be great if someone could help verify whether this is specific to my machine. Any idea if this is an issue with my CUDA/torch version? I have tried to debug this but was not successful. I did find that the error seems to come from this line: `indices = sample_inds[-1].view(B, T * K).unsqueeze(-1).expand(-1, -1, D)` from core/utils/utils.py line 97.
When trying to print the max value of the indices variable, it also throws the cuda illegal memory allocation error. The error is gone when I remove the `.expand(-1, -1, D)`, but it leads to a dimension issue further down the code.
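For what it's worth, a standalone sketch of the suspect line with made-up shapes (`B`, `T`, `K`, `D` and the index range are assumptions; the real values come from the repo). Running it with `CUDA_LAUNCH_BLOCKING=1` and checking `indices.max()` against the size of the dimension being gathered is a common way to confirm an out-of-bounds index behind an illegal-memory-access error:
```python
import torch

B, T, K, D = 2, 3, 4, 8                                        # hypothetical shapes
sample_inds = [torch.randint(0, 5, (B, T, K), device="cuda")]  # stand-in for the repo's tensor

indices = sample_inds[-1].view(B, T * K).unsqueeze(-1).expand(-1, -1, D)
print(indices.max(), indices.shape)  # indices.max() must stay below the gathered dimension's size
```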
The following is my environment.

I am also running this on wsl2 with the following nvidia-smi output:

### Versions

cc @ngimel
| 3 |
4,839 | 84,422 |
Set up tests to run periodically and surface them on HUD
|
module: ci, triaged
|
This task is composed of the following subtasks:
- [x] #84763
- [x] #84764
Execute nightly-channel tests once a day, after the nightly build is completed.
Execute release-channel tests twice a day (morning and afternoon).
Execute test-channel tests after the RC has been cut.
- [x] #85295
- [x] #84765
- [x] #86370
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
4,840 | 84,415 |
Deepcopy of FX graph fails with nested make_fx and constant tensors
|
triaged, module: fx, fx
|
### ๐ Describe the bug
A deepcopy of `make_fx`-produced graphs is needed when using `torch.fx.passes.infra.partitioner.CapabilityBasedPartitioner`, because that partitioner modifies the graph in-place.
```python
import torch
from copy import deepcopy
from torch.fx.experimental.proxy_tensor import make_fx
class Test(torch.overrides.TorchFunctionMode):
    def __torch_function__(
        self,
        f,
        types,
        args=(),
        kwargs={},
    ):
        gm = make_fx(f)(*args)
        return f(*args, **(kwargs or {}))

a = torch.randn(3, 3)
b = torch.randn(3, 3)

def func(b):
    # note a is constant
    return a + b

# This works
gm = make_fx(func)(b)
deepcopy(gm)

with Test():
    gm = make_fx(func)(b)

# This fails
try:
    deepcopy(gm)
except Exception as e:
    print(e)
    pass

# But normal make_fx also fails now
try:
    gm = make_fx(func)(b)
    deepcopy(gm)
except Exception as e:
    print(e)
    pass
```
```py
File ~/dev/pytorch/master/torch/fx/experimental/proxy_tensor.py:383, in PythonKeyTracer.create_arg(self, a)
381 assert a.get_pyobj().constant is not None
382 return a.get_pyobj().constant
--> 383 return super().create_arg(a)
File ~/dev/pytorch/master/torch/fx/_symbolic_trace.py:344, in Tracer.create_arg(self, a)
340 setattr(self.root, qualname, a)
342 return self.create_node("get_attr", qualname, (), {})
--> 344 return super().create_arg(a)
File ~/dev/pytorch/master/torch/fx/proxy.py:166, in TracerBase.create_arg(self, a)
163 elif isinstance(a, base_types) or a is None or a is ...:
164 return a
--> 166 raise NotImplementedError(f"argument of type: {type(a)}")
NotImplementedError: argument of type: <class 'torch.storage.UntypedStorage'>
```
### Versions
Latest master.
cc @ezyang @SherlockNoMad @soumith
| 4 |
4,841 | 84,414 |
several questions about pytorch DDP
|
triaged, module: nccl, module: ddp
|
Hi there, I have several questions about the PyTorch DDP implementation.
1. Which allreduce algorithm does NCCL use in PyTorch c10d: ring allreduce or double binary tree allreduce? It would help to point out the relevant code.
2. What data type is used for the gradients transferred between processes when using AMP: float32, float16, or the same data type as the parameter tensors?
### Versions
- pytorch version is 1.10.0
- nccl version is 2.10.3
| 0 |
4,842 | 84,412 |
Odd type-casting behaviour in prims.div
|
triaged, module: type promotion, module: primTorch
|
### ๐ Describe the bug
We have that:
```python
>>> prims.div(torch.tensor([3]), torch.tensor([2.]))
```
Errors with a
> RuntimeError: Tensor with dtype torch.float32 is not the expected dtype of torch.int64!
but
```python
>>> prims.div(torch.tensor([3]), 2.)
tensor([1.])
>>> prims.div(torch.tensor([3.]), 2)
tensor([1.5000])
```
I know that scalars have some funny type casting rules. Is this one of them? Or are we missing a kind check in the scalar case in `div_meta`? I am leaning more towards the latter one, as the check for integer inputs to see whether to perform the true division or truncated division is performed just on the first input (as if it was already known that the inputs have the same type).
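For comparison, the non-prims ops promote both tensor operands as expected in eager mode:
```python
import torch

torch.div(torch.tensor([3]), torch.tensor([2.]))          # tensor([1.5000])
torch.true_divide(torch.tensor([3]), torch.tensor([2.]))  # tensor([1.5000])
```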
### Versions
master
cc @nairbv @mruberry @ezyang @ngimel
| 0 |
4,843 | 84,370 |
Installation prefix is not passed to CMake appropriately
|
needs reproduction, module: build, triaged
|
### ๐ Describe the bug
When installing PyTorch using setup.py with an installation prefix that is not where the source was downloaded, the installation prefix is not passed to the CMake call. This results in many paths to the source location being hard-coded into the .so libraries, and those paths are not necessarily available at runtime. The issue was only uncovered when trying to run in distributed mode with more than one process. Interrogating `libtorch_cuda.so` with the `strings` program shows 444 instances of paths to files in the source prefix that are not part of error messages.
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: True
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux Server release 7.9 (Maipo) (ppc64le)
GCC version: (GCC) 4.9.3
Clang version: 12.0.1
CMake version: version 3.14.5
Libc version: glibc-2.17
Python version: 3.8.13 (default, Aug 11 2022, 15:49:12) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)] (64-bit runtime)
Python platform: Linux-4.14.0.ppc64le-ppc64le-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.4.100
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.0
[pip3] torchvision==0.13.0
[conda] Could not collect
```
cc @malfet @seemethere
| 4 |
4,844 | 93,656 |
explain() has confusing explanation of graph breaks
|
triaged, oncall: pt2
|
For full repro, see: n2409035.
The output of explain:
```
Dynamo produced 12 graphswith 11 graph break and 24 ops
Break reasons:
1. generic_jump TensorVariable()
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/models/transformer/transformer_encoder.py", line 207, in forward_scriptable
x, encoder_embedding = self.forward_embedding(src_tokens, token_embeddings)
2. generic_jump TensorVariable()
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/models/transformer/transformer_encoder.py", line 127, in forward_embedding
x = embed + self.embed_positions(src_tokens)
3. generic_jump
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/modules/sinusoidal_positional_embedding.py", line 71, in forward
if self.weights is None or max_pos > self.weights.size(0):
4. generic_jump
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/models/transformer/transformer_encoder.py", line 210, in forward_scriptable
if has_pads:
5. generic_jump
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/models/transformer/transformer_encoder.py", line 225, in forward_scriptable
x, encoder_padding_mask=encoder_padding_mask if has_pads else None
6. generic_jump
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/modules/multihead_attention.py", line 506, in forward
assert (
7. generic_jump
File "/mnt/xarfuse/uid-572259/de1cb585-seed-nspid4026531836_cgpid105054292-ns-4026531840/fairseq/modules/multihead_attention.py", line 506, in forward
assert (
```
Two immediate issues:
- 11 graph breaks are quoted, but the list only has seven elements.
- Breaks 1 and 2 point to generic_jump but do not actually reference a control flow statement. They appear to be the callstack for break 3? If I fix break 3, breaks 1 and 2 go away.
Some more thoughts, which require more fleshing out to be actionable:
- I don't think a string is the right way to present this information; ideally it's queryable (so I can do something like `breaks[2].source_location` to get more context than is provided here, including a full stack trace).
- The break reason is quite opaque if you don't understand the dynamo implementation.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 5 |
4,845 | 84,353 |
torch.Size should convert all elements to ints
|
triaged, module: python frontend
|
### ๐ The feature, motivation and pitch
Currently some things are kept as is. E.g.,
```py
In [227]: type(torch.Size([np.int64(3)])[0])
Out[227]: numpy.int64
```
This is usually not an issue, but some parts of PyTorch work with `np.int64` (e.g., `torch.randn(np.int64(3))`) while others don't (https://github.com/pytorch/pytorch/issues/43094).
Either we should accept `np.int64` everywhere an `int` is expected, which https://github.com/pytorch/pytorch/issues/43094 suggests we don't want to do, or we should not allow `torch.Size` to contain them.
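In the meantime, a user-side workaround is to coerce the elements manually (plain illustration, not a proposed API):
```python
import numpy as np
import torch

size = torch.Size([np.int64(3), np.int64(4)])
clean = torch.Size(int(d) for d in size)  # force plain Python ints
print(type(clean[0]))                     # <class 'int'>
```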
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
4,846 | 84,348 |
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
|
triaged, module: half
|
### ๐ The feature, motivation and pitch
Hi, I need this implementation to make many models (ResNets, EfficientNets, ...) work for float16 inference on CPU.
When I run these models on GPU, they work properly.
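A minimal sketch of the failure as I understand the report (assuming a plain float16 convolution on CPU; moving both module and input to CUDA avoids it):
```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3).half()
x = torch.randn(1, 3, 32, 32, dtype=torch.float16)
y = conv(x)  # on CPU this raises: RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
```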
### Alternatives
_No response_
### Additional context
_No response_
| 3 |
4,847 | 84,347 |
Lack of newly raised optimizers
|
high priority, feature, module: optimizer, triaged, needs research
|
### ๐ The feature, motivation and pitch
I'm working on training SotA DNNs for research purposes. Optimizers like LAMB, Ranger, and Lookahead have shown advantages over SGD or AdamW in many tasks, including large-batch training. However, PyTorch has not added new optimizer algorithms for a long time. There are some third-party optimizer packages, but an official implementation would be helpful.
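As a rough illustration of the kind of algorithm being requested, a bare-bones sketch of the Lookahead idea wrapping an existing optimizer (simplified on purpose; not a drop-in replacement for a proper implementation):
```python
import torch

class Lookahead:
    """Keep slow weights and pull them toward the fast weights every k steps."""
    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base, self.k, self.alpha, self._step = base_optimizer, k, alpha, 0
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in base_optimizer.param_groups]

    def zero_grad(self):
        self.base.zero_grad()

    def step(self):
        self.base.step()
        self._step += 1
        if self._step % self.k == 0:
            with torch.no_grad():
                for group, slow_group in zip(self.base.param_groups, self.slow):
                    for p, q in zip(group["params"], slow_group):
                        q.add_(p - q, alpha=self.alpha)  # slow += alpha * (fast - slow)
                        p.copy_(q)                       # reset fast weights to the slow ones

# usage sketch: opt = Lookahead(torch.optim.AdamW(model.parameters()), k=5, alpha=0.5)
```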
### Alternatives
Using third-party implementations
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @vincentqb @jbschlosser @albanD @janeyx99
| 4 |
4,848 | 84,346 |
fix ATen tests that do not compile
|
triaged
|
### ๐ Describe the bug
#84345 added a few missing tests to build and execute in CI. It had to exclude a few because they were not compiling successfully.
### Versions
Trunk.
| 0 |
4,849 | 84,340 |
Floordiv is deprecated.
|
oncall: jit
|
### ๐ Describe the bug
In torch/nn/functional.py, `__floordiv__` is used in the `group_norm` function. `__floordiv__` is deprecated, and its behavior will change in a future version of PyTorch.
```
def group_norm(
    input: Tensor, num_groups: int, weight: Optional[Tensor] = None, bias: Optional[Tensor] = None, eps: float = 1e-5
) -> Tensor:
    r"""Applies Group Normalization for last certain number of dimensions.

    See :class:`~torch.nn.GroupNorm` for details.
    """
    if has_torch_function_variadic(input, weight, bias):
        return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
    _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:]))
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
```
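For reference, the deprecation warning points at `torch.div` with an explicit `rounding_mode` as the replacement for tensor `//`; a quick illustration:
```python
import torch

a, b = torch.tensor(7), torch.tensor(2)
torch.div(a, b, rounding_mode='floor')  # tensor(3); explicit floor division
torch.div(a, b, rounding_mode='trunc')  # tensor(3) here, but differs from 'floor' for negatives
```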
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.16.6
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 465.19.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchvision==0.13.1+cu113
[conda] numpy 1.23.2 pypi_0 pypi
[conda] torch 1.12.1+cu113 pypi_0 pypi
[conda] torchaudio 0.12.1+cu113 pypi_0 pypi
[conda] torchvision 0.13.1+cu113 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 1 |
4,850 | 84,336 |
torch 1.12.1 cuda 10.2 runs slower than torch 1.8.2 cuda 10.2
|
module: performance, module: cudnn, module: cuda, triaged
|
## Issue description
The same code, profiled with the torch profiler, runs much slower on torch 1.12.1 than on torch 1.8.2 with CUDA 10.2 and cuDNN 7605. Both builds were downloaded from https://download.pytorch.org/whl/cu102/torch_stable.html. Could anyone help me figure out this problem?
## Code example
``` python
with torch.autograd.profiler.profile(enabled=True, use_cuda=True, record_shapes=False, profile_memory=False) as prof:
    self.train()  # forward and backward several times
print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=100))
prof.export_chrome_trace('./resnet_profile.json')
```
1.8.2:
Name | Self CPU % | Self CPU | CPU total % | CPU total | CPU time avg | Self CUDA | Self CUDA % | CUDA total | CUDA time avg | # of Calls
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
aten::cudnn_convolution_backward_input | 6.55% | 3.019s | 6.59% | 3.040s | 637.234us | 4.607s | 9.96% | 4.607s | 965.773us | 4770
aten::cudnn_convolution_backward_weight | 5.30% | 2.444s | 5.49% | 2.530s | 527.175us | 4.488s | 9.70% | 4.488s | 934.970us | 4800
aten::index_put_impl | 6.50% | 2.999s | 17.40% | 8.025s | 10.210ms | 4.361s | 9.43% | 4.974s | 6.328ms | 786
aten::cudnn_convolution | 5.33% | 2.459s | 5.39% | 2.485s | 517.661us | 3.964s | 8.57% | 3.971s | 827.318us | 4800
aten::sort | 8.51% | 3.925s | 8.51% | 3.925s | 4.994ms | 3.795s | 8.20% | 3.795s | 4.829ms | 786
SyncBatchNorm | 1.35% | 620.404ms | 14.65% | 6.754s | 2.186ms | 2.474s | 5.35% | 5.474s | 1.771ms | 3090
SyncBatchNormBackward | 1.42% | 655.297ms | 7.90% | 3.645s | 1.179ms | 2.376s | 5.14% | 3.739s | 1.210ms | 3090
aten::copy_ | 4.00% | 1.847s | 4.00% | 1.847s | 73.266us | 1.875s | 4.05% | 1.875s | 74.375us | 25206
aten::mul_ | 3.66% | 1.689s | 3.66% | 1.689s | 42.384us | 1.639s | 3.54% | 1.639s | 41.145us | 39846
aten::add_ | 3.64% | 1.678s | 3.64% | 1.678s | 40.636us | 1.297s | 2.80% | 1.297s | 31.395us | 41302
aten::fill_ | 0.80% | 367.642ms | 0.80% | 367.642ms | 17.658us | 1.218s | 2.63% | 1.218s | 58.498us | 20820
aten::scatter_ | 0.05% | 21.818ms | 0.05% | 23.170ms | 29.478us | 823.199ms | 1.78% | 823.199ms | 1.047ms | 786
aten::mul | 1.02% | 471.278ms | 1.09% | 502.562ms | 58.986us | 811.317ms | 1.75% | 811.317ms | 95.225us | 8520
aten::div | 1.71% | 790.098ms | 1.86% | 858.417ms | 40.404us | 779.192ms | 1.68% | 779.192ms | 36.675us | 21246
Optimizer.step#AdamW.step | 2.57% | 1.185s | 13.68% | 6.309s | 210.316ms | 695.652ms | 1.50% | 5.618s | 187.258ms | 30
aten::batch_norm_gather_stats_with_counts | 0.23% | 107.335ms | 0.29% | 132.311ms | 42.819us | 560.058ms | 1.21% | 560.058ms | 181.249us | 3090
aten::nonzero | 11.03% | 5.086s | 11.14% | 5.139s | 3.037ms | 554.498ms | 1.20% | 569.915ms | 336.829us | 1692
aten::addcmul_ | 1.25% | 576.361ms | 1.83% | 842.129ms | 64.680us | 539.481ms | 1.17% | 787.051ms | 60.449us | 13020
aten::sqrt | 1.02% | 470.158ms | 1.11% | 512.980ms | 39.399us | 529.064ms | 1.14% | 529.064ms | 40.635us | 13020
aten::batch_norm_backward_elemt | 2.47% | 1.138s | 2.70% | 1.247s | 403.576us | 489.826ms | 1.06% | 507.219ms | 164.149us | 3090
aten::batch_norm_backward_reduce | 1.47% | 675.860ms | 1.91% | 879.752ms | 284.709us | 477.165ms | 1.03% | 496.336ms | 160.626us | 3090
aten::threshold_backward | 0.89% | 410.714ms | 0.92% | 426.501ms | 116.530us | 418.105ms | 0.90% | 418.105ms | 114.236us | 3660
aten::zero_ | 0.95% | 436.176ms | 1.60% | 735.996ms | 41.371us | 407.443ms | 0.88% | 1.580s | 88.804us | 17790
aten::index | 0.21% | 96.017ms | 0.90% | 417.174ms | 246.557us | 379.260ms | 0.82% | 673.901ms | 398.286us | 1692
aten::batch_norm_stats | 0.20% | 92.754ms | 0.44% | 202.313ms | 65.473us | 354.815ms | 0.77% | 430.311ms | 139.259us | 3090
aten::addcdiv_ | 0.84% | 389.394ms | 1.38% | 636.954ms | 48.921us | 338.894ms | 0.73% | 582.971ms | 44.775us | 13020
aten::batch_norm_elemt | 0.20% | 93.432ms | 0.30% | 137.139ms | 44.382us | 307.737ms | 0.67% | 315.618ms | 102.142us | 3090
nccl:all_gather | 1.08% | 498.480ms | 1.96% | 902.411ms | 286.480us | 281.268ms | 0.61% | 586.863ms | 186.306us | 3150
aten::sum | 0.73% | 335.813ms | 0.84% | 386.259ms | 46.448us | 259.049ms | 0.56% | 279.370ms | 33.594us | 8316
aten::addcmul | 0.58% | 265.768ms | 0.58% | 265.768ms | 20.412us | 247.570ms | 0.54% | 247.570ms | 19.015us | 13020
aten::add | 0.41% | 191.072ms | 0.41% | 191.072ms | 37.421us | 245.389ms | 0.53% | 245.389ms | 48.059us | 5106
aten::addcdiv | 0.54% | 247.560ms | 0.54% | 247.560ms | 19.014us | 244.077ms | 0.53% | 244.077ms | 18.746us | 13020
aten::select | 0.58% | 267.881ms | 0.61% | 282.312ms | 25.938us | 221.090ms | 0.48% | 221.090ms | 20.313us | 10884
aten::eq | 0.43% | 196.792ms | 0.69% | 317.035ms | 30.900us | 216.781ms | 0.47% | 349.749ms | 34.089us | 10260
aten::threshold_ | 0.08% | 37.969ms | 0.08% | 37.969ms | 13.322us | 207.245ms | 0.45% | 207.245ms | 72.718us | 2850
aten::narrow | 0.39% | 181.472ms | 0.82% | 379.777ms | 15.197us | 162.169ms | 0.35% | 162.169ms | 6.489us | 24990
aten::_local_scalar_dense | 9.53% | 4.397s | 9.53% | 4.397s | 957.894us | 161.980ms | 0.35% | 161.980ms | 35.290us | 4590
aten::_cat | 0.43% | 199.887ms | 0.56% | 259.903ms | 27.243us | 161.344ms | 0.35% | 161.902ms | 16.971us | 9540
MulBackward0 | 0.49% | 227.946ms | 1.17% | 538.075ms | 232.329us | 149.274ms | 0.32% | 516.302ms | 222.928us | 2316
aten::reshape | 0.81% | 371.633ms | 0.95% | 436.767ms | 16.816us | 146.019ms | 0.32% | 243.907ms | 9.390us | 25974
aten::sub | 0.15% | 70.532ms | 0.17% | 79.130ms | 33.558us | 143.811ms | 0.31% | 143.811ms | 60.989us | 2358
aten::cat | 0.55% | 252.875ms | 1.11% | 512.778ms | 53.750us | 134.633ms | 0.29% | 296.535ms | 31.083us | 9540
aten::split | 0.45% | 205.602ms | 0.87% | 400.623ms | 64.826us | 120.865ms | 0.26% | 233.907ms | 37.849us | 6180
aten::_softmax_backward_data | 0.00% | 1.899ms | 0.01% | 3.120ms | 52.002us | 118.390ms | 0.26% | 172.045ms | 2.867ms | 60
aten::_log_softmax_backward_data | 0.00% | 1.489ms | 0.00% | 1.755ms | 29.254us | 117.674ms | 0.25% | 117.674ms | 1.961ms | 60
aten::to | 0.19% | 87.685ms | 2.73% | 1.260s | 110.987us | 117.138ms | 0.25% | 500.381ms | 44.075us | 11353
aten::_log_softmax | 0.00% | 2.072ms | 0.01% | 2.406ms | 40.108us | 110.746ms | 0.24% | 110.746ms | 1.846ms | 60
aten::cudnn_convolution_transpose | 0.11% | 50.498ms | 0.11% | 50.749ms | 563.874us | 107.779ms | 0.23% | 107.779ms | 1.198ms | 90
aten::_softmax | 0.01% | 6.843ms | 0.02% | 7.252ms | 120.867us | 94.688ms | 0.20% | 94.688ms | 1.578ms | 60
aten::remainder | 0.04% | 17.165ms | 0.04% | 20.687ms | 26.320us | 85.431ms | 0.18% | 85.431ms | 108.691us | 786
aten::rsub | 0.11% | 49.156ms | 0.12% | 53.415ms | 33.343us | 83.047ms | 0.18% | 83.047ms | 51.839us | 1602
aten::stack | 0.22% | 101.318ms | 0.54% | 248.895ms | 78.269us | 78.597ms | 0.17% | 158.489ms | 49.839us | 3180
aten::floor_divide | 0.03% | 11.563ms | 0.03% | 11.563ms | 14.711us | 72.163ms | 0.16% | 72.163ms | 91.811us | 786
aten::_cumsum | 0.08% | 37.536ms | 0.10% | 45.497ms | 28.942us | 71.103ms | 0.15% | 71.103ms | 45.231us | 1572
aten::slice_backward | 0.17% | 77.019ms | 0.30% | 136.770ms | 174.007us | 69.269ms | 0.15% | 883.591ms | 1.124ms | 786
torch::autograd::AccumulateGrad | 0.67% | 311.172ms | 2.34% | 1.078s | 82.773us | 67.385ms | 0.15% | 344.809ms | 26.483us | 13020
aten::cudnn_convolution_backward | 1.03% | 473.016ms | 13.11% | 6.047s | 1.260ms | 62.080ms | 0.13% | 9.170s | 1.910ms | 4800
Optimizer.zero_grad#AdamW.zero_grad | 0.24% | 112.693ms | 1.42% | 654.674ms | 21.822ms | 60.239ms | 0.13% | 632.327ms | 21.078ms | 30
aten::relu_ | 0.19% | 88.351ms | 0.27% | 126.320ms | 44.323us | 57.718ms | 0.12% | 264.962ms | 92.969us | 2850
aten::convolution | 0.14% | 66.392ms | 5.96% | 2.750s | 562.345us | 57.583ms | 0.12% | 4.337s | 887.004us | 4890
aten::cudnn_convolution_transpose_backward_input | 0.09% | 40.233ms | 0.09% | 40.669ms | 451.881us | 56.885ms | 0.12% | 56.885ms | 632.050us | 90
aten::is_nonzero | 0.12% | 55.350ms | 9.73% | 4.486s | 1.046ms | 55.870ms | 0.12% | 258.608ms | 60.281us | 4290
aten::abs | 0.09% | 43.417ms | 0.13% | 59.488ms | 37.842us | 54.711ms | 0.12% | 87.001ms | 55.344us | 1572
aten::dot | 0.10% | 45.942ms | 0.10% | 48.029ms | 61.106us | 54.031ms | 0.12% | 54.031ms | 68.741us | 786
aten::cudnn_convolution_transpose_backward_weight | 0.03% | 13.205ms | 0.03% | 14.851ms | 165.013us | 53.427ms | 0.12% | 53.427ms | 593.633us | 90
aten::_convolution | 0.17% | 77.291ms | 5.82% | 2.683s | 548.768us | 51.101ms | 0.11% | 4.280s | 875.229us | 4890
aten::item | 0.10% | 44.525ms | 9.63% | 4.441s | 967.594us | 46.883ms | 0.10% | 208.863ms | 45.504us | 4590
aten::mean | 0.06% | 27.828ms | 0.08% | 36.203ms | 40.226us | 45.969ms | 0.10% | 45.969ms | 51.077us | 900
aten::neg | 0.07% | 30.641ms | 0.10% | 47.496ms | 27.110us | 42.645ms | 0.09% | 76.275ms | 43.536us | 1752
aten::sgn | 0.09% | 42.741ms | 0.15% | 69.469ms | 44.192us | 41.335ms | 0.09% | 73.811ms | 46.954us | 1572
aten::cumsum | 0.12% | 54.457ms | 0.23% | 105.924ms | 67.382us | 40.295ms | 0.09% | 115.329ms | 73.364us | 1572
aten::zeros_like | 0.02% | 10.588ms | 0.11% | 50.934ms | 58.143us | 38.724ms | 0.08% | 54.422ms | 62.125us | 876
aten::nll_loss2d_backward | 0.01% | 3.131ms | 0.01% | 3.131ms | 52.189us | 37.951ms | 0.08% | 37.951ms | 632.521us | 60
aten::conv2d | 0.09% | 42.953ms | 5.93% | 2.736s | 569.969us | 35.841ms | 0.08% | 4.234s | 882.033us | 4800
aten::full | 0.10% | 45.472ms | 0.26% | 120.943ms | 39.140us | 32.619ms | 0.07% | 76.475ms | 24.749us | 3090
aten::sigmoid | 0.07% | 31.503ms | 0.08% | 35.420ms | 42.167us | 31.874ms | 0.07% | 31.874ms | 37.945us | 840
enumerate(DataLoader)#_MultiProcessingDataLoaderIter... | 0.05% | 25.112ms | 0.05% | 25.152ms | 838.390us | 25.075ms | 0.05% | 25.075ms | 835.821us | 30
CudnnConvolutionBackward | 0.29% | 132.425ms | 13.40% | 6.179s | 1.287ms | 24.718ms | 0.05% | 9.195s | 1.916ms | 4800
aten::relu | 0.04% | 17.842ms | 0.07% | 31.890ms | 39.371us | 23.776ms | 0.05% | 41.471ms | 51.198us | 810
aten::sigmoid_backward | 0.10% | 47.457ms | 0.11% | 51.341ms | 61.121us | 21.914ms | 0.05% | 21.914ms | 26.088us | 840
aten::zeros | 0.11% | 51.560ms | 0.53% | 246.205ms | 63.718us | 21.282ms | 0.05% | 1.012s | 261.991us | 3864
aten::clone | 0.04% | 19.877ms | 0.09% | 41.600ms | 43.064us | 20.823ms | 0.05% | 252.168ms | 261.043us | 966
aten::nll_loss2d_forward | 0.01% | 3.632ms | 0.01% | 3.632ms | 60.531us | 20.672ms | 0.04% | 20.672ms | 344.542us | 60
aten::select_backward | 0.07% | 32.220ms | 0.45% | 208.351ms | 147.140us | 18.403ms | 0.04% | 672.332ms | 474.811us | 1416
ReluBackward1 | 0.09% | 41.356ms | 0.92% | 425.500ms | 149.298us | 17.945ms | 0.04% | 418.488ms | 146.838us | 2850
aten::threshold | 0.02% | 11.310ms | 0.03% | 14.048ms | 17.344us | 17.695ms | 0.04% | 17.695ms | 21.845us | 810
aten::gt | 0.02% | 7.937ms | 0.04% | 16.825ms | 46.735us | 15.927ms | 0.03% | 32.166ms | 89.350us | 360
aten::set_ | 0.06% | 27.912ms | 0.06% | 27.912ms | 16.496us | 15.417ms | 0.03% | 15.417ms | 9.112us | 1692
aten::conj | 0.07% | 31.975ms | 0.07% | 31.975ms | 6.859us | 12.442ms | 0.03% | 12.442ms | 2.669us | 4662
AddBackward0 | 0.05% | 22.301ms | 0.05% | 22.301ms | 5.709us | 12.138ms | 0.03% | 12.138ms | 3.108us | 3906
ViewBackward | 0.07% | 34.431ms | 0.14% | 65.211ms | 33.441us | 10.773ms | 0.02% | 73.920ms | 37.908us | 1950
nccl:all_reduce | 0.20% | 92.087ms | 0.20% | 92.087ms | 29.802us | 10.416ms | 0.02% | 10.416ms | 3.371us | 3090
aten::upsample_nearest2d | 0.02% | 10.356ms | 0.02% | 10.379ms | 345.975us | 10.110ms | 0.02% | 10.110ms | 336.992us | 30
aten::upsample_bilinear2d | 0.00% | 906.354us | 0.00% | 1.629ms | 54.315us | 9.278ms | 0.02% | 9.278ms | 309.283us | 30
aten::value_selecting_reduction_backward | 0.05% | 22.869ms | 0.28% | 127.428ms | 162.122us | 7.721ms | 0.02% | 910.592ms | 1.159ms | 786
aten::flatten | 0.03% | 11.973ms | 0.12% | 53.415ms | 52.061us | 7.689ms | 0.02% | 161.333ms | 157.245us | 1026
SelectBackward | 0.11% | 50.893ms | 0.56% | 259.243ms | 183.082us | 7.288ms | 0.02% | 679.620ms | 479.958us | 1416
aten::upsample_bilinear2d_backward | 0.00% | 928.227us | 0.00% | 1.775ms | 59.166us | 7.160ms | 0.02% | 8.146ms | 271.533us | 30
aten::contiguous | 0.01% | 5.880ms | 0.03% | 12.937ms | 107.811us | 6.919ms | 0.01% | 116.119ms | 967.656us | 120
MeanBackward1 | 0.16% | 74.091ms | 0.35% | 162.211ms | 193.109us | 6.713ms | 0.01% | 108.535ms | 129.209us | 840
1.12.1:
Name | Self CPU % | Self CPU | CPU total % | CPU total | CPU time avg | Self CUDA | Self CUDA % | CUDA total | CUDA time avg | # of Calls
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
aten::convolution_backward | 4.93% | 5.875s | 6.82% | 8.119s | 1.660ms | 75.444s | 54.86% | 75.752s | 15.491ms | 4890
aten::cudnn_convolution | 1.68% | 2.004s | 2.20% | 2.621s | 545.968us | 25.200s | 18.32% | 25.342s | 5.280ms | 4800
record_param_comms | 0.43% | 507.603ms | 0.49% | 582.620ms | 60.494us | 5.162s | 3.75% | 5.165s | 536.287us | 9631
aten::index_put_impl | 0.13% | 157.533ms | 5.83% | 6.942s | 8.656ms | 3.749s | 2.73% | 4.449s | 5.548ms | 802
aten::nonzero | 0.91% | 1.089s | 27.39% | 32.617s | 2.967ms | 2.670s | 1.94% | 3.299s | 300.036us | 10994
aten::copy_ | 0.23% | 276.372ms | 0.79% | 940.697ms | 19.377us | 1.949s | 1.42% | 1.949s | 40.140us | 48548
aten::add | 0.09% | 102.464ms | 0.17% | 203.353ms | 37.982us | 1.662s | 1.21% | 1.662s | 310.495us | 5354
Optimizer.step#AdamW.step | 2.04% | 2.428s | 4.08% | 4.859s | 161.973ms | 1.587s | 1.15% | 4.869s | 162.306ms | 30
aten::sort | 0.07% | 86.208ms | 0.22% | 261.301ms | 325.812us | 1.285s | 0.93% | 1.320s | 1.646ms | 802
aten::index | 0.61% | 721.432ms | 22.94% | 27.316s | 2.485ms | 1.232s | 0.90% | 4.745s | 431.631us | 10994
aten::mul | 0.23% | 272.841ms | 0.35% | 412.120ms | 19.882us | 968.027ms | 0.70% | 968.027ms | 46.701us | 20728
aten::add_ | 0.46% | 545.468ms | 1.09% | 1.303s | 22.684us | 967.276ms | 0.70% | 1.829s | 31.849us | 57428
aten::scatter_ | 0.03% | 31.895ms | 0.03% | 39.452ms | 49.192us | 876.742ms | 0.64% | 881.933ms | 1.100ms | 802
aten::fill_ | 0.15% | 175.846ms | 0.28% | 329.190ms | 11.772us | 810.262ms | 0.59% | 810.262ms | 28.975us | 27964
aten::as_strided | 0.08% | 96.996ms | 0.08% | 96.996ms | 1.051us | 570.728ms | 0.41% | 570.728ms | 6.184us | 92286
aten::empty | 0.52% | 615.215ms | 0.64% | 766.584ms | 8.819us | 498.067ms | 0.36% | 498.067ms | 5.730us | 86926
aten::empty_strided | 0.12% | 140.033ms | 0.31% | 373.503ms | 15.024us | 494.714ms | 0.36% | 494.714ms | 19.900us | 24860
aten::mul_ | 0.24% | 280.976ms | 0.37% | 435.157ms | 10.917us | 491.003ms | 0.36% | 491.003ms | 12.318us | 39862
aten::_to_copy | 0.40% | 471.375ms | 0.79% | 945.325ms | 53.865us | 482.967ms | 0.35% | 1.273s | 72.531us | 17550
SyncBatchNorm | 1.34% | 1.598s | 25.56% | 30.436s | 9.850ms | 458.728ms | 0.33% | 8.360s | 2.706ms | 3090
aten::batch_norm_backward_elemt | 0.06% | 75.280ms | 0.20% | 232.941ms | 75.385us | 457.397ms | 0.33% | 487.502ms | 157.768us | 3090
aten::batch_norm_backward_reduce | 13.71% | 16.329s | 14.20% | 16.910s | 5.472ms | 435.912ms | 0.32% | 601.949ms | 194.806us | 3090
aten::threshold_backward | 0.07% | 85.410ms | 3.60% | 4.284s | 1.171ms | 409.248ms | 0.30% | 409.248ms | 111.816us | 3660
DistributedDataParallel.forward | 0.89% | 1.055s | 30.52% | 36.340s | 1.211s | 394.082ms | 0.29% | 36.663s | 1.222s | 30
aten::slice | 0.70% | 829.054ms | 0.72% | 855.069ms | 29.424us | 375.569ms | 0.27% | 510.767ms | 17.576us | 29060
aten::to | 0.21% | 252.151ms | 1.01% | 1.197s | 51.696us | 344.848ms | 0.25% | 1.618s | 69.840us | 23164
aten::div | 0.16% | 192.152ms | 0.22% | 260.268ms | 17.257us | 322.762ms | 0.23% | 322.762ms | 21.400us | 15082
aten::clamp_min_ | 0.03% | 34.953ms | 0.05% | 55.179ms | 19.361us | 316.585ms | 0.23% | 316.585ms | 111.082us | 2850
aten::batch_norm_elemt | 0.06% | 76.202ms | 0.27% | 326.538ms | 105.676us | 315.652ms | 0.23% | 556.380ms | 180.058us | 3090
aten::batch_norm_stats | 0.16% | 186.603ms | 0.35% | 421.129ms | 136.288us | 310.332ms | 0.23% | 437.433ms | 141.564us | 3090
aten::sum | 0.84% | 1.005s | 0.98% | 1.169s | 215.682us | 278.945ms | 0.20% | 317.240ms | 58.510us | 5422
aten::sqrt | 0.12% | 141.242ms | 0.16% | 195.275ms | 14.998us | 255.718ms | 0.19% | 255.718ms | 19.640us | 13020
aten::reshape | 0.18% | 212.281ms | 0.30% | 357.446ms | 19.893us | 252.506ms | 0.18% | 523.568ms | 29.139us | 17968
aten::t | 0.12% | 143.496ms | 0.26% | 305.038ms | 27.746us | 237.969ms | 0.17% | 368.440ms | 33.513us | 10994
Optimizer.zero_grad#AdamW.zero_grad | 0.25% | 292.486ms | 0.44% | 528.706ms | 17.624ms | 231.372ms | 0.17% | 510.154ms | 17.005ms | 30
aten::resize_ | 0.06% | 68.324ms | 0.06% | 76.434ms | 5.954us | 230.068ms | 0.17% | 230.068ms | 17.921us | 12838
aten::addcdiv_ | 0.09% | 104.086ms | 0.13% | 150.716ms | 11.576us | 227.215ms | 0.17% | 227.215ms | 17.451us | 13020
aten::zero_ | 0.27% | 319.079ms | 0.51% | 605.392ms | 24.309us | 220.124ms | 0.16% | 1.004s | 40.311us | 24904
aten::item | 0.17% | 207.628ms | 2.28% | 2.710s | 188.989us | 211.232ms | 0.15% | 405.139ms | 28.252us | 14340
autograd::engine::evaluate_function: torch::autograd... | 0.87% | 1.033s | 1.64% | 1.956s | 150.192us | 194.219ms | 0.14% | 705.359ms | 54.175us | 13020
aten::_local_scalar_dense | 0.01% | 16.901ms | 2.10% | 2.502s | 174.510us | 193.907ms | 0.14% | 193.907ms | 13.522us | 14340
aten::addcmul_ | 0.08% | 94.889ms | 0.12% | 141.018ms | 10.831us | 191.320ms | 0.14% | 191.320ms | 14.694us | 13020
aten::batch_norm_gather_stats_with_counts | 0.10% | 117.458ms | 0.14% | 164.123ms | 53.114us | 186.357ms | 0.14% | 248.674ms | 80.477us | 3090
aten::narrow | 0.23% | 268.376ms | 0.89% | 1.061s | 42.540us | 185.883ms | 0.14% | 644.222ms | 25.841us | 24930
aten::sub | 0.04% | 45.214ms | 0.05% | 64.893ms | 16.063us | 179.324ms | 0.13% | 179.324ms | 44.387us | 4040
aten::mean | 0.04% | 41.849ms | 0.09% | 112.486ms | 124.984us | 170.373ms | 0.12% | 170.901ms | 189.890us | 900
torch.distributed.ddp.reducer::copy_bucket_to_grad | 3.06% | 3.646s | 3.25% | 3.875s | 297.603us | 144.493ms | 0.11% | 298.933ms | 22.960us | 13020
aten::view | 0.03% | 40.314ms | 0.03% | 40.314ms | 1.861us | 130.085ms | 0.09% | 130.085ms | 6.006us | 21660
aten::rsub | 0.12% | 137.232ms | 0.13% | 160.529ms | 98.243us | 126.236ms | 0.09% | 191.576ms | 117.244us | 1634
aten::empty_like | 0.25% | 295.137ms | 0.64% | 758.590ms | 61.315us | 123.312ms | 0.09% | 504.129ms | 40.748us | 12372
aten::flatten | 0.11% | 125.309ms | 0.15% | 183.384ms | 175.992us | 119.771ms | 0.09% | 304.967ms | 292.675us | 1042
aten::select | 0.25% | 297.481ms | 0.26% | 309.050ms | 22.015us | 119.229ms | 0.09% | 321.807ms | 22.924us | 14038
aten::_softmax_backward_data | 0.00% | 1.662ms | 0.00% | 3.234ms | 53.900us | 118.384ms | 0.09% | 173.740ms | 2.896ms | 60
aten::cat | 0.12% | 142.912ms | 0.18% | 215.138ms | 33.355us | 118.320ms | 0.09% | 121.942ms | 18.906us | 6450
aten::_log_softmax_backward_data | 0.00% | 851.000us | 0.00% | 1.358ms | 22.633us | 115.844ms | 0.08% | 115.844ms | 1.931ms | 60
aten::split | 0.24% | 281.333ms | 0.66% | 790.589ms | 127.927us | 111.768ms | 0.08% | 529.993ms | 85.759us | 6180
aten::_log_softmax | 0.00% | 1.441ms | 0.01% | 7.485ms | 124.750us | 109.909ms | 0.08% | 109.909ms | 1.832ms | 60
aten::remainder | 0.03% | 35.270ms | 0.03% | 41.520ms | 51.771us | 95.538ms | 0.07% | 95.538ms | 119.125us | 802
aten::_softmax | 0.00% | 1.275ms | 0.01% | 6.545ms | 109.083us | 93.144ms | 0.07% | 93.144ms | 1.552ms | 60
aten::zeros | 0.22% | 260.810ms | 0.54% | 639.160ms | 68.905us | 91.340ms | 0.07% | 748.781ms | 80.722us | 9276
aten::cumsum | 0.06% | 67.627ms | 0.07% | 83.266ms | 51.911us | 89.693ms | 0.07% | 94.311ms | 58.797us | 1604
aten::eq | 0.02% | 29.536ms | 0.03% | 40.440ms | 19.824us | 84.234ms | 0.06% | 84.234ms | 41.291us | 2040
aten::arange | 0.03% | 31.841ms | 0.06% | 69.181ms | 43.130us | 83.917ms | 0.06% | 168.721ms | 105.188us | 1604
torch::autograd::AccumulateGrad | 0.43% | 514.044ms | 0.60% | 717.056ms | 55.073us | 83.330ms | 0.06% | 299.909ms | 23.034us | 13020
aten::unflatten_dense_tensors | 0.17% | 203.059ms | 0.64% | 759.802ms | 12.663ms | 80.774ms | 0.06% | 362.612ms | 6.044ms | 60
SyncBatchNormBackward | 1.65% | 1.963s | 16.85% | 20.071s | 6.496ms | 80.482ms | 0.06% | 3.823s | 1.237ms | 3090
aten::_reshape_alias | 0.03% | 35.457ms | 0.03% | 35.457ms | 1.964us | 77.020ms | 0.06% | 77.020ms | 4.265us | 18058
aten::div_ | 0.02% | 22.501ms | 0.03% | 30.896ms | 35.842us | 74.688ms | 0.05% | 75.859ms | 88.003us | 862
aten::convolution | 0.08% | 94.326ms | 2.54% | 3.030s | 619.712us | 72.134ms | 0.05% | 25.755s | 5.267ms | 4890
aten::_convolution | 0.19% | 220.939ms | 2.47% | 2.936s | 600.423us | 68.615ms | 0.05% | 25.683s | 5.252ms | 4890
aten::transpose | 0.12% | 147.657ms | 0.14% | 161.542ms | 14.694us | 68.534ms | 0.05% | 130.471ms | 11.867us | 10994
aten::ge | 0.07% | 81.909ms | 0.10% | 116.833ms | 36.397us | 64.953ms | 0.05% | 64.953ms | 20.235us | 3210
aten::cudnn_convolution_transpose | 0.01% | 11.104ms | 0.03% | 34.225ms | 380.278us | 58.693ms | 0.04% | 67.167ms | 746.300us | 90
aten::squeeze | 0.07% | 81.836ms | 0.07% | 86.066ms | 18.043us | 56.443ms | 0.04% | 74.960ms | 15.715us | 4770
aten::full | 0.08% | 99.029ms | 0.13% | 153.570ms | 49.699us | 55.243ms | 0.04% | 90.889ms | 29.414us | 3090
aten::abs | 0.03% | 41.404ms | 0.06% | 75.755ms | 47.229us | 54.953ms | 0.04% | 99.147ms | 61.812us | 1604
aten::clone | 0.62% | 743.305ms | 0.95% | 1.134s | 307.879us | 53.085ms | 0.04% | 642.364ms | 174.461us | 3682
aten::dot | 0.02% | 28.681ms | 0.04% | 49.688ms | 61.955us | 46.876ms | 0.03% | 49.187ms | 61.330us | 802
aten::conv2d | 0.06% | 68.264ms | 2.56% | 3.051s | 635.609us | 41.811ms | 0.03% | 25.709s | 5.356ms | 4800
aten::neg | 0.01% | 12.326ms | 0.01% | 17.399ms | 19.506us | 40.134ms | 0.03% | 40.134ms | 44.993us | 892
aten::flatten_dense_tensors | 0.08% | 96.447ms | 0.10% | 115.561ms | 1.926ms | 39.576ms | 0.03% | 90.748ms | 1.512ms | 60
aten::set_ | 0.12% | 141.582ms | 0.12% | 141.582ms | 12.878us | 37.054ms | 0.03% | 37.054ms | 3.370us | 10994
aten::sgn | 0.01% | 10.734ms | 0.01% | 15.762ms | 19.653us | 36.918ms | 0.03% | 36.918ms | 46.032us | 802
autograd::engine::evaluate_function: ConvolutionBack... | 0.14% | 163.685ms | 7.05% | 8.397s | 1.717ms | 36.218ms | 0.03% | 75.955s | 15.533ms | 4890
aten::sigmoid | 0.02% | 19.960ms | 0.02% | 26.123ms | 31.099us | 32.014ms | 0.02% | 32.014ms | 38.112us | 840
ConvolutionBackward0 | 0.07% | 82.317ms | 6.89% | 8.201s | 1.677ms | 30.754ms | 0.02% | 75.783s | 15.498ms | 4890
aten::relu_ | 0.13% | 153.385ms | 0.18% | 208.564ms | 73.180us | 26.268ms | 0.02% | 342.853ms | 120.299us | 2850
ReluBackward0 | 0.14% | 163.631ms | 3.74% | 4.448s | 1.215ms | 26.080ms | 0.02% | 435.328ms | 118.942us | 3660
autograd::engine::evaluate_function: ReluBackward0 | 1.26% | 1.500s | 4.99% | 5.948s | 1.625ms | 25.104ms | 0.02% | 460.432ms | 125.801us | 3660
aten::contiguous | 0.03% | 34.204ms | 0.84% | 1.001s | 370.876us | 23.087ms | 0.02% | 288.258ms | 106.762us | 2700
autograd::engine::evaluate_function: MulBackward0 | 0.22% | 257.082ms | 1.19% | 1.418s | 607.927us | 22.644ms | 0.02% | 517.881ms | 222.076us | 2332
aten::clamp_min | 0.01% | 11.062ms | 0.01% | 15.832ms | 19.546us | 21.387ms | 0.02% | 21.387ms | 26.404us | 810
MulBackward0 | 0.09% | 109.542ms | 0.17% | 204.256ms | 87.588us | 20.449ms | 0.01% | 378.814ms | 162.442us | 2332
autograd::engine::evaluate_function: SyncBatchNormBa... | 18.30% | 21.798s | 35.16% | 41.869s | 13.550ms | 20.067ms | 0.01% | 3.843s | 1.244ms | 3090
aten::nll_loss2d_backward | 0.00% | 3.570ms | 0.01% | 7.052ms | 117.533us | 19.591ms | 0.01% | 38.506ms | 641.767us | 60
AsStridedBackward1 | 0.06% | 67.687ms | 0.19% | 220.483ms | 262.480us | 19.406ms | 0.01% | 90.206ms | 107.388us | 840
aten::select_backward | 0.13% | 158.524ms | 0.27% | 317.500ms | 221.718us | 18.842ms | 0.01% | 339.262ms | 236.915us | 1432
aten::nll_loss2d_forward | 0.00% | 3.116ms | 0.00% | 4.571ms | 76.183us | 18.613ms | 0.01% | 19.571ms | 326.183us | 60
aten::new_zeros | 0.04% | 51.710ms | 0.12% | 142.151ms | 86.572us | 17.260ms | 0.01% | 96.207ms | 58.591us | 1642
autograd::engine::evaluate_function: AddBackward0 | 0.19% | 225.126ms | 0.19% | 226.791ms | 111.610us | 16.963ms | 0.01% | 25.421ms | 12.510us | 2032
## System Info
PyTorch version: 1.8.2+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.27
Python version: 3.8.6 (default, Oct 6 2020, 03:22:36) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-45-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: Tesla V100-PCIE-32GB
GPU 1: Tesla V100-PCIE-32GB
GPU 2: Tesla V100-PCIE-32GB
GPU 3: Tesla V100-PCIE-32GB
Nvidia driver version: 450.36.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.0
[pip3] torch==1.8.2+cu102
[pip3] torch-scatter==2.0.5
[pip3] torchvision==0.9.2+cu102
[conda] Could not collect
cc @VitalyFedyunin @ngimel @csarofeen @ptrblck @xwang233
| 13 |
4,851 | 84,335 |
Should enable skipped tests for `to` OpInfo
|
triaged, module: primTorch
|
### ๐ Describe the bug
Compare this meta implementation to the one registered in native_functions.yaml. The meta implementation is missing the `non_blocking` argument.
https://github.com/pytorch/pytorch/blob/6a3ecda5a25025d48bbc5f0215db8c338745ef79/torch/_meta_registrations.py#L204-L212
PR that adds OpInfo for `torch.Tensor.to` fails with
```py
ERROR [0.005s]: test_make_fx_symbolic_exhaustive_to_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 378, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 815, in test_wrapper
return test(*args, **kwargs)
File "test_proxy_tensor.py", line 1263, in test_make_fx_symbolic_exhaustive
_test_make_fx_helper(self, device, dtype, op, "symbolic")
File "test_proxy_tensor.py", line 1233, in _test_make_fx_helper
new_f = make_fx(f, tracing_mode=tracing_mode)(args, kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py", line 629, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/opt/conda/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py", line 329, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/opt/conda/lib/python3.7/site-packages/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/opt/conda/lib/python3.7/site-packages/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/opt/conda/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py", line 343, in wrapped
out = f(*tensors)
File "test_proxy_tensor.py", line 1225, in f
return op.op(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_methods_invocations.py", line 11593, in <lambda>
op=lambda x, *args, **kwargs: x.to(*args, **kwargs),
File "/opt/conda/lib/python3.7/site-packages/torch/utils/_python_dispatch.py", line 74, in wrapped
return f(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py", line 362, in __torch_dispatch__
return self.inner_torch_dispatch(func_overload, types, args, kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py", line 391, in inner_torch_dispatch
out = proxy_call(self, func_overload, args, kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py", line 242, in proxy_call
out = func_overload(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/_ops.py", line 60, in __call__
return self._op(*args, **kwargs or {})
File "/opt/conda/lib/python3.7/site-packages/torch/utils/_python_dispatch.py", line 74, in wrapped
return f(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/_subclasses/fake_tensor.py", line 591, in __torch_dispatch__
r = meta_table[func](*args, **kwargs)
TypeError: _to_copy() got an unexpected keyword argument 'non_blocking'
```
After adding `non_blocking` argument it would fail with dtype mismatch:
```py
_________________________________________ TestProxyTensorOpInfoCPU.test_make_fx_symbolic_exhaustive_to_cpu_float32 _________________________________________
Traceback (most recent call last):
File "/home/iyashchuk/dev/pytorch/master/test/test_proxy_tensor.py", line 1263, in test_make_fx_symbolic_exhaustive
_test_make_fx_helper(self, device, dtype, op, "symbolic")
File "/home/iyashchuk/dev/pytorch/master/test/test_proxy_tensor.py", line 1245, in _test_make_fx_helper
self.assertEqual(new_out, old_out)
File "/home/iyashchuk/dev/pytorch/master/torch/testing/_internal/common_utils.py", line 2401, in assertEqual
assert_equal(
File "/home/iyashchuk/dev/pytorch/master/torch/testing/_comparison.py", line 1093, in assert_equal
raise error_metas[0].to_error(msg)
AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float64.
```
### Versions
Latest master.
cc @ezyang @mruberry @ngimel
| 4 |
4,852 | 84,321 |
[Profiler] Snapshot CudaCachingAllocator on profile begin
|
module: bootcamp, oncall: profiler
|
### ๐ The feature, motivation and pitch
One of the largest technical hurdles to memory profiling is deducing information about Tensors which were created before profiling began. PyTorch uses a custom [GPU allocator](https://github.com/pytorch/pytorch/blob/master/c10/cuda/CUDACachingAllocator.h) for fine grained control over GPU memory. One feature of this allocator is that it can expose a [detailed snapshot](https://github.com/pytorch/pytorch/blob/master/c10/cuda/CUDACachingAllocator.h#L151) of its current state. Moreover, [lightweight, always on source attribution](https://github.com/pytorch/pytorch/pull/82146) was recently added to CUDACachingAllocator. This information would go a long way towards disambiguating memory use for profiled regions and would provide a more complete picture of overall usage, and we should collect it in profiler.
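For context, a rough sketch of the snapshot data that is already reachable from Python via the existing allocator bindings; the proposal is to capture an equivalent snapshot inside the profiler when profiling begins (field names below are illustrative and may vary across versions):
```python
# Rough sketch (not the proposed profiler integration): the snapshot the
# allocator already exposes to Python today. Field names are illustrative and
# may differ between PyTorch versions; requires a CUDA build.
import torch

x = torch.randn(1024, 1024, device="cuda")  # a pre-existing allocation

for segment in torch.cuda.memory_snapshot():  # one entry per allocator segment
    blocks = segment.get("blocks", [])
    print(segment.get("device"), segment.get("total_size"), len(blocks))
```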
### Alternatives
_No response_
### Additional context
_No response_
cc @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 3 |
4,853 | 84,318 |
[Profiler] Generic Tensor summary
|
module: bootcamp, oncall: profiler
|
### ๐ The feature, motivation and pitch
The PyTorch profiler has several flags which enable optional functionality. One such flag is `record_shapes`, which is needed to assess kernel performance. However the scope of what is captured has gradually grown to include not only shape and dtype, but also [address, device, layout, and strides](https://github.com/pytorch/pytorch/blob/master/torch/csrc/profiler/collection.h#L51-L68). At the same time we have found applications for inspecting Tensors in other profiling contexts in the Python tracer for [`nn.Module` profiling](https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/profiler_python.cpp#L305-L308), and soon optimizer profiling as well.
As PyTorch evolves various sharp edges have emerged: NestedTensors do not have a concept of strides, not all Tensors have storage, etc. We are also developing tooling in Python and the disparity between representations requires us to fall back to the lowest common denominator for analysis.
A more elegant and maintainable approach would be to have a single summary class which takes an `at::Tensor` and extracts a broadly useful summary which could then be conveniently bound into Python. This would harden the collection path by minimizing the boundary between profiler and TensorImpl/StorageImpl and provide a featureful analysis experience out of the box.
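A hypothetical Python-side sketch of the kind of per-Tensor summary record being proposed (the real implementation would live in C++ next to the collection path and be bound into Python; all names below are made up):
```python
# Hypothetical sketch only: a single summary extracted from a Tensor. The
# guards illustrate the sharp edges mentioned above (NestedTensor has no
# strides, some tensors have no storage, etc.).
from dataclasses import dataclass
from typing import Optional, Tuple

import torch


@dataclass(frozen=True)
class TensorSummary:
    dtype: torch.dtype
    device: torch.device
    layout: torch.layout
    sizes: Optional[Tuple[int, ...]]
    strides: Optional[Tuple[int, ...]]
    storage_ptr: Optional[int]

    @staticmethod
    def from_tensor(t: torch.Tensor) -> "TensorSummary":
        is_strided = t.layout == torch.strided
        return TensorSummary(
            dtype=t.dtype,
            device=t.device,
            layout=t.layout,
            sizes=tuple(t.shape) if is_strided else None,
            strides=tuple(t.stride()) if is_strided else None,
            storage_ptr=t.data_ptr() if is_strided else None,
        )


print(TensorSummary.from_tensor(torch.randn(2, 3)))
```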
### Alternatives
_No response_
### Additional context
_No response_
cc @chaekit @aaronenyeshi @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb
| 0 |
4,854 | 84,316 |
Torch.fx tracing bug with dictionary.update calls on input
|
triaged, module: fx, fx
|
### ๐ Describe the bug
## Bug Description
Torch.fx incorrectly traces models that call `update` on their input dictionary. The incorrect model structure results in most nodes being removed if a call to `model.graph.eliminate_dead_code()` is made. I believe from similar examples that instead of returning a traced model, this should error out.
## Minimal Reproducible Example
```
from torch import nn
import torch.nn.functional as F
from torch import fx
class TestModel(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(10,10)
self.fc2 = nn.Linear(10,10)
def forward(self, input_dict):
out = {"out1": F.relu(self.fc1(input_dict['in']))}
input_dict.update(out)
out = {"out2": F.relu(self.fc2(input_dict['out1']))}
input_dict.update(out)
return input_dict['out2']
model = TestModel()
model = fx.symbolic_trace(model)
model.graph.print_tabular()
```
which results in:
```
opcode name target args kwargs
------------- ---------- --------------------------------- ------------------------------ ------------------
placeholder input_dict input_dict () {}
call_function getitem <built-in function getitem> (input_dict, 'in') {}
call_module fc1 fc1 (getitem,) {}
call_function relu <function relu at 0x7f70b9e11d90> (fc1,) {'inplace': False}
call_method update update (input_dict, {'out1': relu}) {}
call_function getitem_1 <built-in function getitem> (input_dict, 'out1') {}
call_module fc2 fc2 (getitem_1,) {}
call_function relu_1 <function relu at 0x7f70b9e11d90> (fc2,) {'inplace': False}
call_method update_1 update (input_dict, {'out2': relu_1}) {}
call_function getitem_2 <built-in function getitem> (input_dict, 'out2') {}
output output output (getitem_2,) {}
```
#### Visualization

### Result of calling `eliminate_dead_code`
```
from torch import nn
import torch.nn.functional as F
from torch import fx
class TestModel(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(10,10)
self.fc2 = nn.Linear(10,10)
def forward(self, input_dict):
out = {"out1": F.relu(self.fc1(input_dict['in']))}
input_dict.update(out)
out = {"out2": F.relu(self.fc2(input_dict['out1']))}
input_dict.update(out)
return input_dict['out2']
model = TestModel()
model = fx.symbolic_trace(model)
model.graph.eliminate_dead_code()
model.graph.print_tabular()
```
This results in:
```
opcode name target args kwargs
------------- ---------- --------------------------- -------------------- --------
placeholder input_dict input_dict () {}
call_function getitem_2 <built-in function getitem> (input_dict, 'out2') {}
output output output (getitem_2,) {}
```
### Similar Example (without using update)
If we instead directly update the input dictionary, instead of using `update` calls, the trace will fail:
```
from torch import nn
import torch.nn.functional as F
from torch import fx
class TestModel(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(10,10)
def forward(self, input_dict):
input_dict['out1'] = F.relu(self.fc1(input_dict['in']))
return input_dict['out1']
model = TestModel()
try:
model = fx.symbolic_trace(model)
except TypeError as e:
print(e)
```
We get the error:
```
'Proxy' object does not support item assignment
```
For this reason, I believe erroring out when the same behaviour is replicated by using `update` calls instead is the correct approach.
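For reference, a rough post-trace check (an assumed helper, not an existing torch.fx API) that surfaces this pattern explicitly until tracing itself rejects it:
```python
# Assumed helper, not part of torch.fx: flag `update` calls recorded directly
# on an input-dictionary placeholder.
from torch import fx


def reject_input_dict_updates(gm: fx.GraphModule) -> None:
    placeholders = {n for n in gm.graph.nodes if n.op == "placeholder"}
    for node in gm.graph.nodes:
        if (
            node.op == "call_method"
            and node.target == "update"
            and node.args
            and node.args[0] in placeholders
        ):
            raise TypeError(
                f"traced graph mutates input {node.args[0].name!r} via dict.update(), "
                "which symbolic tracing cannot represent faithfully"
            )


# For the TestModel above, this raises instead of silently keeping the
# incorrect graph:
# reject_input_dict_updates(fx.symbolic_trace(TestModel()))
```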
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.21.1
Libc version: N/A
Python version: 3.9.6 (default, Jun 29 2021, 05:25:02) [Clang 12.0.5 (clang-1205.0.22.9)] (64-bit runtime)
Python platform: macOS-12.4-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.1
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
cc @ezyang @SherlockNoMad @soumith
| 0 |
4,855 | 84,311 |
DecompositionInterpreter creates invalid graph
|
triaged, module: nvfuser
|
### ๐ Describe the bug
In FX's NVFuser backend (see nvfuser.py), DecompositionInterpreter implicitly assumes all placeholder nodes appear before other types of nodes to avoid name conflicts. For example,
```
placeholder _softmax_backward_data_default_11 _softmax_backward_data_default_11 () {}
call_function div_tensor_23 aten.div.Tensor (_softmax_backward_data_default_11, 8.0) {}
placeholder t_default_1 t_default_1 () {}
call_function t_default_357 aten.t.default (t_default_1,) {}
placeholder view_default_9 view_default_9 () {}
call_function transpose_int_67 aten.transpose.int (view_default_9, 1, 2) {}
placeholder _unsafe_view_default_1 _unsafe_view_default_1 () {}
call_function transpose_int_70 aten.transpose.int (_unsafe_view_default_1, 1, 2) {}
placeholder _unsafe_view_default _unsafe_view_default () {}
call_function transpose_int_69 aten.transpose.int (_unsafe_view_default, 1, 2) {}
placeholder t_default t_default () {}
call_function t_default_361 aten.t.default (t_default,) {}
placeholder t_default_2 t_default_2 () {}
call_function t_default_353 aten.t.default (t_default_2,) {}
call_function view_default_411 aten.view.default (div_tensor_23, [24, 12, 12]) {}
placeholder _unsafe_view_default_104 _unsafe_view_default_104 ()
```
is decomposed into
```
placeholder _softmax_backward_data_default_11 _softmax_backward_data_default_11 () {}
call_function div_tensor aten.div.Tensor (_softmax_backward_data_default_11, 8.0) {}
placeholder t_default_1 t_default_1 () {}
call_function t_default aten.t.default (t_default_1,) {}
placeholder view_default_9 view_default_9 () {}
call_function permute_default aten.permute.default (view_default_9, [0, 2, 1]) {}
placeholder _unsafe_view_default_1 _unsafe_view_default_1 () {}
call_function permute_default_1 aten.permute.default (_unsafe_view_default_1, [0, 2, 1]) {}
placeholder _unsafe_view_default _unsafe_view_default () {}
call_function permute_default_2 aten.permute.default (_unsafe_view_default, [0, 2, 1]) {}
placeholder t_default_2 t_default () {}
call_function t_default_3 aten.t.default (t_default_2,) {}
placeholder t_default_4 t_default_2 () {}
call_function t_default_5 aten.t.default (t_default_4,) {}
call_function view_default aten.view.default (div_tensor, [24, 12, 12]) {}
placeholder _unsafe_view_default_104 _unsafe_view_default_104 () {} {}
```
where you can see that in the latter graph, `t_default` is no longer a graph input, which must be wrong. A small repro can be found below.
```python
import torch
torch.manual_seed(0)
def define_correct():
fx_graph = torch.fx.Graph()
a = fx_graph.placeholder("a")
c = fx_graph.placeholder("view")
b = fx_graph.call_function(torch.ops.aten.view, (a, [4, 1]))
d = fx_graph.call_function(torch.ops.aten.view, (c, [4, 1]))
e = fx_graph.call_function(torch.ops.aten.div.Tensor, (b, 8.0))
f = fx_graph.call_function(torch.ops.aten.add.Tensor, (d, e))
fx_graph.output(f)
return fx_graph
def define_wrong():
fx_graph = torch.fx.Graph()
a = fx_graph.placeholder("a")
b = fx_graph.call_function(torch.ops.aten.view, (a, [4, 1]))
c = fx_graph.placeholder("view")
d = fx_graph.call_function(torch.ops.aten.view, (c, [4, 1]))
e = fx_graph.call_function(torch.ops.aten.div.Tensor, (b, 8.0))
f = fx_graph.call_function(torch.ops.aten.add.Tensor, (d, e))
fx_graph.output(f)
return fx_graph
fx_module_correct = torch.fx.GraphModule(torch.nn.Module(), define_correct())
fx_module_wrong = torch.fx.GraphModule(torch.nn.Module(), define_wrong())
x = torch.randn(2, 2)
y = torch.randn(2, 2)
z_correct = fx_module_correct(x, y)
z_wrong = fx_module_wrong(x, y)
print(z_correct, z_wrong)
#tensor([[-0.8919],
# [-1.4353],
# [ 0.1310],
# [ 0.9091]])
#tensor([[ 1.7336],
# [-0.3301],
# [-2.4511],
# [ 0.6395]])
```
`define_correct` and `define_wrong` should produce the same graph but their outputs don't match.
### Versions
Latest PyTorch should reproduce it. A simple fix is
```diff
--- a/torch/fx/interpreter.py
+++ b/torch/fx/interpreter.py
@@ -118,7 +118,7 @@ class Interpreter:
args = self.module.graph.process_inputs(*args)
self.args_iter : Iterator[Any] = iter(args)
- for node in self.module.graph.nodes:
+ for node in sorted(self.module.graph.nodes, key=lambda n: 0 if n.op == 'placeholder' else 1):
if node in self.env:
# Short circuit if we have this value. This could
# be used, for example, for partial evaluation
```
Should I submit a PR to expose sorting as an argument of `torch.fx.Interpreter.run`? Or do people prefer another fix like
```diff
diff --git a/torch/fx/graph.py b/torch/fx/graph.py
index 7b942052a1..ae66f3bc9c 100644
--- a/torch/fx/graph.py
+++ b/torch/fx/graph.py
@@ -774,6 +774,8 @@ class Graph:
candidate = name if name is not None else self._target_to_str(target)
name = self._graph_namespace.create_name(candidate, None)
+ if op == "placeholder":
+ target = name
n = Node(self, name, op, target, args, kwargs, type_expr)
```
| 0 |
4,856 | 84,309 |
Unable to run a single convolutional layer in different CUDA-contexts
|
module: cuda, triaged
|
### ๐ Describe the bug
I'm trying to design a real-time scheduler using post-Volta MPS by creating CUDA contexts with different SM counts. In this scheduler, I need to run a single model in different contexts on each iteration. The problem is that it works fine with non-convolutional layers like a fully-connected layer, but when I change the context and run a convolutional layer, I get a **_CUDNN_STATUS_MAPPING_ERROR_** error. Below is sample code to reproduce the error. As you can see, I create two additional CUDA contexts (on top of the already-created default context), then run a fully-connected layer using all three contexts and it runs smoothly, but when it comes to the convolutional one, I get the error.
```
import torch
import torch.nn as nn
from cuda import cuda
inl = torch.rand(128, 128, device="cuda")
lin = nn.Linear(128, 128, device='cuda')
inc = torch.ones(50, 30, 10, 5, device="cuda").share_memory_()
conv = nn.Conv2d(30, 5, 3, stride=1, padding=1, device='cuda').share_memory()
def create_context(sm_count):
affinity = cuda.CUexecAffinityParam()
affinity.type = cuda.CUexecAffinityType.CU_EXEC_AFFINITY_TYPE_SM_COUNT
affinity.param.smCount.val = sm_count
ctx = cuda.cuCtxCreate_v3([affinity], 1, 0, 0)[1]
cuda.cuInit(0)
return ctx
# Creating two more contexts
ctx1 = create_context(10)
ctx2 = create_context(40)
# Trying Fully Connected layer
cuda.cuCtxSetCurrent(0) # Sets default context
dummy = lin(inl)
cuda.cuCtxSetCurrent(ctx1)
dummy = lin(inl)
cuda.cuCtxSetCurrent(ctx2)
dummy = lin(inl)
# Trying with Convolutional layer
cuda.cuCtxSetCurrent(0) # Sets default context
dummy = conv(inc)
cuda.cuCtxSetCurrent(ctx1)
dummy = conv(inc)
cuda.cuCtxSetCurrent(ctx2)
dummy = conv(inc)
```
This is a very simple Python version of what I'm doing, but my main project is based on the C++ API, where I get the same error. Here is the full error output I get in the C++ version:
```
terminate called after throwing an instance of 'c10::CuDNNError'
what(): cuDNN error: CUDNN_STATUS_MAPPING_ERROR
Exception raised from getCudnnHandle at ../aten/src/ATen/cudnn/Handle.cpp:48 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7ffa619f07d2 in /home/amir/repos/libtorch/lib/libc10.so)
frame #1: at::native::getCudnnHandle() + 0x427 (0x7ff9fe23be57 in /home/amir/repos/libtorch/lib/libtorch_cuda_cpp.so)
frame #2: <unknown function> + 0x26aed47 (0x7ff9fe207d47 in /home/amir/repos/libtorch/lib/libtorch_cuda_cpp.so)
frame #3: <unknown function> + 0x26af144 (0x7ff9fe208144 in /home/amir/repos/libtorch/lib/libtorch_cuda_cpp.so)
frame #4: <unknown function> + 0x26a8c4c (0x7ff9fe201c4c in /home/amir/repos/libtorch/lib/libtorch_cuda_cpp.so)
frame #5: at::native::cudnn_convolution(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) + 0x95 (0x7ff9fe202165 in /home/amir/repos/libtorch/lib/libtorch_cuda_cpp.so)
frame #6: <unknown function> + 0x2c734d6 (0x7ff9ae3384d6 in /home/amir/repos/libtorch/lib/libtorch_cuda_cu.so)
frame #7: <unknown function> + 0x2c7354f (0x7ff9ae33854f in /home/amir/repos/libtorch/lib/libtorch_cuda_cu.so)
frame #8: at::_ops::cudnn_convolution::call(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool, bool) + 0x23d (0x7ff9e5ba5f3d in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #9: at::native::_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool, bool) + 0xc80 (0x7ff9e5341300 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x1d69a3a (0x7ff9e5dc9a3a in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #11: at::_ops::_convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool, bool) + 0x277 (0x7ff9e58dd557 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #12: at::native::convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0xfb (0x7ff9e533a39b in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x1d697da (0x7ff9e5dc97da in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #14: at::_ops::convolution::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0x176 (0x7ff9e58b41d6 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x2891f18 (0x7ff9e68f1f18 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #16: <unknown function> + 0x2892a66 (0x7ff9e68f2a66 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #17: at::_ops::convolution::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0x251 (0x7ff9e58dc1e1 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #18: at::native::conv2d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) + 0x159 (0x7ff9e533ec49 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #19: <unknown function> + 0x1dfa022 (0x7ff9e5e5a022 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #20: at::_ops::conv2d::call(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) + 0x20e (0x7ff9e5c5f28e in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #21: <unknown function> + 0x385136c (0x7ff9e78b136c in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #22: torch::nn::Conv2dImpl::_conv_forward(at::Tensor const&, at::Tensor const&) + 0x3be (0x7ff9e78ab8ae in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #23: torch::nn::Conv2dImpl::forward(at::Tensor const&) + 0x10 (0x7ff9e78ab9b0 in /home/amir/repos/libtorch/lib/libtorch_cpu.so)
frame #24: dummy(thread_data) + 0x166 (0x5580a54a6d42 in ./build/fgprs)
frame #25: dummy2(void*) + 0x36 (0x5580a54a6f2d in ./build/fgprs)
frame #26: <unknown function> + 0x8609 (0x7ff9e4045609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #27: clone + 0x43 (0x7ff9ab024133 in /lib/x86_64-linux-gnu/libc.so.6)
```
### Versions
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.10
Python version: 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-43-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.2
[pip3] torchvision==0.11.3
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.2 py37h20f2e39_0
[conda] numpy-base 1.21.2 py37h79a1101_0
[conda] pytorch 1.10.2 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.11.3 py37_cu113 pytorch
cc @ngimel
| 2 |
4,857 | 84,304 |
op for aten::bitwise_and during torch.jit.trace
|
oncall: jit
|
### ๐ Describe the bug
Hi, I'm trying to run jit.trace on the openfold model found [here](https://github.com/aqlaboratory/openfold). However, I keep running into this error as shown here:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_18250/2760271908.py in <module>
1 #flattened_inputs, _ = flatten_to_tuple(batch)
2 #parallelModel = nn.DataParallel(model)
----> 3 trace = torch.jit.trace(model, batch, strict=False) # Succeeds
4
5 #adapter = TracingAdapter(model, batch)
/data/openfold/lib/conda/envs/openfold_venv/lib/python3.7/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
748 strict,
749 _force_outplace,
--> 750 _module_class,
751 )
752
/data/openfold/lib/conda/envs/openfold_venv/lib/python3.7/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
963 strict,
964 _force_outplace,
--> 965 argument_names,
966 )
967 check_trace_method = module._c._get_method(method_name)
RuntimeError: 0INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::bitwise_and but it isn't a special case. Argument types: Tensor, bool,
Candidates:
aten::bitwise_and.Tensor(Tensor self, Tensor other) -> (Tensor)
aten::bitwise_and.Tensor_out(Tensor self, Tensor other, *, Tensor(a!) out) -> (Tensor(a!))
aten::bitwise_and.Scalar(Tensor self, Scalar other) -> (Tensor)
aten::bitwise_and.Scalar_out(Tensor self, Scalar other, *, Tensor(a!) out) -> (Tensor(a!))
```
Just based on some quick searches, it looks like it could be an issue with the OpenFold model using the `bool` datatype in its tensors... I'm having a hard time pinpointing where in the model this fails, and I'm not sure whether there is a way to get around the aten::bitwise_and issue here. Any help would be appreciated.
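For anyone triaging: purely a guess (untested against this exact setup), the schema candidates in the error suggest the trace recorded a `bitwise_and(Tensor, bool)` call, e.g. something like `mask & some_python_bool` somewhere in the model; a hypothetical minimal reproducer along those lines would be:
```python
# Hypothetical minimal reproducer (may or may not hit the same assert on this
# version): `tensor & python_bool` is recorded as bitwise_and(Tensor, bool),
# which matches none of the overloads listed in the error above.
import torch


def f(mask: torch.Tensor) -> torch.Tensor:
    return mask & True  # Tensor & bool


torch.jit.trace(f, torch.ones(3, dtype=torch.bool))
```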
### Versions
```Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.5
Libc version: glibc-2.10
Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:21) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1083-aws-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.0
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.11.0
[pip3] torch-neuron==1.11.0.2.3.0.0
[pip3] torchmetrics==0.9.1
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl intel
[conda] cudatoolkit 10.2.89 h713d32c_10 conda-forge
[conda] cudatoolkit-dev 10.1.243 h516909a_3 conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-service 2.4.0 py37h94185c7_0 conda-forge
[conda] mkl_fft 1.3.1 py37h90e98c2_3 conda-forge
[conda] mkl_random 1.2.2 py37h693438c_1 conda-forge
[conda] mkl_umath 0.1.1 py37h3242e30_26 intel
[conda] numpy 1.20.0 pypi_0 pypi
[conda] pytorch-lightning 1.5.10 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.11.0 pypi_0 pypi
[conda] torch-neuron 1.11.0.2.3.0.0 pypi_0 pypi
[conda] torchmetrics 0.9.1 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi```
| 2 |
4,858 | 84,290 |
Fix convert path for fixed qparam ops (sigmoid and softmax)
|
oncall: quantization, triaged
|
### ๐ Describe the bug
sigmoid is an op that works for both fp32 and quantized input: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L4519
but we have a quantized sigmoid module: https://github.com/pytorch/pytorch/blob/master/torch/ao/nn/quantized/modules/activation.py#L129-L148. This is confusing and we should remove it.
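As a quick illustration of the redundancy (a sketch, assuming a recent build with the quantized CPU kernels), the functional op already handles both paths directly:
```python
# Sketch: torch.sigmoid dispatches for both fp32 and quantized inputs, so a
# dedicated quantized Sigmoid module adds nothing but confusion.
import torch

x = torch.randn(4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

print(torch.sigmoid(x).dtype)           # fp32 path
print(torch.sigmoid(qx).is_quantized)   # quantized path, no nnq.Sigmoid needed
```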
## How to fix?
remove the quantized modules for all the modules in this list: https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/quantization_mappings.py#L163-L166
### Versions
master
cc @jianyuh @raghuramank100 @jamesr66a @vkuzo
| 5 |
4,859 | 84,265 |
torch.Tensor.to.dtype_layout overload is not available in Python
|
triaged, module: codegen, module: python frontend
|
### ๐ Describe the bug
There's an overload `func: to.dtype_layout(Tensor(a) self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, bool copy=False, MemoryFormat? memory_format=None) -> Tensor(a)`
https://github.com/pytorch/pytorch/blob/acd6ca8cfa9537284928fb5d36834d1e5ae1e6f3/aten/src/ATen/native/native_functions.yaml#L6581
But it's not used in dispatching for `torch.Tensor.to`, what's happening?
```py
In [1]: import torch
In [2]: a = torch.randn(3, 3, device="cuda")
In [3]: a.to(layout=torch.strided)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 a.to(layout=torch.strided)
TypeError: to() received an invalid combination of arguments - got (layout=torch.layout, ), but expected one of:
* (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
* (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
* (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format memory_format)
```
### Versions
Latest master.
cc @ezyang @bhosmer @bdhirsh
| 4 |
4,860 | 84,261 |
relu-gru mse is 0.022 much greater than 0.003 with half dtype.
|
needs reproduction, triaged, module: half
|
### ๐ Describe the bug
gru op with nonlinearity type relu gets low precision when run with the fp16 datatype.
python code:
```python
from __future__ import print_function
import time
import numpy as np
import torch
import torch.nn.utils.rnn as rnn_utils
from itertools import product
def two_tensor_diff_by_mse(self, other):
if self.dtype in (torch.bool, torch.double, torch.half):
self = self.float()
if other.dtype in (torch.bool, torch.double, torch.half):
other = other.float()
epsilon = 1.0 / 16384
assert self.size() == other.size(), "tensor size need be equal."
assert self.numel() > 0, "Tensor numel need be greater than 0."
# check that NaNs are in the same locations
nan_mask = self != self
assert torch.equal(nan_mask, other != other), "nan is in different position."
diff = self - other
diff[nan_mask] = 0
diff = diff.abs().pow(2).sum()
self_pow_sum = self.pow(2).sum()
if diff <= (2 * epsilon) * (2 * epsilon):
diff = 0.0
if self_pow_sum <= epsilon:
self_pow_sum = self_pow_sum + epsilon
diff = torch.div(diff, (self_pow_sum * 1.0))
assert diff.sqrt() < 0.003, "diff value {0} is greater than default_mse: {1}.".format(diff.sqrt(), 0.003)
def TensorGenerator(shape, dtype, func = lambda x:x):
cpu_tensor = torch.randn(shape).to(torch.half).to(torch.float)
mlu_tensor = func(cpu_tensor.to("cuda").to(dtype))
cpu_tensor = func(cpu_tensor)
return cpu_tensor, mlu_tensor
# almost copy from origin pytorch test case.
def test_RNN_cpu_vs_mlu():
def forward_backward(mlu, dtype, rnn, input_val, grad_output, \
hx_val, grad_hy, grad_cy=None):
if isinstance(input_val, rnn_utils.PackedSequence):
input = rnn_utils.PackedSequence(
input_val.data.data.requires_grad_(True), input_val.batch_sizes)
input_var = input.data
else:
input = input_val.clone().requires_grad_(True)
input_var = input
hx = hx_val.clone().requires_grad_(True)
if mlu:
rnn.cuda().to(dtype)
output, hy = rnn(input, hx)
if isinstance(output, rnn_utils.PackedSequence):
output = output.data
torch.autograd.backward([output, hy], [grad_output, grad_hy])
return {'output': output.data,
'hy': hy.data,
'weights': rnn.all_weights,
'grad_input': input_var.grad.data,
'grad_hx': hx.grad.data}
def compare_cpu_mlu(outputs_cpu, outputs_mlu):
#self.assertEqual(list(outputs_cpu.keys()), list(outputs_mlu.keys()))
for key in outputs_cpu.keys():
print(key, flush = True)
if key != 'weights':
two_tensor_diff_by_mse(outputs_cpu[key], outputs_mlu[key].cpu())
# check grad weights separately, as nested dict
for cpu_layer_weight, mlu_layer_weight in zip(outputs_cpu['weights'], \
outputs_mlu['weights']):
for (cpu_weight, mlu_weight) in zip(cpu_layer_weight, mlu_layer_weight):
two_tensor_diff_by_mse(cpu_weight.grad.data, mlu_weight.grad.data.cpu())
seed = int(time.time())
torch.backends.cudnn.enabled = True
torch.backends.cudnn.allow_tf32 = False
torch.backends.cuda.matmul.allow_tf32 = False
input_size = 2; hidden_size= 128; batch = 6; seq_length = 50; bidirectional = True; bias = True
batch_first = False; dtype = torch.half; num_layers = 2
num_directions = 2 if bidirectional else 1
hx_val_cpu, hx_val_mlu = TensorGenerator((num_layers * num_directions, batch, hidden_size), dtype)
input_val_cpu, input_val_mlu = TensorGenerator((seq_length, batch, input_size), dtype)
grad_output_cpu, grad_output_mlu = TensorGenerator((seq_length, batch, hidden_size * num_directions), dtype)
grad_hy_cpu, grad_hy_mlu = TensorGenerator((num_layers * num_directions, batch, hidden_size), dtype)
torch.manual_seed(seed)
rnn = torch.nn.RNN(input_size, hidden_size, num_layers, bias=bias, \
bidirectional=bidirectional, batch_first=False, \
nonlinearity="relu").to(dtype).float()
outputs_cpu = forward_backward(False, dtype, rnn, input_val_cpu, \
grad_output_cpu, hx_val_cpu, grad_hy_cpu)
torch.manual_seed(seed)
rnn_mlu = torch.nn.RNN(input_size, hidden_size, num_layers, bias=bias, \
bidirectional=bidirectional, batch_first=False, \
nonlinearity="relu").to(dtype).float()
outputs_mlu = forward_backward(True, dtype, rnn_mlu, \
input_val_mlu, grad_output_mlu, \
hx_val_mlu, grad_hy_mlu)
compare_cpu_mlu(outputs_cpu, outputs_mlu)
if __name__ == '__main__':
test_RNN_cpu_vs_mlu()
```
### Versions
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
| 1 |
4,861 | 84,259 |
Would you like to upload the C++ libtorch to the vcpkg package repo?
|
module: cpp, module: ci, triaged, enhancement, topic: binaries
|
### ๐ The feature, motivation and pitch
Hi:
I currently use libtorch (the C++ API) in a C++ project built with CMake and vcpkg, and I often have to define the libtorch root path by hand, which gets tiresome. It would be great if libtorch could be uploaded to the vcpkg package registry (https://vcpkg.io/en/packages.html) for package management. Thanks.
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser @seemethere @malfet @pytorch/pytorch-dev-infra
| 5 |
4,862 | 84,257 |
Support dict inputs and outputs when exporting to ONNX
|
module: onnx, triaged, onnx-needs-info
|
### ๐ The feature, motivation and pitch
I have a model that takes `features: Dict[str, Tensor]` as input and produces `predictions: Dict[str, Tensor]` as output. It's straightforward to convert this model to TorchScript, but I need to export it to ONNX for deployment. Unfortunately, the PyTorch-to-ONNX converter (`torch.onnx.export`) doesn't seem to support dictionaries as inputs/outputs even though ONNX itself does.
### Alternatives
One option is to convert the dictionaries to keyword parameters, but that approach has several drawbacks:
1. More cumbersome to iterate on code -- input features may change over the course of a model's development. Adding named parameters to a function's signature is unnecessary overhead.
2. Outputs aren't named, so we'd have to return a list/tuple. In such cases, we have to be very careful about the return value order and make sure all scripts/tools that invoke the model are updated on any additions, removals, or permutations to the output.
3. We may only have access to the TorchScript model, so changing the inputs/outputs isn't even possible for us in all cases.
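For completeness, a rough, self-contained sketch of that workaround (all module names are made up); the hard-coded output position is exactly drawback (2):
```python
# Sketch of the "flatten dicts to positional args" workaround; names are
# made up and the fixed output order becomes an implicit contract (drawback 2).
from typing import Dict

import torch


class DictModel(torch.nn.Module):
    def forward(self, features: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
        return {"c": features["a"] + features["b"].sum()}


class FlattenedWrapper(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.inner = DictModel()

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.inner({"a": a, "b": b})["c"]


torch.onnx.export(
    FlattenedWrapper(),
    (torch.zeros(1, 2), torch.zeros(2, 1)),
    "flattened.onnx",
    input_names=["a", "b"],
    output_names=["c"],
)
```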
### Additional context
Here's a simple script that demonstrates the missing support for this feature:
```python
import torch
from typing import Dict
class A(torch.nn.Module):
def forward(self, features: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
print(features['b'])
return {
'c': features['a'],
}
module = torch.jit.script(A())
sample_a = torch.zeros([1,2], dtype=torch.float32)
sample_b = torch.zeros([2,1], dtype=torch.float32)
features = { 'a': sample_a, 'b': sample_b }
module(features)
torch.onnx.export(module, (features, {}), 'output.onnx')
```
| 6 |
4,863 | 84,247 |
Ensure ops account for offsets and strides
|
triaged, module: nestedtensor, release notes: nested tensor
|
# Summary
Many ops were written for nested tensors under the assumption that every nested tensor is contiguous. The only metadata needed to define the shape of a nested tensor was the nested tensor size.
This is no longer the case. nested_strides and nested_offsets were added. There may be ops that are still assuming sizes is all that is needed. One that comes to mind is: https://github.com/pytorch/pytorch/blob/372a19d2c673a20fe50955b20ea4e3685266d630/aten/src/ATen/native/nested/NestedTensorMath.cpp#L18
This should be updated to also pass in nested_strides and nested_offsets. There may be other functions that need to be updated.
cc @cpuhrsch @jbschlosser @bhosmer @mikaylagawarecki
| 0 |
4,864 | 84,234 |
Randomness should be consistent across devices with use_deterministic_algorithms
|
triaged, module: random, module: determinism
|
### ๐ The feature, motivation and pitch
Being able to generate reproducible results is a key part of the research process, and as a research platform, PyTorch should support this goal. It currently does this by providing a "deterministic" option as described in https://pytorch.org/docs/stable/notes/randomness.html. However, even with that option enabled, random number generation is inconsistent across GPUs.
I think it would be reasonable to expect to get consistent random numbers when using the same seed, PyTorch version, and CUDA version even when running on a different device (whether CPU or physical GPU). From what I understand of random number generation, hardware differences such as processor counts or FPU precision don't need to play a role in RNG. And if they do, I'm guessing a slower, deterministic pathway could be chosen when `torch.use_deterministic_algorithms()` is used.
### Alternatives
The current alternatives I know of are:
1. Always use CPU (which seems to be consistent, although I am not sure that is guaranteed) as the target for random number generation
2. Use a different, deterministic generator (e.g., numpy)
3. Use the same hardware devices everywhere
Options 1 and 2 require all researchers to know about this issue and write their code accordingly. Unfortunately, people are not aware of it. When they write code with determinism in mind, they typically allow for a torch generator to be provided as an API argument, assuming this will guarantee determinism. This is unfortunately not the case (and is further compounded by #62451).
Option 3 is unfortunately unrealistic and can't be sustained over time as hardware becomes obsolete.
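For example, a minimal sketch of option 1 (which still assumes CPU RNG is itself consistent across machines, something that is not formally guaranteed either):
```python
# Sketch of workaround (1): do the random draw on CPU with an explicit
# generator, then move the result to whichever device is in use.
import torch


def reproducible_randn(*shape: int, seed: int, device) -> torch.Tensor:
    gen = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(*shape, generator=gen, device="cpu").to(device)


a = reproducible_randn(2, 3, seed=0, device="cuda" if torch.cuda.is_available() else "cpu")
b = reproducible_randn(2, 3, seed=0, device="cpu")
assert torch.allclose(a.cpu(), b)  # same values regardless of target device
```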
### Additional context
Related issues: #62467 #79496 #62451
cc @pbelevich @mruberry @kurtamohler
| 12 |
4,865 | 84,202 |
Gradient value calculation error in MultiLabelMarginLoss
|
module: loss, module: cuda, triaged, module: correctness (silent)
|
### ๐ Describe the bug
We have confirmed that the gradient computed by the backward of the multilabel_margin_loss operator is wrong for some inputs. The problem shows up reliably on some hardware, such as the NVIDIA T4 / 1080, but only intermittently on the V100 (these are the only GPUs we currently have for testing). We first noticed it when comparing the same input data in float32 and float64: one value of the output differed substantially (and not by a numerical-accuracy margin). The Python code below provides a simplified case for reproducing the bug. More importantly, after dissecting the CUDA implementation in ATen, we found that it is a bug caused by missing thread synchronization, in this code: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/MultiLabelMarginCriterion.cu#L194-#L201
Thread zero reads and writes grad_input_k[target_idx], but before that write has completed, the non-zero threads continue to execute asynchronously; when threadIdx.x == target_idx, that thread reads and writes grad_input_k[target_idx] at the same time, and the accesses conflict. This is the origin of the bug. The fix is to add the necessary thread synchronization between these two steps.
The ATen CUDA code snippet is as follows:
```cpp
for (int dt = 0; dt < dim; dt++) {
...
if (threadIdx.x == 0) {
grad_input_k[target_idx] += static_cast<scalar_t>(total_sum);
}
}
for (int d = threadIdx.x; d < dim; d += blockDim.x) {
grad_input_k[d] *= *grad_output_k;
}
```
By the way, here is a Google Colab URL with test code; you can easily reproduce this bug with the NVIDIA T4 provided by the Google Colab platform:
https://colab.research.google.com/drive/170CHg9BBvoc9wiHXmHuhZ3EZjpnFPeHV?usp=sharing
A simplified case for reproducing:
```python
import torch
import numpy as np
n = 1
c = 33
index = 32
# make test case
x = np.zeros((n, c), dtype=np.float32)
y = np.zeros((n, c), dtype=np.int64) + -1
y[0][0] = index
x_fp32 = torch.tensor(x, dtype=torch.float32, requires_grad=True).cuda()
x_fp32.retain_grad()
x_fp64 = torch.tensor(x, dtype=torch.float64, requires_grad=True).cuda()
x_fp64.retain_grad()
y = torch.tensor(y, dtype=torch.int64).cuda()
# Compute with data input in float32 and float64 formats respectively.
out_fp32 = torch.nn.functional.multilabel_margin_loss(x_fp32, y)
out_fp32.backward()
out_fp64 = torch.nn.functional.multilabel_margin_loss(x_fp64, y)
out_fp64.backward()
# Print grad of input.
print('x_fp32.grad=', x_fp32.grad) # The value of the last element in the output should be -0.9697, not 0.0, it's easy to make a check by hand.
print('x_fp64.grad=', x_fp64.grad)
```
the result tested on NVIDIA T4 with PyTorch 1.12.1:
```text
x_fp32.grad= tensor([[0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0000]], device='cuda:0')
x_fp64.grad= tensor([[ 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303, 0.0303,
-0.9697]], device='cuda:0', dtype=torch.float64)
```
### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @ngimel
| 1 |
4,866 | 84,194 |
Pytorch gets small bias on the result of different types of divisors while doing floating point division.
|
module: numerical-stability, module: cuda, triaged
|
### ๐ Describe the bug
results are different, which may arise from rounding.
```python
a1 = torch.tensor(858.8350830078125, dtype=torch.float32).cuda()/100.0
a2 = torch.tensor(858.8350830078125, dtype=torch.float32).cuda()/torch.tensor(100.0).cuda()
print(a1 - a2)
# out: tensor(-9.5367e-07, device='cuda:0')
```
### Versions
python: 3.8.10
pytorch: 1.8.1+cu111
While using tensorflow, the results are equal.
```python
with tf.device('/GPU:0'):
a1 = tf.constant([858.8350830078125], dtype=tf.float32) /100.0
a2 = tf.constant([858.8350830078125], dtype=tf.float32) / tf.constant(100.0, dtype=tf.float32)
print(a1 - a2)
# out: tf.Tensor([0.], shape=(1,), dtype=float32)
```
cc @ngimel
| 4 |
4,867 | 84,193 |
Attach execution time to each node in an fx trace
|
triaged, fx, hacktoberfest
|
## Proposal description
A proposal for HacktoberFest.
The idea is to plot a graph of a model that shows the execution time of each node or op.
Most likely, this project should be implemented as a standalone repo rather than a feature in PyTorch.
## Background
TBD
## Details
You would need to go over:
- torch.fx Documentation: https://pytorch.org/docs/stable/fx.html
- torch.fx Profiler Tutorial: https://pytorch.org/tutorials/intermediate/fx_profiling_tutorial.html
- Printing torch.fx Graph: https://github.com/pytorch/pytorch/blob/aec0f98e7088b68ba119becffd4fdbd64b4d75a8/torch/fx/passes/graph_drawer.py#L98C1-L120C16
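A first sketch of the idea (standalone code, loosely following the fx profiling tutorial above; nothing here is an existing PyTorch API):
```python
# Standalone sketch: time each node with an fx Interpreter and stash the
# result in node.meta, ready to be attached to a plotted graph.
import time

import torch
import torch.fx
from torch.fx import Interpreter


class NodeTimer(Interpreter):
    def run_node(self, n: torch.fx.Node):
        # For GPU ops you would also need torch.cuda.synchronize() around this.
        start = time.perf_counter()
        result = super().run_node(n)
        n.meta["runtime_sec"] = time.perf_counter() - start
        return result


def node_runtimes(model: torch.nn.Module, *example_inputs):
    gm = torch.fx.symbolic_trace(model)
    NodeTimer(gm).run(*example_inputs)
    return {n.name: n.meta.get("runtime_sec", 0.0) for n in gm.graph.nodes}


print(node_runtimes(torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()),
                    torch.randn(4, 8)))
```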
cc @ezyang @SherlockNoMad @soumith
| 5 |
4,868 | 84,192 |
CUDA 11.6 linux-bionic-cuda11.6-py3-gcc7-slow-gradcheck failure
|
module: ci, triaged, module: linear algebra
|
### ๐ Describe the bug
We are working on making CUDA 11.6 our stable version of CUDA, and hence moving jobs from 11.3 and 10.2 to CUDA 11.6.
I am observing the following failure:
https://github.com/pytorch/pytorch/runs/8044912666?check_suite_focus=true
On this PR: https://github.com/pytorch/pytorch/pull/84120
```
ERROR [3.311s]: test_fn_gradgrad_linalg_det_singular_cuda_float64 (__main__.TestGradientsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1941, in wrapper
method(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1941, in wrapper
method(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 391, in instantiated_test
raise rte
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 378, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 853, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 815, in test_wrapper
return test(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_ops_gradients.py", line 190, in test_fn_gradgrad
self._check_helper(device, dtype, op, op.get_op(), 'bwgrad_bwgrad')
File "/var/lib/jenkins/workspace/test/test_ops_gradients.py", line 133, in _check_helper
self.assertTrue(gradgradcheck(fn, gradcheck_args, **kwargs))
File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3233, in gradgradcheck
return torch.autograd.gradgradcheck(fn, inputs, grad_outputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1574, in gradgradcheck
return gradcheck(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1418, in gradcheck
return _gradcheck_helper(**args)
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1432, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1075, in _gradcheck_real_imag
gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1131, in _slow_gradcheck
raise GradcheckError(_get_notallclose_msg(a, n, i, j, complex_indices, test_imag))
torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[-1.2642e-07, 2.8158e-07, 1.7074e-07, -7.6408e-08, 1.8773e-02,
-1.0848e-02, -2.0257e-08, -2.6593e-02, -2.0109e-02, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 8.6736e-13, 0.0000e+00, 0.0000e+00, -1.8773e-02, 0.0000e+00,
4.9516e-02, 2.6593e-02, 0.0000e+00, 8.8741e-03, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-9.3607e-08, 2.0849e-07, 1.2642e-07, 1.0848e-02, -4.9516e-02,
7.6408e-08, 2.0109e-02, -8.8740e-03, 2.0257e-08, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 4.1836e-07, -1.8774e-02, 1.0848e-02, 2.5285e-07, -5.6315e-07,
-3.4148e-07, 6.7034e-08, -1.9080e-02, -1.0416e-02, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 1.8773e-02, 0.0000e+00, -4.9516e-02, 0.0000e+00, 0.0000e+00,
-8.6736e-13, 1.9080e-02, 0.0000e+00, -2.5707e-03, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-1.0848e-02, 4.9516e-02, 0.0000e+00, 0.0000e+00, 0.0000e+00,
8.6736e-13, 1.0415e-02, 2.5707e-03, 2.1684e-13, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-8.6736e-13, 2.6593e-02, 2.0109e-02, -8.6736e-13, 1.9080e-02,
1.0415e-02, -2.1684e-13, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-2.6593e-02, 3.4694e-12, -8.8741e-03, -1.9080e-02, 1.7347e-12,
2.5707e-03, 0.0000e+00, 4.3368e-13, 2.1684e-13, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-2.0109e-02, 8.8741e-03, -1.7347e-12, -1.0415e-02, -2.5707e-03,
-8.6736e-13, 2.1684e-13, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 1.3878e-11, 0.0000e+00, -1.4490e-01, 3.5436e-01,
-1.7347e-12, 5.9408e-02, 1.2011e-01],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -6.8057e-06,
2.9150e-06, -1.0446e-05, 1.4491e-01, -2.0972e-06, 7.0634e-02,
-5.9410e-02, 9.1734e-07, -1.4263e-01],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
3.4694e-12, -1.3878e-11, -3.5436e-01, -7.0626e-02, 6.9389e-12,
-1.2011e-01, 1.4263e-01, -3.4694e-12],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
1.4490e-01, -3.5436e-01, 6.9389e-12, 0.0000e+00, -6.9389e-12,
1.7347e-12, 2.8592e-03, -1.9792e-01],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -1.4491e-01,
1.7955e-06, -7.0632e-02, 3.0158e-06, -1.2917e-06, 4.6289e-06,
-2.8605e-03, 5.6502e-07, 8.0385e-02],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 3.5436e-01,
7.0625e-02, 3.5910e-06, -1.6832e-06, 7.2095e-07, -2.5835e-06,
1.9792e-01, -8.0388e-02, 1.1300e-06],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -8.2096e-06,
-5.9404e-02, -1.2012e-01, 5.9063e-06, -2.8617e-03, 1.9793e-01,
-2.5835e-06, 1.1066e-06, -3.9653e-06],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 5.9427e-02,
-8.2096e-06, 1.4266e-01, 2.8454e-03, 5.9063e-06, -8.0408e-02,
6.0316e-06, -2.5835e-06, 9.2577e-06],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.2010e-01,
-1.4263e-01, -8.2096e-06, -1.9792e-01, 8.0386e-02, 5.9063e-06,
-1.6832e-06, 7.2095e-07, -2.5835e-06]], device='cuda:0',
dtype=torch.float64)
analytical:tensor([[ 1.6487e-18, 2.5953e-19, 1.7094e-18, 1.0771e-18, 1.8773e-02,
-1.0848e-02, 4.3586e-21, -2.6593e-02, -2.0109e-02, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 4.7590e-19, 1.1971e-19, -1.6395e-18, -1.8773e-02, -3.5903e-18,
4.9516e-02, 2.6593e-02, 3.9621e-19, 8.8741e-03, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 2.5358e-20, -5.2994e-18, -3.4862e-18, 1.0848e-02, -4.9516e-02,
-2.9928e-19, 2.0109e-02, -8.8741e-03, 4.1405e-19, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-1.0177e-19, -1.8773e-02, 1.0848e-02, 1.0191e-19, 2.3322e-18,
1.5783e-18, 1.0780e-19, -1.9080e-02, -1.0415e-02, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 1.8773e-02, 2.9542e-20, -4.9516e-02, 4.6931e-18, -2.3037e-19,
1.4534e-18, 1.9080e-02, 1.2146e-18, -2.5707e-03, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-1.0848e-02, 4.9516e-02, 2.0289e-18, -3.3563e-18, 2.1024e-18,
-7.3856e-20, 1.0415e-02, 2.5707e-03, -7.5162e-19, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-1.5261e-18, 2.6593e-02, 2.0109e-02, -7.2256e-19, 1.9080e-02,
1.0415e-02, -4.3381e-19, -2.0338e-19, 2.5978e-19, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-2.6593e-02, -8.5855e-19, -8.8741e-03, -1.9080e-02, -2.2611e-18,
2.5707e-03, 2.0338e-19, -6.6969e-19, -3.7131e-19, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[-2.0109e-02, 8.8741e-03, 1.3362e-18, -1.0415e-02, -2.5707e-03,
2.1464e-18, 1.7390e-19, 8.1441e-19, 2.5094e-19, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -7.7575e-18,
1.1521e-17, -5.7016e-18, -1.1786e-17, -1.4490e-01, 3.5436e-01,
-2.1773e-17, 5.9408e-02, 1.2011e-01],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -2.5399e-17,
8.2652e-18, 7.4699e-18, 1.4490e-01, 1.8690e-18, 7.0626e-02,
-5.9408e-02, -1.1662e-17, -1.4263e-01],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 5.5287e-17,
-6.4569e-18, -1.9148e-17, -3.5436e-01, -7.0626e-02, 1.0427e-17,
-1.2011e-01, 1.4263e-01, 2.1830e-17],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 3.8757e-17,
1.4490e-01, -3.5436e-01, 3.6603e-17, -1.7146e-17, 5.1776e-17,
-1.3481e-17, 2.8592e-03, -1.9792e-01],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -1.4490e-01,
-9.7294e-18, -7.0626e-02, -2.7567e-18, 3.4256e-18, -1.2143e-17,
-2.8592e-03, 6.4823e-18, 8.0387e-02],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 3.5436e-01,
7.0626e-02, -5.4981e-17, 3.7349e-18, -4.7476e-18, 2.1103e-17,
1.9792e-01, -8.0387e-02, -1.5705e-17],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.1132e-18,
-5.9408e-02, -1.2011e-01, 8.9919e-18, -2.8592e-03, 1.9792e-01,
-1.7702e-17, 3.7265e-18, -2.0405e-17],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 5.9408e-02,
-6.4541e-18, 1.4263e-01, 2.8592e-03, 1.8921e-18, -8.0387e-02,
8.3473e-18, -4.1916e-18, 1.7186e-17],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.2011e-01,
-1.4263e-01, -8.7791e-18, -1.9792e-01, 8.0387e-02, 1.5111e-17,
-7.3510e-18, 3.4224e-18, 6.8696e-18]], device='cuda:0',
dtype=torch.float64)
----------------------------------------------------------------------
Ran 7438 tests in 9875.920s
FAILED (errors=2, skipped=3940, expected failures=70)
Generating XML reports...
Generated XML report: test-reports/python-unittest/test_ops_gradients/TEST-TestGradientsCUDA-20220826220207.xml
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/run_test.py", line 1065, in <module>
main()
File "/var/lib/jenkins/workspace/test/run_test.py", line 1043, in main
raise RuntimeError(err_message)
RuntimeError: test_ops_gradients failed!
real 180m56.922s
user 187m48.936s
sys 12m11.785s
```
cc: @malfet @ptrblck @ngimel
### Versions
Pytorch Nightly 1.13
CUDA 11.6
cc @seemethere @malfet @pytorch/pytorch-dev-infra @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 9 |
4,869 | 84,189 |
RuntimeError: outputs_[i]->uses().empty() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/ir.cpp:1027, please report a bug to PyTorch. (eraseOutput at /pytorch/torch/csrc/jit/ir.cpp:1027)
|
triage review, oncall: jit
|

| 3 |
4,870 | 93,652 |
Sparse jagged tensor support for inductor
|
triaged, oncall: pt2
|
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,871 | 84,187 |
[jit] WithInsertPoint can't get back to the prev_ node if the prev_ node has been destroyed
|
oncall: jit
|
### ๐ Describe the bug
**The WithInsertPoint code is here:** https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.h#L1444
I create an insert point guard1 before node1 and destroy node1, then I create another insert point guard2 before node2; the error happens when guard2 is destructed.
**The code to reproduce is like this:**
``` c++
{
WithInsertPoint guard1(node1);
node1->destroy();
WithInsertPoint guard2(node2);
}
```
**The output is like this:**
```c++
terminate called after throwing an instance of 'c10::Error'
what(): n->owningGraph() == this && n->inBlockList() INTERNAL ASSERT FAILED at "/torch/src/pytorch/torch/csrc/jit/ir/ir.h":1396, please report a bug to PyTorch.
Exception raised from setInsertPoint at /torch/src/pytorch/torch/csrc/jit/ir/ir.h:1213 (most recent call first):
frame #0: <unknown function> + 0xe14ad (0x7faf2773e4ad in /torch/src/pytorch/build/lib/libc10.so)
frame #1: std::function<std::string ()>::operator()() const + 0x3d (0x7faf2cbfc5fb in /torch/src/pytorch/build/lib/libtorch_cpu.so)
frame #2: c10::Error::Error(c10::SourceLocation, std::string) + 0x28 (0x7faf2773d854 in /torch/src/pytorch/build/lib/libc10.so)
frame #3: torch::jit::Graph::setInsertPoint(torch::jit::Node*) + 0xbf (0x431113 in ./build/bin/magicmind/test_remove_unnecessary_casts)
frame #4: torch::jit::WithInsertPoint::~WithInsertPoint() + 0x30 (0x431226 in ./build/bin/magicmind/test_remove_unnecessary_casts)
frame #5: torch_mlu::jit::parser::core::lowering::passes::RemoveSingleUse0DTensors(std::shared_ptr<torch::jit::Graph>&) + 0x14f0 (0x7faf3794ba9e in /torch/src/catch/build/jit/libjit_engine_adaptor.so)
frame #6: LoweringPasses_RemoveSingleUse0DTensorsIntCorrectly_Test::TestBody() + 0x314 (0x42ca96 in ./build/bin/magicmind/test_remove_unnecessary_casts)
frame #7: void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x65 (0x7faf35b2fcc7 in /torch/src/catch/build/test/libgtest_shared.so)
frame #8: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x4b (0x7faf35b2a2b6 in /torch/src/catch/build/test/libgtest_shared.so)
frame #9: testing::Test::Run() + 0xd2 (0x7faf35b0b520 in /torch/src/catch/build/test/libgtest_shared.so)
frame #10: testing::TestInfo::Run() + 0xed (0x7faf35b0bd73 in /torch/src/catch/build/test/libgtest_shared.so)
frame #11: testing::TestCase::Run() + 0x107 (0x7faf35b0c3c7 in /torch/src/catch/build/test/libgtest_shared.so)
frame #12: testing::internal::UnitTestImpl::RunAllTests() + 0x2a9 (0x7faf35b1674d in /torch/src/catch/build/test/libgtest_shared.so)
frame #13: bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x65 (0x7faf35b30cb3 in /torch/src/catch/build/test/libgtest_shared.so)
frame #14: bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x4b (0x7faf35b2b016 in /torch/src/catch/build/test/libgtest_shared.so)
frame #15: testing::UnitTest::Run() + 0xba (0x7faf35b15330 in /torch/src/catch/build/test/libgtest_shared.so)
frame #16: RUN_ALL_TESTS() + 0x11 (0x4268fa in ./build/bin/magicmind/test_remove_unnecessary_casts)
frame #17: main + 0x31 (0x4262be in ./build/bin/magicmind/test_remove_unnecessary_casts)
frame #18: __libc_start_main + 0xf5 (0x7faef5fca555 in /lib64/libc.so.6)
frame #19: ./build/bin/magicmind/test_remove_unnecessary_casts() [0x425dd9]
```
**The reason for the error is:**
When guard2 is destructed, it tries to set insert_before_ of the graph back to the prev_ that has already been destroyed, so the assert here: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.h#L1396 cannot be satisfied.
**Should the destructor of WithInsertPoint check whether prev_ is still valid?**
### Versions
master
| 0 |
4,872 | 84,181 |
Session of Google Colab crashes when `torch.utils::SummaryWriter` is called after importing `torchaudio`
|
high priority, module: crash, triaged, module: tensorboard
|
### ๐ Describe the bug
I faced a session crash by running the following script on Google Colaboratory:
```python
import torchaudio
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter() # session crashes
```
I'm not sure this bug is due to `pytorch`, `torchaudio`, or Google Colab.
You can reproduce the crash by running https://gist.github.com/tky823/838b9bf3722c686213853e31aa3f1c1e
### Notes
I didn't face this bug when I changed the order of imports, i.e.
```python
from torch.utils.tensorboard import SummaryWriter
import torchaudio
writer = SummaryWriter() # session works!
```
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519
| 4 |
4,873 | 84,178 |
Support setting strides on quantized weights of Embedding
|
triage review, oncall: quantization, triaged
|
### ๐ The feature, motivation and pitch
For quantized Embedding, [deberta in transformers](https://github.com/huggingface/transformers/blob/21f6f58721dd9154357576be6de54eefef1f1818/src/transformers/models/deberta/modeling_deberta.py#L694) selects weights from the original weight tensor, but I get:
`*** RuntimeError: Setting strides is possible only on uniformly or per channel quantized tensors`.
Below are the details of the quantized Embedding weights. torch.per_channel_affine_float_qparams should be a per-channel quantization scheme, right?
---------
tensor([[ 0.0064, -0.0069, -0.0082, ..., -0.0215, -0.0056, -0.0003],
[-0.0069, 0.0087, -0.0225, ..., -0.0136, -0.0069, 0.0065],
[-0.0071, 0.0225, -0.0071, ..., -0.0182, -0.0219, 0.0151],
...,
[-0.0037, 0.0060, -0.0203, ..., -0.0176, 0.0088, 0.0102],
[-0.0126, -0.0350, -0.0074, ..., -0.0284, 0.0032, -0.0218],
[ 0.0065, -0.0123, 0.0018, ..., -0.0170, 0.0065, 0.0018]],
size=(50265, 768), dtype=torch.quint8,
quantization_scheme=torch.per_channel_affine_float_qparams,
scale=tensor([0.0013, 0.0022, 0.0037, ..., 0.0014, 0.0013, 0.0047]),
zero_point=tensor([ 87.2115, 50.1056, 93.9225, ..., 77.6577, 81.5797, 115.6229]),
axis=0)
### Alternatives
No
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
4,874 | 84,175 |
FSDP Forward order differs from that of first run
|
oncall: distributed, module: fsdp
|
### ๐ Describe the bug
Hi,
I am using fsdp(integrated with hf accelerate) to extend support for the [transformer reinforcement learning library](https://github.com/lvwerra/trl) to multi-gpu. This requires me to run multiple .generate calls, forward passes, and then a backwards pass.
I am getting a warning: `UserWarning: Forward order differs from that of the first iteration on rank 0 -- collectives are unchecked and may give incorrect results or hang`. Some insight would be appreciated.
Minimum code to reproduce:
```
import torch
import transformers
from transformers import (
AutoModelForCausalLM,
AutoTokenizer
)
from accelerate import Accelerator
from accelerate.logging import get_logger
from trl.gpt2 import GPT2HeadWithValueModel
from torch.optim import Adam
from transformers import DataCollatorForLanguageModeling
import torch.nn.functional as F
def main():
accelerator = Accelerator()
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2HeadWithValueModel.from_pretrained('gpt2')
ref_model = GPT2HeadWithValueModel.from_pretrained('gpt2')
optimizer = Adam(model.parameters(), lr=1.41e-5)
model.config.pad_token_id = model.config.eos_token_id
model = accelerator.prepare(model)
model.to(accelerator.device)
print(accelerator.state)
optimizer = accelerator.prepare(optimizer)
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "The purpose of life is "
elif rank == 1:
text_in = "Are you human? "
query_tensors = tokenizer(text_in, return_tensors="pt").to(accelerator.device)["input_ids"]
# had to run this 1 time at the start else was giving device mismatch error.
# So, before directly using `model.generate` pass a batch with dummy data through the model
outputs = model(query_tensors)
print(query_tensors)
gen_kwargs = {
"max_length": 64,
"min_length": 20,
}
with torch.no_grad():
unwrapped_model = accelerator.unwrap_model(model)
# synced_gpus was necessary else resulted into indefinite hang
response_tensors = unwrapped_model.generate(query_tensors, synced_gpus=True, **gen_kwargs)
text_out = tokenizer.decode(response_tensors[0], skip_special_tokens=True)
# Arbitrarily score generation
score = torch.tensor([1.0]).to(accelerator.device)
print(f"\nrank{rank}:\n in={text_in}\n out={text_out}")
# Now compute ppo loss
## First compute logprobs and ref_logprobs
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
for i in range(10):
input_ids = collator([torch.cat([q, r]) for q, r in zip(query_tensors, response_tensors)])["input_ids"]
with torch.no_grad():
logits, _, v = model(input_ids)
#print('values', v)
ref_logits, _, _ = ref_model(input_ids.cpu())
ref_logits = ref_logits.to(accelerator.device)
logprobs = logprobs_from_logits(logits[:,:-1,:], input_ids[:,1:])
ref_logprobs = logprobs_from_logits(ref_logits[:,:-1,:], input_ids[:,1:])
# Only care about logprobs for generated text
start = query_tensors.size()[-1] - 1
end = query_tensors.size()[-1] + response_tensors.size()[-1] - 1
logprobs = logprobs[:, start:end]
ref_logprobs = ref_logprobs[:, start:end]
v = v[:, start-1: end-1]
print('logprob sizes', logprobs.size(), ref_logprobs.size(), v.size())
## Compute rewards
kl = logprobs - ref_logprobs
non_score_reward = .2 * kl
reward = non_score_reward.clone()
reward[-1] += score
## Compute losses
lastgaelam = 0
advantages_reversed = []
gen_len = response_tensors.shape[1]
for t in reversed(range(gen_len)):
nextvalues = v[:, t+1] if t < gen_len - 1 else 0.0
delta = reward[:, t] + 1.00 * nextvalues - v[:, t]
lastgaelam = delta + 1.00 * .99 * lastgaelam
advantages_reversed.append(lastgaelam)
advantages = torch.stack(advantages_reversed[::-1]).transpose(0, 1)
returns = advantages + v
advantages = advantages.detach()
### With grad this time
logits, _, vpred = model(input_ids)
logprob = logprobs_from_logits(logits[:, :-1, :], input_ids[:, 1:])
logprob, vpred = logprob[:, -gen_len:], vpred[:, -gen_len-1:-1]
vf_loss = torch.mean((vpred - returns)**2)
# Backpropagate
optimizer.zero_grad()
accelerator.backward(vf_loss)
optimizer.step()
def logprobs_from_logits(logits, labels):
"""
See: https://github.com/pytorch/pytorch/issues/563#issuecomment-330103591
"""
logp = F.log_softmax(logits, dim=2)
logpy = torch.gather(logp, 2, labels.unsqueeze(2)).squeeze(-1)
return logpy
if __name__ == "__main__":
main()
```
A similar issue has already been opened on accelerate's repo with some discussion [here](https://github.com/huggingface/accelerate/issues/570)
### Versions
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Amazon Linux release 2 (Karoo) (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.26
Python version: 3.8.5 (default, Feb 18 2021, 01:24:20) [GCC 7.3.1 20180712 (Red Hat 7.3.1-12)] (64-bit runtime)
Python platform: Linux-5.10.126-117.518.amzn2.x86_64-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.129.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 7 |
4,875 | 84,167 |
`linux-bionic-cuda10.2-py3.9-gcc7` multigpu test are broken
|
oncall: distributed, module: ci, module: regression
|
### ๐ Describe the bug
With the following signature:
```
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_gloo.py", line 2357, in test_work_wait_gpu
self._test_work_wait(torch.ones(2, 2, device=self.rank) * self.rank)
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_common.py", line 1565, in _test_work_wait
traced_fn.graph.print_tabular()
File "/opt/conda/lib/python3.9/site-packages/torch/fx/graph.py", line 1237, in print_tabular
print(tabulate(node_specs,
UnboundLocalError: local variable 'tabulate' referenced before assignment
exiting process 0 with exit code: 10
ERROR:torch.testing._internal.common_distributed:Caught exception:
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_distributed.py", line 622, in run_test
getattr(self, test_name)()
File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_distributed.py", line 503, in wrapper
fn()
File "/opt/conda/lib/python3.9/site-packages/torch/testing/_internal/common_distributed.py", line 145, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_gloo.py", line 2357, in test_work_wait_gpu
self._test_work_wait(torch.ones(2, 2, device=self.rank) * self.rank)
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_common.py", line 1595, in _test_work_wait
y = fn(x)
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_common.py", line 1554, in fn
work.wait()
RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:5258
exiting process 1 with exit code: 10
Process 0 terminated with exit code 10, terminating remaining processes.
ERROR (5.020s)
test_work_wait_gpu errored - num_retries_left: 3
```
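The `UnboundLocalError` points at the `tabulate` import guard in `Graph.print_tabular`: the `except ImportError` branch warns but does not return, so the later call references an unbound name whenever `tabulate` is not installed on the CI image. A paraphrased sketch of that pattern (not the verbatim PyTorch source):
```python
def print_tabular(self):
    try:
        from tabulate import tabulate
    except ImportError:
        print("`print_tabular` relies on the library `tabulate`; "
              "run `pip install tabulate` to install it.")
        # no `return`/`raise` here, so execution falls through
    node_specs = [[n.op, n.name, n.target, n.args, n.kwargs] for n in self.nodes]
    # UnboundLocalError: local variable 'tabulate' referenced before assignment
    print(tabulate(node_specs, headers=["opcode", "name", "target", "args", "kwargs"]))
```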
### Versions
CI
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @seemethere @pytorch/pytorch-dev-infra
| 2 |
4,876 | 93,648 |
[nvfuser] view size is not compatible with input tensor's size and stride
|
triaged, oncall: pt2
|
Instructions
* Uncompress Repro file - [repro.tar.gz](https://github.com/pytorch/torchdynamo/files/9436625/repro.tar.gz)
* Run `python repro/repro.py`
The repro is minified using https://github.com/pytorch/torchdynamo/pull/1056
This only fails with the `aot_nvfuser` backend.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 7 |
4,877 | 84,148 |
[Quant] Reference get_default_qconfig_mapping in docs/tutorials
|
oncall: quantization, triaged
|
Right now the recommended flow for FX graph mode quantization is:
```
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
...
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
model = prepare_fx(model, qconfig_mapping, example_inputs)
model(...) # calibrate
model = convert_fx(model)
```
However, `get_default_qconfig_mapping` is not documented or referenced (same for `get_default_qat_qconfig_mapping`). We should make sure this is easily accessible in the following places:
- https://pytorch.org/docs/master/quantization-support.html
- https://pytorch.org/docs/master/generated/torch.quantization.quantize_fx.prepare_fx.html#torch.quantization.quantize_fx.prepare_fx
- https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_static.html
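For completeness, a hedged sketch of the QAT analogue of the flow shown above (assuming the usual `prepare_qat_fx` entry point; not copied from any existing tutorial):
```python
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx, convert_fx

qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm")
model.train()
model = prepare_qat_fx(model, qconfig_mapping, example_inputs)
# ... fine-tune with fake-quant inserted ...
model = convert_fx(model)
```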
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 0 |
4,878 | 84,140 |
Please include virtual/physical batch sizes in the tutorials
|
module: docs, triaged
|
### ๐ The doc issue
The tutorials on the website, e.g. the [CIFAR10 tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html), only refer to the batch size when initializing the dataloader.
```
batch_size = 4
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
```
However, when using a single GPU, people may sometimes want to simulate rather large virtual batch sizes, i.e., compute the gradients of many small "physical" batches, which are feasible to compute on their GPU, but only take the optimization step when reaching the desired number of samples in the "virtual" batch. It is not trivial, since it turns out that the [gradients of the small batches are summed](https://discuss.pytorch.org/t/how-are-losses-aggregated-over-multiple-computed-batches/160041). This means that the optimization step size taken using the final gradient will be K times larger than the user-specified learning rate, which may be undesirable in most cases. The losses of the small batches should therefore be multiplied by their relative size, so the final gradient exactly simulates a large batch.
### Suggest a potential alternative/fix
### I'd change the code as follows:
## Dataloader initialization
```
optimization_batch_size = 16
physical_batch_size = 4
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=optimization_batch_size,
shuffle=True, num_workers=2)
```
## Training
```
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# run batch
batch_loss = 0.0
# compute all physical batches
for j in range(0, len(inputs), physical_batch_size):
end_index = min(j + physical_batch_size, len(inputs))
inputs_j = inputs[j:end_index]
labels_j = labels[j:end_index]
# forward + backward
outputs_j = net(inputs_j)
loss = criterion(outputs_j, labels_j) * len(inputs_j) / float(len(inputs))
batch_loss += loss.item()
loss.backward()
# optimize whole batch
optimizer.step()
# print statistics
running_loss += batch_loss
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
running_loss = 0.0
```
cc @svekars @holly1238
| 0 |
4,879 | 84,138 |
MPS convolution is sometimes returning NaNs for valid inputs.
|
triaged, module: mps
|
Splitting https://github.com/pytorch/pytorch/issues/81185 into two. This one focusses on the MPS side of the problem with the following repro:
```python
# bug_demo.py
import torch
n_trials = 100
for ii in range(n_trials):
a = torch.randn(1024, device='mps')
b = torch.randn(499, device='mps')
c = torch.nn.functional.conv1d(a.view(1, 1, -1), b.view(1, 1, -1))
if torch.isnan(torch.sum(c)):
print(f'mps: trial {ii}, nan elements {torch.isnan(c.squeeze()).nonzero().view(-1).cpu().numpy()}')
```
cc @kulinseth
| 9 |
4,880 | 84,135 |
[jit] ignored method calling static method results in an error
|
oncall: jit
|
### ๐ Describe the bug
```py
In [7]: class M(torch.nn.Module):
...: @staticmethod
...: def static(self):
...: return 1
...:
...: @torch.jit.ignore
...: def ignored(self):
...: return self.static()
...: def forward(self):
...: return self.ignored()
...:
In [8]: m = torch.jit.script(M())
In [9]: m
Out[9]: RecursiveScriptModule(original_name=M)
In [10]: m()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [10], in <cell line: 1>()
----> 1 m()
File ~/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1186, in Module._call_impl(self, *input, **kwargs)
1182 # If we don't have any hooks, we want to skip the rest of the logic in
1183 # this function, and just call forward.
1184 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1185 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1186 return forward_call(*input, **kwargs)
1187 # Do not call functions when jit is used
1188 full_backward_hooks, non_full_backward_hooks = [], []
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<ipython-input-7-176d8289088f>", line 10, in forward
def forward(self):
return self.ignored()
~~~~~~~~~~~~ <--- HERE
RuntimeError: AttributeError: 'RecursiveScriptModule' object has no attribute 'static'
At:
/Users/S_sn/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py(1261): __getattr__
/Users/S_sn/miniconda3/lib/python3.8/site-packages/torch/jit/_script.py(502): __getattr__
/Users/S_sn/miniconda3/lib/python3.8/site-packages/torch/jit/_script.py(785): __getattr__
<ipython-input-7-176d8289088f>(8): ignored
/Users/S_sn/miniconda3/lib/python3.8/site-packages/torch/jit/_recursive.py(921): lazy_binding_method
/Users/S_sn/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py(1186): _call_impl
<ipython-input-10-ec2213f3535c>(1): <cell line: 1>
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py(3361): run_code
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py(3301): run_ast_nodes
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py(3098): run_cell_async
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/core/async_helpers.py(129): _pseudo_sync_runner
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py(2900): _run_cell
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py(2854): run_cell
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py(646): interact
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py(653): mainloop
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/terminal/ipapp.py(318): start
/Users/S_sn/miniconda3/lib/python3.8/site-packages/traitlets/config/application.py(846): launch_instance
/Users/S_sn/miniconda3/lib/python3.8/site-packages/IPython/__init__.py(123): start_ipython
/Users/S_sn/miniconda3/bin/ipython(11): <module>
```
### Versions
```
PyTorch version: 1.13.0.dev20220805
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.20.1
Libc version: N/A
Python version: 3.8.11 (default, Jul 29 2021, 14:57:32) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.3.0a0+012d5a2
[pip3] mypy==0.950
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.13.0.dev20220805
[pip3] torchaudio==0.11.0
[pip3] torchdynamo==1.13.0.dev0
[pip3] torchfile==0.1.0
[pip3] torchvision==0.13.0
[conda] numpy 1.21.2 py38hb38b75b_0
[conda] numpy-base 1.21.2 py38h6269429_0
[conda] torch 1.13.0.dev20220805 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchdynamo 1.13.0.dev0 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.13.0 pypi_0 pypi
```
| 0 |
4,881 | 84,071 |
Move self.subtest calls in FSDP test suite to run_subtests utility
|
oncall: distributed, triaged, better-engineering, module: fsdp
|
### ๐ The feature, motivation and pitch
As per the discussion in https://github.com/pytorch/pytorch/pull/83195#discussion_r952931023, we can use the `self.run_subtests` functionality, which will require some refactoring of the test code.
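For reference, an illustrative sketch of how `self.run_subtests` is typically invoked (the config keys and test names below are hypothetical, not taken from the existing suite):
```python
def test_something(self):
    self.run_subtests(
        # every combination of these values runs as a separate subtest,
        # passed to the inner test function as keyword arguments
        {"use_orig_params": [False, True], "cpu_offload": [False, True]},  # hypothetical keys
        self._test_something_impl,
    )

def _test_something_impl(self, use_orig_params: bool, cpu_offload: bool):
    ...  # the actual assertions, parameterized by the subtest config
```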
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @ezyang
| 3 |
4,882 | 84,064 |
Better error message for qlinear_prepack
|
oncall: quantization, triaged
|
### ๐ Describe the bug
See https://discuss.pytorch.org/t/index-out-of-bounds-error-with-perchannel-quantization/159937
linear_prepack only works for per-channel quantized weights with ch_axis=0; we should have a better error message.
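A minimal sketch of the failure mode (a weight quantized per channel along axis 1, mirroring the linked discussion rather than quoting it):
```python
import torch

w = torch.randn(110, 100)
scales = torch.ones(100)
zero_points = torch.zeros(100, dtype=torch.int64)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=1, dtype=torch.qint8)
# Raises an opaque IndexError instead of a message saying ch_axis must be 0:
torch.ops.quantized.linear_prepack(qw, None)
```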
### Versions
master
cc @jianyuh @raghuramank100 @jamesr66a @vkuzo
| 0 |
4,883 | 84,057 |
Expose API for Registering Post-Gradient-Computation Hook
|
module: autograd, triaged, module: fsdp
|
### ๐ The feature, motivation and pitch
To preface, this is not urgent or blocking.
Context: For FSDP to reduce-scatter gradients, it wants to register a hook that fires _after_ the gradient is computed. Currently, this is achieved by registering a hook on the `AccumulateGrad` object. This issue relates to how to access that `AccumulateGrad` object.
FSDP currently uses an `expand_as()` trick to bypass the fact that the `FlatParameter` (a leaf tensor) does not have a `grad_fn`. In the following code, `p_tmp` will have a `grad_fn` of `ExpandBackward0`, which then gives access to the `AccumulateGrad` object via `next_functions[0][0]`.
https://github.com/pytorch/pytorch/blob/00cb184512f3a636d87793f46d3f9c7fea406b25/torch/distributed/fsdp/fully_sharded_data_parallel.py#L2825-L2835
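For illustration, a minimal standalone sketch of this trick (not the FSDP code itself; the tensor and hook below are made up for the example):
```python
import torch

flat_param = torch.zeros(8, requires_grad=True)  # leaf tensor, so it has no grad_fn
p_tmp = flat_param.expand_as(flat_param)         # non-leaf view with grad_fn=ExpandBackward0
grad_acc = p_tmp.grad_fn.next_functions[0][0]    # the AccumulateGrad node for flat_param

def post_backward_hook(*unused):
    # fires after the gradient has been accumulated into flat_param.grad
    print("gradient ready:", flat_param.grad is not None)

grad_acc.register_hook(post_backward_hook)
(flat_param * 2).sum().backward()  # prints "gradient ready: True"
```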
In my `IndirectParameter` prototype for FSDP, I changed `FlatParameter: nn.Parameter` to `FlatTensor: Tensor` since we no longer want to expose the `FlatParameter` to `nn.Module` methods like `named_parameters()`. In doing this, I still need to use the `expand_as()` trick since the `FlatTensor` is a leaf tensor. However, I find that the resulting tensor's `grad_fn` chain is now `AliasBackward0` -> `ExpandBackward0` -> `AccumulateGrad`; i.e. there is an additional `AliasBackward0`. I am not sure if this is expected. Either way, this provides some motivation for an API to register a post-gradient-computation hook (i.e. a hook that runs _after_ the gradient is computed) since this `expand_as()` trick is arguably hacky.
### Alternatives
The alternative is to continue using the `expand_as()` trick. When registering the post-backward hook for FSDP, we will need some logic to handle both the `FlatParameter` and `FlatTensor` case since we will need to preserve the existing code path in the near term. This is not a real problem.
### Additional context
In case this is relevant to the appearance of the `AliasBackward0`, the `FlatTensor`, which is just a `torch.Tensor`, is initialized like:
```
class FlatTensor(torch.Tensor):
...
@staticmethod
def flatten_params(
params: List[torch.Tensor],
requires_grad: bool,
) -> FlatTensor:
with torch.no_grad():
flat_tensors = [
p.detach().reshape(-1) if isinstance(p, nn.Parameter) else p.reshape(-1)
for p in params
]
flat_tensor_data = torch.cat(flat_tensors, dim=0)
flat_tensor = FlatTensor(flat_tensor_data)
flat_tensor.requires_grad_(requires_grad)
return flat_tensor
```
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @zhaojuanmao @mrshenli @rohan-varma
| 3 |
4,884 | 84,055 |
scripted fasterRCNN model cannot be loaded with libtorch c++ API
|
oncall: jit, module: cpp
|
### ๐ Describe the bug
I have a trained FasterRCNN model that I want to load via the libtorch C++ API for deployment. The model can be scripted successfully using the following Python code snippet.
```python
# Sample code to reproduce the bug
import torch
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, 3)
# model.load_state_dict(torch.load('weights/tuned_weights.pth'))
model.eval()
script_module = torch.jit.script(model)
script_module.save("frcnn_torchscript_n3.pt")
```
Then I use the following C++ code to load it
```Cpp
#include <iostream>
#include <memory>
#include <torch/script.h>
int main(int argc, const char* argv[])
{
torch::jit::script::Module module;
try {
module = torch::jit::load(argv[1]);
std::cout<<"SUCCESS"<<std::endl;
}
catch (const c10::Error& e) {
std::cerr << "Error loading the model!\n";
std::exit(EXIT_FAILURE);
}
}
```
I get the following error
```bash
terminate called after throwing an instance of 'c10::Error'
what(): isTuple()INTERNAL ASSERT FAILED at "../aten/src/ATen/core/ivalue_inl.h":1916, please report a bug to PyTorch. Expected Tuple but got String
Exception raised from toTupleRef at ../aten/src/ATen/core/ivalue_inl.h:1916 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f30a36070eb in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f30a3602abe in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libc10.so)
frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x4e (0x7f30a360485e in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libc10.so)
frame #3: <unknown function> + 0x360de17 (0x7f3025757e17 in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x360e075 (0x7f3025758075 in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libtorch_cpu.so)
frame #5: torch::jit::SourceRange::highlight(std::ostream&) const + 0x3d (0x7f3023399b6d in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libtorch_cpu.so)
frame #6: torch::jit::ErrorReport::what() const + 0x351 (0x7f302337cd51 in /home/ash/Ash/libtorch-1.11.0+cu113/libtorch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x41f58 (0x55a325061f58 in /home/ash/Ash/cmake_build/load_torch_model)
frame #8: <unknown function> + 0x1c222 (0x55a32503c222 in /home/ash/Ash/cmake_build/load_torch_model)
frame #9: <unknown function> + 0x1cbd3 (0x55a32503cbd3 in /home/ash/Ash/cmake_build/load_torch_model)
frame #10: <unknown function> + 0x1d46d (0x55a32503d46d in /home/ash/Ash/cmake_build/load_torch_model)
frame #11: <unknown function> + 0x29ac5 (0x55a325049ac5 in /home/ash/Ash/cmake_build/load_torch_model)
frame #12: <unknown function> + 0x4a279 (0x55a32506a279 in /home/ash/Ash/cmake_build/load_torch_model)
frame #13: <unknown function> + 0x42e63 (0x55a325062e63 in /home/ash/Ash/cmake_build/load_torch_model)
frame #14: <unknown function> + 0x282e4 (0x55a3250482e4 in /home/ash/Ash/cmake_build/load_torch_model)
frame #15: <unknown function> + 0xc3b6 (0x55a32502c3b6 in /home/ash/Ash/cmake_build/load_torch_model)
frame #16: <unknown function> + 0xba6f (0x55a32502ba6f in /home/ash/Ash/cmake_build/load_torch_model)
frame #17: __libc_start_main + 0xe7 (0x7f2fe8325c87 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: <unknown function> + 0xb4da (0x55a32502b4da in /home/ash/Ash/cmake_build/load_torch_model)
```
Then I tried loading my model with the latest [libtorch](https://pytorch.org/) (1.12.1, CUDA 11.3) and I get the following error:
```
C++ exception with description "
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "code/__torch__/torchvision/ops/boxes.py", line 138
_59 = __torch__.torchvision.extension._assert_has_ops
_60 = _59()
_61 = ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~ <--- HERE
return _61
'nms' is being compiled since it was called from '_batched_nms_vanilla'
File "/home/ash/anaconda3/envs/torch_latest/lib/python3.10/site-packages/torchvision/ops/boxes.py", line 109
for class_id in torch.unique(idxs):
curr_indices = torch.where(idxs == class_id)[0]
curr_keep_indices = nms(boxes[curr_indices], scores[curr_indices], iou_threshold)
~~~ <--- HERE
keep_mask[curr_indices[curr_keep_indices]] = True
keep_indices = torch.where(keep_mask)[0]
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 83
_31 = torch.index(boxes, _30)
_32 = annotate(List[Optional[Tensor]], [curr_indices])
curr_keep_indices = __torch__.torchvision.ops.boxes.nms(_31, torch.index(scores, _32), iou_threshold, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_33 = annotate(List[Optional[Tensor]], [curr_keep_indices])
_34 = torch.index(curr_indices, _33)
'_batched_nms_vanilla' is being compiled since it was called from 'batched_nms'
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 35
idxs: Tensor,
iou_threshold: float) -> Tensor:
_9 = __torch__.torchvision.ops.boxes._batched_nms_vanilla
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_10 = __torch__.torchvision.ops.boxes._batched_nms_coordinate_trick
_11 = torch.numel(boxes)
'batched_nms' is being compiled since it was called from 'RegionProposalNetwork.filter_proposals'
Serialized File "code/__torch__/torchvision/models/detection/rpn.py", line 72
_11 = __torch__.torchvision.ops.boxes.clip_boxes_to_image
_12 = __torch__.torchvision.ops.boxes.remove_small_boxes
_13 = __torch__.torchvision.ops.boxes.batched_nms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
num_images = (torch.size(proposals))[0]
device = ops.prim.device(proposals)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
File "/home/ash/anaconda3/envs/torch_latest/lib/python3.10/site-packages/torchvision/models/detection/rpn.py", line 376
proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
proposals = proposals.view(num_images, -1, 4)
boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
losses = {}
Serialized File "code/__torch__/torchvision/models/detection/rpn.py", line 43
proposals0 = torch.view(proposals, [num_images, -1, 4])
image_sizes = images.image_sizes
_8 = (self).filter_proposals(proposals0, objectness0, image_sizes, num_anchors_per_level, )
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
boxes, scores, = _8
losses = annotate(Dict[str, Tensor], {})
" thrown in the test body.
Process finished with exit code 1
```
### Versions
Downloaded libtorch-1.11.0+cu113 and also tried 1.12.1 + CUDA 11.3.
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.27
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 9.1.85
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 with Max-Q Design
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] ros_numpy==0.0.3
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.1 py310h1794996_0
[conda] numpy-base 1.23.1 py310hcba007f_0
[conda] pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.1 py310_cu113 pytorch
[conda] torchvision 0.13.1 py310_cu113 pytorch
cc @jbschlosser
| 3 |
4,885 | 84,052 |
Index out of bounds Error with PerChannel Quantization
|
oncall: quantization, triaged
|
### ๐ Index out of bounds Error with PerChannel Quantization
Hello,
I have encountered this problem while trying to perform per-channel quantization on weights with the ch_axis=1 quantization parameter. It causes the "index out of bounds" error when the dimension of axis 1 of the weight tensor is smaller than the dimension of axis 0 (in the following example, 100 is smaller than 110; note that 100 will be axis 1 of the weight matrix). If the axis 0 dimension is smaller (e.g. when changing 110 to 90), the error does not occur. It is not reproducible with the ch_axis=0 quantization parameter: with ch_axis=0 it does not matter whether one axis dimension is bigger or smaller than the other.
Here is a minimal example that fails:
```
import torch
import torch.nn as nn
from torch.quantization.observer import PerChannelMinMaxObserver, MinMaxObserver
class QTestNet1(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(100,110, bias=False)
def set_qconfig(self):
self.linear.qconfig = torch.quantization.QConfig(
weight=PerChannelMinMaxObserver.with_args(dtype=torch.qint8, ch_axis=1),
activation=MinMaxObserver.with_args(qscheme=torch.per_tensor_affine)
)
def forward(self, x):
x = self.linear(x)
return x
model = QTestNet1()
model.set_qconfig()
model_prepared = torch.quantization.prepare_qat(model, inplace=False)
input_x = torch.randn(1,100)
model_prepared(input_x).shape # just checking that forward doesn't fail
model_int8 = torch.quantization.convert(model_prepared.eval()) # error
```
This is a full error:
```
/usr/local/lib/python3.7/dist-packages/torch/ao/quantization/quantize.py in convert(module, mapping, inplace, remove_qconfig, is_reference, convert_custom_config_dict)
519 _convert(
520 module, mapping, inplace=True, is_reference=is_reference,
--> 521 convert_custom_config_dict=convert_custom_config_dict)
522 if remove_qconfig:
523 _remove_qconfig(module)
/usr/local/lib/python3.7/dist-packages/torch/ao/quantization/quantize.py in _convert(module, mapping, inplace, is_reference, convert_custom_config_dict)
557 _convert(mod, mapping, True, # inplace
558 is_reference, convert_custom_config_dict)
--> 559 reassign[name] = swap_module(mod, mapping, custom_module_class_mapping)
560
561 for key, value in reassign.items():
/usr/local/lib/python3.7/dist-packages/torch/ao/quantization/quantize.py in swap_module(mod, mapping, custom_module_class_mapping)
590 new_mod = qmod.from_float(mod, weight_qparams)
591 else:
--> 592 new_mod = qmod.from_float(mod)
593 swapped = True
594
/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/linear.py in from_float(cls, mod)
271 mod.out_features,
272 dtype=dtype)
--> 273 qlinear.set_weight_bias(qweight, mod.bias)
274 qlinear.scale = float(act_scale)
275 qlinear.zero_point = int(act_zp)
/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/linear.py in set_weight_bias(self, w, b)
232
233 def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
--> 234 self._packed_params.set_weight_bias(w, b)
235
236 @classmethod
/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/linear.py in set_weight_bias(self, weight, bias)
25 def set_weight_bias(self, weight: torch.Tensor, bias: Optional[torch.Tensor]) -> None:
26 if self.dtype == torch.qint8:
---> 27 self._packed_params = torch.ops.quantized.linear_prepack(weight, bias)
28 elif self.dtype == torch.float16:
29 self._packed_params = torch.ops.quantized.linear_prepack_fp16(weight, bias)
/usr/local/lib/python3.7/dist-packages/torch/_ops.py in __call__(self, *args, **kwargs)
141 # We save the function ptr as the `op` attribute on
142 # OpOverloadPacket to access it here.
--> 143 return self._op(*args, **kwargs or {})
144
145 # TODO: use this to make a __dir__
IndexError: select(): index 100 out of range for tensor of size [100] at dimension 0
```
I am using torch==1.12.1+cu113 from google colab.
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 3 |
4,886 | 84,051 |
model.load_state_dict won't work in Child process if a sufficiently large tensor was padded in the Parent (even if empty padded)
|
high priority, module: multiprocessing, module: memory usage, triaged
|
### ๐ Describe the bug
A `model.load_state_dict` on CPU fails when executed in a child process (with `multiprocessing.Process`) *if* you padded a sufficiently large tensor in the parent process (even with an empty pad).
The child process blocks indefinitely on `model.load_state_dict` and ends up as a ghost process if you KeyboardInterrupt the parent.
Fun facts:
- the issue occurs when the tensor defined in the parent is sufficiently large, and only if it was padded
- the tensor is not even passed to the child process
- the critical tensor size seems to depend on the model size: if the model size decreases, the problem goes away
Reproducible code:
```python
import multiprocessing
import torch
import torch.nn as nn
from torch.nn import functional as F
class TheModelClass(nn.Module):
def __init__(self):
super(TheModelClass, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# no need for all this to have the error
# x = self.pool(F.relu(self.conv1(x)))
# x = self.pool(F.relu(self.conv2(x)))
# x = x.view(-1, 16 * 5 * 5)
# x = F.relu(self.fc1(x))
# x = F.relu(self.fc2(x))
# x = self.fc3(x)
# return x
pass
def init_model_and_load():
model = TheModelClass()
print("before load")
model.load_state_dict(torch.load("./model.torch", map_location=torch.device('cpu')))
print("after load") # never reaches it
return
if __name__ == "__main__":
# First, let's save a random init model
model = TheModelClass()
torch.save(model.state_dict(), "./model.torch")
print("nb params", sum(p.numel() for p in model.parameters()))
# it's probably related to the number of params : if we remove model.fc1, it works
size = [33**3] # 32**3 works, but not 33**3 ... :scream:
tensor = torch.rand(size)
pad = (0, 0)
# whether you want the error or not...
I_WANT_THIS_NOT_TO_WORK = True
if I_WANT_THIS_NOT_TO_WORK:
tensor = F.pad( # everything about this bug comes from padding, even with 0
tensor,
pad,
mode="constant",
value=0,
)
# and it doesn't work, even if you reinitialize the tensor!
tensor = torch.rand([*size])
p = multiprocessing.Process( # no need to pass tensor as argument, it still fails
target=init_model_and_load,
)
p.start()
p.join()
```
### Versions
```
PyTorch version: 1.12.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.2 (default, May 8 2022, 20:05:14) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.0
[pip3] torch==1.12.1+cpu
[pip3] torchaudio==0.12.1+cpu
[pip3] torchvision==0.13.1+cpu
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @VitalyFedyunin
| 7 |
4,887 | 84,050 |
I cannot install PyTorch: Bad CRC-32 for file 'torch/lib/libtorch_cpu.so'
|
module: build, triaged
|
### ๐ Describe the bug
I cannot install PyTorch.
Steps to reproduce the error:
1. pip3 cache purge
2. pip3 install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 --no-cache-dir
Output
```
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113
Collecting torch==1.11.0+cu113
Downloading https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp310-cp310-linux_x86_64.whl (1637.0 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.6/1.6 GB 75.7 MB/s eta 0:00:00
Collecting torchvision==0.12.0+cu113
Downloading https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl (22.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 22.3/22.3 MB 85.2 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in ./env/lib/python3.10/site-packages (from torch==1.11.0+cu113) (4.3.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in ./env/lib/python3.10/site-packages (from torchvision==0.12.0+cu113) (9.2.0)
Requirement already satisfied: numpy in ./env/lib/python3.10/site-packages (from torchvision==0.12.0+cu113) (1.23.2)
Collecting requests
Downloading requests-2.28.1-py3-none-any.whl (62 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 62.8/62.8 kB 4.7 MB/s eta 0:00:00
Requirement already satisfied: urllib3<1.27,>=1.21.1 in ./env/lib/python3.10/site-packages (from requests->torchvision==0.12.0+cu113) (1.26.12)
Requirement already satisfied: certifi>=2017.4.17 in ./env/lib/python3.10/site-packages (from requests->torchvision==0.12.0+cu113) (2022.6.15)
Requirement already satisfied: idna<4,>=2.5 in ./env/lib/python3.10/site-packages (from requests->torchvision==0.12.0+cu113) (3.3)
Requirement already satisfied: charset-normalizer<3,>=2 in ./env/lib/python3.10/site-packages (from requests->torchvision==0.12.0+cu113) (2.1.1)
Installing collected packages: torch, requests, torchvision
ERROR: Exception:
Traceback (most recent call last):
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
return func(self, options, args)
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 461, in run
installed = install_given_reqs(
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/req/__init__.py", line 73, in install_given_reqs
requirement.install(
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/req/req_install.py", line 790, in install
install_wheel(
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/operations/install/wheel.py", line 727, in install_wheel
_install_wheel(
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/operations/install/wheel.py", line 587, in _install_wheel
file.save()
File "/home/jack3/TKHAD-main/env/lib/python3.10/site-packages/pip/_internal/operations/install/wheel.py", line 388, in save
shutil.copyfileobj(f, dest)
File "/usr/lib/python3.10/shutil.py", line 195, in copyfileobj
buf = fsrc_read(length)
File "/usr/lib/python3.10/zipfile.py", line 925, in read
data = self._read1(n)
File "/usr/lib/python3.10/zipfile.py", line 1015, in _read1
self._update_crc(data)
File "/usr/lib/python3.10/zipfile.py", line 943, in _update_crc
raise BadZipFile("Bad CRC-32 for file %r" % self.name)
zipfile.BadZipFile: Bad CRC-32 for file 'torch/lib/libtorch_cpu.so'
```
I also tried the following:
pip install torch
Output:
```
Using cached torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl (776.3 MB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torch from https://files.pythonhosted.org/packages/ca/74/7342c7f21449557a8263c925071a55081edd7e9b641404cfe31d6fb71d3b/torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl:
Expected sha256 9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286
Got aee43dd7ff45ca5a8b6d770e152396c09d4ed072fa49079a29f3f70cb21d8571
```
pip install torch --no-cache-dir
Output:
```
Downloading torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl (776.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 776.3/776.3 MB 75.6 MB/s eta 0:00:00
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torch from https://files.pythonhosted.org/packages/ca/74/7342c7f21449557a8263c925071a55081edd7e9b641404cfe31d6fb71d3b/torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl:
Expected sha256 9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286
Got 02b75d6a16e74c50313a6519acf2d495c4bde0df0f0ed9528b6a29f6dcd09646
```
I am getting the same results with the `--no-cache-dir` flag.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.4 (main, Jun 29 2022, 12:14:53) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.23.2
[conda] No relevant packages
cc @malfet @seemethere
| 0 |
4,888 | 84,049 |
TorchScript unsupport tuple unpacking as function inputs.
|
oncall: jit
|
### ๐ Describe the bug
```
from typing import Tuple
def ff(a:int, b:int, c:int, d:int=1):
print(a, b, c, d)
def f(x: Tuple[int]):
ff(*x)
f((1,2,3)) # 1,2,3,1
```
Pure Python works well with unpacking a tuple as function inputs, but it does not work with torch.jit.script.
```
@torch.jit.script
def _f(x: Tuple[int]):
ff(*x)
```
will raise a RuntimeError:
```
RuntimeError:
ff(int a, int b, int c, int d=1) -> (NoneType):
Argument b not provided.
:
File "/tmp/ipykernel_11140/631393143.py", line 3
@torch.jit.script
def _f(x: Tuple[int]):
ff(*x)
~~ <--- HERE
```
### Versions
Collecting environment information...
PyTorch version: 1.12.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.63.01
cuDNN version: Probably one of the following:
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8.4.1
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.4.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.7.0
[pip3] pytorch-wpe==0.0.1
[pip3] torch==1.12.0+cu113
[pip3] torch-complex==0.4.3
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.12.0+cu113
[pip3] torchmetrics==0.9.3
[conda] numpy 1.22.4 py39hc58783e_0 conda-forge
[conda] pytorch-lightning 1.7.0 pypi_0 pypi
[conda] pytorch-wpe 0.0.1 pypi_0 pypi
[conda] torch 1.12.0+cu113 pypi_0 pypi
[conda] torch-complex 0.4.3 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.12.0+cu113 pypi_0 pypi
[conda] torchmetrics 0.9.3 pypi_0 pypi
| 0 |
4,889 | 84,045 |
COREMLTOOLs/NNPACK Python Issue
|
triaged, module: nnpack
|
### 🐛 Describe the bug
Hi All,
I am trying to conduct the export.py operations via coreML and I'm sure that this is not the way the process should be initialised.
The export process should allow me to use these following on the MBP M1 Silicon MAX 32 cores by follow @glenn-jocher's instructions.
```
1. python3 export.py --weights yolov5s.pt --include coreml # CoreML export
2. python3 detect.py --weights yolov5s.mlmodel # CoreML inference (MacOS-only)
3. python3 val.py --weights yolov5s.mlmodel # CoreML validation (MacOS-only)
```
WHAT I DID:
Installed all of the dependencies as per @glenn-jocher's requirements.txt and ran pip3 install coremltools
```
# YOLOv5 requirements
# Usage: pip install -r requirements.txt
# Base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.1
Pillow>=7.1.2
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
torch>=1.7.0
torchvision>=0.8.1
tqdm>=4.64.0
protobuf<=3.20.1 # https://github.com/ultralytics/yolov5/issues/8012
# Logging -------------------------------------
tensorboard>=2.4.1
# wandb
# clearml
# Plotting ------------------------------------
pandas>=1.1.4
seaborn>=0.11.0
# Export --------------------------------------
# coremltools>=5.2 # CoreML export
# onnx>=1.9.0 # ONNX export
# onnx-simplifier>=0.4.1 # ONNX simplifier
# nvidia-pyindex # TensorRT export
# nvidia-tensorrt # TensorRT export
# scikit-learn==0.19.2 # CoreML quantization
# tensorflow>=2.4.1 # TFLite export (or tensorflow-cpu, tensorflow-aarch64)
# tensorflowjs>=3.9.0 # TF.js export
# openvino-dev # OpenVINO export
# Extras --------------------------------------
ipython # interactive notebook
psutil # system utilization
thop>=0.1.1 # FLOPs computation
# albumentations>=1.0.3
# pycocotools>=2.0 # COCO mAP
# roboflow
```
THEN:
Installed additional upgraded packages as prompted by the error notifications.
```
symbadian$ pip3 list
Package Version
---------------------------- --------------------
absl-py 1.2.0
anyio 3.6.1
appnope 0.1.3
arel 0.2.0
asttokens 2.0.8
astunparse 1.6.3
async-asgi-testclient 1.4.11
backcall 0.2.0
bidict 0.22.0
CacheControl 0.12.11
cachetools 5.2.0
cachy 0.3.0
certifi 2022.6.15
charset-normalizer 2.1.1
cleo 0.8.1
click 8.1.3
clikit 0.6.2
coremltools 6.0b2
crashtest 0.3.1
cycler 0.11.0
decorator 5.1.1
distlib 0.3.5
executing 0.9.1
fastapi 0.80.0
filelock 3.8.0
flatbuffers 1.12
fonttools 4.36.0
gast 0.4.0
google-auth 2.11.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.47.0
h11 0.13.0
h5py 3.7.0
html5lib 1.1
httptools 0.4.0
idna 3.3
imageio 2.21.1
imgaug 0.4.0
ipython 8.4.0
jedi 0.18.1
Jinja2 3.1.2
jsonpatch 1.32
jsonpointer 2.3
keras 2.9.0
Keras-Preprocessing 1.1.2
keyring 23.8.2
kiwisolver 1.4.4
libclang 14.0.6
lockfile 0.12.2
Markdown 3.4.1
MarkupSafe 2.1.1
matplotlib 3.5.3
matplotlib-inline 0.1.6
mpmath 1.2.1
msgpack 1.0.4
multidict 6.0.2
natsort 8.1.0
networkx 2.8.6
numpy 1.23.2
oauthlib 3.2.0
opencv-python 4.6.0.66
opt-einsum 3.3.0
packaging 20.9
pafy 0.5.5
pandas 1.3.5
parso 0.8.3
pastel 0.2.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.2.0
pip 22.2.2
pkginfo 1.8.3
platformdirs 2.5.2
poetry 1.1.15
poetry-core 1.0.8
prompt-toolkit 3.0.30
protobuf 3.20.1
psutil 5.9.1
PTable 0.9.2
ptyprocess 0.7.0
pure-eval 0.2.2
pyasn1 0.4.8
pyasn1-modules 0.2.8
pydantic 1.9.2
pydicom 2.3.0
Pygments 2.13.0
pylev 1.4.0
pynrrd 0.4.3
pyparsing 3.0.9
python-dateutil 2.8.2
python-dotenv 0.20.0
python-json-logger 2.0.4
python-magic 0.4.27
pytz 2022.2.1
PyWavelets 1.3.0
PyYAML 6.0
requests 2.28.1
requests-oauthlib 1.3.1
requests-toolbelt 0.9.1
rsa 4.9
scikit-image 0.19.3
scikit-video 1.1.11
scipy 1.9.0
seaborn 0.11.2
setuptools 63.2.0
Shapely 1.8.4
shellingham 1.5.0
SimpleITK 2.1.1.2
six 1.16.0
sniffio 1.2.0
stack-data 0.4.0
starlette 0.19.1
stringcase 1.2.0
supervisely 6.58.4
sympy 1.10.1
tensorboard 2.9.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.9.1
tensorflow-estimator 2.9.0
tensorflow-io-gcs-filesystem 0.26.0
termcolor 1.1.0
thop 0.1.1.post2207130030
tifffile 2022.8.12
tomlkit 0.11.4
torch 1.12.1
torchaudio 0.12.1
torchvision 0.13.1
tqdm 4.64.0
traitlets 5.3.0
trimesh 3.14.0
typing_extensions 4.3.0
urllib3 1.26.11
uvicorn 0.18.3
uvloop 0.16.0
varname 0.9.0
virtualenv 20.16.3
watchfiles 0.16.1
watchgod 0.6
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.3
Werkzeug 2.2.2
wheel 0.37.1
wrapt 1.14.1
youtube-dl 2021.12.17
```
LASTLY INITIALISE Command:
```
symbadian$ python3 export.py --weights yolov5s.pt --include coreml --imgs 640
```
I keep getting the error "python quit unexpectedly" followed by the output below
### Versions
symbadian$ python3 export.py --weights yolov5s.pt --include coreml --imgs 640
export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['coreml']
YOLOv5 🚀 v6.2-51-ge6f54c5 Python-3.10.6 torch-1.12.1 CPU
Fusing layers...
[W NNPACK.cpp:51] Could not initialize NNPACK! Reason: Unsupported hardware.
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)
Illegal instruction: 4
| 1 |
4,890 | 84,044 |
hipErrorNoBinaryForGpu, but reversed
|
module: rocm, triaged
|
### 🐛 Describe the bug
When running a self-Compiled PyTorch on an AMD 6800XT, I get this error:
>>> import torch
>>> print(torch.cuda.is_available())
:3:rocdevice.cpp :416 : 11110111952 us: 215481: [tid:0x7f945a871740] Initializing HSA stack.
:3:comgrctx.cpp :33 : 11110122738 us: 215481: [tid:0x7f945a871740] Loading COMGR library.
:3:rocdevice.cpp :205 : 11110122781 us: 215481: [tid:0x7f945a871740] Numa selects cpu agent[0]=0x55cb99239340(fine=0x55cb9acc7a30,coarse=0x55cb9ad5e1a0) for gpu agent=0x55cb9ad5eb10
:3:rocdevice.cpp :1610: 11110122915 us: 215481: [tid:0x7f945a871740] HMM support: 1, xnack: 0, direct host access: 0
:4:rocdevice.cpp :1918: 11110122931 us: 215481: [tid:0x7f945a871740] Allocate hsa host memory 0x7f942d59a000, size 0x28
:4:rocdevice.cpp :1918: 11110123072 us: 215481: [tid:0x7f945a871740] Allocate hsa host memory 0x7f915d600000, size 0x101000
:4:rocdevice.cpp :1918: 11110123214 us: 215481: [tid:0x7f945a871740] Allocate hsa host memory 0x7f915d400000, size 0x101000
:4:rocdevice.cpp :2054: 11110123253 us: 215481: [tid:0x7f945a871740] Allocate hsa device memory 0x7f915d000000, size 0x100000
:4:runtime.cpp :83 : 11110123255 us: 215481: [tid:0x7f945a871740] init
:3:hip_context.cpp :50 : 11110123257 us: 215481: [tid:0x7f945a871740] Direct Dispatch: 1
:1:hip_code_object.cpp :459 : 11110123371 us: 215481: [tid:0x7f945a871740] hipErrorNoBinaryForGpu: Unable to find code object for all current devices!
:1:hip_code_object.cpp :461 : 11110123374 us: 215481: [tid:0x7f945a871740] Devices:
:1:hip_code_object.cpp :463 : 11110123377 us: 215481: [tid:0x7f945a871740] amdgcn-amd-amdhsa--gfx1030 - [Not Found]
:1:hip_code_object.cpp :468 : 11110123379 us: 215481: [tid:0x7f945a871740] Bundled Code Objects:
:1:hip_code_object.cpp :485 : 11110123380 us: 215481: [tid:0x7f945a871740] host-x86_64-unknown-linux - [Unsupported]
:1:hip_code_object.cpp :482 : 11110123383 us: 215481: [tid:0x7f945a871740] hipv4-amdgcn-amd-amdhsa--gfx803 - [code object v4 is amdgcn-amd-amdhsa--gfx803]
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
The opposite error has been documented, but I have not seen this one mentioned anywhere. This does NOT happen with the official wheel, which works fine; it only happens when I compile PyTorch myself.
I have made sure my compilation folder does not mention gfx803 in any text file, only gfx1030.
I needed to apply the rocBLAS patch from the AUR to compile this.
I am using Arch Linux, with arch4edu repos.
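For reference, the build invocation I would expect to pin the GPU target is roughly this (a sketch; whether `PYTORCH_ROCM_ARCH` is honored by every sub-library, including the AUR-patched rocBLAS, is an assumption on my part):
```sh
# pin the target architecture before compiling from source
export PYTORCH_ROCM_ARCH=gfx1030
export USE_ROCM=1
python tools/amd_build/build_amd.py   # hipify the sources for ROCm
python setup.py install
```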
### Versions
This is using the wheel, as this crashes on my compiled version.
Collecting environment information...
PyTorch version: 1.12.1+rocm5.1.1
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.1.20531-cacfa990
OS: Arch Linux (x86_64)
GCC version: (GCC) 11.3.0
Clang version: 14.0.6
CMake version: version 3.24.1
Libc version: glibc-2.36
Python version: 3.10.6 (main, Aug 3 2022, 17:39:45) [GCC 12.1.1 20220730] (64-bit runtime)
Python platform: Linux-5.19.1-arch2-1-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: AMD Radeon RX 6800 XT
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.1.20531
MIOpen runtime version: 2.16.0
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-benchmark==0.3.4
[pip3] torch==1.12.1+rocm5.1.1
[pip3] torchaudio==0.12.1+rocm5.1.1
[pip3] torchvision==0.13.1+rocm5.1.1
[conda] Could not collect
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
| 1 |
4,891 | 84,039 |
[MPS] MPSNDArray error: product of dimension sizes > 2**31
|
triaged, module: mps
|
### 🐛 Describe the bug
## Full error message (no traceback):
```
AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:705: failed assertion '[MPSNDArray initWithDevice:descriptor:] Error: product of dimension sizes > 2**31 '
```
## How to reproduce
1. Install stable-diffusion using [instructions for macOS](https://github.com/magnusviri/stable-diffusion/tree/apple-silicon-mps-support)
2. run ```python scripts/txt2img.py --prompt "a horse" --plms --n_samples 1 --n_rows 1 --n_iter 1```: runs well.
3. But if you add ```--W 1024 --H 1024``` flag, which means width and height respectively, then it'll return the error.
* default width and height is 512, so no flag means 512x512
I'm still looking for a way to reproduce it without installing the whole pipeline and will update this issue once I have one.
Edit: repro by @Birch-san
```python
from torch import einsum, ones
import argparse
parser = argparse.ArgumentParser(description='mpsndarray test')
parser.add_argument('--n_samples', type=int, default=2)
args = parser.parse_args()
n_samples = args.n_samples
einsum('b i d, b j d -> b i j', ones(16 * n_samples, 4096, 40, device='mps'), ones(16 * n_samples, 4096, 40, device='mps')).shape
print(n_samples, 'passed')
```
It fails when n_samples is 2 or greater than 7, which looks pretty weird.
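A possible mitigation while this is investigated (a sketch, not verified on-device; it assumes the failure scales with the buffer size a single kernel launch needs) is to chunk the einsum along the batch dimension:
```python
import torch

def chunked_einsum_bij(q, k, chunk_size=8):
    # equivalent to einsum('b i d, b j d -> b i j', q, k), computed chunk by chunk
    out = torch.empty(q.shape[0], q.shape[1], k.shape[1], dtype=q.dtype, device=q.device)
    for start in range(0, q.shape[0], chunk_size):
        end = min(start + chunk_size, q.shape[0])
        out[start:end] = torch.einsum('b i d, b j d -> b i j', q[start:end], k[start:end])
    return out

q = torch.ones(32, 4096, 40, device='mps')
k = torch.ones(32, 4096, 40, device='mps')
print(chunked_einsum_bij(q, k).shape)  # torch.Size([32, 4096, 4096])
```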
## About vram?
As you would all expect, the error seems to be something about VRAM. However, some questions remain.
1. The error seems to be the size exceeding ```INT_MAX(2**31)```
The error doesn't occur at ```--W 512 --H 512``` or lower resolution.
2. The error is a software issue
Unlike errors like ```CUDA out of memory```, this error isn't about the real memory limit.
If the error were due to lack of VRAM, the code above (```--W 1024 --H 1024```) should run on an M1 Max 64GB, since ```--W 512 --H 512``` runs well on my M1 8GB MacBook. Also, the limit 2**31 is a fixed number that does not depend on current memory usage.
So, my expectation is that something is being computed in 32-bit, which shouldn't be.
This **might not be torch's problem** - maybe (or rather, surely) Metal's.
However, all helps will be accepted gracefully.
Thanks.
### Versions
PyTorch version: 1.13.0.dev20220824
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.24.1
Libc version: N/A
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:14) [Clang 12.0.1 ] (64-bit runtime)
Python platform: macOS-12.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] pytorch-lightning==1.7.2
[pip3] torch==1.13.0.dev20220824
[pip3] torch-fidelity==0.3.0
[pip3] torchaudio==0.13.0.dev20220824
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.14.0.dev20220824
[conda] numpy 1.23.2 py38h579d673_0 conda-forge
[conda] pytorch 1.13.0.dev20220824 py3.8_0 pytorch-nightly
[conda] pytorch-lightning 1.7.2 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchaudio 0.13.0.dev20220824 py38_cpu pytorch-nightly
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220824 py38_cpu pytorch-nightly
cc @kulinseth @albanD
| 31 |
4,892 | 93,643 |
Large number of WONT CONVERTs on detectron2 model
|
triaged, oncall: pt2
|
Repro: n2409433.
Dynamo is really choking on this model: there are many WONT CONVERTs, and as a result we only get 17 ops out of the whole thing. I pulled out pytorch/torchdynamo#1010 and pytorch/torchdynamo#1009 into self-contained repros, but the rest of them probably need someone to look at the model directly.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
4,893 | 84,014 |
fill_ OpInfo code not used; also doesn't test the case where the second argument is a Tensor
|
module: tests, triaged
|
Two observations:
1. `sample_inputs_fill_` is no longer used. Can be deleted (https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py#L1798-L1807)
2. The new OpInfo for fill doesn't actually test the `tensor.fill_(other_tensor)` case. Previously we did test this, as shown by `sample_inputs_fill_`
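A rough sketch of what the extra sample inputs could look like (`make_tensor`, `SampleInput`, and `S` are the names already used inside `common_methods_invocations.py`; the generator function name below is made up):
```python
from functools import partial

from torch.testing import make_tensor
from torch.testing._internal.common_methods_invocations import S, SampleInput

def sample_inputs_fill_with_tensor(op_info, device, dtype, requires_grad, **kwargs):
    make_arg = partial(make_tensor, device=device, dtype=dtype, requires_grad=requires_grad)
    # scalar fill value (already covered by the current OpInfo)
    yield SampleInput(make_arg((S, S)), args=(3,))
    # 0-dim tensor fill value: the tensor.fill_(other_tensor) case that is currently untested
    yield SampleInput(make_arg((S, S)), args=(make_arg(()),))
```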
cc @mruberry
| 0 |
4,894 | 84,009 |
[Nested Tensor] Enable Nestedtensor to work with OpInfos
|
triaged, module: nestedtensor
|
### 🐛 Describe the bug
We currently use [test/test_nestedtensor.py](https://github.com/pytorch/pytorch/blob/master/test/test_nestedtensor.py) to test nested tensor functions. Ideally, we would like to integrate this with OpInfos as has been done for sparse layouts, etc.
Creating this issue to track progress on this.
### Versions
n/a
cc @cpuhrsch @jbschlosser @bhosmer @drisspg
| 0 |
4,895 | 84,003 |
Linux cuda-11.x binary build jobs intermittently take more than 4 hours
|
high priority, oncall: releng, module: ci, triaged
|
### 🐛 Describe the bug
When we do:
```
conda install pytorch cudatoolkit=11.3 -c pytorch-nightly
```
Conda installs the CPU version:
```
pytorch pytorch-nightly/linux-64::pytorch-1.13.0.dev20220824-py3.9_cpu_0
```
The Linux upload jobs have failed:
<img width="1058" alt="Screen Shot 2022-08-24 at 9 15 41 AM" src="https://user-images.githubusercontent.com/7563158/186431127-c16e5dab-d1dc-4e33-a32c-cbaf6bbc5bb7.png">
Github upload failed:
https://github.com/pytorch/pytorch/runs/7989185420?check_suite_focus=true
```
Uploaded /home/ec2-user/actions-runner/_work/_temp/artifacts/pytorch-1.13.0.dev20220824-py3.9_cuda11.3_cudnn8.3.2_0.tar.bz2 (90.2%) bytes 1191182336:1199570943
Total file count: 1 ---- Processed file #0 (0.0%)
Error: The operation was canceled.
```
Root cause: the build timed out after more than 4 hours:
<img width="396" alt="Screen Shot 2022-08-24 at 9 00 59 PM" src="https://user-images.githubusercontent.com/7563158/186550393-e49cb402-39dd-4d8a-8b54-1e9931e2cd3f.png">
### Versions
cuda 11.3
cuda 11.6
cc @ezyang @gchanan @zou3519 @seemethere @malfet @pytorch/pytorch-dev-infra
| 5 |
4,896 | 83,994 |
General NestedTensor op coverage tracking issue
|
feature, triaged, module: nestedtensor
|
### This issue is a centralized place to list and track work on adding support for new ops to the NestedTensor backend.
There are a large number of operators in PyTorch, so not all of them are implemented yet for the NestedTensor backend, which is still in the prototype phase. We will be prioritizing new operators based on user feedback. If possible, please also provide a link to the network or use case where the op is being used.
If you want to work on adding support for such an op, feel free to comment below to get assigned one. Please avoid picking up an op that is already being worked on or that already has a PR associated with it.
#### Op coverage requests
- [ ] aten::convolution
- [ ] aten::max.dim
- [x] detach #84078
- [x] to #87146
- [ ] eq
- [ ] masked_select
- [ ] index_select
- [ ] narrow
- [ ] alias
- [ ] Broadcasting Ufuncs along implicit NT dim
- [ ] Zero-copy Nt construction from Size info
- [ ] BCE/ Other loss functions for NT
- [ ] nested tensor creation from arbitrary masks rather than left-aligned
#### Backward op coverage requests
- [ ] gelu, relu backward
- [ ] layernorm backward
cc @cpuhrsch @jbschlosser @bhosmer
| 2 |
4,897 | 83,986 |
PyTorch EC2 runners can not be used with standard actions
|
module: ci, triaged
|
### 🐛 Describe the bug
I.e. even the setup-python action fails; see
https://github.com/pytorch/torchdynamo/runs/7996989843?check_suite_focus=true
for the following simple workflow file:
```
test-inductor:
  runs-on: linux.4xlarge.nvidia.gpu
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v4
      with:
        python-version: '3.8.13'
        cache: 'pip'
```
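As a stopgap, the workflow can skip setup-python and provision the interpreter itself (a sketch; that conda is preinstalled on these AMIs is an assumption):
```
test-inductor:
  runs-on: linux.4xlarge.nvidia.gpu
  steps:
    - uses: actions/checkout@v2
    - name: Set up Python without setup-python
      run: |
        conda create -y -n ci python=3.8.13 pip
        conda run -n ci python --version
```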
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
| 5 |
4,898 | 83,980 |
Scatter min/max reduce operation that returns the corresponding indices
|
triaged, enhancement, module: scatter & gather ops
|
### 🚀 The feature, motivation and pitch
There are cases where you may be interested in the maximum index associated with a scatter reduce operation. For example, if you have a tensor of vectors and a corresponding vector of group IDs and want to find the vector with the maximum norm from each group.
For example say a tensor of vectors `x` of shape `[100, 512]` and a corresponding list of group ids in the range `0` to `10`.
Example data
```python
x = torch.randn(100, 512)
group_ids = torch.randint(10, (100,))
```
and then say you want to find the vector from each group with maximum norm.
Using `scatter_reduce` with the `reduce='amax'` argument you can find the maximum norm, but there's currently no built-in way to get the index of the maximum element so that you can find the corresponding vector with that norm.
I'm proposing a `scatter_argreduce` function that would provide this functionality.
```python
x_norm = torch.norm(x, dim=1)
max_idx = torch.scatter_argreduce(x_norm, 0, group_ids, reduce='max')
# y is a [10, 512] tensor containing rows from x with maximum norm for each of the 10 groups
y = x[max_idx]
```
The following diagram gives a graphical representation of what I'm proposing

### Proposed function:
`torch.scatter_argreduce(input, dim, index, reduce, *, output_size=None, default_index=0, return_values=False) -> Tensor`
Parameters
- **input** (Tensor) - The input tensor
- **dim** (int) - the axis along which to index
- **index** (LongTensor) - the indices of elements to scatter and reduce
- **reduce** (str) - "max" or "min"
- **output_size** (int, optional) - the size of the output at dimension `dim`.
- **default_index** (int, optional) - the default index to place at intermediate indices not present in `index`.
- **return_values** (bool, optional) - additionally return the min/max value (equivalent to gather_reduce output)
A naive python implementation may look something like this (borrowed heavily from [scatter_reduce_two_cpu](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/aten/src/ATen/native/TensorAdvancedIndexing.cpp#L1274))
```python
def scatter_argreduce(input, dim, index, reduce, *, output_size=None, default_index=0, return_values=False):
    assert -input.dim() <= dim < input.dim(), f"Expected `dim` to be in range {-input.dim()}, to {input.dim()}" \
        f" (got {dim})"
    assert reduce == 'max' or reduce == 'min', f"`reduce` argument must be one of ('min`, 'max'), (got '{reduce}')"
    assert input.size() == index.size(), f"Shape mismatched between `input` (got {input.size()} and `index`" \
        f" (got {index.size()})"
    assert output_size is None or (isinstance(output_size, int) and output_size >= 0), "Invalid output size," \
        f"should be non-negative int (got {output_size})"
    assert isinstance(default_index, int), f"`default_index` should be an integer, (got {default_index})"
    assert isinstance(return_values, bool), f"`return_values` should be a boolean, (got {return_values})"
    dim = dim + input.dim() if dim < 0 else dim
    sizes = list(input.shape)
    if output_size is not None:
        sizes[dim] = output_size
    else:
        sizes[dim] = index.max().item() + 1 if index.numel() else 0
    if reduce == 'max':
        fill_val = torch.finfo(input.dtype).min
    else:
        fill_val = torch.finfo(input.dtype).max
    out_value = torch.full(sizes, fill_val, dtype=input.dtype, device=input.device)
    out_idx = torch.full(sizes, default_index, dtype=index.dtype, device=index.device)
    if input.numel() == 0:
        if return_values:
            return out_idx, out_value
        return out_idx
    offset1 = 1
    offset2 = 1
    for s in input.shape[:dim]:
        offset1 *= s
    for s in input.shape[dim + 1:]:
        offset2 *= s
    input = input.contiguous()
    index = index.contiguous()
    input_data = input.view(offset1, -1, offset2)
    index_data = index.view(offset1, -1, offset2)
    out_value_data = out_value.view(offset1, -1, offset2)
    out_idx_data = out_idx.view(offset1, -1, offset2)
    for i in range(offset1):
        for j in range(input.shape[dim]):
            for k in range(offset2):
                value = input_data[i, j, k]
                dim_index = index_data[i, j, k]
                assert 0 <= dim_index < out_idx.size(dim), f"Expected `input` values to be in the range {0} to " \
                    f"{out_idx.size(dim)}, (got {dim_index})"
                if reduce == 'max':
                    if out_value_data[i, dim_index, k] <= value:
                        out_value_data[i, dim_index, k] = value
                        out_idx_data[i, dim_index, k] = j
                else:
                    if out_value_data[i, dim_index, k] >= value:
                        out_value_data[i, dim_index, k] = value
                        out_idx_data[i, dim_index, k] = j
    if return_values:
        out_value.masked_fill_(out_value == fill_val, 0)
        return out_idx, out_value
    return out_idx
```
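For comparison, here is a rough way to recover the same indices with ops that exist today for the 1-D norm example above (a sketch; ties are broken arbitrarily and empty groups keep the placeholder index -1):
```python
import torch

x = torch.randn(100, 512)
group_ids = torch.randint(10, (100,))
x_norm = torch.norm(x, dim=1)

num_groups = 10
# per-group maximum via the existing scatter_reduce
max_per_group = torch.full((num_groups,), float('-inf')).scatter_reduce(
    0, group_ids, x_norm, reduce='amax')
# positions whose value equals their group's maximum
is_max = x_norm == max_per_group[group_ids]
max_idx = torch.full((num_groups,), -1, dtype=torch.long)
max_idx.scatter_(0, group_ids[is_max], torch.nonzero(is_max).squeeze(1))
y = x[max_idx]  # [10, 512] rows of x with maximal norm per group
```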
### Alternatives
_No response_
### Additional context
_No response_
cc @mikaylagawarecki
| 1 |
4,899 | 83,979 |
Undefined reference in libtorch_cpu.so `...std::__cxx11::basic_string...`
|
module: build, triaged
|
### 🐛 Describe the bug
When I try to build an app that should link with libtorch, I get the following (output of make):
```
g++ -c -pipe -D_GLIBCXX_USE_CXX11_ABI=1 -O2 -std=gnu++1y -Wall -W -fPIC -I. -o main.o ../main.cpp
g++ -Wl,-O1 -o test main.o -lc10 -ltorch_cpu
//usr/local/lib/libtorch_cpu.so: undefined reference to `std::allocator<std::pair<long, std::tuple<torch::jit::SourceRange, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::intrusive_ptr<torch::jit::InlinedCallStack, c10::detail::intrusive_target_default_null_type<torch::jit::InlinedCallStack> > > > >::allocator()'
```
libtorch (1.11.0) was built successfully with these commands:
```
wget https://github.com/pytorch/pytorch/releases/download/v1.11.0/pytorch-v1.11.0.tar.gz && \
tar -xvzf pytorch-v1.11.0.tar.gz && rm pytorch-v1.11.0.tar.gz && \
cd pytorch-v1.11.0 && mkdir build && cd build && \
cmake -D CMAKE_BUILD_TYPE=Release \
-D USE_AVX=OFF \
-D USE_NNPACK=OFF \
-D USE_MKLDNN=OFF \
-D BUILD_PYTHON=OFF \
.. && \
make -j1 && \
sudo make install && \
sudo ldconfig
```
What is the reason?
I have already tried:
- building the app with -D_GLIBCXX_USE_CXX11_ABI=0 - different errors appear
- using Ubuntu 18.04 - everything works there, but I cannot understand why it does not work on RED OS (my target)...
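For reference, a minimal CMake setup that inherits the exact ABI flags libtorch was configured with (a sketch; the /usr/local prefix matches the `make install` above, everything else is an assumption):
```cmake
cmake_minimum_required(VERSION 3.9)
project(test)

list(APPEND CMAKE_PREFIX_PATH "/usr/local")   # where `make install` put libtorch
find_package(Torch REQUIRED)
# TORCH_CXX_FLAGS carries -D_GLIBCXX_USE_CXX11_ABI=... exactly as libtorch was built
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(test main.cpp)
target_link_libraries(test "${TORCH_LIBRARIES}")
set_property(TARGET test PROPERTY CXX_STANDARD 14)
```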
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: RED OS (7.2) (x86_64)
GCC version: (GCC) 7.2.1 20180116 (Red Hat 7.2.1-7)
Clang version: Could not collect
CMake version: version 3.9.0
Libc version: glibc-2.3.4
Python version: 3.6.13 (default, Apr 5 2022, 16:16:18) [GCC 7.2.1 20180116 (Red Hat 7.2.1-7)] (64-bit runtime)
Python platform: Linux-4.19.204-2.el7.x86_64-x86_64-with-centos-7.2
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect```
```
cc @malfet @seemethere
| 0 |
4,900 | 83,977 |
pytorch 1.12.1 doesn't build with ffmpeg 5.0
|
module: build, triaged
|
### 🐛 Describe the bug
I am trying to create a Yocto recipe for Pytorch and the build fails with these errors
```
[yocto@ip-192-168-1-213 temp]$ grep 'error:' log.do_compile.3444492
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:15:5: error: 'av_register_all' was not declared in this scope
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:16:5: error: 'avcodec_register_all' was not declared in this scope
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:30:17: error: 'avcodec_decode_audio4' was not declared in this scope; did you mean 'avcodec_decode_subtitle2'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:188:21: error: 'struct AVStream' has no member named 'codec'
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:193:21: error: 'struct AVStream' has no member named 'codec'
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:210:40: error: 'AVStream' {aka 'struct AVStream'} has no member named 'codec'
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:229:70: error: 'AVStream' {aka 'struct AVStream'} has no member named 'codec'
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:455:13: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:460:13: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:486:13: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:491:15: error: 'avcodec_decode_video2' was not declared in this scope; did you mean 'avcodec_decode_subtitle2'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:500:13: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:506:15: error: 'av_frame_get_best_effort_timestamp' was not declared in this scope
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:537:17: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:563:17: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:574:15: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:586:30: error: 'avpicture_get_size' was not declared in this scope
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:591:20: error: 'AVPicture' was not declared in this scope; did you mean 'gotPicture'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:591:30: error: expected primary-expression before ')' token
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:590:26: error: 'avpicture_fill' was not declared in this scope
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:631:9: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
/data/dtd-yocto-4.0/tmp-sicom-glibc/work/corei7-64-oe-linux/libtorch/1.12.1-r0/git/caffe2/video/video_decoder.cc:633:9: error: 'av_free_packet' was not declared in this scope; did you mean 'av_new_packet'?
```
Yocto 4.0 Kirkstone, FFMPEG 5.0.x
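If the caffe2 video ops are not actually needed for this image (an assumption on my part, and only if I read the CMake gating correctly), a workaround is to configure the build with the ffmpeg-dependent path disabled:
```
# video_decoder.cc is only built when USE_FFMPEG is enabled
cmake -D CMAKE_BUILD_TYPE=Release -D USE_FFMPEG=OFF ..
```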
### Versions
Yocto is a cross-compilation environment, and the `collect_env.py` script collects data from the build host, which definitely doesn't match what Yocto uses. But here it is:
```
[yocto@ip-192-168-1-213 git]$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Fedora 33 (Cloud Edition) (x86_64)
GCC version: (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)
Clang version: 11.0.0 (Fedora 11.0.0-3.fc33)
CMake version: Could not collect
Libc version: glibc-2.32
Python version: 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)] (64-bit runtime)
Python platform: Linux-5.14.17-101.fc33.x86_64-x86_64-with-glibc2.32
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
```
cc @malfet @seemethere
| 1 |