Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---
4,301 | 93,604 |
TorchBench - moco - RuntimeError: Tensors must be CUDA and dense
|
triaged, bug
|
### 🐛 Describe the bug
Run TB model moco
```
benchmarks/dynamo/torchbench.py -d cuda --inductor --training --float32 --accuracy --no-skip --only moco
```
### Error logs
```
cuda train moco TorchDynamo optimized model failed to run because of following error
ERROR:common:
from user code:
File "/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 172, in <graph break in concat_all_gather>
torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torchdynamo.config.suppress_errors = True
==========
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/tensor.py", line 131, in _get_fake_value
return wrap_fake_exception(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 725, in wrap_fake_exception
return fn()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/tensor.py", line 132, in <lambda>
lambda: _run_node(tx.output, node, args, kwargs, nnmodule)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/tensor.py", line 51, in _run_node
return node.target(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/distributed/distributed_c10d.py", line 1318, in wrapper
return func(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/distributed/distributed_c10d.py", line 2313, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
File "/scratch/ybliang/work/repos/pytorch/torch/_subclasses/fake_tensor.py", line 894, in __torch_dispatch__
r = func(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_ops.py", line 257, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Tensors must be CUDA and dense
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/benchmarks/dynamo/common.py", line 1095, in check_accuracy
new_result = optimized_model_iter_fn(model, example_inputs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 157, in _fn
return fn(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/benchmarks/dynamo/common.py", line 999, in run_n_iterations
self.model_iter_fn(mod, inputs, collect_outputs=False)
File "/scratch/ybliang/work/repos/pytorch/benchmarks/dynamo/torchbench.py", line 332, in forward_and_backward_pass
cloned_inputs = clone_inputs(inputs)
File "/scratch/ybliang/work/repos/pytorch/benchmarks/dynamo/torchbench.py", line 335, in <graph break in forward_and_backward_pass>
pred = mod(*cloned_inputs)
File "/scratch/ybliang/work/repos/pytorch/torch/nn/modules/module.py", line 1423, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/nn/parallel/distributed.py", line 1093, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/nn/parallel/distributed.py", line 1047, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0])
File "/scratch/ybliang/work/repos/pytorch/torch/nn/modules/module.py", line 1423, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 130, in forward
self._momentum_update_key_encoder() # update the key encoder
File "/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 133, in <graph break in forward>
im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k)
File "/scratch/ybliang/work/repos/pytorch/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 76, in _batch_shuffle_ddp
x_gather = concat_all_gather(x)
File "/scratch/ybliang/work/repos/pytorch/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 171, in concat_all_gather
for _ in range(torch.distributed.get_world_size())]
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 236, in catch_errors
return callback(frame, cache_size)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 466, in _convert_frame
result = inner_convert(frame, cache_size)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 118, in _fn
return fn(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 92, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 394, in _compile
out_code = transform_code_object(code, transform)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/convert_frame.py", line 382, in transform
tracer.run()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 1452, in run
super().run()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 352, in run
and self.step()
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 322, in step
getattr(self, inst.opname)(inst)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 174, in wrapper
return inner_fn(self, inst)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 807, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/symbolic_convert.py", line 264, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/torch.py", line 381, in call_function
tensor_variable = TensorVariable.create(
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/tensor.py", line 201, in create
example_value = _get_fake_value(proxy.node, tx)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/variables/tensor.py", line 145, in _get_fake_value
raise TorchRuntimeError() from e
torch._dynamo.exc.TorchRuntimeError:
from user code:
File "/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 172, in <graph break in concat_all_gather>
torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torchdynamo.config.suppress_errors = True
==========
FAIL
```
Looking at `"/scratch/ybliang/work/repos/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 172`, the failing op is `c10d.allgather_.default`. It seems the `moco` model calls these collective ops explicitly rather than going through DDP. We have to think about how to support such cases.
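For reference, here is a minimal sketch of the pattern that trips Dynamo (my own illustration, not the actual moco code, and it assumes an already-initialized process group): an explicit `torch.distributed.all_gather` call inside a compiled region, which Dynamo then attempts to run with fake tensors.
```python
# Sketch only: explicit collective inside a Dynamo-compiled function.
# Assumes torch.distributed.init_process_group(...) has already been called.
import torch
import torch.distributed as dist
import torch._dynamo as dynamo

@dynamo.optimize("inductor")
def concat_all_gather_like(tensor):
    # Dynamo traces this with fake tensors; c10d.allgather_ rejects them,
    # producing "Tensors must be CUDA and dense".
    tensors_gather = [torch.ones_like(tensor) for _ in range(dist.get_world_size())]
    dist.all_gather(tensors_gather, tensor, async_op=False)
    return torch.cat(tensors_gather, dim=0)
```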
### Minified repro
_No response_
| 4 |
4,302 | 93,603 |
Move Dropout to LowMemDropout Replacement To PyDispatcher
|
triaged, oncall: pt2
|
Dynamo tracing will replace [dropout](https://github.com/pytorch/pytorch/blob/master/torch/_inductor/overrides.py#L295) with [LowMemDropout](https://github.com/pytorch/pytorch/blob/master/torch/_inductor/overrides.py#L247). Because we are doing it at the bytecode level, this replacement is fragile and does not work for `nn.Dropout`. We should instead use a pre-autograd PyDispatcher decomposition to do the replacement. We should also do this for the `rand_like` replacement.
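A hedged sketch of the direction being proposed, assuming `torch._decomp.register_decomposition` can serve as the registration point (the exact pre-autograd hook and the real LowMemDropout logic are not reproduced here; `low_mem_dropout` below is a stand-in):
```python
# Sketch only: register a dispatcher-level decomposition for aten.dropout so the
# replacement no longer depends on bytecode rewriting and also covers nn.Dropout.
import torch
from torch._decomp import register_decomposition

aten = torch.ops.aten

@register_decomposition(aten.dropout)
def low_mem_dropout(x, p, train):
    # Stand-in for the inductor LowMemDropout: recompute-friendly masking.
    if not train or p == 0.0:
        return x
    mask = torch.rand_like(x) > p
    return x * mask / (1.0 - p)
```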
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
4,303 | 88,327 |
MSE documentation is weak
|
module: docs, triaged
|
### 📚 The doc issue
Currently we have the MSE described as following:
> `CLASS` `torch.nn.MSELoss`(`size_average=None`, `reduce=None`, `reduction='mean'`)[[SOURCE]](https://pytorch.org/docs/stable/_modules/torch/nn/modules/loss.html#MSELoss)
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input $x$ and target $y$.
>
> The unreduced (i.e. with `reduction` set to `'none'`) loss can be described as:
>
> $$
> \ell(x, y) = L = \lbrace l_1,\dots,l_N\rbrace^\top, \quad l_n = \left( x_n - y_n \right)^2,
> $$
Which is not the case.
Say `x` and `y` have `N = 3` elements, `L` does **not** contain the `N` squared L2 norms.
```python
>>> # Define cost
>>> C = nn.MSELoss(reduction='none')
>>> x = torch.randn(3, 5)
>>> y = torch.randn(3, 5)
>>> C(x, y).size()
torch.Size([3, 5])
```
We're assuming the first dimension is the batch size, yes?
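For concreteness, a small check (my own illustration, not from the docs) showing that the unreduced output is the element-wise squared error, not a vector of N squared L2 norms:
```python
import torch
import torch.nn as nn

x = torch.randn(3, 5)
y = torch.randn(3, 5)

C = nn.MSELoss(reduction='none')
out = C(x, y)                              # shape (3, 5): element-wise (x - y) ** 2
assert torch.allclose(out, (x - y) ** 2)

# The per-sample squared L2 norms the current wording suggests would instead be:
per_sample = ((x - y) ** 2).sum(dim=1)     # shape (3,)
```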
cc @svekars @carljparker
| 7 |
4,304 | 88,325 |
Group losses in a common namespace
|
module: nn, module: loss, triaged, needs research
|
### 🚀 The feature, motivation and pitch
It would be nice to remove the need to go fish losses from the `nn` module and have them all in a common `nn.loss` namespace.
So, instead of having
```
nn.L1Loss
nn.MSELoss
nn.CrossEntropyLoss
nn.CTCLoss
nn.NLLLoss
```
we would have
```
nn.loss.L1
nn.loss.MSE
nn.loss.CrossEntropy
nn.loss.CTC
nn.loss.NLL
```
Such a hierarchy (Containers, Convolution Layers, Pooling layers, Padding Layers, *etc*…) already exists in the [`nn` documentation](https://pytorch.org/docs/stable/nn.html) and, IMHO, should be reflected in the API as well. Is there a reason why this is not the case?
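A minimal sketch of how such a namespace could be exposed without breaking the existing names (the module path and aliases below are hypothetical, not an existing API):
```python
# Hypothetical torch/nn/loss.py: thin aliases over the existing classes,
# so nn.MSELoss and nn.loss.MSE would refer to the same type.
import torch.nn as nn

L1 = nn.L1Loss
MSE = nn.MSELoss
CrossEntropy = nn.CrossEntropyLoss
CTC = nn.CTCLoss
NLL = nn.NLLLoss
```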
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 3 |
4,305 | 88,320 |
`torch.load()` cannot load data saved at non-zero position in a file (`failed finding central directory`)
|
module: serialization, triaged
|
### 🐛 Describe the bug
The data saved by `torch.save()` at position in a file other than the beginning cannot be loaded by `torch.load()`.
```
Python 3.10.7 (main, Oct 11 2022, 21:51:23) [Clang 14.0.0 (clang-1400.0.29.102)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import io, torch
>>> torch.__version__
'1.13.0'
>>> with open('test.bin', 'wb') as out:
... out.write(b'header')
... pos = out.tell()
... torch.save(b'data', out)
...
6
>>> with open('test.bin', 'rb') as inp:
... inp.seek(pos)
... data = torch.load(inp)
...
6
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/Users/azawlocki/.pyenv/versions/torch-13/lib/python3.10/site-packages/torch/serialization.py", line 777, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/Users/azawlocki/.pyenv/versions/torch-13/lib/python3.10/site-packages/torch/serialization.py", line 282, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
```
The same happens when using a `BytesIO` buffer instead of a file:
```
# ... continuing the session above
>>> bio = io.BytesIO()
>>> bio.write(b'header')
6
>>> pos = bio.tell()
>>> torch.save(b'data', bio)
>>>
>>> bio.seek(pos)
6
>>> torch.load(bio)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/azawlocki/.pyenv/versions/torch-13/lib/python3.10/site-packages/torch/serialization.py", line 777, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/Users/azawlocki/.pyenv/versions/torch-13/lib/python3.10/site-packages/torch/serialization.py", line 282, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
```
But:
```
>>> torch.load(io.BytesIO(bio.getvalue()[pos:]))
b'data'
```
I think the errors are due to a bug in this code:
https://github.com/pytorch/pytorch/blob/7c98e70d44abc7a1aead68b6ea6c8adc8c554db5/torch/csrc/jit/python/init.cpp#L1709-L1714
Line 1712 should be
```
buffer.attr("seek")(0, py::module::import("os").attr("SEEK_END"));
```
the current code sets file position not at the end of file but `current` bytes before.
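Until that is fixed, a possible workaround (my own helper, not part of the API) is to hand `torch.load()` a fresh buffer that starts at the saved offset, along the lines of the `BytesIO` example above:
```python
import io
import torch

def load_at(fileobj, pos):
    # Workaround helper: load the object that torch.save() wrote at byte
    # offset `pos` by copying the remaining bytes into a standalone buffer.
    fileobj.seek(pos)
    return torch.load(io.BytesIO(fileobj.read()))

with open('test.bin', 'rb') as inp:
    data = load_at(inp, pos)   # `pos` recorded at save time, as in the repro
```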
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.7 (main, Oct 11 2022, 21:51:23) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] torch==1.13.0
[conda] Could not collect
```
cc @mruberry
| 1 |
4,306 | 88,309 |
ASAN shard 4 started to OOM after unrelated commit
|
module: ci, triaged
|
### 🐛 Describe the bug
See https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=asan%20%2F%20test%20(default%2C%204%2C%205
Error looks as follows:
```
test_serialization_offset_filelike_weights_only_False (__main__.TestOldSerialization) ... =================================================================
==1033==ERROR: AddressSanitizer: allocator is out of memory trying to allocate 0x80000000 bytes
```
First commit (clearly unrelated): https://hud.pytorch.org/pytorch/pytorch/commit/a51da28551e9f13a7afca5bbc829a8d9abced44e
### Versions
CI
cc @ezyang @gchanan @zou3519 @seemethere @pytorch/pytorch-dev-infra
| 2 |
4,307 | 88,304 |
AttributeError: module 'tensorboard.compat.tensorflow_stub.io.gfile' has no attribute 'MakeDirs'
|
triaged, module: tensorboard
|
### 🐛 Describe the bug
I am trying to run this BERT-based classification code, but I get an error saying that tensorboard has no such attribute. The imports and the line where the code throws the error are shown below.
Is it because of the versions I am using?
> tensorboard 2.10.1 pypi_0 pypi
> tensorboard-data-server 0.6.1 pypi_0 pypi
> tensorboard-plugin-wit 1.8.1 pypi_0 pypi
> tensorflow 1.15.0 mkl_py37h28c19af_0
> tensorflow-base 1.15.0 mkl_py37he1670d9_0
> tensorflow-estimator 1.15.1 pyh2649769_0
> tensorflow-hub 0.12.0 pypi_0 pypi
> tensorflow-io-gcs-filesystem 0.27.0 pypi_0 pypi
>
```
import tensorboard as tb
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
...
tf.io.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
```
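The error itself shows that the tensorboard stub does not expose the TF1-style `MakeDirs` name. A hedged workaround, reusing the `tf` and `OUTPUT_DIR` names from the snippet above (it is only an assumption that the stub follows the TF2-style lowercase `makedirs`; the `hasattr` check and the `os.makedirs` fallback keep it safe either way):
```python
import os

# Assumption: the stub may expose the TF2-style lowercase name instead.
if hasattr(tf.io.gfile, "makedirs"):
    tf.io.gfile.makedirs(OUTPUT_DIR)
else:
    os.makedirs(OUTPUT_DIR, exist_ok=True)
```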
### Versions
2022-11-02 21:46:13 (55.1 MB/s) - ‘collect_env.py’ saved [17278/17278]
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Oct 18 2022, 18:57:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-debian-bookworm-sid
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 510.85.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.5
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.2.2 hbe64b41_10 conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37h6c91a56_3
[conda] numpy-base 1.21.5 py37ha15fc14_3
[conda] tensorflow 1.15.0 mkl_py37h28c19af_0
[conda] tensorflow-base 1.15.0 mkl_py37he1670d9_0
| 0 |
4,308 | 88,301 |
1.12.1 incompatible with c++ built for 1.12.0 and vice versa
|
oncall: jit, module: cpp
|
### 🐛 Describe the bug
I have a c++/cuda extension which I compile and distribute via PyPI. My build matrix covers all "recent" PyTorch major and minor versions (as well as Python and CUDA versions; if anybody is interested, I host my build system as a git workflow [here](https://github.com/charliebudd/torch-extension-builder)). So far this has been enough to ensure my code runs and I have not had to take the patch version into account. However, I have recently had an issue (described below) with 1.12.1. My understanding is that only critical bug fixes should be released in patches. If this is not the case, then distributing built c++ PyTorch extensions would be even harder than it currently is.
The error I get is when passing a jit model down to c++. Before 1.12.1, the _c attribute on a jit script model returned a ``torch.jit.ScriptModule`` which will cast to a ``torch::jit::Module`` when passed to a c++ function. However, in 1.12.1, the _c attribute returns a ``torch.ScriptModule`` which no longer casts in the same way, causing an exception to be thrown.
Minimal code example below. To recreate, compile the c++ for torch 1.12.0 and run the python script with torch 1.12.1, and/or vice versa. When the versions match, the code runs fine; otherwise there is an error.
```python
import torch
import myextension
model = torch.jit.load("model.pt")
myextension.test(model._c)
```
```c++
#include <torch/extension.h>
void test_function(torch::jit::Module model)
{
py::print("Test function called");
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
{
m.def("test", &test_function);
}
```
See forum discussion on this issue [here](https://discuss.pytorch.org/t/version-compatability-scheme-for-pytorch-and-compiled-torch-extensions/164637).
### Versions
torch 1.12.0 and 1.12.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jbschlosser
| 9 |
4,309 | 93,602 |
Diffuser pipeline device attribute broken when using optimized model
|
triaged, bug
|
### 🐛 Describe the bug
Hi,
just adding this for tracking purposes as a solution is potentially provided by [this PR](https://github.com/pytorch/pytorch/pull/88164).
The issue is that when using a compiled model in a [DDPMPipeline](https://github.com/huggingface/diffusers/blob/v0.6.0/src/diffusers/pipelines/ddpm/pipeline_ddpm.py#L24) the self.device property does not give the correct result. It [always returns "cpu"](https://github.com/huggingface/diffusers/blob/ad9d7ce4763f8fb2a9e620bff017830c26086c36/src/diffusers/pipeline_utils.py#L213) because the optimized model class does not inherit from nn.Module.
Here is a simple repro:
```
import torch
import torch._dynamo as dynamo
from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel
model = UNet2DModel(
sample_size=128, # the target image resolution
in_channels=3, # the number of input channels, 3 for RGB images
out_channels=3, # the number of output channels
layers_per_block=2, # how many ResNet layers to use per UNet block
block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channes for each UNet block
down_block_types=(
"DownBlock2D", # a regular ResNet downsampling block
"DownBlock2D",
"DownBlock2D",
"DownBlock2D",
"AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention
"DownBlock2D",
),
up_block_types=(
"UpBlock2D", # a regular ResNet upsampling block
"AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention
"UpBlock2D",
"UpBlock2D",
"UpBlock2D",
"UpBlock2D"
),
)
if torch.cuda.is_available():
model.to('cuda')
model = torch._dynamo.optimize('inductor')(model)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
pipeline = DDPMPipeline(unet=model, scheduler=noise_scheduler)
if torch.cuda.is_available():
pipeline.to('cuda')
pipeline(
batch_size = 16,
generator=torch.manual_seed(42),
)
```
PR [88164](https://github.com/pytorch/pytorch/pull/88164) will solve this as it makes a compiled model an nn.Module.
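Until that PR lands, a possible workaround (a sketch, not an official recipe) is to compile only the UNet's `forward`, so the object handed to the pipeline remains a plain `nn.Module` and its `device` property keeps working:
```python
# Sketch: keep `model` an nn.Module and only swap its forward for the
# compiled version, so DDPMPipeline.device still resolves correctly.
if torch.cuda.is_available():
    model.to("cuda")
model.forward = torch._dynamo.optimize("inductor")(model.forward)
```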
### Error logs
```
Traceback (most recent call last):
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/tensor.py", line 131, in _get_fake_value
return wrap_fake_exception(
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 725, in wrap_fake_exception
return fn()
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/tensor.py", line 132, in <lambda>
lambda: _run_node(tx.output, node, args, kwargs, nnmodule)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/tensor.py", line 56, in _run_node
return nnmodule(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1423, in _call_impl
return forward_call(*input, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 860, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_prims_common/wrappers.py", line 212, in _fn
result = fn(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_decomp/decompositions.py", line 63, in inner
r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_decomp/decompositions.py", line 1111, in addmm
out = alpha * torch.mm(mat1, mat2)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 901, in __torch_dispatch__
return self.wrap_meta_outputs_with_default_device_logic(r, func, args, kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 934, in wrap_meta_outputs_with_default_device_logic
return tree_map(partial(wrap), r)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 951, in wrap
) = FakeTensor._find_common_device(func, args, kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 650, in _find_common_device
tree_map(merge_devices, args)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 646, in merge_devices
raise RuntimeError(
RuntimeError: Unhandled FakeTensor Device Propagation for aten.mm.default, found two different devices cpu, cuda:0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/home/mreso/torchdynamo/diffuser_training/train_repro.py", line 46, in <module>
pipeline(
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/diffusers/pipelines/ddpm/pipeline_ddpm.py", line 84, in __call__
model_output = self.unet(image, t).sample
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 137, in __call__
return self.forward(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 134, in forward
return optimized_forward(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 157, in _fn
return fn(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 236, in catch_errors
return callback(frame, cache_size)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 466, in _convert_frame
result = inner_convert(frame, cache_size)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 118, in _fn
return fn(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 92, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
out_code = transform_code_object(code, transform)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 382, in transform
tracer.run()
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1452, in run
super().run()
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 352, in run
and self.step()
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 322, in step
getattr(self, inst.opname)(inst)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 174, in wrapper
return inner_fn(self, inst)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 758, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 264, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 221, in call_function
return tx.inline_user_function_return(
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 293, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1524, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1578, in inline_call_
tracer.run()
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 352, in run
and self.step()
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 322, in step
getattr(self, inst.opname)(inst)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 174, in wrapper
return inner_fn(self, inst)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 758, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 264, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 201, in call_function
return variables.TensorVariable.create(
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/tensor.py", line 201, in create
example_value = _get_fake_value(proxy.node, tx)
File "/fsx/users/mreso/conda/envs/tdn/lib/python3.9/site-packages/torch/_dynamo/variables/tensor.py", line 145, in _get_fake_value
raise TorchRuntimeError() from e
torch._dynamo.exc.TorchRuntimeError:
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torchdynamo.config.suppress_errors = True
==========
```
### Minified repro
Minifier does not work here but I am not sure if it should in these cases.
| 2 |
4,310 | 93,601 |
[Feature Request][XLA] Support fallback for the dynamo-xla bridge
|
triaged, enhancement, oncall: pt2
|
### 🐛 Describe the bug
With both https://github.com/pytorch/pytorch/pull/87741 and https://github.com/pytorch/xla/pull/4119 landed, we can run dynamo inference with pytorch/xla like
```
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
import torch._dynamo as dynamo
@dynamo.optimize("torchxla_trace_once")
def fn_simple(x, y):
a = torch.cos(x)
b = torch.sin(y)
return a + b
x = torch.tensor(100.0)
y = torch.tensor(200.0)
res = fn_simple(x, y)
```
However, if the function we are trying to trace contains an op that XLA does not support, the program will crash. For example, XLA does not support `addmm` with `beta != 1.0`:
```
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
import torch._dynamo as dynamo
@dynamo.optimize("torchxla_trace_once")
def fn_fallback(M, mat1, mat2, beta):
# xla currently only support alpha and beta == 1
return torch.addmm(M, mat1, mat2, beta=beta)
M = torch.randn(2, 3)
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 3)
res = fn_fallback(M, mat1, mat2, 0.5)
```
Error message is attached below.
IMO there are two ways to handle this issue
1. PyTorch/XLA (or any backend) finds a way to tell Dynamo its supported op list (including ops supported through decomposition), and Dynamo does a graph break at each unsupported op.
2. PyTorch/XLA (or any backend) takes the whole model and generates a single hash even though more than one graph will be created, and silently handles the fallback within the backend.
The first approach seems more general and might be cleaner.
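For the first approach, a rough sketch of one possible shape on the Dynamo side (the names below are hypothetical, not an existing API, and it falls back for the whole captured subgraph rather than doing a per-op graph break):
```python
import torch

def make_fallback_backend(compile_fn, is_supported):
    # Hypothetical wrapper backend: compile with the real backend only if every
    # call_function node passes the backend-provided `is_supported` predicate,
    # otherwise run the captured FX graph eagerly.
    def backend(gm: torch.fx.GraphModule, example_inputs):
        for node in gm.graph.nodes:
            if node.op == "call_function" and not is_supported(node.target):
                return gm.forward  # eager fallback for this subgraph
        return compile_fn(gm, example_inputs)
    return backend
```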
In terms of types of fallback, common ones are
1. certain ops are not lowered to XLA; this can be found by looking at the [yaml file](https://github.com/pytorch/xla/blob/master/xla_native_functions.yaml)
2. the op itself is supported, but XLA can't handle certain input shapes or input values. For example, XLA can't handle [addmm](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/aten_xla_type.cpp#L616) with `beta != 1`. XLA also can't handle overlapping windows for [_adaptive_avg_pool3d](https://github.com/pytorch/xla/blob/master/torch_xla/csrc/aten_xla_type.cpp#L323).
FYI @wconstab @shunting314 @Krovatkin @alanwaketan @desertfire @wonjoolee95
### Error logs
```
[2022-11-02 00:10:51,561] torch._dynamo.optimizations.backends: [ERROR] torchxla_trace_once error
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/backends.py", line 53, in inner
return fn(model, **kwargs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/backends.py", line 823, in torchxla_trace_once
return integration.extract_compiled_graph(model, example_inputs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/optimizations/torchxla_integration.py", line 94, in extract_compiled_graph
f"Fail to extact the compiled graph because of fallback: {','.join(fallback_ops)}"
RuntimeError: Fail to extact the compiled graph because of fallback: aten::addmm=1
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/output_graph.py", line 436, in call_user_compiler
assert callable(compiled_fn), "compiler_fn did not return callable"
AssertionError: compiler_fn did not return callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/test/test_dynamo.py", line 26, in <module>
res = fn_fallback(M, mat1, mat2, 0.5)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py", line 157, in _fn
return fn(*args, **kwargs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py", line 236, in catch_errors
return callback(frame, cache_size)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py", line 466, in _convert_frame
result = inner_convert(frame, cache_size)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py", line 118, in _fn
return fn(*args, **kwargs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/utils.py", line 92, in time_wrapper
r = func(*args, **kwargs)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py", line 348, in _convert_frame_assert
frame,
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
out_code = transform_code_object(code, transform)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py", line 382, in transform
tracer.run()
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py", line 1452, in run
super().run()
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py", line 352, in run
and self.step()
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py", line 322, in step
getattr(self, inst.opname)(inst)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py", line 1514, in RETURN_VALUE
self.output.compile_subgraph(self)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/output_graph.py", line 332, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/output_graph.py", line 402, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_dynamo/output_graph.py", line 439, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: torchxla_trace_once raised AssertionError: compiler_fn did not return callable
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torchdynamo.config.suppress_errors = True
==========
```
The real error is `torch._dynamo.exc.BackendCompilerFailed: torchxla_trace_once raised AssertionError: compiler_fn did not return callable`.
### Minified repro
_No response_
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 5 |
4,311 | 93,600 |
Blanket disable torch function/dispatch mode/subclass in dynamo
|
triaged, oncall: pt2
|
I noticed in https://github.com/pytorch/pytorch/pull/88218 that there are a lot of one-off DisableTorchFunction calls all over the torchdynamo codebase. This seems unsustainable. We ought to have a clear invariant about how dynamically scoped state and subclasses affect Dynamo. What is this invariant? The obvious one to me is that all modes / context managers are disabled in the compiler, and you explicitly restore the modes if you want to evaluate through them. Is this what we are doing?
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
4,312 | 88,265 |
build: failure when upgrade oneTBB to 2021.7.0
|
module: build, triaged, module: tbb
|
### 🐛 Describe the bug
Currently third_party/tbb uses an old `tbb_2018` branch by https://github.com/pytorch/pytorch/pull/20454.
Upgrading to a later branch `onetbb_2021` and building pytorch from source causes a few errors.
Run the following to reproduce the failure with updated tbb branch:
`MKLDNN_CPU_RUNTIME=TBB MKL_THREADING=TBB ATEN_THREADING=TBB USE_TBB=1 USE_OPENMP=0 python setup.py bdist_wheel`
1. changes in file locations in `onetbb_2021` branch
```
CMake Error at aten/src/ATen/cpu/tbb/CMakeLists.txt:268 (add_library):
Cannot find source file:
<VIRTUAL_ENV>/pytorch/third_party/tbb/src/rml/client/rml_tbb.cpp
Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h
.hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
CMake Error in aten/src/ATen/cpu/tbb/CMakeLists.txt:
Cannot find source file:
<VIRTUAL_ENV>/pytorch/third_party/tbb/src/tbb/lin64-tbb-export.def
Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h
.hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
CMake Error in aten/src/ATen/cpu/tbb/CMakeLists.txt:
Cannot find source file:
<VIRTUAL_ENV>/pytorch/third_party/tbb/src/tbbmalloc/lin64-tbbmalloc-export.def
Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm .h
.hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90 .f95 .f03 .hip .ispc
```
2. linkage error to tbb
The error occurs when mismatched versions of `libmkl_tbb_thread.so` and `libtbb.so` are used. Errors such as `undefined symbol: _ZN3tbb4task13note_affinityEt (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)` arise because the symbol `_ZN3tbb4task13note_affinityEt` is required by `libmkl_tbb_thread.so` (MKL 2018) but not by `libmkl_tbb_thread.so.2` (MKL 2022); the symbol is defined in `libtbb.so` (TBB 2018) [here](https://github.com/oneapi-src/oneTBB/blob/a51a90bc609bb73db8ea13841b5cf7aa4344d4a9/src/tbb/lin64-tbb-export.lst) but not in `libtbb.so` (TBB 2021) [here](https://github.com/oneapi-src/oneTBB/blob/onetbb_2021/src/tbb/def/lin64-tbb.def). So the error does not occur with `libmkl_tbb_thread.so.2` (MKL 2022). Regardless, it may be better to add the `-ltbb` linker flag rather than requiring a specific MKL version for TBB.
```
pytorch/build/lib$ ldd -r libtorch_cpu.so
linux-vdso.so.1 (0x00007fffd2ff7000)
libc10.so => <VIRTUAL_ENV>/pytorch/build/lib/libc10.so (0x00007fe226965000)
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007fe22675a000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe226552000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe22633a000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe226136000)
libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00007fe225649000)
libmkl_tbb_thread.so => /opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so (0x00007fe223eff000)
libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00007fe21fef6000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe21fcd7000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe21f939000)
libtbb.so => <VIRTUAL_ENV>/pytorch/build/lib/libtbb.so (0x00007fe21f6f4000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fe21f2e7000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe21eef6000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe232acb000)
undefined symbol: _ZN3tbb4task13note_affinityEt (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb18task_group_context4initEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb19task_scheduler_init9terminateEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v3D2Ev (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb13queuing_mutex11scoped_lock7releaseEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZNK3tbb8internal20allocate_child_proxy8allocateEm (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal28affinity_partitioner_base_v36resizeEj (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb10interface58internal9task_base7destroyERNS_4taskE (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal19allocate_root_proxy8allocateEm (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal25concurrent_vector_base_v316internal_grow_byEmmPFvPvPKvmES4_ (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v818internal_push_moveEPKv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZNK3tbb8internal27allocate_continuation_proxy8allocateEm (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v312internal_popEPv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal25deallocate_via_handler_v3EPv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb13queuing_mutex11scoped_lock7acquireERS0_ (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZNK3tbb8internal34allocate_additional_child_of_proxy8allocateEm (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZNK3tbb8internal32allocate_root_with_context_proxy8allocateEm (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal18throw_exception_v4ENS0_12exception_idE (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal8NFS_FreeEPv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb19task_scheduler_init10initializeEim (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb18task_group_contextD1Ev (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZNK3tbb8internal24concurrent_queue_base_v314internal_emptyEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb4task22spawn_and_wait_for_allERNS_9task_listE (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal23allocate_via_handler_v3Em (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb19task_scheduler_init19default_num_threadsEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v3C2Em (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v313internal_pushEPKv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal25concurrent_vector_base_v314internal_clearEPFvPvmE (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal36get_initial_auto_partitioner_divisorEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal19critical_section_v418internal_constructEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb10interface78internal15task_arena_base21internal_current_slotEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v323internal_pop_if_presentEPv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal25concurrent_vector_base_v3D2Ev (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal16thread_get_id_v3Ev (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZNK3tbb18task_group_context28is_group_execution_cancelledEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal24concurrent_queue_base_v321internal_finish_clearEv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
undefined symbol: _ZN3tbb8internal12NFS_AllocateEmmPv (/opt/intel/mkl/lib/intel64/libmkl_tbb_thread.so)
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~18.04) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.17
Is CUDA available: N/A
CUDA runtime version: 9.1.85
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
cc @malfet @seemethere
| 0 |
4,313 | 88,308 |
Hessian is (incorrectly) zero when using MPS on M1 Mac, but not on cpu
|
triaged, module: mps, module: functorch
|
Thank you for your hard work writing and maintaining functorch. I ran into this issue recently and would appreciate any help or insight :)!
### Set-up:
Using python 3.10.6, PyTorch 1.13.0, and functorch 1.13.0 on an M1 Max Mac.
### Code:
```
import torch
from functorch import vmap, grad, jacrev, make_functional
device = "mps" if torch.backends.mps.is_available() else "cpu" #change to cpu to reproduce cpu output
class DNN(torch.nn.Module):
def __init__(self):
super(DNN, self).__init__()
self.net = torch.nn.Sequential(
torch.nn.Linear(1, 2),
torch.nn.Tanh(),
torch.nn.Linear(2, 2),
torch.nn.Tanh(),
torch.nn.Linear(2, 2),
torch.nn.Tanh(),
torch.nn.Linear(2, 1),
)
def forward(self, x):
out = self.net(x)
return out
dnn = DNN().to(device)
fmodel, params = make_functional(dnn)
batch_size = 3
data = torch.ones(batch_size, 1).to(device)
targets = torch.ones(batch_size, 1).to(device)
#Modified from various functorch tutorials:
def loss_fn(predictions, targets):
return torch.nn.functional.mse_loss(predictions, targets)
def compute_loss_stateless_model (params, sample, target):
batch = sample.unsqueeze(0)
targets = target.unsqueeze(0)
predictions = fmodel(params, batch)
loss = loss_fn(predictions, targets)
return loss
def comp_loss (params, data, target):
predictions = fmodel(params, data)
loss = loss_fn(predictions, targets)
return loss
ft_compute_grad = grad(compute_loss_stateless_model)
ft_compute_sample_grad = vmap(ft_compute_grad, in_dims=(None, 0, 0))
grads = ft_compute_sample_grad(params, data, targets)
compute_sample_hess = jacrev(vmap(ft_compute_grad, in_dims=(None, 0, 0)))
hess = compute_sample_hess(params,data,targets)
print(hess)
```
Output with MPS (cut off to save some space):
All zeros
```
((tensor([[[[[0.],
[0.]]],
[[[0.],
[0.]]]],
[[[[0.],
[0.]]],
[[[0.],
[0.]]]],
[[[[0.],
[0.]]],
[[[0.],
[0.]]]]], device='mps:0', grad_fn=<ViewBackward0>), tensor([[[[0., 0.]],
...
[[0.]],
[[0.]]], device='mps:0', grad_fn=<ViewBackward0>)))
```
Output with CPU (cut off to save some space):
```
((tensor([[[[[-0.0468],
[ 0.0294]]],
[[[ 0.0294],
[-0.0064]]]],
[[[[-0.0468],
[ 0.0294]]],
[[[ 0.0294],
[-0.0064]]]],
[[[[-0.0468],
[ 0.0294]]],
[[[ 0.0294],
[-0.0064]]]]], grad_fn=<ViewBackward0>), tensor([[[[-0.0468, 0.0294]],
...
[[2.]],
[[2.]]], grad_fn=<ViewBackward0>)))
```
### Other info:
`grads` gives what appears to be a correct non-zero result on both MPS and CPU
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev @zou3519 @Chillee @samdow @soumith
| 1 |
4,314 | 88,264 |
[ONNX] Flaky CI test failures with different random seed
|
module: onnx, triaged
|
### 🐛 Describe the bug
Discovered by #88145, when changing random seed from 0 to `SEED` in `common_utils.TestCase`.
Log: https://pipelines.actions.githubusercontent.com/serviceHosts/7d146c05-69c3-4c20-a0e7-818111670117/_apis/pipelines/1/runs/2351893/signedlogcontent/306?urlExpires=2022-11-01T22%3A46%3A31.2721765Z&urlSigningMethod=HMACV1&urlSignature=2frjLmyJUmYOxtaoJCSuZ%2FmDa0skjVVasOLeupl7VJ4%3D
Log segment:
```log
...
2022-11-01T18:07:52.1102989Z Mismatched elements: 1 / 15840 (0.0%)
2022-11-01T18:07:52.1103338Z Greatest absolute difference: 1.3276803656481206e-07 at index (15, 10, 21) (up to 1e-07 allowed)
2022-11-01T18:07:52.1103631Z Greatest relative difference: 0.004392559933859445 at index (15, 10, 21) (up to 0.001 allowed)
2022-11-01T18:07:52.1104300Z [31mFAILED[0m test/onnx/test_pytorch_onnx_onnxruntime.py::[1mTestONNXRuntime_opset_version_17_is_script_False_keep_initializers_as_inputs_False::test_grid_sample_mode_bicubic_padding_mode_border_False[0m - AssertionError: Tensor-likes are not close!
2022-11-01T18:07:52.1104632Z
2022-11-01T18:07:52.1104720Z Mismatched elements: 4 / 8 (50.0%)
2022-11-01T18:07:52.1104977Z Greatest absolute difference: 0.09897196292877197 at index (0, 0, 1, 0) (up to 0.02 allowed)
2022-11-01T18:07:52.1105259Z Greatest relative difference: 3.0055481606676424 at index (0, 0, 0, 3) (up to 0.02 allowed)
2022-11-01T18:07:52.1105856Z [31mFAILED[0m test/onnx/test_pytorch_onnx_onnxruntime.py::[1mTestONNXRuntime_opset_version_17_is_script_False_keep_initializers_as_inputs_False::test_rpn[0m - AssertionError: Tensor-likes are not close!
2022-11-01T18:07:52.1106146Z
2022-11-01T18:07:52.1106238Z Mismatched elements: 8 / 3208 (0.2%)
2022-11-01T18:07:52.1106646Z Greatest absolute difference: 92.06178665161133 at index (152, 2) (up to 1e-07 allowed)
2022-11-01T18:07:52.1106931Z Greatest relative difference: 5.274154608570277 at index (152, 0) (up to 0.001 allowed)
2022-11-01T18:07:52.1166774Z [31m========= [31m[1m55 failed[0m, [32m24211 passed[0m, [33m10614 skipped[0m[31m in 661.78s (0:11:01)[0m[31m ==========[0m
```
### Versions
master branch.
| 0 |
4,315 | 88,251 |
Weird random SIGTERM occurrence
|
oncall: distributed, oncall: r2p
|
### 🐛 Describe the bug
After training for a few epochs, my job is killed with a SIGTERM error. This happens during training for some models but not for others. The error doesn't even tell me which part of my code is causing the problem.
Note: I am using DistributedDataParallel and torchrun to run my code. The error seems to happen after I replace LayerNorm with BatchNorm.
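The traceback is usually missing because the failing rank exits before torchrun can collect it. As the last line of the log below also suggests, wrapping the entrypoint with the elastic `record` decorator (and optionally running with `TORCH_DISTRIBUTED_DEBUG=DETAIL`) makes the failing rank write its traceback to an error file:
```python
# Sketch: decorate the training entrypoint in main.py so the failing rank
# records its exception instead of only propagating an exit code to torchrun.
from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    ...  # existing training loop

if __name__ == "__main__":
    main()
```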
```
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59460 closing signal SIGTERM [862/1800]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59461 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59462 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59463 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59466 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59470 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 59471 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 5 (pid: 59469) of binary: /home/user/.
conda/envs/deit/bin/python
Traceback (most recent call last):
File "/home/user/.conda/envs/deit/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/user/.conda/envs/deit/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.p
y", line 345, in wrapper
return f(*args, **kwargs)
File "/home/user/.conda/envs/deit/lib/python3.7/site-packages/torch/distributed/run.py", line 719, in main
run(args)
File "/home/user/.conda/envs/deit/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/home/user/.conda/envs/deit/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/user/.conda/envs/deit/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_age
nt
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-11-01_17:31:08
host : host1
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 59469)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Versions
torch==1.10.0
torchvision==0.11.1
timm==0.6.7
transformers==4.5.1
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
4,316 | 88,245 |
Add `gloo` support for `all_to_all`
|
oncall: distributed, module: c10d
|
### 🚀 The feature, motivation and pitch
I'm working on a sparse data distribution using collective comms in torchrec. Currently we only use [`all_to_all_single`](https://github.com/pytorch/pytorch/blob/master/torch/distributed/distributed_c10d.py#L2904) in our sparse data distribution, but we now want to use [`all_to_all`](https://github.com/pytorch/pytorch/blob/master/torch/distributed/distributed_c10d.py#L3035) as well.
Torchrec has built-in support for the `gloo` backend, and `all_to_all_single` works with `gloo`; however, `all_to_all` with `gloo` is not supported. We get this error: `RuntimeError: ProcessGroup gloo does not support alltoall`.
Since `gloo` supports `all_to_all_single` already, would it be possible to add support for `all_to_all` with `gloo`?
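As a stopgap (a sketch of my own, not an official API), `all_to_all` can be emulated on gloo on top of `all_to_all_single` when the per-rank tensors share a dtype, by flattening, exchanging with split sizes, and copying back:
```python
import torch
import torch.distributed as dist

def all_to_all_via_single(output_list, input_list, group=None):
    # Assumes an initialized gloo process group and tensors of a common dtype.
    in_sizes = [t.numel() for t in input_list]
    out_sizes = [t.numel() for t in output_list]
    flat_in = torch.cat([t.reshape(-1) for t in input_list])
    flat_out = flat_in.new_empty(sum(out_sizes))
    dist.all_to_all_single(flat_out, flat_in, out_sizes, in_sizes, group=group)
    for chunk, dst in zip(flat_out.split(out_sizes), output_list):
        dst.copy_(chunk.view_as(dst))
```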
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
4,317 | 93,599 |
Inductor - resnet18 - large batch size - CUDA error: an illegal memory access was encountered
|
triaged, bug, oncall: pt2
|
### 🐛 Describe the bug
Repro:
```
import torch
import torch._dynamo
import torch._inductor
from torch._inductor import config
import logging
from torchvision import models
resnet18 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
batch_size = 4096
device = "cuda"
resnet18 = resnet18.eval().to(device)
opt_resnet18 = torch._dynamo.optimize("inductor")(resnet18)
input = torch.randn((batch_size, 3, 224, 224)).to(device)
output = opt_resnet18(input)
print(output.shape)
```
This only happens when the batch size is large.
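Whether or not memory pressure is the root cause of the illegal access, the int64 buffer in the traceback below is itself huge, which is consistent with only large batch sizes hitting this path (a quick back-of-the-envelope check of my own):
```python
# Size of the buffer empty_strided((4096, 64, 56, 56), ..., dtype=torch.int64)
numel = 4096 * 64 * 56 * 56          # 822,083,584 elements
size_gib = numel * 8 / 2**30         # int64 -> roughly 6.1 GiB for one buffer
print(numel, size_gib)
```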
### Error logs
```
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 458, in preserve_rng_state
yield
File "/scratch/ybliang/work/repos/pytorch/torch/_inductor/compile_fx.py", line 202, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/scratch/ybliang/work/repos/pytorch/torch/_inductor/compile_fx.py", line 257, in cudagraphify_impl
model(list(static_inputs))
File "/tmp/torchinductor_ybliang/7q/c7qimro7rryowl6fbgxobggppym6ux4mwk4x5htmdqso66ydxlb3.py", line 691, in call
buf3 = empty_strided((4096, 64, 56, 56), (200704, 3136, 56, 1), device='cuda', dtype=torch.int64)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch/ybliang/work/repos/pytorch/debug/debug5.py", line 35, in <module>
output = opt_resnet18(input)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 138, in __call__
return self.forward(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 135, in forward
return optimized_forward(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/scratch/ybliang/work/repos/torchvision/torchvision/models/resnet.py", line 284, in forward
def forward(self, x: Tensor) -> Tensor:
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/eval_frame.py", line 166, in _fn
return fn(*args, **kwargs)
File "/scratch/ybliang/work/repos/pytorch/functorch/_src/aot_autograd.py", line 870, in forward
return compiled_f(
File "/scratch/ybliang/work/repos/pytorch/functorch/_src/aot_autograd.py", line 861, in new_func
return compiled_fn(args)
File "/scratch/ybliang/work/repos/pytorch/functorch/_src/aot_autograd.py", line 230, in g
return f(*args)
File "/scratch/ybliang/work/repos/pytorch/functorch/_src/aot_autograd.py", line 489, in compiled_function
return CompiledFunction.apply(*remove_dupe_args(args))
File "/scratch/ybliang/work/repos/pytorch/functorch/_src/aot_autograd.py", line 450, in forward
fw_outs = call_func_with_args(
File "/scratch/ybliang/work/repos/pytorch/functorch/_src/aot_autograd.py", line 255, in call_func_with_args
out = normalize_as_list(f(args))
File "/scratch/ybliang/work/repos/pytorch/torch/_inductor/compile_fx.py", line 185, in run
return model(new_inputs)
File "/scratch/ybliang/work/repos/pytorch/torch/_inductor/compile_fx.py", line 202, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/scratch/ybliang/work/env/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 462, in preserve_rng_state
torch.cuda.set_rng_state(cuda_rng)
File "/scratch/ybliang/work/repos/pytorch/torch/cuda/random.py", line 64, in set_rng_state
_lazy_call(cb)
File "/scratch/ybliang/work/repos/pytorch/torch/cuda/__init__.py", line 176, in _lazy_call
callable()
File "/scratch/ybliang/work/repos/pytorch/torch/cuda/random.py", line 62, in cb
default_generator.set_state(new_state_copy)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
### Minified repro
_No response_
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 10 |
4,318 | 88,227 |
Enable `torch.topk` to support `stable` flag
|
triaged, module: sorting and selection
|
### 🚀 The feature, motivation and pitch
Referring to issue #88184
It seems that the selection algorithm used to implement `topk` is unstable, which makes it return arbitrary indices for duplicate elements of the input tensor.
Code example
```python
import numpy as np
import torch
x = np.array([-1, -1, -1, -1])
for k in range(1, 5):
val, index = x[np.argpartition(x, -k)[-k:]], np.argpartition(x, -k)[-k:]
print(f"top_val: {val}, Index: {index}")
# output
# top_val: [-1], Index: [3]
# top_val: [-1 -1], Index: [2 3]
# top_val: [-1 -1 -1], Index: [1 2 3]
# top_val: [-1 -1 -1 -1], Index: [0 1 2 3]
# torch.topk
y = torch.tensor([-1, -1, -1, -1])
for k in range(1, 5):
val, index = torch.topk(y, k)
print(f"top_val: {val}, Index: {index}")
# output
# top_val: tensor([-1]), Index: tensor([2])
# top_val: tensor([-1, -1]), Index: tensor([2, 3])
# top_val: tensor([-1, -1, -1]), Index: tensor([2, 3, 0])
# top_val: tensor([-1, -1, -1, -1]), Index: tensor([2, 3, 0, 1])
```
Adding a `stable` flag would solve this problem; a workaround sketch for the meantime is shown below.
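The sketch assumes a 1-D input and leans on the existing `stable=True` flag of `torch.sort` rather than any new `topk` API:
```python
import torch

def stable_topk(x, k):
    # A stable descending sort keeps the original relative order among equal
    # values, so the indices returned for duplicates are deterministic.
    values, indices = torch.sort(x, descending=True, stable=True)
    return values[:k], indices[:k]

y = torch.tensor([-1, -1, -1, -1])
for k in range(1, 5):
    val, index = stable_topk(y, k)
    print(f"top_val: {val}, Index: {index}")
# Indices now come out as [0], [0, 1], [0, 1, 2], [0, 1, 2, 3]
```
This trades the partial-selection performance of `topk` for a full sort, which is why a native `stable` flag would still be preferable.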
Thanks.
### Alternatives
_No response_
### Additional context
_No response_
| 6 |
4,319 | 88,221 |
Add torch.tensor replacement and int_tensor prim
|
no-stale
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #88221
| 6 |
4,320 | 93,598 |
[Inductor] incorrect result of vision_maskrcnn
|
triaged, bug
|
### 🐛 Describe the bug
The vision_maskrcnn model from torchbench produces an incorrect result when using inductor:
RuntimeError: The size of tensor a (36) must match the size of tensor b (34) at non-singleton dimension 0
Reproduce command:
python benchmarks/dynamo/torchbench.py --accuracy --float32 -dcpu --inductor-settings --float32 -n50 --inductor --no-skip --quiet -k "vision_maskrcnn"
### Error logs
cpu eval vision_maskrcnn
Traceback (most recent call last):
File "benchmarks/dynamo/torchbench.py", line 349, in <module>
main(TorchBenchmarkRunner(), original_dir)
File "/home/bzheng/workspace/pytorch/benchmarks/dynamo/common.py", line 1532, in main
return maybe_fresh_cache(run, args.cold_start_latency and args.only)(
File "/home/bzheng/workspace/pytorch/benchmarks/dynamo/common.py", line 772, in inner
return fn(*args, **kwargs)
File "/home/bzheng/workspace/pytorch/benchmarks/dynamo/common.py", line 1845, in run
runner.run_one_model(
File "/home/bzheng/workspace/pytorch/benchmarks/dynamo/common.py", line 1267, in run_one_model
status = self.check_accuracy(
File "/home/bzheng/workspace/pytorch/benchmarks/dynamo/common.py", line 1104, in check_accuracy
if not same(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 774, in same
return len(ref) == len(res) and all(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 775, in <genexpr>
same(ai, bi, fp64_refi, cos_similarity, tol, equal_nan, exact_dtype)
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 785, in same
same(
File "/home/bzheng/workspace/pytorch/torch/_dynamo/utils.py", line 831, in same
if torch.allclose(ref, res, atol=tol, rtol=tol, equal_nan=equal_nan):
RuntimeError: The size of tensor a (36) must match the size of tensor b (34) at non-singleton dimension 0
ERROR
cpu gmean=nanx mean=nanx
vision_maskrcnn gmean=nanx mean=nanx
0 gmean=nanx mean=nanx
0.0000 gmean=nanx mean=nanx
### Minified repro
python benchmarks/dynamo/torchbench.py --accuracy --float32 -dcpu --inductor-settings --float32 -n50 --inductor --no-skip --quiet -k "vision_maskrcnn"
| 24 |
4,321 | 88,196 |
cudagraphify Dynamo's nvFuser backend
|
open source, module: nvfuser, module: dynamo, ciflow/inductor, no-stale
|
This PR rewrites the "nvprims_nvfuser" code to reuse the `cudagraphify` function.
cc @kevinstephano @jjsjann123 @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 3 |
4,322 | 88,194 |
Add a config option to raise errors instead of warnings in nvFuser integration
|
triaged, module: nvfuser
|
FWIW, this test will be less fragile if you have a way to convert nvfuser warnings into true errors. Otherwise, if someone adds an unrelated warning that this test happens to exercise, this test will start failing
_Originally posted by @ezyang in https://github.com/pytorch/pytorch/pull/88186#discussion_r1010485596_
cc @kevinstephano @jjsjann123
| 1 |
4,323 | 88,192 |
[docs] torch.is_neg/torch.Tensor.is_neg not documented
|
module: docs, triaged
|
### 📚 The doc issue
It seems to be public but there is no documentation.
```python
>>> torch.is_neg(t)
False
>>> torch.is_neg.__doc__
>>>
```
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
| 0 |
4,324 | 88,191 |
`torch.nn.RReLU` not reporting `lower > upper` on CUDA
|
module: cuda, triaged
|
### 🐛 Describe the bug
If we set `lower > upper` for a `torch.nn.RReLU` layer, no error is reported when the layer is constructed, or even when it is applied to a CUDA tensor. The problem is discovered only when the layer is applied to a CPU tensor.
``` python
m = torch.nn.RReLU(lower=0.3, upper=0.2)
x = torch.tensor([-1., -1, -1, -1]).cuda()
mx = m(x)
print(mx)
y = torch.tensor([-1., -1, -1, -1])
my = m(y)
```
Run the code above and you get:
```
tensor([-0.2930, -0.2569, -0.2189, -0.2043], device='cuda:0')
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-3-44e43dbfd4d8>](https://localhost:8080/#) in <module>
4 print(mx)
5 y = torch.tensor([-1., -1, -1, -1])
----> 6 my = m(y)
2 frames
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in rrelu(input, lower, upper, training, inplace)
1673 result = torch.rrelu_(input, lower, upper, training)
1674 else:
-> 1675 result = torch.rrelu(input, lower, upper, training)
1676 return result
1677
RuntimeError: Expected from <= to to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
Maybe we can validate `lower` and `upper` immediately when the layer is constructed. This would barely affect runtime performance since the check runs only once, not every time the layer is applied to a tensor; a sketch of such a check follows.
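A minimal sketch of the proposed check, written as a hypothetical subclass for illustration rather than a change to `torch.nn.RReLU` itself:
```python
import torch

class CheckedRReLU(torch.nn.RReLU):
    """Hypothetical wrapper that validates the bounds once, at construction."""

    def __init__(self, lower=1.0 / 8, upper=1.0 / 3, inplace=False):
        if lower > upper:
            raise ValueError(
                f"RReLU expects lower <= upper, but got lower={lower}, upper={upper}"
            )
        super().__init__(lower=lower, upper=upper, inplace=inplace)

# Raises immediately, instead of only when the layer later sees a CPU tensor:
# CheckedRReLU(lower=0.3, upper=0.2)
```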
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.15 (default, Oct 12 2022, 19:14:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @ngimel
| 0 |
4,325 | 88,189 |
Moving tensor to GPU by .cuda() gets stucked when AMD Secure Encripted Virtualization (SEV) is activated
|
module: cuda, module: rocm, triaged
|
### 🐛 Describe the bug
My workstation is equipped with an AMD Threadripper 3960x and 30-series GPUs, running Ubuntu 22.04.1.
Before I activated AMD SEV, everything was normal. I activated AMD SEV by:
1. changing the line `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"` to `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem_encrypt=on kvm_amd.sev=1"` in /etc/default/grub to enable AMD SEV,
2. running `sudo update-grub`,
3. rebooting.
After that, the `.to('cuda')` and `.cuda()` operations get stuck for PyTorch tensors. I tested in the terminal
```
import torch
a = torch.rand(5).cuda()
```
and it got stuck (no response for a long time). Meanwhile, I found that the GPU memory usage did not change and only one CPU core was busy.
After I disabled AMD SEV by restoring `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"` in /etc/default/grub, the bug disappeared.
### Versions
CPU: AMD Threadripper 3960x
OS: Ubuntu 22.04.01
Linux kernel: 5.15.0-52-generic
GCC version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Tested driver: nvidia-driver-510, nvidia-driver-520
Tested pytorch version: 1.12, 1.3 (latest)
Tested python version: 3.10.4, 3.10.6.
cc @ngimel @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport
| 0 |
4,326 | 88,185 |
`torch.mm` Trigger RuntimeError with UndefinedBehaviorSanitizer
|
triaged, module: sanitizers
|
### 🐛 Describe the bug
A test case for `torch.mm` aborted because of a runtime error when run under the undefined behavior sanitizer. Without sanitizers, the program terminates normally.
Reproduction:
```python
import torch
def test():
arg_1_tensor = torch.rand([0, 64], dtype=torch.float32)
arg_1 = arg_1_tensor.to_sparse()
arg_2_tensor = torch.rand([64, 1], dtype=torch.float32)
arg_2 = arg_2_tensor.to_sparse()
res = torch.mm(arg_1,arg_2,)
test()
```
Error log:
```bash
/home/yuyao/dev/pytorch/c10/core/TensorImpl.h:1495:38: runtime error: applying non-zero offset 8 to null pointer
```
It seems that the error can be triggered only if we use sparse tensors.
### Versions
```
PyTorch version: 1.14.0a0+gita86278b
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 11.1.0-6
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.14.0a0+gita86278b
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
```
| 0 |
4,327 | 88,148 |
☂️ Issues that trigger crashes due to corner-case API usages
|
triaged, module: edge cases
|
### 🐛 Describe the bug
These do not seem like real user issues, but were detected using some sort of API fuzzing:
- https://github.com/pytorch/pytorch/issues/87960
- https://github.com/pytorch/pytorch/issues/87961
- https://github.com/pytorch/pytorch/issues/87963
- https://github.com/pytorch/pytorch/issues/87964
- https://github.com/pytorch/pytorch/issues/94594
Sanitizer detected issues:
- https://github.com/pytorch/pytorch/issues/88724
- https://github.com/pytorch/pytorch/issues/88940
- https://github.com/pytorch/pytorch/issues/88939
### Versions
None
| 2 |
4,328 | 88,147 |
Conv2d is not deterministic when input tensor has different strides
|
module: convolution, triaged
|
### 🐛 Describe the bug
I would expect that if I pass two identical tensors through a Conv2d in deterministic mode, they would produce identical outputs. However, this is not the case if the tensors are identical in every way except their stride; in that case the outputs differ. A difference in stride doesn't stop the two tensors from being `torch.equal`, which is especially confusing: two "equal" tensors can produce different outputs.
```python
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
import torch
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
conv = torch.nn.Conv2d(3, 3, kernel_size=2).cuda()
in_a = torch.randn(4, 3, 64, 64).cuda()
out_a = conv(in_a)
in_b = torch.clone(in_a.permute(0, 2, 3, 1), memory_format=torch.contiguous_format).permute(0, 3, 1, 2)
out_b = conv(in_b)
print(in_a.stride()) # (12288, 4096, 64, 1)
print(in_b.stride()) # (12288, 1, 192, 3)
print(torch.equal(in_a, in_b)) # True
print(torch.equal(out_a, out_b)) # False
```
Ideally this should be fixed so that differences in stride don't affect the output of otherwise-deterministic operations, but at minimum the page on [reproducibility](https://pytorch.org/docs/stable/notes/randomness.html) should mention this. A possible user-side workaround is sketched below.
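The sketch assumes an extra copy is affordable; it simply normalizes the memory layout before the convolution so both inputs take the same kernel path:
```python
# Continuing the snippet above: force in_b back to the default contiguous
# layout so its strides match in_a's before running the convolution.
in_b_contig = in_b.contiguous()
out_b_contig = conv(in_b_contig)
print(in_b_contig.stride())              # (12288, 4096, 64, 1), same as in_a
print(torch.equal(out_a, out_b_contig))  # True on this setup: the deterministic
                                         # kernel now sees identical strides
```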
### Versions
Collecting environment information...
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.103.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] geotorch==0.2.0
[pip3] mypy==0.971
[pip3] mypy-boto3-ec2==1.17.41.0
[pip3] mypy-boto3-s3==1.17.41.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.7.3
[pip3] torch==1.13.0+cu116
[pip3] torchmetrics==0.7.0
[pip3] torchvision==0.13.0+cu113
[conda] Could not collect
| 3 |
4,329 | 88,144 |
AvgPool2D output shapes are inconsistent when ceil_mode=True
|
triaged, module: correctness (silent), module: pooling
|
### 🐛 Describe the bug
Certain inputs produce unexpected output shapes from [AvgPool2D](https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html) when `ceil_mode=True`.
## Example 1
```python
>>> t = torch.ones((1, 3, 32, 32))
>>> pool = torch.nn.AvgPool2d(8, stride=4, ceil_mode=True)
>>> pool(t).shape
torch.Size([1, 3, 7, 7])
```
This is surprising behavior because there are 8 possible windows with `stride=4` (not 7) for both height and width, each starting from the following indexes:
```
0
4
8
12
16
20
24
28
```
The final window at index 28 is a partial window, but should be included when `ceil_mode=True`, otherwise `ceil_mode` seems to serve no purpose. (This window does not start in a padding region as there is no padding used here.)
## Example 2
```python
>>> t = torch.ones((1, 3, 4, 4))
>>> pool = torch.nn.AvgPool2d(3, stride=3, ceil_mode=True)
>>> pool(t).shape
torch.Size([1, 3, 2, 2])
```
With `stride=3`, we have 2 potential start positions:
```
0
3
```
Torch chooses to use both of them. But based on the behavior observed in Example 1 (where the last partial window was discarded), we would have expected an output shape of `[1, 3, 1, 1]`. The shape rule that appears to explain both results is sketched below.
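For reference, this is my reading of the output-size rule in ATen's pooling shape helper (a sketch, not authoritative documentation); it reproduces both examples:
```python
import math

def pool_output_size(input_size, kernel_size, stride, padding=0, ceil_mode=False):
    if ceil_mode:
        # ceil_mode rounds the window count up, but this only adds a partial
        # window when (input_size + 2*padding - kernel_size) is not divisible
        # by the stride.
        out = math.ceil((input_size + 2 * padding - kernel_size) / stride) + 1
        # A window that would start entirely beyond the input is dropped again.
        if (out - 1) * stride >= input_size + padding:
            out -= 1
    else:
        out = math.floor((input_size + 2 * padding - kernel_size) / stride) + 1
    return out

print(pool_output_size(32, 8, 4, ceil_mode=True))  # 7 -> matches Example 1
print(pool_output_size(4, 3, 3, ceil_mode=True))   # 2 -> matches Example 2
```
Under this rule, Example 1 yields 7 because `(32 - 8)` is exactly divisible by the stride, so `ceil_mode` never gets to add the partial window starting at index 28, while Example 2 yields 2 because `(4 - 3)` is not divisible by 3. That explains the observed results, though it arguably still deserves an explicit mention in the docs.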
### Versions
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.2.5
Python version: 3.7.10 (default, Jun 3 2021, 00:02:01) [GCC 7.3.1 20180712 (Red Hat 7.3.1-13)] (64-bit runtime)
Python platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.2.5
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.0
[pip3] torch==1.12.1
[pip3] torch-neuron==1.12.1.2.0.0.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519
| 6 |
4,330 | 88,142 |
Refactor `torch.return_types.topk` to behave like a `namedtuple` or a `dict`
|
triaged, enhancement, module: sorting and selection
|
### 🚀 The feature, motivation and pitch
When you use `torch.topk`, it returns a namedtuple-like object of type `torch.return_types.topk`. Unfortunately, it is not possible to use it as a dictionary. For example:
```python
import torch
x = torch.topk(torch.randn(10, 10), k=5)
assert isinstance(x, torch.return_types.topk)
# This is okay:
x.indices
# This will give an error:
x['indices']
```
Ideally, accessing the returned item as a dict would make it more useful and reduce the amount of post-processing needed. Alternatively, it could behave like a namedtuple by allowing the use of the documented method [`_asdict()`](https://docs.python.org/3/library/collections.html#collections.somenamedtuple._asdict) (which is officially supported and named this way to avoid collision). For example:
```python
# Potential improvement:
di = x._asdict()
di['indices'] # Should show the indices
di['values'] # should show the values
```
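As a stopgap, `torch.return_types.topk` already iterates like a plain tuple, so it can be converted by hand (a sketch of a workaround, not a proposed API):
```python
import torch

x = torch.topk(torch.randn(10, 10), k=5)
# The return type unpacks as (values, indices), so zipping the field names
# over it gives the dict-style access requested above.
di = dict(zip(("values", "indices"), x))
print(di["values"].shape, di["indices"].shape)
```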
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
4,331 | 88,137 |
Add eq, to, masked_select, index_select, narrow to nested tensors
|
triaged, module: nestedtensor
|
### 🚀 The feature, motivation and pitch
I am working on a better implementation of SQL operators in [TQP](https://www.vldb.org/pvldb/vol15/p2811-he.pdf) and I am exploring how to leverage nested tensors for this. However, in order to use nested tensors in practice without bloating memory, we would love to have support for `eq`, `to`, `masked_select`, `index_select` and `narrow`.
### Alternatives
Use `to_padded_tensor` but this will increase memory pressure.
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @mikaylagawarecki
| 2 |
4,332 | 88,136 |
Placing LSTM model on bfloat16 on GPU causes error
|
module: rnn, module: cuda, triaged, module: bfloat16
|
### 🐛 Describe the bug
```python
import torch.nn as nn
import torch as th

# If using CPU as the device, the following code runs perfectly
rnn = nn.LSTM(10, 20, 2).to(device="cpu", dtype=th.bfloat16)
input = th.randn(5, 3, 10).to(device="cpu", dtype=th.bfloat16)
h0 = th.randn(2, 3, 20).to(device="cpu", dtype=th.bfloat16)
c0 = th.randn(2, 3, 20).to(device="cpu", dtype=th.bfloat16)
output, (hn, cn) = rnn(input, (h0, c0))

# However, changing the device to GPU gives an error
rnn = nn.LSTM(10, 20, 2).to(device="cuda", dtype=th.bfloat16)
input = th.randn(5, 3, 10).to(device="cuda", dtype=th.bfloat16)
h0 = th.randn(2, 3, 20).to(device="cuda", dtype=th.bfloat16)
c0 = th.randn(2, 3, 20).to(device="cuda", dtype=th.bfloat16)
output, (hn, cn) = rnn(input, (h0, c0))
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_20268/1164814484.py in <module>
3 h0 = th.randn(2, 3, 20).to(device="cuda", dtype=th.bfloat16)
4 c0 = th.randn(2, 3, 20).to(device="cuda", dtype=th.bfloat16)
----> 5 output, (hn, cn) = rnn(input, (h0, c0))
~\anaconda3\envs\py37\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
~\anaconda3\envs\py37\lib\site-packages\torch\nn\modules\rnn.py in forward(self, input, hx)
760 if batch_sizes is None:
761 result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
--> 762 self.dropout, self.training, self.bidirectional, self.batch_first)
763 else:
764 result = _VF.lstm(input, batch_sizes, hx, self._flat_weights, self.bias,
RuntimeError: "_thnn_fused_lstm_cell_cuda" not implemented for 'BFloat16'
```
```python
# I've also tried dtype float16: it works on GPU but doesn't work on CPU.
rnn = nn.LSTM(10, 20, 2).to(device="cpu", dtype=th.float16)
input = th.randn(5, 3, 10).to(device="cpu", dtype=th.float16)
h0 = th.randn(2, 3, 20).to(device="cpu", dtype=th.float16)
c0 = th.randn(2, 3, 20).to(device="cpu", dtype=th.float16)
output, (hn, cn) = rnn(input, (h0, c0))
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_8944/1937453628.py in <module>
3 h0 = th.randn(2, 3, 20).to(device="cpu", dtype=th.float16)
4 c0 = th.randn(2, 3, 20).to(device="cpu", dtype=th.float16)
----> 5 output, (hn, cn) = rnn(input, (h0, c0))
~\anaconda3\envs\py37\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
~\anaconda3\envs\py37\lib\site-packages\torch\nn\modules\rnn.py in forward(self, input, hx)
760 if batch_sizes is None:
761 result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
--> 762 self.dropout, self.training, self.bidirectional, self.batch_first)
763 else:
764 result = _VF.lstm(input, batch_sizes, hx, self._flat_weights, self.bias,
RuntimeError: "sigmoid_cpu" not implemented for 'Half'
```
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.7.11 (default, Jul 27 2021, 09:42:29) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22000-SP0
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 Ti
Nvidia driver version: 516.59
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] numpy-ext==0.9.6
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchfile==0.1.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py37h2bbff1b_0
[conda] mkl_fft 1.3.1 py37h277e83a_0
[conda] mkl_random 1.2.2 py37hf11a4ad_0
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.20.3 py37ha4e8547_0
[conda] numpy-base 1.20.3 py37hc2deb75_0
[conda] numpy-ext 0.9.6 pypi_0 pypi
[conda] pytorch 1.11.0 py3.7_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py37_cu113 pytorch
[conda] torchfile 0.1.0 py_0 conda-forge
[conda] torchvision 0.12.0 py37_cu113 pytorch
cc @zou3519 @ngimel
| 3 |
4,333 | 88,109 |
Python Dispatcher registrations beyond BackendSelect do nothing
|
triaged, module: dispatch, module: python dispatcher
|
### 🐛 Describe the bug
Take https://github.com/albanD/subclass_zoo/blob/main/use_cpu_for_rng.py and replace BackendSelect with CUDA. The test fails because we never hit the Python dispatcher. Probably BackendSelect needs to restore the Python Dispatcher key before redispatching.
### Versions
master
| 0 |
4,334 | 88,106 |
WIP: feat: LARS optimizer
|
module: optimizer, module: mkldnn, open source, release notes: nn, ciflow/inductor
|
Followup to [#6323](https://github.com/pytorch/vision/issues/6323).
Addition of LARS optimizer.
- [ ] LARS optimizer
- [ ] Tests
- [ ] Documentation
- [ ] Multi-Tensor support
- [ ] Extra params (e.g., ~~maximize~~, ~~differentiable~~, foreach)
- [ ] .pyi
Reference implementations: [[1](https://lightning-flash.readthedocs.io/en/0.5.0/_modules/flash/core/optimizers/lars.html#LARS)]
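For context, a minimal single-tensor sketch of the LARS update rule (illustration only; this is not the implementation in this PR, and the momentum, foreach, and other checklist items above are omitted):
```python
import torch
from torch.optim.optimizer import Optimizer

class LARSSketch(Optimizer):
    """Bare-bones LARS: SGD scaled per layer by a trust ratio."""

    def __init__(self, params, lr=1e-3, weight_decay=0.0,
                 trust_coefficient=0.001, eps=1e-8):
        defaults = dict(lr=lr, weight_decay=weight_decay,
                        trust_coefficient=trust_coefficient, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                w_norm, g_norm = torch.norm(p), torch.norm(g)
                # Layer-wise trust ratio: ||w|| / (||g|| + wd * ||w||),
                # falling back to 1.0 when either norm is zero.
                if w_norm > 0 and g_norm > 0:
                    local_lr = (group["trust_coefficient"] * w_norm /
                                (g_norm + group["weight_decay"] * w_norm +
                                 group["eps"])).item()
                else:
                    local_lr = 1.0
                update = g + group["weight_decay"] * p
                p.add_(update, alpha=-group["lr"] * local_lr)
```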
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @soumith @voznesenskym @penguinwu @anijain2305 @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 20 |
4,335 | 88,103 |
ProcessGroupNCCL watchdog can't catch NCCL comm initialization issues
|
oncall: distributed, triaged, module: c10d
|
### 🐛 Describe the bug
The ProcessGroupNCCL watchdog is designed to catch hanging collectives and tear down the process, but in the special case of the first collective, ProcessGroupNCCL initializes the NCCL communicators, and that initialization can itself hang. Since no comm future object has been created at that point, the watchdog cannot handle this situation.
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
4,336 | 88,096 |
Add nondeterministic alert to `torch.Tensor.scatter()`
|
triaged, module: determinism
|
### 🐛 Describe the bug
``` python
torch.use_deterministic_algorithms(True)
input = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9])
index = torch.tensor([0, 1, 1, 1, 2])
src = torch.tensor([11, 12, 13, 14, 15])
result = torch.Tensor.scatter(input, 0, index, src)
print(result)
inputCuda = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9]).cuda()
indexCuda = torch.tensor([0, 1, 1, 1, 2]).cuda()
srcCuda = torch.tensor([11, 12, 13, 14, 15]).cuda()
resultCuda = torch.Tensor.scatter(inputCuda, 0, indexCuda, srcCuda)
print(resultCuda)
```
Run this code and you will get different results on CPU and on GPU:
```
tensor([11, 14, 15, 4, 5, 6, 7, 8, 9])
tensor([11, 12, 15, 4, 5, 6, 7, 8, 9], device='cuda:0')
```
The value at position 1 is overwritten 3 times, with 12, 13, and 14. It looks like CUDA applies the writes in reverse index order. This nondeterminism is not reported even with `torch.use_deterministic_algorithms(True)`.
Related: #55516, #50750
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.26
Python version: 3.7.15 (default, Oct 12 2022, 19:14:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @mruberry @kurtamohler
| 2 |
4,337 | 88,081 |
out of memory with pytorch version after 1.8.1
|
module: cuda, module: memory usage, triaged, module: CUDACachingAllocator
|
### 🐛 Describe the bug
I realized that my code can only run with PyTorch 1.8.1.
PyTorch 1.11, 1.13, and 1.8.2 all give OOM errors.
The code is pretty complicated, so I cannot provide a minimal example.
On PyTorch 1.8.1, my code runs on an RTX 2080 Ti (11 GB) with no problem.
But with PyTorch 1.11, 1.13, and 1.8.2, I get OOM errors even on an RTX 3090 (24 GB):
CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 23.70 GiB total capacity; 1.21 GiB already allocated; 6.69 MiB free; 1.22 GiB reserved in total by PyTorch)
The error is also strange since around 22 GB of memory is unaccounted for.
PyTorch 1.11, 1.13, and 1.8.2 trigger the OOM error at different lines.
Below is the memory summary right before the line that triggers the OOM error with PyTorch 1.8.2 and CUDA_LAUNCH_BLOCKING=1, with
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True

### Versions
Collecting environment information...
PyTorch version: 1.8.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.17.0-rc1
Libc version: glibc-2.10
Python version: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-131-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: 11.1.105
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 470.141.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.8.1
[pip3] torch-cluster==1.5.9
[pip3] torch-geometric==2.0.1
[pip3] torch-scatter==2.0.8
[pip3] torch-sparse==0.6.12
[pip3] torch-spline-conv==1.2.1
[pip3] torchaudio==0.8.0a0+e4e171a
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.9.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 h8f6ccaa_8 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py37h8f50634_2 conda-forge
[conda] mkl_fft 1.3.0 py37h54f3939_0
[conda] mkl_random 1.2.0 py37h9fdb41a_1 conda-forge
[conda] numpy 1.19.2 py37h54aff64_0
[conda] numpy-base 1.19.2 py37hfa32c7d_0
[conda] pyg 2.0.1 py37_torch_1.8.0_cu102 pyg
[conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-cluster 1.5.9 py37_torch_1.8.0_cu102 pyg
[conda] pytorch-scatter 2.0.8 py37_torch_1.8.0_cu102 pyg
[conda] pytorch-sparse 0.6.12 py37_torch_1.8.0_cu102 pyg
[conda] pytorch-spline-conv 1.2.1 py37_torch_1.8.0_cu102 pyg
[conda] torchaudio 0.8.1 py37 pytorch
[conda] torchmetrics 0.8.2 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.9.1 py37_cu102 pytorch
---------------------------------------------------------------------------------------
Collecting environment information...
PyTorch version: 1.8.2
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-194-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.8.2
[pip3] torchaudio==0.8.2
[pip3] torchvision==0.9.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.3 py38h14f4228_0
[conda] numpy-base 1.23.3 py38h31eccc5_0
[conda] pytorch 1.8.2 py3.8_cuda11.1_cudnn8.0.5_0 pytorch-lts
[conda] torchaudio 0.8.2 py38 pytorch-lts
[conda] torchvision 0.9.2 py38_cu111 pytorch-lts
cc @ngimel
| 0 |
4,338 | 88,072 |
convert torch.jit.script model to ONNX get wrong result
|
module: onnx, triaged, onnx-needs-info
|
### 🐛 Describe the bug
I want to convert a `torch.jit.script` model to ONNX; however, the inference results of the ONNX model and raw PyTorch differ. My code snippet:
```python
import torch.nn as nn
import torch
import torch.nn.functional as F
import numpy as np
import onnxruntime as ort
import pdb
class test_net(nn.Module):
def __init__(self,):
super(test_net, self).__init__()
self.model = nn.Conv3d(3,64,kernel_size=(1,3,3), stride=(2,1,2))
self.relu = nn.ReLU()
self.relu6 = nn.ReLU6()
self.relu66 = nn.ReLU6()
def forward(self, x):
out1 = self.model(x)
f_mean = torch.mean(out1) # -> ReduceMean
out2 = torch.div(out1, f_mean)
o1 = self.relu(out2)
o2 = self.relu6(out2)
o3 = self.relu66(out2) # gets optimized away in script mode
out = torch.cat([o1,o2,o3])
return out
# build and run the model
imgh, imgw = 24, 94
net = test_net().eval() # must call eval() if batchnorm/dropout layers exist so that BN statistics are not updated
dummy_input = torch.randn(1, 3, 3, imgh, imgw)# n c d h w
torch_out = net.forward(dummy_input)# net(dummy_input)
# export onnx
dynamic_axes = {'input': {3: 'height', 4: 'width'}, 'output': {3: 'height', 4: 'width'}} # configure dynamic resolution
onnx_pth = "test-conv-relu.onnx"
# pass in a ScriptModule
net_script = torch.jit.script(test_net().eval()) # create a ScriptModule
# additionally needs example_outputs, used to get the output shape and dtype without running the model
torch.onnx.export(net_script, dummy_input, onnx_pth, input_names=['input'], output_names=['output'], opset_version=11,
dynamic_axes=dynamic_axes)
# example_outputs=[torch_out]
onnx_session = ort.InferenceSession(onnx_pth, providers=['CUDAExecutionProvider'])
onnx_blob = dummy_input.data.numpy()
onnx_out = onnx_session.run(None, {'input':onnx_blob})[0]
print('mean diff = ', np.mean(onnx_out - torch_out.data.numpy()))
```
There is a significant difference between `onnx_out` and `torch_out`.
PS: It seems that in PyTorch 1.13, `torch.onnx.export` no longer has the `example_outputs` parameter.
### Versions
Pytorch Version: 1.13.0a0+08820cb
| 5 |
4,339 | 88,062 |
Cannot import `traverse_dps` from torch.utils.data.graph
|
triaged, module: data
|
### 🐛 Describe the bug
Cannot import `traverse_dps` from `torch.utils.data.graph`.
```
from torchtext.datasets import WikiText2
```
does not work, but instead throws the error:
```
[/usr/local/lib/python3.7/dist-packages/torchdata/datapipes/utils/_visualization.py](https://localhost:8080/#) in <module>
11
12 from torch.utils.data.datapipes.iter.combining import _ChildDataPipe, IterDataPipe
---> 13 from torch.utils.data.graph import traverse_dps
14
15 if TYPE_CHECKING:
ImportError: cannot import name 'traverse_dps' from 'torch.utils.data.graph' (/usr/local/lib/python3.7/dist-packages/torch/utils/data/graph.py)
```
### Versions
Using Google Colab. Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.13.0
[pip3] torchaudio==0.12.1+cu113
[pip3] torchdata==0.5.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.13.1+cu113
[conda] Could not collect
cc @VitalyFedyunin @ejguan @NivekT
| 3 |
4,340 | 88,053 |
Different behaviour in sparse matmul
|
module: sparse, triaged
|
The same code gives an error when a **sparse** tensor is used, but works without one.
The following is a working example of the code **without** a sparse tensor:
``` python
a = torch.randn(2, 3, 3).requires_grad_(True)
print(a)
b = torch.randn(3, 1, requires_grad=True)
print(b)
y = torch.matmul(a, b)
print(y)
y.sum().backward()
print(a.grad)
```
This is the same example code using a sparse tensor:
``` python
a = torch.randn(2, 3, 3).to_sparse().requires_grad_(True)
print(a)
b = torch.randn(3, 1, requires_grad=True)
print(b)
y = torch.matmul(a, b)
print(y)
y.sum().backward()
print(a.grad)
```
The code with the sparse variant gives the following error:
```
NotImplementedError Traceback (most recent call last)
Cell 22 in <cell line: 7>()
[4] b = torch.randn(3, 1, requires_grad=True)
[5] print(b)
----> [7] y = torch.matmul(a, b)
[8] print(y)
[10] y.sum().backward()
NotImplementedError: Tensors of type SparseTensorImpl do not have is_contiguous
```
I may well be missing something, but I think it would be better if matmul followed the same behaviour for dense and sparse tensors.
If I am misunderstanding something, how can I replicate the same behaviour in the sparse case?
Thank you!
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Could not collect
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.12 (tags/v3.9.12:b28265d, Mar 23 2022, 23:52:46) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] Could not collect
[conda] Could not collect
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 0 |
4,341 | 88,047 |
`torch.nn.CTCLoss` Trigger heap-buffer-overflow under AddressSanitizer
|
module: nn, module: loss, triaged, actionable, module: sanitizers, module: edge cases
|
### 🐛 Describe the bug
A test case for `torch.nn.CTCLoss` aborted because of a heap-buffer-overflow error when run under AddressSanitizer.
Reproduction:
```python
import torch
def test():
ctc_loss = torch.nn.CTCLoss()
arg_1_0 = torch.rand([50, 16, 20], dtype=torch.float32).clone()
arg_1_1 = torch.randint(-512,4,[16, 30], dtype=torch.int64).clone()
arg_1_2 = torch.randint(-4,0,[16], dtype=torch.int64).clone()
arg_1_3 = torch.randint(-512,0,[16], dtype=torch.int64).clone()
arg_1 = [arg_1_0,arg_1_1,arg_1_2,arg_1_3,]
res = ctc_loss(*arg_1)
test()
```
Error log:
```bash
=================================================================
==202==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x609000000284 at pc 0x7fecebd6b267 bp 0x7ffe509837b0 sp 0x7ffe509837a8
WRITE of size 4 at 0x609000000284 thread T0
#0 0x7fecebd6b266 (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xdbbd266)
#1 0x7fecebd5fac4 (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xdbb1ac4)
#2 0x7fecebd747ad (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xdbc67ad)
#3 0x7fecee341ece (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0x10193ece)
#4 0x7fecece4375a (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xec9575a)
#5 0x7fececb6bbc6 (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xe9bdbc6)
#6 0x7fecf2acf943 (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0x14921943)
#7 0x7fecf2aced44 (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0x14920d44)
#8 0x7fececb6b53f (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xe9bd53f)
#9 0x7fed129edfed (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so+0x25d8fed)
#10 0x4d23df (/usr/bin/python3.7+0x4d23df)
#11 0x51088a (/usr/bin/python3.7+0x51088a)
#12 0x58fd36 (/usr/bin/python3.7+0x58fd36)
#13 0x50c4fb (/usr/bin/python3.7+0x50c4fb)
#14 0x5b4ee5 (/usr/bin/python3.7+0x5b4ee5)
#15 0x6005a2 (/usr/bin/python3.7+0x6005a2)
#16 0x607795 (/usr/bin/python3.7+0x607795)
#17 0x60785b (/usr/bin/python3.7+0x60785b)
#18 0x60a435 (/usr/bin/python3.7+0x60a435)
#19 0x64db81 (/usr/bin/python3.7+0x64db81)
#20 0x64dd2d (/usr/bin/python3.7+0x64dd2d)
#21 0x7fed1d089c86 (/lib/x86_64-linux-gnu/libc.so.6+0x21c86)
#22 0x5b6369 (/usr/bin/python3.7+0x5b6369)
0x609000000284 is located 0 bytes to the right of 4-byte region [0x609000000280,0x609000000284)
allocated by thread T0 here:
#0 0x7fed1d55f308 (/usr/lib/llvm-6.0/lib/clang/6.0.0/lib/linux/libclang_rt.asan-x86_64.so+0x106308)
#1 0x7fecdddc5101 (/usr/local/lib/python3.7/dist-packages/torch/lib/libc10.so+0x16a101)
#2 0x7fecddd4f9a1 (/usr/local/lib/python3.7/dist-packages/torch/lib/libc10.so+0xf49a1)
#3 0x7fecebbf3ada (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xda45ada)
#4 0x7fecebbf481e (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xda4681e)
#5 0x7fecebbf6240 (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xda48240)
#6 0x7fecebbf320d (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xda4520d)
#7 0x7fecebbf2ffb (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xda44ffb)
#8 0x7fecebd5f70c (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xdbb170c)
#9 0x7fecebd747ad (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xdbc67ad)
#10 0x7fecee341ece (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0x10193ece)
SUMMARY: AddressSanitizer: heap-buffer-overflow (/usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_cpu.so+0xdbbd266)
Shadow bytes around the buggy address:
0x0c127fff8000: fa fa fa fa fa fa fa fa fd fa fa fa fa fa fa fa
0x0c127fff8010: fa fa fa fa fa fa fa fa 04 fa fa fa fa fa fa fa
0x0c127fff8020: fa fa fa fa fa fa fa fa fd fd fa fa fa fa fa fa
0x0c127fff8030: fa fa fa fa fa fa fa fa 00 00 fa fa fa fa fa fa
0x0c127fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c127fff8050:[04]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==202==ABORTING
```
### Versions
```
PyTorch version: 1.14.0a0+gita86278b
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 11.1.0-6
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.14.0a0+gita86278b
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
```
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 2 |
4,342 | 93,596 |
Minifier doesn't work on DebertaForQuestionAnswering
|
triaged, bug
|
### 🐛 Describe the bug
Build pytorch e4a8661ab84022c1bff622c6d2f6e679180b1df5
Apply this patch
```
diff --git a/torch/_dynamo/optimizations/backends.py b/torch/_dynamo/optimizations/backends.py
index 660e7a5ca56..d53733bec73 100644
--- a/torch/_dynamo/optimizations/backends.py
+++ b/torch/_dynamo/optimizations/backends.py
@@ -552,7 +552,8 @@ def cudagraphs_inner(model, inputs, copy_outputs=True):
def aot_autograd(subgraph, **kwargs):
def _wrapped_bw_compiler(*args, **kwargs):
# stop TorchDynamo from trying to compile our generated backwards pass
- return disable(bw_compiler(*args, **kwargs))
+ bwd_func = disable(bw_compiler)(*args, **kwargs)
+ return disable(bwd_func)
bw_compiler = kwargs.get("bw_compiler") or kwargs["fw_compiler"]
kwargs["bw_compiler"] = _wrapped_bw_compiler
```
Deberta fails the accuracy check with aot_eager, but the minifier doesn't do anything
```
$ pp PYTHONUNBUFFERED=1 TORCHDYNAMO_REPRO_AFTER="dynamo" TORCHDYNAMO_REPRO_LEVEL=4 python benchmarks/dynamo/huggingface.py --accuracy --backend aot_eager --training --only DebertaForQuestionAnswering
[2022-10-29 19:30:33,226] torch._dynamo.testing: [WARNING] High loss value alert - 6.63. Can result in unstable gradients.
cuda train DebertaForQuestionAnswering [2022-10-29 19:30:33,509] torch._dynamo.testing: [WARNING] High loss value alert - 6.63. Can result in unstable gradients.
[2022-10-29 19:30:33,626] torch._dynamo.testing: [WARNING] High loss value alert - 6.63. Can result in unstable gradients.
[2022-10-29 19:30:33,731] torch._dynamo.testing: [WARNING] High loss value alert - 6.63. Can result in unstable gradients.
[2022-10-29 19:30:54,042] torch._dynamo.testing: [WARNING] High loss value alert - 6.63. Can result in unstable gradients.
[2022-10-29 19:30:54,050] torch._dynamo.utils: [ERROR] RMSE (res-fp64): 0.00016, (ref-fp64): 0.00000 and shape=torch.Size([50265, 768])
[2022-10-29 19:30:54,050] torch._dynamo.utils: [ERROR] Accuracy failed for key name deberta.embeddings.word_embeddings.weight.grad
```
### Error logs
_No response_
### Minified repro
_No response_
| 2 |
4,343 | 93,593 |
Inductor gives obscure error when FX graph to be compiled returns tuple
|
triaged, bug
|
### 🐛 Describe the bug
```
var_mean = torch.ops.aten.var_mean.correction(add, [2], correction = 0, keepdim = True); add = None
```
triggers an assert error in inductor
### Error logs
```
File "/data/users/ezyang/pytorch-tmp2/torch/_inductor/graph.py", line 299, in run_node
result = super().run_node(n)
File "/data/users/ezyang/pytorch-tmp2/torch/fx/interpreter.py", line 171, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/data/users/ezyang/pytorch-tmp2/torch/_inductor/graph.py", line 267, in output
assert all(
AssertionError: ((TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
load(buf1, i1) / index_expr(1024, torch.float32),
ranges=[1, 128, 1],
origins={var_mean}
)
)), TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
load(buf2, i1) / index_expr(1024, torch.float32),
ranges=[1, 128, 1],
origins={var_mean}
)
))),)
While executing return ((getitem, getitem_1),)
```
### Minified repro
```
import torch._inductor.overrides
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
from torch.fx.experimental.proxy_tensor import make_fx
# torch version: 1.13.0a0+git3eb2722
# torch cuda version: 11.4
# torch git version: 3eb27229dd74dd0bea434326c471f16c50e558a4
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2021 NVIDIA Corporation
# Built on Sun_Aug_15_21:14:11_PDT_2021
# Cuda compilation tools, release 11.4, V11.4.120
# Build cuda_11.4.r11.4/compiler.30300941_0
# GPU Hardware Info:
# NVIDIA A100-PG509-200 : 8
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, add):
var_mean = torch.ops.aten.var_mean.correction(add, [2], correction = 0, keepdim = True); add = None
return (var_mean,)
args = [((1, 128, 1024), (131072, 1024, 1), torch.float32, 'cuda')]
args = [rand_strided(sh, st, dt, dev) for (sh, st, dt, dev) in args]
mod = make_fx(Repro().to(device="cuda"))(*args)
from torch._inductor.compile_fx import compile_fx_inner
from torch._dynamo.debug_utils import same_two_models
compiled = compile_fx_inner(mod, args)
assert same_two_models(mod, compiled, args, only_fwd=True), "Accuracy failed"
```
| 9 |
4,344 | 93,592 |
Turning on minifier causes bug to go away (on DebertaForMaskedLM)
|
triaged, bug
|
### 🐛 Describe the bug
1. Check out and build pytorch at e4a8661ab84022c1bff622c6d2f6e679180b1df5 (Oct 29, before breaking commit was reverted)
2. Run `python benchmarks/dynamo/huggingface.py --accuracy --backend inductor --training --only DebertaForMaskedLM`, it fails
3. Run `TORCHDYNAMO_REPRO_AFTER=dynamo python benchmarks/dynamo/huggingface.py --accuracy --backend inductor --training --only DebertaForMaskedLM`, it passes
### Error logs
https://www.internalfb.com/intern/paste/P543371878/
### Minified repro
minifier did not work
| 1 |
4,345 | 88,036 |
A segment fault can be triggered in fbgemm_pack_gemm_matrix_fp16
|
module: crash, triaged, module: third_party, module: half
|
### 🐛 Describe the bug
The following code can trigger a segmentation fault:
````python
import torch
import numpy as np
print(torch.__version__)
input = torch.rand([12, 10, 0, 8, 7], dtype=torch.float32)
res = torch.fbgemm_pack_gemm_matrix_fp16(
input
)
````
Output:
````
1.13.0+cu117
Segmentation fault (core dumped)
````
### Versions
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 11.0.0-2~ubuntu20.04.1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-PCIE-16GB
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.0
[conda] numpy 1.23.4 pypi_0 pypi
[conda] torch 1.13.0 pypi_0 pypi
| 0 |
4,346 | 88,027 |
getting error error: namespace "cub" has no member "Debug" when try to build v1.8.2 with CUDA 11.6
|
module: cuda, triaged
|
### 🐛 Describe the bug
I am trying to build PyTorch v1.8.2 with CUDA 11.6, but I am getting `error: namespace "cub" has no member "Debug"`. I see there is a post https://github.com/pytorch/pytorch/pull/66219 for a similar issue, but I am not able to fix the error with that information. Just wondering: is it possible to build PyTorch v1.8.2 with CUDA 11.6?
I ran the command below with branch v1.8.2 checked out:
USE_LMDB=1 USE_OPENCV=1 MAX_JOBS=2 python setup.py bdist_wheel
and got the error below:
/usr/local/cuda/include/cub/device/dispatch/dispatch_select_if.cuh(417): error: namespace "cub" has no member "Debug"
detected during:
instantiation of "cudaError_t at::native::cub::DeviceSelect::Flagged(void *, size_t &, InputIteratorT, FlagIterator, OutputIteratorT, NumSelectedIteratorT, int, cudaStream_t, __nv_bool) [with InputIteratorT=at::native::cub::CountingInputIterator<int64_t, ptrdiff_t>, FlagIterator=at::native::cub::TransformInputIterator<__nv_bool, at::native::<unnamed>::NonZeroOp<uint8_t>, uint8_t *, ptrdiff_t>, OutputIteratorT=int64_t *, NumSelectedIteratorT=int *]"
/home/mx/projs/pytorch/aten/src/ATen/native/cuda/Nonzero.cu(75): here
instantiation of "void at::native::nonzero_cuda_out_impl<scalar_t>(const at::Tensor &, at::Tensor &) [with scalar_t=uint8_t]"
/home/mx/projs/pytorch/aten/src/ATen/native/cuda/Nonzero.cu(107): here
49 errors detected in the compilation of "/home/mx/projs/pytorch/aten/src/ATen/native/cuda/Nonzero.cu".
CMake Error at torch_cuda_generated_Nonzero.cu.o.Release.cmake:281 (message):
Error generating file
/home/mx/projs/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Nonzero.cu.o
[5044/6004] Building NVCC (Device) obj...cuda_generated_PointwiseOpsKernel.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.17
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 510.85.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.23.3
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.3 py38h14f4228_0
[conda] numpy-base 1.23.3 py38h31eccc5_0
cc @ngimel
| 1 |
4,347 | 88,025 |
[WIP] Composable FSDP Follow-Ups
|
oncall: distributed, triaged, module: fsdp
|
This tracks the follow-ups from my PR stack(s) for implementing composable FSDP.
**High Priority**
- Investigate behavior around `_apply()` and whether pre- and post-`_apply()` hooks are needed/reasonable
- Since composable FSDP requires `use_orig_params=True` and `use_orig_params=True` usually overrides `nn.Module._apply()` to expose the `FlatParameter`s before `_apply()` and restore the original parameters after:
https://github.com/pytorch/pytorch/blob/3c6bddc3f6347ce7d1ed33aee94cdaa953cbc387/torch/distributed/fsdp/fully_sharded_data_parallel.py#L1605-L1610
- The reason for this is to allocate contiguous memory for each `FlatParameter` when performing `.cuda()` (or similar) instead of allocating memory per original parameter and requiring a separate allocation and write back.
- One possibility is to enforce the `device_id` argument for composable FSDP so that FSDP initialization can ensure the sharded `FlatParameter` memory is allocated on GPU directly before setting the original parameters to be views into that memory.
**Medium Priority**
- Define FSDP's contract for buffers (e.g. for ignored modules)
https://github.com/pytorch/pytorch/pull/87923#discussion_r1008537088
- Add sibling shared parameter support
https://github.com/pytorch/pytorch/pull/87923#discussion_r1008550533
- Investigate enabling side streams in `_to_kwargs()` in the root pre-forward
https://github.com/pytorch/pytorch/pull/87915#discussion_r1008406766
- Migrate to module pre-backward hook to avoid registering the pre-backward hook every iteration (on the forward pass output tensors that require gradient)
https://github.com/pytorch/pytorch/pull/87927#discussion_r1009505679
- Investigate the padded unsharded gradient allocation in the post-backward (i.e. can we avoid making a copy by calling `F.pad()`)
https://github.com/pytorch/pytorch/pull/87927#discussion_r1009516921
- Investigate if reference cycle involving `_post_backward_hook_state()` is a concern
https://github.com/pytorch/pytorch/pull/87927#discussion_r1009512801
**Low Priority**
- [BE] Investigate `tree_map()` instead of `_apply_to_tensors()` for mapping the low precision cast for forward pass inputs
https://github.com/pytorch/pytorch/pull/87915#discussion_r1008408406
- `tree_map()` does not seem to currently support `dataclass` and `PackedSequence`, which `_apply_to_tensor()` does. This is not high priority and can be revisited in the future.
- [Done] ~~Add note to `BackwardPrefetch` docs pointing to using the rate limiter if encountering issues from increased memory when enabling the prefetching
https://github.com/pytorch/pytorch/pull/87917#discussion_r1008455071~~
- Verify why `CPUOffload` is a `dataclass` and if there are any plans to add additional attributes
https://github.com/pytorch/pytorch/pull/87920#discussion_r1008470789
- [Done] ~~[BE] Investigate why offloading to CPU in the FSDP constructor happens in `torch.no_grad()` context but not the preceding `module.to()`
https://github.com/pytorch/pytorch/pull/87921#discussion_r1008508533~~
- Decide if the `device_id` constructor argument should support `str` or not (e.g. `device_id="cuda"`)
https://github.com/pytorch/pytorch/pull/87921#discussion_r1008505562
- Investigate if FSDP should permit `FullyShardedDataParallel` instances in `ignored_modules` but simply ignore them and recurse into their child modules to be consistent with the behavior for nested FSDP instances not directly passed to `ignored_modules`
https://github.com/pytorch/pytorch/pull/87921#discussion_r1008502560
- [BE] Investigate if the manual BFS in `_apply_to_modules` is necessary instead of simply looping over `named_modules()` (see `_get_param_to_unflat_param_names()` and `_get_buffer_names()`)
https://github.com/pytorch/pytorch/pull/87921#discussion_r1008494191
- [Done] ~~[BE] Refactor `_get_param_to_unflat_param_names()` to unify implementation naming conventions
https://github.com/pytorch/pytorch/pull/87921#discussion_r1008483348~~
- Make the usage of preceding underscore (`_`) for class, method, and function names consistent (i.e. unify what it means to be private or public)
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
4,348 | 88,006 |
C++ Extensions can't import c10d/reducer.hpp
|
oncall: distributed
|
### 🐛 Describe the bug
Hi, I was hoping to subclass the Reducer class and add custom functionality to how it buckets gradients and came across the C++ extensions functionality. I followed the guide here: https://pytorch.org/tutorials/advanced/cpp_extension.html#writing-a-c-extension, which compiles fine in my container.
It however does not compile when I add `#include <c10d/reducer.hpp>` to the top. `#include <c10d/ProcessGroup.hpp>` compiles fine, but including the Reducer header file shows this error:
```
root@GCR-OPENPAI-22:/home/----/lltm# /usr/src/Python-3.8.0/python setup.py install
running install
running bdist_egg
running egg_info
writing lltm_cpp.egg-info/PKG-INFO
writing dependency_links to lltm_cpp.egg-info/dependency_links.txt
writing top-level names to lltm_cpp.egg-info/top_level.txt
reading manifest file 'lltm_cpp.egg-info/SOURCES.txt'
writing manifest file 'lltm_cpp.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'lltm_cpp' extension
Emitting ninja build file /home/----/lltm/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Using envvar MAX_JOBS (32) as the number of workers...
[1/1] c++ -MMD -MF /home/----/lltm/build/temp.linux-x86_64-3.8/lltm.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/root/.local/lib/python3.8/site-packages/torch/include -I/root/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/.local/lib/python3.8/site-packages/torch/include/TH -I/root/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/src/Python-3.8.0/Include -I/usr/src/Python-3.8.0 -c -c /home/----/lltm/lltm.cpp -o /home/----/lltm/build/temp.linux-x86_64-3.8/lltm.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DUSE_ROCM=1 -fPIC -D__HIP_PLATFORM_HCC__=1 -DUSE_ROCM=1 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=lltm_cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /home/----/lltm/build/temp.linux-x86_64-3.8/lltm.o
c++ -MMD -MF /home/----/lltm/build/temp.linux-x86_64-3.8/lltm.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/root/.local/lib/python3.8/site-packages/torch/include -I/root/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/.local/lib/python3.8/site-packages/torch/include/TH -I/root/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/src/Python-3.8.0/Include -I/usr/src/Python-3.8.0 -c -c /home/----/lltm/lltm.cpp -o /home/----/lltm/build/temp.linux-x86_64-3.8/lltm.o -fPIC -D__HIP_PLATFORM_HCC__=1 -DUSE_ROCM=1 -fPIC -D__HIP_PLATFORM_HCC__=1 -DUSE_ROCM=1 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=lltm_cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /home/----/lltm/lltm.cpp:8:
/root/.local/lib/python3.8/site-packages/torch/include/c10d/reducer.hpp:23:10: fatal error: torch/csrc/distributed/autograd/context/context.h: No such file or directory
23 | #include <torch/csrc/distributed/autograd/context/context.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/root/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1896, in _run_ninja_build
subprocess.run(
File "/usr/src/Python-3.8.0/Lib/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v', '-j', '32']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "setup.py", line 4, in <module>
setup(name='lltm_cpp',
File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/src/Python-3.8.0/Lib/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/src/Python-3.8.0/Lib/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/src/Python-3.8.0/Lib/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/usr/src/Python-3.8.0/Lib/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/src/Python-3.8.0/Lib/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 172, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/usr/local/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command
self.run_command(cmdname)
File "/usr/src/Python-3.8.0/Lib/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/src/Python-3.8.0/Lib/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/usr/src/Python-3.8.0/Lib/distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "/usr/src/Python-3.8.0/Lib/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/src/Python-3.8.0/Lib/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/usr/src/Python-3.8.0/Lib/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/root/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 839, in build_extensions
build_ext.build_extensions(self)
File "/usr/src/Python-3.8.0/Lib/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/usr/src/Python-3.8.0/Lib/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/usr/local/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 205, in build_extension
_build_ext.build_extension(self, ext)
File "/usr/src/Python-3.8.0/Lib/distutils/command/build_ext.py", line 528, in build_extension
objects = self.compiler.compile(sources,
File "/root/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 654, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/root/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1569, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/root/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1912, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
```
The files are as follows:
reducer_subclass.cpp
```
#include <c10d/reducer.hpp>
```
setup.py
```
from setuptools import setup, Extension
from torch.utils import cpp_extension

setup(name='reducer_subclass_cpp',
      ext_modules=[cpp_extension.CppExtension('reducer_subclass_cpp', ['reducer_subclass.cpp'])],
      cmdclass={'build_ext': cpp_extension.BuildExtension})
```
And to get the error, run `python setup.py install`
This is run in a container with python3.8 installed from source (with very slight modification to its shared_memory module).
Am I importing this the wrong way, or is it just not supported? If it's not supported, is there another way I can go about doing this?
Thanks for the help!
### Versions
```
root@GCR-OPENPAI-22:/home/----# /usr/src/Python-3.8.0/python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0a0+git46a6a50
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.1.20531-cacfa990
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 14.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-5.1.1 22114 5cba46feb6af367b1cafaa183ec42dbfb8207b14)
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.0 (default, Sep 22 2022, 22:16:25) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-1031-azure-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.1.20531
MIOpen runtime version: 2.16.0
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==1.13.0a0+git46a6a50
[pip3] torchvision==0.13.1
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.8.1+96f20ce pypi_0 pypi
[conda] torchvision 0.9.0a0+8fb5838 pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
4,349 | 88,002 |
Einsum Optimization Tracker
|
module: performance, triaged, module: linear algebra
|
There are many things one can do to improve einsum (in terms of perf and usability!) so this issue seeks to track them all.
## The Multi-Contractions Case
As of torch 1.13, `torch.einsum` automatically uses opt-einsum to optimize the contraction path for when there are 3+ tensors to contract and opt-einsum is installed. As of now, section 1 of https://github.com/pytorch/pytorch/issues/60295 has been completed.
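For illustration, a minimal sketch of what this looks like from the user side on torch 1.13 (shapes are arbitrary; the backend flags are the ones referenced in the tasks below):
```python
import torch

# With opt_einsum installed, torch.einsum (1.13+) optimizes the contraction path
# automatically for 3+ operands; torch.backends.opt_einsum exposes the knobs.
if torch.backends.opt_einsum.is_available():
    torch.backends.opt_einsum.strategy = "auto"  # alternatives: "greedy", "optimal"

A = torch.randn(8, 16)
B = torch.randn(16, 32)
C = torch.randn(32, 4)
out = torch.einsum("ij,jk,kl->il", A, B, C)  # 3-operand contraction -> shape (8, 4)
print(out.shape)
```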
Tasks remaining include:
- [ ] Achieve numpy compatibility by adding `optimize` as a `torch.einsum` API kwarg (our previous attempt was blocked by failing JIT tracing: https://github.com/pytorch/pytorch/pull/85908)
- [ ] [P0] Submit a fix to https://github.com/dgasmith/opt_einsum to set `torch.backends.opt_einsum.enabled = False` to avoid recomputing the path when opt_einsum calls torch.einsum as a backend. https://github.com/dgasmith/opt_einsum/pull/205
- [ ] Test out cuQuantum perf as an opt_einsum replacement
- [ ] Add path computation caching by equation + op size
## The Single Contraction Case
The path to faster single contractions includes:
- [ ] Benchmarking einsum with torch.inductor, experimenting with replacing bmm with mul -> sum for discontiguous tensors.
- [ ] Testing out cuTensor perf (to see if there are significant improvements since Yaroslav's last benchmarks in #57121.)
## Additional Context
See previous relevant issues https://github.com/pytorch/pytorch/issues/60295 and https://github.com/pytorch/pytorch/issues/57121 for context.
cc @VitalyFedyunin @ngimel @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 0 |
4,350 | 87,995 |
multi-node distributed training rank0 hang at dataloader after a few epochs
|
oncall: distributed, module: dataloader
|
### 🐛 Describe the bug
I am using PyTorch to do multi-GPU distributed training. When I use 1 node with 8 GPUs, training works well. However, when I use 4 nodes with 32 GPUs, after N epochs (N can be 1 to 3) the training freezes. The debugger shows that rank0 is stuck in the dataloader with the following stack, and all other 31 ranks are waiting for rank0. For this hang case, the dataloader uses num_workers=8. When I changed num_workers to 4, the job ran longer before hanging. When I changed num_workers to 0, the job did not hang.
This is 100% reproducible. I ran the same tests about 20 times.
Each time it gets stuck at the same place in the code: rank0 hangs in the dataloader, and all other ranks wait because of all_reduce(). When the hang happens, rank0 has already finished several steps (e.g. 5).
It is always rank0 that gets stuck.
The stack for rank0:
```
Thread 0x7FA6E7C01740 (idle): "MainThread"
wait (threading.py:302)
get (queue.py:170)
__next__ (fairseq/data/iterators.py:648)
__iter__ (fairseq/data/iterators.py:59)
_chunk_iterator (fairseq/data/iterators.py:528)
__iter__ (fairseq/data/iterators.py:59)
__iter__ (fairseq/logging/progress_bar.py:256)
train (fairseq_cli/train.py:274)
inner (contextlib.py:75)
main (fairseq_cli/train.py:165)
distributed_main (fairseq/distributed/utils.py:326)
call_main (fairseq/distributed/utils.py:352)
cli_main (fairseq_cli/train.py:499)
<module> (train.py:14)
```
The stack for other ranks:
```
Thread 0x7F6F32481740 (active): "MainThread"
_all_reduce_dict (fairseq/distributed/utils.py:661)
all_reduce_dict (fairseq/distributed/utils.py:667)
_fast_stat_sync_sum (fairseq/trainer.py:1195)
_aggregate_logging_outputs (fairseq/trainer.py:1135)
train_step (fairseq/trainer.py:708)
inner (contextlib.py:75)
train (fairseq_cli/train.py:278)
inner (contextlib.py:75)
main (fairseq_cli/train.py:165)
distributed_main (fairseq/distributed/utils.py:326)
call_main (fairseq/distributed/utils.py:352)
cli_main (fairseq_cli/train.py:499)
<module> (train.py:14)
```
### Versions
pytorch 1.11
OS: Linux Ubuntu 20.4
Python version: 3.8
CUDA: 11.5
GPU: A100 80GB
fairseq: 1.0.0a0
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @SsnL @VitalyFedyunin @ejguan @NivekT
| 7 |
4,351 | 87,992 |
torch.rand(...) is not consistent for large shape dimensions across GPUs (with the same random seed)
|
module: cuda, triaged, module: random
|
### 🐛 Describe the bug
Generating a random tensor with `torch.rand` yields different results on different GPUs (even with the same seed) once the dimensions become large.
The following simple Python code:
```python
import torch
generator = torch.Generator(device='cuda').manual_seed(0)
print(torch.rand((8, 77, 1024), device='cuda', generator=generator).abs().sum())
```
yields different results depending on the GPU one is working with. This poses a significant problem for testing applications that start from Gaussian noise of higher dimensions, such as https://github.com/huggingface/diffusers.
The interesting part is that one does get the same results for smaller dimensions such as `(8, 77, 10)`.
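A hedged workaround sketch for tests, assuming a CUDA device is available and the extra host-to-device copy is acceptable: sample on the CPU with a seeded CPU generator and then transfer to the GPU, since CPU sampling does not depend on the GPU model:
```python
import torch

# Draw the noise on CPU with a fixed seed, then move it to the GPU.
generator = torch.Generator(device="cpu").manual_seed(0)
noise = torch.rand((8, 77, 1024), generator=generator).to("cuda")
print(noise.abs().sum())
```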
### Versions
The first GPU is a TITAN RTX with the following versions:
```
PyTorch version: 1.12.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 19.10 (x86_64)
GCC version: (Ubuntu 8.4.0-1ubuntu1~19.10) 8.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.30
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA TITAN RTX
GPU 1: NVIDIA TITAN RTX
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.2
[pip3] open-clip-torch==1.3.0
[pip3] pytorch-fid==0.2.1
[pip3] pytorch-lightning==1.6.4
[pip3] torch==1.12.1
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.9.1
[pip3] torchvision==0.12.0
[pip3] v-diffusion-pytorch==0.0.1
[conda] _pytorch_select 0.1 cpu_0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] libmklml 2019.0.5 h06a4308_0
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.23.2 pypi_0 pypi
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] open-clip-torch 1.3.0 dev_0 <develop>
[conda] pytorch-fid 0.2.1 pypi_0 pypi
[conda] pytorch-lightning 1.6.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.12.1 pypi_0 pypi
[conda] torchaudio 0.10.1+cu113 pypi_0 pypi
[conda] torchmetrics 0.9.1 pypi_0 pypi
[conda] torchvision 0.12.0 py38_cu113 pytorch
[conda] v-diffusion-pytorch 0.0.1 pypi_0 pypi
```
and the second is a V100 with the following versions:
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.13.4
Libc version: glibc-2.10
Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.13
Is CUDA available: True
CUDA runtime version: 11.0.221
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 460.73.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchmetrics==0.10.1
[pip3] torchvision==0.13.1+cu113
[conda] blas 2.113 mkl conda-forge
[conda] blas-devel 3.9.0 13_linux64_mkl conda-forge
[conda] cudatoolkit 11.1.1 h6406543_10 conda-forge
[conda] dlenv-pytorch-1-10-gpu 1.0.20220226 py37h0ee201a_0 file:///tmp/conda-pkgs
[conda] libblas 3.9.0 13_linux64_mkl conda-forge
[conda] libcblas 3.9.0 13_linux64_mkl conda-forge
[conda] liblapack 3.9.0 13_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 13_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h8d4b97c_803 conda-forge
[conda] mkl-devel 2022.0.1 ha770c72_804 conda-forge
[conda] mkl-include 2022.0.1 h8d4b97c_803 conda-forge
[conda] numpy 1.19.5 py37h038b26d_2 conda-forge
[conda] pytorch 1.10.0 py3.7_cuda11.1_cudnn8.0.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.11.1+cu111 pypi_0 pypi
```
cc @ngimel @pbelevich
| 5 |
4,352 | 87,979 |
amp with `bf16`: backward happens in `f16` when using `@torch.cuda.amp.custom_bwd`
|
triaged, module: bfloat16, module: half, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
I don't think this is expected, and it can be quite surprising: it may lead to errors or silently do something different from what most people would expect. Reproducing it requires an A100 to have autocast in bf16.
This is due to not passing the `dtype` to `autocast` here: https://github.com/pytorch/pytorch/blob/e2a4dfa468330c0587849bea4896ff5fffb33010/torch/cuda/amp/autocast_mode.py#L113-L124
Repro:
```python
import torch
class SomeFunc(torch.autograd.Function):
    @classmethod
    @torch.cuda.amp.custom_fwd
    def forward(cls, ctx, x):
        out = x.transpose(0, 1) @ x
        print("Forward dtype:", out.dtype)
        ctx.save_for_backward(out, x)
        return out

    @classmethod
    @torch.cuda.amp.custom_bwd
    def backward(cls, ctx, grad):
        out, x = ctx.saved_tensors
        out_recalc = x.transpose(0, 1) @ x
        print("Backward dtype:", out_recalc.dtype)
        return None


x = torch.randn([10, 10], device="cuda", requires_grad=True)
with torch.autocast("cuda", dtype=torch.bfloat16):
    y = SomeFunc.apply(x)
y.backward(torch.zeros_like(y))
```
Output:
```bash
Forward dtype: torch.bfloat16
Backward dtype: torch.float16
```
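A hedged workaround sketch (not a torch API; the decorator names and the staticmethod-style signatures are illustrative): record the active autocast dtype in the forward pass and re-enter autocast with that same dtype in the backward pass, instead of the fp16 default that `custom_bwd` currently falls back to:
```python
import functools

import torch

def custom_fwd_keep_dtype(fwd):
    # Stash the autocast dtype that was active during forward on the ctx.
    @functools.wraps(fwd)
    def wrapper(ctx, *args, **kwargs):
        ctx._autocast_dtype = (
            torch.get_autocast_gpu_dtype() if torch.is_autocast_enabled() else None
        )
        return fwd(ctx, *args, **kwargs)
    return wrapper

def custom_bwd_keep_dtype(bwd):
    # Re-enter autocast in backward with the dtype recorded during forward (if any).
    @functools.wraps(bwd)
    def wrapper(ctx, *grads):
        dtype = getattr(ctx, "_autocast_dtype", None)
        if dtype is None:
            return bwd(ctx, *grads)
        with torch.autocast("cuda", dtype=dtype):
            return bwd(ctx, *grads)
    return wrapper
```
These would stand in for `@torch.cuda.amp.custom_fwd`/`@torch.cuda.amp.custom_bwd` on staticmethod-style `forward`/`backward` until the built-in decorators forward the dtype themselves.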
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.4
Libc version: glibc-2.31
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 hecad31d_10 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h8d4b97c_729 conda-forge
[conda] mkl-service 2.4.0 py310ha2c4b55_0 conda-forge
[conda] mkl_fft 1.3.1 py310h2b4bcf5_1 conda-forge
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.1 py310h1794996_0
[conda] numpy-base 1.23.1 py310hcba007f_0
[conda] pytorch 1.12.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.1 py310_cu116 pytorch
[conda] torchmetrics 0.9.3 pypi_0 pypi
[conda] torchvision 0.13.1 py310_cu116 pytorch
```
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 1 |
4,353 | 87,964 |
`torch.distributed` crash with abort only inside if
|
oncall: distributed, module: crash
|
### 🐛 Describe the bug
`torch.distributed` crashes with abort:
```
import torch
import torch.distributed as dist

if True:
    input_tensor = torch.randn(4, 3)
    dist.FileStore('./')
```
```
abort (core dumped)
```
However, if we remove the if statement, it does not crash:
```
import torch
import torch.distributed as dist
input_tensor = torch.randn(4, 3)
dist.FileStore('./')
```
```
<torch._C._distributed_c10d.FileStore at 0x7f5b68e42b70>
```
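For reference, a hedged sketch of the documented `FileStore` usage (a file path plus the number of participants); this is only an illustration of the API, not a confirmed explanation of the crash, and the path is made up:
```python
import torch.distributed as dist

# FileStore as documented takes a file path and the world size.
store = dist.FileStore("/tmp/filestore", 1)
store.set("key", "value")
print(store.get("key"))
```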
### Versions
pytorch 1.12.1
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
4,354 | 87,961 |
crash in `torch.package.PackageExporter`
|
triaged, module: edge cases
|
### 🐛 Describe the bug
`torch.package.PackageExporter` crashes with abort.
```
import torch
data = torch.randn(2, 3)
print(data)
torch.package.PackageExporter(data, './test_package.zip')
```
```
tensor([[-1.3200, -1.0457, -0.0773],
[ 0.6770, 1.8295, -0.8031]])
abort (core dumped)
```
This is probably related to https://github.com/pytorch/pytorch/issues/85329, except that in the code example above no error is raised and the process crashes directly.
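For comparison, a hedged sketch of the documented calling convention, where the archive path is the first argument (the resource names below are illustrative):
```python
import torch
from torch.package import PackageExporter

data = torch.randn(2, 3)
# PackageExporter takes the archive path; tensors can be saved via save_pickle.
with PackageExporter("./test_package.zip") as exporter:
    exporter.save_pickle("my_data", "tensor.pkl", data)
```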
### Versions
pytorch 1.12.1
| 0 |
4,355 | 87,960 |
crash when call `torch.set_num_interop_threads` twice
|
module: crash, triaged
|
### 🐛 Describe the bug
The process crashes with abort when `torch.set_num_interop_threads` is called more than once.
According to the [doc](https://pytorch.org/docs/stable/generated/torch.set_num_interop_threads.html), `set_num_interop_threads`:
> can only be called once and before any inter-op parallel work is started (e.g. JIT execution).
However, I think this could be handled by raising an exception instead of crashing.
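For reference, a minimal illustration of the report (per this issue, the second call aborts the process instead of raising a catchable Python exception):
```python
import torch

torch.set_num_interop_threads(4)  # fine on the first call
torch.set_num_interop_threads(4)  # per this report, aborts instead of raising
```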
### Versions
pytorch 1.12.1
cc @ezyang @gchanan @zou3519
| 0 |
4,356 | 87,957 |
VS2022Preview ParallelCommon.cpp.obj : fatal error LNK1161: invalid export specification
|
module: build, triaged, module: static linking
|
### 🐛 Describe the bug
```
[5554/5844] Linking CXX shared library bin\torch_cpu.dll
FAILED: bin/torch_cpu.dll lib/torch_cpu.lib
cmd.exe /C "cd . && C:\Prog\cmake\bin\cmake.exe -E vs_link_dll --intdir=caffe2\CMakeFiles\torch_cpu.dir --rc=C:\PROGRA~2\WI3CF2~1\10\bin\100226~1.0\x64\rc.exe --mt=C:\PROGRA~2\WI3CF2~1\10\bin\100226~1.0\x64\mt.exe --manifests -- C:\PROGRA~1\MICROS~4\2022\Preview\VC\Tools\MSVC\1434~1.319\bin\Hostx64\x64\link.exe /nologo @CMakeFiles\torch_cpu.rsp /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO -WHOLEARCHIVE:C:/Github/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/Github/pytorch/build/lib/onnx.lib && cd ."
LINK: command "C:\PROGRA~1\MICROS~4\2022\Preview\VC\Tools\MSVC\1434~1.319\bin\Hostx64\x64\link.exe /nologo @CMakeFiles\torch_cpu.rsp /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO -WHOLEARCHIVE:C:/Github/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/Github/pytorch/build/lib/onnx.lib /MANIFEST /MANIFESTFILE:bin\torch_cpu.dll.manifest" failed (exit code 1161) with the following output:
ParallelCommon.cpp.obj : fatal error LNK1161: invalid export specification
```
After some comparison:
VS2022 Preview 17.4.0 Preview 5.0
emits this
```
/EXPORT:?init@?1??lazy_init_num_threads@internal@at@@YAXXZ@4_NA,THREAD_DATA
```
while VS2019 16.11.20 emits this
```
/EXPORT:?init@?1??lazy_init_num_threads@internal@at@@YAXXZ@4_NA,DATA
```
Here are the object files:
[obj_files.zip](https://github.com/pytorch/pytorch/files/9885294/obj_files.zip)
Workaround?
idk
Thanks.
CC @vladimir-aubrecht
### Versions
pytorch trunk
commit f9679184116f1d29c483c2b2a4c3a9d730be4694
VS2022
```
C:\Github\pytorch>cl -v
Microsoft (R) C/C++ Optimizing Compiler Version 19.34.31932 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
```
VS2019
```
C:\Github\pytorch_VS2019>cl -v
Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30146 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
```
cc @malfet @seemethere
| 2 |
4,357 | 87,956 |
Autograd doesn't stop executing backward graph early enough in situations involving set_
|
module: autograd, triaged, has workaround, actionable
|
### 🐛 Describe the bug
Before getting to the bug: the reproducing program is a bit wacky, so I need to explain why it might be reasonable to expect it not to error.
Suppose I have a naughty custom autograd function, that returns gradients of the wrong size. If I use grad() to cut off autograd computation before I actually get to the naughty autograd function, I would expect to never actually hit the error case. And indeed, this program runs successfully:
```
import torch

class F(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return torch.zeros(6, requires_grad=True)

x = torch.zeros(4, requires_grad=True)
y = F.apply(x)
z = y.clone()
print(torch.autograd.grad((z,), (y,), grad_outputs=torch.ones_like(z)))
```
In general, I would expect that no matter how naughty the history of how x was created, so long as I grad with respect to it as an input, it should be "as if" I had detached it from the rest of the graph, all sins absolved.
Here is a program for which I am too naughty, and this invariant does not hold:
```
x = torch.zeros(0)
x.requires_grad = True
x2 = x.clone()
with torch.no_grad():
    x2.set_(torch.zeros(5).storage(), 0, (5,), (1,))
y = x2.clone()
print(torch.autograd.grad((y.sum(),), (x2,)))
```
Fails with:
```
File "auto.py", line 26, in <module>
print(torch.autograd.grad((y.sum(),), (x2,)))
File "/raid/ezyang/pytorch-scratch2/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Function CloneBackward0 returned an invalid gradient at index 0 - got [5] but expected shape compatible with [0]
```
This minimal repro is from real code extracted from MetaConverter (written by yours truly) which tries to synthesize variables which resemble input variables as much as possible in all properties. Note that if you ablate the `no_grad`, the program passes, but I am not allowed to do this as in some real programs x may be a leaf variable.
I have a workaround for this problem, which is to allocate an appropriately sized x from the get go, so this is not a blocker to fix. But it would be good to better understand what is going on here, in case there is a more serious bug.
cc @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
### Versions
master
| 2 |
4,358 | 87,955 |
AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next'
|
module: dataloader, triaged
|
### 🐛 Describe the bug
The same code can run in pytorch 11.3, but not in 11.7:
`inputs_x, targets_x, index_x = labeled_train_iter.next()`
`AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next'`
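For what it's worth, the standard Python iterator protocol still works on recent builds, so a hedged sketch of the replacement pattern is to call the builtin `next()` on the iterator instead of the removed `.next()` method:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

loader = DataLoader(TensorDataset(torch.arange(8).float()), batch_size=2, num_workers=2)
it = iter(loader)
batch = next(it)  # use the builtin next() instead of it.next()
print(batch)
```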
### Versions
version
cuda11.8
cudnn8.6.0
python3.8
torch version:1.14.0
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 2 |
4,359 | 88,004 |
Link error happens when integrating libtorch into another tool
|
module: build, module: cpp, triaged
|
Hi, author. I recently tried to integrate libtorch into a symbolic execution tool, [klee](https://github.com/klee/klee), but I always fail to link libtorch in the last stage.
The relevant part of the CMakeLists.txt is:
```text
set(Torch_DIR ${LIB_TORCH_ROOT}/share/cmake/Torch)
find_package(Torch REQUIRED)

if (Torch_FOUND)
    message(STATUS "Torch library found!")
    message(STATUS "include path: ${TORCH_INCLUDE_DIRS}")
    message(STATUS "libraries path: ${TORCH_LIBRARIES}")
else ()
    message(FATAL_ERROR "Could not locate Torch" \n)
endif()

xxxx

target_link_libraries(KleeCore "${TORCH_LIBRARIES}")
```
This works when I build a simple standalone tool, but it fails to link when integrating it into klee.
I printed the value of `${TORCH_LIBRARIES}` and found it is `torch;torch_library;<path2libtorch>/lib/libc10.so`; is this correct? I also noticed there is a variable `${TORCH_LIBRARY}` in `TorchConfig.cmake`; is that variable useful for cmake? How can I fix this error?
cc @malfet @seemethere @jbschlosser
| 7 |
4,360 | 93,590 |
test_conv_large_cuda: RuntimeError: CUDA error: an illegal memory access was encountered
|
triaged
|
Repro:
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/nn/test_convolution.py -k test_conv_large_cuda
```
Error:
```
TEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
ETEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
======================================================================
ERROR: test_conv_large_cuda (__main__.TestNNDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/_dynamo/utils.py", line 442, in preserve_rng_state
yield
File "/fsx/users/binbao/pytorch/torch/_inductor/compile_fx.py", line 203, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/fsx/users/binbao/pytorch/torch/_inductor/compile_fx.py", line 258, in cudagraphify_impl
model(list(static_inputs))
File "/tmp/torchinductor_binbao/5x/c5xaannepkzj65rsuvrsfwf6kviuwaz4eic3zumxo7nerrkxkvgh.py", line 55, in call
buf1 = aten.convolution(buf2, primals_1, None, (8, 8), (0, 0), (1, 1), False, (0, 0), 1)
File "/fsx/users/binbao/pytorch/torch/_ops.py", line 446, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_utils.py", line 2001, in wrapper
method(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_utils.py", line 2001, in wrapper
method(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 394, in instantiated_test
raise rte
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 381, in instantiated_test
result = test(self, **param_kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 991, in only_fn
return fn(slf, *args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 869, in dep_fn
return fn(slf, *args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 952, in dep_fn
return fn(self, *args, **kwargs)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 15390, in test_conv_large
conv = nn.Conv2d(2, 2, 8, 8, bias=False).to(device).to(dtype)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 15390, in <graph break in test_conv_large>
conv = nn.Conv2d(2, 2, 8, 8, bias=False).to(device).to(dtype)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 15390, in <graph break in test_conv_large>
conv = nn.Conv2d(2, 2, 8, 8, bias=False).to(device).to(dtype)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 15390, in <graph break in test_conv_large>
conv = nn.Conv2d(2, 2, 8, 8, bias=False).to(device).to(dtype)
File "/fsx/users/binbao/pytorch/torch/_dynamo/eval_frame.py", line 157, in _fn
return fn(*args, **kwargs)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 870, in forward
return compiled_f(
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 861, in new_func
return compiled_fn(args)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 230, in g
return f(*args)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 489, in compiled_function
return CompiledFunction.apply(*remove_dupe_args(args))
File "/fsx/users/binbao/torchdynamo-tip/torchdynamo/eval_frame.py", line 163, in _fn
return fn(*args, **kwargs)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 450, in forward
fw_outs = call_func_with_args(
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 255, in call_func_with_args
out = normalize_as_list(f(args))
File "/fsx/users/binbao/pytorch/torch/_inductor/compile_fx.py", line 186, in run
return model(new_inputs)
File "/fsx/users/binbao/pytorch/torch/_inductor/compile_fx.py", line 203, in run
compiled_fn = cudagraphify_impl(model, new_inputs, static_input_idxs)
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/fsx/users/binbao/pytorch/torch/_dynamo/utils.py", line 446, in preserve_rng_state
torch.cuda.set_rng_state(cuda_rng)
File "/fsx/users/binbao/pytorch/torch/cuda/random.py", line 64, in set_rng_state
_lazy_call(cb)
File "/fsx/users/binbao/pytorch/torch/cuda/__init__.py", line 176, in _lazy_call
callable()
File "/fsx/users/binbao/pytorch/torch/cuda/random.py", line 62, in cb
default_generator.set_state(new_state_copy)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
| 1 |
4,361 | 93,589 |
test_batchnorm_eval_cuda_float32: AttributeError: 'NoneType' object has no attribute 'clone'
|
triaged
|
Repro:
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_nn.py -k test_batchnorm_eval_cuda_float32
```
Error:
```
/fsx/users/binbao/pytorch/torch/_dynamo/variables/builtin.py:681: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /fsx/users/binbao/pytorch/build/aten/src/ATen/core/TensorBody.h:485.)
example_value = grapharg.example.grad
[2022-10-27 23:56:53,541] torch._dynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in _test_batchnorm_eval> /fsx/users/binbao/pytorch/test/test_nn.py line 16045
due to:
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/_dynamo/variables/constant.py", line 61, in const_getattr
member = getattr(self.value, name)
AttributeError: 'NoneType' object has no attribute 'clone'
from user code:
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16046, in <graph break in _test_batchnorm_eval>
grad1 = data.grad.clone()
Set torch._dynamo.config.verbose=True for more information
==========
/fsx/users/binbao/pytorch/test/test_nn.py:16046: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /fsx/users/binbao/pytorch/build/aten/src/ATen/core/TensorBody.h:485.)
grad1 = data.grad.clone()
E
======================================================================
ERROR: test_batchnorm_eval_cuda_float32 (__main__.TestNNDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_utils.py", line 2001, in wrapper
method(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_utils.py", line 2001, in wrapper
method(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 381, in instantiated_test
result = test(self, **param_kwargs)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16085, in test_batchnorm_eval
self._test_batchnorm_eval(2, device, dtype)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16037, in _test_batchnorm_eval
module = nn.BatchNorm1d(3).to(device, module_dtype)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16037, in <graph break in _test_batchnorm_eval>
module = nn.BatchNorm1d(3).to(device, module_dtype)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16038, in <graph break in _test_batchnorm_eval>
module.eval()
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16045, in <graph break in _test_batchnorm_eval>
res1.backward(grad)
File "/fsx/users/binbao/pytorch/test/test_nn.py", line 16046, in <graph break in _test_batchnorm_eval>
grad1 = data.grad.clone()
AttributeError: 'NoneType' object has no attribute 'clone'
```
| 2 |
4,362 | 93,588 |
test_LSTM_grad_and_gradgrad_cuda_float64: ValueError: gradcheck expects at least one input tensor to require gradient, but none of the them have requires_grad=True.
|
triaged
|
Repro:
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_nn.py -k test_LSTM_grad_and_gradgrad_cuda_float64
```
Error (updated):
```
/scratch/binbao/work/pytorch/torch/_dynamo/eval_frame.py:361: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled.Consider setting `torch.set_float32_matmul_precision('high')`
warnings.warn(
[2022-11-30 21:19:34,297] torch._dynamo.convert_frame: [ERROR] WON'T CONVERT <graph break in _test_rnn_mod> /scratch/binbao/work/pytorch/test/test_nn.py line 9981
due to:
Traceback (most recent call last):
File "/scratch/binbao/work/pytorch/torch/_dynamo/variables/builder.py", line 815, in wrap_fx_proxy_cls
raise AssertionError(
AssertionError: torch.* op returned non-Tensor _GeneratorContextManager call_function <function flags at 0x7f758f923430>
from user code:
File "/scratch/binbao/work/pytorch/test/test_nn.py", line 9982, in <graph break in _test_rnn_mod>
with torch.backends.cudnn.flags(enabled=False):
Set torch._dynamo.config.verbose=True for more information
/scratch/binbao/work/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
[2022-11-30 21:19:35,546] torch._inductor.ir: [WARNING] Using FallbackKernel: aten._thnn_fused_lstm_cell
[2022-11-30 21:19:37,051] torch._inductor.graph: [WARNING] Creating implicit fallback for:
target: aten._thnn_fused_lstm_cell_backward_impl.default
args[0]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float64,
load(tangents_2, i1 + 4 * i0) + load(tangents_1, i1 + 4 * i0),
ranges=[3, 4],
origins={select, tangents_1, tangents_2, select_2, add}
)
))
args[1]: TensorBox(
ReinterpretView(
StorageBox(
InputBuffer(name='tangents_3', layout=FixedLayout('cuda', torch.float64, size=[1, 3, 4], stride=[12, 4, 1]))
),
FixedLayout('cuda', torch.float64, size=[3, 4], stride=[4, 1]),
no origins?
)
)
args[2]: TensorBox(StorageBox(
InputBuffer(name='getitem', layout=FixedLayout('cuda', torch.float64, size=[3, 4], stride=[4, 1]))
))
args[3]: TensorBox(StorageBox(
InputBuffer(name='getitem_4', layout=FixedLayout('cuda', torch.float64, size=[3, 4], stride=[4, 1]))
))
args[4]: TensorBox(StorageBox(
InputBuffer(name='getitem_5', layout=FixedLayout('cuda', torch.float64, size=[3, 16], stride=[16, 1]))
))
args[5]: True
[2022-11-30 21:19:37,064] torch._inductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten._thnn_fused_lstm_cell_backward_impl.default
/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py:950: UserWarning: Backwards compatibility: New undefined gradient support checking feature is enabled by default, but it may break existing callers of this function. If this is true for you, you can call this function with "check_undefined_grad=False" to disable the feature
warnings.warn((
E
======================================================================
ERROR: test_LSTM_grad_and_gradgrad_cuda_float64 (__main__.TestNNDeviceTypeCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py", line 959, in check_undefined_grad_support
grads_input = torch.autograd.grad(output_to_check, diff_input_list,
File "/scratch/binbao/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/scratch/binbao/work/pytorch/torch/autograd/function.py", line 270, in apply
return user_fn(self, *args)
File "/scratch/binbao/work/pytorch/functorch/_src/aot_autograd.py", line 1385, in backward
all_args = list(ctx.symints) + list(ctx.saved_tensors) + list(contiguous_args)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.DoubleTensor [3, 4]] is at version 7; expected version 4 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch/binbao/work/pytorch/torch/testing/_internal/common_utils.py", line 2053, in wrapper
method(*args, **kwargs)
File "/scratch/binbao/work/pytorch/torch/testing/_internal/common_device_type.py", line 391, in instantiated_test
raise rte
File "/scratch/binbao/work/pytorch/torch/testing/_internal/common_device_type.py", line 378, in instantiated_test
result = test(self, **param_kwargs)
File "/scratch/binbao/work/pytorch/torch/testing/_internal/common_device_type.py", line 866, in dep_fn
return fn(slf, *args, **kwargs)
File "/scratch/binbao/work/pytorch/test/test_nn.py", line 10012, in test_LSTM_grad_and_gradgrad
self._test_rnn_mod(mod, inp)
File "/scratch/binbao/work/pytorch/test/test_nn.py", line 9981, in _test_rnn_mod
gradcheckfunc = partial(flatten_out, mod)
File "/scratch/binbao/work/pytorch/test/test_nn.py", line 9983, in <graph break in _test_rnn_mod>
gradcheck(gradcheckfunc, inp, check_batched_grad=False)
File "/scratch/binbao/work/pytorch/torch/testing/_internal/common_utils.py", line 3716, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py", line 1418, in gradcheck
return _gradcheck_helper(**args)
File "/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py", line 1451, in _gradcheck_helper
_test_undefined_backward_mode(func, outputs, tupled_inputs)
File "/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py", line 992, in _test_undefined_backward_mode
return all(check_undefined_grad_support(output) for output in outputs_to_check)
File "/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py", line 992, in <genexpr>
return all(check_undefined_grad_support(output) for output in outputs_to_check)
File "/scratch/binbao/work/pytorch/torch/autograd/gradcheck.py", line 963, in check_undefined_grad_support
raise GradcheckError((
torch.autograd.gradcheck.GradcheckError: Expected backward function to handle undefined output grads. Please look at "Notes about undefined output gradients" in "tools/autograd/derivatives.yaml"
```
| 1 |
4,363 | 93,587 |
test_Bilinear_empty_cuda: IndexError: pop from empty list
|
triaged
|
Repro:
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_nn.py -k test_Bilinear_empty_cuda
```
Error:
```
----------------------------------------------------------------------
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/_inductor/ir.py", line 1229, in dynamic_reshape_indexer
reindex = cls._dynamic_reshape_indexer(old_size, new_size)
File "/fsx/users/binbao/pytorch/torch/_inductor/ir.py", line 1263, in _dynamic_reshape_indexer
var2, size_new2 = stack_new.pop()
IndexError: pop from empty list
```
| 1 |
4,364 | 93,585 |
test_memory_format_ao_nn_quantized_MaxPool2d_cuda_float32: assert not memory_format, "TODO"
|
triaged
|
Repro:
Comment out https://github.com/pytorch/pytorch/blob/cda0d5a57b9126c6d244fdd5b02198f05c742615/test/test_modules.py#L585 and then run
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_modules.py -k test_memory_format_ao_nn_quantized_MaxPool2d_cuda_float32
```
Error:
```
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/_inductor/graph.py", line 241, in call_function
out = lowerings[target](*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/_inductor/lowering.py", line 204, in wrapped
return decomp_fn(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/_inductor/lowering.py", line 376, in _to_copy
assert not memory_format, "TODO"
AssertionError: TODO
```
| 0 |
4,365 | 93,584 |
test_cpu_gpu_parity_nn_AdaptiveAvgPool2d_cuda_float32: networkx.exception.NetworkXError: node sink not in graph
|
triaged
|
Repro:
Comment out https://github.com/pytorch/pytorch/blob/cda0d5a57b9126c6d244fdd5b02198f05c742615/test/test_modules.py#L494, and then run
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_modules.py -k test_cpu_gpu_parity_nn_AdaptiveAvgPool2d_cuda_float32
```
Error:
```
Traceback (most recent call last):
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_utils.py", line 2001, in wrapper
method(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_utils.py", line 2001, in wrapper
method(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 381, in instantiated_test
result = test(self, **param_kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_modules.py", line 112, in test_wrapper
return test(*args, **kwargs)
File "/fsx/users/binbao/pytorch/torch/testing/_internal/common_device_type.py", line 991, in only_fn
return fn(slf, *args, **kwargs)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 508, in test_cpu_gpu_parity
module_inputs_cpu = module_info.module_inputs_func(module_info, device="cpu", dtype=dtype,
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 530, in <graph break in test_cpu_gpu_parity>
self._retain_grad((cpu_forward_args, cpu_forward_kwargs, gpu_forward_args, gpu_forward_kwargs))
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 314, in _retain_grad
self._traverse_obj(obj, inner_retain_grad)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 300, in _traverse_obj
return type(obj)(self._traverse_obj(o, func) for o in obj)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 300, in <graph break in _traverse_obj>
return type(obj)(self._traverse_obj(o, func) for o in obj)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 300, in <genexpr>
return type(obj)(self._traverse_obj(o, func) for o in obj)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 300, in _traverse_obj
return type(obj)(self._traverse_obj(o, func) for o in obj)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 300, in <graph break in _traverse_obj>
return type(obj)(self._traverse_obj(o, func) for o in obj)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 300, in <genexpr>
return type(obj)(self._traverse_obj(o, func) for o in obj)
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 301, in _traverse_obj
elif isgenerator(obj):
File "/fsx/users/binbao/pytorch/test/test_modules.py", line 301, in <graph break in _traverse_obj>
elif isgenerator(obj):
File "/fsx/users/binbao/pytorch/torch/_dynamo/eval_frame.py", line 157, in _fn
return fn(*args, **kwargs)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 870, in forward
return compiled_f(
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 856, in new_func
compiled_fn = create_aot_dispatcher_function(
File "/fsx/users/binbao/torchdynamo-tip/torchdynamo/utils.py", line 86, in time_wrapper
r = func(*args, **kwargs)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 576, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/fsx/users/binbao/pytorch/functorch/_src/aot_autograd.py", line 423, in aot_dispatch_autograd
fw_module, bw_module = aot_config.partition_fn(fx_g, joint_inputs)
File "/fsx/users/binbao/pytorch/functorch/_src/partitioners.py", line 427, in min_cut_rematerialization_partition
cut_value, partition = nx.minimum_cut(nx_graph, "source", "sink")
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/site-packages/networkx/algorithms/flow/maxflow.py", line 458, in minimum_cut
R = flow_func(flowG, _s, _t, capacity=capacity, value_only=True, **kwargs)
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/site-packages/networkx/algorithms/flow/preflowpush.py", line 421, in preflow_push
R = preflow_push_impl(G, s, t, capacity, residual, global_relabel_freq, value_only)
File "/fsx/users/binbao/conda/envs/release/lib/python3.9/site-packages/networkx/algorithms/flow/preflowpush.py", line 27, in preflow_push_impl
raise nx.NetworkXError(f"node {str(t)} not in graph")
networkx.exception.NetworkXError: node sink not in graph
```
| 1 |
4,366 | 87,902 |
Permute
|
module: docs, triaged
|
## 📚 Documentation
In the documentation, we are told to use permute like this:
`torch.permute(x, (2, 0, 1)).size()`
It can also be used as a method on the tensor:
`x = x.permute(2, 0, 1)`
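A runnable illustration of both spellings (the shape is arbitrary):
```python
import torch

x = torch.randn(2, 3, 5)
print(torch.permute(x, (2, 0, 1)).size())  # functional form -> torch.Size([5, 2, 3])
print(x.permute(2, 0, 1).size())           # method form, same result
```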
cc @svekars @carljparker
| 0 |
4,367 | 87,890 |
"No CUDA GPUs are available" coming from GHA g5 runners
|
oncall: releng, module: ci, triaged
|
### 🐛 Describe the bug
There is a new trend of flaky issues where the GPU seems to disappear from GHA g5 runners. There are 2 common errors:
* `RuntimeError: No CUDA GPUs are available`: https://hud.pytorch.org/failure/RuntimeError%3A%20No%20CUDA%20GPUs%20are%20available
* `nvidia-smi` failure: `nvidia-smi --query-gpu=driver_version --format=csv,noheader --> INSTALLED_DRIVER_VERSION='No devices were found'`. The two failures might be related:
* https://hud.pytorch.org/pytorch/pytorch/commit/bf113e38fad30fb1eec1f94563f419518ae3178c
* https://hud.pytorch.org/pytorch/pytorch/commit/107f92a6830f61b88a7eb55934610f491623dc9b
**Examples**:
https://hud.pytorch.org/pytorch/pytorch/commit/78b406932f0e4afd82b672f959b8cb9ce1e79f9d
### Versions
GHA g5 runners (NVIDIA A10G). Inductor and some trunk tests use them.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @jeanschmidt @DanilBaibak
| 2 |
4,368 | 87,886 |
Add aten::empty.memory_format for SparseMPS
|
module: sparse, triaged, enhancement, module: mps
|
### 🚀 The feature, motivation and pitch
Please consider adding `aten::empty.memory_format` for the SparseMPS backend. It is required to move a `sparse_coo_tensor` to the MPS device:
```
import torch
i = torch.tensor([[0, 1, 1], [2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
torch.sparse_coo_tensor(i, v, [2, 4]).to(torch.device("mps"), non_blocking=True)
```
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Input In [3], in <cell line: 4>()
2 i = torch.tensor([[0, 1, 1], [2, 0, 2]])
3 v = torch.tensor([3, 4, 5], dtype=torch.float32)
----> 4 torch.sparse_coo_tensor(i, v, [2, 4]).to(torch.device("mps"), non_blocking=True)
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'SparseMPS' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, MPS, Meta, QuantizedCPU, QuantizedMeta, MkldnnCPU, SparseCPU, SparseMeta, SparseCsrCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterCPU.cpp:30823 [kernel]
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterMPS.cpp:21082 [kernel]
Meta: registered at /dev/null:241 [kernel]
QuantizedCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterQuantizedCPU.cpp:936 [kernel]
QuantizedMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterQuantizedMeta.cpp:105 [kernel]
MkldnnCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterMkldnnCPU.cpp:492 [kernel]
SparseCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterSparseCPU.cpp:1299 [kernel]
SparseMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterSparseMeta.cpp:249 [kernel]
SparseCsrCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterSparseCsrCPU.cpp:1060 [kernel]
BackendSelect: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterBackendSelect.cpp:726 [kernel]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:141 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:488 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:291 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:22 [kernel]
Negative: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:23 [kernel]
ZeroTensor: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:16912 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:16866 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:479 [backend fallback]
AutocastCUDA: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:350 [backend fallback]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:801 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:189 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:149 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:484 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:145 [backend fallback]
```
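As a stopgap (an editor's hedged sketch, not from the report), the sparse tensor can stay on the CPU and only a dense copy is moved to MPS on an MPS-enabled build; this of course gives up the memory savings of the sparse layout:
```python
import torch

i = torch.tensor([[0, 1, 1], [2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
sp = torch.sparse_coo_tensor(i, v, [2, 4])   # stays on the CPU
dense_mps = sp.to_dense().to("mps")          # dense copy lives on the GPU
```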
### Alternatives
_No response_
### Additional context
Version tested: 1.14.0.dev20221027
cc @nikitaved @pearu @cpuhrsch @amjames @bhosmer @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 18 |
4,369 | 87,864 |
Failure to export scripted models to ONNX when input is a list of tensors
|
needs reproduction, module: onnx, triaged, onnx-needs-info
|
### 🐛 Describe the bug
Given the following example:
```python
import torch
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.a = torch.nn.Conv2d(512, 1024, kernel_size=1)
self.b = torch.nn.Conv2d(512, 1024, kernel_size=1)
def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
a = self.a(features[0])
b = self.b(features[0])
return a + b
input = [
torch.randn((1, 512, 64, 64)),
torch.randn((1, 512, 64, 64)), # Removing this fixes the error.
]
# Create a scripted model.
model = MyModule()
model.eval()
model = torch.jit.script(model)
# Export to ONNX.
torch.onnx.export(
model,
(input,),
'model.onnx',
verbose=True,
)
# Test the model using ONNX.
import onnx
model = onnx.load('./model.onnx')
onnx.checker.check_model(model)
# Load the model using onnxruntime.
import onnxruntime
onnxruntime.InferenceSession('./model.onnx')
```
I get the following output:
```
Exported graph: graph(%features.1 : Float(1, 512, 64, 64, strides=[2097152, 4096, 64, 1], requires_grad=0, device=cpu)[],
%a.weight : Float(1, 512, 64, 64, strides=[2097152, 4096, 64, 1], requires_grad=0, device=cpu),
%a.bias : Float(1024, 512, 1, 1, strides=[512, 1, 1, 1], requires_grad=0, device=cpu),
%b.weight : Float(1024, strides=[1], requires_grad=0, device=cpu),
%b.bias : Float(1024, 512, 1, 1, strides=[512, 1, 1, 1], requires_grad=0, device=cpu)):
%/Constant_output_0 : Long(device=cpu) = onnx::Constant[value={0}, onnx_name="/Constant"](), scope: MyModule::
%/SequenceAt_output_0 : Float(1, 512, 64, 64, strides=[2097152, 4096, 64, 1], device=cpu) = onnx::SequenceAt[onnx_name="/SequenceAt"](%features.1, %/Constant_output_0), scope: MyModule:: # /tmp/2022-10-27-15-20-00/test.py:10:19
%/a/Conv_output_0 : Float(1, 1024, 1, 1, strides=[1, 1, 1, 1], device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[64, 64], pads=[0, 0, 0, 0], strides=[1, 1], onnx_name="/a/Conv"](%/SequenceAt_output_0, %a.weight), scope: MyModule::/torch.nn.modules.conv.Conv2d::a # /usr/lib/python3.10/site-packages/torch/nn/modules/conv.py:459:15
%/a/Add_output_0 : Float(1024, 1024, 1, 1, strides=[512, 1, 1, 1], device=cpu) = onnx::Add[onnx_name="/a/Add"](%/a/Conv_output_0, %a.bias), scope: MyModule::/torch.nn.modules.conv.Conv2d::a # /usr/lib/python3.10/site-packages/torch/nn/modules/conv.py:459:15
%/b/Conv_output_0 : Tensor = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=annotate(List[int], []), pads=[0, 0, 0, 0], strides=[1, 1], onnx_name="/b/Conv"](%/SequenceAt_output_0, %b.weight), scope: MyModule::/torch.nn.modules.conv.Conv2d::b # /usr/lib/python3.10/site-packages/torch/nn/modules/conv.py:459:15
%/b/Add_output_0 : FloatTensor = onnx::Add[onnx_name="/b/Add"](%/b/Conv_output_0, %b.bias), scope: MyModule::/torch.nn.modules.conv.Conv2d::b # /usr/lib/python3.10/site-packages/torch/nn/modules/conv.py:459:15
%11 : Float(*, *, *, *, strides=[4194304, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add[onnx_name="/Add"](%/a/Add_output_0, %/b/Add_output_0), scope: MyModule:: # /tmp/2022-10-27-15-20-00/test.py:12:15
return (%11)
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/torch/onnx/utils.py", line 1652, in _export
_C._check_onnx_proto(proto, full_check=True)
RuntimeError: Attribute 'kernel_shape' is expected to have field 'ints'
==> Context: Bad node spec for node. Name: /b/Conv OpType: Conv
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/2022-10-27-15-20-00/test.py", line 27, in <module>
torch.onnx.export(
File "/usr/lib/python3.10/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/usr/lib/python3.10/site-packages/torch/onnx/utils.py", line 1654, in _export
raise errors.CheckerError(e)
torch.onnx.errors.CheckerError: Attribute 'kernel_shape' is expected to have field 'ints'
==> Context: Bad node spec for node. Name: /b/Conv OpType: Conv
```
Interestingly, if I remove one of the two inputs so that there is only one input (still as a list), the error disappears.
Also, if I don't script the model, so that it is traced before conversion, there is no error either. However, this is not what I want because my input has dynamic shapes, and as far as I know that does not work well with traced models (or am I wrong here?). Regardless, this seems to be an issue specific to scripted models.
An additional observation: if I remove `self.b` entirely and just return the output of `self.a`, then I get the following error:
```
[ShapeInferenceError] Inferred shape and existing shape differ in dimension 0: (1) vs (1024)
```
Which also disappears if I use tracing instead of scripting.
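Since the tracing path works, a possible interim route for dynamic shapes is tracing combined with `dynamic_axes`. The sketch below is a hedged illustration on a simplified single-input module; the module and axis names are the editor's assumptions, not taken from this report:
```python
import torch

class SingleInput(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Conv2d(512, 1024, kernel_size=1)

    def forward(self, feature: torch.Tensor) -> torch.Tensor:
        return self.a(feature)

model = SingleInput().eval()
example = torch.randn(1, 512, 64, 64)
torch.onnx.export(
    model,                      # not scripted, so the exporter traces it
    (example,),
    "model_traced.onnx",
    input_names=["feature"],
    output_names=["out"],
    dynamic_axes={
        "feature": {0: "batch", 2: "height", 3: "width"},
        "out": {0: "batch", 2: "height", 3: "width"},
    },
)
```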
### Versions
I tried this both on 1.12.1 from the Arch repositories and on the current `master` branch; both gave the same results.
```
PyTorch version: 1.14.0a0+git1780e0e
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 14.0.6
CMake version: version 3.24.2
Libc version: glibc-2.36
Python version: 3.10.8 (main, Oct 13 2022, 21:13:48) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.75-1-lts-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 520.56.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] segmentation-models-pytorch==0.3.0
[pip3] torch==1.14.0a0+git1780e0e
[conda] Could not collect
```
| 3 |
4,370 | 87,861 |
RuntimeError: unable to mmap 29764 bytes from file </torch_10182_3020184674_63991>: Cannot allocate memory (12)
|
triaged, module: tensor creation
|
### 🐛 Describe the bug
```python
for _ in range(flags.num_buffers):
    for key in buffers:
        buffers[key].append(torch.zeros(**specs[key]).share_memory_())
return buffers
```
```
Traceback (most recent call last):
  File "/home/cloudam/rl-baseline (copy)/monobeast.py", line 634, in <module>
  File "/home/cloudam/rl-baseline (copy)/monobeast.py", line 512, in train
  File "/home/cloudam/rl-baseline (copy)/monobeast.py", line 140, in create_buffers
  File "/home/cloudam/miniconda3/lib/python3.9/site-packages/torch/_tensor.py", line 619, in share_memory_
  File "/home/cloudam/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 627, in share_memory_
  File "/home/cloudam/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 208, in share_memory_
RuntimeError: unable to mmap 29764 bytes from file </torch_10182_3020184674_63991>: Cannot allocate memory (12)
```
### Versions
pytorch 10.0 cu116
cc @gchanan @mruberry
| 1 |
4,371 | 87,859 |
M1 Mac, MPS: Buffer is not large enough
|
triaged, module: mps
|
### 🐛 Describe the bug
Issue related to #87351: `Error: buffer is not large enough. Must be 600 bytes`
I am not sure if it is the same issue, though, as my use case is somewhat different. But I can confirm that all reproductions suggested in the comments of #87351 crash for me as well.
In my case, I am trying to repurpose the code from [this repository](https://github.com/naver/sqlova) to work using MPS.
After running
`python3 train.py --seed 1 --bS 16 --accumulate_gradients 2 --bert_type_abb uS --fine_tune --lr 0.001 --lr_bert 0.00001 --max_seq_leng 222 --tepoch 10 --do_train`
Output:
```
BERT-type: uncased_L-12_H-768_A-12
Batch_size = 32
BERT parameters:
learning rate: 1e-05
Fine-tune BERT: True
vocab size: 30522
hidden_size: 768
num_hidden_layer: 12
num_attention_heads: 12
hidden_act: gelu
intermediate_size: 3072
hidden_dropout_prob: 0.1
attention_probs_dropout_prob: 0.1
max_position_embeddings: 512
type_vocab_size: 2
initializer_range: 0.02
Load pre-trained parameters.
Seq-to-SQL: the number of final BERT layers to be used: 2
Seq-to-SQL: the size of hidden dimension = 100
Seq-to-SQL: LSTM encoding layer size = 2
Seq-to-SQL: dropout rate = 0.3
Seq-to-SQL: learning rate = 0.001
/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:782: failed assertion `[MPSNDArray, initWithBuffer:descriptor:] Error: buffer is not large enough. Must be 600 bytes'
python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 20 leaked semaphore objects to clean up at shutdown
```
I currently suspect the issue is somehow related to the `torch.utils.data.DataLoader` function, as I do **not** get the error when setting `batch_size` to 1. The error does show up when using any `batch_size` greater than 1. Notably, the number of bytes required in the error message also changes depending on `batch_size`, eg:
```
[...] --bS 8 --accumulate_gradients 2 [...]
Batch_size = 16
...
Error: buffer is not large enough. Must be 396 bytes
```
```
[...] --bS 4 --accumulate_gradients 2 [...]
Batch_size = 8
...
Error: buffer is not large enough. Must be 280 bytes
```
Also worth noting that changing the value of the `--accumulate_gradients` argument does not impact whether the code runs successfully: `--bS 4 --accumulate_gradients 8 \\ Batch_size = 32` will crash, while `--bS 1 --accumulate_gradients 32 \\ Batch_size = 32` will run. Given that `--accumulate_gradients` is not passed to `torch.utils.data.DataLoader`, I suspect it rather has to do with `batch_size`, though I could also be completely off the mark, this is just what I have observed while running the code.
Happy to provide any further details if it helps identify the issue!
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev @ebraminio
### Versions
Python version:
```
python --version
Python 3.9.13
```
Torch versions:
```
pip freeze | grep torch
pytorch-pretrained-bert==0.6.2
torch==1.14.0.dev20221027
torchaudio==0.14.0.dev20221025
torchvision==0.15.0.dev20221027
```
| 0 |
4,372 | 93,582 |
[Inductor] Support deterministic parallel reduction in CPP backend
|
triaged
|
The CPP backend uses OpenMP to do parallel reduction, which cannot guarantee a deterministic reduction order and might produce different run-to-run results, while determinism is important to ease debugging. We should replace the OMP parallel reduction with a deterministic parallel reduction.
Relevant issue/PR: https://github.com/pytorch/pytorch/issues/93542, https://github.com/pytorch/pytorch/pull/87259
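For intuition (a Python-level illustration only, not the CPP backend itself): floating-point summation is order-dependent, which is exactly why an unordered OMP reduction can differ run to run.
```python
import random

vals = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
ordered = sum(vals)

shuffled = vals[:]
random.shuffle(shuffled)
reordered = sum(shuffled)

# The two sums are mathematically equal but usually differ in the last bits.
print(ordered == reordered, abs(ordered - reordered))
```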
| 0 |
4,373 | 87,841 |
`max_unpool3d` will trigger an assertion fail under compute sanitizer
|
module: cuda, triaged, module: sanitizers, module: pooling
|
### 🐛 Describe the bug
A test case for `torch.nn.functional.max_unpool3d()` terminates normally without compute sanitizer but triggers an assertion error when it is executed under compute sanitizer.
Reproduction:
```python
import torch
def test():
arg_1_tensor = torch.rand([20, 16, 25, 16, 7], dtype=torch.float32)
arg_1 = arg_1_tensor.clone().cuda()
arg_2_tensor = torch.randint(-8,16384,[20, 16, 25, 16, 7], dtype=torch.int64)
arg_2 = arg_2_tensor.clone().cuda()
arg_3 = [-3, 14, -12]
arg_4 = [2, 2, 2]
arg_5 = [0, 0, 0]
arg_6 = None
res = torch.nn.functional.max_unpool3d(arg_1,arg_2,arg_3,arg_4,arg_5,arg_6,)
test()
```
Error log:
```bash
========= COMPUTE-SANITIZER
========= Warp assert
========= at 0x1c90 in __assertfail
========= by thread (0,3,0) in block (0,1,61)
========= Device Frame:void at::native::max_unpooling3d_forward_kernel<float>(at::GenericPackedTensorAccessor<T1, (unsigned long)4, at::DefaultPtrTraits, long>, at::GenericPackedTensorAccessor<long, (unsigned long)4, at::DefaultPtrTraits, long>, T1 *, long, long, long, long) [0x750]
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2ea4a1]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x1433c]
========= in /lib/x86_64-linux-gnu/libcudart.so.11.0
========= Host Frame:cudaLaunchKernel [0x69c38]
========= in /lib/x86_64-linux-gnu/libcudart.so.11.0
========= Host Frame:at::native::max_unpooling3d_forward_out_cuda(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, at::Tensor&)::{lambda()#1}::operator()() const [0x1f310ba]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::native::max_unpooling3d_forward_out_cuda(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, at::Tensor&) [0x1f3414a]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::native::max_unpooling3d_forward_cuda(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x1f34e2d]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::(anonymous namespace)::(anonymous namespace)::wrapper__max_unpool3d(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x2e0af21]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>), &at::(anonymous namespace)::(anonymous namespace)::wrapper__max_unpool3d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long> > >, at::Tensor (at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x2e0afc7]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::_ops::max_unpool3d::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x1f4203c]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::VariableType::(anonymous namespace)::max_unpool3d(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x3744769]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>), &torch::autograd::VariableType::(anonymous namespace)::max_unpool3d>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long> > >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x3744dfa]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::max_unpool3d::call(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>) [0x1fc43fe]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::THPVariable_max_unpool3d(_object*, _object*, _object*) [0x5c1226]
========= in /home/yuyao/.local/lib/python3.10/site-packages/torch/lib/libtorch_python.so
========= Host Frame: [0x15c8de]
========= in /usr/bin/python3
========= Host Frame:_PyObject_MakeTpCall [0x1533bb]
========= in /usr/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault [0x14bf59]
========= in /usr/bin/python3
========= Host Frame:_PyFunction_Vectorcall [0x15d12c]
========= in /usr/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault [0x14b5f2]
========= in /usr/bin/python3
========= Host Frame:_PyFunction_Vectorcall [0x15d12c]
========= in /usr/bin/python3
========= Host Frame:_PyEval_EvalFrameDefault [0x1458a1]
========= in /usr/bin/python3
========= Host Frame: [0x142026]
========= in /usr/bin/python3
========= Host Frame:PyEval_EvalCode [0x238106]
========= in /usr/bin/python3
========= Host Frame: [0x264e18]
========= in /usr/bin/python3
========= Host Frame: [0x25dc6b]
========= in /usr/bin/python3
========= Host Frame: [0x264b65]
========= in /usr/bin/python3
========= Host Frame:_PyRun_SimpleFileObject [0x264048]
========= in /usr/bin/python3
========= Host Frame:_PyRun_AnyFileObject [0x263d43]
========= in /usr/bin/python3
========= Host Frame:Py_RunMain [0x25517e]
========= in /usr/bin/python3
========= Host Frame:Py_BytesMain [0x22b0dd]
========= in /usr/bin/python3
========= Host Frame:../sysdeps/nptl/libc_start_call_main.h:58:__libc_start_call_main [0x29d90]
========= in /lib/x86_64-linux-gnu/libc.so.6
========= Host Frame:../csu/libc-start.c:379:__libc_start_main [0x29e40]
========= in /lib/x86_64-linux-gnu/libc.so.6
========= Host Frame:_start [0x22afd5]
========= in /usr/bin/python3
=========
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [0,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [1,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [2,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [3,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [4,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [5,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,61], thread: [6,2,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [0,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [1,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [2,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [3,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [4,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [5,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,0,20], thread: [6,0,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [0,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [1,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [2,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [3,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [4,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [5,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,61], thread: [6,3,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [0,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [1,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [2,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [3,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [4,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [5,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
/home/yuyao/dev/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:70: max_unpooling3d_forward_kernel: block: [0,1,58], thread: [6,1,0] Assertion `index >= 0 && index < outputImageSize` failed.
========= ERROR SUMMARY: 1 error
```
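For contrast, here is a hedged sketch of the intended pairing, where the indices come from `max_pool3d` itself and all sizes are consistent, so the kernel's index-bounds assertion cannot fire:
```python
import torch
import torch.nn.functional as F

x = torch.rand(2, 3, 8, 8, 8, device="cuda")
pooled, idx = F.max_pool3d(x, kernel_size=2, stride=2, return_indices=True)
restored = F.max_unpool3d(pooled, idx, kernel_size=2, stride=2)
print(restored.shape)  # torch.Size([2, 3, 8, 8, 8])
```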
### Versions
PyTorch version: 1.14.0a0+gita86278b
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 11.1.0-6
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.14.0a0+gita86278b
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
cc @ngimel
| 1 |
4,374 | 87,800 |
[ONNX] Graph passes analysis
|
module: onnx, triaged, onnx-triaged
|
## Objective
Categorize the different passes applied to the jit Graph during ONNX export and determine their relevance for future designs.
## Categories
- [Tracing] steps (inline autograd)
- Graph [optimization] (dce, constant folding)
- ONNX necessary [adaptation] (Scalar to tensors etc.)
## Passes
| Pass | Category | What it does | Notes |
| -------------------------------------------------------- | ------------ | ------------ | ----- |
| _C._jit_pass_canonicalize | ? | | |
| _C._jit_pass_canonicalize_graph_fuser_ops | ? | | |
| _C._jit_pass_constant_propagation | optimization | | |
| _C._jit_pass_cse | optimization | | |
| _C._jit_pass_custom_pattern_based_rewrite_graph | ? | | |
| _C._jit_pass_dce | optimization | | |
| _C._jit_pass_dce_allow_deleting_nodes_with_side_effects | optimization | | |
| _C._jit_pass_erase_number_types | adaptation | | |
| _C._jit_pass_filter_non_tensor_arguments | ? | | |
| _C._jit_pass_fuse_addmm | ? | | |
| _C._jit_pass_inline | tracing | | |
| _C._jit_pass_inline_fork_wait | tracing | | |
| _C._jit_pass_lint | tracing | | |
| _C._jit_pass_lower_all_tuples | adaptation | | |
| _C._jit_pass_onnx | ? | | |
| _C._jit_pass_onnx_assign_output_shape | adaptation | | |
| _C._jit_pass_onnx_assign_scoped_names_for_node_and_value | adaptation | | |
| _C._jit_pass_onnx_autograd_function_process | tracing | | |
| _C._jit_pass_onnx_cast_all_constant_to_floating | adaptation | | |
| _C._jit_pass_onnx_clear_scope_records | | | |
| _C._jit_pass_onnx_constant_fold | optimization | | |
| _C._jit_pass_onnx_deduplicate_initializers | optimization | | |
| _C._jit_pass_onnx_eliminate_unused_items | optimization | | |
| _C._jit_pass_onnx_eval_peephole | optimization | | |
| _C._jit_pass_onnx_function_extraction | | | |
| _C._jit_pass_onnx_function_substitution | | | |
| _C._jit_pass_onnx_graph_shape_type_inference | adaptation | | |
| _C._jit_pass_onnx_lint | | | |
| _C._jit_pass_onnx_peephole | optimization | | |
| _C._jit_pass_onnx_preprocess | ? | | |
| _C._jit_pass_onnx_preprocess_caffe2 | ? | | |
| _C._jit_pass_onnx_quantization_insert_permutes | adaptation | | |
| _C._jit_pass_onnx_remove_inplace_ops_for_onnx | adaptation | | |
| _C._jit_pass_onnx_remove_print | adaptation | | |
| _C._jit_pass_onnx_scalar_type_analysis | adaptation | | |
| _C._jit_pass_onnx_set_dynamic_input_shape | adaptation | | |
| _C._jit_pass_onnx_track_scope_attributes | adaptation | | |
| _C._jit_pass_onnx_unpack_quantized_weights | adaptation | | |
| _C._jit_pass_peephole | ? | | |
| _C._jit_pass_prepare_division_for_onnx | adaptation | | |
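For orientation, these passes are ordinary functions applied to a JIT graph. Below is a hedged example of applying a couple of the generic ones; the call pattern mirrors how `torch.onnx.utils` uses them, but treat the exact invocation as illustrative rather than an API guarantee:
```python
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    unused = x * 2   # dead value, a candidate for DCE
    return x + 1

graph = torch.jit.script(f).graph
torch._C._jit_pass_inline(graph)
torch._C._jit_pass_dce(graph)
torch._C._jit_pass_lint(graph)
print(graph)
```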
| 1 |
4,375 | 87,794 |
CUDA error: operation not permitted when stream is capturing (2 GPUs)
|
module: multi-gpu, triaged
|
### 🐛 Describe the bug
The error occurs when capturing a CUDA graph with a small network on a machine with more than 1 GPU.
The MWE indicates that some temporary allocations are silently made on a different (default) device than the one where the network and the data reside, so this may not be an issue with capturing itself.
The minimal setup to reproduce requires at least 2 GPUs. The MWE is:
```
import torch.nn as nn
import torch
import torch.optim as optim
import torch.nn.functional as F
dev = torch.device('cuda:1') # we intend to create the data and the network, and capture the graph, on this GPU only
D = 32
input_shape = torch.Size([4, D])
s = torch.cuda.Stream(device=dev)
s.wait_stream(torch.cuda.current_stream())
x = torch.rand(input_shape, dtype=torch.float32, device=dev)
L = nn.Linear(D,D, bias=None).to(dev)
x.requires_grad = True
def loss(x):
x = F.linear(x, L.weight)
x = F.softplus(x)
x = F.linear(x, L.weight)
return x.sum()
with torch.cuda.device(0): # this clause should not have effect, the issue occurs with or without it
print("Warmup 3 batches")
with torch.cuda.stream(s): # warmup on a side stream, why?
for i in range(3):
l = loss(x)
l.backward()
torch.cuda.current_stream().wait_stream(s)
print("Capturing")
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
l = loss(x)
l.backward()
print("Replay 3 batches")
for i in range(3):
g.replay()
print(f'Test passed')
```
Output:
```
Warmup 3 batches
Capturing
Traceback (most recent call last):
File "/home.nfs/shekhole/work/quant/demo/MWE.py", line 35, in <module>
l.backward()
File "/mnt/appl/software/PyTorch/1.11.0-foss-2021a-CUDA-11.3.1/lib/python3.9/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/mnt/appl/software/PyTorch/1.11.0-foss-2021a-CUDA-11.3.1/lib/python3.9/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: operation not permitted when stream is capturing
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
...
```
The workaround is therefore rather simple: use `CUDA_VISIBLE_DEVICES` if we do not need more than 1 GPU, or enclose all the execution in `with torch.cuda.device(dev)`. But it appears to be a bug nevertheless.
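A hedged sketch of the second workaround, condensed by the editor from the MWE above and not verified on the reporter's setup:
```python
import torch
import torch.nn.functional as F

dev = torch.device('cuda:1')

with torch.cuda.device(dev):                    # make cuda:1 the current device throughout
    x = torch.rand(4, 32, device=dev, requires_grad=True)
    L = torch.nn.Linear(32, 32, bias=None).to(dev)

    s = torch.cuda.Stream(device=dev)
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):                  # warmup on a side stream
        for _ in range(3):
            F.linear(x, L.weight).sum().backward()
    torch.cuda.current_stream().wait_stream(s)

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):                   # capture, with all work kept on cuda:1
        F.linear(x, L.weight).sum().backward()
    g.replay()
```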
### Versions
At this moment I was able to test only on two servers with older GPUs, where the maximal version of pytorch supporting these GPUs is 1.11. I will try to find a way to run it in a newer setup.
```
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (GCC) 10.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.9.5 (default, Nov 29 2021, 15:17:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-4.19.0-21-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1: NVIDIA GeForce GTX 1080 Ti
GPU 2: NVIDIA GeForce GTX 1080 Ti
GPU 3: NVIDIA GeForce GTX 1080 Ti
GPU 4: NVIDIA GeForce GTX 1080 Ti
GPU 5: NVIDIA GeForce GTX 1080 Ti
GPU 6: NVIDIA GeForce GTX 1080 Ti
GPU 7: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
```
```
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (GCC) 10.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.9.5 (default, Nov 29 2021, 15:17:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-4.19.0-18-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080
GPU 1: NVIDIA GeForce GTX 1080
Nvidia driver version: 470.103.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
```
| 5 |
4,376 | 87,792 |
`AvgPool` and `MaxPool` will crash in JIT w/o profiling executor
|
oncall: jit
|
### 🐛 Describe the bug
`AvgPool` and `MaxPool` will crash in JIT w/o profiling executor when the output size contains `0`
For `AdaptiveMaxPool`, it will directly crash in the forward phase
```py
import torch
torch._C._jit_set_profiling_executor(False)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
output_size = [0, 1, 1]
arg_class = torch.nn.AdaptiveMaxPool3d(output_size)
self.arg_class = arg_class
def forward(self, inp):
arg_class = self.arg_class
fn_res = arg_class(inp)
return fn_res
fn = M().to('cuda')
inp = torch.randn([1, 1, 1, 9, 10], dtype=torch.float32, device='cuda')
jit_fn = torch.jit.script(fn)
jit_fn(inp)
```
```
floating point exception (core dumped)
```
For `AvgPool`, it will crash in the backward phase
```py
import torch
torch._C._jit_set_profiling_executor(False)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
output_size = [0, 1, 1]
arg_class = torch.nn.AdaptiveAvgPool3d(output_size, )
self.arg_class = arg_class
def forward(self, inp):
arg_class = self.arg_class
fn_res = arg_class(inp)
return fn_res
inp = torch.rand([1, 1, 1, 9, 10], dtype=torch.float32)
inp = inp.to('cuda')
fn = M().to('cuda')
jit_fn = torch.jit.trace(fn, inp)
jit_fn(inp.requires_grad_()).sum().backward()
```
```
floating point exception (core dumped)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.14.0.dev20221020
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.76
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.14.0.dev20221020
[pip3] torchaudio==0.14.0.dev20221020
[pip3] torchvision==0.15.0.dev20221020
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.14.0.dev20221020 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow-base 2.9.1 mkl_py39h353358b_0
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
| 0 |
4,377 | 87,787 |
`BatchNorm` a 0-shape tensor will crash in JIT trace w/o profiling executor on cuda
|
oncall: jit
|
### 🐛 Describe the bug
`BatchNorm` a 0-shape tensor will crash in JIT trace w/o profiling executor on cuda
```py
import torch
torch._C._jit_set_profiling_executor(False)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
num_features = 256
arg_class = torch.nn.BatchNorm1d(num_features, device='cuda', )
self.num_features = num_features
self.arg_class = arg_class
def forward(self, inp):
arg_class = self.arg_class
fn_res = arg_class(inp)
return fn_res
fn = M().to('cuda')
inp = torch.empty([11, 0], dtype=torch.float32).to('cuda')
print(fn(inp))
jit_fn = torch.jit.trace(fn, inp)
print(jit_fn(inp))
```
```
tensor([], device='cuda:0', size=(11, 0), grad_fn=<AddBackward0>)
[1] 1568595 floating point exception (core dumped)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.14.0.dev20221020
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.5-051905-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 515.76
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.3
[pip3] torch==1.14.0.dev20221020
[pip3] torchaudio==0.14.0.dev20221020
[pip3] torchvision==0.15.0.dev20221020
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 pypi_0 pypi
[conda] numpy-base 1.23.1 py39ha15fc14_0
[conda] pytorch 1.14.0.dev20221020 py3.9_cuda11.6_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow-base 2.9.1 mkl_py39h353358b_0
[conda] torchaudio 0.13.0.dev20220923+cu116 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220923+cu116 pypi_0 pypi
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
4,378 | 87,785 |
ONNX-exported model cannot output Dict[str, X] or str
|
module: onnx, triaged
|
### 🐛 Describe the bug
`Dict[str, float]` or simply even `str` cannot be a return value from a scripted PyTorch model passed through `torch.onnx.export`.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import *
class ALSModel(nn.Module):
def __init__(self, n_items: int, n_factors: int):
super(ALSModel, self).__init__()
def forward(self, ind: torch.Tensor, size: int = 5)-> str:
return "heyo"
m = torch.jit.script(ALSModel(10000, 128))
x = (torch.ones(1).long(), 3)
torch.onnx.export(m, x, "data/model2.onnx", input_names=['contentId', 'size'], verbose=True, opset_version=16)
```
Presents the error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In [1], line 17
14 m = torch.jit.script(ALSModel(10000, 128))
15 x = (torch.ones(1).long(), 3)
---> 17 torch.onnx.export(m, x, "data/model2.onnx", input_names=['contentId', 'size'], verbose=True, opset_version=16)
File ~/micromamba/envs/onnx/lib/python3.10/site-packages/torch/onnx/__init__.py:350, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions)
74 r"""
75 Exports a model into ONNX format. If ``model`` is not a
76 :class:`torch.jit.ScriptModule` nor a :class:`torch.jit.ScriptFunction`, this runs
(...)
345 model to the file ``f`` even if this is raised.
346 """
348 from torch.onnx import utils
--> 350 return utils.export(
351 model,
352 args,
353 f,
354 export_params,
355 verbose,
356 training,
357 input_names,
358 output_names,
359 operator_export_type,
360 opset_version,
361 do_constant_folding,
362 dynamic_axes,
363 keep_initializers_as_inputs,
364 custom_opsets,
365 export_modules_as_functions,
366 )
File ~/micromamba/envs/onnx/lib/python3.10/site-packages/torch/onnx/utils.py:163, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions)
145 def export(
146 model,
147 args,
(...)
160 export_modules_as_functions=False,
161 ):
--> 163 _export(
164 model,
165 args,
166 f,
167 export_params,
168 verbose,
169 training,
170 input_names,
171 output_names,
172 operator_export_type=operator_export_type,
173 opset_version=opset_version,
174 do_constant_folding=do_constant_folding,
175 dynamic_axes=dynamic_axes,
176 keep_initializers_as_inputs=keep_initializers_as_inputs,
177 custom_opsets=custom_opsets,
178 export_modules_as_functions=export_modules_as_functions,
179 )
File ~/micromamba/envs/onnx/lib/python3.10/site-packages/torch/onnx/utils.py:1074, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, onnx_shape_inference, export_modules_as_functions)
1071 dynamic_axes = {}
1072 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
-> 1074 graph, params_dict, torch_out = _model_to_graph(
1075 model,
1076 args,
1077 verbose,
1078 input_names,
1079 output_names,
1080 operator_export_type,
1081 val_do_constant_folding,
1082 fixed_batch_size=fixed_batch_size,
1083 training=training,
1084 dynamic_axes=dynamic_axes,
1085 )
1087 # TODO: Don't allocate a in-memory string for the protobuf
1088 defer_weight_export = (
1089 export_type is not torch.onnx.ExportTypes.PROTOBUF_FILE
1090 )
File ~/micromamba/envs/onnx/lib/python3.10/site-packages/torch/onnx/utils.py:731, in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes)
728 params_dict = _get_named_param_dict(graph, params)
730 try:
--> 731 graph = _optimize_graph(
732 graph,
733 operator_export_type,
734 _disable_torch_constant_prop=_disable_torch_constant_prop,
735 fixed_batch_size=fixed_batch_size,
736 params_dict=params_dict,
737 dynamic_axes=dynamic_axes,
738 input_names=input_names,
739 module=module,
740 )
741 except Exception as e:
742 torch.onnx.log("Torch IR graph at exception: ", graph)
File ~/micromamba/envs/onnx/lib/python3.10/site-packages/torch/onnx/utils.py:332, in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module)
330 _C._jit_pass_lint(graph)
331 if GLOBALS.onnx_shape_inference:
--> 332 _C._jit_pass_onnx_graph_shape_type_inference(
333 graph, params_dict, GLOBALS.export_onnx_opset_version
334 )
335 return graph
RuntimeError: unexpected tensor scalar type
```
More realistically, I would like to output a `Dict[str, float]` which holds an external id and a score for that id. Seemingly this cannot be done, either because of ONNX itself or because of the PyTorch export.
With PyTorch==1.12.1, this code simply spins forever, without ever hard-crashing for me.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import *
class ALSModel(nn.Module):
def __init__(self, n_items: int, n_factors: int):
super(ALSModel, self).__init__()
self.emb = nn.Embedding(n_items, n_factors)
self.it2ind = {str(i): i for i in range(n_items)}# TODO use real values
self.ind2it = {v:k for k,v in self.it2ind.items()}
def map_out(self, scores: torch.Tensor, indices: torch.Tensor) -> Dict[str, float]:
inds: List[int] = indices.squeeze().tolist()
scorez: List[float] = scores.squeeze().tolist()
return {self.ind2it[i]: s for i, s in zip(inds, scorez)}
def forward(self, ind: torch.Tensor, size: int = 30)-> Dict[str, float]:
u = self.emb(ind)
scores = u @ self.emb.weight.t()
s, i = scores.topk(size)
return self.map_out(s, i)
m = torch.jit.script(ALSModel(10000, 128))
x = (torch.ones(1).long(), 3)
torch.onnx.export(m, x, "data/model2.onnx", input_names=['contentId', 'size'], verbose=True, opset_version=16)
```
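One workaround pattern (an editor's hedged sketch, with illustrative names such as `TopKModel` and `ind2it`) is to keep the exported graph purely tensor-in/tensor-out and apply the index-to-string mapping in ordinary Python after inference:
```python
from typing import Dict, List, Tuple

import torch


class TopKModel(torch.nn.Module):
    def __init__(self, n_items: int, n_factors: int):
        super().__init__()
        self.emb = torch.nn.Embedding(n_items, n_factors)

    def forward(self, ind: torch.Tensor, size: int = 30) -> Tuple[torch.Tensor, torch.Tensor]:
        u = self.emb(ind)
        scores = u @ self.emb.weight.t()
        s, i = scores.topk(size)
        return s, i                      # ONNX-friendly: plain tensors only


def map_out(scores: List[float], indices: List[int], ind2it: Dict[int, str]) -> Dict[str, float]:
    # the string mapping stays outside the model artifact
    return {ind2it[i]: s for i, s in zip(indices, scores)}
```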
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] Could not collect /// (micromamba==0.26.0)
This is related to #83801, in which it was noted that `Dict`s might be on the roadmap. Is this the case? Keeping separate mapping files outside the model artifact itself is quite cumbersome, and I would really like to avoid it if possible.
Thanks for your time :slightly_smiling_face:
| 2 |
4,379 | 93,580 |
AssertionError: Unknown expression s2
|
triaged, bug
|
### 🐛 Describe the bug
```
TORCHDYNAMO_DYNAMIC_SHAPES=1 AOT_DYNAMIC_SHAPES=0 time python benchmarks/dynamo/huggingface.py --accuracy --backend eager --training --only AllenaiLongformerBase
```
on master
```
File "/private/home/ezyang/.local/lib/python3.8/site-packages/sympy/printing/printer.py", line 331, in _print
return printmethod(expr, **kwargs)
File "/private/home/ezyang/.local/lib/python3.8/site-packages/sympy/printing/str.py", line 779, in _print_Relational
return '%s(%s, %s)' % (charmap[expr.rel_op], self._print(expr.lhs),
File "/private/home/ezyang/.local/lib/python3.8/site-packages/sympy/printing/printer.py", line 331, in _print return printmethod(expr, **kwargs)
File "/raid/ezyang/pytorch-scratch2/torch/_dynamo/guards.py", line 526, in _print_Symbol
assert expr in self.expr_to_tensor_ref, f"Unknown expression {expr}"
AssertionError: Unknown expression s2
```
### Error logs
_No response_
### Minified repro
_No response_
| 4 |
4,380 | 87,782 |
Libtorch windows binaries publishing
|
module: ci, triaged, topic: binaries
|
### 🐛 Describe the bug
Looks like the following libtorch configurations are missing from Windows while present in Linux.
Research whether they are required, and if they are, make sure we build them:
Windows Libtorch missing:
```
libtorch_variants = [
"shared-without-deps",
"static-with-deps",
"static-without-deps",
]
```
### Versions
1.14
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
4,381 | 87,781 |
`torchtyping` annotations make saving to Torchscript fail
|
oncall: jit
|
### 🐛 Describe the bug
It seems that there is already some interest in offering a type-annotation system for PyTorch.
[This issue](https://github.com/pytorch/pytorch/issues/73359) mentions [torchtyping](https://github.com/patrick-kidger/torchtyping). However, when trying to save an `nn.Module` that includes such annotations to TorchScript, I get an error that looks like this:
```
RuntimeError:
Unknown type constructor TensorType:
File "my/code/vae.py", line 34
def forward(
self, x: TensorType["batch_size", "input_dim"]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
) -> TensorType["batch_size", "input_dim"]:
```
It's very unclear to me why a type annotation would impact the TorchScript saving process, but I don't know anything about TorchScript.
However, the `torchtyping` maintainer seems to have looked into it, and they mentioned a year ago that the issue lies with the TorchScript implementation. See the full discussion here: <https://github.com/patrick-kidger/torchtyping/issues/13>. Since they recommended contacting the TorchScript maintainers, and I could not find an issue about it in this project, I'd like to bring this to the attention of whom it may concern, if they're not already aware of it.
Some kind people have shared [workarounds](https://github.com/patrick-kidger/torchtyping/issues/13#issuecomment-1243725436) in the comments of this issue, and they seem to work.
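For illustration, here is one shape such a workaround can take. This is my own sketch rather than the exact snippet from that thread, and it drops the dimension labels (which are the whole point of `torchtyping`), so it is only a stopgap:
```
from typing import TYPE_CHECKING

import torch
from torch import nn

if TYPE_CHECKING:
    # Static type checkers still see the torchtyping annotation.
    from torchtyping import TensorType
else:
    # TorchScript resolves annotations at compile time, so give it a type it
    # understands when the module is actually scripted.
    TensorType = torch.Tensor


class Plain(nn.Module):
    def forward(self, x: TensorType) -> TensorType:
        return x


scripted = torch.jit.script(Plain())  # no "Unknown type constructor" error
```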
Thanks for your attention!
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.2
Libc version: N/A
Python version: 3.10.7 (main, Sep 15 2022, 01:51:29) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-12.5.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.12.1
[pip3] torchmetrics==0.10.0
[pip3] torchtyping==0.1.4
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
4,382 | 87,777 |
Improvements to fuse optimization
|
oncall: quantization, triaged, module: fx
|
### 🚀 The feature, motivation and pitch
Some optimization packages call pytorch's fuse function to try optimization without knowing much about the model, notably ipex and packages which call ipex.
This operation relies on symbolic tracing, which fails frequently due to the limitations on conditional statements.
A possible improvement would be to filter the patterns before tracing, keeping only those that could apply given the submodules present; for example, if there is no conv or batch norm, there is no reason to include a conv-bn pattern. If no patterns are left after this step, just return the model unchanged, as sketched below.
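A minimal sketch of that pre-filtering, assuming an illustrative pattern table (the real fuser keys its patterns differently) and using `fuse_fx` as the tracing entry point:
```
import torch
from torch import nn
from torch.ao.quantization import quantize_fx

# Illustrative stand-in for the real fusion pattern table.
FUSION_PATTERNS = [
    (nn.Conv1d, nn.BatchNorm1d),
    (nn.Conv2d, nn.BatchNorm2d),
    (nn.Conv3d, nn.BatchNorm3d),
    (nn.Linear, nn.BatchNorm1d),
]


def applicable_patterns(model: nn.Module):
    present = {type(m) for m in model.modules()}
    return [p for p in FUSION_PATTERNS if all(t in present for t in p)]


def maybe_fuse(model: nn.Module) -> nn.Module:
    if not applicable_patterns(model):
        # No pattern can possibly match, so skip symbolic tracing entirely
        # and avoid the tracing failures described above.
        return model
    return quantize_fx.fuse_fx(model.eval())
```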
### Alternatives
Alternatively or additionally, symbolic tracing (or its use in fuse) could be improved.
The points of failure in symbolic tracing are outlined in the ipex issue and are mainly:
* input checking on optional arguments, which could be resolved by allowing concrete_args in fuse
* dimension checks, which could be improved by allowing a proxy to have a dimension or even a shape (although this goes quite far, and I can see the potential for some strange bugs here)
### Additional context
[this issue](https://github.com/intel/intel-extension-for-pytorch/issues/250) contains some examples of typical failures.
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @vincentqb @jbschlosser @ezyang @SherlockNoMad @soumith @EikanWang @wenzhe-nrv
| 2 |
4,383 | 87,755 |
Autograd precision for CONV + BN between pytorch version 1.11.0 and 1.12.1
|
module: numerical-stability, module: autograd, triaged, module: numerical-reproducibility
|
### 🐛 Describe the bug
```
import numpy as np
import torch


def to_numpy(data):
    if isinstance(data, torch.Tensor):
        if data.requires_grad == True:
            return data.detach().cpu().numpy()
        else:
            return data.cpu().numpy()
    else:
        return np.array(data)


def save_model_grad_to_file(model, grad_file):
    grad = {}
    for name, para in model.named_parameters():
        grad[name] = para.grad
    torch.save(grad, grad_file)


def compare_stat_dict_strict(stat_dict_1, stat_dict_2, rtol=5e-2, atol=1e-5):
    for k, v in stat_dict_1.items():
        if k not in stat_dict_2:
            raise KeyError
        else:
            np.testing.assert_allclose(to_numpy(v), to_numpy(stat_dict_2[k]), rtol=rtol, atol=atol)


def check_pytorch_autograd(input_data=None, targets=None, model: torch.nn.Module = None, device='cpu', test_cycle=2, check_model=False):
    # fix input
    if input_data is None:
        np.random.seed(0)
        np_input = np.random.random((32, 3, 32, 32)).astype(np.float32)
        input_data = torch.from_numpy(np_input).to(device)
    # test model
    if model == None:
        model = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 3, 1),
            torch.nn.BatchNorm2d(32)
        )
    model.train()
    model.to(device)
    loss_fun = torch.nn.MSELoss()
    save_model = False
    for _ in range(test_cycle):
        if save_model == False and not check_model:
            save_model = True
        if save_model:
            torch.save(model.state_dict(), "model_paramter.pth")
        else:
            load_dict = torch.load("model_paramter.pth", map_location=device)
            model.load_state_dict(load_dict, strict=True)
        model_output = model(input_data)
        if save_model:
            np.savez("model_output.npz", out=to_numpy(model_output))
        else:
            test_out = np.load('model_output.npz')['out']
            np.testing.assert_allclose(test_out, to_numpy(model_output))
        loss = loss_fun(torch.ones_like(model_output), model_output)
        print("loss=", loss)
        # empty grad
        for name, para in model.named_parameters():
            para.grad = None
        loss.backward()
        grad = {}
        for name, para in model.named_parameters():
            grad[name] = para.grad
        if save_model:
            torch.save(grad, "model_grad.pth")
        else:
            test_grad = torch.load("model_grad.pth")
            compare_stat_dict_strict(test_grad, grad, rtol=1e-7, atol=0)
            print("check_pytorch_autograd sucess!!!")


if __name__ == "__main__":
    check_pytorch_autograd(check_model=False)  # pytorch 1.11.0 run this will generate the reference grad data
    check_pytorch_autograd(check_model=False)  # pytorch 1.12.1 run
```
I built a simple model with only one CONV and one BN. When using loss.backward() to generate the gradients of the model parameters, there is a precision difference in the gradients between PyTorch 1.11.0 and PyTorch 1.12.1; the max relative error is about 4% in my test. Is something wrong with my test, or is it normal to have a small autograd precision difference between PyTorch 1.11.0 and 1.12.1?
### Versions
autograd precision between pytorch 1.11.0 and 1.12.1 for CONV + BN
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 4 |
4,384 | 87,753 |
`torch.min`/`torch.max` returns bogus values for default int tensors on MPS
|
triaged, module: correctness (silent), module: mps
|
### 🐛 Describe the bug
```
% python -c "import torch;x=torch.tensor([1, 2, 3],device='mps');print(x, x.max(), x.min())"
tensor([1, 2, 3], device='mps:0') tensor(1, device='mps:0') tensor(0, device='mps:0')
```
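Until this is fixed, a workaround sketch: the CPU round trip is definitely safe, while the int32 cast assumes the bug is specific to 64-bit integer reductions, which I have not verified:
```
import torch

x = torch.tensor([1, 2, 3], device="mps")
print(x.cpu().min().item())            # 1, reduction done on the CPU
print(x.to(torch.int32).min().item())  # assumes int32 reductions are unaffected
```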
### Versions
1.12.1, 1.13.0, nightly
cc @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev
| 3 |
4,385 | 87,745 |
TorchDynamo: there has a accuracy issue for conv+unary(binary) post ops for gpu path
|
triaged, module: dynamo
|
### 🐛 Describe the bug
The PR https://github.com/pytorch/pytorch/pull/87678 adds two test cases about conv+unary(binary) post ops, which meet accuracy issue:
```
2022-10-25T09:02:51.8540504Z [2022-10-25 09:00:45,140] torch._inductor.codegen.triton: [CODE] schedule: [SchedulerNode(name='buf1')]
2022-10-25T09:02:51.8540752Z [2022-10-25 09:00:45,142] torch._inductor.codegen.triton: [CODE] schedule: [SchedulerNode(name='buf2')]
2022-10-25T09:02:51.8540922Z test_conv2d_unary_cuda failed - num_retries_left: 3
2022-10-25T09:02:51.8541022Z Traceback (most recent call last):
2022-10-25T09:02:51.8541210Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 1606, in test_conv2d_unary
2022-10-25T09:02:51.8541287Z self.common(
2022-10-25T09:02:51.8541434Z File "/opt/conda/lib/python3.10/unittest/mock.py", line 1369, in patched
2022-10-25T09:02:51.8541539Z return func(*newargs, **newkeywargs)
2022-10-25T09:02:51.8541719Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 358, in check_model_cuda
2022-10-25T09:02:51.8541800Z check_model(
2022-10-25T09:02:51.8541946Z File "/opt/conda/lib/python3.10/unittest/mock.py", line 1369, in patched
2022-10-25T09:02:51.8542049Z return func(*newargs, **newkeywargs)
2022-10-25T09:02:51.8542227Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 256, in check_model
2022-10-25T09:02:51.8542313Z self.assertEqual(
2022-10-25T09:02:51.8542581Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2464, in assertEqual
2022-10-25T09:02:51.8542659Z assert_equal(
2022-10-25T09:02:51.8542915Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1093, in assert_equal
2022-10-25T09:02:51.8543017Z raise error_metas[0].to_error(msg)
2022-10-25T09:02:51.8543170Z AssertionError: Tensor-likes are not close!
2022-10-25T09:02:51.8543204Z
2022-10-25T09:02:51.8543305Z Mismatched elements: 9262 / 401408 (2.3%)
2022-10-25T09:02:51.8543524Z Greatest absolute difference: 0.00048828125 at index (0, 13, 0, 31) (up to 1e-05 allowed)
2022-10-25T09:02:51.8543669Z Greatest relative difference: 1.0 at index (0, 2, 12, 11) (up to 0.001 allowed)
```
and
```
2022-10-25T09:02:51.8199504Z test_conv2d_binary_cuda failed - num_retries_left: 3
2022-10-25T09:02:51.8199603Z Traceback (most recent call last):
2022-10-25T09:02:51.8199787Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 1687, in test_conv2d_binary
2022-10-25T09:02:51.8199862Z self.common(
2022-10-25T09:02:51.8200011Z File "/opt/conda/lib/python3.10/unittest/mock.py", line 1369, in patched
2022-10-25T09:02:51.8200114Z return func(*newargs, **newkeywargs)
2022-10-25T09:02:51.8200288Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 358, in check_model_cuda
2022-10-25T09:02:51.8200363Z check_model(
2022-10-25T09:02:51.8200510Z File "/opt/conda/lib/python3.10/unittest/mock.py", line 1369, in patched
2022-10-25T09:02:51.8200616Z return func(*newargs, **newkeywargs)
2022-10-25T09:02:51.8200792Z File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 256, in check_model
2022-10-25T09:02:51.8200876Z self.assertEqual(
2022-10-25T09:02:51.8201146Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2464, in assertEqual
2022-10-25T09:02:51.8201222Z assert_equal(
2022-10-25T09:02:51.8201473Z File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1093, in assert_equal
2022-10-25T09:02:51.8201571Z raise error_metas[0].to_error(msg)
2022-10-25T09:02:51.8201728Z AssertionError: Tensor-likes are not close!
2022-10-25T09:02:51.8201733Z
2022-10-25T09:02:51.8201834Z Mismatched elements: 27714 / 401408 (6.9%)
2022-10-25T09:02:51.8202052Z Greatest absolute difference: 0.001953125 at index (0, 28, 5, 91) (up to 1e-05 allowed)
2022-10-25T09:02:51.8202230Z Greatest relative difference: 76.47368421052632 at index (0, 14, 59, 105) (up to 0.001 allowed)
```
the failed log can be found at https://pipelines.actions.githubusercontent.com/serviceHosts/7d146c05-69c3-4c20-a0e7-818111670117/_apis/pipelines/1/runs/2323724/signedlogcontent/382?urlExpires=2022-10-26T00%3A44%3A19.3241974Z&urlSigningMethod=HMACV1&urlSignature=X5tBD2%2BVxKQX1s4LgPs2N%2BcUTCaXAMjW2dr0mPiI2ls%3D
### Versions
The test env can be seen at https://github.com/pytorch/pytorch/actions/runs/3319310723/jobs/5484655244.
cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
| 0 |
4,386 | 87,734 |
Checkpointing Support for Modularized Optimizers
|
module: optimizer, triaged, needs design
|
### 🚀 The feature, motivation and pitch
Hi, I'm currently working on an implementation of distributed Shampoo in PyTorch (https://github.com/facebookresearch/optimizers/tree/main/distributed_shampoo). Because of the complexity of the optimizer, it is necessary to modularize the optimizer by creating additional classes that may also contain Tensor objects (i.e., Preconditioner objects). However, `torch.optim.Optimizer`'s checkpointing functionality (`state_dict` / `load_state_dict`) does not natively support this kind of modularization, which requires us to override the `load_state_dict` function to support other custom classes. This is in order to ensure that the tensors contained within these custom state objects are appropriately moved by the `cast` function after loading from checkpoint.
Rather than requiring the user to override and support this additional checkpointing functionality, would it be possible to create an `OptimizerModule` (similar to `nn.Module`) that contains a `load_state_dict` that recursively calls this function within other `OptimizerModule` objects and is compatible with `torch.optim.Optimizer`? Developers can then build their new classes on top of this `OptimizerModule` to ensure that the cast automatically supports new classes.
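To make the ask concrete, here is a rough sketch of what such a base class could look like. `OptimizerModule` does not exist in `torch.optim` today, so every name and behavior below is an assumption:
```
import torch


class OptimizerModule:
    """Hypothetical base class for optimizer sub-components that own tensors."""

    def state_dict(self):
        out = {}
        for name, value in self.__dict__.items():
            if isinstance(value, OptimizerModule):
                out[name] = value.state_dict()  # recurse into nested modules
            else:
                out[name] = value
        return out

    def load_state_dict(self, state_dict):
        for name, value in state_dict.items():
            current = getattr(self, name, None)
            if isinstance(current, OptimizerModule):
                current.load_state_dict(value)
            elif isinstance(current, torch.Tensor) and isinstance(value, torch.Tensor):
                # Mirror Optimizer.load_state_dict's cast: move the loaded
                # tensor to the device/dtype of the existing state.
                setattr(self, name, value.to(current))
            else:
                setattr(self, name, value)
```
`torch.optim.Optimizer.load_state_dict` could then recurse into any state value that is an `OptimizerModule` instead of special-casing individual classes.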
Thanks in advance!
CC'ing collaborators and TorchRec folks since we are also interested in making distributed Shampoo also compatible with TorchRec's `KeyedOptimizer`.
cc: @tsunghsienlee, @dmudiger, @mikerabbat, @YLGH, @albanD
### Alternatives
In order to currently support checkpointing for Shampoo, we created `.to` functions for all custom classes in our optimizer that enable us to move all Tensor objects within all of our custom classes to a different device. We then modified the `cast` function within `torch.optim.optimizer`'s `load_state_dict` function by including an additional case for our `Preconditioner` class. While this works, it is not ideal since it requires us to maintain the `.to` functions for each class or (non-Iterable or dict) data structure we create, which can be bug-prone.
### Additional context
Related to this issue on Distributed Shampoo: https://github.com/facebookresearch/optimizers/issues/3.
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 3 |
4,387 | 87,733 |
FakeTensorMode doesn't support two Scalar inputs, if we use prims' impl as the meta function
|
triaged, module: primTorch, module: fakeTensor
|
### 🐛 Describe the bug
Uncomment the following lines in `_meta_registration.py`:
```
# "aten::add.Tensor", # ValueError: Receive two Number inputs to an elementwise binary operation! inductor/test_torchinductor.py -k test_both_scalars # noqa: B950
# "aten::sub.Tensor", # ValueError: Receive two Number inputs to an elementwise binary operation! inductor/test_torchinductor.py -k test_both_scalars # noqa: B950
# "aten::mul.Tensor", # ValueError: Receive two Number inputs to an elementwise binary operation! inductor/test_torchinductor.py -k test_both_scalars # noqa: B950
# "aten::div.Tensor", # ValueError: Receive two Number inputs to an elementwise binary operation! test_fake_tensor.py -k test_scalar_inputs # noqa: B950
# "aten::div.Tensor_mode", # ValueError: Receive two Number inputs to an elementwise binary operation! inductor/test_torchinductor.py -k test_div8_cpu # noqa: B950
```
This will use prims.add as the meta function.
Then comment out the following lines in `_refs/__init__.py` in the `add` function:
```
# if isinstance(a, Number) and isinstance(b, Number):
# raise ValueError(
# "Receive two Number inputs to an elementwise binary operation!"
# )
```
Repro:
```
import torch
from torch._subclasses import FakeTensorMode

with FakeTensorMode():
    out = torch.add(3, 4)
    print(out)
```
Got
```
Traceback (most recent call last):
File "/scratch/bahuang/work/repos/pytorch/temp/t.py", line 28, in <module>
out = torch.add(3, 4)
File "/scratch/bahuang/work/repos/pytorch/torch/_subclasses/fake_tensor.py", line 831, in __torch_dispatch__
return self.wrap_meta_outputs_with_default_device_logic(
File "/scratch/bahuang/work/repos/pytorch/torch/_subclasses/fake_tensor.py", line 866, in wrap_meta_outputs_with_default_device_logic
return tree_map(partial(wrap), r)
File "/scratch/bahuang/work/repos/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/scratch/bahuang/work/repos/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/scratch/bahuang/work/repos/pytorch/torch/_subclasses/fake_tensor.py", line 879, in wrap
return converter(self, e, device or common_device)
File "/scratch/bahuang/work/repos/pytorch/torch/_subclasses/fake_tensor.py", line 268, in __call__
assert t.device.type == "meta", f"device must be meta, got {t.device.type} instead"
AssertionError: device must be meta, got cpu instead
```
PyTorch eager supports this, and there is a test case in `inductor/test_torchinductor.py -k test_both_scalars` for this.
### Versions
master
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha @eellison, @ngimel, @ezyang
| 8 |
4,388 | 87,706 |
C++ Adagrad optimizer doesn't initialize parameter state
|
module: cpp, module: optimizer, triaged, actionable
|
### 🐛 Describe the bug
optim/adagrad.cpp lacks code to initialize parameter state.
Running this code:
```
auto x=torch::tensor({0.5, 2.0, 4.0}, torch::requires_grad());
auto y=torch::tensor({1.0, 2.0, 3.0});
auto o=torch::optim::Adagrad(std::vector<torch::Tensor>{}, torch::optim::AdagradOptions(0.1));
auto& p=o.param_groups();
p[0].params().push_back(x);
auto l=torch::nn::functional::mse_loss(x,y);
l.backward();
o.step();
```
will throw:
```
state_[c10::guts::to_string(p.unsafeGetTensorImpl())] != nullptr INTERNAL ASSERT FAILED at "../torch/csrc/api/src/optim/adagrad.cpp":80, please report a bug to PyTorch. state found NULL for the Tensor
```
The other optimizers have code that searches for the state and initializes it if not found, e.g. from sgd.cpp:
```
auto param_state = state_.find(c10::guts::to_string(p.unsafeGetTensorImpl()));
if (param_state == state_.end()) {
  buf = torch::clone(d_p).detach();
  auto state = std::make_unique<SGDParamState>();
  state->momentum_buffer(buf);
  state_[c10::guts::to_string(p.unsafeGetTensorImpl())] = std::move(state);
} else {
  buf = static_cast<SGDParamState&>(*param_state->second).momentum_buffer();
  buf.mul_(momentum).add_(d_p, 1 - dampening);
}
```
adagrad.cpp doesn't contain this kind of code.
### Versions
This error in all versions since PR #29335
cc @jbschlosser @vincentqb @albanD
| 1 |
4,389 | 87,701 |
pytorch/pytorch cpu official Docker images
|
triaged, module: docker
|
### 🚀 The feature, motivation and pitch
First of all, I want to thank you for maintaining the PyTorch-based images. I can appreciate the complexity of caring for images that are downloaded millions of times. I've been using these images since I switched to PyTorch and they have never failed me.
Lately, though, I've been missing CPU-only images.
The motivation behind this request is basically to have a lighter version of these images, focused especially on the ability to launch CI test (and potentially other kinds of) jobs in which I don't intend to test my models, just some tensor operations I do with PyTorch.
Beyond that, this would also support the deployment of lightweight models that can run on CPU, when you can afford the time trade-off and want to save some cash.
Thank you,
Claudio
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
4,390 | 87,697 |
Get https://github.com/pytorch/benchmark working
|
module: windows, triaged
|
### Description
- The https://github.com/pytorch/benchmark seems to be the official PyTorch performance benchmark running on the CI.
- This is a subtask of https://github.com/microsoft/ARM-on-Windows-Ecosystem/issues/187
### Acceptance Criteria
- We can compare results of https://github.com/pytorch/benchmark on Windows and WSL.
### Documentation
Steps to make this benchmark work:
- Enable long file names policy [Make Windows 11 Accept File Paths over 260 Characters (thegeekpage.com)](https://thegeekpage.com/make-windows-11-accept-file-paths-over-260-characters/)
- Still an issue [Add long path support for Windows [Approach #2] by majaeger · Pull Request #2056 · ninja-build/ninja (github.com)](https://github.com/ninja-build/ninja/pull/2056)
- `conda create -n pytorch-benchmark-py3.8 python=3.8`
- `conda activate pytorch-benchmark-py3.8`
- `conda install -y -c pytorch pytorch torchvision torchtext`
- `conda install git-lfs pyyaml`
- `git clone https://github.com/pytorch/benchmark`
- Apply the following patch:
```
diff --git a/torchbenchmark/__init__.py b/torchbenchmark/__init__.py
index 897dbe23..92c1aca9 100644
--- a/torchbenchmark/__init__.py
+++ b/torchbenchmark/__init__.py
@@ -43,10 +43,11 @@ internal_model_dir = "fb"
install_file = 'install.py'
-def _test_https(test_url: str = 'https://github.com', timeout: float = 0.5) -> bool:
+def _test_https(test_url: str = 'https://github.com', timeout: float = 5.0) -> bool:
try:
request.urlopen(test_url, timeout=timeout)
- except OSError:
+ except OSError as e:
+ print(e)
return False
return True
@@ -57,7 +58,7 @@ def _install_deps(model_path: str, verbose: bool = True) -> Tuple[bool, Any]:
[sys.executable, install_file],
]
run_env = os.environ.copy()
- run_env["PYTHONPATH"] = this_dir.parent
+ run_env["PYTHONPATH"] = str(this_dir.parent)
run_kwargs = {
'cwd': model_path,
'check': True,
@@ -88,7 +89,10 @@ def _install_deps(model_path: str, verbose: bool = True) -> Tuple[bool, Any]:
return (False, e, io.FileIO(stdout_fpath, mode="r").read().decode())
finally:
del output_buffer
- os.remove(stdout_fpath)
+ try:
+ os.remove(stdout_fpath)
+ except:
+ pass
return (True, None, None)
diff --git a/torchbenchmark/models/fambench_xlmr/requirements.txt b/torchbenchmark/models/fambench_xlmr/requirements.txt
index da75cfcd..06cb9fef 100644
--- a/torchbenchmark/models/fambench_xlmr/requirements.txt
+++ b/torchbenchmark/models/fambench_xlmr/requirements.txt
@@ -1,8 +1,7 @@
sacrebleu
bitarray
# pin fairseq version
-fairseq==0.10.2
+fairseq==0.10.0
omegaconf==2.1.1
hydra-core==1.1.2
sentencepiece
-xformers
diff --git a/torchbenchmark/models/opacus_cifar10/requirements.txt b/torchbenchmark/models/opacus_cifar10/requirements.txt
index 106f0456..6a68fd02 100644
--- a/torchbenchmark/models/opacus_cifar10/requirements.txt
+++ b/torchbenchmark/models/opacus_cifar10/requirements.txt
@@ -1,3 +1,2 @@
-git+https://github.com/pytorch/functorch.git
-# must include the fix https://github.com/pytorch/opacus/pull/426
+functorch
opacus>=1.1.2
diff --git a/torchbenchmark/util/framework/detectron2/requirements.txt b/torchbenchmark/util/framework/detectron2/requirements.txt
index a7f6a571..3c17e0b6 100644
--- a/torchbenchmark/util/framework/detectron2/requirements.txt
+++ b/torchbenchmark/util/framework/detectron2/requirements.txt
@@ -1,3 +1,2 @@
-git+https://github.com/facebookresearch/detectron2.git@c470ca3
omegaconf==2.1.1
numpy
```
- `python .\install.py` (Must be Run as Administrator...)
- TODO: This is just WIP.
| 1 |
4,391 | 87,696 |
Enable PostLocalSGDOptimizer on CUDA tensors
|
oncall: distributed, module: cuda, triaged
|
### 🚀 The feature, motivation and pitch
Hello!
I am trying to leverage the power of the `PostLocalSGDOptimizer` for averaging gradients during distributed training. However, as mentioned in the [docs](https://pytorch.org/docs/stable/distributed.optim.html#), I see that this optimization is not supported on CUDA tensors.
I think that enabling `PostLocalSGDOptimizer` for CUDA tensors would benefit the community tremendously, and thus I was wondering: is this currently being worked on?
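For reference, a minimal sketch of the setup in question, assuming a single-node DDP run with the default process group already initialized; the names follow the post-local SGD docs, but treat the exact arguments as approximate:
```
import torch
import torch.distributed as dist
from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (
    PostLocalSGDState,
    post_localSGD_hook,
)
from torch.distributed.algorithms.model_averaging.averagers import PeriodicModelAverager
from torch.distributed.optim import PostLocalSGDOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

model = DDP(torch.nn.Linear(16, 4).cuda(), device_ids=[0])

state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100)
model.register_comm_hook(state, post_localSGD_hook)

optimizer = PostLocalSGDOptimizer(
    optim=torch.optim.SGD(model.parameters(), lr=0.01),
    averager=PeriodicModelAverager(period=4, warmup_steps=100),
)

loss = model(torch.randn(8, 16).cuda()).sum()
loss.backward()
optimizer.step()  # should average the CUDA parameters once the local-SGD phase starts
```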
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @vincentqb @jbschlosser @ngimel
| 2 |
4,392 | 87,694 |
Investigate possibilities of automation for build pipeline
|
module: windows, triaged
|
After the process of maintaining the build pipeline is clear, investigate possibilities for automation and create follow-up tickets to implement them.
@Blackhex suggested that we should have, for example, notifications when a build fails.
Predecessor [ticket](https://github.com/pytorch/pytorch/issues/87693)
| 2 |
4,393 | 87,692 |
Performance issue on Windows with a "benchmark" comparing to Linux and WLS
|
module: windows, triaged
|
### Description
- Started upon discussion on https://pytorch.slack.com/archives/C042FDXSH51/p1664990529414809
- No PyTorch issue has been created yet. We will need a solid reproduction case; it was requested but has not been provided yet.
- The issue was reported on https://pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b3.html with Linknet
- Some benchmarks that can be used:
- https://github.com/pytorch/benchmark
- https://github.com/LukasHedegaard/pytorch-benchmark
- https://github.com/ryujaehun/pytorch-gpu-benchmark
### Acceptance Criteria
- The reported performance issue is identified and analyzed.
- If upstream issue will be created, it's resolved.
### Sub-tasks
- https://github.com/microsoft/ARM-on-Windows-Ecosystem/issues/191
- https://github.com/microsoft/ARM-on-Windows-Ecosystem/issues/192
- https://github.com/microsoft/ARM-on-Windows-Ecosystem/issues/193
- https://github.com/microsoft/ARM-on-Windows-Ecosystem/issues/221
### Documentation
- It seems that there are currently no publicly available PyTorch binaries with debug symbols https://github.com/pytorch/pytorch/issues/11982
| 3 |
4,394 | 87,673 |
INTERNAL ASSERT FAILED !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_))
|
needs reproduction, triaged, module: TensorIterator
|
### 🐛 Describe the bug
RuntimeError: !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_)) INTERNAL ASSERT FAILED
```
import torch
input_tensor = torch.rand(10, 10)
grad_tensor = torch.rand(10, 10)
index_tensor = torch.randint(low=0, high=10, size=(10,))
input_tensor.addcdiv(1, grad_tensor, index_tensor)
print(input_tensor)
```
```
RuntimeError: !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_)) INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1659484806139/work/aten/src/ATen/TensorIterator.cpp":405, please report a bug to PyTorch.
```
### Versions
PyTorch 1.12.1
| 3 |
4,395 | 87,642 |
`libtorch_cpu.so` is exposing some LLVM symbols
|
module: build, module: cpp-extensions, triaged
|
### 🐛 Describe the bug
#63272 had hidden LLVM symbols by making LLVM a `PRIVATE` dependency of `torch_cpu` in `caffe2/CMakeLists.txt`, but `libtorch_cpu.so` is still exposing some LLVM symbols.
```shell
nm -D ./build/lib/libtorch_cpu.so | grep llvm
000000000ca82122 B FLAGS_torch_jit_llvm_use_fast_intrinsics
0000000009fe4650 T llvm_orc_deregisterEHFrameSectionWrapper
0000000009fe4720 T llvm_orc_registerEHFrameSectionWrapper
000000000a048800 T llvm_regcomp
000000000a049120 T llvm_regerror
000000000a04bba0 T llvm_regexec
000000000a04cfb0 T llvm_regfree
000000000a04d030 T llvm_strlcpy
```
This issue doesn't affect PyTorch directly but can affect a [PyTorch C++ extension](https://pytorch.org/tutorials/advanced/cpp_extension.html) if it links to another library that also links to LLVM.
Even in that case, workarounds can be used, so it's probably not a high priority issue at this point. Thanks!
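For completeness, one workaround of the kind alluded to above; this is an assumption on my part (the thread does not spell one out), and RTLD_DEEPBIND is Linux/glibc-specific and can itself cause subtle issues:
```
import os
import sys

# Load the extension so that it prefers the LLVM symbols it was linked
# against over the ones already exported by libtorch_cpu.so.
flags = sys.getdlopenflags()
sys.setdlopenflags(flags | os.RTLD_DEEPBIND)
import my_llvm_based_extension  # noqa: E402  hypothetical extension that also links LLVM
sys.setdlopenflags(flags)
```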
### Versions
Master branch
cc @malfet @seemethere @zou3519 @ZhennanQin
| 0 |
4,396 | 87,634 |
Add tests for ProcessGroup cpp extensions
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
We have no test case covering this tutorial (https://pytorch.org/tutorials/intermediate/process_group_cpp_extension_tutorial.html#customize-process-group-backends-using-cpp-extensions) to ensure that ProcessGroup cpp extensions are supported. Creating this issue to track progress on this effort.
This ask is to prevent future regressions such as https://github.com/pytorch/pytorch/issues/87173
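A sketch of the kind of regression test this could be, assuming the tutorial's dummy backend is built and importable as `dummy_collectives` (the import name and the register-on-import behavior come from the tutorial; the rest is illustrative):
```
import os
import unittest

import torch
import torch.distributed as dist


class TestCppProcessGroupExtension(unittest.TestCase):
    def test_dummy_backend_allreduce(self):
        import dummy_collectives  # noqa: F401  importing registers the "dummy" backend

        os.environ.setdefault("MASTER_ADDR", "localhost")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("dummy", rank=0, world_size=1)
        try:
            x = torch.ones(4)
            dist.all_reduce(x)
            self.assertEqual(list(x.shape), [4])
        finally:
            dist.destroy_process_group()


if __name__ == "__main__":
    unittest.main()
```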
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu
| 0 |
4,397 | 93,579 |
torchdynamo.export doesn't work with data-dependent control flow
|
triaged, enhancement
|
### 🐛 Describe the bug
```
import torch
import torch._dynamo as torchdynamo

def f(x):
    x_bool = torch.isfinite(x)
    if x.all():
        return x.cos()
    return x.sin()

torchdynamo.export(f, torch.ones(6, 4), aten_graph=True, tracing_mode="symbolic")
```
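As a side note, the graph break can be avoided by expressing the branch as tensor ops; this is only a user-side workaround sketch, not a fix for export itself:
```
import torch

def f_no_branch(x):
    # Both branches are computed; torch.where selects between them, so there
    # is no Python-level data-dependent control flow for dynamo to trace.
    return torch.where(x.all(), x.cos(), x.sin())
```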
### Error logs
torch._dynamo.exc.Unsupported: generic_jump TensorVariable()
### Minified repro
_No response_
| 2 |
4,398 | 93,576 |
eca_botnext26ts_256 fails with TORCHDYNAMO_DYNAMIC_SHAPES=1: sympy infinite loop
|
triaged, bug, oncall: pt2
|
### 🐛 Describe the bug
```
File "/raid/ezyang/pytorch-scratch2/torch/nn/modules/module.py", line 1363, in _call_impl
return forward_call(*input, **kwargs)
File "<eval_with_key>.54", line 7, in forward
_unsafe_view = torch.ops.aten._unsafe_view.default(clone, [2048, 16]); clone = None
File "/raid/ezyang/pytorch-scratch2/torch/_ops.py", line 257, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: shape '[2048, 16]' is invalid for input of size 8192
```
repro
```
TORCHDYNAMO_DYNAMIC_SHAPES=1 time python benchmarks/dynamo/timm_models.py --accuracy --backend aot_eager --training --only eca_botnext26ts_256
```
### Error logs
does anyone even look at these
### Minified repro
Minifier did not work
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
4,399 | 93,575 |
Minifier should try forward only, and if it fails set fwd_only=True
|
triaged, bug, oncall: pt2
|
### 🐛 Describe the bug
Self-explanatory.
### Error logs
_No response_
### Minified repro
_No response_
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
4,400 | 93,574 |
Minifier should save config variables so you don't have to replicate env vars
|
triaged, bug, oncall: pt2
|
### 🐛 Describe the bug
Today you have to remember to run `minifier_launcher.py` with the same config vars / env vars as the original script. The minifier should record all configs explicitly (maybe only those that changed).
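A sketch of what "record the configs" could look like; the snapshot/diff helper below is hypothetical, and nothing like it ships with the minifier today:
```
import torch._dynamo.config as dynamo_config

_SIMPLE_TYPES = (bool, int, float, str, type(None))


def snapshot(module):
    return {
        name: value
        for name, value in vars(module).items()
        if not name.startswith("_") and isinstance(value, _SIMPLE_TYPES)
    }


# Taken once at import time, before the user script mutates anything.
_DEFAULTS = snapshot(dynamo_config)


def changed_config_lines():
    """Lines to emit at the top of minifier_launcher.py."""
    current = snapshot(dynamo_config)
    return [
        f"torch._dynamo.config.{name} = {value!r}"
        for name, value in current.items()
        if _DEFAULTS.get(name) != value
    ]
```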
### Error logs
_No response_
### Minified repro
_No response_
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |