Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
5,401 | 80,017 |
RPC init fails and crashes when world_size is greater than 18
|
oncall: distributed, triaged, module: rpc
|
### 🐛 Describe the bug
Hi! When I use RPC, I find that when the `world_size` is greater than 18, the program will crash. I've tested it on two servers and got the same result.
Program to reproduce:
```python
import torch
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc


def init_process(rank, tot_processes):
    print(f'here is rank {rank}', flush=True)
    rpc_backend_options = rpc.TensorPipeRpcBackendOptions(
        init_method=f'tcp://localhost:52521'
    )
    rpc.init_rpc(
        name=f'test_{rank}', rank=rank, world_size=tot_processes,
        rpc_backend_options=rpc_backend_options,
    )
    print(f'rank {rank} init successfully', flush=True)
    rpc.shutdown()


def main() -> None:
    mp.set_start_method('spawn')
    tot_processes = 19
    print(f'spawning {tot_processes} processes...')
    mp.spawn(
        fn=init_process,
        args=(tot_processes, ),
        nprocs=tot_processes,
        join=True,
    )


if __name__ == '__main__':
    main()
```
When `tot_processes = 18`, the program can exit without any error. But if `tot_processes = 19`, it will crash as below.
```shell
❯ python rpc.py
spawning 19 processes...
here is rank 7
here is rank 12
here is rank 4
here is rank 11
here is rank 2
here is rank 6
here is rank 15
here is rank 10
here is rank 9
here is rank 14
here is rank 8
here is rank 5
here is rank 13
here is rank 17
here is rank 3
here is rank 1
here is rank 16
here is rank 0
here is rank 18
[W tensorpipe_agent.cpp:863] RPC agent for test_12 encountered error when sending outgoing request #0 to test_0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:863] RPC agent for test_5 encountered error when sending outgoing request #0 to test_0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:863] RPC agent for test_16 encountered error when sending outgoing request #0 to test_0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:863] RPC agent for test_15 encountered error when sending outgoing request #0 to test_0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:863] RPC agent for test_3 encountered error when sending outgoing request #0 to test_0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:863] RPC agent for test_13 encountered error when sending outgoing request #0 to test_0: connect: Resource temporarily unavailable (this error originated at tensorpipe/common/socket.cc:114)
[W tensorpipe_agent.cpp:492] RPC agent for test_0 encountered error when accepting incoming pipe: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_16: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_1: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_3: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_9: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_11: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_2: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_18: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_4: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_17: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_14: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_8: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_7: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_12: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_6: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_5: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_10: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_15: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:682] RPC agent for test_0 encountered error when reading incoming request from test_13: sendmsg: Broken pipe (this error originated at tensorpipe/common/socket.h:105)
[W tensorpipe_agent.cpp:863] RPC agent for test_6 encountered error when sending outgoing request #0 to test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_8 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_11 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:863] RPC agent for test_10 encountered error when sending outgoing request #0 to test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_14 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_4 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_7 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_2 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_17 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:863] RPC agent for test_1 encountered error when sending outgoing request #0 to test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_18 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:887] RPC agent for test_9 encountered error when reading incoming response from test_0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
Traceback (most recent call last):
File "/home/ubuntu/myenv/experiment/ppo-new/rpc.py", line 30, in <module>
main()
File "/home/ubuntu/myenv/experiment/ppo-new/rpc.py", line 22, in main
mp.spawn(
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
while not context.join():
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/ubuntu/myenv/experiment/ppo-new/rpc.py", line 11, in init_process
rpc.init_rpc(
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/__init__.py", line 190, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/__init__.py", line 224, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/backend_registry.py", line 97, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/backend_registry.py", line 305, in _tensorpipe_init_backend_handler
api._all_gather(None, timeout=rpc_backend_options.rpc_timeout)
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/api.py", line 204, in _all_gather
rpc_sync(
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/api.py", line 77, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/myenv/lib/python3.10/site-packages/torch/distributed/rpc/api.py", line 767, in rpc_sync
return fut.wait()
RuntimeError: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
```
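This is not a confirmed fix, but given the `connect: Resource temporarily unavailable` warnings, two things that might be worth trying while debugging are raising the process's open-file limit (`ulimit -n`) and giving the TensorPipe agent more headroom through its documented options. A minimal sketch of the latter, with guessed values rather than recommendations:
```python
import torch.distributed.rpc as rpc

# Variant of the repro's backend options; num_worker_threads and rpc_timeout
# are real TensorPipeRpcBackendOptions arguments, but the values here are guesses.
rpc_backend_options = rpc.TensorPipeRpcBackendOptions(
    init_method='tcp://localhost:52521',
    num_worker_threads=32,  # default is 16
    rpc_timeout=120,        # seconds; default is 60
)
```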
### Versions
```shell
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1045-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.11.0 py3.10_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @jjlilley @mrzzd
| 15 |
5,402 | 80,016 |
[ONNX] Input node deleted when converting a Conditional random field model
|
module: onnx, triaged
|
### 🐛 Describe the bug
CRF model py:
```python
import torch
import torch.nn as nn
from typing import List, Optional
class CRF(nn.Module):
"""Conditional random field.
This module implements a conditional random field [LMP01]_. The forward computation
of this class computes the log likelihood of the given sequence of tags and
emission score tensor. This class also has `~CRF.decode` method which finds
the best tag sequence given an emission score tensor using `Viterbi algorithm`_.
Args:
num_tags: Number of tags.
batch_first: Whether the first dimension corresponds to the size of a minibatch.
Attributes:
start_transitions (`~torch.nn.Parameter`): Start transition score tensor of size
``(num_tags,)``.
end_transitions (`~torch.nn.Parameter`): End transition score tensor of size
``(num_tags,)``.
transitions (`~torch.nn.Parameter`): Transition score tensor of size
``(num_tags, num_tags)``.
.. [LMP01] Lafferty, J., McCallum, A., Pereira, F. (2001).
"Conditional random fields: Probabilistic models for segmenting and
labeling sequence data". *Proc. 18th International Conf. on Machine
Learning*. Morgan Kaufmann. pp. 282–289.
.. _Viterbi algorithm: https://en.wikipedia.org/wiki/Viterbi_algorithm
"""
def __init__(self, num_tags: int, batch_first: bool = False) -> None:
if num_tags <= 0:
raise ValueError(f'invalid number of tags: {num_tags}')
super().__init__()
self.num_tags = num_tags
self.batch_first = batch_first
self.start_transitions = nn.Parameter(torch.empty(num_tags))
self.end_transitions = nn.Parameter(torch.empty(num_tags))
self.transitions = nn.Parameter(torch.empty(num_tags, num_tags))
self.reset_parameters()
def reset_parameters(self) -> None:
"""Initialize the transition parameters.
The parameters will be initialized randomly from a uniform distribution
between -0.1 and 0.1.
"""
nn.init.uniform_(self.start_transitions, -0.1, 0.1)
nn.init.uniform_(self.end_transitions, -0.1, 0.1)
nn.init.uniform_(self.transitions, -0.1, 0.1)
def __repr__(self) -> str:
return f'{self.__class__.__name__}(num_tags={self.num_tags})'
def forward(self,
emissions: torch.Tensor,
mask: Optional[torch.ByteTensor] = None,
tags: torch.LongTensor = None,
reduction: str = 'mean',
nbest: Optional[int] = None,
pad_tag: Optional[int] = None) -> torch.Tensor:
"""Compute the conditional log likelihood of a sequence of tags given emission scores.
Args:
emissions (`~torch.Tensor`): Emission score tensor of size
``(seq_length, batch_size, num_tags)`` if ``batch_first`` is ``False``,
``(batch_size, seq_length, num_tags)`` otherwise.
tags (`~torch.LongTensor`): Sequence of tags tensor of size
``(seq_length, batch_size)`` if ``batch_first`` is ``False``,
``(batch_size, seq_length)`` otherwise.
mask (`~torch.ByteTensor`): Mask tensor of size ``(seq_length, batch_size)``
if ``batch_first`` is ``False``, ``(batch_size, seq_length)`` otherwise.
reduction: Specifies the reduction to apply to the output:
``none|sum|mean|token_mean``. ``none``: no reduction will be applied.
``sum``: the output will be summed over batches. ``mean``: the output will be
averaged over batches. ``token_mean``: the output will be averaged over tokens.
nbest (`int`): Number of most probable paths for each sequence
pad_tag (`int`): Tag at padded positions. Often input varies in length and
the length will be padded to the maximum length in the batch. Tags at
the padded positions will be assigned with a padding tag, i.e. `pad_tag`
Returns:
`~torch.Tensor`: The log likelihood. This will have size ``(batch_size,)`` if
reduction is ``none``, ``()`` otherwise.
"""
if tags is not None:
# training
if reduction not in ('none', 'sum', 'mean', 'token_mean'):
raise ValueError(f'invalid reduction: {reduction}')
if mask is None:
mask = torch.ones_like(tags, dtype=torch.uint8, device=tags.device)
if mask.dtype != torch.uint8:
mask = mask.byte()
self._validate(emissions, tags=tags, mask=mask)
if self.batch_first:
emissions = emissions.transpose(0, 1)
tags = tags.transpose(0, 1)
mask = mask.transpose(0, 1)
# shape: (batch_size,)
numerator = self._compute_score(emissions, tags, mask)
# shape: (batch_size,)
denominator = self._compute_normalizer(emissions, mask)
# shape: (batch_size,)
llh = numerator - denominator
crf_loss = None
if reduction == 'none':
crf_loss = llh
elif reduction == 'sum':
crf_loss = llh.sum()
elif reduction == 'mean':
crf_loss = llh.mean()
else:
crf_loss = llh.sum() / mask.float().sum()
return crf_loss
else:
# predict
predict_paths = self.decode(emissions, mask, nbest=nbest, pad_tag=pad_tag)
return predict_paths
def decode(self,
emissions: torch.Tensor,
mask: Optional[torch.ByteTensor] = None,
nbest: Optional[int] = None,
pad_tag: Optional[int] = None) -> torch.Tensor:
"""Find the most likely tag sequence using Viterbi algorithm.
Args:
emissions (`~torch.Tensor`): Emission score tensor of size
``(seq_length, batch_size, num_tags)`` if ``batch_first`` is ``False``,
``(batch_size, seq_length, num_tags)`` otherwise.
mask (`~torch.ByteTensor`): Mask tensor of size ``(seq_length, batch_size)``
if ``batch_first`` is ``False``, ``(batch_size, seq_length)`` otherwise.
nbest (`int`): Number of most probable paths for each sequence
pad_tag (`int`): Tag at padded positions. Often input varies in length and
the length will be padded to the maximum length in the batch. Tags at
the padded positions will be assigned with a padding tag, i.e. `pad_tag`
Returns:
A PyTorch tensor of the best tag sequence for each batch of shape
(nbest, batch_size, seq_length)
"""
if nbest is None:
nbest = 1
if mask is None:
mask = torch.ones(emissions.shape[:2], dtype=torch.uint8,
device=emissions.device)
if mask.dtype != torch.uint8:
mask = mask.byte()
self._validate(emissions, mask=mask)
if self.batch_first:
emissions = emissions.transpose(0, 1)
mask = mask.transpose(0, 1)
predict_paths = None
if nbest == 1:
predict_paths = self._viterbi_decode(emissions, mask, pad_tag).unsqueeze(0)
else:
predict_paths = self._viterbi_decode_nbest(emissions, mask, nbest, pad_tag)
return predict_paths
def _validate(self, emissions: torch.Tensor,
tags: Optional[torch.LongTensor] = None,
mask: Optional[torch.ByteTensor] = None) -> None:
input_shape = emissions.shape
if len(input_shape) != 3:
raise ValueError(f'emissions must have dimension of 3, got {len(input_shape)}')
if input_shape[2] != self.num_tags:
raise ValueError(
f'expected last dimension of emissions is {self.num_tags}, '
f'got {input_shape[2]}')
if tags is not None:
tags_shape = tags.shape
if input_shape[0:2] != tags_shape:
raise ValueError(
'the first two dimensions of emissions and tags must match, '
f'got {tuple(input_shape[:2])} and {tuple(tags_shape)}')
if mask is not None:
mask_shape = mask.shape
if input_shape[:2] != mask_shape:
raise ValueError(
'the first two dimensions of emissions and mask must match, '
f'got {tuple(input_shape[:2])} and {tuple(mask_shape)}')
no_empty_seq = not self.batch_first and mask[0].all()
no_empty_seq_bf = self.batch_first and mask[:, 0].all()
if not no_empty_seq and not no_empty_seq_bf:
raise ValueError('mask of the first timestep must all be on')
# @torch.jit.script
def _compute_score(self, emissions: torch.Tensor,
tags: torch.LongTensor,
mask: torch.ByteTensor) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# tags: (seq_length, batch_size)
# mask: (seq_length, batch_size)
seq_length, batch_size = tags.shape
mask = mask.float()
# Start transition score and first emission
# shape: (batch_size,)
score = self.start_transitions[tags[0]]
score += emissions[0, torch.arange(batch_size), tags[0]]
for i in range(1, seq_length):
# Transition score to next tag, only added if next timestep is valid (mask == 1)
# shape: (batch_size,)
score += self.transitions[tags[i - 1], tags[i]] * mask[i]
# Emission score for next tag, only added if next timestep is valid (mask == 1)
# shape: (batch_size,)
score += emissions[i, torch.arange(batch_size), tags[i]] * mask[i]
# End transition score
# shape: (batch_size,)
seq_ends = mask.long().sum(dim=0) - 1
# shape: (batch_size,)
last_tags = tags[seq_ends, torch.arange(batch_size)]
# shape: (batch_size,)
score += self.end_transitions[last_tags]
return score
# @torch.jit.script
def _compute_normalizer(self, emissions: torch.Tensor,
mask: torch.ByteTensor) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# mask: (seq_length, batch_size)
seq_length = emissions.size(0)
# Start transition score and first emission; score has size of
# (batch_size, num_tags) where for each batch, the j-th column stores
# the score that the first timestep has tag j
# shape: (batch_size, num_tags)
score = self.start_transitions + emissions[0]
for i in range(1, seq_length):
# Broadcast score for every possible next tag
# shape: (batch_size, num_tags, 1)
broadcast_score = score.unsqueeze(2)
# Broadcast emission score for every possible current tag
# shape: (batch_size, 1, num_tags)
broadcast_emissions = emissions[i].unsqueeze(1)
# Compute the score tensor of size (batch_size, num_tags, num_tags) where
# for each sample, entry at row i and column j stores the sum of scores of all
# possible tag sequences so far that end with transitioning from tag i to tag j
# and emitting
# shape: (batch_size, num_tags, num_tags)
next_score = broadcast_score + self.transitions + broadcast_emissions
# Sum over all possible current tags, but we're in score space, so a sum
# becomes a log-sum-exp: for each sample, entry i stores the sum of scores of
# all possible tag sequences so far, that end in tag i
# shape: (batch_size, num_tags)
next_score = torch.logsumexp(next_score, dim=1)
# Set score to the next score if this timestep is valid (mask == 1)
# shape: (batch_size, num_tags)
score = torch.where(mask[i].unsqueeze(1).bool(), next_score, score)
# End transition score
# shape: (batch_size, num_tags)
score += self.end_transitions
# Sum (log-sum-exp) over all possible tags
# shape: (batch_size,)
return torch.logsumexp(score, dim=1)
# @torch.jit.script
def _viterbi_decode(self, emissions: torch.FloatTensor,
mask: torch.ByteTensor,
pad_tag: Optional[int] = None) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# mask: (seq_length, batch_size)
# return: (batch_size, seq_length)
if pad_tag is None:
pad_tag = 0
device = emissions.device
seq_length, batch_size = mask.shape
# Start transition and first emission
# shape: (batch_size, num_tags)
score = self.start_transitions + emissions[0]
history_idx = torch.zeros((seq_length, batch_size, self.num_tags),
dtype=torch.long, device=device)
oor_idx = torch.zeros((batch_size, self.num_tags),
dtype=torch.long, device=device)
oor_tag = torch.full((seq_length, batch_size), pad_tag,
dtype=torch.long, device=device)
# - score is a tensor of size (batch_size, num_tags) where for every batch,
# value at column j stores the score of the best tag sequence so far that ends
# with tag j
# - history_idx saves where the best tags candidate transitioned from; this is used
# when we trace back the best tag sequence
# - oor_idx saves the best tags candidate transitioned from at the positions
# where mask is 0, i.e. out of range (oor)
# Viterbi algorithm recursive case: we compute the score of the best tag sequence
# for every possible next tag
for i in range(1, seq_length):
# Broadcast viterbi score for every possible next tag
# shape: (batch_size, num_tags, 1)
broadcast_score = score.unsqueeze(2)
# Broadcast emission score for every possible current tag
# shape: (batch_size, 1, num_tags)
broadcast_emission = emissions[i].unsqueeze(1)
# Compute the score tensor of size (batch_size, num_tags, num_tags) where
# for each sample, entry at row i and column j stores the score of the best
# tag sequence so far that ends with transitioning from tag i to tag j and emitting
# shape: (batch_size, num_tags, num_tags)
next_score = broadcast_score + self.transitions + broadcast_emission
# Find the maximum score over all possible current tag
# shape: (batch_size, num_tags)
next_score, indices = next_score.max(dim=1)
# Set score to the next score if this timestep is valid (mask == 1)
# and save the index that produces the next score
# shape: (batch_size, num_tags)
score = torch.where(mask[i].unsqueeze(-1).bool(), next_score, score)
indices = torch.where(mask[i].unsqueeze(-1).bool(), indices, oor_idx)
history_idx[i - 1] = indices
# End transition score
# shape: (batch_size, num_tags)
end_score = score + self.end_transitions
_, end_tag = end_score.max(dim=1)
# shape: (batch_size,)
seq_ends = mask.long().sum(dim=0) - 1
# insert the best tag at each sequence end (last position with mask == 1)
history_idx = history_idx.transpose(1, 0).contiguous()
history_idx.scatter_(1, seq_ends.view(-1, 1, 1).expand(-1, 1, self.num_tags),
end_tag.view(-1, 1, 1).expand(-1, 1, self.num_tags))
history_idx = history_idx.transpose(1, 0).contiguous()
# The most probable path for each sequence
best_tags_arr = torch.zeros((seq_length, batch_size),
dtype=torch.long, device=device)
best_tags = torch.zeros(batch_size, 1, dtype=torch.long, device=device)
for idx in range(seq_length - 1, -1, -1):
best_tags = torch.gather(history_idx[idx], 1, best_tags)
best_tags_arr[idx] = best_tags.data.view(batch_size)
return torch.where(mask.bool(), best_tags_arr, oor_tag).transpose(0, 1)
# @torch.jit.script
def _viterbi_decode_nbest(self, emissions: torch.FloatTensor,
mask: torch.ByteTensor,
nbest: int,
pad_tag: Optional[int] = None) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# mask: (seq_length, batch_size)
# return: (nbest, batch_size, seq_length)
if pad_tag is None:
pad_tag = 0
device = emissions.device
seq_length, batch_size = mask.shape
# Start transition and first emission
# shape: (batch_size, num_tags)
score = self.start_transitions + emissions[0]
history_idx = torch.zeros((seq_length, batch_size, self.num_tags, nbest),
dtype=torch.long, device=device)
oor_idx = torch.zeros((batch_size, self.num_tags, nbest),
dtype=torch.long, device=device)
oor_tag = torch.full((seq_length, batch_size, nbest), pad_tag,
dtype=torch.long, device=device)
# + score is a tensor of size (batch_size, num_tags) where for every batch,
# value at column j stores the score of the best tag sequence so far that ends
# with tag j
# + history_idx saves where the best tags candidate transitioned from; this is used
# when we trace back the best tag sequence
# - oor_idx saves the best tags candidate transitioned from at the positions
# where mask is 0, i.e. out of range (oor)
# Viterbi algorithm recursive case: we compute the score of the best tag sequence
# for every possible next tag
for i in range(1, seq_length):
if i == 1:
broadcast_score = score.unsqueeze(-1)
broadcast_emission = emissions[i].unsqueeze(1)
# shape: (batch_size, num_tags, num_tags)
next_score = broadcast_score + self.transitions + broadcast_emission
else:
broadcast_score = score.unsqueeze(-1)
broadcast_emission = emissions[i].unsqueeze(1).unsqueeze(2)
# shape: (batch_size, num_tags, nbest, num_tags)
next_score = broadcast_score + self.transitions.unsqueeze(1) + broadcast_emission
# Find the top `nbest` maximum score over all possible current tag
# shape: (batch_size, nbest, num_tags)
next_score, indices = next_score.view(batch_size, -1, self.num_tags).topk(nbest, dim=1)
if i == 1:
score = score.unsqueeze(-1).expand(-1, -1, nbest)
indices = indices * nbest
# convert to shape: (batch_size, num_tags, nbest)
next_score = next_score.transpose(2, 1)
indices = indices.transpose(2, 1)
# Set score to the next score if this timestep is valid (mask == 1)
# and save the index that produces the next score
# shape: (batch_size, num_tags, nbest)
score = torch.where(mask[i].unsqueeze(-1).unsqueeze(-1).bool(), next_score, score)
indices = torch.where(mask[i].unsqueeze(-1).unsqueeze(-1).bool(), indices, oor_idx)
history_idx[i - 1] = indices
# End transition score shape: (batch_size, num_tags, nbest)
end_score = score + self.end_transitions.unsqueeze(-1)
_, end_tag = end_score.view(batch_size, -1).topk(nbest, dim=1)
# shape: (batch_size,)
seq_ends = mask.long().sum(dim=0) - 1
# insert the best tag at each sequence end (last position with mask == 1)
history_idx = history_idx.transpose(1, 0).contiguous()
history_idx.scatter_(1, seq_ends.view(-1, 1, 1, 1).expand(-1, 1, self.num_tags, nbest),
end_tag.view(-1, 1, 1, nbest).expand(-1, 1, self.num_tags, nbest))
history_idx = history_idx.transpose(1, 0).contiguous()
# The most probable path for each sequence
best_tags_arr = torch.zeros((seq_length, batch_size, nbest),
dtype=torch.long, device=device)
best_tags = torch.arange(nbest, dtype=torch.long, device=device) \
.view(1, -1).expand(batch_size, -1)
for idx in range(seq_length - 1, -1, -1):
best_tags = torch.gather(history_idx[idx].view(batch_size, -1), 1, best_tags)
best_tags_arr[idx] = best_tags.data.view(batch_size, -1) // nbest
return torch.where(mask.unsqueeze(-1).bool(), best_tags_arr, oor_tag).permute(2, 1, 0)
```
Convert script:
```python
import unittest

import numpy as np
import torch
import onnxruntime

from src.models.common.crf import CRF


class CrfModelTest(unittest.TestCase):
    def test_crf_to_onnx_model(self):
        crf_model = CRF(num_tags=25, batch_first=True)
        torch_model_path = "/root/NLP_Meta/data/resume_parsing_new/segmentation/20220601_1860/pytorch_crf_model.bin"
        # crf_model.load_state_dict(torch.load(torch_model_path, map_location="cpu"))
        dummy_input = torch.randn([1, 64, 25])
        dummy_mask = torch.ones([1, 64], dtype=torch.uint8)
        input_names = ["emissions", "mask"]
        output_names = ["predict_paths"]
        dynamic_axes = {
            "emissions": {0: "batch", 1: "sentence", 2: "hidden_size"},
            "mask": {0: "batch", 1: "sentence"},
            "predict_paths": {0: "path_num", 1: "batch", 2: "sentence"}
        }
        crf_onnx_model_path = "/root/NLP_Meta/data/resume_parsing_new/segmentation/onnx/20220601_1860_segment_crf.onnx"
        torch.onnx.export(crf_model, ({"emissions": dummy_input, "mask": dummy_mask}, ), crf_onnx_model_path,
                          verbose=True,
                          opset_version=12,
                          input_names=input_names,
                          output_names=output_names,
                          dynamic_axes=dynamic_axes,
                          )
        print("Convert to onnx model complete.")

    def test_crf_onnx_run(self):
        crf_onnx_model_path = "/root/NLP_Meta/data/resume_parsing_new/segmentation/onnx/20220601_1860_segment_crf.onnx"
        sess = onnxruntime.InferenceSession(crf_onnx_model_path, providers=["CPUExecutionProvider"])
        dummy_input = np.random.randn(1, 64, 25).astype(np.float32)
        dummy_mask = np.ones((1, 64), dtype=np.float32)
        outputs = sess.run(output_names=["predict_paths"], input_feed={"emissions": dummy_input})
        print(outputs[0], outputs[0].shape)
```
Part graph:
```
graph(%mask : Byte(*, *, strides=[64, 1], requires_grad=0, device=cpu),
%1240 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1241 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1242 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1243 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1244 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1245 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1246 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1247 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1248 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1249 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1250 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1251 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1252 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1253 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1254 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1255 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1256 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1257 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1258 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1259 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
%1260 : Long(1, strides=[1], requires_grad=0, device=cpu),
%1261 : Long(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
```
Conversion success only have 'mask' input node, don't have 'emissions' input node?
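For reference, here is a hedged sketch of an alternative export call that passes the example inputs as a plain positional tuple. `torch.onnx.export` treats a dict in the last position of `args` as keyword arguments, so this changes how the inputs are bound during tracing; it is not guaranteed to bring back the missing `emissions` node, just something worth comparing (the output path below is a placeholder):
```python
import torch
from src.models.common.crf import CRF

crf_model = CRF(num_tags=25, batch_first=True)
dummy_input = torch.randn([1, 64, 25])
dummy_mask = torch.ones([1, 64], dtype=torch.uint8)

# Positional tuple of tensors instead of a dict wrapped in a tuple.
torch.onnx.export(
    crf_model,
    (dummy_input, dummy_mask),
    "crf.onnx",  # placeholder path
    opset_version=12,
    input_names=["emissions", "mask"],
    output_names=["predict_paths"],
    dynamic_axes={
        "emissions": {0: "batch", 1: "sentence", 2: "hidden_size"},
        "mask": {0: "batch", 1: "sentence"},
        "predict_paths": {0: "path_num", 1: "batch", 2: "sentence"},
    },
)
```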
### Versions
torch version: 1.10.0
onnx version: 1.11.0
| 6 |
5,403 | 80,012 |
static builds are broken by MKL_DNN
|
module: build, triaged, module: regression, module: third_party
|
### 🐛 Describe the bug
Building with `BUILD_SHARED_LIBS=OFF` will fail if MKL / MKL-DNN is enabled.
```shell
USE_CUDA=OFF BUILD_SHARED_LIBS=OFF USE_TENSORPIPE=OFF python setup.py install
```
TensorPipe has another issue with static builds here: https://github.com/pytorch/tensorpipe/issues/449
It errors out like:
```
CMake Error:
Running
'/usr/bin/ninja' '-C' '/home/anush/github/pytorch/build' '-t' 'recompact'
failed with:
ninja: error: build.ninja:63037: bad $-escape (literal $ must be written as $$)
```
This is because, when MKL is enabled in a static build, wrong targets are generated in the Ninja file:
```
$<TARGET_FILE:dnnl_graph>
```
### Versions
Top of main
cc @malfet @seemethere
| 1 |
5,404 | 80,007 |
when forward uses **kwargs, how to construct the example_inputs parameter in jit.trace?
|
oncall: jit
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, **kwargs):
        # kwargs contains dozens of tensors
        pass

model = Model()
trace_model = torch.jit.trace(model, example_inputs=??)
```
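One common workaround (a sketch, not an official recipe: the wrapper class, the kwarg names, and the toy `forward` body below are made up for illustration) is to trace a thin adapter module that has a fixed positional signature and rebuilds the kwargs dict internally:
```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, **kwargs):
        # Toy stand-in for the real model: just sum all input tensors.
        return sum(kwargs.values())

class TraceWrapper(nn.Module):
    """Adapter with a fixed positional signature so torch.jit.trace can record it."""
    def __init__(self, model, keys):
        super().__init__()
        self.model = model
        self.keys = keys  # fixed ordering of the keyword-argument names

    def forward(self, *tensors):
        return self.model(**dict(zip(self.keys, tensors)))

model = Model()
keys = ["input_a", "input_b"]  # hypothetical kwarg names
example_inputs = (torch.randn(2, 3), torch.randn(2, 3))
trace_model = torch.jit.trace(TraceWrapper(model, keys), example_inputs)
```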
### Versions
PyTorch version: 1.6.0+cu101
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.3 LTS (x86_64)
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.26
Python version: 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1.3.2.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 440.44
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] torch==1.6.0+cu101
[pip3] torchvision==0.7.0+cu101
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] numpy 1.20.1 py38h93e21f0_0
[conda] numpy-base 1.20.1 py38h7d8b39e_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
| 2 |
5,405 | 79,997 |
Comprehensive documentation for Tensor indexing?
|
module: docs, triaged, module: advanced indexing
|
### 📚 The doc issue
Is there a comprehensive doc page for Tensor indexing, like there is for [numpy.ndarray](https://numpy.org/doc/stable/user/basics.indexing.html)?
I am facing some inconsistencies, particularly when using combined advanced boolean indexing and basic indexing, where the behaviour is clearly documented on the numpy side of things, but I am getting `IndexError`s in pytorch.
It would be great to know to what extent Tensor indexing mimics ndarray indexing.
Cheers.
### Suggest a potential alternative/fix
Add a page like https://numpy.org/doc/stable/user/basics.indexing.html#advanced-indexing to the docs.
cc @svekars @holly1238
| 3 |
5,406 | 79,987 |
Deterministic `index_put` on CUDA fails when broadcasting is required
|
triaged, module: advanced indexing
|
### 🐛 Describe the bug
`index_put` with `torch.use_deterministic_algorithms(True)` when broadcasting is implied fails on CUDA while it succeeds on CPU and without `use_deterministic_algorithms`.
Note that while this appears to be related to previous issues, I do not believe this is a duplicate of an existing bug that has _already_ been fixed. See also: #57515, #61032, #61612, #67189, #72053
Report originated from forum post: https://discuss.pytorch.org/t/assigning-tensor-to-multiple-rows-on-gpu/154421
Repro:
```
import torch
torch.use_deterministic_algorithms(True)
x = torch.zeros((5,4), device=torch.device('cuda:0'))
x[[False,True,False,True,True]] = torch.tensor([1.0, 1.0, 1.0, 1.0], device=torch.device('cuda:0'), dtype=torch.float32)
```
I'll open a PR right after the posting of this issue with a band-aid fix, but I'm skeptical of its robustness.
CC @ptrblck @ngimel
### Versions
```
Collecting environment information...
PyTorch version: 1.12.0a0+bd13bc6
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+bd13bc6
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.13.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h1d589f8_2 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+bd13bc6 dev_0 <develop>
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.13.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi
```
| 4 |
5,407 | 79,977 |
[CI] Do we run all cpp tests on CI?
|
module: ci, triaged
|
### 🐛 Describe the bug
We should! But recently, @vors pointed out that there have been regressions not caught by CI before. https://github.com/pytorch/pytorch/pull/79926
Proposal:
We should make sure all cpp tests that are expected to run are run somewhere in CI.
### Versions
CI
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 3 |
5,408 | 79,968 |
PrimTorch burns in static shapes
|
triaged, module: primTorch
|
With the latest version of PyTorch, I've been migrating TorchInductor to prims (see https://github.com/pytorch/torchdynamo/pull/431), and I've noticed many cases where PrimTorch burns in static shapes where the prior _decomps didn't.
The top culprits I see are:
- `prims.broadcast_in_dim` with static integer args
- Tensor constants containing shapes (e.g. numel)
cc @ezyang @mruberry @ngimel @Chillee
| 5 |
5,409 | 79,967 |
[feature request] LazyTensor that provides/loads/computes its contents only upon request to be returned from torch.load
|
feature, triaged, module: lazy
|
### 🚀 The feature, motivation and pitch
Discussed in https://github.com/pytorch/pytorch/issues/29523#issuecomment-1157729352 and originally proposed by @dzhulgakov in https://github.com/pytorch/pytorch/issues/64601#issuecomment-920469049
Scenario: do not load/materialize the state_dict all at once at torch.load time; instead provide some sort of lazy tensors (see also some use cases by @stas00 referenced in the issue above: https://github.com/pytorch/pytorch/issues/29523#issuecomment-1157230391) that:
* know how to lazy_tensor.copyto(param) -> this would be the most efficient and allow GPUDirect and other smart implementations of reading from disk directly to the GPU-placed param
* also support torch.as_tensor(lazy_tensor) to materialize to a dense tensor
Probably this would also include adding a `torch.load(..., lazy=True)` or `map_location='lazy'` option and modifying `load_state_dict` to dispatch the `copy_` call to an instance method `copyto`, or, if that doesn't exist, to call `torch.as_tensor(lazy_tensor)` / use `__array__` before `copy_` (this would allow copying from NumPy arrays or h5py Datasets). A problem may be avoiding calls to `copyto` on objects not known to torch, or somehow detecting this case; currently NumPy does not have an instance method `copyto`, but it may be introduced in the future, which might cause problems.
Maybe access to tensor meta-info (like shape) should stay non-lazy (debatable).
As pointed out by @albanD, this can probably be prototyped in Python, but this pattern of optimized state_dict loading may deserve some design for the non-Python case as well.
I don't know if this has any relation to the existing XLA-like LazyTensor infra in PyTorch.
The implementation would probably differ depending on the underlying storage format: zip or pickle.
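To make the idea concrete, here is a minimal Python sketch of what such a lazy placeholder and load loop could look like. Everything below (the class name, `copyto`, `materialize`, the loader callable) is hypothetical, made up for illustration, and not an existing PyTorch API:
```python
import torch

class LazyStateDictTensor:
    """Hypothetical placeholder that only reads its payload when asked."""

    def __init__(self, loader, shape, dtype):
        self._loader = loader           # zero-arg callable returning a torch.Tensor
        self.shape = torch.Size(shape)  # meta-info kept eagerly, as discussed above
        self.dtype = dtype

    def copyto(self, param):
        # Most efficient path: stream straight into the destination parameter;
        # a real backend could implement this with GPUDirect-style reads.
        with torch.no_grad():
            param.copy_(self._loader().to(param.device, non_blocking=True))

    def materialize(self):
        return self._loader()

    def __array__(self):
        # Lets numpy-style consumers densify the tensor on demand.
        return self.materialize().numpy()


def lazy_load_state_dict(module, state_dict):
    """Sketch of a load loop that prefers copyto() and falls back to copy_()."""
    params = dict(module.named_parameters())
    for name, value in state_dict.items():
        dst = params[name]
        if hasattr(value, "copyto"):
            value.copyto(dst)
        else:
            with torch.no_grad():
                dst.copy_(torch.as_tensor(value))
```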
### Alternatives
_No response_
### Additional context
_No response_
| 11 |
5,410 | 79,895 |
Modify update-viable-strict GHA to use internal version of checkout
|
module: ci, triaged
|
I previously encountered an error regarding workflow permissions (linked [here](https://github.com/pytorch/pytorch-canary/runs/6921418059?check_suite_focus=true)), which was temporarily resolved by using `actions/checkout@v2` instead of our internal version of checking out pytorch; the internal version would be more reliable.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,411 | 79,893 |
Write lint for isGreen
|
module: ci, triaged
|
group workflows together to avoid using an allow list/regex in isGreen method
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,412 | 79,888 |
`CosineAnnealingWarmRestarts` does not update parameters added with `add_param_group`
|
triaged, module: LrScheduler
|
### 🐛 Describe the bug
## Background
The `torch.optim.Optimizer.add_param_group` method is often used with transfer learning in order to unfreeze layers during training. For example, this is the approach used in the PyTorch Lightning fine tuning callbacks:
https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.callbacks.BackboneFinetuning.html#pytorch_lightning.callbacks.BackboneFinetuning
## Issue
When using the `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts` learning rate scheduler, any parameter groups that are added during training are not updated by the learning rate scheduler.
## Minimal example
```
> import collections
> import torch as th
## create a simple model with two layers
> model = th.nn.Sequential(collections.OrderedDict(a=th.nn.Linear(4, 8), b=th.nn.Linear(8, 2)))
## create an optimizer with a single parameter group, the first layer only
> opt = th.optim.Adam([dict(params=model.a.parameters(), lr=0.001),])
## create a learning rate scheduler tied to the optimizer
> sched = th.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)
## initial lr is as expected
> sched.get_last_lr()
[0.001]
## stepping updates the first param group as expected
> sched.step()
> sched.get_last_lr()
[0.0009755282581475768]
## add a new parameter group
> opt.add_param_group(dict(params=model.b.parameters(), lr=0.0001))
## after a single step operation, the lr for the second param group appears
> sched.get_last_lr()
[0.0009045084971874737]
> sched.step()
> sched.get_last_lr()
[0.0007938926261462366, 0.0001]
## after subsequent steps, the lr is not updated for the second parameter group!!
> sched.step()
> sched.get_last_lr()
[0.0006545084971874737, 0.0001]
```
## Insights
The `base_lrs` attribute on the `_LRScheduler` base class is created in the class initializer but is not updated after `add_param_group` is called:
https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L41
This attribute is later used by `CosineAnnealingWarmRestarts` in the `get_lr` method:
https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L1272
## Suggested fix
The `base_lrs` attribute could be changed to a property that is updated if the number of parameter groups does not match its current length.
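Until something like that lands, a hedged manual workaround (a sketch relying on the fact that `base_lrs` is a plain list; not an officially supported pattern) is to register the new group's base learning rate with the scheduler right after calling `add_param_group`:
```python
import collections
import torch as th

model = th.nn.Sequential(collections.OrderedDict(a=th.nn.Linear(4, 8), b=th.nn.Linear(8, 2)))
opt = th.optim.Adam([dict(params=model.a.parameters(), lr=0.001)])
sched = th.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)

# Unfreeze the second layer mid-training.
opt.add_param_group(dict(params=model.b.parameters(), lr=0.0001))

# Manually tell the scheduler about the new group so get_lr() covers it.
sched.base_lrs.append(0.0001)
```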
## Related issues
There seems to be a similar issue in the `ReduceLROnPlateau` scheduler described in issue #62475
Similar issues are also described in #53712
### Versions
Collecting environment information...
PyTorch version: 1.10.2+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 470.103.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.6.3
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==1.5.10
[pip3] segmentation-models-pytorch==0.2.1
[pip3] torch==1.10.2+cu113
[pip3] torchmetrics==0.7.2
[pip3] torchvision==0.11.3+cu113
[conda] Could not collect
| 1 |
5,413 | 79,877 |
test_meta_vstack_cuda_int16 (__main__.TestMetaCUDA) Fails with DEBUG=1
|
module: autograd, triaged, needs research
|
### 🐛 Describe the bug
`python test/test_meta.py -k test_meta_vstack_cuda_int16`
> ERROR: test_meta_vstack_cuda_int16 (__main__.TestMetaCUDA)
> Traceback (most recent call last):
>   File "/scratch/eellison/pytorch/test/test_meta.py", line 325, in run_meta_crossref
>     meta_rs = func(*meta_args, **meta_kwargs)
>   File "/private/home/eellison/anaconda3/lib/python3.8/site-packages/torch/overrides.py", line 1768, in wrapped
>     return f(self, *args, **kwargs)
>   File "/private/home/eellison/anaconda3/lib/python3.8/site-packages/torch/overrides.py", line 1873, in __torch_function__
>     return func(*args, **kwargs)
> RuntimeError: self__storage_saved.value().is_alias_of(result.storage()) INTERNAL ASSERT FAILED at "/raid/eellison/pytorch/torch/csrc/autograd/generated/VariableType_4.cpp":13537, please report a bug to PyTorch.
`test_meta_unsqueeze_cuda_int8` also fails
### Versions
master
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 3 |
5,414 | 79,875 |
A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 11742 (objectdetection)
|
oncall: jit, oncall: mobile
|
### 🐛 Describe the bug
I trained a D2Go model and converted it to torchscript_int8. When I tried to run it on an Android device, it raised the following error.
E/libc++abi: terminating with uncaught exception of type c10::Error: isTuple()INTERNAL ASSERT FAILED at "../../../../src/main/cpp/libtorch_include/arm64-v8a/ATen/core/ivalue_inl.h":1306, please report a bug to PyTorch. Expected Tuple but got String
Exception raised from toTuple at ../../../../src/main/cpp/libtorch_include/arm64-v8a/ATen/core/ivalue_inl.h:1306 (most recent call first):
(no backtrace available)
A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 11742 (objectdetection)
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220615
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.53.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.58
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 495.29.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.2.2
/usr/lib64/libcudnn_adv_infer.so.8.2.2
/usr/lib64/libcudnn_adv_train.so.8.2.2
/usr/lib64/libcudnn_cnn_infer.so.8.2.2
/usr/lib64/libcudnn_cnn_train.so.8.2.2
/usr/lib64/libcudnn_ops_infer.so.8.2.2
/usr/lib64/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==1.6.4
[pip3] torch==1.13.0.dev20220615
[pip3] torchaudio==0.13.0.dev20220615
[pip3] torchmetrics==0.9.1
[pip3] torchvision==0.14.0.dev20220615
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.22.3 py39he7a7128_0
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0.dev20220615 py3.9_cuda11.3_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-lightning 1.6.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 0.13.0.dev20220615 py39_cu113 pytorch-nightly
[conda] torchmetrics 0.9.1 pypi_0 pypi
[conda] torchvision 0.14.0.dev20220615 py39_cu113 pytorch-nightly
| 4 |
5,415 | 79,867 |
All {view}_scatter variants should support all (or most) dtypes
|
triaged, module: functionalization
|
We've been using `{view}_scatter` to replace some in-place operations (e.g. `x.diag().copy_(y)` -> `diagonal_scatter(x, y)`) for the purposes of composite compliance. However, as discovered by @kshitij12345, it turns out that the `{view}_scatter` operations do not have the same coverage as the original operations (they may not support complex dtype, and they may not support complex autodiff even if they support complex dtype).
We should make sure the {view}_scatter operations have the same coverage (dtype, autograd, etc) as their in-place `{view}.copy_(...)` variants.
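For illustration, here is a small example of the pair of APIs being discussed (note that the in-place form below uses `diagonal()`, which returns a view, since `diag()` returns a new tensor). This is a sketch of current behaviour, not of the proposed coverage work:
```python
import torch

x = torch.zeros(3, 3)
y = torch.tensor([1.0, 2.0, 3.0])

# Out-of-place functional form: returns a copy of x whose diagonal holds y.
z = torch.diagonal_scatter(x, y)

# In-place view form it is meant to mirror (mutates x).
x.diagonal().copy_(y)

assert torch.equal(z, x)
```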
cc @bdhirsh
| 1 |
5,416 | 79,853 |
[bazel] [ci] `//:lazy_tests` Could not run 'aten::mul.Tensor' with arguments from the 'Lazy' backend
|
module: cuda, triaged, lazy, module: lazy, module: bazel
|
### 🐛 Describe the bug
I made a run of bazel in CI with GPU enabled and a long timeout.
PR: https://github.com/pytorch/pytorch/pull/79844
The logs contain all the errors that CI hit: https://github.com/pytorch/pytorch/runs/6955918249?check_suite_focus=true
If you search for `Test output for //:lazy_tests`, you will find the following error:
```
2022-06-19T17:05:01.1555929Z ==================== Test output for //:lazy_tests:
2022-06-19T17:05:01.1556291Z Running main() from gmock_main.cc
2022-06-19T17:05:01.1556774Z [==========] Running 33 tests from 9 test suites.
2022-06-19T17:05:01.1557168Z [----------] Global test environment set-up.
2022-06-19T17:05:01.1557549Z [----------] 10 tests from BackendDeviceTest
2022-06-19T17:05:01.1557887Z [ RUN ] BackendDeviceTest.BackendDeviceType
2022-06-19T17:05:01.1558414Z [ OK ] BackendDeviceTest.BackendDeviceType (0 ms)
2022-06-19T17:05:01.1559014Z [ RUN ] BackendDeviceTest.Basic1
2022-06-19T17:05:01.1559325Z [ OK ] BackendDeviceTest.Basic1 (0 ms)
2022-06-19T17:05:01.1559646Z [ RUN ] BackendDeviceTest.Basic2
2022-06-19T17:05:01.1560147Z [ OK ] BackendDeviceTest.Basic2 (0 ms)
2022-06-19T17:05:01.1560793Z [ RUN ] BackendDeviceTest.Basic3
2022-06-19T17:05:01.1561473Z [ OK ] BackendDeviceTest.Basic3 (0 ms)
2022-06-19T17:05:01.1561902Z [ RUN ] BackendDeviceTest.Compare
2022-06-19T17:05:01.1562260Z [ OK ] BackendDeviceTest.Compare (0 ms)
2022-06-19T17:05:01.1562756Z [ RUN ] BackendDeviceTest.Ostream
2022-06-19T17:05:01.1563398Z [ OK ] BackendDeviceTest.Ostream (0 ms)
2022-06-19T17:05:01.1564041Z [ RUN ] BackendDeviceTest.FromAten
2022-06-19T17:05:01.1564657Z [ OK ] BackendDeviceTest.FromAten (0 ms)
2022-06-19T17:05:01.1565255Z [ RUN ] BackendDeviceTest.ToAten
2022-06-19T17:05:01.1565804Z [ OK ] BackendDeviceTest.ToAten (0 ms)
2022-06-19T17:05:01.1566379Z [ RUN ] BackendDeviceTest.GetBackendDevice1
2022-06-19T17:05:01.1567030Z [ OK ] BackendDeviceTest.GetBackendDevice1 (18 ms)
2022-06-19T17:05:01.1567608Z [ RUN ] BackendDeviceTest.GetBackendDevice2
2022-06-19T17:05:01.1567999Z [ OK ] BackendDeviceTest.GetBackendDevice2 (0 ms)
2022-06-19T17:05:01.1568483Z [----------] 10 tests from BackendDeviceTest (18 ms total)
2022-06-19T17:05:01.1568706Z
2022-06-19T17:05:01.1568866Z [----------] 2 tests from CacheTest
2022-06-19T17:05:01.1569308Z [ RUN ] CacheTest.BasicTest
2022-06-19T17:05:01.1569618Z [ OK ] CacheTest.BasicTest (0 ms)
2022-06-19T17:05:01.1569988Z [ RUN ] CacheTest.ShapeCacheTestForDynamicShape
2022-06-19T17:05:01.1570392Z [ OK ] CacheTest.ShapeCacheTestForDynamicShape (0 ms)
2022-06-19T17:05:01.1570832Z [----------] 2 tests from CacheTest (0 ms total)
2022-06-19T17:05:01.1571032Z
2022-06-19T17:05:01.1571215Z [----------] 2 tests from IrUtilTest
2022-06-19T17:05:01.1571489Z [ RUN ] IrUtilTest.BasicTest
2022-06-19T17:05:01.1571793Z [ OK ] IrUtilTest.BasicTest (0 ms)
2022-06-19T17:05:01.1572099Z [ RUN ] IrUtilTest.TestCircle
2022-06-19T17:05:01.1572392Z [ OK ] IrUtilTest.TestCircle (0 ms)
2022-06-19T17:05:01.1572781Z [----------] 2 tests from IrUtilTest (0 ms total)
2022-06-19T17:05:01.1572986Z
2022-06-19T17:05:01.1573163Z [----------] 2 tests from HashTest
2022-06-19T17:05:01.1573425Z [ RUN ] HashTest.Scalar
2022-06-19T17:05:01.1573705Z [ OK ] HashTest.Scalar (0 ms)
2022-06-19T17:05:01.1573996Z [ RUN ] HashTest.Sanity
2022-06-19T17:05:01.1574280Z [ OK ] HashTest.Sanity (0 ms)
2022-06-19T17:05:01.1574621Z [----------] 2 tests from HashTest (0 ms total)
2022-06-19T17:05:01.1574819Z
2022-06-19T17:05:01.1575034Z [----------] 3 tests from PermutationUtilTest
2022-06-19T17:05:01.1575414Z [ RUN ] PermutationUtilTest.TestInversePermutation
2022-06-19T17:05:01.1575856Z [ OK ] PermutationUtilTest.TestInversePermutation (0 ms)
2022-06-19T17:05:01.1576265Z [ RUN ] PermutationUtilTest.TestIsPermutation
2022-06-19T17:05:01.1576663Z [ OK ] PermutationUtilTest.TestIsPermutation (0 ms)
2022-06-19T17:05:01.1577047Z [ RUN ] PermutationUtilTest.TestPermute
2022-06-19T17:05:01.1577400Z [ OK ] PermutationUtilTest.TestPermute (0 ms)
2022-06-19T17:05:01.1577847Z [----------] 3 tests from PermutationUtilTest (0 ms total)
2022-06-19T17:05:01.1578067Z
2022-06-19T17:05:01.1578249Z [----------] 7 tests from ShapeTest
2022-06-19T17:05:01.1578520Z [ RUN ] ShapeTest.Basic1
2022-06-19T17:05:01.1578802Z [ OK ] ShapeTest.Basic1 (0 ms)
2022-06-19T17:05:01.1579087Z [ RUN ] ShapeTest.Basic2
2022-06-19T17:05:01.1579354Z [ OK ] ShapeTest.Basic2 (0 ms)
2022-06-19T17:05:01.1579621Z [ RUN ] ShapeTest.Basic3
2022-06-19T17:05:01.1579901Z [ OK ] ShapeTest.Basic3 (0 ms)
2022-06-19T17:05:01.1580180Z [ RUN ] ShapeTest.SetScalarType
2022-06-19T17:05:01.1580509Z [ OK ] ShapeTest.SetScalarType (0 ms)
2022-06-19T17:05:01.1580802Z [ RUN ] ShapeTest.SetSize
2022-06-19T17:05:01.1581086Z [ OK ] ShapeTest.SetSize (0 ms)
2022-06-19T17:05:01.1581354Z [ RUN ] ShapeTest.Equal
2022-06-19T17:05:01.1581633Z [ OK ] ShapeTest.Equal (0 ms)
2022-06-19T17:05:01.1581911Z [ RUN ] ShapeTest.Ostream
2022-06-19T17:05:01.1582178Z [ OK ] ShapeTest.Ostream (0 ms)
2022-06-19T17:05:01.1582558Z [----------] 7 tests from ShapeTest (0 ms total)
2022-06-19T17:05:01.1582755Z
2022-06-19T17:05:01.1582945Z [----------] 2 tests from LazyShapeTest
2022-06-19T17:05:01.1583246Z [ RUN ] LazyShapeTest.TestMulBasic
2022-06-19T17:05:01.1583600Z unknown file: Failure
2022-06-19T17:05:01.1587689Z C++ exception with description "Could not run 'aten::mul.Tensor' with arguments from the 'Lazy' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::mul.Tensor' is only available for these backends: [Dense, FPGA, ORT, Metal, Quantized, CustomRNGKeyId, MkldnnCPU, Sparse, SparseCsrCUDA, NestedTensor, BackendSelect, Python, Fake, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, AutogradOther, AutogradFunctionality, AutogradNestedTensor, Tracer, AutocastCPU, Autocast, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, DeferredInit, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, TESTING_ONLY_GenericMode, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, CPU, CUDA, XLA, MPS, IPU, XPU, HPU, VE, Lazy, Meta, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCPU, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
2022-06-19T17:05:01.1590281Z
2022-06-19T17:05:01.1590789Z Undefined: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1591481Z CPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCPU.cpp:39984 [kernel]
2022-06-19T17:05:01.1592051Z CUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCUDA.cpp:55699 [kernel]
2022-06-19T17:05:01.1592772Z HIP: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1593598Z MPS: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1594425Z IPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1595252Z XPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1596078Z HPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1596895Z VE: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1597569Z Meta: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterMeta.cpp:34244 [kernel]
2022-06-19T17:05:01.1598307Z PrivateUse1: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1599168Z PrivateUse2: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1600013Z PrivateUse3: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1600832Z FPGA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1601757Z ORT: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1602616Z Vulkan: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1603458Z Metal: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1604287Z QuantizedCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1605153Z QuantizedCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1606038Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1607019Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1607885Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1608760Z QuantizedXPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1609641Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1610515Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1611374Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1612259Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1613127Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1614001Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1614873Z CustomRNGKeyId: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1615573Z MkldnnCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterMkldnnCPU.cpp:691 [kernel]
2022-06-19T17:05:01.1616191Z SparseCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterSparseCPU.cpp:1859 [kernel]
2022-06-19T17:05:01.1616808Z SparseCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterSparseCUDA.cpp:2019 [kernel]
2022-06-19T17:05:01.1617564Z SparseHIP: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1618431Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1619312Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1620182Z SparseXPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1621132Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1621993Z SparseVE: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1622869Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1623747Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1624620Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1625472Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1626291Z SparseCsrCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterSparseCsrCPU.cpp:1508 [kernel]
2022-06-19T17:05:01.1627359Z SparseCsrCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterSparseCsrCUDA.cpp:1658 [kernel]
2022-06-19T17:05:01.1628047Z NestedTensorCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterNestedTensorCPU.cpp:386 [kernel]
2022-06-19T17:05:01.1628702Z NestedTensorCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterNestedTensorCUDA.cpp:466 [kernel]
2022-06-19T17:05:01.1629280Z BackendSelect: fallthrough registered at aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
2022-06-19T17:05:01.1629804Z Python: registered at aten/src/ATen/core/PythonFallbackKernel.cpp:133 [backend fallback]
2022-06-19T17:05:01.1630307Z Functionalize: registered at aten/src/ATen/FunctionalizeFallbackKernel.cpp:174 [backend fallback]
2022-06-19T17:05:01.1630794Z Named: fallthrough registered at aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
2022-06-19T17:05:01.1631258Z Conjugate: registered at aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
2022-06-19T17:05:01.1631712Z Negative: registered at aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
2022-06-19T17:05:01.1632284Z ZeroTensor: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterZeroTensor.cpp:199 [kernel]
2022-06-19T17:05:01.1632827Z ADInplaceOrView: fallthrough registered at aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
2022-06-19T17:05:01.1633509Z AutogradOther: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1634184Z AutogradCPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1634845Z AutogradCUDA: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1635557Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1636247Z AutogradXLA: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1636919Z AutogradMPS: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1637568Z AutogradIPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1638229Z AutogradXPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1638880Z AutogradHPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1639671Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1640367Z AutogradLazy: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1641039Z AutogradMeta: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1641728Z AutogradPrivateUse1: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1642440Z AutogradPrivateUse2: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1643122Z AutogradPrivateUse3: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1643778Z Tracer: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/TraceType_0.cpp:14370 [kernel]
2022-06-19T17:05:01.1644360Z AutocastCPU: fallthrough registered at aten/src/ATen/autocast_mode.cpp:482 [backend fallback]
2022-06-19T17:05:01.1644826Z Autocast: fallthrough registered at aten/src/ATen/autocast_mode.cpp:324 [backend fallback]
2022-06-19T17:05:01.1645259Z Batched: registered at aten/src/ATen/BatchingRegistrations.cpp:1068 [kernel]
2022-06-19T17:05:01.1645742Z VmapMode: fallthrough registered at aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
2022-06-19T17:05:01.1646263Z PythonTLSSnapshot: registered at aten/src/ATen/core/PythonFallbackKernel.cpp:137 [backend fallback]
2022-06-19T17:05:01.1646551Z
2022-06-19T17:05:01.1646812Z Exception raised from reportError at aten/src/ATen/core/dispatch/OperatorEntry.cpp:474 (most recent call first):
2022-06-19T17:05:01.1647222Z (no backtrace available)" thrown in the test body.
2022-06-19T17:05:01.1647572Z [ FAILED ] LazyShapeTest.TestMulBasic (1 ms)
2022-06-19T17:05:01.1647913Z [ RUN ] LazyShapeTest.TestCatBasic
2022-06-19T17:05:01.1648185Z unknown file: Failure
2022-06-19T17:05:01.1651720Z C++ exception with description "Could not run 'aten::cat' with arguments from the 'Lazy' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::cat' is only available for these backends: [Dense, FPGA, ORT, Metal, Quantized, CustomRNGKeyId, MkldnnCPU, Sparse, SparseCsrCUDA, NestedTensor, BackendSelect, Python, Fake, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, AutogradOther, AutogradFunctionality, AutogradNestedTensor, Tracer, AutocastCPU, Autocast, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, DeferredInit, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, TESTING_ONLY_GenericMode, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, CPU, CUDA, XLA, MPS, IPU, XPU, HPU, VE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCPU, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
2022-06-19T17:05:01.1654016Z
2022-06-19T17:05:01.1654519Z Undefined: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1655206Z CPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCPU.cpp:39984 [kernel]
2022-06-19T17:05:01.1655832Z CUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCUDA.cpp:55699 [kernel]
2022-06-19T17:05:01.1656547Z HIP: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1657383Z MPS: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1658213Z IPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1659038Z XPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1659845Z HPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1660740Z VE: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1661418Z Meta: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterMeta.cpp:34244 [kernel]
2022-06-19T17:05:01.1662150Z PrivateUse1: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1662995Z PrivateUse2: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1663850Z PrivateUse3: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1664692Z FPGA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1665525Z ORT: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1666358Z Vulkan: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1667548Z Metal: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1668293Z QuantizedCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterQuantizedCPU.cpp:1330 [kernel]
2022-06-19T17:05:01.1669065Z QuantizedCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1669948Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1670818Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1671690Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1672568Z QuantizedXPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1673452Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1674312Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1675191Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1676192Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1677103Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1677979Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1678836Z CustomRNGKeyId: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1679691Z MkldnnCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1680405Z SparseCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterSparseCPU.cpp:1859 [kernel]
2022-06-19T17:05:01.1681110Z SparseCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterSparseCUDA.cpp:2019 [kernel]
2022-06-19T17:05:01.1681844Z SparseHIP: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1682712Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1683592Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1684463Z SparseXPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1685321Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1686187Z SparseVE: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1687063Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1688002Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1688861Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1689735Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1690605Z SparseCsrCPU: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1691471Z SparseCsrCUDA: registered at bazel-out/k8-fastbuild/bin/aten/src/ATen/RegisterCompositeExplicitAutogradNonFunctional.cpp:26815 [default backend kernel]
2022-06-19T17:05:01.1692111Z BackendSelect: fallthrough registered at aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
2022-06-19T17:05:01.1692641Z Python: registered at aten/src/ATen/core/PythonFallbackKernel.cpp:133 [backend fallback]
2022-06-19T17:05:01.1693142Z Functionalize: registered at aten/src/ATen/FunctionalizeFallbackKernel.cpp:174 [backend fallback]
2022-06-19T17:05:01.1693635Z Named: fallthrough registered at aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
2022-06-19T17:05:01.1694081Z Conjugate: registered at aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
2022-06-19T17:05:01.1694536Z Negative: registered at aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
2022-06-19T17:05:01.1695058Z ZeroTensor: registered at aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
2022-06-19T17:05:01.1695587Z ADInplaceOrView: fallthrough registered at aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
2022-06-19T17:05:01.1696267Z AutogradOther: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1696950Z AutogradCPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1697628Z AutogradCUDA: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1698325Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1699072Z AutogradXLA: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1699741Z AutogradMPS: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1700404Z AutogradIPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1701082Z AutogradXPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1701734Z AutogradHPU: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1702420Z UNKNOWN_TENSOR_TYPE_ID: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1703110Z AutogradLazy: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1703785Z AutogradMeta: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1704465Z AutogradPrivateUse1: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1705178Z AutogradPrivateUse2: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1705875Z AutogradPrivateUse3: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/VariableType_0.cpp:14136 [autograd kernel]
2022-06-19T17:05:01.1706534Z Tracer: registered at bazel-out/k8-fastbuild/bin/torch/csrc/autograd/generated/TraceType_0.cpp:14370 [kernel]
2022-06-19T17:05:01.1707378Z AutocastCPU: registered at aten/src/ATen/autocast_mode.cpp:486 [kernel]
2022-06-19T17:05:01.1707832Z Autocast: fallthrough registered at aten/src/ATen/autocast_mode.cpp:324 [backend fallback]
2022-06-19T17:05:01.1708295Z Batched: registered at aten/src/ATen/BatchingRegistrations.cpp:1068 [kernel]
2022-06-19T17:05:01.1708763Z VmapMode: fallthrough registered at aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
2022-06-19T17:05:01.1709284Z PythonTLSSnapshot: registered at aten/src/ATen/core/PythonFallbackKernel.cpp:137 [backend fallback]
2022-06-19T17:05:01.1709573Z
2022-06-19T17:05:01.1709834Z Exception raised from reportError at aten/src/ATen/core/dispatch/OperatorEntry.cpp:474 (most recent call first):
2022-06-19T17:05:01.1710263Z (no backtrace available)" thrown in the test body.
2022-06-19T17:05:01.1710599Z [ FAILED ] LazyShapeTest.TestCatBasic (0 ms)
2022-06-19T17:05:01.1711033Z [----------] 2 tests from LazyShapeTest (1 ms total)
2022-06-19T17:05:01.1711239Z
2022-06-19T17:05:01.1711430Z [----------] 2 tests from TrieCacheTest
2022-06-19T17:05:01.1711736Z [ RUN ] TrieCacheTest.TestSinglePath
2022-06-19T17:05:01.1712087Z [ OK ] TrieCacheTest.TestSinglePath (0 ms)
2022-06-19T17:05:01.1712431Z [ RUN ] TrieCacheTest.TestTwoPaths
2022-06-19T17:05:01.1712843Z [ OK ] TrieCacheTest.TestTwoPaths (0 ms)
2022-06-19T17:05:01.1713277Z [----------] 2 tests from TrieCacheTest (0 ms total)
2022-06-19T17:05:01.1713484Z
2022-06-19T17:05:01.1713658Z [----------] 3 tests from UtilTest
2022-06-19T17:05:01.1713956Z [ RUN ] UtilTest.ExceptionCleanup
2022-06-19T17:05:01.1714273Z [ OK ] UtilTest.ExceptionCleanup (0 ms)
2022-06-19T17:05:01.1714576Z [ RUN ] UtilTest.MaybeRef
2022-06-19T17:05:01.1714862Z [ OK ] UtilTest.MaybeRef (0 ms)
2022-06-19T17:05:01.1715121Z [ RUN ] UtilTest.Iota
2022-06-19T17:05:01.1715388Z [ OK ] UtilTest.Iota (0 ms)
2022-06-19T17:05:01.1715755Z [----------] 3 tests from UtilTest (0 ms total)
2022-06-19T17:05:01.1715948Z
2022-06-19T17:05:01.1716140Z [----------] Global test environment tear-down
2022-06-19T17:05:01.1716473Z [==========] 33 tests from 9 test suites ran. (21 ms total)
2022-06-19T17:05:01.1716844Z [ PASSED ] 31 tests.
2022-06-19T17:05:01.1717097Z [ FAILED ] 2 tests, listed below:
2022-06-19T17:05:01.1717410Z [ FAILED ] LazyShapeTest.TestMulBasic
2022-06-19T17:05:01.1717734Z [ FAILED ] LazyShapeTest.TestCatBasic
2022-06-19T17:05:01.1717929Z
2022-06-19T17:05:01.1718033Z 2 FAILED TESTS
```
This error repros locally for me.
### Versions
master 7d17e3b884c83a11b2453b8ab13f054fda160474
cc @ngimel
| 0 |
5,417 | 79,851 |
[bazel] [ci] `//:module_test` CUDA error: CUDA driver version is insufficient for CUDA runtime version
|
module: cuda, triaged, module: bazel
|
### π Describe the bug
I made a run of bazel in CI with GPU enabled and a long timeout.
PR: https://github.com/pytorch/pytorch/pull/79844
The logs contain all errors that CI hit https://github.com/pytorch/pytorch/runs/6955918249?check_suite_focus=true
If you search for `Test output for //:module_test` you will find the following error
```
==================== Test output for //:module_test:
Running main() from gmock_main.cc
[==========] Running 58 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 58 tests from ModuleTest
[ RUN ] ModuleTest.CanEnableAndDisableTrainingMode
[ OK ] ModuleTest.CanEnableAndDisableTrainingMode (21 ms)
[ RUN ] ModuleTest.ZeroGrad
[ OK ] ModuleTest.ZeroGrad (3 ms)
[ RUN ] ModuleTest.ZeroGradWithUndefined
[ OK ] ModuleTest.ZeroGradWithUndefined (1 ms)
[ RUN ] ModuleTest.RegisterModuleThrowsForEmptyOrDottedName
[ OK ] ModuleTest.RegisterModuleThrowsForEmptyOrDottedName (0 ms)
[ RUN ] ModuleTest.RegisterModuleThrowsForDuplicateModuleName
[ OK ] ModuleTest.RegisterModuleThrowsForDuplicateModuleName (0 ms)
[ RUN ] ModuleTest.ReplaceModuleThrowsForUnknownModuleName
[ OK ] ModuleTest.ReplaceModuleThrowsForUnknownModuleName (0 ms)
[ RUN ] ModuleTest.ReplaceModule
[ OK ] ModuleTest.ReplaceModule (0 ms)
[ RUN ] ModuleTest.UnregisterModule
[ OK ] ModuleTest.UnregisterModule (0 ms)
[ RUN ] ModuleTest.RegisterParameterThrowsForEmptyOrDottedName
[ OK ] ModuleTest.RegisterParameterThrowsForEmptyOrDottedName (0 ms)
[ RUN ] ModuleTest.RegisterParameterThrowsForDuplicateModuleName
[ OK ] ModuleTest.RegisterParameterThrowsForDuplicateModuleName (0 ms)
[ RUN ] ModuleTest.RegisterParameterUndefinedTensor
[ OK ] ModuleTest.RegisterParameterUndefinedTensor (0 ms)
[ RUN ] ModuleTest.RegisterBufferThrowsForEmptyOrDottedName
[ OK ] ModuleTest.RegisterBufferThrowsForEmptyOrDottedName (0 ms)
[ RUN ] ModuleTest.RegisterBufferThrowsForDuplicateModuleName
[ OK ] ModuleTest.RegisterBufferThrowsForDuplicateModuleName (0 ms)
[ RUN ] ModuleTest.CanGetName
[ OK ] ModuleTest.CanGetName (0 ms)
[ RUN ] ModuleTest.AsCastsModulesCorrectly
[ OK ] ModuleTest.AsCastsModulesCorrectly (0 ms)
[ RUN ] ModuleTest.DeviceOrDtypeConversionSkipsUndefinedTensor
[ OK ] ModuleTest.DeviceOrDtypeConversionSkipsUndefinedTensor (0 ms)
[ RUN ] ModuleTest.DeviceOrDtypeConversionSkipsUndefinedTensor_CUDA
unknown file: Failure
C++ exception with description "CUDA error: CUDA driver version is insufficient for CUDA runtime version
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from getDevice at ./c10/cuda/impl/CUDAGuardImpl.h:39 (most recent call first):
(no backtrace available)" thrown in the test body.
[ FAILED ] ModuleTest.DeviceOrDtypeConversionSkipsUndefinedTensor_CUDA (0 ms)
[ RUN ] ModuleTest.ParametersAndBuffersAccessorSkipsUndefinedTensor
[ OK ] ModuleTest.ParametersAndBuffersAccessorSkipsUndefinedTensor (0 ms)
[ RUN ] ModuleTest.Conversion_MultiCUDA
unknown file: Failure
C++ exception with description "Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
Exception raised from device_count_impl at c10/cuda/CUDAFunctions.cpp:44 (most recent call first):
(no backtrace available)" thrown in the test body.
[ FAILED ] ModuleTest.Conversion_MultiCUDA (1 ms)
[ RUN ] ModuleTest.CallingCloneOnModuleThatDoesNotOverrideCloneThrows
[ OK ] ModuleTest.CallingCloneOnModuleThatDoesNotOverrideCloneThrows (0 ms)
[ RUN ] ModuleTest.CallingCloneOnModuleThatDoesOverrideCloneDoesNotThrow
[ OK ] ModuleTest.CallingCloneOnModuleThatDoesOverrideCloneDoesNotThrow (0 ms)
[ RUN ] ModuleTest.CloneCreatesDistinctParameters
[ OK ] ModuleTest.CloneCreatesDistinctParameters (15 ms)
[ RUN ] ModuleTest.CloneCreatesDistinctParametersExplicitDevice_CUDA
-- Test timed out at 2022-06-19 17:03:11 UTC --
```
Despite the timeout at the end, the test fails earlier.
This is strange (how does the driver version work out for the CMake build in this case?), but I have seen the same error when running on RBE too. I don't see this error when I run locally with driver `470.82.01` and CUDA `11.2`.
### Versions
master `7d17e3b884c83a11b2453b8ab13f054fda160474`
cc @ngimel
| 3 |
5,418 | 79,848 |
Automatically calculate output_shape of sequential model (or any other fCNN)
|
triaged, module: meta tensors
|
### π The feature, motivation and pitch
Follow-up on #79512: if support for meta conv layers is added, it should hypothetically be possible to do this:
```python
import torch
feature_extractor = torch.nn.Sequential(
torch.nn.Conv2d(3, 8, kernel_size=3),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=2),
torch.nn.Conv2d(8, 16, kernel_size=4),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=3)
)
b_img = torch.rand(32, 3, 28, 28)
b_img = b_img.to('meta')  # Tensor.to() returns a new tensor, so assign the result back
feature_extractor.to('meta')
print(feature_extractor(b_img).shape)
# mlp stuff here
```
### Alternatives
```python
import torch
feature_extractor = torch.nn.Sequential(
torch.nn.Conv2d(3, 8, kernel_size=3),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=2),
torch.nn.Conv2d(8, 16, kernel_size=4),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=3)
)
b_img = torch.rand(32, 3, 28, 28)
# this is computationally expensive
print(feature_extractor(b_img).shape)
```
### Additional context
I am currently working on #79834 which should add support for meta conv layers.
| 1 |
5,419 | 79,847 |
Multi-node training meets unknown error
|
oncall: distributed, triaged
|
### π Describe the bug
Multi-node training hits an unknown error.
The code I use is
```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
dist.init_process_group("nccl", rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
rank = int(os.environ["LOCAL_RANK"])
print(rank)
torch.cuda.set_device(rank)
# create model and move it to GPU with id rank
model = nn.Linear(10, 5).to(rank)
print(model)
ddp_model = DDP(model, device_ids=[rank], output_device=rank)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
for _ in range(10):
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
print(ddp_model.module.weight)
dist.destroy_process_group()
```
Then I use the following commands to run:
```
torchrun --nnode 2 --node_rank 0 --nproc_per_node 2 --master_addr 10.1.4.144 --master_port 13579 main.py
torchrun --nnode 2 --node_rank 1 --nproc_per_node 2 --master_addr 10.1.4.144 --master_port 13579 main.py
```
where node 0's IP is 10.1.4.144.
This error occurs on node_rank 0:
```
Traceback (most recent call last):
File "main.py", line 18, in <module>
ddp_model = DDP(model, device_ids=[rank], output_device=rank)
File "/data/qingsong/anaconda3/envs/th110/lib/python3.8/site-packages/torch/nn/parallel/distributed.py"
, line 578, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:891, internal error, NCC
L version 21.0.3
ncclInternalError: Internal check failed. This is either a bug in NCCL or due to memory corruption
Linear(in_features=10, out_features=5, bias=True)
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 20640 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 20639) of bi
nary: /data/qingsong/anaconda3/envs/th110/bin/python
```
Meanwhile, node_rank 1 hangs.
### Versions
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-163-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 3090
GPU 1: GeForce RTX 3090
GPU 2: GeForce RTX 3090
GPU 3: GeForce RTX 3090
GPU 4: GeForce RTX 3090
GPU 5: GeForce RTX 3090
GPU 6: GeForce RTX 3090
GPU 7: GeForce RTX 3090
GPU 8: GeForce RTX 3090
GPU 9: GeForce RTX 3090
Nvidia driver version: 455.23.04
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.10.1+cu111
[pip3] torchaudio==0.10.1+rocm4.1
[pip3] torchvision==0.11.2+cu111
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.10.1+cu111 pypi_0 pypi
[conda] torchaudio 0.10.1+rocm4.1 pypi_0 pypi
[conda] torchvision 0.11.2+cu111 pypi_0 pypi
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 3 |
5,420 | 79,842 |
Automatically use CUDA
|
triaged, needs research, module: python frontend
|
### π The feature, motivation and pitch
To instantiate the Torch DSL you must presently use the following code:
```
import torch; torch.set_default_tensor_type(torch.cuda.FloatTensor)
from torch import *
```
The feature request is that the first line happens by default when CUDA is available.
Additionally, switching modes should be as simple as:
```
cpu() # now everything happens on the CPU
```
```
gpu() # now everything happens on the GPU
```
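For illustration, here is a minimal sketch of how the requested switches could be approximated with today's API; the `gpu()`/`cpu()` helpers are hypothetical names, not existing torch functions:
```python
import torch

def gpu() -> None:
    # Hypothetical helper: make newly created tensors default to CUDA when available.
    if torch.cuda.is_available():
        torch.set_default_tensor_type(torch.cuda.FloatTensor)

def cpu() -> None:
    # Hypothetical helper: make newly created tensors default to the CPU again.
    torch.set_default_tensor_type(torch.FloatTensor)
```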
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
5,421 | 79,802 |
[ONNX] Replace test inheritance for `test/onnx/test_models.py` with parameterizing
|
module: onnx, triaged, onnx-triaged
|
### π The feature, motivation and pitch
`test/onnx/test_models_onnxruntime.py` inherits tests from `test/onnx/test_models.py`, swapping the ONNX backend to test against.
Rewrite this with parameterization to make the relation between test cases and test files clear.
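A minimal sketch of the parameterized direction, assuming the `parameterized` package; the class and test body below are placeholders, not the real test code:
```python
import unittest

from parameterized import parameterized_class

@parameterized_class([{"backend": "onnx"}, {"backend": "onnxruntime"}])
class TestModels(unittest.TestCase):
    backend: str  # which ONNX backend this instantiation exercises

    def test_backend_is_explicit(self):
        # A real test would export a model and verify it against self.backend
        # instead of relying on test-class inheritance.
        self.assertIn(self.backend, {"onnx", "onnxruntime"})

if __name__ == "__main__":
    unittest.main()
```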
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,422 | 79,788 |
Parameter.__deepcopy__ doesn't preserve view relationships
|
module: nn, triaged, module: correctness (silent)
|
When deepcopying a module with `Parameter`s that share the same storage, the resultant copy does not maintain the view relationship.
```python
import torch
from copy import deepcopy
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
t = torch.randn(3, requires_grad=True)
self.p1 = torch.nn.Parameter(t)
self.p2 = torch.nn.Parameter(t)
m = MyModule()
# True; parameters share same storage
print(m.p1.storage().data_ptr() == m.p2.storage().data_ptr())
m_copy = deepcopy(m)
# False; parameters don't share same storage for copy
print(m_copy.p1.storage().data_ptr() == m_copy.p2.storage().data_ptr())
```
This happens because memoization for `Parameter.__deepcopy__()` is done at the `Parameter` level, and so will only kick in if it sees the same `Parameter` object twice:
https://github.com/pytorch/pytorch/blob/bcc4dba439db7a9e11dec9bc98a41069a7adef57/torch/nn/parameter.py#L52-L58
In contrast, `Tensor.__deepcopy__()` does memoization at the storage level, deepcopying the underlying storage once and reusing it if the same storage has been seen before (preserving view relationships).
Ideally, `Parameter.__deepcopy__()` memoization should happen at this level as well. The proposed fix is to tweak `Parameter.__deepcopy__()` to call `Tensor.__deepcopy__()` instead of `clone()`.
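A rough sketch of that proposal (the standalone helper name is hypothetical; the real fix would live inside `Parameter.__deepcopy__` itself):
```python
import torch

def param_deepcopy(param: torch.nn.Parameter, memo: dict) -> torch.nn.Parameter:
    # Reuse Tensor-level deepcopy so the underlying storage is copied once and
    # then reused via `memo`, preserving view relationships between Parameters.
    if id(param) in memo:
        return memo[id(param)]
    data_copy = param.data.__deepcopy__(memo)  # memoizes the copied storage
    result = torch.nn.Parameter(data_copy, param.requires_grad)
    memo[id(param)] = result
    return result
```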
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are
| 0 |
5,423 | 79,787 |
Improve clarity by making sharding a static nightly update
|
module: ci, triaged
|
Currently, sharding is done in every CI job, which involves downloading and parsing data from the outside world inside the job; that can introduce flakiness or redness that is NOT the fault of the commit. Now that we have GH1, we can write test times to the repo every night, and CI jobs can read from that checked-in file instead of needing to access the outside world.
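A minimal sketch of consuming such a nightly file; the file path and JSON schema are assumptions, not the actual CI format:
```python
import json

def shard_tests(test_times_path: str, num_shards: int):
    # Greedy longest-job-first sharding driven by a static, checked-in timing file.
    with open(test_times_path) as f:
        times = json.load(f)  # assumed schema: {"test_nn": 1200.5, "test_ops": 3400.0}
    shards = [[] for _ in range(num_shards)]
    loads = [0.0] * num_shards
    for test, seconds in sorted(times.items(), key=lambda kv: kv[1], reverse=True):
        idx = loads.index(min(loads))  # put the test on the least-loaded shard
        shards[idx].append(test)
        loads[idx] += seconds
    return shards
```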
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 3 |
5,424 | 79,785 |
android-tests is often flaky
|
module: ci, triaged
|
### π Describe the bug
About twice a day, this test fails to download required artifacts and flakily fails. This hurts CI reliability (e.g. https://github.com/pytorch/pytorch/runs/6923556353?check_suite_focus=true) and causes confusion for the oncall and any HUD viewers.
It looks like we have been trying to download certain things and failing. Could we wrap the downloads with retries? cc @kit1980
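For illustration, a generic retry wrapper of the kind being suggested (a sketch, not the actual CI code):
```python
import time

def retry(step, attempts: int = 5, backoff_seconds: float = 10.0):
    # Re-run a flaky step (e.g. a dependency download) a few times before
    # failing the whole job.
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)

# e.g. retry(lambda: urllib.request.urlretrieve(gradle_url, "gradle.zip"))
# (gradle_url is a hypothetical example of a download that currently flakes)
```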
### Versions
CI
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 3 |
5,425 | 79,766 |
[FSDP] Test that module using mixed precision can be loaded into non-mp module
|
triaged, better-engineering, module: fsdp
|
### π The feature, motivation and pitch
In FSDP, a mixed-precision checkpoint is saved with full parameter precision, but we're missing a test that takes such a checkpoint and restores it into a non-mixed-precision module. Having this test will help provide confidence that the use cases relying on this, such as fine-tuning and inference, work as expected.
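A rough sketch of what such a test could look like; it assumes a single-rank process group and a CUDA device are already set up, and the exact state-dict handling may need adjusting to FSDP's current APIs:
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

def _check_mp_checkpoint_loads_into_fp32_module():
    mp = MixedPrecision(param_dtype=torch.float16)
    fsdp_model = FSDP(torch.nn.Linear(8, 8).cuda(), mixed_precision=mp)
    fsdp_model(torch.randn(2, 8, device="cuda")).sum().backward()
    state = fsdp_model.state_dict()  # expected to hold full-precision params
    plain_model = torch.nn.Linear(8, 8).cuda()  # no mixed precision
    plain_model.load_state_dict(state)
    assert all(p.dtype == torch.float32 for p in plain_model.parameters())
```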
### Alternatives
_No response_
### Additional context
_No response_
cc @zhaojuanmao @mrshenli @rohan-varma
| 0 |
5,426 | 79,751 |
[JIT] failures with nested with blocks + loop continuation
|
oncall: jit
|
### π Describe the bug
loop continuation + single `with` was fixed in #79749, but nested `with` blocks are still failing:
Add this to `test/jit/test_with.py` to repro:
```python
def test_with_and_continue_nested(self) -> int:
def fn(a: int):
ans = 0
ans2 = 0
for i in range(100):
with torch.no_grad():
v1 = i * 2
if (a + i) % 6 == 0:
continue
with torch.no_grad():
if (a + i) % 2 == 0:
continue
v2 = i * i
v3 = i * 3 + 4
if (a + i + 1) % 7 == 0:
continue
v4 = i * i * i
ans += v2 + v3 + v1 + v4
ans2 += v1 + v2 + v3 + v4
return ans, ans2
fn_s = torch.jit.script(fn)
self.assertEqual(fn(0), fn_s(0))
self.assertEqual(fn(1), fn_s(1))
self.assertEqual(fn(2), fn_s(2))
```
Stacktrace:
```
#1 0x00007fffc93ec27b in c10::detail::torchCheckFail (func=0x7fffccb18b80 <torch::jit::Node::eraseOutput(unsigned long)::__func__> "eraseOutput",
file=0x7fffccb14074 "../torch/csrc/jit/ir/ir.cpp", line=1315,
msg=0x7fffccb16378 "outputs_[i]->uses().empty() INTERNAL ASSERT FAILED at \"../torch/csrc/jit/ir/ir.cpp\":1315, please report a bug to PyTorch. ")
at ../c10/util/Exception.cpp:94
#2 0x00007fffe131d3f4 in c10::detail::torchInternalAssertFail (func=0x7fffccb18b80 <torch::jit::Node::eraseOutput(unsigned long)::__func__> "eraseOutput",
file=0x7fffccb14074 "../torch/csrc/jit/ir/ir.cpp", line=1315,
condMsg=0x7fffccb16378 "outputs_[i]->uses().empty() INTERNAL ASSERT FAILED at \"../torch/csrc/jit/ir/ir.cpp\":1315, please report a bug to PyTorch. ")
at ../c10/util/Exception.h:437
#3 0x00007fffd55079cd in torch::jit::Node::eraseOutput (this=0x55555b29f5f0, i=4) at ../torch/csrc/jit/ir/ir.cpp:1315
#4 0x00007fffd5507d26 in torch::jit::Node::destroy (this=0x55555b29f5f0) at ../torch/csrc/jit/ir/ir.cpp:1341
#5 0x00007fffd5514202 in torch::jit::generic_graph_node_list_iterator<torch::jit::Node>::destroyCurrent (this=0x7fffffff8ba0)
at ../torch/csrc/jit/ir/graph_node_list.h:103
#6 0x00007fffd5504763 in torch::jit::Block::destroy (this=0x55555b2a1d30) at ../torch/csrc/jit/ir/ir.cpp:750
#7 0x00007fffd5507cb6 in torch::jit::Node::eraseBlock (this=0x55555b291720, i=0) at ../torch/csrc/jit/ir/ir.cpp:1336
#8 0x00007fffd5507d82 in torch::jit::Node::destroy (this=0x55555b291720) at ../torch/csrc/jit/ir/ir.cpp:1344
#9 0x00007fffd5514202 in torch::jit::generic_graph_node_list_iterator<torch::jit::Node>::destroyCurrent (this=0x7fffffff8d20)
at ../torch/csrc/jit/ir/graph_node_list.h:103
#10 0x00007fffd5504763 in torch::jit::Block::destroy (this=0x55555addb360) at ../torch/csrc/jit/ir/ir.cpp:750
#11 0x00007fffd5507cb6 in torch::jit::Node::eraseBlock (this=0x55555b293b10, i=1) at ../torch/csrc/jit/ir/ir.cpp:1336
#12 0x00007fffd5507d82 in torch::jit::Node::destroy (this=0x55555b293b10) at ../torch/csrc/jit/ir/ir.cpp:1344
#13 0x00007fffd5401987 in torch::jit::inlineConsecutiveIfs (node=0x55555b296ff0) at ../torch/csrc/jit/frontend/exit_transforms.cpp:593
#14 0x00007fffd5401b06 in torch::jit::inlineConsecutiveIfs (block=0x55555b296c60) at ../torch/csrc/jit/frontend/exit_transforms.cpp:617
#15 0x00007fffd5401aeb in torch::jit::inlineConsecutiveIfs (block=0x55555ac5c620) at ../torch/csrc/jit/frontend/exit_transforms.cpp:613
#16 0x00007fffd5401aeb in torch::jit::inlineConsecutiveIfs (block=0x55555b281cb0) at ../torch/csrc/jit/frontend/exit_transforms.cpp:613
#17 0x00007fffd54028fc in torch::jit::TransformExits (graph=...) at ../torch/csrc/jit/frontend/exit_transforms.cpp:881
#18 0x00007fffd53f943f in torch::jit::ConvertToSSA (graph=...) at ../torch/csrc/jit/frontend/convert_to_ssa.cpp:345
#19 0x00007fffd541cb1f in torch::jit::to_ir::to_ir (this=0x7fffffff96f0, def=..., resolver_=..., self=0x0, method=...) at ../torch/csrc/jit/frontend/ir_emitter.cpp:676
```
Possibly caused by a check in exit_transforms.cpp that tests whether the parent is a loop continuation (but not whether the parent of the prim::With is a loop continuation): https://github.com/pytorch/pytorch/blob/f9656817df2a745420dbfa015a1f3fd75b3e1b44/torch/csrc/jit/frontend/exit_transforms.cpp#L399-L402
### Versions
based on #79749
| 0 |
5,427 | 79,740 |
Compliance with PEP-0523
|
high priority, triage review, oncall: binaries, triaged
|
### π The feature, motivation and pitch
The simple PyPI index that is used to install PyTorch with `pip` does not meet [PEP-0503][1]. Here is an example installation script.
```shell
pip install \
--extra-index-url "https://download.pytorch.org/whl/cu102" \
torch
```
The issue is that the simple PyPI API does not redirect to the `/`-terminated URL and uses relative paths for the wheel URLs. This is what PEP-0503 says:
> All URLs which respond with an HTML5 page MUST end with a / and the repository SHOULD redirect the URLs without a `/` to add a `/` to the end.
Although PEP-0503 does not require a mandatory redirect, it is expected and usually assumed by users and developers.
[1]: https://peps.python.org/pep-0503/
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @seemethere @malfet
| 0 |
5,428 | 79,739 |
quantization: misleading backend config for linear_dynamic_fp16
|
oncall: quantization, triaged
|
### π Describe the bug
In the current native_backend_config_dict (https://pytorch.org/docs/master/quantization-backend-configuration.html), we have the following entry for linear:
```
{
'pattern': <class 'torch.nn.modules.linear.Linear'>,
'dtype_configs': [
{
'input_dtype': torch.quint8,
'weight_dtype': torch.qint8,
'bias_dtype': torch.float32,
'output_dtype': torch.quint8,
},
{
'input_dtype': torch.quint8,
'weight_dtype': torch.qint8,
'bias_dtype': torch.float32,
'output_dtype': torch.float32,
'is_dynamic': True,
},
{
'input_dtype': torch.float16,
'weight_dtype': torch.float16,
'bias_dtype': torch.float32,
'output_dtype': torch.float32,
'is_dynamic': True,
},
],
'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
'root_module': <class 'torch.nn.modules.linear.Linear'>,
'reference_quantized_module_for_root': <class 'torch.nn.quantized._reference.modules.linear.Linear'>,
'qat_module': <class 'torch.nn.qat.modules.linear.Linear'>,
},
```
This is misleading because `linear_dynamic_fp16` is only supported for fbgemm, not qnnpack. Filing this issue so we don't forget about it.
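For reference, a small example of the fp16 dynamic path in question, which is why the qnnpack entry is misleading (a sketch; running it assumes a build where fbgemm is available):
```python
import torch
from torch.ao.quantization import quantize_dynamic

torch.backends.quantized.engine = "fbgemm"  # linear_dynamic_fp16 is fbgemm-only
float_model = torch.nn.Sequential(torch.nn.Linear(16, 8))
quantized_model = quantize_dynamic(float_model, {torch.nn.Linear}, dtype=torch.float16)
print(quantized_model)  # Linear is replaced by its dynamic fp16 quantized variant
```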
### Versions
torch version: '1.13.0a0+git3acd9bb'
revision: c6c207f620d01c02f192613c5a231f2581dc3997
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @jgong5 @Xia-Weiwen @leslie-fang-intel @vkuzo
| 0 |
5,429 | 79,715 |
[FX] TypeError when tracing cat taking split's output as input
|
triaged, module: fx
|
### π Describe the bug
Hi,
Let's say I'd like to feed `cat` with `split`'s output and trace it :
```python
import torch
from torch.fx import symbolic_trace
class M(torch.nn.Module):
def forward(self, x):
y = torch.split(x, 4, 2)
return torch.cat(y, 0)
m = M()
traced = symbolic_trace(m)
print(traced.graph)
```
When running this, I get a `TypeError` because the tracer causes `split` to output a `Proxy` object (as expected, thanks to FX), but `cat` currently needs a sequence of tensors :
```shell
TypeError: cat() received an invalid combination of arguments - got (Proxy, int), but expected one of:
* (tuple of Tensors tensors, int dim, *, Tensor out)
* (tuple of Tensors tensors, name dim, *, Tensor out)
```
IMO issue #34294 seems to be closely related to what we see here.
### Versions
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-37-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.11.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchdiffeq==0.2.3
[pip3] torchensemble==0.1.7
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0
[conda] Could not collect
cc @ezyang @SherlockNoMad
| 0 |
5,430 | 79,709 |
ONEDNN testing is not done properly in quantization codebase
|
oncall: quantization, triaged
|
Update: we have various tests in quantization which assume fbgemm but do not guard for fbgemm properly. We should change
```
@skipIfNoFBGEMM
def test_foo(...):
...
```
to
```
@skipIfNoFBGEMM
def test_foo(...):
with override_qengines('fbgemm'):
...
```
### π Describe the bug
When I run
```
python test/test_quantization.py -k Fx
```
on master (revision c6c207f620d01c02f192613c5a231f2581dc3997), there are test failures. There should be no test failures.
Log of failures: https://gist.github.com/vkuzo/d33ce5313b2ac2180064a5a55b9093a2
### Versions
https://gist.github.com/vkuzo/f5683a595a1481aac892e3c300de3acd
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @jgong5 @Xia-Weiwen @leslie-fang-intel @vkuzo
| 3 |
5,431 | 79,705 |
gradgradcheck fails for torch.native_layer_norm
|
module: double backwards, module: autograd, triaged
|
### π Describe the bug
gradgradcheck fails for `torch.native_layer_norm` when `weight=None` and `bias` is defined.
```python
import torch
from torch.autograd.gradcheck import gradgradcheck
a = torch.randn(2, requires_grad=True, dtype=torch.float64)
bias = torch.randn(2, requires_grad=True, dtype=torch.float64)
def func(a, bias):
return torch.native_layer_norm(a, (2,), None, bias, 1e-5)
gradgradcheck(func, [a, bias])
```
```py
GradcheckError: Jacobian mismatch for output 1 with respect to input 2,
numerical:tensor([[1.0000, 0.0000],
[0.0000, 1.0000]], dtype=torch.float64)
analytical:tensor([[0., 0.],
[0., 0.]], dtype=torch.float64)
```
### Versions
Fails on Colab (1.11.0+cu113) and master branch.
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,432 | 79,703 |
Float and double tensors randomly initialized with the same seed get different values for size >= 16
|
triaged, module: random
|
### π Describe the bug
Hello, when two size-n tensors with dtypes float and double are randomly initialized with the same seed, they get different values for n >= 16. For n <= 15 they are equal, as expected.
### Sample code to reproduce the problem
```python
import torch
n = 16
t1 = torch.zeros(n, dtype=torch.float)
t2 = torch.zeros(n, dtype=torch.double)
torch.manual_seed(0)
t1.normal_(0, 1)
torch.manual_seed(0)
t2.normal_(0, 1)
print(t1)
print(t2)
```
### Expected result
`t1` and `t2` should contain the same values.
### Obtained result
`t1` and `t2` have different values for n >= 16. For example with n=16:
```python
tensor([-1.1258, -1.1524, -0.2506, -0.4339, 0.8487, 0.6920, -0.3160, -2.1152,
0.3223, -1.2633, 0.3500, 0.3081, 0.1198, 1.2377, 1.1168, -0.2473])
tensor([-2.3104, -0.3733, -1.0608, 0.9995, -0.8840, -1.2755, -0.6232, -0.8664,
-1.2956, 1.5236, 0.3237, 2.0177, 1.1357, -1.2269, 0.0714, 0.3380],
dtype=torch.float64)
```
`t1` and `t2` have equal values for n <= 15. For example with n=15:
```python
tensor([ 1.5410, -0.2934, -2.1788, 0.5684, -1.0845, -1.3986, 0.4033, 0.8380,
-0.7193, -0.4033, -0.5966, 0.1820, -0.8567, 1.1006, -1.0712])
tensor([ 1.5410, -0.2934, -2.1788, 0.5684, -1.0845, -1.3986, 0.4033, 0.8380,
-0.7193, -0.4033, -0.5966, 0.1820, -0.8567, 1.1006, -1.0712],
dtype=torch.float64)
```
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0.post2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.5 | packaged by conda-forge | (main, Jun 14 2022, 07:06:46) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0.post2
[conda] libblas 3.9.0 15_linux64_mkl conda-forge
[conda] libcblas 3.9.0 15_linux64_mkl conda-forge
[conda] liblapack 3.9.0 15_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.22.4 py310h4ef5377_0 conda-forge
[conda] pytorch 1.11.0 cpu_py310h75c9ab6_2 conda-forge
```
cc @pbelevich
| 0 |
5,433 | 79,684 |
Does Torch JIT Support Trace High-level Custom Op?
|
oncall: jit
|
### π The feature, motivation and pitch
E.g., what I need is something like this:
```py
@jit_trace_atomic_define("my_custom_ops_0")
def my_op(x):
y = many-complex-custom-func(x)
return F.relu(y) * 2.5
```
By tracing this operator, what I expect to get is
```
%y.5 : Tensor = my_custom_ops_0(%x), scope: __module._xxx # ...
```
instead of **expanding every small internal computation function** (because several of the internal functions don't support JIT tracing).
Is this possible in current PyTorch?
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,434 | 79,647 |
tensorboard SummaryWriter.add_graph fails when model uses empty tuples
|
triaged, module: tensorboard
|
### π Describe the bug
I'd like to add the following model to tensorboard, because the model is traceable and can produce a graph that can be printed correctly:
```python
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter
@torch.jit.script_if_tracing
def func(x: torch.Tensor):
return x, ()
class A(nn.Module):
def forward(self, x):
return func(x)[0]
model = A()
print(torch.jit.trace(model, torch.rand(3)).graph)
writer = SummaryWriter("output")
writer.add_graph(model, torch.rand(10, 10, 10))
writer.close()
```
However, the `add_graph` method fails to parse the graph due to the empty tuple. It gives the following error:
```
graph(%self : __torch__.A,
%x : Float(3, strides=[1], requires_grad=0, device=cpu)):
%4 : Function = prim::Constant[name="func"]()
%5 : (Tensor, ()) = prim::CallFunction(%4, %x)
%6 : Tensor, %7 : () = prim::TupleUnpack(%5)
return (%6)
Traceback (most recent call last):
File "//a.py", line 20, in <module>
writer.add_graph(model, torch.rand(10, 10, 10))
File "/usr/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py", line 736, in add_graph
self._get_file_writer().add_graph(graph(model, input_to_model, verbose, use_strict_trace))
File "/usr/lib/python3.10/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 301, in graph
list_of_nodes = parse(graph, trace, args)
File "/usr/lib/python3.10/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 237, in parse
nodes_py.append(NodePyOP(node))
File "/usr/lib/python3.10/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 93, in __init__
self.attributes = str({k: node_cpp[k] for k in node_cpp.attributeNames()}).replace("'", ' ')
File "/usr/lib/python3.10/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 93, in <dictcomp>
self.attributes = str({k: node_cpp[k] for k in node_cpp.attributeNames()}).replace("'", ' ')
File "/usr/lib/python3.10/site-packages/torch/onnx/utils.py", line 1262, in _node_getitem
return getattr(self, sel)(k)
AttributeError: 'torch._C.Node' object has no attribute 'ival'
```
It looks to me like an issue of tracing - tracing creates a node with missing attributes.
Downstream error report: https://github.com/facebookresearch/detectron2/issues/1607#issuecomment-1093355307
### Versions
My code was run in `pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime` official docker container.
| 0 |
5,435 | 79,644 |
[FSDP] Progress of ParamExecOrderWrapPolicy
|
in progress, triaged, module: fsdp
|
## Description
This issue is used to track the development progress of ParamExecOrderWrapPolicy in PyTorch distributed FSDP, initial PR: #79238.
## Remaining TODOs:
- [ ] Starting from the second iteration, remove all internal FSDP wraps from the root FSDP module. In addition, register all removed FSDP wraps as attributes of the root FSDP. These FSDP wraps will be used to schedule parameter communication and resharding.
- [ ] Implement a helper function that could merge a list of FSDP wraps into one. This allows one to group multiple FSDP wraps and perform communication/resharding together, so that neighboring modules (based on the execution order) can be wrapped together in the ParamExecOrderWrapPolicy, which is a feature currently not supported in recursive wrapping and has the potential to improve performance.
- [ ] Based on FSDP._fsdp_params_exec_order, patch the forward() function (or use forward hook) of each module.
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 0 |
5,436 | 79,629 |
Missing the time unit in duration time of DDP logging
|
triaged, module: ddp
|
### π Describe the bug
The interface _get_ddp_logging_data() does not label the time unit of the duration fields, such as forward_compute_time, backward_compute_time, backward_comm_time, and backward_compute_comm_overlap_time. The current time unit is ns.
To make the DDP logging data more readable, here are the recommended names according to their actual meanings:
- "forward_compute_time" should be "forward_device_duration_ns"
- "forward_compute_time_start" should be "forward_host_start"
- "backward_compute_time" should be "backward_device_duration_ns"
- "backward_compute_time_start" should be "backward_host_start"
- "backward_compute_time_end" should be "backward_host_end"
- "backward_comm_time" should be "backward_comm_device_duration_ns"
- "backward_comm_time_start" should be "backward_comm_host_start"
- "backward_comm_time_end" should be "backward_comm_host_end"
- "backward_compute_comm_overlap_time" should be "backward_compute_comm_overlap_device_duration_ns"
### Versions
since pytorch 1.9
| 2 |
5,437 | 79,620 |
[FSDP] Verify that FSDP-managed parameters are the same across ranks
|
triaged, better-engineering, module: fsdp
|
Similar to https://github.com/pytorch/pytorch/issues/68803, we can add a check that the FSDP-managed parameters (i.e. all parameters minus the ignored parameters on that rank, which may be different across ranks) are the same across ranks in debug mode.
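A minimal sketch of what such a debug-mode check could look like (the helper name and the name/shape representation are illustrative, not the actual FSDP internals):
```python
import torch.distributed as dist

def check_fsdp_params_same_across_ranks(named_params, group=None):
    """Raise if the FSDP-managed parameter names/shapes differ across ranks."""
    local = sorted((name, tuple(p.shape)) for name, p in named_params)
    gathered = [None] * dist.get_world_size(group)
    dist.all_gather_object(gathered, local, group=group)
    for rank, remote in enumerate(gathered):
        if remote != local:
            raise RuntimeError(
                f"FSDP-managed parameters on rank {dist.get_rank(group)} "
                f"do not match rank {rank}"
            )
```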
cc @zhaojuanmao @mrshenli @rohan-varma
| 0 |
5,438 | 79,606 |
PyTorch Preview (Nightly) version number does not comply with Conda conventions
|
oncall: binaries, triaged
|
### π Describe the bug
Conda environments using PyTorch Preview (Nightly) builds cannot be easily exported and recreated because of the way the package version number changes every night, and previous builds are no longer available.
This requires the user to manually edit their Conda environment file daily to remove the date from the PyTorch package version number, e.g., changing it from `pytorch=1.13.0.dev20220614` to `pytorch=1.13.0.*`.
The user experience with Conda would be better if `dev20220614` were removed from the package version number. Then `conda env export --no-builds` would work for PyTorch as for all other packages available via Conda.
Below are all the steps required to reproduce this issue.
---
Create a new Conda environment:
`$ conda create --name pytorch-nightly python=3.9.13 --no-default-packages`
Active the Conda environment:
`$ conda activate pytorch-nightly`
Install the newest PyTorch Preview (Nightly) build via Conda:
`$ conda install pytorch -c pytorch-nightly`
Export the Conda environment to a file, with and without build specifications:
`$ conda env export > with_environment.yml`
`$ conda env export --no-builds > without_environment.yml`
Compare the resulting files:
```
$ diff with_environment.yml without_environment.yml
7,25c7,25
< - bzip2=1.0.8=h3422bc3_4
< - ca-certificates=2022.5.18.1=h4653dfc_0
< - libffi=3.4.2=h3422bc3_5
< - libzlib=1.2.12=h90dfc92_0
< - ncurses=6.3=h07bb92c_1
< - openssl=3.0.3=ha287fd2_0
< - pip=22.1.2=pyhd8ed1ab_0
< - python=3.9.13=h96fcbfb_0_cpython
< - python_abi=3.9=2_cp39
< - pytorch=1.13.0.dev20220614=py3.9_0
< - readline=8.1.2=h46ed386_0
< - setuptools=62.3.4=py39h2804cbe_0
< - sqlite=3.38.5=h40dfcc0_0
< - tk=8.6.12=he1e0b03_0
< - typing_extensions=4.2.0=pyha770c72_1
< - tzdata=2022a=h191b570_0
< - wheel=0.37.1=pyhd8ed1ab_0
< - xz=5.2.5=h642e427_1
< - zlib=1.2.12=h90dfc92_0
---
> - bzip2=1.0.8
> - ca-certificates=2022.5.18.1
> - libffi=3.4.2
> - libzlib=1.2.12
> - ncurses=6.3
> - openssl=3.0.3
> - pip=22.1.2
> - python=3.9.13
> - python_abi=3.9
> - pytorch=1.13.0.dev20220614
> - readline=8.1.2
> - setuptools=62.3.4
> - sqlite=3.38.5
> - tk=8.6.12
> - typing_extensions=4.2.0
> - tzdata=2022a
> - wheel=0.37.1
> - xz=5.2.5
> - zlib=1.2.12
```
Notice that `dev20220614` is a part of the `pytorch` _package version number_ (not its _build specification_).
Wait until the next Preview (Nightly) build is released.
Deactivate and delete the Conda environment you created earlier:
`$ conda deactivate`
`$ conda env remove --name pytorch-nightly`
Recreate the environment from the file created earlier:
`$ conda env create --file without_environment.yml`
Notice the error message:
```
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- pytorch=1.13.0.dev20220614
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220614
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:01:00) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] torch==1.13.0.dev20220614
[conda] numpy 1.21.6 py39h690d673_0 conda-forge
[conda] pytorch 1.13.0.dev20220614 py3.9_0 pytorch-nightly
```
cc @ezyang @seemethere @malfet
| 0 |
5,439 | 79,604 |
Some unit tests are failing
|
oncall: distributed, triaged, module: docker
|
### π Describe the bug
Some distributed tests and some core tests are failing.
Note: initially I got these errors while running unit tests in v1.11.0, but then I tried running them on the master branch and got the same errors. The first 2 gists linked below contain the output of the unit tests on the master branch, while the last gist contains the output of the unit tests in docker using pytorch v1.11.0, since I could not build the docker image on the master branch (see below).
To reproduce the issue clone and build the master branch from source using the instructions in README.md, then install additional packages that are required by unit tests:
```bash
conda install pytest expecttest hypothesis
```
Then run the unit tests:
```
python test/run_test.py |& tee test_log_master.txt
```
[The output is large, so I've put it in a gist](https://gist.github.com/Daemonhost/df06dc16d4db5a61aacb4719d14ef058#file-test_log_master-txt). The errors are at the end.
After that I tried running core tests only
```
python test/run_test.py --core |& tee test_log_core_master.txt
```
[Here's the gist](https://gist.github.com/Daemonhost/df06dc16d4db5a61aacb4719d14ef058#file-test_log_core_master-txt).
## Unit tests in docker in v1.11.0
I could not build the docker image in the master branch. Here's the error message
```
#18 5.555 Building wheel torch-1.13.0a0+git83e575c
#18 5.556 Traceback (most recent call last):
#18 5.556 File "setup.py", line 317, in <module>
#18 5.556 cmake = CMake()
#18 5.556 File "/opt/pytorch/tools/setup_helpers/cmake.py", line 116, in __init__
#18 5.556 self._cmake_command = CMake._get_cmake_command()
#18 5.556 File "/opt/pytorch/tools/setup_helpers/cmake.py", line 145, in _get_cmake_command
#18 5.556 raise RuntimeError("no cmake or cmake3 with version >= 3.13.0 found")
#18 5.556 RuntimeError: no cmake or cmake3 with version >= 3.13.0 found
```
So instead I've built the docker image in v1.11.0, and tried running the core unit tests there. [Here's the gist](https://gist.github.com/Daemonhost/df06dc16d4db5a61aacb4719d14ef058#file-test_log_core_docker_v1-11-0-txt).
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0a0+git83e575c
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.58
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 510.54
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.18.4
[pip3] torch==1.13.0a0+git83e575c
[conda] blas 1.0 mkl
[conda] magma-cuda113 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.22.3 py38he7a7128_0
[conda] numpy-base 1.22.3 py38hf524024_0
[conda] torch 1.13.0a0+git83e575c pypi_0 pypi
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |
5,440 | 79,592 |
[LTC] Introduce a `MetricsReport` python binding and allow backend to add their report as string
|
triaged, lazy
|
### π The feature, motivation and pitch
`Metrics` and `Counters` classes were originally introduced in https://github.com/pytorch/xla/blob/master/third_party/xla_client/metrics.h and then got upstreamed to LTC in https://github.com/pytorch/pytorch/blob/master/torch/csrc/lazy/core/metrics.cpp.
As part of the LTC migration, pytorch/xla wants to adapt the upstream `Metrics` class, but we run into an issue. Currently the pytorch/xla `Metrics` class is part of `xla_client`, which is built with tensorflow. `xla_client` cannot take any dependency on pytorch. `ComputationClient` in `xla_client` also needs to use `Metrics` to record some debugging information. I think it will be very hard to get rid of the pytorch/xla version of `Metrics`.
Given those `xla_client` complications, I think we can still keep 2 `metrics` classes. When `MetricsReport` (for which we will need to add a binding on the LTC side) is called in LTC, LTC will collect all of its counters and metrics. After that is done, LTC can call a `BackendInterface` method to get the backend-specific metrics report as a string and append that to its report.
I think this is actually a good idea, since users can easily tell which metrics/counters are backend specific and where to look.
To summarize, I think we should
1. Add a `MetricsReport` python binding in LTC
2. Add a `GetBackendMetricReport` method to the `BackendInterface`
3. Append `GetBackendMetricReport` at the end of the LTC `MetricsReport`
@Krovatkin @wconstab
### Alternatives
Have two APIs: one that only collects metrics reports for LTC and another that only collects them for XLA.
### Additional context
_No response_
| 6 |
5,441 | 79,563 |
__torch_dispatch__ does not return new output in inplace function
|
triaged, module: __torch_dispatch__
|
### π Describe the bug
Even though we are returning a `ModeTensor` in `x.add_(1)` below, the output is still the input `torch.rand([4])` which is not a `ModeTensor`.
```
import torch
from torch.utils._python_dispatch import TorchDispatchMode
class ModeTensor(torch.Tensor):
def __new__(cls, elem, mode):
r = torch.Tensor._make_subclass(cls, elem, elem.requires_grad)
r.elem = elem
r.mode = mode
return r
def __torch_dispatch(self, func, types, args=(), kwargs=None):
with self.mode:
return func(*args, **kwargs)
class Mode(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
def unwrap(e):
if isinstance(e, ModeTensor):
return e.elem
else:
return e
def wrap(t):
if isinstance(t, torch.Tensor):
return ModeTensor(t, self)
else:
return t
return wrap(func(*tuple(unwrap(a) for a in args), **kwargs))
x = torch.rand([4])
with Mode():
out_func = x.add(1)
out_inplace = x.add_(1)
print(type(out_func), out_inplace)
# <class '__main__.ModeTensor'> tensor([1.2925, 1.5918, 1.2033, 1.9141])
```
### Versions
master
cc @Chillee @ezyang @zou3519 @albanD @samdow
| 1 |
5,442 | 79,542 |
Unable to use a parameter with torch.sparse_coo layout with DDP
|
oncall: distributed, module: sparse, triaged, module: ddp
|
### π Describe the bug
*Description :* DDP's parameter verification step fails when one of my parameters is a `torch.sparse_coo_tensor`
*To reproduce the bug :* Run the following script
```
import os  # needed for os.environ in setup()
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp
class Net(nn.Module):
def __init__(self):
super().__init__()
i = torch.tensor([[0, 1, 1],
[2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
out = torch.sparse_coo_tensor(i, v, [2, 4])
self.out = nn.Parameter(out)
def forward(self, x):
return x.sum() + self.out.sum()
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def demo_basic(rank, world_size):
setup(rank, world_size)
# create model and move it to GPU with id rank
model = Net().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
def cleanup():
dist.destroy_process_group()
if __name__ == "__main__":
n_gpus = torch.cuda.device_count()
assert n_gpus >= 2, f"Requires at least 2 GPUs to run, but got {n_gpus}"
world_size = n_gpus
run_demo(demo_basic, world_size)
```
This is the error that I get :
```
Traceback (most recent call last):
File "/home/v-lucaspa/polytropon/test.py", line 63, in <module>
run_demo(demo_basic, world_size)
File "/home/v-lucaspa/polytropon/test.py", line 51, in run_demo
mp.spawn(demo_fn,
File "/datadrive/anaconda3/envs/hf/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/datadrive/anaconda3/envs/hf/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
while not context.join():
File "/datadrive/anaconda3/envs/hf/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/datadrive/anaconda3/envs/hf/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/v-lucaspa/polytropon/test.py", line 36, in demo_basic
ddp_model = DDP(model, device_ids=[rank])
File "/datadrive/anaconda3/envs/hf/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 641, in __init__
dist._verify_params_across_processes(self.process_group, parameters)
RuntimeError: sparse tensors do not have strides
```
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-1021-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.6.3
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_2
[conda] numpy-base 1.21.5 py39hf524024_2
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-lightning 1.6.3 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchvision 0.12.0 py39_cu113 pytorch
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @nikitaved @pearu @cpuhrsch @amjames
| 1 |
5,443 | 79,528 |
test_ops.py extremely slow on cuda11.3
|
triaged
|
Since the merge of #78304, our Linux CUDA 11.3 tests have increased in test time by 3 hours. @0x00b1's PR introduces several OpInfos that are the culprit.

Before the change, our total test time was averaging around 7.5 hrs. After, it was consistently 10.5 hrs.
| 3 |
5,444 | 79,518 |
Display a "reference" link for ops that points to primTorch implementations
|
module: docs, triaged, better-engineering, module: primTorch
|
Users have often requested a more readable way to understand PyTorch's operators. See https://github.com/pytorch/pytorch/pull/79413#issuecomment-1153988754 for a recent reiteration of this suggestion.
cc @svekars @holly1238 @ezyang @mruberry @ngimel
| 0 |
5,445 | 79,510 |
DISABLED test_checkpoint_wrapper_parity (__main__.CheckpointWrapperTest)
|
triaged, module: flaky-tests, skipped, module: fsdp
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_checkpoint_wrapper_parity&suite=CheckpointWrapperTest&file=distributed/fsdp/test_checkpoint_wrapper.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6872974643).
Over the past 3 hours, it has been determined flaky in 13 workflow(s) with 13 red and 13 green.
cc @zhaojuanmao @mrshenli @rohan-varma
| 10 |
5,446 | 79,508 |
DISABLED test_caching_pinned_memory (__main__.TestCuda)
|
module: cuda, triaged, module: flaky-tests, skipped
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_caching_pinned_memory&suite=TestCuda&file=test_cuda.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6872881686).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green.
cc @ngimel
| 4 |
5,447 | 79,477 |
Implement NestedTensor size function
|
module: nestedtensor, oncall: transformer/mha
|
### π The feature, motivation and pitch
NestedTensor does not implement a size method. NestedTensor has been integrated into nn.Transformer, which has revealed a couple of use cases where the sizes of intermediate tensors in the model are needed.
The three main cases are:
- PyTorch profiler fails on nn.Transformer because it attempts to find the size of all intermediate tensors, which includes NestedTensors in the transformer.
- Autograd with NestedTensor fails because it requires the size of the tensor
- Torchscript with NestedTensor fails because torch.jit.script(model) attempts to find the size of intermediate tensors for performance optimizations.
### Alternatives
Two main ways to fix:
1. In all code that tries to find size of intermediate tensors, check for NestedTensor, and if it's a NestedTensor, do not ask for size. This seems feasible for the above use cases (profiler: has a WIP patch to check for NestedTensor, autograd: WIP in https://github.com/pytorch/pytorch/issues/79039, torchscript: could probably make a torchscript-specific patch). But I imagine that there are many such use cases and it would be hard to cover them all.
2. Implement NestedTensor size method in a way that does not break existing use cases.
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @erichan1
| 2 |
5,448 | 79,476 |
[META] Sign up to discuss significantly modifying CI
|
module: ci, triaged
|
Our CI currently runs tens of thousands of tests across many different platforms and compilers. As PyTorch grows and different modules would like to add to our CI, we should meet and discuss the added value of increased testing vs our constraints of CI capacity and time to signal (TTS). We should also discuss any changes that may largely affect CI.
Please sign up for a slot (Tuesdays 4:05-5:00pm ET) below! Please add a topic and an RFC/document that should be prepared ahead of time so we can spend more time in discussion. The PyTorch Dev Infra team will be de-facto attendees. **Please include emails so the invites could be sent out accordingly.**
### 6/14/22 [EXAMPLE]
**Topic**: Let's Add More Fun(c) to CI
**Presenter(s)**: Firstname Lastname (presenter@email.com)
**RFC/Document**: https://github.com/pytorch/pytorch/issues/78082 is a good example
**Invitees/Attendees**: Team MemberA (teammateA@email.com), Team MemberB (teammateB@email.com), Module ExpertA (expertA@email.com), etc..
### 6/21/22
**Topic**:
**Presenter(s)**:
**RFC/Document**:
**Invitees**:
### 7/12/22
**Topic**:
**Presenter(s)**:
**RFC/Document**:
**Invitees**:
### 7/26/22
**Topic**:
**Presenter(s)**:
**RFC/Document**:
**Invitees**:
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 0 |
5,449 | 79,469 |
Add a new _broadcast_coalesced op for DDP
|
triaged
|
Add a new _broadcast_coalesced op for DDP. A follow up on #76722.
| 0 |
5,450 | 79,468 |
Ensure the guards in distributed_c10d.py wrappers get executed in the replay of the graph
|
oncall: distributed, triaged
|
Ensure the guards in distributed_c10d.py wrappers get executed or reassured in the replay of the captured graph in any graph mode. A follow up on #76722.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,451 | 79,467 |
Add autograd support for dispatch passable c10d ops
|
oncall: distributed, triaged
|
Add autograd support for dispatch passable c10d ops as a follow up on #76722. It can probably reuse the implementation here: https://github.com/pytorch/pytorch/blob/master/torch/distributed/nn/functional.py.
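For reference, a minimal sketch of the autograd.Function pattern used in that file, shown here for a sum all_reduce (the class name is illustrative, not the actual implementation):
```python
import torch
import torch.distributed as dist

class _AllReduceSum(torch.autograd.Function):
    @staticmethod
    def forward(ctx, tensor):
        out = tensor.clone()
        dist.all_reduce(out, op=dist.ReduceOp.SUM)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        # The gradient of a sum all_reduce is itself a sum all_reduce.
        grad = grad_output.clone()
        dist.all_reduce(grad, op=dist.ReduceOp.SUM)
        return grad

# Usage (after init_process_group): out = _AllReduceSum.apply(tensor)
```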
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,452 | 79,464 |
Iteration # 1-offset in DDP logging
|
oncall: distributed, triaged, module: ddp
|
### π Describe the bug
When using _get_ddp_logging_data() to collect logging data, the iteration number is off by one. Starting from the second iteration, the DDP logging data are always offset by one iteration.
This table shows the results of comparing the DDP logging data with the pytorch profiler traces. For example, iteration 2 of the DDP logging data is 20.5 ms, which is very close to iteration 1 of the profiler traces, which is 20.6 ms.
Iteration \ Time (ms) | Forward_compute_time (DDP) | Forward time in profiling
-- | -- | --
1 | 1953.336832 | 20.661
2 | 20.523072 | 22.753
3 | 23.968768 | 28.798
4 | 29.75024 | 18.042
5 | 18.861792 | 40.775
6 | 41.697664 |
Here is the sample code for ResNet18:
```python
import os
import socket
import sys
import logging
import time
from collections import OrderedDict

import torch
import torch.distributed as dist
import torch.multiprocessing as mp  # needed for mp.spawn below
import torch.nn as nn
from torch.autograd.profiler import record_function
import torchvision
from torchvision import models, transforms


def run_worker(rank, world_size):
    torch.cuda.set_device(rank)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = models.resnet18(pretrained=False).cuda()
    ddp_model = nn.parallel.DistributedDataParallel(
        module=model,
        device_ids=[rank],
    )
    # Define loss function, input, labels, and optimizer.
    loss_fn = nn.MSELoss()
    input = torch.randn(10, 3, 224, 224).cuda()
    labels = torch.randn(10, 1000).cuda().to(rank)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.001)
    with torch.profiler.profile(
        activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
        on_trace_ready=torch.profiler.tensorboard_trace_handler("/tmp/traces"),
        record_shapes=True,
    ) as prof:
        for i in range(0, 10):
            with record_function(f"#Step {i}"):
                optimizer.zero_grad()
                with record_function("#forward"):
                    output = ddp_model(input)
                loss = loss_fn(output, labels).half().cuda()
                with record_function("#backward"):
                    loss.backward()
                optimizer.step()
            # Print the DDP logging data collected for this iteration.
            ddp_logging = ddp_model._get_ddp_logging_data()
            print(ddp_logging)
    # Cleanup.
    dist.destroy_process_group()


if __name__ == "__main__":
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12345"
    world_size = torch.cuda.device_count()
    mp.spawn(run_worker, args=(world_size,), nprocs=world_size, join=True)
```
### Versions
since pytorch 1.9
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,453 | 79,459 |
[BE][ZeRO] Enable multigpu unit tests
|
oncall: distributed, triaged, better-engineering
|
Following https://github.com/pytorch/pytorch/pull/77947, we should enable `ZeroRedundancyOptimizer` multigpu unit tests.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,454 | 79,453 |
[LTC] Make `torch::lazy::BackendImplInterface::ExecuteComputation` takes `ComputationPtr` instead of `Computation`
|
triaged, lazy
|
### π The feature, motivation and pitch
PyTorch/XLA is adapting to upstream LTC. In https://github.com/pytorch/xla/pull/3627 I am trying to implement the remaining parts of the backend interface. The issue I run into right now is that `ExecuteComputation` currently takes `Computation`. The call is coming from
https://github.com/pytorch/pytorch/blob/30fb2c4abaaaa966999eab11674f25b18460e609/torch/csrc/lazy/core/lazy_graph_executor.cpp#L964-L967
The issue here is that PyTorch/XLA inherits Computation and defined `torch_xla::Computation` in
https://github.com/pytorch/xla/blob/a5d83f412ca603cb6aa16cc86c77fde8f05aabbd/torch_xla/csrc/computation.h#L34
The request is for `ExecuteComputation` to take `ComputationPtr` (which is what `async->cached_computation->computation` is, directly).
### Alternatives
I don't see any alternative.
### Additional context
_No response_
| 1 |
5,455 | 79,452 |
Use c10d broadcast_object in Zero
|
oncall: distributed, module: bootcamp, good first issue, triaged, better-engineering, pt_distributed_rampup
|
### π The feature, motivation and pitch
The current implementation of Zero Redundancy optimizer has its [own implementation](https://github.com/pytorch/pytorch/blob/master/torch/distributed/optim/zero_redundancy_optimizer.py#L73) of object broadcasting.
We should replace it with c10d [broadcast_object_list](https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast_object_list).
To verify the result, run the tests in `test/distributed/optim/test_zero_redundancy_optimizer.py`.
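For reference, a minimal sketch of what the replacement could look like (function and variable names are illustrative, not the actual ZeRO code):
```python
import torch.distributed as dist

def broadcast_state(state, src_rank, group=None):
    """Send `state` from global rank `src_rank` to all ranks via c10d."""
    # Every rank must pass a list of the same length; non-source entries are
    # placeholders that get overwritten in place by the collective.
    obj_list = [state if dist.get_rank() == src_rank else None]
    dist.broadcast_object_list(obj_list, src=src_rank, group=group)
    return obj_list[0]
```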
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 4 |
5,456 | 93,765 |
Guards for a linked list will be `O(n^2)`
|
module: internals, triaged, enhancement, oncall: pt2
|
Suppose the user creates a linked list like data structure. The guards might be something like:
```
head.next.val == 1 and
head.next.next.val == 2 and
head.next.next.next.val == 3 and
head.next.next.next.next.val == 4 and
head.next.next.next.next.next.val == 5 and
...
```
Since each line in the guards starts from `f_locals`, accessing the nth item will be `O(n)` and the total guards will be `O(n^2)`. Something similar might also happen for deep `nn.Module` hierarchies.
We should rewrite how we do codegen for guards to include [common subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination), which will fix this issue. We should also consider implementing guards in C++, as they are a potentially hot part of the code.
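A hand-written Python illustration of the effect of CSE on such a guard (this is not Dynamo's actual codegen, just the shape of the problem):
```python
# Naive guard: every check re-walks the chain from the root, so checking the
# n-th node costs O(n) attribute loads and the whole guard costs O(n^2).
def naive_guard(head):
    return (
        head.next.val == 1
        and head.next.next.val == 2
        and head.next.next.next.val == 3
    )

# After common subexpression elimination each hop is loaded once, so the
# whole guard is O(n) attribute loads.
def cse_guard(head):
    n1 = head.next
    n2 = n1.next
    n3 = n2.next
    return n1.val == 1 and n2.val == 2 and n3.val == 3
```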
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @soumith @msaroufim @wconstab @ngimel
| 0 |
5,457 | 79,425 |
API for accessing SymIntNode mandates refcount bump even when it is unnecessary
|
triaged
|
### π Describe the bug
For example, toSymbolicIntNode could profitably return a reference, but it returns a shared pointer, forcing a refcount bump.
### Versions
master
cc @Krovatkin
| 0 |
5,458 | 79,418 |
Add doc formatting check to lintrunner
|
module: lint, triaged, better-engineering
|
From @mruberry: Docs are difficult to build locally (our documentation for how to do so is often out of date) and formatting errors in CI appear without line numbers (in at least some cases). Some recent example errors/warnings:
```
/opt/conda/lib/python3.7/site-packages/torch/__init__.py:docstring of torch.set_float32_matmul_precision:22: ERROR: Unexpected indentation.
/opt/conda/lib/python3.7/site-packages/torch/__init__.py:docstring of torch.set_float32_matmul_precision:23: WARNING: Block quote ends without a blank line; unexpected unindent.
```
If lintrunner could detect doc formatting issues it would making adding/updating docs much easier.
| 1 |
5,459 | 79,407 |
Conda environment
|
triaged
|
### π Describe the bug
Traceback (most recent call last):
File "/data1/yvr/.conda/envs/opt/bin/opt-baselines", line 33, in <module>
sys.exit(load_entry_point('metaseq==0.0.1', 'console_scripts', 'opt-baselines')())
File "/data1/yvr/.conda/envs/opt/lib/python3.7/site-packages/metaseq-0.0.1-py3.7-linux-x86_64.egg/metaseq/launcher/opt_baselines.py", line 314, in cli_main
get_grid, postprocess_hyperparams, add_extra_options_func=add_extra_options_func
File "/data1/yvr/.conda/envs/opt/lib/python3.7/site-packages/metaseq-0.0.1-py3.7-linux-x86_64.egg/metaseq/launcher/sweep.py", line 378, in main
backend_main(get_grid, postprocess_hyperparams, args)
File "/data1/yvr/.conda/envs/opt/lib/python3.7/site-packages/metaseq-0.0.1-py3.7-linux-x86_64.egg/metaseq/launcher/slurm.py", line 34, in main
grid = get_grid(args)
File "/data1/yvr/.conda/envs/opt/lib/python3.7/site-packages/metaseq-0.0.1-py3.7-linux-x86_64.egg/metaseq/launcher/opt_baselines.py", line 78, in get_grid
raise RuntimeError("Where are you running this?! Check DATA_LOCATIONS.")
RuntimeError: Where are you running this?! Check DATA_LOCATIONS.
### Versions
After I set up the conda environment following setup.md, I planned to follow train.md, but it ran into this bug.
| 1 |
5,460 | 79,395 |
SymInt equality tests are unsound
|
triaged
|
### π Describe the bug
Current implementation is
```
bool operator==(const SymInt& p2) const {
return data_ == p2.data_;
}
bool operator!=(const SymInt& p2) const {
return data_ != p2.data_;
}
```
This is unsound. Two sym ints can reference different symbolic variables, but after guarding on their concrete values we may discover that they are equal, in which case == should return true. These comparisons need to be virtualized like the rest of the operators on SymInt.
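A toy Python illustration of the problem (not the C++ internals): comparing the raw handles says "different" even when the guarded concrete values agree.
```python
class ToySymInt:
    """Stand-in for a symbolic int: `sym` names the symbol; the concrete
    value is only known after we guard on it."""
    def __init__(self, sym):
        self.sym = sym

a, b = ToySymInt("s0"), ToySymInt("s1")

# Handle/data comparison: always False for distinct symbols.
print(a.sym == b.sym)  # False

# But after guarding, both symbols may bind to the same concrete value,
# so a sound == has to consult the symbolic layer instead of the raw handle.
concrete = {"s0": 4, "s1": 4}
print(concrete[a.sym] == concrete[b.sym])  # True
```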
@Krovatkin
### Versions
master
| 0 |
5,461 | 79,388 |
Init connect timeout when use torch.distributed.run
|
oncall: distributed, oncall: r2p
|
### π Describe the bug
TRAINING_SCRIPT.py
```
def main():
dist.init_process_group("nccl", init_method='env://')
.......
if __name__ == "__main__":
main()
```
when I run this on both node0 and node1
```
export LOGLEVEL=INFO && python -m torch.distributed.run --nproc_per_node=1 --nnodes=2
--rdzv_id=ID1 --rdzv_backend=c10d --rdzv_endpoint='IP1:2222' TRAINING_SCRIPT.py
```
I get the error from both node0 and node1
```
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : launch_mnist.py
min_nodes : 2
max_nodes : 2
nproc_per_node : 1
run_id : ID1
rdzv_backend : c10d
rdzv_endpoint : IP1:2222
rdzv_configs : {'timeout': 900}
max_restarts : 0
monitor_interval : 5
log_dir : None
metrics_cfg : {}
[E socket.cpp:793] [c10d] The client socket has timed out after 60s while trying to connect to (IP1, 2222).
ERROR:torch.distributed.elastic.multiprocessing.errors.error_handler:{
"message": {
"message": "RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.",
"extraInfo": { .......}
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 156, in _create_tcp_store
host, port, is_master=is_server, timeout=timedelta(seconds=read_timeout)
TimeoutError: The client socket has timed out after 60s while trying to connect to (IP1, 2222).
The above exception was the direct cause of the following exception:
```
but when I change the run
on node0 (use localhost instead of IP1)
```
export LOGLEVEL=INFO && python -m torch.distributed.run --nproc_per_node=1 --nnodes=2
--rdzv_id=ID1 --rdzv_backend=c10d --rdzv_endpoint='localhost:2222' TRAINING_SCRIPT.py
```
on node1
```
export LOGLEVEL=INFO && python -m torch.distributed.run --nproc_per_node=1 --nnodes=2
--rdzv_id=ID1 --rdzv_backend=c10d --rdzv_endpoint='IP1:2222' TRAINING_SCRIPT.py
```
it go well.
the output of node0
```
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : launch_mnist_v12.py
min_nodes : 2
max_nodes : 2
nproc_per_node : 1
run_id : ID1
rdzv_backend : c10d
rdzv_endpoint : localhost:2222
rdzv_configs : {'timeout': 900}
max_restarts : 0
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_6o014_3m/m638480e883e4cd58af52617214cfe50__u799hzl
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=IP1
master_port=54902
group_rank=0
group_world_size=2
local_ranks=[0]
role_ranks=[0]
global_ranks=[0]
role_world_sizes=[2]
global_world_sizes=[2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_6o014_3m/m638480e883e4cd58af52617214cfe50__u799hzl/attempt_0/0/error.json
env MASTER_ADDR=IP1
env MASTER_PORT=54902
env WORLD_SIZE=2
env RANK=0
env LOCAL_RANK=0
| distributed init (rank 0): env://,(backend nccl):, local rank:0, world size:2
NCCL version 2.10.3+cuda10.2
Train Epoch: 1 [0/60000 (0%)] Loss: 2.317649
```
the output of node1
```
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : launch_mnist_v12.py
min_nodes : 2
max_nodes : 2
nproc_per_node : 1
run_id : ID1
rdzv_backend : c10d
rdzv_endpoint : IP1:2222
rdzv_configs : {'timeout': 900}
max_restarts : 0
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_n4bpjqqf/m638480e883e4cd58af52617214cfe50_gz_c6jhz
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=IP1
master_port=54902
group_rank=1
group_world_size=2
local_ranks=[0]
role_ranks=[1]
global_ranks=[1]
role_world_sizes=[2]
global_world_sizes=[2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_n4bpjqqf/m638480e883e4cd58af52617214cfe50_gz_c6jhz/attempt_0/0/error.json
env MASTER_ADDR=IP1
env MASTER_PORT=54902
env WORLD_SIZE=2
env RANK=1
env LOCAL_RANK=0
| distributed init (rank 1): env://,(backend nccl):, local rank:0, world size:2
```
Another strange thing is that when I use the deprecated module `torch.distributed.launch`, it works fine when I run the following on both node0 and node1:
```
python -m torch.distributed.launch --master_addr="IP1" --master_port=2222 --nproc_per_node=1 --nnodes=2 TRAINING_SCRIPT.py
```
as mentioned in #76367
### Versions
Collecting environment information...
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.26
Python version: 3.7.5 (default, Apr 26 2022, 08:54:01) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-514.44.5.10.h193.x86_64-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB
Nvidia driver version: 450.102.04
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 9 |
5,462 | 79,387 |
caffe2_nvrtc is produced even when it won't be used
|
module: build, triaged, module: selective build
|
### π Describe the bug
The library is produced whenever CUDA is enabled. However, in the code, lazyNVRTC is called instead of loading this library. The only situation it appears to be used in is for ROCm.
Perhaps the library should only be produced for ROCm builds?
### Versions
Has been like this for a long time. Affects any modern version.
cc @malfet @seemethere @dhruvbird @ljk53
| 0 |
5,463 | 79,383 |
Incorrect image upscaling on MPS backend
|
triaged, module: mps
|
### π Describe the bug
Upscaling images via [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) works on-CPU, but produces visually-incorrect output using MPS backend on M1 Max.
Using PyTorch nightly build, `1.13.0.dev20220610`.
# Setup
I've made an [mps-repro](https://github.com/Birch-san/mps-repro) repository for this.
[`repro.py` here](https://github.com/Birch-san/mps-repro/blob/main/repro.py).
```bash
git clone https://github.com/Birch-san/mps-repro.git
cd mps-repro
python3 -m venv venv
source ./venv/bin/activate
python3 -m pip install --upgrade pip
#--install Real-ESRGAN
git submodule update --init --recursive
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P Real-ESRGAN/experiments/pretrained_models
# GFPGAN is unnecessarily pinned to an old numpy, for which there is no M1 macOS release. my fork fixes this
pip install basicsr facexlib git+https://github.com/Birch-san/GFPGAN.git@newer-numpy
cd Real-ESRGAN
pip install -r requirements.txt
python setup.py develop
cd ..
#--done installing Real-ESRGAN
# our torch nightly probably got nuked by the above, but we do need it for GPU support on macOS
pip install --pre "torch==1.13.0.dev20220610" "torchvision==0.14.0.dev20220609" --extra-index-url https://download.pytorch.org/whl/nightly/
```
# Run
```bash
python repro.py --half_precision_float false --backend_type mps
python repro.py --half_precision_float true --backend_type mps
python repro.py --half_precision_float false --backend_type cpu
# --half_precision_float true on-CPU is unsupported, but that problem's not important.
```
# Results
Attempt to upscale this input image:

Fruit hypothesized by [imagen-pytorch](https://github.com/cene555/Imagen-pytorch).
## CPU
### Half-precision float
[Unsupported](https://github.com/pytorch/pytorch/issues/74625) (`"slow_conv2d_cpu" not implemented for 'Half'`), but that's not what our repro is investigating.
### Single-precision float

Perfectly 2x upsampled fruit.
## MPS
### Half-precision float

Overexposed, white-and-green, high-frequency formless mess.
### Single-precision float

Andy Warhol-esque desaturated fruit, tiled in tiles 2/3rd the width of the original image.
### Versions
```
PyTorch version: 1.13.0.dev20220610
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220610
[pip3] torchvision==0.14.0.dev20220609
[conda] numpy 1.23.0rc2 pypi_0 pypi
[conda] torch 1.13.0.dev20220606 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20220603 pypi_0 pypi
[conda] torchvision 0.14.0a0+f9f721d pypi_0 pypi
```
cc @kulinseth @albanD
| 13 |
5,464 | 79,382 |
torch failure to open libcuda.so.1 on macOS
|
triaged, module: macos
|
### π Describe the bug
torch has a hardcoded CUDA lib soname that does not match the macOS version, which is libcuda.dylib.
I just tried to run an example from [stable-baselines3](https://github.com/DLR-RM/stable-baselines3), and it raised an exception:
<pre>
Traceback (most recent call last):
File "/Users/xlla/test/test-stable-baseline3.py", line 11, in <module>
model.learn(total_timesteps=10_000)
File "/usr/local/lib/python3.9/site-packages/stable_baselines3/ppo/ppo.py", line 304, in learn
return super(PPO, self).learn(
File "/usr/local/lib/python3.9/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 270, in learn
self.train()
File "/usr/local/lib/python3.9/site-packages/stable_baselines3/ppo/ppo.py", line 264, in train
loss.backward()
File "/usr/local/lib/python3.9/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/lib/python3.9/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Error in dlopen: dlopen(libcuda.so.1, 6): image not found
</pre>
I searched for this string and found it in
`aten/src/ATen/cuda/detail/LazyNVRTC.cpp` and `third_party/tensorpipe/tensorpipe/common/cuda_lib.h`.
### Versions
PyTorch version: 1.11.0a0+git967238d
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: macOS 10.13.6 (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.9.13 (main, Jun 3 2022, 08:10:53) [Clang 10.0.1 (clang-1001.0.46.4)] (64-bit runtime)
Python platform: macOS-10.13.6-x86_64-i386-64bit
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GeForce GTX 1060 6GB
Nvidia driver version: 1.1.0
cuDNN version: Probably one of the following:
/usr/local/cuda/lib/libcudnn.7.dylib
/usr/local/cuda/lib/libcudnn_static.a
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] mypy-protobuf==3.2.0
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.6.4
[pip3] torch==1.11.0a0+git967238d
[pip3] torchmetrics==0.9.1
[pip3] torchtext==0.14.0a0+e2fa8d8
[pip3] torchvision==0.14.0a0+5f6e22d
[conda] Could not collect
cc @malfet @albanD
| 0 |
5,465 | 79,375 |
TorchScript bidirectional lnlstm from example doesn't work
|
oncall: jit
|
I tried to run a bidirectional LNLSTM from the example benchmarks/fastrnns/custom_lstms.py with this code:
```python
def test_script_stacked_bidir_lnlstm(seq_len, batch, input_size, hidden_size,
num_layers):
inp = torch.randn(seq_len, batch, input_size)
states = [LSTMState(torch.randn(batch, hidden_size),
torch.randn(batch, hidden_size))
for _ in range(num_layers)]
rnn = script_lnlstm(
input_size, hidden_size, num_layers, bidirectional=True)
# just a smoke test
out, out_state = rnn(inp, states)
return out, out_state
```
It caused the following error:
```
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "/home/xun/projs/ML/ML_alls/custom_lstms.py", line 202, in forward
outputs = torch.jit.annotate(List[Tensor], [])
for i in range(len(inputs)):
out, state = self.cell(inputs[i], state)
~~~~~~~~~ <--- HERE
outputs += [out]
return torch.stack(outputs), state
File "/home/xun/projs/ML/ML_alls/custom_lstms.py", line 176, in forward
hx, cx = state
igates = self.layernorm_i(torch.mm(input, self.weight_ih.t()))
hgates = self.layernorm_h(torch.mm(hx, self.weight_hh.t()))
~~~~~~~~ <--- HERE
gates = igates + hgates
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
RuntimeError: self must be a matrix
```
| 1 |
5,466 | 79,359 |
[build] No documented way to install C++ binaries for pure-python development of pytorch
|
oncall: binaries, module: build, triaged
|
## Issue description
There is currently no method documented to facilitate pure python development of PyTorch. This results in unnecessary compilation and building for those who just want to work on the Python part of PyTorch.
Even after disabling almost everything as documented in the "Install from Source" portion in README, the compilation can still take a long time. This feels quite unnecessary to just work on the Python portion of pytorch.
Current alternatives include manually symlinking files you are working on into site-packages.
## Code example
Run `DEBUG=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_CUDA=0 BUILD_TEST=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 python setup.py develop` (documented in the documentation)
cc @ezyang @seemethere @malfet
| 1 |
5,467 | 79,355 |
[bazel] build spams warnings
|
triaged, module: bazel
|
### π Describe the bug
Minor annoyance about bazel build:
it doesn't use the same COPTs as the CMake build for the warning ignores, which leads to a lot of spam during the build.
It would be useful to do something like "build_variables.bzl" but for the COPTs.
### Versions
master 164029f783ba52d206862925e9341e6b851179ff
| 0 |
5,468 | 79,352 |
Adam not optimally implemented: unnecessary torch.div
|
module: performance, module: optimizer, triaged, actionable
|
### π The feature, motivation and pitch
Adam is implemented in https://github.com/pytorch/pytorch/blob/master/torch/optim/adam.py#L259 using the following effective formula (terminology from the [paper](https://arxiv.org/abs/1412.6980) and the code with light abbreviations):
```
theta = lr / bias1 * m / (sqrt(v) / bias2 + eps)
```
This formulation requires an additional division of the full-sized tensor `sqrt(v)` by `bias2`. Presumably it was done this way to have `eps` not be implicitly affected by `bias2`. However, we can still achieve this without the costly `sqrt(v)/bias2` op - multiply & divide `eps` by `bias2` and then factor out `1/bias2` in the denominator and you get:
```
theta = lr * bias2 / bias1 * m / (sqrt(v) + bias2 * eps)
```
which should be incrementally faster to compute.
Note: the current formulation permeates the Adam and Adam-esque implementations, so any update should presumably be propagated across them.
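A quick numerical sanity check (a sketch using the abbreviated symbols above with arbitrary values) that the two formulations agree:
```python
import torch

torch.manual_seed(0)
m, v = torch.randn(1000), torch.rand(1000)
lr, bias1, bias2, eps = 1e-3, 0.9, 0.99, 1e-8

# Current formulation: divides the full-sized sqrt(v) tensor by bias2.
step_current = lr / bias1 * m / (v.sqrt() / bias2 + eps)

# Proposed formulation: fold 1/bias2 into the scalar coefficients instead.
step_proposed = lr * bias2 / bias1 * m / (v.sqrt() + bias2 * eps)

print(torch.allclose(step_current, step_proposed))  # True (up to rounding)
```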
### Alternatives
N/a
### Additional context
I've confirmed that the current formulation does indeed induce an extra `div` op in the Adam step:

This was done on pytorch v1.10, and the code has been reorganized since so the precise file & line reference is no longer accurate, but the relevant piece of code hasn't actually changed, it's still there on https://github.com/pytorch/pytorch/blob/master/torch/optim/adam.py#L259 as of this writing.
cc @VitalyFedyunin @ngimel @vincentqb @jbschlosser @albanD
| 6 |
5,469 | 79,351 |
[bazel] ability to run gpu tests on gpu machines in RBE
|
triaged, module: bazel
|
### π The feature, motivation and pitch
As a user who builds pytorch using bazel and Remote Build Execution, I need a way to tell the remote build to use GPU machines for tests that require GPU. In order to achieve this, it's beneficial to consolidate all tests usage through the common macros definition.
### Alternatives
Build and test everything on GPU machines. Very expensive.
### Additional context
This is quality of life improvement for bazel builds.
| 0 |
5,470 | 79,349 |
PyTorch gets a positive log_prob from a multivariate normal distribution
|
triaged
|
### π Describe the bug
PyTorch gets a positive log_prob from a multivariate normal distribution.
version is 1.11.0
here is the code:
```
import torch
m = torch.Tensor([0.3512, 1.3149, -0.8541, 0.5619]).to(torch.double)
cov = torch.Tensor([[0.2304, 0.0000, 0.0000, 0.0000],
[0.0000, 0.1919, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0785, 0.0000],
[0.0000, 0.0000, 0.0000, 0.1838]]).to(torch.double)
a = torch.Tensor([0.3278, 1.3005, -0.8631, 0.5571]).to(torch.double)
print(torch.distributions.MultivariateNormal(m, cov).log_prob(a))
```
And I get result 0.0006
### Versions
version is 1.11.0
| 1 |
5,471 | 79,337 |
Conda install from pytorch-nightly channel does not install the expected version on macOS
|
oncall: binaries, triaged, module: macos
|
### π Describe the bug
A similar error occurred before https://github.com/pytorch/pytorch/issues/33103
Another related issue about pytorch nightly on macOS https://github.com/pytorch/pytorch/issues/78681
* Platform: macOS 12.4, Apple M1 Max
* Conde version 4.13
* Conda config: channel_priority: disabled
1. When installing pytorch only, the expected nightly build version (1.13) is installed
```
conda install -c pytorch-nightly pytorch
```
2. When installing pytorch together with torchvision, ```conda-forge``` channel is used and the stable versions are installed, which do not have Apple M chip support
```
conda install -c pytorch-nightly pytorch torchvision
```
3. When installing pytorch first and then torchvision, the pytorch version is forced to fall back to 1.11
```
The following packages will be SUPERSEDED by a higher-priority channel:
pytorch pytorch-nightly::pytorch-1.13.0.dev20~ --> conda-forge::pytorch-1.11.0-cpu_py39h03f923b_1
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220610
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:00:33) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220610
[conda] numpy 1.22.4 py39h7df2422_0 conda-forge
[conda] pytorch 1.13.0.dev20220610 py3.9_0 pytorch-nightly
```
cc @ezyang @seemethere @malfet @albanD
| 1 |
5,472 | 79,336 |
Batches are being duplicated from go http call
|
triaged
|
### π Describe the bug
Hi there,
I'm currently testing a sentence bert (MNLI) model hosted on torchserve GPU. Here is the relevant encoding call.
```python
embedding_list = self.model.encode(queries, batch_size=min(len(queries), 512))
```
I'm testing two entrypoints to understand the latency of this model.
The first is in Python with a simple requests call:
```python
for i in range(1000):
requests.post('http://localhost:8989/predictions/bert_soft_label_one_to_many_query_scores_new', headers=headers, json=json_blob_10_6500_chars)
time.sleep(.1)
```
For the Python call, with 10 encodings of ~6500 chars each (65k total), I see a prediction latency of ~90-120 ms. Great (screenshot below)! I also see similar responses with 100 encodings per call.
<img width="1201" alt="image" src="https://user-images.githubusercontent.com/8621935/173171373-38ccfb07-e137-4b36-8090-c71c8b4db652.png">
However, I'm also testing latency from golang http calls with the following function. This code encodes up to 100 objects per request with ~140 chars each (~14k total). I'm seeing much slower latency, but it also looks like the requests are being duplicated? Notice the double `batches` line.
<img width="1335" alt="image" src="https://user-images.githubusercontent.com/8621935/173171482-ad472166-bcb2-4632-8479-16c14ed227ab.png">
```golang
func (c TorchservedClient) NewGetOneToManyQueryScores(ctx context.Context, otmr *OneToManyQueryScoreRequest) (*OneToManyQueryScoreResponse, error) {
if otmr == nil {
return nil, fmt.Errorf("request is empty")
}
otmrJson, err := json.Marshal(otmr)
if err != nil {
return nil, err
}
// This endpoint corresponds to the model name in the .mar file. The base url passed in should look like `http://torchserved-0.torchserved:8080/predictions ` and we add in the model name to this url.
req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.makeURL("bert_soft_label_one_to_many_query_scores_new"), bytes.NewBuffer(otmrJson))
if err != nil {
return nil, fmt.Errorf("Request construction failed: %w", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.client.Do(req)
if err != nil {
return nil, fmt.Errorf("POST %s failed: %w", req.URL, err)
}
defer webutil.Discard(resp)
if resp.StatusCode != http.StatusOK {
b, _ := ioutil.ReadAll(resp.Body)
return nil, fmt.Errorf("POST %s returned %v %v", req.URL, resp.StatusCode, string(b))
}
var otmResp OneToManyQueryScoreResponse
if err := json.NewDecoder(resp.Body).Decode(&otmResp); err != nil {
return nil, fmt.Errorf("failed to parse response for POST %s: %w", req.URL, err)
}
return &otmResp, nil
}
```
Any advice for what may be going on here?
### Versions
Versions of relevant libraries:
[pip3] mypy==0.942
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torchserve==0.5.3
[pip3] torchvision==0.11.1
| 1 |
5,473 | 79,333 |
[ONNX] Internal assert error during export
|
oncall: jit, module: onnx, onnx-triaged
|
### π Bug
When exporting a PyTorch script model to ONNX, I get an error: ` input_values.size() == param_count_list.size()INTERNAL ASSERT FAILED at "../torch/csrc/jit/python/script_init.cpp":493, please report a bug to PyTorch. `
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
import torch.nn as nn
import torch.nn.functional as F
import re
import os
import unicodedata
import numpy as np
device = torch.device("cpu")
class EncoderRNN(nn.Module):
def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
super(EncoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = embedding
self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
dropout=(0 if n_layers == 1 else dropout), bidirectional=True)
def forward(self, input_seq, input_lengths, hidden=None):
embedded = self.embedding(input_seq)
packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
outputs, hidden = self.gru(packed, hidden)
outputs, _ = torch.nn.utils.rnn.pad_packed_sequence(outputs)
outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:]
return outputs, hidden
hidden_size = 500
encoder_n_layers = 2
decoder_n_layers = 2
dropout = 0.1
voc_num_words = 7826
embedding = nn.Embedding(voc_num_words , hidden_size)
encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
test_seq = torch.LongTensor(10, 1).random_(0, voc_num_words ).to(device)
test_seq_length = torch.LongTensor([test_seq.size()[0]]).to(device)
script_encoder = torch.jit.script(encoder)
torch.onnx.export(script_encoder, (test_seq, test_seq_length),"model8.onnx")
```
### Expected behavior
Successful model export to ONNX.
### Environment
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.4
Libc version: glibc-2.26
Python version: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
cc @svenstaro @eklitzke @dreiss
| 10 |
5,474 | 79,325 |
[NVFuser] hitting fallbacks on demucs (from torchbench + lazy tensor)
|
triaged, module: nvfuser
|
### π Describe the bug
NVFuser is hitting a fallback when running demucs on torchbench + lazy tensor.
Repro:
```python
import torch
from torch.utils.jit.log_extract import run_nvfuser, load_graph_and_inputs
ir = """graph(%0 : Float(8, 512, 1452, strides=[764416, 1493, 1], requires_grad=0, device=cuda:0),
%1 : Float(8, 512, 1452, strides=[743424, 1452, 1], requires_grad=0, device=cuda:0),
%p32 : int):
%3 : Float(8, 512, 1452, strides=[743424, 1452, 1], requires_grad=0, device=cuda:0) = aten::relu(%1)
%4 : Float(8, 512, 1452, strides=[743424, 1452, 1], requires_grad=0, device=cuda:0) = aten::add(%3, %0, %p32)
return (%4)
"""
_, inputs = load_graph_and_inputs(ir)
run_nvfuser(ir, inputs)
```
Error:
```
RuntimeError: stride == cur_contig_stride || (still_rightmost && stride == 1) || (!still_rightmost && stride % word_size == 0) INTERNAL ASSERT FAILED at "/scratch/dberard/bench-june/pytorch/torch/csrc/jit/codegen/cuda/executor_utils.cpp":599, please report a bug to PyTorch. Vectorization of T0_g[ iS70{( ceilDiv(( ceilDiv(( ceilDiv(( T0.size[0] * ( T0.size[1] * T0.size[2] ) ), 4) ), 1) ), 128) )}, iS69{1}, iS67{4}, iS71{128} ] with word size 4 not possible due to invalid stride. Domain: iS69{1}, stride: 1493
```
Stacktrace:
```
#0 0x00007fffc7966d1d in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1 0x00007fffa98d671a in c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /scratch/dberard/bench-june/pytorch/torch/lib/libc10.so
#2 0x00007fffa98d848e in c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /scratch/dberard/bench-june/pytorch/torch/lib/libc10.so
#3 0x00007fffb49d8f68 in torch::jit::fuser::cuda::executor_utils::(anonymous namespace)::validateAlignedVectorizedFusionInputOutput(c10::IValue const&, int, torch::jit::fuser::cuda::TensorView*) () from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#4 0x00007fffb49ddf49 in torch::jit::fuser::cuda::executor_utils::validateVectorizedTensors(torch::jit::fuser::cuda::kir::Kernel*, c10::ArrayRef<c10::IValue> const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, torch::jit::fuser::cuda::executor_utils::caching::ExecutorCompileTimeInfoCache*, torch::jit::fuser::cuda::kir::ExpressionEvaluator&) () from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#5 0x00007fffb49bb5e3 in torch::jit::fuser::cuda::FusionExecutor::runFusion(c10::ArrayRef<c10::IValue> const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, torch::jit::fuser::cuda::LaunchParams const&, c10::optional<unsigned long> const&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#6 0x00007fffb4a78925 in torch::jit::fuser::cuda::FusionKernelRuntime::runKernelWithInput(c10::ArrayRef<c10::IValue> const&, unsigned long, torch::jit::fuser::cuda::SegmentedGroup*) () from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#7 0x00007fffb4a79a56 in torch::jit::fuser::cuda::FusionKernelRuntime::runWithInput(c10::ArrayRef<c10::IValue> const&, unsigned long) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#8 0x00007fffb4a7c620 in torch::jit::fuser::cuda::FusionExecutorCache::runFusionWithInputs(c10::ArrayRef<c10::IValue> const&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#9 0x00007fffb4a7cc9a in torch::jit::fuser::cuda::GraphCache::runGraphWithInputs(c10::ArrayRef<c10::IValue> const&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#10 0x00007fffb4b1182f in torch::jit::fuser::cuda::runCudaFusionGroup(torch::jit::Node const*, std::vector<c10::IValue, std::allocator<c10::IValue> >&)::{lambda()#3}::operator()() const () from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#11 0x00007fffb4b1215e in torch::jit::fuser::cuda::runCudaFusionGroup(torch::jit::Node const*, std::vector<c10::IValue, std::allocator<c10::IValue> >&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cuda_cu.so
#12 0x00007fffbb1432d3 in torch::jit::InterpreterStateImpl::runImpl(std::vector<c10::IValue, std::allocator<c10::IValue> >&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cpu.so
#13 0x00007fffbb1315e2 in torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cpu.so
#14 0x00007fffbb124576 in torch::jit::GraphExecutorImplBase::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_cpu.so
#15 0x00007fffc549c263 in torch::jit::runAndInsertCall(torch::jit::Function&, torch::jit::tuple_slice const&, pybind11::kwargs const&, c10::optional<c10::IValue>, std::function<torch::jit::Value* (torch::jit::Graph&, torch::jit::MatchedSchema const&)> const&) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_python.so
#16 0x00007fffc552fe69 in void pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::{lambda(pybind11::args, pybind11::kwargs)#53}, pybind11::object, pybind11::args, pybind11::kwargs, pybind11::name, pybind11::is_method, pybind11::sibling>(torch::jit::initJitScriptBindings(_object*)::{lambda(pybind11::args, pybind11::kwargs)#53}&&, pybind11::object (*)(pybind11::args, pybind11::kwargs), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) () from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_python.so
#17 0x00007fffc500adac in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
from /scratch/dberard/bench-june/pytorch/torch/lib/libtorch_python.so
```
### Versions
A100, torchbench = main, pytorch = viable/strict.
cc @jjsjann123
| 1 |
5,475 | 79,307 |
`prepare_qat_fx` docstring doesn't run
|
oncall: quantization, triaged, module: fx
|
This code example is in the docstring for `torch.ao.quantization.quantize_fx.prepare_qat_fx`:
https://github.com/pytorch/pytorch/blob/35eda5f95956c3631e5684d519213944ab66b012/torch/ao/quantization/quantize_fx.py#L495-L509
There are two issues:
(1) The import `from torch.ao.quantization import prepare_fx` doesn't actually work. The path is wrong.
(2) This example doesn't even use the function it is describing, but instead uses `prepare_fx`, which is for PTQ, not QAT
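For reference, here is a hedged sketch of what a corrected example could look like (import path taken from `torch.ao.quantization.quantize_fx`; the exact qconfig-mapping form and the `example_inputs` argument vary across versions, so treat this as an illustration rather than the final docstring):
```python
import torch
from torch.ao.quantization import get_default_qat_qconfig
from torch.ao.quantization.quantize_fx import prepare_qat_fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(5, 5)

    def forward(self, x):
        return self.linear(x)

model = M().train()  # prepare_qat_fx expects a model in training mode
qconfig_dict = {"": get_default_qat_qconfig("fbgemm")}
example_inputs = (torch.randn(1, 5),)
prepared = prepare_qat_fx(model, qconfig_dict, example_inputs)
```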
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv
| 0 |
5,476 | 79,299 |
PyTorch gets stuck when using an NVLink/A6000 and more than two GPUs
|
oncall: distributed, triaged, module: ddp
|
### π Describe the bug
When I utilize PyTorch's distributed data parallel (DDP) to train the [ImageNet example](https://github.com/pytorch/examples/tree/main/imagenet) with two GPUs, NVLink is used successfully based on the performance counters. However, as soon as I increase the number of GPUs to three or more (up to eight), the training loop gets stuck at the very beginning.
If I set the environment variable `NCCL_P2P_DISABLE=1`, I can use as many GPUs as I like, but I obviously don't get the benefits of NVLink.
Output with two GPUs:
```
Use GPU: 0 for training
Use GPU: 1 for training
=> creating model 'resnet50'
=> creating model 'resnet50'
Epoch: [0][ 1/5005] Time 6.435 ( 6.435) Data 4.396 ( 4.396) Loss 7.1897e+00 (7.1897e+00) Acc@1 0.00 ( 0.00) Acc@5 0.00 ( 0.00)
Epoch: [0][ 1/5005] Time 6.439 ( 6.439) Data 4.302 ( 4.302) Loss 7.0490e+00 (7.0490e+00) Acc@1 0.00 ( 0.00) Acc@5 2.34 ( 2.34)
Epoch: [0][ 11/5005] Time 0.303 ( 0.854) Data 0.000 ( 0.400) Loss 8.3196e+00 (7.8798e+00) Acc@1 0.00 ( 0.07) Acc@5 0.00 ( 0.28)
Epoch: [0][ 11/5005] Time 0.303 ( 0.853) Data 0.000 ( 0.391) Loss 8.3118e+00 (8.1050e+00) Acc@1 0.00 ( 0.07) Acc@5 0.78 ( 1.07)
Epoch: [0][ 21/5005] Time 0.323 ( 0.592) Data 0.000 ( 0.214) Loss 7.3862e+00 (8.0053e+00) Acc@1 0.00 ( 0.04) Acc@5 0.00 ( 0.22)
Epoch: [0][ 21/5005] Time 0.324 ( 0.591) Data 0.113 ( 0.215) Loss 8.0911e+00 (8.1161e+00) Acc@1 0.78 ( 0.11) Acc@5 0.78 ( 0.78)
```
Output with eight GPUs
```
Use GPU: 0 for training
Use GPU: 4 for training
Use GPU: 5 for training
Use GPU: 7 for training
Use GPU: 3 for training
Use GPU: 1 for training
Use GPU: 2 for training
Use GPU: 6 for training
=> creating model 'resnet50'
=> creating model 'resnet50'
=> creating model 'resnet50'
=> creating model 'resnet50'
=> creating model 'resnet50'
=> creating model 'resnet50'
=> creating model 'resnet50'
=> creating model 'resnet50'
```
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.17
Python version: 3.7.10 (default, Jun 4 2021, 14:48:32) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 510.73.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchfile==0.1.0
[pip3] torchnet==0.0.4
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] cudatoolkit-dev 11.3.1 py37h5e8e339_0 conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.20.2 pypi_0 pypi
[conda] numpy-base 1.21.5 py37ha15fc14_3
[conda] pytorch 1.11.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py37_cu113 pytorch
[conda] torchvision 0.12.0 py37_cu113 pytorch
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 4 |
5,477 | 93,763 |
allowed_functions_module_string_ignorelist doesn't work very well
|
triaged, oncall: pt2, module: dynamo
|
I noticed that adding torch.utils._pytree to the ignore list wasn't enough to induce dynamo into inlining it; I had to add it to the explicitly excluded functions list.
My hypothesis for why this is happening is that sometimes PyTorch modules accidentally reexport identifiers from other modules. These then get picked up and shoved into the allowlist.
To be honest, it's not clear to me why we ignorelist rather than explicitly allowlist what we put into graphs.
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 2 |
5,478 | 79,275 |
testSerializationInterop in test/cpp/jit/torch_python_test.cpp has not run in over two years
|
oncall: jit
|
### π Describe the bug
This was moved into the file in https://github.com/pytorch/pytorch/pull/44795 but wasn't called even then. Please delete the code or fix it and reenable it.
### Versions
head
| 1 |
5,479 | 79,272 |
PyTorch leaks a macro definition called "CHECK" in the C++ version
|
module: cpp, triaged
|
### π Describe the bug
I'm working on a C++/CUDA extension for PyTorch. It is big enough to warrant unit tests on the C++ side, for which I chose to use Catch2.
The problem is that both PyTorch and Catch2 define a CHECK macro. Catch2 has the option to use a prefix, but that makes all tests longer. IMHO PyTorch should not define such a macro (it looks like an unintentional leak to me, as there is also TORCH_CHECK).
```
#include <torch/types.h>
#ifdef CHECK
static_assert(false, "CHECK defined");
#endif
```
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 960
GPU 1: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] pytorch3d==0.6.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.6.2 py39_cu113_pyt1110 pytorch3d
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchvision 0.12.0 py39_cu113 pytorch
cc @jbschlosser
| 0 |
5,480 | 79,261 |
[NVFuser] bad performance on pyhpc_isoneutral_mixing
|
triaged, module: nvfuser
|
### π Describe the bug
-9% on pyhpc_isoneutral_mixing compared to nnc.
Repro:
```
[ENV_VAR] python run.py -m jit -d cuda -t eval pyhpc_isoneutral_mixing
```
where ENV_VAR is either:
* PYTORCH_JIT_USE_NNC_NOT_NVFUSER=1
* PYTORCH_JIT_ENABLE_NVFUSER=1
Alternatively, run https://gist.github.com/davidberard98/53b277b57ddc8d35340fecd279a9688d to run the individual fusion groups used in this model. Run `python pyhpc_isoneutral_mixing_ir.py --id x y z ... ` with a list of graph indices. The graph indices that are performing < -10% worse than nnc are: `8 12 13 15 19 20 29`
### Versions
viable/strict, torchbench=main, A100
cc @jjsjann123
| 0 |
5,481 | 79,250 |
[BE] Generalize recursive wrapping utility
|
oncall: distributed, better-engineering, module: fsdp
|
### π The feature, motivation and pitch
The wrap utilities in `torch.distributed.fsdp.wrap` are not actually FSDP-specific, as they take a configurable `wrapper_cls` argument.
As seen in https://github.com/pytorch/pytorch/pull/78704/files, they can be leveraged for activation checkpointing as well. It might be useful to generalize this utility and offer it more widely.
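A rough sketch of the pattern being proposed for generalization, assuming the current `enable_wrap`/`wrap` helpers in `torch.distributed.fsdp.wrap` (exact signatures may differ between releases; `LoggingWrapper` is a made-up stand-in for e.g. an activation-checkpointing wrapper):
```python
import torch.nn as nn
from torch.distributed.fsdp.wrap import enable_wrap, wrap

class LoggingWrapper(nn.Module):
    """Toy wrapper_cls; nothing about the wrap utilities requires FSDP here."""
    def __init__(self, module: nn.Module):
        super().__init__()
        self.module = module

    def forward(self, *args, **kwargs):
        print(f"calling {type(self.module).__name__}")
        return self.module(*args, **kwargs)

# wrapper_cls is configurable, so the same recursive wrapping machinery applies
with enable_wrap(wrapper_cls=LoggingWrapper):
    wrapped = wrap(nn.Linear(4, 4))  # -> LoggingWrapper(Linear(...))
```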
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,482 | 79,246 |
[NVFuser] bad performance on mobilenet_v2 and mobilenet_v3_large
|
triaged, module: nvfuser
|
### π Describe the bug
-17% on mobilenet_v2 and -9.68% on mobilenet_v3_large.
From torchbench, run:
```
[ENV_FLAG] python run.py -m jit -d cuda -t eval [MODEL_NAME]
```
* replace ENV_FLAG with either `PYTORCH_JIT_ENABLE_NVFUSER=1` (for nvfuser) or `PYTORCH_JIT_USE_NNC_NOT_NVFUSER=1` (for nnc)
* replace MODEL_NAME with either `mobilenet_v2` or `mobilenet_v3_large`
Interestingly, if you use the log_extract tool it shows good perf for all of the graphs. Repro:
```
$ PYTORCH_JIT_LOG_LEVEL=">>log_extract" PYTORCH_JIT_ENABLE_NVFUSER=1 python [torchbench]/run.py -m jit -d cuda -t eval > logs.txt 2>&1
$ python [pytorch]/scripts/jit/log_extract.py logs.txt --nnc-static --nnc-dynamic --nvfuser
```
^ all the individual fusion groups show improvement, but the model as a whole gets worse.
Not quite sure why, but here are some possibilities:
* Could be some correctness issue, e.g. returning the wrong memory format from some fusion group (which would impact the overall model performance but possibly not the individual fusion group performance)
* Could be some sort of overhead (but why would that only happen in log_extract and not in the torchbench results?)
* Could be bad benchmarking either in log_extract or in torchbench.
### Versions
viable/strict, torchbench on main, A100. Built with DEBUG=0.
cc @jjsjann123
| 0 |
5,483 | 79,244 |
[NVFuser] bad performance on pyhpc_equation_of_state
|
triaged, module: nvfuser
|
### π Describe the bug
pyhpc_equation_of_state is showing ~40% decrease in performance compared to NNC. Note that in this model, the entire model is fused (as far as I can tell...)
Model: https://github.com/pytorch/benchmark/blob/main/torchbenchmark/models/pyhpc_equation_of_state/eos_pytorch.py
Alternate repro:
```
python [pytorch dir]/scripts/jit/log_extract.py ir.txt --nnc-dynamic --nvfuser
```
download ir.txt from here: https://gist.github.com/davidberard98/5c89b07e101a332653a16da6da0d431d
### Versions
A100, viable/strict (`bfaa187fb0`), torchbench main branch (`9bb0bd7066`)
cc @jjsjann123
| 0 |
5,484 | 79,222 |
scripted fft Convolutions are faster than nn.Conv1d with large kernels
|
module: performance, module: cuda, module: convolution, triaged
|
### π Describe the bug
While running torchaudio code I noticed that some resampling operations are slower than they should be on the forward pass of the Resample transform. I tracked the slowness to the use of functional.conv1d(). Since the cost of an FFT convolution is dominated by one multiply per group plus a cat, its time is approximately constant with respect to kernel size. Should PyTorch handle the choice (serialized/direct conv vs. FFT conv) internally? Or should this be a torchaudio issue?
```python
from typing import Optional
import time
import torch
from torch import Tensor
import torch.nn.functional as F
# pylint: disable=no-member
# pylint: disable=suppressed-message
def fftconv1d(x: Tensor, weight: Tensor,
bias: Optional[Tensor] = None,
padding: int = 0,
groups: int = 1) -> Tensor:
"""
Args
x: Tensor (batch_size, in_channels, size)
weight: Tensor (out_channels, in_channels//groups, kernel_size)
bias: Tensor [None] out_channels
padding int [0]
groups int [1] in_channels, out _channels must be divisible by groups
# stride and dilation = 1
adapted from https://towardsdatascience.com/fourier-convolutions-in-pytorch-4cbd23c70005
faster for large ones
"""
assert x.ndim == 3, "x expected shape: (N, C, L)"
assert weight.ndim == 3, "weight expected shape: (out_channels, in_channels//groups, kernel_size)"
_out, _in, _ = weight.shape
if bias is not None:
assert bias.ndim==1 and len(bias) == _out, "bias vector sized as out_channels reqd"
assert not x.shape[1]%groups, f"in_channels must be mod groups {x.shape[1], groups}"
assert not _out%groups, f"out_channels must be mod groups {_out, groups}"
assert x.shape[1] == groups*_in, f"Given groups={groups} and weight {tuple(weight.shape)}, \
expected input {tuple(x.shape)} to have {groups*_in} channels"
out = F.pad(x, [padding, padding])
_pad = out.shape[-1] - weight.shape[-1]
x_rfft = torch.fft.rfftn(out, dim=-1)
w_rfft = torch.fft.rfftn(F.pad(weight, (0, _pad)), dim=-1)
w_rfft.imag *= -1
if groups == 1:
x_rfft = torch.einsum("ab..., cb... -> ac...", x_rfft, w_rfft)
else:
_o = _out//groups
x_rfft = torch.cat([torch.einsum("ab..., cb... -> ac...",
x_rfft[:, _in*g:_in*(g+1)],
w_rfft[_o*g:_o*(g+1)])
for g in range(groups)], dim=1)
out = torch.fft.irfftn(x_rfft, dim=-1)[..., :_pad + 1].contiguous()
if bias is not None:
out = out + bias.view(1, -1, 1)
return out
def _testconv(cuda=True, grad=True, pad=None, out_channels=4, in_channels=2,
batch_size= 20, size = 4096, ksize = 1000, groups=1):
if pad is None:
pad = ksize//2
signal = torch.randn(batch_size, in_channels, size)
if grad:
signal.requires_grad = True
kernel = torch.randn(out_channels, in_channels//groups, ksize)
bias = torch.randn(out_channels)
print(f"\n signal: {tuple(signal.shape)}, kernel: {tuple(kernel.shape)}")
if cuda:
signal = signal.to(device="cuda")
kernel = kernel.to(device="cuda")
bias = bias.to(device="cuda")
_start = time.time()
y0 = F.conv1d(signal, kernel, bias=bias, padding=pad, groups=groups)
if cuda:
torch.cuda.synchronize()
_fconv = time.time()
y2 = fftconv1d(signal, kernel, bias=bias, padding=pad, groups=groups)
if cuda:
torch.cuda.synchronize()
_fftconv = time.time()
_test = f'test: cuda:{cuda}, grad:{grad}, pad{pad}, out:{out_channels}, in{in_channels}, groups{groups}'
print(_test)
_nntime = 1000*(_fconv - _start)
_fftime = 1000*(_fftconv - _fconv)
if _nntime < _fftime:
_nn="\t\t\tnn.Conv1d is faster"
_ff =""
elif _fftime < _nntime:
_nn = ""
_ff = "\t\t\tFFT faster"
print(f" nn.Conv1d() time {1000*(_nntime):.1f} ms {_nn}")
print(f" fftconv1d time {1000*(_fftime):.1f} ms {_ff}")
assert torch.allclose(y0, y2, rtol=1e-3, atol=1e-3), _test
def test_conv_opt():
cuda = [True, False]
grad = [True, False]
padding = [0, None, 100]
groups = [1,2]
out_channels = [4,2]
in_channels = [2,8]
batch_size = 20
size = [4096, 14400]
ksize = [9, 1000]
for p in padding:
for r in grad:
for c in cuda:
for g in groups:
for i in in_channels:
for o in out_channels:
for k in ksize:
for s in size:
_testconv(cuda=c, grad=r, pad=p, out_channels=o, groups=g,
in_channels=i, batch_size=batch_size, size=s, ksize=k)
```
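For illustration, here is a minimal sketch of the kind of internal dispatch the question is asking about, reusing the `fftconv1d` defined above. The kernel-size threshold is made up and would have to come from benchmarking, and this ignores stride/dilation, which the FFT path above does not support:
```python
import torch.nn.functional as F

def conv1d_auto(x, weight, bias=None, padding=0, groups=1, fft_threshold=128):
    # hypothetical heuristic: use the FFT path only for large kernels,
    # where its roughly size-independent cost beats the direct convolution
    if weight.shape[-1] >= fft_threshold:
        return fftconv1d(x, weight, bias=bias, padding=padding, groups=groups)
    return F.conv1d(x, weight, bias=bias, padding=padding, groups=groups)
```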
### Versions
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.27
Python version: 3.9.5 (default, May 18 2021, 19:34:48) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.55
GPU models and configuration: GPU 0: NVIDIA TITAN RTX
Nvidia driver version: 510.39.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] denoising-diffusion-pytorch==0.7.1.1
[pip3] numpy==1.20.3
[pip3] pytorch-fid==0.2.0
[pip3] pytorch-lightning==1.4.0
[pip3] pytorch3d==0.2.0
[pip3] torch==1.11.0
[pip3] torch-ema==0.2
[pip3] torch-fidelity==0.3.0
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.4.1
[pip3] torchnmf==0.3.5.dev0
[pip3] torchvision==0.12.0
[conda] _pytorch_select 0.1 cpu_0
[conda] blas 2.114 mkl conda-forge
[conda] blas-devel 3.9.0 14_linux64_mkl conda-forge
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] denoising-diffusion-pytorch 0.7.1.1 pypi_0 pypi
[conda] libblas 3.9.0 14_linux64_mkl conda-forge
[conda] libcblas 3.9.0 14_linux64_mkl conda-forge
[conda] liblapack 3.9.0 14_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 14_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-devel 2022.0.1 h66538d2_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.19.3 pypi_0 pypi
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-fid 0.2.0 pypi_0 pypi
[conda] pytorch-lightning 1.4.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.2.0 pypi_0 pypi
[conda] torch-ema 0.2 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchmetrics 0.4.1 pypi_0 pypi
[conda] torchnmf 0.3.5.dev0 pypi_0 pypi
[conda] torchvision 0.12.0 py39_cu113 pytorch
cc @VitalyFedyunin @ngimel
| 1 |
5,485 | 79,208 |
[ONNX] Enable more operators to support data propagation
|
module: onnx, triaged, onnx-triaged
|
### π The feature, motivation and pitch
Continue the work in https://github.com/pytorch/pytorch/pull/75307. That PR enables the Shape and Gather ops to use ONNX's data propagation. There are other supported operators from ONNX ([the list](https://github.com/pytorch/pytorch/pull/75307)). To cover more use cases for shape inference, we should enable more operators in the PyTorch-ONNX exporter to support data propagation.
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,486 | 79,205 |
out-of-place functional optimizers: functional optimizers may not be composite compliant
|
module: optimizer, triaged, needs research, module: __torch_dispatch__, tensor subclass
|
## The Problem
Claim: today, the functional optimizers API is not "Composite Compliant". In other words, the implementation uses in-place operations that do not interact well with Tensor Subclasses.
For example, consider the implementation of [sgd](https://github.com/pytorch/pytorch/blob/e99c8ec3c267228c9ff9fefbfb0235dd41698948/torch/optim/sgd.py#L166-L179)
```
def sgd(params, grads, momentum_buffers: "aka, the state"):
pass
```
Let's assume that some of the momentum buffers are Tensor subclasses, but none of params and grads are Tensor subclasses. Then we would run into a situation where we are performing `Tensor.mul_(TensorSubclass)` ([see here](https://github.com/pytorch/pytorch/blob/e99c8ec3c267228c9ff9fefbfb0235dd41698948/torch/optim/sgd.py#L233))
The "Tensor-Subclass-ness" of the buffers would be lost.
## Concrete examples
- Maybe the user has (params, grad), but wants to test the effect of a different set of momentum_buffers. So they'll create a BatchedTensor for the momentum_buffers, and given (params, grad, momentum_buffers), what the user really wants is to obtain a BatchedTensor that represents the updates to apply to the params. Then, the user will manually apply the updates to the params out-of-place (instead of in-place!) to avoid an error
```
params = Tensor(...)
grads = Tensor(...)
momentum_buffers = BatchedTensor(...)
updates = sgd(params, grads, momentum_buffers)
# The following operation doesn't work, because params is a Tensor but momentum_buffers is a BatchedTensor
params.add_(updates)
# So the user's workaround, and our recommended approach, is the following:
new_params = params + updates
```
## Pitch
- Introduce a version of functional optimizers that just returns the (Tensor) updates to be applied to the parameters (a rough sketch follows below). This gives our users more flexibility (they can perform the update in-place themselves, or decide to do it out-of-place).
- Fix the internals of functional optimizers to ensure that if any of the inputs are Tensor subclasses, the output (the updates) propagate the Tensor-Subclass-ness.
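A minimal sketch of what such an out-of-place variant could look like (illustrative only, not the real torch.optim API; hyperparameters trimmed to lr/momentum):
```python
from typing import List, Optional
from torch import Tensor

def sgd_updates(grads: List[Tensor],
                momentum_buffers: List[Optional[Tensor]],
                *, lr: float, momentum: float = 0.0) -> List[Tensor]:
    """Returns update tensors instead of mutating params in place, so the
    Tensor-subclass-ness of any input (e.g. BatchedTensor buffers) propagates."""
    updates = []
    for g, buf in zip(grads, momentum_buffers):
        # out-of-place: buf * momentum + g instead of buf.mul_(momentum).add_(g)
        d_p = g if (momentum == 0 or buf is None) else buf * momentum + g
        updates.append(-lr * d_p)
    return updates

# The caller chooses how to apply the updates:
# new_params = [p + u for p, u in zip(params, updates)]   # out-of-place
# or: for p, u in zip(params, updates): p.add_(u)         # in-place
```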
cc @vincentqb @jbschlosser @albanD @Chillee @ezyang @zou3519 @samdow
| 0 |
5,487 | 79,202 |
[bug] Device dispatcher can choose CPU path for CUDA tensors.
|
module: build, triaged, module: dispatch, module: codegen
|
### π Describe the bug
It is possible to confuse the device dispatcher into choosing the wrong path when device dispatches are specified on a single line in `native_functions.yaml`.
To reproduce checkout the code from https://github.com/pytorch/pytorch/pull/79201 at https://github.com/pytorch/pytorch/pull/79201/commits/d03cfcfa406a75a5bcd11e0f6adeaa461b338e07.
Then
```python
In [1]: import torch
In [2]: ig = torch.rand(3, 3, device='cuda')
In [3]: ic = ig.to('cpu')
In [4]: res = torch.foo(ic, ic, ic)
CPU kernel
In [5]: res = torch.foo(ig, ig, ig)
CPU kernel
Segmentation fault (core dumped)
```
When dispatch keys are on separate lines, as in https://github.com/pytorch/pytorch/pull/79201/commits/9caf9267ecc5e56444cd1ad380a783b30159ee22, everything is fine.
### Versions
```
PyTorch version: 1.13.0a0+git75a01c2
Is debug build: False
CUDA used to build PyTorch: 11.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (GCC) 10.3.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-104-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.4.120
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2060
GPU 1: NVIDIA GeForce RTX 2060
Nvidia driver version: 465.19.01
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-10.1.243/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.4.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] mypy==0.812
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.13.0a0+gitd03cfcf
[conda] magma-cuda112 2.5.2 1 pytorch
[conda] mkl 2022.0.1 h8d4b97c_803 conda-forge
[conda] mkl-include 2022.0.1 h8d4b97c_803 conda-forge
[conda] numpy 1.22.3 pypi_0 pypi
[conda] torch 1.13.0a0+gitd03cfcf dev_0 <develop>
```
cc @malfet @seemethere @ezyang @bhosmer @bdhirsh
| 5 |
5,488 | 79,197 |
[feature request] Support dataclass derivations of nn.Module
|
module: nn, triaged
|
### π The feature, motivation and pitch
Discussed in:
- https://github.com/pytorch/pytorch/issues/72901#issuecomment-1042408766
- https://github.com/pytorch/pytorch/issues/72901#issuecomment-1046011310
- https://github.com/pytorch/pytorch/issues/72901#issuecomment-1046014180
Use case: eliminating boilerplate module constructors such as in https://huggingface.co/blog/annotated-diffusion#position-embeddings . This happens a lot in simple composed modules. They could benefit from auto-generated constructors; a rough sketch of the desired usage follows below.
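For concreteness, here is a sketch of the kind of usage being requested, written with today's `dataclasses` as a workaround. This is hedged: it relies on `nn.Module.__setattr__` tolerating plain fields being set before `super().__init__()`, and on `eq=False` so the generated `__eq__`/`__hash__` do not break module hashing; it is not an officially supported pattern.
```python
from dataclasses import dataclass
import torch
import torch.nn as nn

@dataclass(eq=False)  # eq=False keeps nn.Module's identity-based hashing
class Block(nn.Module):
    dim: int
    hidden: int = 128

    def __post_init__(self):
        super().__init__()  # must run before any submodules are assigned
        self.proj = nn.Linear(self.dim, self.hidden)

    def forward(self, x):
        return self.proj(x)

block = Block(dim=64)            # auto-generated constructor, no boilerplate __init__
out = block(torch.randn(2, 64))
```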
### Alternatives
Could nn.Module-derived classes support auto-generated constructors (that would probably mean adjusting the default nn.Module constructor to allow `**kwargs`) without marking the class as a dataclass (i.e., allow setting fields by passing constructor kwargs)? Would IDEs support hints for declared fields (akin to dataclasses)? (AFAIK one can already declare and type fields in the class definition.)
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 7 |
5,489 | 79,195 |
[bug] fill_, masked_fill_ : fill ops allow lossy downcasting of fill value
|
module: bc-breaking, triaged, topic: bc breaking, module: primTorch
|
### π Describe the bug
```python
>>> import torch
>>> t = torch.ones(3, dtype=torch.int32)
>>> t.fill_(3.14)
tensor([3, 3, 3], dtype=torch.int32)
>>> t.masked_fill_(t > 0, 3.14)
tensor([3, 3, 3], dtype=torch.int32)
# binary ops and other ops correctly error out
>>> t.add_(3.14)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: result type Float can't be cast to the desired output type Int
```
cc: @mruberry
### Versions
master
cc @ezyang @gchanan @mruberry @Lezcano @peterbell10 @ngimel
| 0 |
5,490 | 79,421 |
Mismatch in clang toolchain lead to binary incompatibilities on M1 between torch and torchvision
|
oncall: binaries, module: ci, triaged, module: macos, module: arm
|
### π Describe the bug
The M1 unit-tests [fail](https://github.com/pytorch/vision/runs/6806004576?check_suite_focus=true) with the following error:
```
self = <OpOverloadPacket(op='torchvision.roi_align')>
args = (tensor([[[[ 8.2470e+00, 1.3783e+01, 1.3401e+01, ..., 2.4769e+00,
2.6093e+00, 6.0664e+00],
...68.9164],
[ 0.0000, 175.5915, 200.8642, 382.0600, 212.2028]],
dtype=torch.float64), 0.25, 7, 7, 2, ...)
kwargs = {}
def __call__(self, *args, **kwargs):
# overloading __call__ to ensure torch.ops.foo.bar()
# is still callable from JIT
# We save the function ptr as the `op` attribute on
# OpOverloadPacket to access it here.
> return self._op(*args, **kwargs or {})
E RuntimeError: torchvision::roi_align is not yet supported with named tensors. Please drop names via `tensor = tensor.rename(None)`, call the op with an unnamed tensor, and set names on the result of the operation.
```
~~The last time the tests run successfully was on the 6th of June at commit d4a03fc02d0566ec97341046de58160370a35bd2. Unfortunately the days after that our CI broke for unrelated issues, so I guess it could be a commit commits from Core between the 6th and 9th of June.~~
EDIT:
https://github.com/pytorch/vision/runs/6797676717?check_suite_focus=true is passing (June 8, 6:43 PM GMT+2), ee6f6ec214aca19d7dfb2f24bffd26e4e083216b => it is passing on stable pytorch 1.11 (see https://github.com/pytorch/vision/runs/6797676717?check_suite_focus=true#step:3:88), `--pre` was added later: https://github.com/pytorch/vision/commit/e1f46d42f2acd1039e9a459316bc3daa92d3c983
### Versions
Latest main branch.
cc @ezyang @seemethere @malfet @pytorch/pytorch-dev-infra @albanD
| 18 |
5,491 | 79,191 |
Triangular solve fails on batches of matrices of size > (*, 524280)
|
module: cuda, triaged, module: linear algebra
|
### π Describe the bug
An error, "CUBLAS_STATUS_EXECUTION_FAILED when calling 'cublasStrsmBatched'", is triggered when calculating the log probabilities of a MultivariateNormal distribution on GPU and the number of data samples is larger than 524280.
Code to reproduce the problem:
```
import torch
device = torch.device("cuda")
# device = torch.device("cpu")
dtype = torch.float32
mean = torch.tensor([0.0, 0.0], dtype=dtype, device=device)
sd = torch.diag_embed(torch.tensor([1.0, 1.0], dtype=dtype, device=device))
distribution = torch.distributions.MultivariateNormal(mean, sd)
data1 = torch.randn([524280,2], dtype=dtype, device=device)
logprob = distribution.log_prob(data1)
data2 = torch.randn([524281,2], dtype=dtype, device=device)
logprob = distribution.log_prob(data2)
```
Result:
Calculating the log_prob of data1 runs without problem. Calculating the log_prob of data2 produces the following error:
```
Traceback (most recent call last):
File ~/project/test_distributions.py:23 in <module>
logprob = distribution.log_prob(data2)
File ~/anaconda3/lib/python3.9/site-packages/torch/distributions/multivariate_normal.py:208 in log_prob
M = _batch_mahalanobis(self._unbroadcasted_scale_tril, diff)
File ~/anaconda3/lib/python3.9/site-packages/torch/distributions/multivariate_normal.py:57 in _batch_mahalanobis
M_swap = torch.linalg.solve_triangular(flat_L, flat_x_swap, upper=False).pow(2).sum(-2) # shape = b x c
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasStrsmBatched( handle, side, uplo, trans, diag, m, n, alpha, A, lda, B, ldb, batchCount)`
```
The code also runs without problems if the device is switched to CPU.
### Versions
The error can be reproduced on the following two systems.
System 1:
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.20.0
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.55
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
Nvidia driver version: 510.39.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_2
[conda] numpy-base 1.21.5 py39hf524024_2
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchtext 0.12.0 py39 pytorch
[conda] torchvision 0.12.0 py39_cu113 pytorch
```
System 2:
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.19.6
Libc version: N/A
Python version: 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 SUPER
Nvidia driver version: 510.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py39h2bbff1b_0
[conda] mkl_fft 1.3.1 py39h277e83a_0
[conda] mkl_random 1.2.2 py39hf11a4ad_0
[conda] numpy 1.21.5 py39h7a0a035_2
[conda] numpy-base 1.21.5 py39hca35cd5_2
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchvision 0.12.0 py39_cu113 pytorch
```
cc @ezyang @gchanan @zou3519 @fritzo @neerajprad @alicanb @nikitaved @ngimel @jianyuh @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 10 |
5,492 | 79,177 |
_make_elementwise_unary_reference and other function factories in torch._refs don't set __name__ correctly
|
triaged, module: primTorch
|
### π Describe the bug
```
>>> import torch._refs
>>> torch._refs.abs.__name__
'_ref'
```
This is problematic for FX because FX uses the `__name__` to recompile the function back into Python, and cannot do so if it looks like this; it will silently miscompile the code.
functools.wraps does not help as there is no "real" function to wrap.
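A generic illustration of the fix direction (not the actual `_refs` internals, whose names are assumptions here): a function factory can set `__name__`/`__qualname__` explicitly on the closure it builds, since there is no underlying function to `functools.wraps` over:
```python
import math

def make_unary_ref(prim, name):
    def _ref(a):
        return prim(a)
    # give the generated function a stable identity so FX can round-trip it
    _ref.__name__ = name        # e.g. "abs" instead of "_ref"
    _ref.__qualname__ = name
    return _ref

my_abs = make_unary_ref(math.fabs, "abs")
assert my_abs.__name__ == "abs"
```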
### Versions
master
cc @ezyang @mruberry @ngimel
| 0 |
5,493 | 79,171 |
DistributedDataParallel `static_graph=True` fails to handle unused parameters
|
oncall: distributed, triaged, module: ddp
|
### π Describe the bug
DistributedDataParallel fails to properly handle unused parameters.
```python
import os
os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'DETAIL'
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn import Module
from torch.nn.parallel.distributed import DistributedDataParallel
class M(Module):
def __init__(self):
super().__init__()
self.a = nn.Conv3d(1, 1, 1)
self.b = nn.Conv3d(1, 1, 1)
def forward(self, x):
return self.a(x), self.b(x)
if __name__ == '__main__':
os.environ.update(
MASTER_ADDR='127.0.0.1',
MASTER_PORT=str(12345),
WORLD_SIZE=str(1),
RANK=str(0),
)
dist.init_process_group('nccl')
m = M().cuda()
ddp = DistributedDataParallel(
m,
static_graph=True,
find_unused_parameters=False # default
)
x = torch.zeros(1, 1, 1, 1, 1, device='cuda')
for i in range(5):
print(f'iter: {i}')
a, b = ddp(x)
l = a.sum()
l.backward()
```
Output
```
iter: 0
iter: 1
iter: 2
Traceback (most recent call last):
File <...>, in <module>
a, b = ddp(x)
File "<...>/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "<...>/conda/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 947, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your training graph has changed in this iteration, e.g., one parameter is used in first iteration, but then got unused in the second iteration. this is not compatible with static_graph set to True.
Parameters which did not receive grad for rank 0: b.bias, b.weight
Parameter indices which did not receive grad for rank 0: 2 3
```
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.17
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: A100-SXM4-40GB
GPU 1: A100-SXM4-40GB
GPU 2: A100-SXM4-40GB
GPU 3: A100-SXM4-40GB
GPU 4: A100-SXM4-40GB
GPU 5: A100-SXM4-40GB
GPU 6: A100-SXM4-40GB
GPU 7: A100-SXM4-40GB
Nvidia driver version: 460.73.01
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.4.0
/usr/lib64/libcudnn_adv_infer.so.8.4.0
/usr/lib64/libcudnn_adv_train.so.8.4.0
/usr/lib64/libcudnn_cnn_infer.so.8.4.0
/usr/lib64/libcudnn_cnn_train.so.8.4.0
/usr/lib64/libcudnn_ops_infer.so.8.4.0
/usr/lib64/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-ignite==0.4.9
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] blas 2.114 mkl conda-forge
[conda] blas-devel 3.9.0 14_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] ignite 0.4.9 py_0 pytorch
[conda] libblas 3.9.0 14_linux64_mkl conda-forge
[conda] libcblas 3.9.0 14_linux64_mkl conda-forge
[conda] liblapack 3.9.0 14_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 14_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h06a4308_117
[conda] mkl-devel 2022.0.1 h66538d2_117
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] numpy 1.22.3 py39hc58783e_2 conda-forge
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.12.0 py39_cu113 pytorch
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 5 |
5,494 | 79,164 |
PyTorch/XLA's DDP XLABackend is broken by upstream change
|
oncall: distributed, triaged, module: xla, module: ddp
|
### π PyTorch/XLA's DDP hangs
In https://github.com/pytorch/xla/commit/841e0d3cab9555ce6ca03de5e0498c1310a01ba2 @hjm-aws introduced the `XLABackend`, which enabled DDP for PyTorch/XLA. However, this is currently broken because subclassing of the `Work` class (PyTorch/XLA has a `WorkXla`) is broken. The result is that when `wait` is called, `WorkXla::wait` is not correctly triggered, so the default wait is used and the process hangs.
A more detailed XLA issue can be found at https://github.com/pytorch/xla/issues/3625
### Versions
PyTorch: nightly
PyTorch/XLA: nightly
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @bdhirsh
| 5 |
5,495 | 79,145 |
Redundant info are saved when using torch.save to save part of torch.tensor
|
module: serialization, triaged
|
### π Describe the bug
I am trying to save part of a very large torch.tensor by indexing it. But the saved partial tensor is always the same size as the original one.
To reproduce:
```python
import torch
a = torch.ones([100,100])
torch.save(a, 'full.pt') #file full.pt is 40.7kB (size is reported by nautilus on Ubuntu 20.04)
torch.save(a[0], 'partial0.pt') #file partial0.pt is 40.7kB
torch.save(a[:1], 'partial1.pt') #file partial1.pt is 40.7kB
b = torch.ones([1,100])
torch.save(b, 'partial2.pt') #file partial2.pt is 1.1kB
```
But when I load the files ```partial0.pt``` and ```partial1.pt``` back into tensors, they have the size that I want.
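For anyone hitting this: the slices above are views, and `torch.save` serializes the whole underlying storage of a view; cloning the slice first materializes only the selected data (this is a workaround, not a fix for the behavior itself):
```python
import torch

a = torch.ones([100, 100])
# a[0] / a[:1] are views into a's 100x100 storage, so the full storage gets saved.
# clone() creates a fresh storage containing only the slice:
torch.save(a[0].clone(), 'partial0_small.pt')   # roughly the size of partial2.pt
torch.save(a[:1].clone(), 'partial1_small.pt')
```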
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numexpr 2.8.0 mkl_py310h22e654f_0 conda-forge
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.11.0 py3.10_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
cc @mruberry
| 3 |
5,496 | 79,138 |
[AUTOGRAD] support implicit reductions with SymInts in autograd.
|
triaged, lazy
| null | 0 |
5,497 | 79,137 |
[AUTOGRAD] codegen to use sym_sizes for ops w/ symint overloads in derivative formulas
|
triaged, lazy
|
Change codegen logic so it can transparently generate `sym_sizes` in place of `sizes` for ops that support `SymInt[]` overload.
| 0 |
5,498 | 79,130 |
torchvision.models.mobilenetv3 can't save pre-trained model to custom dir?
|
triaged, module: vision
|
### π The doc issue
```python
import torchvision.models as models
mobile_v3 = models.mobilenet_v3_small(pretrained=True)
```
Does this method load the pre-trained weights into the default cache dir?
In the function `_mobilenet_v3_model`, the lines
```
state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
model.load_state_dict(state_dict)
```
download the weights. If I want to save the pre-trained model to a custom dir, how do I set the 'model_dir' argument of `load_state_dict_from_url()`?
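For reference, a hedged sketch of the existing knobs that redirect the download location (`torch.hub.set_dir` and `load_state_dict_from_url(model_dir=...)` are real APIs; `models.mobilenetv3.model_urls` is the dict the quoted code reads from and may not exist in newer torchvision releases that use weights enums):
```python
import torch
import torchvision.models as models

# Option 1: point the hub cache somewhere else before loading;
# downloads then land under <that dir>/checkpoints
torch.hub.set_dir("/data/my_model_cache")
mobile_v3 = models.mobilenet_v3_small(pretrained=True)

# Option 2: download the state dict explicitly with model_dir, then load it yourself
url = models.mobilenetv3.model_urls["mobilenet_v3_small"]
state_dict = torch.hub.load_state_dict_from_url(url, model_dir="/data/my_model_cache")
model = models.mobilenet_v3_small()
model.load_state_dict(state_dict)
```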
### Suggest a potential alternative/fix
_No response_
cc @fmassa @vfdev-5 @pmeier
| 0 |
5,499 | 79,120 |
Hide or fuse TupleConstruct / TupleUnpack from tensorboard graph
|
triaged, module: tensorboard
|
### π The feature, motivation and pitch
Tensorboard graphs generated by SummaryWriter.add_graph are cluttered with successions of TupleUnpack and TupleConstruct when chaining modules that take and return several arguments. Tuple packing and unpacking is a basic and transparent programming pattern in Python; I cannot think of a situation where we want to explicitly view it in the graph.
It would be more legible if forward arguments and tuple return values were not grouped into a single graph node.
### Alternatives
_No response_
### Additional context
Example:
```py
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter
class Layer(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(64, 64)
def forward(self, a, b):
return self.linear(a), self.linear(b)
class Model(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = Layer()
self.layer2 = Layer()
def forward(self, x):
a, b = self.layer1(x, x)
c, d = self.layer2(a, b)
return a + b + c + d
model = Model()
summary = SummaryWriter(".")
summary.add_graph(model, input_to_model=(torch.rand([1, 64])))
```

| 0 |
5,500 | 79,117 |
[ONNX] `.squeeze(1)` on the B X T (not B X 1 X T) tensor causes export error in masking
|
module: onnx, triaged, onnx-triaged
|
### π Describe the bug
Here is a repro script I made with a simple model:
```python
import torch
import torch.nn as nn
import onnxruntime
device = 'cuda'
use_mask = True
model_path = 'simple_model.onnx'
### Mask calculation
def compute_output_lengths(x, lengths_fraction=None):
if lengths_fraction is None:
return torch.full(x.shape[:1], x.shape[-1], device=x.device, dtype=torch.long)
return (lengths_fraction * x.shape[-1]).ceil().long()
def temporal_mask(x, lengths):
return (torch.arange(x.shape[-1], device=x.device, dtype=lengths.dtype).unsqueeze(0) <
lengths.unsqueeze(1)).view(x.shape[:1] + (1,) * (len(x.shape) - 2) + x.shape[-1:])
### Simple model for export
class SimpleNetwork(nn.Module):
def __init__(self, use_mask=False):
super().__init__()
self.use_mask = use_mask
self.conv1 = nn.Conv1d(in_channels=1,
out_channels=3,
kernel_size=3,
stride=1,
padding=0,
dilation=1,
groups=1
)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv1d(in_channels=3,
out_channels=1,
kernel_size=3,
stride=1,
padding=0,
dilation=1,
groups=1)
def forward(self, x, xlen):
# x - [B; T], xlen - [B]
x = x.unsqueeze(1)
# x - [B; 1; T]
if self.use_mask:
mask = temporal_mask(x, compute_output_lengths(x, xlen))
x = x * mask
x = self.conv1(x)
x = self.relu(x)
x = self.conv2(x)
return x
### Random tensor to export
onnx_sample_batch_size = 16
onnx_sample_time = 1024
waveform_input = torch.rand(onnx_sample_batch_size, onnx_sample_time, device=device)
xlen = torch.rand(onnx_sample_batch_size, device=device)
### Create model
model = SimpleNetwork(use_mask=use_mask).to(device)
result_torch = model(waveform_input, xlen)
### Export model
torch.onnx.export(
model, (waveform_input, xlen,),
model_path,
verbose=False,
opset_version=12,
export_params=True,
do_constant_folding=True,
input_names=['x', 'xlen'],
output_names=['logits'],
dynamic_axes=dict(x={
0: 'B', 1: 'T'
}, logits={
0: 'B', 2: 't'
}, xlen={
0: 'B'
})
)
session = onnxruntime.InferenceSession(model_path, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
onnx_input = dict(x=waveform_input.cpu().numpy())
if use_mask:
onnx_input['xlen'] = xlen.cpu().numpy()
result_onnx = session.run(None, onnx_input)[0]
result_onnx = torch.as_tensor(result_onnx, device=device)
### Correctness check
assert torch.allclose(result_torch.cpu(), result_onnx.cpu(), rtol=1e-02, atol=1e-03)
### Doing the same but with different shape
validate_batch_size = 32
validate_sample_time = 512
validate_waveform_input = torch.rand(validate_batch_size, validate_sample_time, device=device)
validate_xlen = torch.rand(validate_batch_size, device=device)
validate_result_torch = model(validate_waveform_input, validate_xlen)
validate_onnx_input = dict(x=validate_waveform_input.cpu().numpy())
if use_mask:
validate_onnx_input['xlen'] = validate_xlen.cpu().numpy()
validate_result_onnx = session.run(None, validate_onnx_input)[0]
validate_result_onnx = torch.as_tensor(validate_result_onnx, device=device)
assert torch.allclose(validate_result_torch.cpu(), validate_result_onnx.cpu(), rtol=1e-02, atol=1e-03)
```
The script above converts and runs correctly as written.
But if I add `.squeeze(1)` in `forward` like this:
```python
def forward(self, x, xlen):
# x - [B; T]
# x.squeeze(1) - [B; T]
# x.squeeze(1).unsqueeze(1) - [B; 1; T]
x = x.squeeze(1).unsqueeze(1)
if self.use_mask:
mask = temporal_mask(x, compute_output_lengths(x, xlen))
x = x * mask
x = self.conv1(x)
x = self.relu(x)
x = self.conv2(x)
return x
```
Then I get the following exception:
```
[E:onnxruntime:, sequential_executor.cc:346 Execute] Non-zero status code returned while running Reshape node. Name:'Reshape_16' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{32,1024}, requested shape:{16,1,1024}
```
So `.squeeze(1)` on a tensor that does not have the B X 1 X ... structure causes static shapes to be baked into the ONNX model: the requested shape `{16, 1, 1024}` in the error is exactly the shape of the export-time sample, so any other batch size or sequence length fails at that Reshape.
I also found that the script works with the squeeze if `use_mask=False`, so this bug is a combination of the mask calculation and the squeeze.
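A possible workaround, only a suggestion and not verified against this exporter version: guard the squeeze so a plain `[B, T]` input never goes through the no-op `squeeze(1)` at trace time. `maybe_flatten` below is a hypothetical helper name, not part of the original model:
```python
import torch

def maybe_flatten(x):
    # Hypothetical helper (not from the original report): only squeeze when the
    # second dimension is actually a singleton, so a 2-D [B, T] input is traced
    # without the no-op squeeze that leads to the static Reshape.
    if x.dim() == 3 and x.size(1) == 1:  # [B, 1, T] -> [B, T]
        x = x.squeeze(1)
    return x.unsqueeze(1)                # [B, T] -> [B, 1, T]

print(maybe_flatten(torch.rand(16, 1024)).shape)     # torch.Size([16, 1, 1024])
print(maybe_flatten(torch.rand(16, 1, 1024)).shape)  # torch.Size([16, 1, 1024])
```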
Here is an image from Netron. The problem is in the `Mul`, where `B = 1024`:
<img width="248" alt="image" src="https://user-images.githubusercontent.com/21354805/172596256-89ee7656-d7f2-4c59-895f-58244e768d05.png">
Another issue about it: https://github.com/pytorch/pytorch/issues/36796
### Versions
```
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 470.82.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.1
[pip3] torch-audiomentations==0.9.1
[pip3] torch-pitch-shift==1.2.0
[pip3] torch-stft==0.1.4
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.10.1 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-audiomentations 0.9.1 pypi_0 pypi
[conda] torch-pitch-shift 1.2.0 pypi_0 pypi
[conda] torch-stft 0.1.4 pypi_0 pypi
[conda] torchaudio 0.10.1 py39_cu113 pytorch
[conda] torchvision 0.11.2 py39_cu113 pytorch
```
```
python -c 'import onnxruntime; print(onnxruntime.__version__)'
```
`1.10.0`
| 0 |