Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
1,101 | 108,745 |
[PT2.0] [.Compile] [Dynamic] Pytorch FX/JIT graph's inputs/nodes ordering is changed when FX recompiles even though the graph operations are same
|
triaged, oncall: pt2, module: dynamic shapes, module: dynamo
|
### 🐛 Describe the bug
Consider a graph with 5 input tensors, with input shapes as below for iteration 1 and iteration 2:
Iteration 1: input shapes::{0: [], 1: [3], 2: [3], 3: [8, 3, 24, 24, 24], 4: [3], 5: [3]}
Iteration 2: input shapes::{0: [3], 1: [], 2: [3], 3: [8, 3, 24, 24, 71], 4: [3], 5: [3]}
With the above input shapes, FX will recompile even when dynamic=True. The FX graph input for tensor 3 will look like:
iteration 1: [s0, 3, s2, s2, s2]
iteration 2: [s0, 3, s2, s2, s3]
The FX graphs generated in iteration 1 and iteration 2 differ in that the graph input order and the graph node order change (seemingly at random) on recompilation. This produces different JIT graphs and prevents fully leveraging the dynamic shape feature.
For example, the images below show the FX graphs of iteration 1 and iteration 2:

The graph inputs are the same but their order is changed; similarly, the graph nodes and operations are the same but their order is changed randomly.
The JIT graphs generated from the FX graphs reflect the same differences.
The test case below reproduces the issue:
[test_batchnorm3d.zip](https://github.com/pytorch/pytorch/files/12546771/test_batchnorm3d.zip)
```python
import torch
import torch.nn as nn
aten = torch.ops.aten
from torch._functorch.aot_autograd import aot_module_simplified
import numpy as np
import random

def my_aot_compiler(gm, example_inputs):
    def my_compiler(gm, example_inputs):
        # print(gm.code)
        # print(gm.graph)
        import copy
        from torch._functorch.compile_utils import strip_overloads
        from torch._functorch.compilers import _disable_jit_autocast
        module = copy.deepcopy(gm)
        with _disable_jit_autocast():
            strip_overloads(module)
            for node in module.graph.nodes:
                new_kwargs = {}
                for k, v in node.kwargs.items():
                    if isinstance(v, torch.device):
                        v = v.type
                    new_kwargs[k] = v
                node.kwargs = new_kwargs
            module.graph.lint()
            module.recompile()
        from collections import OrderedDict
        module._forward_hooks = OrderedDict()
        module._forward_pre_hooks = OrderedDict()
        print("Module is \n", module.print_readable())
        f = torch.jit.script(module)
        print("Converted Graph is \n", f.graph)
        return gm.forward

    # Invoke AOTAutograd
    return aot_module_simplified(
        gm,
        example_inputs,
        fw_compiler=my_compiler
    )

class OpWrapperModule(torch.nn.Module):
    def __init__(self, op):
        super().__init__()
        self.op = op

    def forward(self, inputs):
        result = self.op(inputs)
        return result

def test_dynamic_shape_topk_cpu():
    print("Starting the test.................")
    sizes = [
        (8, 3, 24, 24, 24),
        (8, 3, 24, 24, 71),
        (8, 3, 24, 24, 80),
    ]
    op = nn.BatchNorm3d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
    op = op.to('cpu')
    model_cpu = OpWrapperModule(op).to(dtype=torch.float32).to('cpu')
    compiled_function_training = torch.compile(model_cpu, backend=my_aot_compiler, dynamic=True)
    for s in sizes:
        t = torch.empty(size=s, dtype=torch.float32).uniform_(0, 1).to('cpu').requires_grad_()
        t = t.detach().requires_grad_()
        result_compile_train = compiled_function_training(t)
        grad_in = torch.empty(size=s, dtype=torch.float32).uniform_(0, 1).to('cpu')
        result_compile_train.backward(grad_in)

test_dynamic_shape_topk_cpu()
```
This issue is observed even in the dynamic=False case.
Please help analyze this issue and suggest whether there is any graph normalization method in PyTorch.
### Versions
Collecting environment information...
PyTorch version: 2.0.1a0+git783db92
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 14.0.5 (ssh://gerrit:29418/tpc_llvm10 02ad77b7f83f1afda4228414a6e3c917bb653668)
CMake version: version 3.27.2
Libc version: glibc-2.31
Python version: 3.8.16 (default, Jan 17 2023, 23:13:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
1,102 | 108,744 |
switch more test cases to use MultithreadTestCase
|
good first issue, triaged, module: dtensor
|
MultithreadTestCase lets us use fewer resources by spawning threads instead of processes, which could make distributed tests run faster. The following test files still do not use MultithreadTestCase, and we should switch them over:
[ ] https://github.com/pytorch/pytorch/blob/main/test/distributed/_tensor/test_math_ops.py
[ ] https://github.com/pytorch/pytorch/blob/main/test/distributed/_tensor/test_matrix_ops.py
[ ] https://github.com/pytorch/pytorch/blob/main/test/distributed/_tensor/test_tensor_ops.py
[ ] https://github.com/pytorch/pytorch/blob/main/test/distributed/_tensor/test_embedding_ops.py
Example test case that already uses multithreaded test case, see https://github.com/pytorch/pytorch/blob/main/test/distributed/_tensor/test_pointwise_ops.py#L75
One just needs to extend `DTensorOpTestBase` in the test files above; it should be relatively simple (see the sketch below).
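As a rough illustration only, here is a minimal sketch of what the conversion could look like. The import path, the `world_size` override, and the `build_device_mesh()` helper are assumptions based on the linked pointwise-ops example, not the exact code to land.
```python
# Minimal sketch (assumptions marked): run a DTensor test with threads instead
# of spawned processes by extending the multithreaded base class.
import torch
from torch.distributed._tensor import DTensor, Shard, distribute_tensor
# Assumed import path; check how test_pointwise_ops.py imports it.
from torch.testing._internal.distributed._tensor.common_dtensor import DTensorOpTestBase
from torch.testing._internal.common_utils import run_tests


class DistMathOpsTest(DTensorOpTestBase):
    @property
    def world_size(self) -> int:
        return 4  # threads, not processes

    def test_sharded_sum_returns_dtensor(self):
        mesh = self.build_device_mesh()  # assumed helper on the base class
        dt = distribute_tensor(torch.randn(8, 4), mesh, [Shard(0)])
        self.assertIsInstance(dt.sum(), DTensor)


if __name__ == "__main__":
    run_tests()
```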
| 1 |
1,103 | 108,743 |
DISABLED test_complex_half_reference_testing_fft_hfft2_cuda_complex32 (__main__.TestCommonCUDA)
|
triaged, module: flaky-tests, skipped, module: primTorch
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_complex_half_reference_testing_fft_hfft2_cuda_complex32&suite=TestCommonCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16564758724).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_complex_half_reference_testing_fft_hfft2_cuda_complex32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_ops.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
1,104 | 108,742 |
[dtensor] enable tensor metadata check across ranks when run_check=True
|
triaged, module: dtensor
|
https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/api.py#L106
When calling `DTensor.from_local`, by default we need to sync the tensor metadata across ranks to ensure the user passed in the same type of tensor on every rank. This is a safety check that can be turned off with `run_check=False`. We should implement this check by using `dist.all_gather_object` to gather the tensor metadata and verify that every rank has the same metadata.
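A minimal sketch of the idea, purely as an illustration (the function name and the exact metadata fields are assumptions, not the final API):
```python
# Hedged sketch: gather (shape, dtype, stride) from every rank and compare
# against rank 0; raise if any rank passed in a different kind of tensor.
import torch
import torch.distributed as dist


def check_tensor_metadata_across_ranks(local_tensor: torch.Tensor) -> None:
    meta = (tuple(local_tensor.shape), local_tensor.dtype, local_tensor.stride())
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, meta)
    if any(m != gathered[0] for m in gathered):
        raise ValueError(
            f"DTensor.from_local(run_check=True): tensor metadata mismatch across ranks: {gathered}"
        )
```
The real implementation would presumably gather over the mesh's process group rather than the default group.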
| 0 |
1,105 | 108,741 |
test commit 2
|
topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D49043673](https://our.internmc.facebook.com/intern/diff/D49043673/)
| 1 |
1,106 | 108,740 |
test commit 1
|
topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D49043675](https://our.internmc.facebook.com/intern/diff/D49043675/)
| 1 |
1,107 | 108,739 |
DDP Elastic "master_addr" resolution error in environment variables.
|
oncall: distributed, triaged, module: elastic
|
### 🐛 Describe the bug
I found a problem with MASTER_ADDR in DDP. When the dynamic rendezvous backend (c10d) is used, the MASTER_ADDR that is obtained resolves to the machine name (hostname). As a result, the connection cannot be established from the environment variables during **init_process_group**.
The specific test cases are as follows:
Python script in use **"test_master_addr.py"**:
```python
import os

def main():
    print('RANK = ', int(os.environ['RANK']))
    print('MASTER_ADDR = ', os.environ["MASTER_ADDR"])
    print('MASTER_PORT = ', os.environ["MASTER_PORT"])

if __name__ == "__main__":
    main()
```
Here are three test scenarios:
(1) Using the rendezvous static backend
The startup command is as follows:
```bash
torchrun --nproc_per_node=1 --nnodes=1 test_master_addr.py
```
and the result is:
```bash
RANK = 0
MASTER_ADDR = 127.0.0.1
MASTER_PORT = 29500
```
(2) Using the rendezvous static backend, specifying the master address and port.
The startup command is as follows:
```bash
torchrun --nproc_per_node=1 --nnodes=1 --node_rank=0 --master_addr="51.38.95.133" --master_port=12345 test_master_addr.py
```
and the result is:
```bash
RANK = 0
MASTER_ADDR = 51.38.95.133
MASTER_PORT = 12345
```
(3) Using the rendezvous dynamic backend (c10d), specifying the rdzv_endpoint.
The startup command is as follows:
```bash
torchrun --nproc_per_node=1 --nnodes=1 --rdzv-backend=c10d --rdzv_id=1 --rdzv_endpoint=51.38.95.133:12345 test_master_addr.py
```
and the result is:
```bash
RANK = 0
MASTER_ADDR = centos7-133
MASTER_PORT = 32905
```
When I fixed this problem, all three test scenarios were able to get MASTER_ADDR correctly from the environment variable.
The modification is as follows:
```python
# torch/distributed/elastic/agent/server/api.py:

def _get_fq_hostname() -> str:
    return socket.getfqdn(socket.gethostname())

# add this function
def _get_fq_host_ip() -> str:
    return socket.gethostbyname(socket.gethostname())

# ......

@staticmethod
def _set_master_addr_port(
    store: Store,
    master_addr: Optional[str],
    master_port: Optional[int],
    local_addr: Optional[str],
):
    if master_port is None:
        sock = _get_socket_with_port()
        with closing(sock):
            master_port = sock.getsockname()[1]
    if master_addr is None:
        # If user specified the address for the local node, use it as the master addr if not exist
        if local_addr:
            master_addr = local_addr
        else:
            # master_addr = _get_fq_hostname()
            master_addr = _get_fq_host_ip()  # Use _get_fq_host_ip instead of _get_fq_hostname
    store.set("MASTER_ADDR", master_addr.encode(encoding="UTF-8"))
    store.set("MASTER_PORT", str(master_port).encode(encoding="UTF-8"))
```
Now retest:
```bash
#The result of (1):
RANK = 0
MASTER_ADDR = 127.0.0.1
MASTER_PORT = 29500
#The result of (2):
RANK = 0
MASTER_ADDR = 51.38.95.133
MASTER_PORT = 12345
#The result of (3):
RANK = 0
MASTER_ADDR = 51.38.95.133
MASTER_PORT = 36499
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git6c0bba3
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.5.0
Clang version: Could not collect
CMake version: version 3.20.5
Libc version: glibc-2.17
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz
Stepping: 7
CPU MHz: 1000.000
CPU max MHz: 2401.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.1.0a0+git6c0bba3
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.1.0a0+git6c0bba3 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @dzhulgakov
| 1 |
1,108 | 108,734 |
[Decomposition] sum
|
fb-exported, ciflow/inductor
|
Summary:
Decomp already exists; include it in core_aten_decompositions
https://www.internalfb.com/code/fbsource/[e69bf00ff87a55c9a30bd7905881661ff05fa211]/xplat/caffe2/torch/_refs/__init__.py?lines=2228
Differential Revision: D49042180
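A hedged way to sanity-check the change locally, assuming the public helper in `torch._decomp` (the helper is not part of this diff):
```python
# Assumption: core_aten_decompositions() returns the registry this diff extends,
# so aten.sum overloads should appear in it once the change lands.
from torch._decomp import core_aten_decompositions

decomps = core_aten_decompositions()
print([op for op in decomps if "sum" in str(op)])
```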
| 3 |
1,109 | 108,731 |
[fx][split][testing] Add testing for #107981
|
open source, topic: not user facing
|
- Follow-up to #107981, adding testing for metadata copying in placeholder nodes within the `split_by_tags` utility
- Validation included in the test from #107248, since both tests are relevant to the same aspect of the utility
| 1 |
1,110 | 108,727 |
[Decomposition] rand_like
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Move decomp from _inductor and include it in core_aten_decompositions
Differential Revision: D48940164
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 6 |
1,111 | 108,726 |
[Decomposition] lift_fresh
|
fb-exported, ciflow/inductor
|
Summary:
Decomp already exists. Include it in core_aten_decompositions
https://www.internalfb.com/code/fbsource/[b15dc20207e33abb49621994196f2ee063724d2a]/fbcode/caffe2/torch/_decomp/decompositions.py?lines=1777
Differential Revision: D48871716
| 4 |
1,112 | 108,722 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
1,113 | 108,716 |
Support benchmark fusion for TemplateKernel
|
triaged, module: inductor, module: dynamo
|
### 🚀 The feature, motivation and pitch
A follow-up to https://github.com/pytorch/pytorch/pull/108193
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 0 |
1,114 | 108,711 |
[TGIF Inplace] [xlv2][1/n] Expose a couple APIs from inline_container that will be used for chunk read
|
caffe2, fb-exported
|
Summary: Expose APIs needed for chunk read. Tested together with stacked diff.
Test Plan:
unit test
```
buck test //caffe2/caffe2/serialize:inline_container_test -- --run-disabled --print-passing-details
```
Integration test is done together with stacked diff.
Differential Revision: D48544397
// Temporarily adding to unblock shipIt:
@diff-train-skip-merge
| 29 |
1,115 | 108,698 |
doctr_reco_predictor: ERROR:common:call_function groupby in skip_files Builtin groupby
|
triaged, oncall: pt2, module: dynamo, module: export
|
This is because we don't support itertools.groupby.
Repro
~~~
python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only doctr_reco_predictor
~~~
1) Come up with a repro
2) Search for the itertools.accumulate implementation in Dynamo. Investigate whether groupby can be implemented similarly.
3) Add your repro as a test case (see the sketch below). Use fullgraph=True to ensure there is no graph break.
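A minimal repro sketch along the lines of step 3 (my assumption of what the test could look like, not a confirmed repro from the benchmark):
```python
# Hedged sketch: compiling a function that uses itertools.groupby; with
# fullgraph=True Dynamo is expected to raise Unsupported because groupby
# currently falls into skipfiles.
import itertools
import torch


@torch.compile(backend="eager", fullgraph=True)
def f(x):
    keys = [k for k, _ in itertools.groupby([1, 1, 2, 2, 3])]
    return x + len(keys)


f(torch.ones(3))  # expected: Unsupported error mentioning itertools.groupby
```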
cc @ezyang @msaroufim @wconstab @bdhirsh @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 0 |
1,116 | 108,696 |
[DeprecatedAPI][iOS 2][stringWithCString:] - xplat caffe2
|
fb-exported, release notes: jit
|
Summary: https://developer.apple.com/documentation/foundation/nsstring/1497289-stringwithcstring
Test Plan: builds
Differential Revision: D48692610
| 5 |
1,117 | 108,692 |
Adding Maximal Update Parametrization (µP) to torch.nn.init
|
module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
Using muP as the default parameter initializer as it represents a unique point in the parametrization space.
### Alternatives
Status Quo
as suggested in https://github.com/pytorch/pytorch/issues/102477#issuecomment-1574125554
### Additional context
https://github.com/microsoft/mup
https://arxiv.org/abs/2011.14522
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
1,118 | 108,690 |
Move negative index checking to common.py - Fix issue 97365
|
open source, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108690
Fixes https://github.com/pytorch/pytorch/issues/97365
| 2 |
1,119 | 108,677 |
Simplify symbolize choice
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108677
Previously we were using a weak symbol, but in certain internal builds it didn't get overridden correctly. This just puts the conditional compilation right in the unwind.cpp file.
Differential Revision: [D49020023](https://our.internmc.facebook.com/intern/diff/D49020023/)
| 1 |
1,120 | 108,676 |
RuntimeError when calling conv_transpose2d with groups
|
needs reproduction, module: nn, triaged
|
### 🐛 Describe the bug
When calling conv_transpose2d with groups > 1, I run into the following error:
```
r = torch.nn.functional.conv_transpose2d(a, wts, stride=stride, padding=padding, dilation=dilation,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: could not construct a memory descriptor using a format tag
```
Full code snippet:
```
import torch
device = "cpu"
a = torch.randn((9, 42, 1, 5), device=device, dtype=torch.float32)
wts = torch.randn((42, 1, 1, 1), device=device, dtype=torch.float32)
stride = [1, 1]
padding = [0, 0]
dilation = [4, 3]
groups = 2
r = torch.nn.functional.conv_transpose2d(a, wts, stride=stride, padding=padding, dilation=dilation, groups=groups)
print(r)
```
### Versions
```
PyTorch version: 2.0.0.post101
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:34:09) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
...
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.0.post101
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.25.2 pypi_0 pypi
[conda] pytorch 2.0.0 cpu_mkl_py311had667d7_101 conda-forge
[conda] torch 2.0.0.post101 pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
1,121 | 108,671 |
avg_pool3d_backward fails on meta with grad_input parameter
|
triaged, module: meta tensors
|
### 🐛 Describe the bug
Code to reproduce
```python
import torch
#Working with cpu
t = torch.randn((2, 2, 2, 2, 2))
self = torch.randn((2, 2, 4, 4, 4))
torch.ops.aten.avg_pool3d_backward(t, self, 2, 2, 0, False, False, None, grad_input=torch.ones_like(self))
#Failing with meta
t = torch.randn((2, 2, 2, 2, 2), device='meta')
self = torch.randn((2, 2, 4, 4, 4), device='meta')
torch.ops.aten.avg_pool3d_backward(t, self, 2, 2, 0, False, False, None, grad_input=torch.ones_like(self))
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/localdata/nicholasw/pytorch/torch/_ops.py", line 692, in __call__
return self._op(*args, **kwargs or {})
File "/localdata/nicholasw/pytorch/torch/_prims_common/wrappers.py", line 229, in _fn
result = fn(*args, **kwargs)
TypeError: meta_avg_pool3d_backward() got an unexpected keyword argument 'grad_input'
```
### Versions
Collecting environment information...
PyTorch version: 2.2.0a0+git208fd1c
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
BogoMIPS: 4491.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.2.0a0+git738106c
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.2.0a0+git738106c dev_0 <develop>
cc @ezyang @eellison @bdhirsh
| 0 |
1,122 | 108,670 |
torch.jit.script produces incorrect gradients
|
oncall: jit
|
### 🐛 Describe the bug
I made a custom layer-norm implementation for Conv1d `(B, C, L)` ordering, and the gradients appear to be wrong under `torch.jit.script`.
Here is a reproducible example:
```python
import torch
import torch.nn as nn

def ln_nb(x, gamma):
    mean = x.mean(dim=1, keepdim=True)  # (B, C, L) -> (B, 1, L)
    var = x.var(dim=1, keepdim=True, unbiased=False)  # (B, C, L) -> (B, 1, L)
    x = (x - mean) / torch.sqrt(var + 1e-5)  # (B, C, L)
    return gamma * x

ln_nb_scripted = torch.jit.script(ln_nb)

class LN(nn.Module):
    def __init__(self, num_channels, jit=False):
        """ Bias-free layer-norm with (B, C, L) ordering. """
        super(LN, self).__init__()
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1))
        self.jit = jit

    def forward(self, x):
        if self.jit:
            return ln_nb_scripted(x, self.gamma)
        return ln_nb(x, self.gamma)

# create dummy-input + grad from previous layer
B, C, L = 8, 512, 1024
x = torch.randn(B, C, L, requires_grad=True).cuda()
prev_grads = torch.randn(B, C, L).cuda()

# note: error doesn't appear until 2nd or 3rd JIT-ed function call.
for i in range(4):
    # version 1. (no JIT)
    ln1 = LN(C, jit=False).cuda()
    y1 = ln1(x)
    grad1, = torch.autograd.grad(
        y1,
        x,
        prev_grads
    )

    # version 2. (JIT)
    ln2 = LN(C, jit=True).cuda()
    y2 = ln2(x)
    grad2, = torch.autograd.grad(
        y2,
        x,
        prev_grads
    )

    grad_diff = torch.abs(grad1 - grad2)
    y_diff = torch.abs(y1 - y2)
    print(f"Iteration {i}")
    print(f"Output diffs. Mean: {y_diff.mean().item()}, Max: {y_diff.max().item()}")
    print(f"Grad diffs. Mean: {grad_diff.mean().item()}, Max: {grad_diff.max().item()}\n")
```
As an interesting aside, the bug goes away if my model is wrapped in `torch.compile`, even with `backend="eager"`. I discovered this because my model would _only_ train when wrapped with `torch.compile` (unless I disable `torch.jit.script`, in which case it always trains).
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-1030-gcp-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.86.10
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.46
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 24 MiB (24 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy==0.991
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.6
[pip3] torch==2.0.1+cu118
[pip3] torch-stoi==0.1.2
[pip3] torchaudio==2.0.2+cu118
[pip3] torchfile==0.1.0
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torch-stoi 0.1.2 pypi_0 pypi
[conda] torchaudio 2.0.2+cu118 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 3 |
1,123 | 108,669 |
Lower MEMORY_LIMIT_MAX_JOBS to avoid oom during conda builds
|
topic: not user facing, ciflow/binaries_conda
|
Trying to address nightly failure: https://github.com/pytorch/pytorch/issues/108607
| 3 |
1,124 | 108,666 |
[TEST] Try larger instances for conda builds
|
topic: not user facing, ciflow/binaries_conda
|
Fixes #ISSUE_NUMBER
| 1 |
1,125 | 108,665 |
INTERNAL ASSERT FAILED in `shape_type_inference.cpp`
|
oncall: jit
|
### 🐛 Describe the bug
I made a slight modification to a PyTorch model that previously exported to ONNX without issue (specifically, instead of using the passed in input `mesh_edge_indices`, the modified model loads them from a file - I assume this is converted into a constant tensor when tracing).
I first `torch.jit.trace`d and saved the model to produce `postcvpr.pt` (put inside a .zip folder and [attached](https://github.com/pytorch/pytorch/files/12540209/postcvpr.zip)). However, when trying to export this new model to ONNX I hit an `INTERNAL ASSERT FAILED`.
For reference, the unmodified model (which does not trigger the assertion) is attached [here](https://github.com/pytorch/pytorch/files/12541034/postcvpr_old.zip).
Python code used to export to ONNX:
```python
from typing import Sequence

import torch
from torch.jit._trace import TopLevelTracedModule

traced_model: TopLevelTracedModule = torch.jit.load("postcvpr.pt")

INPUT_NAMES = [
    "cloth_features",
    "active_mask",
    "obstacle_features",
    "mesh_edge_indices",
    "mesh_edge_features",
    "coarse0_edge_indices",
    "coarse0_edge_features",
    "coarse1_edge_indices",
    "coarse1_edge_features",
    "coarse2_edge_indices",
    "coarse2_edge_features",
    "inverse_world_edge_indices",
    "inverse_world_edge_features",
    "direct_world_edge_indices",
    "direct_world_edge_features",
]
INPUT_SHAPES = [
    (torch.float32, (4424, 24)),
    (torch.bool, (5678, 1)),
    (torch.float32, (5678, 24)),
    (torch.int64, (2, 26272)),
    (torch.float32, (26272, 12)),
    (torch.int64, (2, 10568)),
    (torch.float32, (10568, 12)),
    (torch.int64, (2, 6340)),
    (torch.float32, (6340, 12)),
    (torch.int64, (2, 2900)),
    (torch.float32, (2900, 12)),
    (torch.int64, (2, 782)),
    (torch.float32, (782, 9)),
    (torch.int64, (2, 782)),
    (torch.float32, (782, 9)),
]

# Adapted from https://github.com/onnx/onnx/issues/654
def export_onnx_model(
    model,
    input_shapes: list[tuple[torch.dtype, Sequence[int]]],
    onnx_path,
    input_names=None,
    output_names=None,
    dynamic_axes=None,
):
    # Remove the old onnx model - make sure we are actually creating a new one, lol
    # os.remove(onnx_path)
    inputs = create_inputs(input_shapes)
    # Test just running the model with generated sample input
    model(*inputs)
    # Export
    torch.onnx.export(
        model,
        inputs,
        onnx_path,
        input_names=input_names,
        output_names=output_names,
        dynamic_axes=dynamic_axes,
        opset_version=16,
    )

def create_inputs(input_shapes: list[tuple[torch.dtype, Sequence[int]]]):
    # Minimum of either number of cloth verts or number of body verts
    min_verts = min(input_shapes[0][1][0], input_shapes[1][1][0])
    return tuple(
        map(
            lambda s: (
                ty := s[0],
                shape := s[1],
                torch.rand(shape, device="cuda") < 0.9
                if ty == torch.bool
                else torch.randint(0, min_verts, shape, dtype=ty, device="cuda")
                if ty == torch.int64
                else torch.rand(shape, dtype=ty, device="cuda"),
            )[-1],
            input_shapes,
        )
    )

export_onnx_model(
    traced_model,
    INPUT_SHAPES,
    "./postcvpr.onnx",
    INPUT_NAMES,
    ["output"],
)
```
Full Output:
```
/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py:825: UserWarning: no signature found for <torch.ScriptMethod object at 0x7fb6d36f8860>, skipping _decide_input_format
warnings.warn(f"{e}, skipping _decide_input_format")
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/home/nathaniel/dev/HOOD_2/HOOD/./Test.py", line 90, in <module>
export_onnx_model(
File "/home/nathaniel/dev/HOOD_2/HOOD/./Test.py", line 61, in export_onnx_model
torch.onnx.export(
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/symbolic_helper.py", line 392, in wrapper
return fn(g, *args, **kwargs)
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/symbolic_opset9.py", line 945, in expand_as
return g.op("Expand", self, shape)
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py", line 86, in op
return _add_op(self, opname, *raw_args, outputs=outputs, **kwargs)
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py", line 245, in _add_op
node = _create_node(
File "/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/_internal/jit_utils.py", line 306, in _create_node
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
RuntimeError: input_shape_value == reshape_value || input_shape_value == 1 || reshape_value == 1 INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/onnx/shape_type_inference.cpp":580, please report a bug to PyTorch. ONNX Expand input shape constraint not satisfied.
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.1.0-8ubuntu1~22.04) 13.1.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35
Python version: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro T2000 with Max-Q Design
Nvidia driver version: 531.41
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-10855M CPU @ 2.80GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
BogoMIPS: 5615.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch3d==0.7.4
[pip3] torch==2.0.1
[pip3] torch-cluster==1.6.1
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1
[pip3] torch-sparse==0.6.17
[pip3] torchaudio==2.0.2
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] mxnet-mkl 1.6.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.7.4 py310_cu117_pyt201 pytorch3d
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-cluster 1.6.1 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu117 pytorch
[conda] triton 2.0.0 pypi_0 pypi
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,126 | 108,657 |
Add aten::trunc to core IR
|
fb-exported
|
Summary:
floor and ceil are core.
Libraries like XNNPACK and MKL support it.
A decomposition would take multiple ops (possibly `torch.floor(a) * (a > 0) + torch.ceil(a) * (a < 0)`).
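As a quick illustration (not part of the diff), the suggested multi-op decomposition can be checked against `aten::trunc` elementwise:
```python
# Sanity check of the decomposition mentioned above: trunc(a) equals floor(a)
# for positive a and ceil(a) for negative a, and is 0 at a == 0.
import torch

def trunc_decomp(a: torch.Tensor) -> torch.Tensor:
    return torch.floor(a) * (a > 0) + torch.ceil(a) * (a < 0)

x = torch.tensor([-2.7, -0.5, 0.0, 0.5, 2.7])
assert torch.equal(trunc_decomp(x), torch.trunc(x))
```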
Test Plan: CI
Differential Revision: D48989685
| 3 |
1,127 | 108,651 |
libtorch: runtime error when iterating batch of dataloader
|
module: cpp, triaged
|
### 🐛 Describe the bug
Hello, I'm trying to implement training code with libtorch (the PyTorch C++ API), but a runtime error is always raised when iterating over the batches of the dataloader:
``train.cpp``
```cpp
auto train_data = UDGDataset(traindata, V, L, C).map(Stack<UDGExample<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> >());
auto valid_data = UDGDataset(validdata, V, L, C).map(Stack<UDGExample<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> >());
auto test_data = UDGDataset(testdata, V, L, C).map(Stack<UDGExample<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> >());
auto trainloader = torch::data::make_data_loader<torch::data::samplers::SequentialSampler>(std::move(train_data), batch_size);
auto validloader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(std::move(valid_data), batch_size);
auto testloader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(std::move(test_data), batch_size);
auto net = GnnModelUndirected(dim, C, hidden_size, L);
torch::optim::SGD* optimizer = new torch::optim::SGD(net->parameters(), lr = lr);
torch::nn::CrossEntropyLoss criterion;
float old_acc = 0.0;
float old_vacc = 0.0;
for (int i = 0; i < epoch; ++i) {
int num = 0;
int correct_num = 0;
int total_err1 = 0;
int total_err0 = 0;
net->train();
for (auto& batch : *trainloader) {
optimizer->zero_grad();
auto out = net->forward(emb, batch.one_hot_s, batch.one_hot_t, batch.one_hot_p);
auto loss = criterion(out, batch.y);
loss.backward();
optimizer->step();
auto res = torch::argmax(out, 1);
int batchsize = res.size(0);
num += batchsize;
int cnt = 0;
int err1 = 0;
int err0 = 0;
for (int j = 0; j < batchsize; ++j) {
if (res[j].item<int>() == batch.y[j].item<int>()) {
cnt++;
}
else if (res[j].item<int>() == 1) {
err1++;
}
else {
err0++;
}
}
logging << "accuracy: " << std::fixed << std::setprecision(4) << (float)cnt / batchsize << std::endl;
logging << "err1: " << err1 << std::endl;
logging << "err0: " << err0 << std::endl;
correct_num += cnt;
total_err1 += err1;
total_err0 += err0;
}
// ...et cetera...
```
``model.hpp``
```cpp
#pragma once
#include<torch/torch.h>
//using namespace torch::indexing;
struct GnnModelUndirectedImpl : torch::nn::Module {
GnnModelUndirectedImpl(int input_dim, int label_num, int hidden_size, int path_length):
gru(torch::nn::GRU(torch::nn::GRUOptions(label_num, hidden_size).num_layers(1).batch_first(true))),
linear(torch::nn::Linear(hidden_size, 2)),
relu(torch::nn::ReLU()),
softmax(torch::nn::Softmax(torch::nn::SoftmaxOptions(1))),
linear_vec(torch::nn::Linear(input_dim* path_length, hidden_size))
{
//auto gru = register_module("gru", torch::nn::GRU(torch::nn::GRUOptions(label_num, hidden_size).num_layers(1).batch_first(true)));
//auto linear = register_module("linear", torch::nn::Linear(hidden_size, 2));
//auto relu = register_module("relu", torch::nn::ReLU());
//auto softmax = register_module("softmax", torch::nn::Softmax(torch::nn::SoftmaxOptions(1)));
//auto linear_vec = register_module("linear_vec", torch::nn::Linear(input_dim * path_length, hidden_size));
}
torch::Tensor forward(torch::Tensor vec, torch::Tensor one_hot_s, torch::Tensor one_hot_t, torch::Tensor one_hot_p) {
auto vec1 = linear_vec(vec);
vec1 = relu(vec1);
auto s = torch::mm(one_hot_s, vec);
auto t = torch::mm(one_hot_t, vec);
auto p = relu(std::get<0>(gru(one_hot_p)));
p = p.index({ torch::indexing::Slice(torch::indexing::None, -1, torch::indexing::None) });
auto sp = s * p;
auto tp = t * p;
auto r = sp + tp;
auto out = linear(r);
out = softmax(out);
return out;
}
torch::nn::GRU gru;
torch::nn::Linear linear;
torch::nn::ReLU relu;
torch::nn::Softmax softmax;
torch::nn::Linear linear_vec;
};
TORCH_MODULE(GnnModelUndirected);
template <typename S = torch::Tensor, typename T = torch::Tensor, typename P = torch::Tensor, typename Y = torch::Tensor> struct UDGExample {
S one_hot_s;
T one_hot_t;
P one_hot_p;
Y y;
UDGExample() = default;
UDGExample(S s, T t, P p, Y y):one_hot_s(std::move(s)), one_hot_t(std::move(t)), one_hot_p(std::move(p)), y(std::move(y)){}
};
template <typename ExampleType = UDGExample<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> >
struct Stack : public torch::data::transforms::Collation<ExampleType> {
ExampleType apply_batch(std::vector<ExampleType> examples) override {
std::vector<torch::Tensor> one_hot_s, one_hot_t, one_hot_p, y;
one_hot_s.reserve(examples.size());
one_hot_t.reserve(examples.size());
one_hot_p.reserve(examples.size());
y.reserve(examples.size());
for (auto& example : examples) {
one_hot_s.push_back(std::move(example.one_hot_s));
one_hot_t.push_back(std::move(example.one_hot_t));
one_hot_p.push_back(std::move(example.one_hot_p));
y.push_back(std::move(example.y));
}
return { torch::stack(one_hot_s), torch::stack(one_hot_t), torch::stack(one_hot_p), torch::stack(y)};
}
};
class UDGDataset : public torch::data::Dataset<UDGDataset, UDGExample<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> > {
private:
std::vector<std::vector<int> > Data;
int N;
int V;
int L;
int C;
public:
UDGDataset(std::vector<std::vector<int> >& data, int v, int l, int c) {
Data = data;
N = data.size();
V = v;
L = l;
C = c;
}
UDGExample<> get(size_t i) override {
torch::Tensor one_hot_s = torch::zeros({ V });
torch::Tensor one_hot_t = torch::zeros({ V });
torch::Tensor one_hot_p = torch::zeros({ L, C });
torch::Tensor y = torch::zeros({ 1 }, torch::dtype(torch::kLong));
int s = Data[i][0];
int t = Data[i][1];
int l = Data[i][2];
one_hot_s[s] = 1;
one_hot_t[t] = 1;
for (int j = 0; j < l; ++j) {
one_hot_p[j][Data[i][3 + j]] = 1;
}
y[0] = Data[i][Data[i].size() - 1];
return { one_hot_s.clone(), one_hot_t.clone(), one_hot_p.clone(), y.clone() };
}
torch::optional<size_t> size() const override {
return N;
}
};
```
The error is raised in `.\libtorch\include\torch\csrc\api\include\torch\data\dataloader\stateless.h`, at `line 76`, where the assert fails:

The error information says that it is a c10::Error.
Should any further information be needed, please tell me.
Hoping for a solution. Thank you.
### Versions
From the PyTorch download page, I chose Preview (Nightly), Windows, LibTorch, C++/Java, CPU, and downloaded `https://download.pytorch.org/libtorch/nightly/cpu/libtorch-win-shared-with-deps-latest.zip`
cc @jbschlosser
| 0 |
1,128 | 108,650 |
Unsupported: inline in skipfiles: Logger.info
|
triaged, oncall: pt2, module: dynamo, module: graph breaks
|
### 🐛 Describe the bug
```
Unsupported: inline in skipfiles: Logger.info | info /usr/lib/python3.10/logging/__init__.py
from user code:
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py", line 762, in forward
logger.info("Forward upsample size to force interpolation output size.")
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
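No standalone repro was attached; below is a hedged sketch that should hit the same skipfiles path (the function body and the log message are placeholders, not the diffusers code):
```python
# Hedged sketch: a stdlib Logger.info call inside a compiled region is expected
# to trip the "inline in skipfiles: Logger.info" check when fullgraph=True.
import logging
import torch

logger = logging.getLogger(__name__)


@torch.compile(backend="eager", fullgraph=True)
def f(x):
    logger.info("Forward upsample size to force interpolation output size.")
    return x * 2


f(torch.ones(4))  # expected: Unsupported error about Logger.info in skipfiles
```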
### Versions
```
PyTorch version: 2.1.0.dev20230903+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-1012-gcp-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230903+cu121
[pip3] torchvision==0.16.0.dev20230903+cu121
[pip3] triton==2.1.0
[conda] Could not collect
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 0 |
1,129 | 108,865 |
Heap buffer overflow with `torch::load` on fuzzy data
|
oncall: jit
|
### 🐛 Describe the bug
Hi! I've been fuzzing the torchvision project with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz).
I've found a heap buffer overflow error at `pngwrite.c:842` in the libpng project.
I think the heap buffer overflow may occur because the buffer is allocated with a size different from the one specified in the field that describes the size of the tensor.
https://github.com/pytorch/vision/blob/90913fb47629de01e6369bd841fdec6c82604b48/torchvision/csrc/io/image/cpu/encode_png.cpp#L150-L155
**How to reproduce**
1. Build docker from [here](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/torchvision) and run the container:
```
sudo docker build -t oss-sydr-fuzz-torchvision .
sudo docker run --privileged --rm -v `pwd`:/fuzz -it oss-sydr-fuzz-torchvision /bin/bash
```
2. Run the target on this input: [encode-png-bof.txt](https://github.com/pytorch/vision/files/12536900/encode-png-bof.txt)
```
/encode_png_fuzz encode-png-bof.txt
```
3. You will see the following output:
```
=================================================================
==454==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60a000000080 at pc 0x0000005c4ee7 bp 0x7ffc786e2570 sp 0x7ffc786e1d40
READ of size 18 at 0x60a000000080 thread T0
#0 0x5c4ee6 in __asan_memcpy /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:22:3
pytorch/vision#1 0x13f10be5 in png_write_row /libpng-1.6.37/pngwrite.c:842:4
pytorch/vision#2 0x61fab8 in vision::image::encode_png(at::Tensor const&, long) /vision/torchvision/csrc/io/image/cpu/encode_png.cpp:155:5
pytorch/vision#3 0x604619 in LLVMFuzzerTestOneInput /vision/encode_png.cc:64:32
pytorch/vision#4 0x66b041 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
pytorch/vision#5 0x6544cc in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
pytorch/vision#6 0x65a61b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
pytorch/vision#7 0x654222 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
pytorch/vision#8 0x7fc4b71b2082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
pytorch/vision#9 0x542cdd in _start (/encode_png_fuzz+0x542cdd)
0x60a000000080 is located 0 bytes to the right of 64-byte region [0x60a000000040,0x60a000000080)
allocated by thread T0 here:
#0 0x5c6757 in __interceptor_posix_memalign /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_malloc_linux.cpp:145:3
pytorch/vision#1 0x12366e09 in c10::alloc_cpu(unsigned long) /pytorch/c10/core/impl/alloc_cpu.cpp:74:13
pytorch/vision#2 0x122e3c34 in c10::DefaultCPUAllocator::allocate(unsigned long) const /pytorch/c10/core/CPUAllocator.cpp:23:14
pytorch/vision#3 0x99c6f79 in caffe2::serialize::PyTorchStreamReader::getRecord(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/caffe2/serialize/inline_container.cc:314:48
pytorch/vision#4 0xddd909f in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const /pytorch/torch/csrc/jit/serialization/import_read.cpp:40:38
pytorch/vision#5 0xddd909f in c10::DataPtr std::__invoke_impl<c10::DataPtr, torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(std::__invoke_other, torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
pytorch/vision#6 0xddd8ee0 in std::enable_if<is_invocable_r_v<c10::DataPtr, torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>, c10::DataPtr>::type std::__invoke_r<c10::DataPtr, torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:113:9
pytorch/vision#7 0xddd8d50 in std::_Function_handler<c10::DataPtr (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>)::$_0>::_M_invoke(std::_Any_data const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:291:9
pytorch/vision#8 0xdebcd76 in std::function<c10::DataPtr (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>::operator()(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:622:14
pytorch/vision#9 0xdeb20dc in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:568:25
pytorch/vision#10 0xdeae437 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251:27
pytorch/vision#11 0xdeae0d2 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204:3
pytorch/vision#12 0xddd6de3 in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) /pytorch/torch/csrc/jit/serialization/import_read.cpp:53:20
pytorch/vision#13 0xdd732dd in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/import.cpp:184:10
pytorch/vision#14 0xdd69885 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize(c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:287:19
pytorch/vision#15 0xdd6c855 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:438:25
pytorch/vision#16 0xdd6c1c7 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:421:10
pytorch/vision#17 0xdd6dce4 in torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:503:10
pytorch/vision#18 0xf2d3f75 in torch::serialize::InputArchive::load_from(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>) /pytorch/torch/csrc/api/src/serialize/input-archive.cpp:97:13
pytorch/vision#19 0x60509c in void torch::load<at::Tensor, char*&>(at::Tensor&, char*&) /pytorch/torch/include/torch/csrc/api/include/torch/serialize.h:107:11
pytorch/vision#20 0x6036be in LLVMFuzzerTestOneInput /vision/encode_png.cc:38:5
pytorch/vision#21 0x66b041 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
pytorch/vision#22 0x6544cc in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
pytorch/vision#23 0x65a61b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
pytorch/vision#24 0x654222 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
pytorch/vision#25 0x7fc4b71b2082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
SUMMARY: AddressSanitizer: heap-buffer-overflow /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:22:3 in __asan_memcpy
Shadow bytes around the buggy address:
0x0c147fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c147fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c147fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c147fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c147fff8000: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
=>0x0c147fff8010:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c147fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c147fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c147fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c147fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c147fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==454==ABORTING
```
### Versions
torchvision version: 9d0a93eee90bf7c401b74ebf9c8be80346254f15
OS: Ubuntu 20.04
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 14 |
1,130 | 108,645 |
uninformative OOM error
|
module: cuda, triaged
|
### 🐛 Describe the bug
The following is an uninformative error for OOM issues. I am aware that my job raises this error due to memory issues on one of the machines; it runs fine on another.
``` File "compressor_train.py", line 133, in <module>
train(cfg)
File "compressor_train.py", line 126, in train
train_epoch(train_loader)
File "compressor_train.py", line 113, in train_epoch
loss.backward()
File "/users/shivanim/anaconda3/envs/vqv/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/users/shivanim/anaconda3/envs/vqv/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: GET was unable to find an engine to execute this computation
```
### Versions
```Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.17
Python version: 3.8.17 (default, Jul 5 2023, 21:04:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.95.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 8000
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 3523.706
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba rsb_ctxsw ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==1.6.0
[pip3] pytorchvideo==0.1.5
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==1.0.3
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch-lightning 1.6.0 pypi_0 pypi
[conda] pytorchvideo 0.1.5 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @ptrblck
| 2 |
1,131 | 108,642 |
torch.topk returned values and indices are reordered if sorted=False
|
triaged, module: sorting and selection
|
### 🐛 Describe the bug
With sorted=False, torch.topk returns the values and indices tensors in a reordered (arbitrary) order rather than in the order the values appear in the input.

### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.5 (Green Obsidian) (x86_64)
GCC version: (GCC) 10.2.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.28
Python version: 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 19:11:38) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-348.el8.0.2.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A40
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7552 48-Core Processor
Stepping: 0
CPU MHz: 2200.000
CPU max MHz: 2200.0000
CPU min MHz: 1500.0000
BogoMIPS: 4400.06
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.1
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1
[pip3] triton==2.0.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 4 |
1,132 | 108,640 |
torch.onnx.export does not trace all outputs for the HF BLOOM model
|
module: onnx, triaged
|
### 🐛 Describe the bug
When I try to export the HF BLOOM model using `torch.onnx.export`, only the first output is traced; the other outputs are ignored.
Reproduction:
```
import onnx
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
example_input = {
"input_ids": torch.tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]),
"use_cache": True,
"return_dict": False,
}
output = model(**example_input)
print(output) # the output is a Tuple[Tensor, Tuple[Tuple[Tensor, Tensor], ...], ...]
torch.onnx.export(
model,
example_input,
"bloom-560m.onnx",
)
onnx_model = onnx.load_model("bloom-560m.onnx")
print(onnx_model.graph.output) # only a single output, the rest of the outputs (cache) is ignored
```
In this scenario the output of the model is a nested tuple of dozens of tensors (logits + cache) and the cache is missing in the onnx model output. The same code works fine with other models such as Llama or BART.
Tested on both stable and nightly.
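Continuing from the repro above, a hypothetical sanity check (the `count_tensors` helper is not part of the original script) that compares the number of tensors returned by the eager model with the number of outputs recorded in the exported graph:
```python
import torch

def count_tensors(obj):
    # Recursively count tensors in a (possibly nested) tuple/list output.
    if isinstance(obj, torch.Tensor):
        return 1
    if isinstance(obj, (tuple, list)):
        return sum(count_tensors(o) for o in obj)
    return 0

print(count_tensors(output))         # logits + all past_key_value tensors
print(len(onnx_model.graph.output))  # reported to be 1 for BLOOM
```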
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230904+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7413 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3630.8101
CPU min MHz: 1500.0000
BogoMIPS: 5300.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230904+cu121
[pip3] torch-tensorrt==2.0.0.dev0
[pip3] torchaudio==2.2.0.dev20230905+cu121
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0.dev20230905+cu121
[pip3] triton==2.1.0+440fd1b
[conda] Could not collect
| 0 |
1,133 | 108,637 |
use reduced_precision_reduction flags in Triton matmul
|
triaged, open source, topic: not user facing, module: inductor, module: dynamo, ciflow/inductor
|
Fixes #108621
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 14 |
1,134 | 108,636 |
torch.compile operation benchmark result is poor
|
module: convolution, triaged, oncall: pt2, module: inductor, module: cpu inductor
|
### 🐛 Describe the bug
I am currently testing the optimization features of torchdynamo and comparing them with the optimization results of TVM. I have already completed the optimization evaluation of various operators in TVM and am now conducting evaluations for dynamo and openxla. However, my results show that torch 2.0 performs negative optimization (the compiled operator is slower than eager). I have tested the operators shown in the figure below.
<img width="336" alt="image" src="https://github.com/pytorch/pytorch/assets/62176674/98b040a3-6261-4d7e-9198-1f2548139ab3">
### Error logs
<img width="1307" alt="image" src="https://github.com/pytorch/pytorch/assets/62176674/7b9df29b-c788-427d-8631-bf7ecdd5f427">
### Minified repro
main.py
```python
import numpy
import time
from torchDynamo import *
# ------------------------------------------------------------------------------
# User Configurable Variables
# ------------------------------------------------------------------------------
dtype = "float32"
# ------------------------------------------------------------------------------
# Helper Function
# ------------------------------------------------------------------------------
def evaluator(s, inputs, num):
all_time = []
for i in range(num):
torch.cuda.synchronize()
start = time.time()
result = s(inputs)
torch.cuda.synchronize()
end = time.time()
elapsed_time = end - start
all_time.append(elapsed_time)
# Compute the average time
average_time = sum(all_time) / num
return average_time
def evaluate_operation(s, inputs, optimization, log):
"""Evaluate operation correctness and print the performance information.
Args:
s: The schedule to be built.
vars: The argument lists to the function.
target: The target and option of the compilation.
inputs: The input tensors.
standard: The standard result for correctness evaluation.
optimization: The name of the optimization.
log: The log list.
"""
mean_time = evaluator(s, inputs, 1)
log.append((optimization, mean_time))
def report_performance(log):
"""Convert the log into a performance table.
Args:
log: The log list.
"""
baseline = log[-1][1]
header = "Benchmark".ljust(20) + "\t" + "Time".rjust(
10) + "\t" + "SpeedUp".rjust(10)
split_line = "-" * 50
print(split_line)
print(header)
print(split_line)
for result in log:
formatted_time = "{:.2f}".format(result[1])
formatted_performance = "{:.2f}".format(baseline / result[1])
print("\033[32m%s\033[0m\t\033[33m%s\033[0m\t\033[34m%s\033[0m" %
(result[0].ljust(20), str(formatted_time + " ms").rjust(10),
str(formatted_performance).rjust(10)))
def main():
# ----------------------------------------------------------------------------
# Initialization and Baseline
# ----------------------------------------------------------------------------
# Initialize the log list.
log = []
# Generate random tensor for testing.
size = (512, 64, 3)
c, n, k, p, s = size[0], size[0], size[1], size[2], 1
oc, ic, n, k, p, s = size[0], size[0], size[1], size[2], 1, 1
data,weight, out = get_conv_data_torch(c, n, k, p, s)
# ----------------------------------------------------------------------------
# Register Benchmarks and Dump Report
# ----------------------------------------------------------------------------
# Register default schedule.
s_1 = conv_torch(data, out, k, p, s)
evaluate_operation(s_1,
inputs=data,
optimization="torch_conv_default",
log=log)
s_2 = conv_compiled(data, out, k, p, s)
evaluate_operation(s_2,
inputs=data,
optimization="torch_conv_dynamo",
log=log)
report_performance(log)
if __name__ == "__main__":
main()
```
torchDynamo.py
```python
import torch
import torch.nn as nn
import numpy as np
def conv_out_size(n, k, p, s):
"""Compute the output size by given input size n (width or height),
kernel size k, padding p, and stride s
Return output size (width or height)
"""
return (n - k + 2 * p)//s + 1
def get_conv_data(oc, ic, n, k, p=0, s=1, constructor=None):
"""Return random 3-D data tensor, 3-D kernel tenor and empty 3-D output
tensor with the shapes specified by input arguments.
oc, ic : output and input channels
n : input width and height
k : kernel width and height
p : padding size, default 0
s : stride, default 1
constructor : user-defined tensor constructor
"""
np.random.seed(0)
data = np.random.normal(size=(ic, n, n)).astype('float32')
weight = np.random.normal(size=(oc, ic, k, k)).astype('float32')
on = conv_out_size(n, k, p, s)
out = np.empty((oc, on, on), dtype='float32')
if constructor:
data, weight, out = (constructor(x) for x in [data, weight, out])
return data, weight, out
def conv_torch(data, out, k, p, s):
f = nn.Conv2d(data.shape[1], out.shape[1], kernel_size=k, stride=s, padding=p)
return f
def conv_compiled(data, out, k, p, s):
f = nn.Conv2d(data.shape[1], out.shape[1], kernel_size=k, stride=s, padding=p)
f_s = torch.compile(f)
return f_s
def get_conv_data_torch(c, n, k, p, s):
data, weight, out = get_conv_data(c, c, n, k, p, s,lambda x: torch.from_numpy(x))
data = data.unsqueeze(0) # Add a new leading dimension at dim 0
out = out.unsqueeze(0)
return data, weight, out
```
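A side note on methodology, as a hedged sketch: with `num=1` the timing above includes torch.compile's one-time compilation cost on the first measured call. A warmed-up evaluator (hypothetical helper, same structure as the original `evaluator`) would isolate steady-state time:
```python
import time
import torch

def evaluator_warm(fn, inputs, num=10, warmup=3):
    # Run a few untimed calls first so compilation/caching is excluded.
    for _ in range(warmup):
        fn(inputs)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(num):
        fn(inputs)
    torch.cuda.synchronize()
    return (time.time() - start) / num
```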
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+4136153
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-23-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
Nvidia driver version: 470.182.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3414.5500
CPU min MHz: 1500.0000
BogoMIPS: 4500.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+4136153
[pip3] torch-tensorrt==1.5.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 6 |
1,135 | 108,633 |
Back out "Faster gc_count update for CUDACachingAllocator"
|
fb-exported
|
Summary:
Original commit changeset: 1d04ae368fd8
Original Phabricator Diff: D48481557
Test Plan: The LLM inference service can encounter a segfault under load; it no longer does after backing out the diff.
Reviewed By: houseroad
Differential Revision: D49003404
| 3 |
1,136 | 108,627 |
autocast not consistent across different GPUs (A100 and RTX A6000)
|
triaged, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
I train and run inference with a classifier using autocast. The results differ across different GPUs (same .venv, code and data).
The result on the A100 is much better than on the RTX A6000.
Disabling autocast with `ctx = nullcontext()` on the RTX A6000 gives a result similar to the A100 with autocast.
I get no torch warnings on either machine.
```python
ctx = torch.amp.autocast(device_type='cuda', dtype=torch.bfloat16)
#training
with ctx:
logits, loss = classifier(X, Y)
#inference
with ctx:
logits, loss = classifier(X, None)
```
P.S. This has cost me one month of my business time.
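A hedged debugging sketch (these are real torch flags, but whether they explain the gap between the two GPUs is an assumption, not a conclusion):
```python
import torch

print(torch.cuda.get_device_name(0))
print(torch.cuda.is_bf16_supported())
# TF32 settings can change matmul/conv numerics on Ampere GPUs.
print(torch.backends.cuda.matmul.allow_tf32)
print(torch.backends.cudnn.allow_tf32)
```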
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.36
Python version: 3.11.5 (main, Aug 25 2023, 23:47:33) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 46%
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4200.03
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 8 MiB (32 instances)
L3 cache: 80 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 1 |
1,137 | 108,621 |
[inductor] Triton matmul templates should use reduced_precision_reduction flags
|
feature, good first issue, triaged, module: half, oncall: pt2, module: inductor, matrix multiplication
|
In eager mode [we have flags](https://pytorch.org/docs/stable/notes/cuda.html#reduced-precision-reduction-in-fp16-gemms) for using fp16 accumulators in matmuls.
Currently, our Triton matmul templates ignore this flag and always accumulate in float32. This will make them slower, so we may be leaving some perf on the table in max-autotune mode.
The implementation of this just involves updating the `acc_type` function to read the correct flags:
https://github.com/pytorch/pytorch/blob/c8e72a4a5c6398ad38d41ceee9775f8d4544225c/torch/_inductor/kernel/mm_common.py#L134-L137
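A rough sketch of the direction (hypothetical `triton_acc_type` helper, not the actual inductor code): the template's accumulator type would consult the public reduced-precision flags before defaulting to float32.
```python
import torch

def triton_acc_type(dtype):
    # Honor the eager-mode reduced-precision-reduction flags.
    if dtype == torch.float16 and torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction:
        return "tl.float16"
    if dtype == torch.bfloat16 and torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction:
        return "tl.bfloat16"
    # Default: accumulate low-precision inputs in float32.
    return "tl.float32"
```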
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
1,138 | 108,614 |
[pytorch] Test key ET models export to core aten ir
|
fb-exported, topic: not user facing, keep-going
|
Differential Revision: D48992081
| 7 |
1,139 | 108,612 |
[codemod] Del `(object)` from 10 inc caffe2/fb/nas_profiler/lookups/xtensa_lookup.py
|
fb-exported, topic: not user facing
|
Summary: Python3 makes the use of `(object)` in class inheritance unnecessary. Let's modernize our code by eliminating this.
Test Plan: Sandcastle
Reviewed By: meyering
Differential Revision: D48957985
| 2 |
1,140 | 108,610 |
[WIP] Test threaded multi compile
|
ciflow/trunk, release notes: releng, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108610
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
1,141 | 108,609 |
[export] Fix getattr node issues with custom obj
|
module: export
|
When we recreate a graph module with a custom object, we will run into a SyntaxError due to the python codegen not understanding what to do with the custom in-memory objects. To solve this, we bypass the issue through creating an empty GraphModule and manually set the graph (https://fburl.com/code/5pgtpju8).
However, this runs into an issue when there are attributes on the graph module. torch.fx.GraphModule initialization only copies over the attributes if there is a get_attr node in the graph (https://www.internalfb.com/code/fbsource/[3f79c0a1c045]/fbcode/caffe2/torch/fx/graph_module.py?lines=360-363). Since we don't initialize the graph module with a graph when there's a custom object in the graph, these attributes will never get copied over to the newly created graph module.
Fixes an issue from the TensorRT team.
| 2 |
1,142 | 108,606 |
[torch][cse]fix cse pass for hashing slice
|
fb-exported, release notes: fx
|
Summary: * Use the str repr of the Python slice object for hashing; this should be safe enough, as slice parameters are None, int, or fx node reprs.
Test Plan: {F1070294120} For the example in the test we get the following graph changes.
Differential Revision: D48381385
| 2 |
1,143 | 108,605 |
Move sequential partition utils to fx/passes/utils
|
fb-exported, release notes: quantization, release notes: AO frontend, suppress-api-compatibility-check
|
Summary: `find_sequential_partitions` is fairly generic. Let's move it to `fx/utils` so it can be used by non-quantization-related work
Test Plan: CI
Differential Revision: D48664627
| 8 |
1,144 | 108,602 |
torchrun fails to run on Windows 11
|
module: windows, triaged
|
### 🐛 Describe the bug
On Windows 11, just running vanilla "torchrun train.py" gives the error "failed to create process." - with no other info (no stack trace).
```
torch 2.0.1
python 3.9.0
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.0 (default, Nov 15 2020, 08:30:55) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3504
DeviceID=CPU0
Family=207
L2CacheSize=12288
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3504
Name=Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz
ProcessorType=3
Revision=21767
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.1
[pip3] torch-optimizer==0.3.0
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46357
[conda] mkl-service 2.4.0 py39h2bbff1b_1
[conda] mkl_fft 1.3.6 py39hf11a4ad_1
[conda] mkl_random 1.2.2 py39hf11a4ad_1
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch 2.0.1 py3.9_cuda11.7_cudnn8_0 pytorch
[conda] pytorch-cuda 11.7 h16d0643_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] torch-optimizer 0.3.0 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 0 |
1,145 | 108,601 |
Introduce triton_jit decorator to simplify defining triton.jittable kernels.
|
open source, release notes: sparse
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108601
* #108512
| 1 |
1,146 | 108,591 |
[POC] Avoid `recordStream` for `_reduce_scatter_base`
|
release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108591
* #108590
| 2 |
1,147 | 108,590 |
[POC] Avoid `recordStream` for `_allgather_base`
|
release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #108591
* __->__ #108590
This is needed for implementing FSDP memory management without ever using `recordStream` (without setting `TORCH_NCCL_AVOID_RECORD_STREAMS=1`). Another option is to always avoid `recordStream` if `async_op=False`.
| 1 |
1,148 | 108,586 |
TorchInductor workers use "fork" which doesn't work in a multithreaded environment
|
triaged, module: multithreading, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
TorchInductor workers use "fork" as the default method for spawning processes: https://github.com/pytorch/pytorch/blob/ff38c0e2f9cae35378553c38ccf7188007fed938/torch/_inductor/codecache.py#L1349
"fork" in general is [broken](https://www.microsoft.com/en-us/research/uploads/prod/2019/04/fork-hotos19.pdf) and should not be used in modern systems where multithreading is used commonly. Even [python](https://discuss.python.org/t/switching-default-multiprocessing-context-to-spawn-on-posix-as-well/21868) plans to switch to using "spawn" as the default multiprocessing spawn method.
Creating this issue to follow up from the discussion [here](https://github.com/pytorch/pytorch/pull/87411#issuecomment-1699795308). The usage of "fork" creates non-deterministic situations where application code deadlocks or crashes in a multi-threaded environment. We should switch the default here to be "spawn" or at least give users an option to switch to using "spawn" instead.
This means we need to set `num_workers=0`, which slows down compilation significantly. The first compile usually takes about 20 minutes with `num_workers=16`; with `num_workers=0` this goes up to 90 minutes.
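For context, a minimal illustration of the fork/spawn difference in plain Python multiprocessing (not inductor internals): "spawn" starts a fresh interpreter, so workers do not inherit locks that other threads of the parent may be holding at fork time.
```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # "spawn" requires the worker function to be importable from the main module.
    ctx = mp.get_context("spawn")
    with ctx.Pool(4) as pool:
        print(pool.map(square, range(8)))
```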
### Versions
main branch
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 6 |
1,149 | 108,582 |
[dynamo] scalar <comp op> tensor not supported
|
triaged, oncall: pt2, module: dynamo
|
Min repro code by @lezcano :
```python
import torch
@torch.compile(fullgraph=True)
def fn(x):
return 0 <= x
x = torch.randn(512, device="cuda")
fn(x)
```
Here are tracing logs:
```
Step 1: torchdynamo start tracing fn check_compile_le_torch_op.py:4
TRACE starts_line fn check_compile_le_torch_op.py:4
@torch.compile()
wrap_to_fake L['x'] (512,) [<DimDynamic.STATIC: 2>] [None]
TRACE starts_line fn check_compile_le_torch_op.py:6
return 0 <= x
TRACE LOAD_CONST 0 []
TRACE LOAD_FAST x [ConstantVariable(int)]
TRACE COMPARE_OP <= [ConstantVariable(int), TensorVariable()]
step triggered compile
Traceback (most recent call last):
File "/pytorch/torch/_dynamo/symbolic_convert.py", line 691, in step
getattr(self, inst.opname)(inst)
File "/pytorch/torch/_dynamo/symbolic_convert.py", line 1107, in COMPARE_OP
BuiltinVariable(supported_any[op], **options).call_function(
File "/pytorch/torch/_dynamo/variables/builtin.py", line 618, in call_function
result = handler(tx, *args, **kwargs)
File "/pytorch/torch/_dynamo/variables/builtin.py", line 1412, in _comparison
_unimplemented()
File "/pytorch/torch/_dynamo/variables/builtin.py", line 1321, in _unimplemented
unimplemented(f"comparison {typestr(left)} {op} {typestr(right)}")
File "/pytorch/torch/_dynamo/exc.py", line 176, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: comparison ConstantVariable(int) <built-in function le> TensorVariable()
restore_graphstate: removed 0 nodes
COMPILING GRAPH due to GraphCompileReason(reason='step_unsupported', user_stack=[<FrameSummary file check_compile_le_torch_op.py, line 6 in fn>], graph_break=True)
GUARDS:
hasattr(L['x'], '_dynamo_dynamic_indices') == False # _dynamo/variables/builder.py:1252 in wrap_fx_proxy_cls
___is_grad_enabled() # _dynamo/output_graph.py:345 in init_ambient_guards
not ___are_deterministic_algorithms_enabled() # _dynamo/output_graph.py:341 in init_ambient_guards
___is_torch_function_enabled() # _dynamo/output_graph.py:349 in init_ambient_guards
utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:347 in init_ambient_guards
check_tensor(L['x'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[512], stride=[1]) # _dynamo/variables/builder.py:1252 in wrap_fx_proxy_cls
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ysiraichi
| 3 |
1,150 | 108,570 |
subclasses <> compile <> dynamic shapes: assume only first inner tensor gets dynamic dims
|
module: dynamo, ciflow/inductor
|
This came from some initial exploration of trying to compile @vkuzo 's `Float8Tensor` subclass through AOTAutograd. One issue I ran into is that:
(a) The current dynamic shapes logic assumes that the dynamic dims inferred from dynamo onto the outer wrapper tensor should be applied to every inner tensor in a wrapper tensor subclass
(b) this isn't actually true in many cases: for `Float8Tensor`, we have a subclass with two inner tensors: the first should get dynamic shapes, but the second is an `amax` scalar-tensor that is always a zero-dim scalar tensor, and doesn't need/want dynamic shapes.
I added a larger comment in the code, but I think that "really" fixing this problem will require some API design, since `mark_dynamic()` no longer carries enough info to specify what you want to happen for subclasses. Instead of figuring out the right thing to do here (which feels risky because we don't really know all of the ways that people might (ab)use subclasses in the longer term), I'm just doing the simple thing: we have no use cases today where a wrapper subclass has **multiple** inner tensors, **all** of which need dynamic shapes. So I updated the fakeifying logic to always assume that the first inner tensor attr returned from `__tensor_flatten__` gets dynamic shapes, and no other tensors after the first do.
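For reference, this is what `mark_dynamic` looks like today on a plain (non-subclass) tensor; the ambiguity above is about how this single call should map onto multiple inner tensors of a wrapper subclass (minimal sketch, not Float8Tensor itself):
```python
import torch

x = torch.randn(8, 16)
torch._dynamo.mark_dynamic(x, 0)  # dim 0 of x should be treated as dynamic

@torch.compile
def fn(t):
    return t.sin()

fn(x)
```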
cc @ezyang, lmk if this sounds reasonable to you for now.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108570
* #108243
* #108235
* #108081
| 3 |
1,151 | 108,569 |
Call for a deterministic implementation of scatter_add_cuda_kernel
|
triaged, module: scatter & gather ops
|
### 🚀 The feature, motivation and pitch
I met an error below:
-----------------------------------------------------------------------------
......
message_agg = scatter(message, index=obj, dim=0, dim_size=n_node, reduce='sum')
File "/home/gg/anaconda3/envs/KGAT-Pytorch/lib/python3.6/site-packages/torch_scatter/scatter.py", line 155, in scatter
return scatter_sum(src, index, dim, out, dim_size)
File "/home/gg/anaconda3/envs/KGAT-Pytorch/lib/python3.6/site-packages/torch_scatter/scatter.py", line 21, in scatter_sum
return out.scatter_add_(dim, index, src)
RuntimeError: scatter_add_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
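A hedged workaround sketch while a deterministic kernel is missing: keep deterministic mode on but downgrade the error to a warning for ops that have no deterministic implementation (this trades determinism of this one op for being able to run at all).
```python
import torch

# warn_only=True: non-deterministic ops emit a warning instead of raising.
torch.use_deterministic_algorithms(True, warn_only=True)
```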
### Alternatives
_No response_
### Additional context
_No response_
cc @mikaylagawarecki
| 0 |
1,152 | 108,567 |
Allow slicing of Nested Tensors along constant dimensions
|
triaged, module: nestedtensor
|
### 🚀 The feature, motivation and pitch
In many cases several of the nested tensor dimensions are constant, so slicing along them should in principle be possible:
```
import torch
a = torch.rand(5,5,5)
b = torch.rand(5,5,10)
c = torch.rand(5,5,15)
d= torch.nested.nested_tensor([a,b,c])
d[:2]
```
Will currently return the following
`NotImplementedError: Could not run 'aten::slice.Tensor' with arguments from the 'NestedTensorCPU' backend. `
It may be useful to be able to mark which dims are, and will remain, constant.
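A possible workaround sketch in the meantime, assuming `unbind()` is available for nested tensors on your build: rebuild a nested tensor from the wanted constituents (at the cost of a copy).
```python
import torch

a = torch.rand(5, 5, 5)
b = torch.rand(5, 5, 10)
c = torch.rand(5, 5, 15)
d = torch.nested.nested_tensor([a, b, c])
# Emulate d[:2] by unbinding along dim 0 and re-wrapping the first two tensors.
d_first_two = torch.nested.nested_tensor(list(d.unbind())[:2])
```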
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg
| 1 |
1,153 | 108,565 |
`bytes(...)` support of torch tensor does not match numpy + it would be nice to support tensor.tobytes() as alias
|
feature, triaged, module: numpy
|
### 🐛 Describe the bug
```python
import numpy as np
import torch
a = np.arange(3)
print(bytes(a))
# b'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00'
# ^ correct ^
print(bytes(a.view(np.uint8)))
# b'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00'
# ^ correct ^
print(a.view('uint8'))
# array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0], dtype=uint8)
print(a.tobytes())
# b'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00'
# ^ correct ^
b = torch.arange(3)
print(bytes(b))
# b'\x00\x01\x02'
# ^ incorrect ^
print(bytes(b.view(torch.uint8)))
# b'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00'
# ^ correct ^
print(b.view('uint8'))
# TypeError: view() received an invalid combination of arguments - got (str), but expected one of:
# * (torch.dtype dtype)
# didn't match because some of the arguments have invalid types: (str)
# * (tuple of ints size)
# didn't match because some of the arguments have invalid types: (str)
print(b.tobytes())
# AttributeError: 'Tensor' object has no attribute 'tobytes'
```
Also, if PyTorch supported string representations of dtypes as NumPy does, we could write more polymorphic code: `bytes(tensor.view('uint8'))`; currently we have to use the backend-specific `np.uint8` or `torch.uint8`.
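A workaround sketch for today, assuming a CPU tensor: go through NumPy explicitly to get the numpy-compatible byte representation.
```python
import torch

b = torch.arange(3)
# Matches bytes(np.arange(3)): 3 elements x 8 bytes (int64) = 24 bytes.
raw = b.numpy().tobytes()
print(len(raw), raw)
```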
Maybe related issues:
- https://github.com/pytorch/pytorch/issues/33041
- https://github.com/pytorch/pytorch/issues/43949
### Versions
```python
np.__version__
# '1.24.2'
torch.__version__
# '2.1.0.dev20230802+cpu'
```
cc @mruberry @rgommers
| 0 |
1,154 | 108,562 |
[1/N] Eliminates c10::to_string
|
module: cpu, triaged, open source, ciflow/trunk, release notes: quantization, release notes: cpp, ciflow/periodic
|
This PR tries to replace c10::to_string with std::to_string and other alternatives.
| 3 |
1,155 | 108,559 |
Fix permuted sum precision issue for lower precision on CPU
|
open source, module: bfloat16, module: half, ciflow/trunk, topic: not user facing, ciflow/mps
|
Fixes #83149
| 1 |
1,156 | 108,546 |
[Decomposition] unbind
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Copy decomp from caffe2/torch/_refs/__init__.py and include it in core_aten_decompositions
Test Plan: OSS + Phabricator Tests
Differential Revision: D48871742
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 18 |
1,157 | 108,545 |
[Decomposition] uniform_
|
fb-exported, ciflow/inductor
|
Summary:
Include decomp in core_aten_decompositions
Decomp already exists
https://www.internalfb.com/code/fbsource/[03ff511cad587fc27ed8fd6a54b87845246e8e0c]/xplat/caffe2/torch/_decomp/decompositions.py?lines=2178
Test Plan: OSS + Phabricator Tests
Differential Revision: D48940435
| 3 |
1,158 | 108,543 |
[Decomposition] split.Tensor
|
fb-exported, ciflow/inductor, module: export
|
Summary:
Include decomp in core_aten_decompositions
Decomp: https://github.com/pytorch/pytorch/blob/1e9b590df989337d809fa33edd7ffc6cb60e70ff/torch/_decomp/decompositions.py#L1198
Test Plan: OSS + Phabricator Tests
Differential Revision: D48871743
| 17 |
1,159 | 108,542 |
[Decomposition] resize
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Add decomp and include it in core_aten_decompositions
Test Plan: Phabricator + OSS Tests
Differential Revision: D48940336
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 9 |
1,160 | 108,541 |
[Decomposition] randn_like
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Moving decomposition from _inductor/decomposition.py and include it in core_aten_decompositions
Test Plan: Phabricator + OSS Tests
Differential Revision: D48940304
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 10 |
1,161 | 108,539 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 3 |
1,162 | 108,538 |
[Decomposition] randint
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Moving decomposition from _inductor/decomposition.py and include it in core_aten_decompositions
Test Plan: Phabricator + OSS Tests
Differential Revision: D48940203
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 14 |
1,163 | 108,537 |
[Decomposition] full_like
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Moving decomposition from _inductor/decomposition.py and include it in core_aten_decompositions
Test Plan: Phabricator + OSS Tests
Differential Revision: D48939842
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 14 |
1,164 | 108,536 |
[Decomposition] exponential_
|
fb-exported, ciflow/inductor
|
Summary:
Include decomp in core_aten_decompositions
Decomp already exists:
https://github.com/pytorch/pytorch/blob/ff38c0e2f9cae35378553c38ccf7188007fed938/torch/_decomp/decompositions_for_rng.py#L229
Test Plan: Phabricator + OSS Tests
Differential Revision: D48939790
| 3 |
1,165 | 108,535 |
[Decomposition] bernoulli
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: Moving decomposition of bernoulli from _inductor/decomposition.py and include it in core_aten_decompositions
Test Plan: Phabricator + OSS Tests
Differential Revision: D48878434
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 11 |
1,166 | 108,532 |
Breaking incompatibility with Cuda 12.2, pytorch stable, torchvision
|
oncall: binaries, module: crash, module: cuda, triaged
|
### 🐛 Describe the bug
With the latest CUDA 12.2 and Ubuntu 20.04 driver, and with pytorch stable and torchvision installed as described on the website, there is a weird segfault whenever a model runs on CUDA.
When it crashes with a message, it is always about Python multiprocessing code with buffers that are too big. Sorry, I didn't capture it, as it seems random; most of the time it is a segfault without a message. Reinstalling from scratch didn't solve the issue.
Note that DDP works, but the segfault also happens with simple single-threaded inference.
The solution is to use pytorch-nightly and to force it to install the GPU version of pytorch and torchvision, as by default it installs the CPU version! This works very well and is even faster. We can't wait to see this nightly become the new stable.
Another solution could be to downgrade, as we didn't see this issue with CUDA 12.1 and pytorch stable for the last 6 months. Maybe that's what we'd be forced to do if nightly breaks...
### Versions
Ubuntu 20.04
Python 3.11 or 3.10 in conda environment
Cuda 12.2, cudnn 8 for Cuda 12, on A100 server.
Pytorch stable and torchvision latest in a fresh conda environment.
cc @seemethere @malfet @ptrblck
| 2 |
1,167 | 108,522 |
nn.Transformer has dropout layers that BERT / GPT-2 do not have
|
module: docs, triaged, oncall: transformer/mha
|
### 📚 The doc issue
The [docstring of nn.TransformerEncoder](https://github.com/pytorch/pytorch/blob/51c2e22/torch/nn/modules/transformer.py#L233) reads: "Users can build the BERT(https://arxiv.org/abs/1810.04805) model with corresponding parameters."
However, `TransformerEncoderLayer` (and `TransformerDecoderLayer`) has a dropout layer in between the two linear layers that come after the attention layers:
* https://github.com/pytorch/pytorch/blob/51c2e22/torch/nn/modules/transformer.py#L723
* https://github.com/pytorch/pytorch/blob/51c2e22/torch/nn/modules/transformer.py#L874
BERT does not have this:
* https://github.com/google-research/bert/blob/master/modeling.py#L872
So the docstring is subtly wrong, at least when planning to use the model for training.
For more context, also GPT-2 does not have this:
* https://github.com/openai/gpt-2/blob/master/src/model.py#L118
The original "Attention is all you need" has it in Transformer Base v2 and v3, but not v1:
* https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py#L1850
The official TensorFlow tutorial does not have it:
* https://www.tensorflow.org/text/tutorials/transformer#the_feed_forward_network
But the Annotated Transformer has it:
* http://nlp.seas.harvard.edu/annotated-transformer/
An alternative to changing the docstring would be extending the `TransformerEncoderLayer` implementation to optionally accept a dictionary for `dropout` with entries "residual", "mlp", "attention" for the three types of dropout that are employed in the model. But I don't know how much `nn.Transformer` is used as compared to custom implementations.
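For concreteness, a minimal sketch of the two feed-forward variants being compared (dimensions, activation, and class names here are illustrative, not the actual library code):
```python
import torch
import torch.nn as nn

class FFNWithInnerDropout(nn.Module):
    # mirrors the structure of nn.TransformerEncoderLayer's feed-forward block
    def __init__(self, d_model=512, d_ff=2048, dropout=0.1):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.linear2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)  # applied between the two linear layers

    def forward(self, x):
        return self.linear2(self.dropout(torch.relu(self.linear1(x))))

class FFNWithoutInnerDropout(nn.Module):
    # mirrors BERT/GPT-2, where dropout is only applied to the block output / residual
    def __init__(self, d_model=512, d_ff=2048):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.linear2 = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.linear2(torch.nn.functional.gelu(self.linear1(x)))
```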
### Suggest a potential alternative/fix
A correct version would be:
"Users can build the BERT (...) model with corresponding parameters, except that BERT does not employ dropout between the feed-forward layers."
This would draw some attention to the fact that the implementation is slightly different from other transformers. Ideally, there would be a similar note for Transformer, TransformerDecoder, TransformerEncoderLayer, TransformerDecoderLayer.
cc @svekars @carljparker @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 0 |
1,168 | 108,521 |
result of torch.mm(a, b) does not match result of torch.mm(a[:part, :], b)
|
module: numerical-stability, triaged, matrix multiplication
|
### 🐛 Describe the bug
I tried to use torch.mm to compute a block matrix multiplication in several pieces instead of computing the result at once, but I found the results of the two computations are not close. For example, with $a \in R^{m \times n}$, $a_1 = a[:m/2, :]$, $a_2 = a[m/2:, :]$, and $b \in R^{n \times k}$, torch.mm(a, b) should equal torch.cat(torch.mm(a1, b), torch.mm(a2, b)), but actually they do not match.
The following code presents this problem.
```
import torch
def test(m, n, k, dtype, rtol, atol):
    a = torch.randn(m, k, dtype=dtype).cuda()
    b = torch.randn(k, n, dtype=dtype).cuda()
    c = torch.mm(a, b)
    for i in range(1,m+1):
        d = torch.mm(a[:i,:] , b)
        if not torch.allclose(d, c[:i, :], rtol, atol):
            print(f'Not match, m={i} {n=} {k=} {dtype=}')
dtypes = [(1.e-3, 1e-5, torch.float16), (1.e-3, 1e-5, torch.bfloat16), (1e-5, 1e-8,torch.float32)]
for r, a, dtype in dtypes:
    for m in [4, 8, 16]:
        for n in [256, 512]:
            for k in [256, 512]:
                test(m, n, k, dtype, r, a)
```
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 536.67
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 3
Socket(s): 1
Stepping: 5
BogoMIPS: 4991.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 144 KiB (3 instances)
L1i cache: 96 KiB (3 instances)
L2 cache: 3.8 MiB (3 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[pip3] tritonclient==2.36.0
[conda] Could not collect
| 2 |
1,169 | 108,520 |
[inductor] CPU int32 overflow behavior differs between clang and gcc
|
triaged, bug, oncall: pt2, module: m1, module: inductor, module: cpu inductor
|
### 🐛 Describe the bug
As a follow-on from #108513, the following add operation gives different output between eager torch and dynamo on CPU:
```
import torch
import torch._dynamo
import torch._dynamo.config
def mySum64(x):
    return (x+x).to(torch.int64)
x = torch.tensor( (2147483647), dtype=torch.int32)
torchResult = mySum64(x)
dynamoResult = torch.compile(mySum64)(x)
print(torchResult)
print(dynamoResult)
```
The output is :
```
tensor(-2)
tensor(4294967294)
```
It looks like the compiler is not honouring the int32 type for the add; instead it is incorrectly treating it as an int64.
When we fix this, a test should be added for this situation in the same place as #108513.
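For reference, the expected eager semantics can be illustrated directly (a small sketch, not the generated C++):
```python
import torch

x = torch.tensor(2147483647, dtype=torch.int32)
print(x + x)                  # add performed in int32, wraps: tensor(-2, dtype=torch.int32)
print(x.to(torch.int64) + x)  # widened before the add: tensor(4294967294)
```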
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gite68b3ad
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-13.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.1.0a0+git1004706
[pip3] torchgen==0.0.1
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.1.0a0+git1004706 dev_0 <develop>
[conda] torchgen 0.0.1 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 10 |
1,170 | 108,519 |
Pytorch profiler with Tensorboard example not working
|
triaged, module: tensorboard
|
### 🐛 Describe the bug
I'm trying to run the (official) example showing how to run Pytorch profiler and visualize the results with Tensorboard, i.e.
https://github.com/pytorch/tutorials/blob/main/intermediate_source/tensorboard_profiler_tutorial.py
After opening tensorboard as mentioned in the instructions, I get the following message:
```
There’s no dashboard by the name of “pytorch_profiler”.
```
Trace created by the profiler (there was a problem that prevented me from uploading the whole trace, so here is just the initial excerpt):
```
{
"schemaVersion": 1,
"deviceProperties": [
{
"id": 0, "name": "NVIDIA A100-PCIE-40GB", "totalGlobalMem": 42505207808,
"computeMajor": 8, "computeMinor": 0,
"maxThreadsPerBlock": 1024, "maxThreadsPerMultiprocessor": 2048,
"regsPerBlock": 65536, "regsPerMultiprocessor": 65536, "warpSize": 32,
"sharedMemPerBlock": 49152, "sharedMemPerMultiprocessor": 167936,
"numSms": 108, "sharedMemPerBlockOptin": 166912
},
{
"id": 1, "name": "NVIDIA A100-PCIE-40GB", "totalGlobalMem": 42505207808,
"computeMajor": 8, "computeMinor": 0,
"maxThreadsPerBlock": 1024, "maxThreadsPerMultiprocessor": 2048,
"regsPerBlock": 65536, "regsPerMultiprocessor": 65536, "warpSize": 32,
"sharedMemPerBlock": 49152, "sharedMemPerMultiprocessor": 167936,
"numSms": 108, "sharedMemPerBlockOptin": 166912
}
],
"record_shapes": 1,
"with_stack": 1,
"profile_memory": 1,
"traceEvents": [
{
"ph": "X", "cat": "cpu_op", "name": "autograd::engine::evaluate_function: NllLossBackward0", "pid": 744636, "tid": 744740,
"ts": 1693865209300135, "dur": 106,
"args": {
"External id": 2049,"Ev Idx": 0, "Fwd thread id": 1, "Sequence number": 215
}
},
{
"ph": "X", "cat": "cpu_op", "name": "NllLossBackward0", "pid": 744636, "tid": 744740,
"ts": 1693865209300143, "dur": 92,
"args": {
"External id": 2050,"Ev Idx": 1, "Input Dims": [[]], "Input type": ["float"], "Fwd thread id": 1, "Sequence number": 215
}
},
{
"ph": "X", "cat": "cpu_op", "name": "aten::nll_loss_backward", "pid": 744636, "tid": 744740,
"ts": 1693865209300173, "dur": 62,
"args": {
"External id": 2051,"Ev Idx": 2, "Input Dims": [[], [32, 1000], [32], [], [], [], []], "Input type": ["float", "float", "long int", "", "Scalar", "Scalar", "float"]
}
},
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7642 48-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1498.150
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4599.57
Virtualization: AMD-V
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] perlin-numpy==0.0.0
[pip3] pytorch-lightning==2.0.5
[pip3] torch==2.0.0
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.1+cu118
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.15.1+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.24.3 py310hd5efca6_0
[conda] numpy-base 1.24.3 py310h8e6c178_0
[conda] perlin-numpy 0.0.0 pypi_0 pypi
[conda] pytorch 2.0.0 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 2.0.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.0.1+cu118 pypi_0 pypi
[conda] torchmetrics 1.0.1 pypi_0 pypi
[conda] torchvision 0.15.1+cu118 pypi_0 pypi
| 1 |
1,171 | 108,517 |
Multi-Head Attention: Only require attn_mask if actually needed
|
triaged, open source
|
`torch.nn.MultiheadAttention` internally uses `scaled_dot_product_attention()`. For the common case of causal attention, the latter accepts an `is_causal` flag, which computes causal attention inside the kernel without having to compute an attention mask in memory. Still, `torch.nn.MultiheadAttention` and `torch.nn.functional.multi_head_attention_forward()` ask for an attention mask whenever `is_causal=True` is given:
```python
import torch
mha = torch.nn.MultiheadAttention(100, 4)
x = torch.randn(2, 10, 100)
mha(x, x, x, is_causal=True, need_weights=False) # RuntimeError
```
This short PR tightens the check for a missing attention mask so it is not required when it would be set to `None` 11 lines later anyway.
Disclaimer: I currently do not have a development setup for PyTorch and will rely on the CI, sorry.
As an aside, the docstring of `torch.nn.functional.multi_head_attention_forward()` currently reads:
> is_causal: If specified, applies a causal mask as attention mask, and ignores
> attn_mask for computing scaled dot product attention.
This suggests that the mask is completely ignored, while it is actually still required when either `need_weights` or `key_padding_mask` is given. This could be fixed either by updating the docstring, or by creating a causal `attn_mask` on the fly when needed, and not ever complaining about a missing mask. The latter would be convenient, but it would hide to the user the opportunity to precompute the mask once and reuse it in the case of fixed sequence lengths or multiple same-size transformer layers.
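For reference, a minimal sketch of how a caller can precompute the causal mask today (shapes are illustrative):
```python
import torch

seq_len, batch, embed = 10, 2, 100
mha = torch.nn.MultiheadAttention(embed, 4)
x = torch.randn(seq_len, batch, embed)  # (seq, batch, embed) with batch_first=False
attn_mask = torch.nn.Transformer.generate_square_subsequent_mask(seq_len)
out, _ = mha(x, x, x, attn_mask=attn_mask, is_causal=True, need_weights=False)
```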
| 4 |
1,172 | 108,516 |
S390x complex division
|
module: cpu, triaged, open source
|
Adopt the algorithm from the AVX2 implementation.
This change fixes the test test_complex_div_underflow_overflow_cpu_complex128
from test/test_binary_ufuncs.py.
At the same time it breaks some of the Arithmetics/*.Division tests
from vec_test_all_types_ZVECTOR,
but those are also broken on AVX2 and AVX512.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 6 |
1,173 | 108,514 |
Torch model to ONNX conversion succeeds but inference fails
|
module: onnx, triaged
|
### 🐛 Describe the bug
I tried to use the iSTFT from https://github.com/MasayaKawamura/MB-iSTFT-VITS/blob/main/stft.py
The trained PyTorch model can be used for inference from the torch checkpoint file, and it can be converted to ONNX with torch.onnx.export without any error being reported. However, when running inference with the ONNX model on the CPU executor, an error is raised for a Gather operation on the line https://github.com/MasayaKawamura/MB-iSTFT-VITS/blob/main/stft.py:165; the index value in the Gather operation is exactly the boundary value plus one. This does not happen when running inference from the PyTorch checkpoint.
### Versions
PyTorch version: 1.10.2+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: version 3.14.1
Libc version: glibc-2.23
Python version: 3.8.5 (default, May 17 2022, 20:10:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] (64-bit runtime)
Python platform: Linux-4.14.134-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: GeForce RTX 3090
GPU 1: GeForce RTX 3090
GPU 2: GeForce RTX 3090
GPU 3: GeForce RTX 3090
GPU 4: GeForce RTX 3090
GPU 5: GeForce RTX 3090
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6226 CPU @ 2.70GHz
Stepping: 7
CPU MHz: 1327.286
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 5400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 19712K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==1.11.200
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] pytest-mypy==0.10.0
[pip3] torch==1.10.2+cu111
[pip3] torch-yin==0.1.2
[pip3] torchaudio==0.10.2+cu111
[pip3] torchvision==0.11.3+cu111
[conda] Could not collect
| 0 |
1,174 | 108,510 |
Eliminate calls of c10::guts::conjunction,c10::guts::disjunction,c10::guts::negation,c10::guts::void_t, c10::invoke and c10::guts::apply
|
module: cpu, triaged, open source, ciflow/binaries, ciflow/trunk, release notes: distributed (c10d), ciflow/periodic, ciflow/slow
|
C++17 provides alternatives.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
@huydhn @Skylion007
| 24 |
1,175 | 108,509 |
[xla hash update] update the pinned xla hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor, merging
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 4 |
1,176 | 108,507 |
Refactor FindSanitizer.cmake
|
triaged, open source, ciflow/binaries, topic: not user facing, ciflow/periodic, ciflow/inductor, ciflow/slow
|
This PR cleans up the CMake Sanitizer module to better support various platforms. It also supports running CUDA tests.
| 8 |
1,177 | 108,505 |
[Dynamo][Guard]expose guard code
|
triaged, open source, module: dynamo, ciflow/inductor
|
This is part of ongoing efforts to expose more information to users so that they can check and understand how dynamo works.
This PR exposes the source code of guards to users. Together with the `_debug_get_cache_entry_list` API, users can inspect a compiled function they are interested in and look into its guards.
Another proposal is to wire things up so that `inspect.getsource` works for guards. However, `inspect.getsource` works by reading the source code **file**, which does not work for dynamically generated functions.
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @jansel
| 2 |
1,178 | 108,502 |
[complex][cpu] nansum & nanmean
|
module: cpu, open source, topic: not user facing
|
Fixes #71472
This PR adds complex support for nansum and nanmean on the CPU.
Previous PR (in CUDA): #93199
cc @Skylion007 @kshitij12345 @lezcano
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
1,179 | 108,500 |
[cond] cache size limit exceeded
|
triaged, oncall: pt2, module: higher order operators
|
### 🐛 Describe the bug
Repro
1) Check out this stacked branch - https://github.com/pytorch/pytorch/pull/108402
2) Run ` test/functorch/test_control_flow.py::TestControlFlowTraced::test_nested_map_cond_real` or `pytest test/functorch/test_control_flow.py::TestControlFlowTraced::test_cond_nested_with_closure`
Background - I am trying to reduce the cache size limit to 4. It's unclear to me why torch.cond would require more than one compile, yet I see many recompiles. It's also counterintuitive because torch.cond is targeted at torch.export.
cc @ezyang @msaroufim @wconstab @bdhirsh @ydwu4 @zou3519
### Error logs
_No response_
### Minified repro
_No response_
### Versions
N/A
| 7 |
1,180 | 108,496 |
The CPU version of `torch.cummax` is slow
|
module: performance, module: cpu, triaged
|
### 🐛 Describe the bug
The CPU version of `cummax` is very slow, even slower than a naive custom implementation.
This does not happen with the GPU `cummax`.
This problem was originally posted [in the forum](https://discuss.pytorch.org/t/cpu-version-of-torch-cummax-is-slow/187612/2).
The benchmark code below is not mine but from the user `KFrank`.
It reproduces the problem while being simpler.
```python
import torch
print (torch.__version__)
print (torch.version.cuda)
print (torch.cuda.get_device_name())
from time import time
_ = torch.manual_seed (2023)
def corner_pool(x: torch.Tensor, dim: int, flip: bool):
    sz = x.size(dim)
    outputs = list(x.unbind(dim))
    for i in range(1, sz):
        if flip:
            i_in = sz - i
            i_out = sz - i - 1
        else:
            i_in = i - 1
            i_out = i
        outputs[i_out] = torch.maximum(outputs[i_out], outputs[i_in])
    return torch.stack(outputs, dim=dim)
img = torch.rand (1, 128, 256, 256)
cmA = torch.cummax (img, dim = -2)[0]
cmB = corner_pool (img, -2, False)
print ('cpu equal:', torch.equal (cmA, cmB))
t0 = time()
for i in range (10):
    cmA = torch.cummax (img, dim = -2)[0]
print ('cpu cummax time: ', time() - t0)
t0 = time()
for i in range (10):
    cmB = corner_pool (img, -2, False)
print ('cpu corner_pool time: ', time() - t0)
img = img.cuda()
cmA = torch.cummax (img, dim = -2)[0]
cmB = corner_pool (img, -2, False)
print ('gpu equal:', torch.equal (cmA, cmB))
torch.cuda.synchronize()
t0 = time()
for i in range (10):
    cmA = torch.cummax (img, dim = -2)[0]
torch.cuda.synchronize()
print ('gpu cummax time: ', time() - t0)
```
The script's output is:
```shell
2.0.1+cu117
11.7
NVIDIA GeForce GTX 1660 Ti
cpu equal: True
cpu cummax time: 1.9056651592254639
cpu corner_pool time: 0.25840282440185547
gpu equal: True
gpu cummax time: 0.007984399795532227
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.36
Python version: 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 4800H with Radeon Graphics
CPU family: 23
Model: 96
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 50%
CPU max MHz: 2900.0000
CPU min MHz: 1400.0000
BogoMIPS: 5788.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 8 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] dirtytorch==1.2.1
[pip3] flake8==6.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.2
[pip3] pytorch-lightning==2.0.5
[pip3] torch==2.0.1
[pip3] torchmetrics==1.0.1
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @albanD
| 0 |
1,181 | 108,494 |
backend-friendly distributions
|
module: distributions, feature, module: cuda, triaged
|
### 🚀 The feature, motivation and pitch
As of now, many basic distributions are not supported on important backends, for instance a categorical or multinomial distribution on inductor/CUDA graphs:
```python
@torch.compile(fullgraph=True, backend='inductor')
def fn():
    cat = torch.distributions.categorical.Categorical(torch.tensor([0.1,0.2,0.7])) # seems to use torch.multinomial under the hood
    return cat.sample()
fn()
```
gives
```json
{
"name": "Unsupported",
"message": "call_method SizeVariable() numel [] {}\n\nfrom user code:\n File \"/tmp/ipykernel_1333478/2921103239.py\", line 6, in fn\n return cat.sample()\n File \"/usr/local/lib/python3.8/dist-packages/torch/distributions/categorical.py\", line 118, in sample\n samples_2d = torch.multinomial(probs_2d, sample_shape.numel(), True).T\n\nSet torch._dynamo.config.verbose=True or TORCHDYNAMO_VERBOSE=1 for more information\n\n\nYou can suppress this exception and fall back to eager by setting:\n torch._dynamo.config.suppress_errors = True\n",
}
```
### Alternatives
Is it possible to have these implemented in a way compatible with GPU-friendly backends?
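One possible direction is a sampling path built only from elementwise ops and `argmax` (a sketch using the Gumbel-max trick; whether this is acceptable for the distributions API, numerically and otherwise, is an open question):
```python
import torch

def sample_categorical(probs: torch.Tensor, n: int) -> torch.Tensor:
    # Gumbel-max trick: argmax(log p + Gumbel noise) is a Categorical(p) sample,
    # using only elementwise ops + argmax (no torch.multinomial).
    gumbel = -torch.log(-torch.log(torch.rand(n, probs.shape[-1], device=probs.device)))
    return torch.argmax(probs.log() + gumbel, dim=-1)

samples = sample_categorical(torch.tensor([0.1, 0.2, 0.7]), 5)
```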
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved @ptrblck
| 1 |
1,182 | 108,493 |
RWKV + Adam exp_avg_sq will change from positive to negative after loss.backward()
|
needs reproduction, module: optimizer, triaged
|
### 🐛 Describe the bug
When I run a task with RWKV (the transformers RWKV, not directly from the RWKV-LM repo) + Adam, I observe that in the second step the loss becomes `NaN`.
1. When I dive into that, I find that some values from `exp_avg_sqs` (a state value in the optimizer) are negative, which makes https://github.com/pytorch/pytorch/blob/1b3dc05c3e703841e64e0277d473a0baf3296671/torch/optim/adam.py#L565 this line produce `NaN` (we are taking the square root of negative values). `exp_avg_sqs` is not supposed to be negative.
2. Diving deeper, I find that when we call loss.backward() and directly compare the `state['exp_avg_sqs']` value before https://github.com/pytorch/pytorch/blob/1b3dc05c3e703841e64e0277d473a0baf3296671/torch/autograd/__init__.py#L251 and after this line, some states' `exp_avg_sqs` change from positive to negative, which I think is not normal.
3. It does not look like underflow, since some values with even smaller magnitude remain the same.
4. I am running under BFloat16 + autocast + GradScaler.
How can I fix this issue?
Thanks in advance.
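For reference, a minimal way to surface the observation from point 1 (assuming `optimizer` is the Adam instance in question):
```python
def find_negative_exp_avg_sq(optimizer):
    bad = []
    for group in optimizer.param_groups:
        for p in group["params"]:
            state = optimizer.state.get(p, {})
            exp_avg_sq = state.get("exp_avg_sq")
            if exp_avg_sq is not None and (exp_avg_sq < 0).any():
                bad.append((tuple(p.shape), exp_avg_sq.min().item()))
    return bad

# e.g. print(find_negative_exp_avg_sq(optimizer)) right after loss.backward() and after optimizer.step()
```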
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7282 16-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.92
Virtualization: AMD-V
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 16 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] pytorch-lightning==1.9.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchmetrics==1.0.3
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.25.1 pypi_0 pypi
[conda] pytorch-lightning 1.9.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 3 |
1,183 | 108,491 |
Support Fused AdamW on CPU
|
module: performance, feature, module: optimizer, triaged, needs research
|
### 🚀 The feature, motivation and pitch
I would like to benefit from the speed advantages of fused AdamW while doing CPU-only training, but this is not supported. It currently throws an error indicating that it only works on GPUs.
While CPU-only training does not seem to be a priority, supporting it well makes it possible for downstream developers to, for example, run CI/CD tests on CPU-only instances.
### Alternatives
Not using fused-adamw.
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 2 |
1,184 | 108,484 |
DistributedDataParallel to support __getattr__
|
oncall: distributed, triaged
|
### 🚀 The feature, motivation and pitch
The module wrapped by DistributedDataParallel, besides being a subclass of nn.Module, can have its own additional methods for different situations. Under DDP, it is quite cumbersome to do the following everywhere we want to call one of those methods:
```
if is_ddp():
    mymodel.module.method1()
else:
    mymodel.method1()
```
or
```
if is_ddp():
    my_wrapped_module.method1()
else:
    mymodel.method1()
```
If we add `__getattr__` support to the DistributedDataParallel class and have it delegate the lookup to the wrapped module, we won't need to either keep track of the wrapped module or reach through it explicitly for the method calls we need.
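A minimal sketch of the requested delegation (a hypothetical subclass for illustration, not a proposal for the exact implementation inside DDP):
```python
from torch.nn.parallel import DistributedDataParallel as DDP

class TransparentDDP(DDP):
    def __getattr__(self, name):
        try:
            # nn.Module.__getattr__ resolves parameters, buffers, and submodules
            return super().__getattr__(name)
        except AttributeError:
            if name == "module":  # avoid recursing before the wrapped module is set
                raise
            # fall back to the wrapped module's own attributes/methods
            return getattr(self.module, name)

# usage: wrapped = TransparentDDP(mymodel); wrapped.method1() works without going through .module
```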
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 2 |
1,185 | 108,483 |
Efficient and robust calculation of diag(sparse @ diag @ sparse)
|
module: sparse, feature, triaged
|
### 🚀 The feature, motivation and pitch
This is a feature request for improving sparse tensors. I'm not really expecting the issue to be resolved any time soon, but I've written about my use case and where I have been having problems in case it is helpful for those who work on pytorch. (Or maybe I'm using sparse tensors in the wrong way!)
As part of a model I'm implementing, I'd like to be able to compute $\text{diag}\left(\mathbf{A} \mathbf{B} \mathbf{A}^T\right)$ efficiently, where $\mathbf{B}$ is a diagonal matrix, and $\mathbf{A}$ is sparse (and both are square matrices). In theory I can compute this without using much memory, but I've been struggling to get it working properly in pytorch.
There is a nice way to do this if you are willing to store $\mathbf{A}$ as a dense tensor, namely `torch.sum((A * B_diag) * A,-1)`, but unfortunately this won't scale if $\mathbf{A}$ is a very big matrix. So ideally I'd use the `torch.sparse` API instead!
I found that it is possible to do something roughly similar to this using `torch.sparse` (illustrated below - [^1]), but it doesn't quite work properly - backprop doesn't work.
There is a workaround - [^2] - representing $\mathbf{B}$ as a sparse tensor and performing 2 sparse mat muls, but it is not very robust. It would be nice to have a `torch.sparse.diagonal` function for this!
[^1] - torch sparse implementation backprop doesn't work
```python
import torch as t
t.manual_seed(0)
B_diag = t.randn((10,), requires_grad=True)
A_dense = t.randn((10, 10), requires_grad=False)
naive_calc = t.diagonal(A_dense @ t.diag_embed(B_diag) @ A_dense.t())
dense_calc = t.sum(A_dense * B_diag * A_dense, -1)
assert (naive_calc - dense_calc).abs().max() < 1e-6
A_sparse = A_dense.to_sparse()
sparse_calc = t.sparse.sum(A_sparse * B_diag * A_sparse, -1).to_dense()
assert (naive_calc - sparse_calc).abs().max() < 1e-6 # this actually works! nice
# but i can't backprop through sparse_calc
# as an example
x1 = t.randn((10,), requires_grad = True)
(x1 - sparse_calc).sum().backward() # ERROR
```
### Alternatives
[^2] - In seeking a workaround, we can convert $\mathbf{B}$ to a sparse tensor and perform 2 sparse matmuls. But it is not robust to having zeros on the diagonal
```python
ABAT = t.sparse.mm(t.sparse.mm(A_sparse, t.diag_embed(B_diag).to_sparse()), A_sparse.transpose(-2, -1))
# to get the diagonal of this sparse matrix, there is no t.sparse.diagonal function, so we can use a mask
eye_mask = t.eye(10, requires_grad=False).to_sparse() # could do this more efficiently if it were necessary
diag_ABAT = (ABAT * eye_mask).coalesce().values()
assert (diag_ABAT - naive_calc).abs().max() < 1e-6
# note this only works because every diagonal element of ABAT happens to be non-zero (so it is present in the sparse representation)
ABAT_dense = ABAT.to_dense(); ABAT_dense[0,0] = 0.
ABAT = ABAT_dense.to_sparse()
diag_ABAT = (ABAT * eye_mask).coalesce().values()
assert (diag_ABAT - naive_calc).abs().max() < 1e-6 # ERROR!
```
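A more robust extraction is possible by inspecting the COO indices directly instead of multiplying by an identity mask (a sketch of what a `torch.sparse.diagonal`-like helper could do, reusing `t` and `ABAT` from the snippets above; autograd support is not considered here):
```python
def sparse_diag(m: t.Tensor) -> t.Tensor:
    # works even when some diagonal entries are missing/zero in the sparse representation
    m = m.coalesce()
    idx, vals = m.indices(), m.values()
    out = t.zeros(min(m.shape), dtype=vals.dtype, device=vals.device)
    on_diag = idx[0] == idx[1]
    out[idx[0, on_diag]] = vals[on_diag]
    return out

assert (sparse_diag(ABAT) - t.diagonal(ABAT.to_dense())).abs().max() < 1e-6
```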
### Additional context
torch version: 2.0.1
gpu: NVIDIA GeForce RTX 2080 Ti
cpu: Intel(R) Xeon(R) Silver 4112 CPU @ 2.60GHz
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 0 |
1,186 | 108,475 |
Don't call release_lock_on_cudamalloc on ROCm
|
module: rocm, ciflow/trunk, topic: not user facing, test-config/default
|
This change https://github.com/pytorch/pytorch/pull/108367 breaks ROCm tests in trunk. This was landed internally, so it might be easier to forward fix the test instead.
The failure https://hud.pytorch.org/pytorch/pytorch/commit/b8af8ac784905ff4d20792959e3920d01acfa8cf is that the new `release_lock_on_cudamalloc` function is not available on ROCm.
### Testing
The test passes on ROCm https://github.com/pytorch/pytorch/actions/runs/6057040907/job/16437921431
Fixes #108476
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 2 |
1,187 | 108,474 |
CNN w variable sized input performance regression 1.10.2 cu113 -> 2.0.1 cu117
|
module: performance, module: nn, module: cuda, triaged, module: regression
|
### 🐛 Describe the bug
I'm running an image processing service that takes images of varying sizes and runs them through a couple of CNNs, one with fixed size input (always w0 * h0) and one with variable size input (w & h vary from input to input).
I've been trying to upgrade from v1.10.2 cu113 to v2.0.1 cu117 and encountered https://github.com/bytedeco/javacpp-presets/issues/1409 / https://github.com/pytorch/pytorch/pull/104369. When I apply the workaround (`TORCH_CUDNN_V8_API_DISABLED=1`) and run a load test I see the following performance regression:

In addition, the first couple of runs through my networks can easily take ~30 seconds on v2.0.1 when they take <1 second on v1.10.2.
This performance regression makes it hard to upgrade and I'm not really sure how to proceed here: I have an entire production system where it shows up, but it's all proprietary, so it is hard to share in a way that would enable debugging.
I guess I'd need to find a way to replicate it in a smaller python-only example?
### Versions
The issue exhibits in v2.0.1 cu117 but not in v1.10.2 cu113
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 7 |
1,188 | 108,469 |
[unwind.cpp] In process unwind symbolization
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108469
(WIP to debug build issues)
Replaces the Symbolizer which calls `addr2line` with one that uses
libbfd in process to do the same work. This is a little nicer to deal with
because it doesn't have to fork, which can be a problem for large applications.
It also doesn't keep subprocesses around that might not clean up nicely.
libbfd is the same library addr2line uses, so the results should be equivalent
and take about the same time to generate.
This uses `std::async` to launch threads because libbfd is pretty slow,
and doing so matches the performance of the previous version which launched
processes instead of threads.
Differential Revision: [D48980893](https://our.internmc.facebook.com/intern/diff/D48980893)
| 3 |
1,189 | 108,451 |
[WIP] lazy list length guarding
|
module: dynamo, ciflow/inductor
|
Fixes #ISSUE_NUMBER
cc @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 2 |
1,190 | 108,446 |
`SymInt` input doesn't get optimized out from `torch.compiled()` graph even if unused
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
We have a Dynamo backend defined similarly to IPEX, which traces and freezes the model:
```python
import importlib
import logging
import torch
from torch._dynamo import register_backend
from .common import fake_tensor_unsupported
@register_backend
@fake_tensor_unsupported
def aio(model, inputs):
    model.print_readable()
    try:
        with torch.no_grad():
            traced_model = torch.jit.trace(model.eval(), inputs)
            frozen_model = torch.jit.freeze(traced_model)
            return frozen_model
    except Exception as ex:
        log.warning("JIT trace failed during the optimize process.")
        log.warning(print(ex))
        return model
```
I'm running the Llama model from the Transformers repo at tag v4.30.1 with the following script:
```python
import argparse
import os
import sys
import time
import datetime
import torch._dynamo.config
import transformers
import torch
import torch._dynamo
from torch.autograd.profiler import profile
import traceback as tb
import logging
default_input_texts = ("Below is an instruction that describes a task."
"Write a response that appropriately completes the request.\r\n\r\n"
"### Instruction:\r\nList three technologies that make life easier.\r\n\r\n### Response:")
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model_path",
type=str, required=None,
help="Recovered Model path")
parser.add_argument("-a", "--aio",
dest="aio", action='store_true',
help="Use AIO backend")
parser.set_defaults(aio=False)
parser.add_argument("-i", "--input_prompt",
type=str, default=default_input_texts,
help="Input prompt")
return parser.parse_args()
def main():
args = parse_args()
torch._dynamo.config.cache_size_limit = 128
print("Loading model and tokenizer...")
alpaca_model = transformers.LlamaForCausalLM.from_pretrained(args.model_path)
alpaca_tokenizer = transformers.LlamaTokenizer.from_pretrained(args.model_path)
alpaca_model.config.pad_token_id = alpaca_tokenizer.pad_token_id = 0 #unk
alpaca_model.config.bos_token_id = 1
alpaca_model.config.eos_token_id = 2
print("Torch compile...")
alpaca_model = alpaca_model.eval()
alpaca_model = torch.compile(alpaca_model, backend="air", dynamic=True, fullgraph=False)
inputs = alpaca_tokenizer(args.input_prompt, return_tensors="pt")
outputs = alpaca_model.generate(inputs=inputs.input_ids, max_new_tokens=100)
output_text = alpaca_tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print("-- Alpaca output --")
print("{}\n\n".format(output_text))
```
One of the graphs that `torch.compile()` produces is:
```
class GraphModule(torch.nn.Module):
    def forward(self, s0 : torch.SymInt, L_attention_mask_ : torch.Tensor):
        l_attention_mask_ = L_attention_mask_
        # File: /onspecta/transformers/src/transformers/models/llama/modeling_llama.py:737, code: position_ids = attention_mask.long().cumsum(-1) - 1
        long = l_attention_mask_.long()
        cumsum = long.cumsum(-1); long = None
        sub = cumsum - 1; cumsum = None
        # File: /onspecta/transformers/src/transformers/models/llama/modeling_llama.py:738, code: position_ids.masked_fill_(attention_mask == 0, 1)
        eq = l_attention_mask_ == 0; l_attention_mask_ = None
        masked_fill_ = sub.masked_fill_(eq, 1); eq = None
        return (sub,)
```
Here the second argument is `s0 : torch.SymInt`, which isn't used later. I think it should be optimized out by dead code elimination; I tried to call `eliminate_dead_code` on the module, but it doesn't do anything. This is troublesome since `torch.jit.trace` doesn't support `SymInt` inputs.
This bug occurs many times in this model; I pasted only one subgraph where it occurs since it is short.
The problem doesn't occur on the v2.0.0 tag, but happens on `400c4de53bb7b36066aef381313ed71e4a877e95`.
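As a possible workaround in the backend, the unused placeholders could be dropped before tracing; this is only a sketch (it assumes the backend can change the compiled callable's signature via a wrapper, which I have not verified against how dynamo invokes it):
```python
import torch

def strip_unused_placeholders(gm: torch.fx.GraphModule):
    placeholders = [n for n in gm.graph.nodes if n.op == "placeholder"]
    keep = [i for i, n in enumerate(placeholders) if len(n.users) > 0]
    for n in placeholders:
        if len(n.users) == 0:       # e.g. the stray SymInt input above
            gm.graph.erase_node(n)
    gm.recompile()

    def wrapper(*args):             # preserve the original calling convention
        return gm(*[args[i] for i in keep])
    return wrapper
```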
### Versions
main branch
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,191 | 108,445 |
_foreach_copy_ with scalar second arg
|
feature, module: optimizer, triaged, actionable, module: mta
|
### 🚀 The feature, motivation and pitch
There are a lot of uses (specifically in the optimizer) for launching a single kernel to fill multiple tensors with a scalar. Other foreach ops can handle scalar args and broadcasting as well.
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @mcarilli
| 3 |
1,192 | 108,444 |
[quant][pt2e] Refactor annotate functions for binary ops
|
fb-exported, release notes: quantization, release notes: AO frontend
|
Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//executorch/backends/xnnpack/test:test_xnnpack_ops
```
Reviewed By: jerryzh168
Differential Revision: D48763230
| 5 |
1,193 | 108,442 |
Torch compile generates incorrect graph on Llama model
|
high priority, triaged, module: regression, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
We have a Dynamo backend defined similarly to IPEX, which traces and freezes the model (however, the problem is general):
```python
import importlib
import logging
import torch
from torch._dynamo import register_backend
from .common import fake_tensor_unsupported
@register_backend
@fake_tensor_unsupported
def aio(model, inputs):
    model.print_readable()
    try:
        with torch.no_grad():
            traced_model = torch.jit.trace(model.eval(), inputs)
            frozen_model = torch.jit.freeze(traced_model)
            return frozen_model
    except Exception as ex:
        log.warning("JIT trace failed during the optimize process.")
        log.warning(print(ex))
        return model
```
I'm running the Llama model from the Transformers repo at tag `v4.30.1` with the following script:
```python
import argparse
import os
import sys
import time
import datetime
import torch._dynamo.config
import transformers
import torch
import torch._dynamo
from torch.autograd.profiler import profile
import traceback as tb
import logging
default_input_texts = ("Below is an instruction that describes a task."
"Write a response that appropriately completes the request.\r\n\r\n"
"### Instruction:\r\nList three technologies that make life easier.\r\n\r\n### Response:")
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model_path",
type=str, required=None,
help="Recovered Model path")
parser.add_argument("-a", "--aio",
dest="aio", action='store_true',
help="Use AIO backend")
parser.set_defaults(aio=False)
parser.add_argument("-i", "--input_prompt",
type=str, default=default_input_texts,
help="Input prompt")
return parser.parse_args()
def main():
args = parse_args()
torch._dynamo.config.cache_size_limit = 128
print("Loading model and tokenizer...")
alpaca_model = transformers.LlamaForCausalLM.from_pretrained(args.model_path)
alpaca_tokenizer = transformers.LlamaTokenizer.from_pretrained(args.model_path)
alpaca_model.config.pad_token_id = alpaca_tokenizer.pad_token_id = 0 #unk
alpaca_model.config.bos_token_id = 1
alpaca_model.config.eos_token_id = 2
print("Torch compile...")
alpaca_model = alpaca_model.eval()
alpaca_model = torch.compile(alpaca_model, backend="air", dynamic=True, fullgraph=False)
inputs = alpaca_tokenizer(args.input_prompt, return_tensors="pt")
outputs = alpaca_model.generate(inputs=inputs.input_ids, max_new_tokens=100)
output_text = alpaca_tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print("-- Alpaca output --")
print("{}\n\n".format(output_text))
```
When the model gets to our dynamo backend, PyTorch throws an error (inside `torch.jit.trace`):
```
INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":621, please report a bug to PyTorch. We don't have an op for aten::__and__ but it isn't a special case. Argument types: Tensor, bool,
Candidates:
aten::__and__.Scalar(Tensor self, Scalar other) -> Tensor
aten::__and__.Tensor(Tensor self, Tensor other) -> Tensor
aten::__and__.bool(bool a, bool b) -> bool
aten::__and__.int(int a, int b) -> int
```
The problem lies in this part (I printed the model with the `print_readable()` call in our backend):
```
# File: /onspecta/transformers/src/transformers/models/llama/modeling_llama.py:234, code: if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
size_12 = matmul_1.size()
getitem_47 = size_12[2]; size_12 = None
eq_4 = getitem_47 == getitem_7; getitem_47 = None
and__6 = True & True
and__7 = 1 & eq_4; eq_4 = None
and__8 = and__7 & True; and__7 = None
not__2 = _operator_not_(and__8); and__8 = None
```
and__7 = 1 & eq_4; eq_4 = None <----- this line is wrong there is an Tensor on left hand side and bool on the right hand side, the code generated by torch.dynamo is incorrect. Problem doesn't occur on `v2.0.0` tag, but happens on `400c4de53bb7b36066aef381313ed71e4a877e95`
The original code from the model in this place is:
```
if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
    raise ValueError(
        f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
        f" {attn_output.size()}"
    )
```
### Versions
main branch
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,194 | 108,432 |
Wrong result of first run with torch.compile() when backend is using torch.jit.trace() and model has inplace operators
|
oncall: jit, triaged, oncall: pt2
|
### 🐛 Describe the bug
We have a Dynamo backend defined similarly to IPEX, which traces and freezes the model:
```python
import importlib
import logging
import torch
from torch._dynamo import register_backend
from .common import fake_tensor_unsupported
@register_backend
@fake_tensor_unsupported
def aio(model, inputs):
    try:
        with torch.no_grad():
            traced_model = torch.jit.trace(model.eval(), inputs)
            frozen_model = torch.jit.freeze(traced_model)
            return frozen_model
    except Exception as ex:
        log.warning("JIT trace failed during the optimize process.")
        log.warning(print(ex))
        return model
```
Then the following script:
```python
import torch
class Inplace(torch.nn.Module):
    def __init__(self):
        super(Inplace, self).__init__()
    def forward(self, input, input2):
        input.add_(input2)
        return input.add_(input2)
inplace = Inplace()
compiled = torch.compile(inplace, backend="aio")
res = inplace(torch.tensor(1), torch.tensor(2))
print(res)
inputs = (torch.tensor(1), torch.tensor(2))
res = compiled(torch.tensor(1), torch.tensor(2))
print(res)
res = compiled(torch.tensor(1), torch.tensor(2))
print(res)
```
gives following results:
```
tensor(5)
tensor(9)
tensor(5)
```
The second number (the first run of the compiled model) is wrong: the model gives different results on the first and subsequent runs. The problem happens because we used in-place operators; it works correctly when we use the normal `add` op. This worked correctly on v2.0.0 and fails when we rebased our changes onto `400c4de53bb7b36066aef381313ed71e4a877e95`.
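A minimal illustration of why in-place ops interact badly here: `torch.jit.trace` actually runs the function on the example inputs while recording, so they get mutated during compilation, which is a likely contributor to the inflated result on the first compiled call:
```python
import torch

a, b = torch.tensor(1), torch.tensor(2)
traced = torch.jit.trace(lambda x, y: x.add_(y), (a, b), check_trace=False)
print(a)  # tensor(3): `a` was already mutated by the single recording run
```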
### Versions
PyTorch version: 2.1.0a0+git6d18fe9-dev
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.20.0-rc5
Libc version: glibc-2.35
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:28:59) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per socket: 128
Socket(s): 2
Stepping: r3p1
CPU max MHz: 3000.0000
CPU min MHz: 1000.0000
BogoMIPS: 50.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
L1d cache: 16 MiB (256 instances)
L1i cache: 16 MiB (256 instances)
L2 cache: 256 MiB (256 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-127
NUMA node1 CPU(s): 128-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==2.1.0a0+git6d18fe9.dev
[pip3] torchvision==0.16.0a0+02d3d6d
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.1.0a0+git6d18fe9.dev pypi_0 pypi
[conda] torchvision 0.16.0a0+02d3d6d pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
1,195 | 108,430 |
Revert D48801487: Multisect successfully blamed "D48801487: [export] Copy gm before calling PassManager" for test or build failures
|
fb-exported
|
Summary:
This diff is reverting D48801487
D48801487: [export] Copy gm before calling PassManager by digantdesai has been identified to be causing the following test or build failures:
Tests affected:
- [on_device_ai/Assistant/Jarvis/compiler:test_passes - test_calculate_peak_memory_pass (on_device_ai.Assistant.Jarvis.compiler.tests.test_passes.TestMemPlanningPasses)](https://www.internalfb.com/intern/test/844425014917237/)
Here's the Multisect link:
https://www.internalfb.com/multisect/2930767
Here are the tasks that are relevant to this breakage:
We're generating a revert to back out the changes in this diff, please note the backout may land if someone accepts it.
If you believe this diff has been generated in error you may Commandeer and Abandon it.
Test Plan: NA
Differential Revision: D48901081
| 8 |
1,196 | 108,420 |
[dynamo] Add DictView variable tracker
|
open source, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108420
* #110524
* #111196
* #110523
* #110522
As per title
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 1 |
1,197 | 108,407 |
torch.einsum() computes different results on cpu and cuda on A100 GPU.
|
module: cuda, triaged, module: linear algebra
|
### 🐛 Describe the bug
**Dear Developers:**
Currently, `torch.einsum()` is commonly used to compute the attention scores of the query and key matrices. However, we found that this operator computes different results on CPU and CUDA on an **A100 GPU**, while a V100 doesn't have this problem.
Below is the reproduction code, where `n` is the sequence length, `h` is the number of heads, and `d` is the hidden size per head. We fixed `h, d = 64, 96` and tested the results for different `n`.
```python
import torch

for n in [1, 10, 50, 64, 100, 200, 500]:
    h, d = 64, 96
    qq = torch.ones((1, n, h, d), dtype=torch.float32)  # use ones tensors
    kk = torch.ones((1, n, h, d), dtype=torch.float32)
    vv = torch.ones((1, n, h, d), dtype=torch.float32)  # unused in this repro
    kk /= torch.sqrt(torch.tensor(d))
    scores1 = torch.einsum(
        "bthd,bshd->bhts", qq.cuda(), kk.cuda()
    ).cpu()  # cuda
    scores2 = torch.einsum("bthd,bshd->bhts", qq, kk)  # cpu
    print(n, (scores1 - scores2).abs().max())  # max absolute difference
```
A100 results: when `n >= 64`, the problem arises.
```bash
1 tensor(7.6294e-06)
10 tensor(0.)
50 tensor(0.)
64 tensor(0.0011)
100 tensor(0.0011)
200 tensor(0.0011)
500 tensor(0.0011)
```
V100 results: almost no error.
```bash
1 tensor(7.6294e-06)
10 tensor(7.6294e-06)
50 tensor(7.6294e-06)
64 tensor(0.)
100 tensor(7.6294e-06)
200 tensor(7.6294e-06)
500 tensor(7.6294e-06)
```
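Differences of this magnitude on Ampere GPUs are often caused by TF32 matmul acceleration; a minimal check, assuming TF32 is enabled in this environment (the report does not confirm this), is to disable it and re-run the comparison:
```python
import torch

# Assumption: the ~1e-3 gap on A100 comes from TF32 matmuls (an Ampere-only feature).
# Disabling TF32 should bring the CUDA result back to within float32 rounding of CPU.
torch.backends.cuda.matmul.allow_tf32 = False
# Equivalent on recent releases:
# torch.set_float32_matmul_precision("highest")

n, h, d = 100, 64, 96
qq = torch.ones((1, n, h, d), dtype=torch.float32)
kk = torch.ones((1, n, h, d), dtype=torch.float32) / torch.sqrt(torch.tensor(d))
scores_gpu = torch.einsum("bthd,bshd->bhts", qq.cuda(), kk.cuda()).cpu()
scores_cpu = torch.einsum("bthd,bshd->bhts", qq, kk)
print((scores_gpu - scores_cpu).abs().max())  # expected to shrink if TF32 was the cause
```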
### Versions
Python Envs
```
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230812+cu121
[pip3] torch-tensorrt==1.5.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.0.0
[conda] numpy 1.24.4 pypi_0 pypi
```
cc @ptrblck @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 5 |
1,198 | 108,406 |
enhance documentation around the developer build
|
triaged, topic: docs
|
### 📚 The doc issue
Add the following, or something similar, to the source build docs:
```
To resolve the issue of your local folder not being picked up first in the Python path when building PyTorch, you can try the following steps:
Remove the previously installed package:
Since you have already run python setup.py develop, you need to undo that step first. To do this, go to the root directory of the PyTorch project and run:
python setup.py develop --uninstall
This will remove the package installed with develop mode.
Install the package in editable mode again:
Navigate to the root directory of your PyTorch project and run:
pip install -e .
This will install the package in editable mode, which will create a link to your local code, ensuring that your local folder is picked up first in the Python path.
```
I can implement this
### Suggest a potential alternative/fix
_No response_
| 1 |
1,199 | 108,404 |
multiple AMD GPUs
|
needs reproduction, module: rocm, triaged
|
### 🐛 Describe the bug
When I run multiple GPUs using ROCm, the second GPU does not work.
I use the docker image rocm/pytorch:latest.
I have two GPUs installed:
> rocm-smi
```
========================= ROCm System Management Interface =========================
=================================== Concise Info ===================================
GPU Temp (DieEdge) AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU%
0 31.0c 5.0W 0Mhz 96Mhz 0% auto 289.0W 0% 0%
1 43.0c 5.0W 0Mhz 96Mhz 0% auto 289.0W 0% 0%
====================================================================================
=============================== End of ROCm SMI Log ================================
```
They are both the same type of GPU:
```
$ lspci|grep VGA
03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] (rev c0)
07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] (rev c0)
```
Within the Docker image, I run Python:
```
>>> import torch
>>> print(torch.tensor([1.0, 2.0, 3.0], device="cuda:0"))
tensor([1., 2., 3.], device='cuda:0')
```
The first device executes simple functions with no problem, but the second device gives this error:
```
>>> print(torch.tensor([1.0, 2.0, 3.0], device="cuda:1"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_tensor.py", line 427, in __repr__
    return torch._tensor_str._str(self, tensor_contents=tensor_contents)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_tensor_str.py", line 636, in _str
    return _str_intern(self, tensor_contents=tensor_contents)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_tensor_str.py", line 567, in _str_intern
    tensor_str = _tensor_str(self, indent)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_tensor_str.py", line 327, in _tensor_str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_tensor_str.py", line 115, in __init__
    nonzero_finite_vals = torch.masked_select(
RuntimeError: HIP error: the operation cannot be performed in the present state
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing HIP_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
```
After that, the GPU% of that GPU shoots up to 99% and stays there:
```
========================= ROCm System Management Interface =========================
=================================== Concise Info ===================================
GPU Temp (DieEdge) AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU%
0 33.0c 5.0W 0Mhz 96Mhz 0% auto 289.0W 2% 0%
1 54.0c 60.0W 2575Mhz 96Mhz 0% auto 289.0W 1% 99%
====================================================================================
=============================== End of ROCm SMI Log ================================
```
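A quick sanity check before the failing call, to confirm both devices are visible to the ROCm build (a diagnostic sketch only, not a fix):
```python
import torch

# Both Navi 21 cards should be enumerated; if only one shows up here,
# the problem is device visibility rather than the tensor op itself.
print(torch.cuda.is_available())           # True on a working ROCm build
print(torch.cuda.device_count())           # expect 2
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
    print(torch.ones(3, device=f"cuda:{i}").sum())  # minimal kernel launch per device
```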
### Versions
PyTorch version: 2.0.0a0+git70f6d0c
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.6.31061-8c743ae5d
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 16.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-5.6.0 23243 be997b2f3651a41597d7a41441fff8ade4ac59ac)
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-31-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 6900 XT
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.6.31061
MIOpen runtime version: 2.20.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9900 CPU @ 3.10GHz
Stepping: 13
CPU MHz: 799.992
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 6199.99
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.18.5
[pip3] torch==2.0.0a0+git70f6d0c
[pip3] torchvision==0.15.0a0+c206a47
[conda] No relevant packages
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 11 |
1,200 | 108,402 |
[dynamo] Reduce cache size to 4
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #108402
* #108161
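For context, a hedged sketch of how the recompile cache limit is exposed to users (the assumption here is that this PR adjusts the default of `torch._dynamo.config.cache_size_limit`; the exact setting touched by the PR is not shown in this stub):
```python
import torch._dynamo as dynamo

# Per-code-object cap on compiled variants; once exceeded, Dynamo stops
# recompiling that frame and falls back to eager with a warning.
print(dynamo.config.cache_size_limit)

# Users hitting "cache_size_limit reached" warnings can raise it explicitly:
dynamo.config.cache_size_limit = 16
```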
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 1 |