Serial Number (int64, 1-6k) | Issue Number (int64, 75.6k-112k) | Title (string, lengths 3-357) | Labels (string, lengths 3-241, ⌀) | Body (string, lengths 9-74.5k, ⌀) | Comments (int64, 0-867) |
---|---|---|---|---|---|
1,801 | 106,101 |
EMFORMER_RNNT not compilable
|
triaged, oncall: pt2, module: dynamic shapes, module: dynamo
|
### 🚀 The feature
## Repro
```python
import torch
import torchaudio
from torchaudio.io import StreamReader
bundle = torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH
feature_extractor = bundle.get_streaming_feature_extractor()
decoder = bundle.get_decoder()
token_processor = bundle.get_token_processor()
# random values
# This works
decoder(torch.randn(80, 80), length=torch.tensor([1.0]), beam_width=10)
decoder = torch.compile(decoder, fullgraph=True)
# This does not work
decoder(torch.randn(80, 80), length=torch.tensor([1.0]), beam_width=10)
```
## Error
```
(nightly) marksaroufim@marksaroufim-mbp zzz % ls
add.ff model.ff test.py test4.py
buck2-aarch64-apple-darwin.zst tes2.py test3.py
(nightly) marksaroufim@marksaroufim-mbp zzz %
(nightly) marksaroufim@marksaroufim-mbp zzz % python test4.py
Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
/Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/executorch/extension/pytree/__init__.py:25: UserWarning: Unable to import executorch.extension.pytree, using native torch pytree instead.
warnings.warn(
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] WON'T CONVERT forward /Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/torchaudio/models/rnnt_decoder.py line 267
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] due to:
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] Traceback (most recent call last):
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] File "/Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/torch/_inductor/fx_passes/split_cat.py", line 125, in normalize_cat_default
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] assert all(ndim == x.meta["example_value"].dim() for x in tensors)
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] AssertionError:
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING]
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING]
[2023-07-26 18:57:18,049] torch._dynamo.convert_frame: [WARNING] converting frame raised error, suppressing error
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] WON'T CONVERT transcribe /Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/torchaudio/models/rnnt.py line 580
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] due to:
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] Traceback (most recent call last):
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] File "/Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/torch/_inductor/fx_passes/split_cat.py", line 125, in normalize_cat_default
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] assert all(ndim == x.meta["example_value"].dim() for x in tensors)
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] AssertionError:
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING]
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING]
[2023-07-26 18:57:21,172] torch._dynamo.convert_frame: [WARNING] converting frame raised error, suppressing error
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] WON'T CONVERT forward /Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/torchaudio/models/rnnt.py line 220
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] due to:
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] Traceback (most recent call last):
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] File "/Users/marksaroufim/opt/anaconda3/envs/nightly/lib/python3.10/site-packages/torch/_inductor/fx_passes/split_cat.py", line 125, in normalize_cat_default
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] assert all(ndim == x.meta["example_value"].dim() for x in tensors)
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] AssertionError:
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING]
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING]
[2023-07-26 18:57:24,048] torch._dynamo.convert_frame: [WARNING] converting frame raised error, suppressing error
```
### Motivation, pitch
It's a popular model that folks have been asking me about, and I'd like to get it deployed to a mobile device, starting with the decoder.
Here's my progress so far:
* torch.compile was graph breaking on jit annotate
```
@@ -236,7 +236,7 @@ class RNNTBeamSearch(torch.nn.Module):
b_hypos = self._init_b_hypos(device) if hypo is None else hypo
for t in range(n_time_steps):
a_hypos = b_hypos
- b_hypos = torch.jit.annotate(List[Hypothesis], [])
+ b_hypos = []
key_to_b_hypo: Dict[str, Hypothesis] = {}
symbols_current_t = 0
@@ -292,7 +292,7 @@ class RNNTBeamSearch(torch.nn.Module):
enc_out, _ = self.model.transcribe(input, length)
return self._search(enc_out, None, beam_width)
- @torch.jit.export
+ # @torch.jit.export
def infer(
self,
input: torch.Tensor,
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 7 |
1,802 | 106,099 |
Hack FSDP to work with CPU training
|
module: cpu, Stale, release notes: distributed (fsdp)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106099
Quick demo of how FSDP on CPU can work. We need to improve the Stream stub and add an Event stub that are no-ops for CPU, and fall back to collectives that Gloo does support.
To actually enable this, we may not want Gloo to fall back to allgather / reduce + scatter when using the unsupported collectives, and maybe handle this at the FSDP level instead. But that seems a little more brittle, since future changes to FSDP can again use these unsupported collectives directly, causing CPU training to break.
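As a rough illustration of the kind of no-op stub described above (the class name and methods are hypothetical, loosely mirroring the `torch.cuda.Event` interface; this is not the code from the PR):
```python
class _NoopEvent:
    """Hypothetical Event stand-in for CPU-only training (illustrative sketch)."""

    def record(self, stream=None):
        pass  # nothing to record: there is no accelerator stream on CPU

    def wait(self, stream=None):
        pass  # no cross-stream synchronization is needed on CPU

    def query(self):
        return True  # all "stream" work is always complete

    def synchronize(self):
        pass  # synchronization is a no-op on CPU
```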
Differential Revision: [D47821700](https://our.internmc.facebook.com/intern/diff/D47821700/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 4 |
1,803 | 106,096 |
DO NOT MERGE! Test if backwards cumprod is broken even when falling back on cpu on macos 13.2.1
|
triaged, open source, Stale, topic: not user facing, ciflow/mps
|
I do not have macOS 13.2.1, so I am testing this here on the CI. If there is a better way, please let me know. Please do not bother triaging this. Please also unassign this PR if possible (I don't think I have permissions to do it).
| 4 |
1,804 | 106,081 |
[do not review][Dynamo] Wait for lazy accumulation of guards for input tensors
|
Stale, topic: not user facing, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106081
Looking for what CI shows up here, and how bad the problem is. Trying to check the complexity of solving - https://github.com/pytorch/pytorch/issues/106067
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ipiszy
| 2 |
1,805 | 106,073 |
Potential lack of CI testing on older NVIDIA GPU
|
module: ci, triaged
|
## Issue description
Recently, we observed a flaky issue with the AWS G3 runner using the Tesla M60 GPU: https://github.com/pytorch/pytorch/issues/105721#event-9928505839. The issue was mitigated by moving the job to the newer G5 runner using the A10G. Because the actual root cause is unclear, running on G5 could hide actual issues if it turns out to be something in PyTorch, so this issue is here to track any future investigations in this area.
Here is an example failure where `nvidia-smi` hangs:
* [linux-bionic-cuda11.8-py3.9-gcc7 / test (multigpu, 1, 1, linux.16xlarge.nvidia.gpu)](https://hud.pytorch.org/pytorch/pytorch/commit/33b855e9069ddc98399c0dfd362f18df9503b66f)
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 1 |
1,806 | 106,067 |
Tensors always get 0/1 specialization guards, even if they're not used
|
triaged, oncall: pt2, module: dynamic shapes, module: guards
|
### 🐛 Describe the bug
This recompiles but it shouldn't:
```
import torch
@torch.compile(backend="eager")
def f(x, y):
return y + 2
f(torch.randn(0), torch.randn(2))
f(torch.randn(2), torch.randn(2))
```
`TORCHDYNAMO_PRINT_GUARDS=1 TORCH_LOGS=guards,dynamic,recompiles python n.py`
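Alternatively, the recompile can be observed programmatically; a minimal sketch using dynamo's internal `CompileCounter` testing helper (not part of the original report):
```python
import torch
import torch._dynamo.testing

cnt = torch._dynamo.testing.CompileCounter()

@torch.compile(backend=cnt)
def f(x, y):
    return y + 2

f(torch.randn(0), torch.randn(2))
f(torch.randn(2), torch.randn(2))
# frame_count == 2 means f was recompiled for the second call,
# which is the behavior this issue argues should not happen.
print(cnt.frame_count)
```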
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,807 | 106,050 |
"Graph break: inline in skipfiles:" is a bad message
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
TORCH_LOGS=graph_breaks should produce user-understandable logs, but a log like `Graph break: inline in skipfiles: LazyGetItemMixin.__getitem__ | __getitem__ ` is completely impenetrable to any standard user. Need more info.
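For reference, a minimal sketch that triggers a graph break and surfaces this class of log (not necessarily the exact skipfiles message quoted above), assuming the `graph_breaks` option of `torch._logging.set_logs`, which mirrors `TORCH_LOGS=graph_breaks`:
```python
import torch
import torch._logging

# equivalent in spirit to running with TORCH_LOGS=graph_breaks
torch._logging.set_logs(graph_breaks=True)

@torch.compile
def f(x):
    print("this builtin is unsupported and forces a graph break")
    return x + 1

f(torch.randn(3))
```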
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,808 | 106,046 |
Avoid incrementing refcount of `grad_fn` in `unpack_list`
|
module: autograd, triaged, module: mta
|
https://github.com/pytorch/pytorch/pull/105504 added an argument of `shared_ptr<Node>` which we ideally would `std::move` to avoid bumping refcount but it turned out impossible as we call the lambda multiple times.
rel:
- https://github.com/pytorch/pytorch/blob/c4b7311fc26a878ee594b6a65a66b38dfe22db86/tools/autograd/templates/Functions.h#L28-L45
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mcarilli
| 0 |
1,809 | 106,031 |
torch._dynamo.export does not support symbolic int inputs
|
triaged, oncall: pt2, module: dynamic shapes, module: export
|
### 🐛 Describe the bug
```
import torch
def f(x, i):
return x.view(i) + 2
r = torch._dynamo.export(f, torch.randn(3, 3), 9)
```
prints
```
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] Summary of dimension constraints:
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] The following dimensions have been specialized and CANNOT be dynamic.
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] ```
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] def specializations(x, i):
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] # x:
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] assert x.size()[0] == 3
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] assert x.size()[1] == 3
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO] ```
[2023-07-26 07:44:09,382] torch._dynamo.eval_frame: [INFO]
```
There isn't really any reason why the input 9 couldn't be treated symbolically though.
You can sort of fake it with this patch
```
diff --git a/fbcode/caffe2/torch/_dynamo/eval_frame.py b/fbcode/caffe2/torch/_dynamo/eval_frame.py
--- a/fbcode/caffe2/torch/_dynamo/eval_frame.py
+++ b/fbcode/caffe2/torch/_dynamo/eval_frame.py
@@ -966,7 +966,7 @@
assume_static_by_default = True
with patch(f"{__name__}.most_recent_backend", None), config.patch(
summarize_dim_constraints=True,
- specialize_int=True,
+ specialize_int=False,
assume_static_by_default=assume_static_by_default,
automatic_dynamic_shapes=False,
capture_dynamic_output_shape_ops=True,
diff --git a/fbcode/caffe2/torch/_dynamo/variables/builder.py b/fbcode/caffe2/torch/_dynamo/variables/builder.py
--- a/fbcode/caffe2/torch/_dynamo/variables/builder.py
+++ b/fbcode/caffe2/torch/_dynamo/variables/builder.py
@@ -1082,12 +1082,6 @@
)
self.tx.output.unspec_variable_map[self.name] = unspec_var
if not is_constant_source(self.get_source()):
- if self.tx.export and not isinstance(self.get_source(), LocalSource):
- raise AssertionError(
- "Dynamo attempts to add additional input during export: value={}, source={}".format(
- wrapped_value, self.get_source()
- )
- )
fake_tensor_value = None
if isinstance(unspec_var, ConstantVariable):
example_value = unspec_var.value
diff --git a/fbcode/caffe2/torch/fx/experimental/symbolic_shapes.py b/fbcode/caffe2/torch/fx/experimental/symbolic_shapes.py
--- a/fbcode/caffe2/torch/fx/experimental/symbolic_shapes.py
+++ b/fbcode/caffe2/torch/fx/experimental/symbolic_shapes.py
@@ -1480,7 +1480,11 @@
self.symbol_to_source = symbol_to_source
def print_source(self, source) -> str:
- return f"dynamic_dim({source.base.name()}, {source.idx})"
+ from torch._dynamo.source import TensorPropertySource
+ if isinstance(source, TensorPropertySource):
+ return f"dynamic_dim({source.base.name()}, {source.idx})"
+ else:
+ return "TODO"
def _print_Symbol(self, expr) -> str:
assert isinstance(expr, sympy.Symbol), str(type(expr))
```
but there are still some gaps (see the TODO above); also, argument matching doesn't work:
```
eval_frame.py", line 924, in produce_matching
id(arg) in dict_of_source_args
AssertionError: Dynamo input and output is a strict subset of traced input/output
```
cc @msaroufim @wconstab @bdhirsh @anijain2305 @tugsbayasgalan @voznesenskym
### Versions
main
| 1 |
1,810 | 106,029 |
[FSDP] Investigate sharded GPU gradient lifetime when CPU offloading
|
triaged, module: fsdp
|
I wonder if we are keeping a reference to the sharded GPU gradient even after the gradient offload to CPU has finished, resulting in unnecessary GPU memory usage. More broadly, we need to do some memory profiling for CPU offloading since there have been reports of it not saving GPU memory.
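A rough sketch of the kind of memory check implied here (illustrative only; the FSDP forward/backward/optimizer step itself is elided and assumed to run with CPU offloading enabled on a CUDA machine):
```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one forward/backward/optimizer step of the FSDP-wrapped model here ...
# if sharded GPU gradients stayed alive after the offload, the "current"
# number would remain high after the step completes
print(f"current: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
print(f"peak:    {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
```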
cc @zhaojuanmao @mrshenli @rohan-varma
| 0 |
1,811 | 106,027 |
DISABLED test_profiler_cuda_sync_events (__main__.TestProfiler)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/profiler%2Ftest_profiler.py%3A%3ATestProfiler%3A%3Atest_profiler_cuda_sync_events)).
Introduced by https://github.com/pytorch/pytorch/pull/105187
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
1,812 | 106,026 |
DISABLED test_triton_template_with_epilogues_and_dynamic_shape (__main__.TestDoBench)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_max_autotune.py%3A%3ATestDoBench%3A%3Atest_triton_template_with_epilogues_and_dynamic_shape)).
Looks like it was introduced by https://hud.pytorch.org/pytorch/pytorch/commit/98956c5320534cb66fd0dd69bc632122e16adba9
@jataylo to look into this further
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
1,813 | 106,024 |
Make Intel GPUs available
|
triaged, open source, Stale, release notes: mps, ciflow/mps
|
Fixes #ISSUE_NUMBER
| 4 |
1,814 | 106,017 |
enable torch.device TorchFunctionMode
|
open source, Stale
|
Fixes #ISSUE_NUMBER
| 3 |
1,815 | 106,011 |
Misleading error message in multilabel_margin_loss when passing incompatible tensor dimensions
|
module: nn, triaged, actionable
|
### 🐛 Describe the bug
While using MultiLabelMarginLoss in PyTorch, I received a RuntimeError with a misleading error message. Here's the traceback
```
RuntimeError: Expected non-empty vector or matrix with optional 0-dim batch size, but got: [20, 9, 20]
```
The error message suggests that I'm passing an empty vector or matrix, but that's not the case. I have tried tensors of different shapes, and the error message remains the same.
In my first example, my input tensor shape was [20, 9, 20] and the target tensor was a 7-D tensor. In a second example, the input tensor shape was [1, 2, 3] and the target tensor was a 3-D tensor. In both cases, I suspect the issue is that the tensor shapes are incompatible.
While I understand that multilabel_margin_loss is expected to receive 1-D or 2-D tensors, the error message could be more descriptive. It would be helpful if it indicated the expected and received tensor dimensions, rather than just saying it expected a non-empty vector or matrix.
Here are the examples that led to this error:
```python
# First example
input_tensor = torch.rand([20, 9, 20])
target_tensor = torch.rand([2, 2, 2, 2, 2, 2, 2]) # 7-D tensor
loss_function = nn.MultiLabelMarginLoss()
loss = loss_function(input_tensor, target_tensor)
# Second example
input_tensor = torch.rand([1, 2, 3])
target_tensor = torch.rand([2, 2, 2]) # 3-D tensor
loss_function = nn.MultiLabelMarginLoss()
loss = loss_function(input_tensor, target_tensor)
```
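For contrast, a sketch of a call that matches the documented contract (2-D float scores plus an integer target of the same shape, with class indices padded by -1; shapes and values here are chosen purely for illustration):
```python
import torch
import torch.nn as nn

loss_fn = nn.MultiLabelMarginLoss()
scores = torch.randn(3, 5)                    # (N, C) float scores
# target holds class indices per sample, padded with -1 after the last label
target = torch.tensor([[1, 3, -1, -1, -1],
                       [0, 2, 4, -1, -1],
                       [2, -1, -1, -1, -1]])
loss = loss_fn(scores, target)
```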
Please let me know if there's any additional information you need or if there's a better way to approach this problem. If this behavior is expected and known, feel free to close this issue. Thank you!
### Versions
PyTorch version: 2.1.0.dev20230622+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 12.0.0-3ubuntu1~20.04.5
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2070
GPU 1: NVIDIA GeForce RTX 2070
GPU 2: NVIDIA GeForce RTX 2070
GPU 3: NVIDIA GeForce RTX 2070
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 1224.656
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4794.39
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230622+cu118
[pip3] torchaudio==2.1.0.dev20230622+cu118
[pip3] torchvision==0.16.0.dev20230622+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230622+cu118 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
1,816 | 106,010 |
[quant][fx] Fix node deletion bug during fx convert
|
triaged, open source, Stale, release notes: quantization, release notes: AO frontend
|
If a binary_op node `add_xx` is used by 2 nodes, and one of them is a `quantize_per_tensor_xx` node, it would be deleted during the `convert` / `lower_to_native_backend` step, which would cause an error, because `torch.fx` is trying to delete a node which is still used by another node.
This was found when I tried to quantize an EfficientNet model.
| 2 |
1,817 | 106,006 |
[torch.compile] assertion sometimes ignored with inductor backend
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
When the default backend is used, assertions are sometimes ignored.
This bug is a bit flaky - it disappeared when I tried to wrap it in try/catch - but was reduced from a non-toy example.
It does not manifest with the 'eager' backend selected.
This was discovered with the nightly 2.1.0.dev20230725+cu118 - the stable version crashes outright due to #96529
### Error logs
Not applicable, the bug manifests as **lack** of AssertionError
### Minified repro
### handbuilt repro
```
import torch
x, y = torch.ones(5), torch.zeros(5)
def incorrect(x,y):
assert torch.all(torch.eq(x,y)) #with assert False, it behaves as expected
print("not thrown")
return x+y
def correct(x,y):
try:
assert torch.all(torch.eq(x, y))
print("not thrown")
except:
print("throw")
return x+y
torch.compile(correct, backend='eager')(x,y) #throw, correct
torch._dynamo.reset()
torch.compile(correct, backend='inductor')(x,y) #throw, correct
torch._dynamo.reset()
torch.compile(incorrect, backend='inductor')(x,y) #not thrown, incorrect
torch._dynamo.reset()
torch.compile(incorrect, backend='eager')(x,y) #assertion error, correct
```
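For comparison (a side note, not a conclusion from this issue): `torch._assert` is the wrapper documented as symbolically traceable, so an assertion written with it is at least visible to the tracing stack. A minimal sketch:
```python
import torch

def checked(x, y):
    # torch._assert is the traceable counterpart of a bare Python assert
    torch._assert(bool(torch.all(torch.eq(x, y))), "tensors differ")
    return x + y

checked(torch.ones(5), torch.ones(5))  # passes; unequal inputs raise AssertionError
```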
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0.dev20230725+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.35
Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.109+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.33
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-triton==2.1.0+9e3e10c5ed
[pip3] torch==2.1.0.dev20230725+cu118
[pip3] torchaudio==2.1.0.dev20230725+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.16.0.dev20230725+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,818 | 106,003 |
[FSDPxMTPG] Migrate TestFSDPTraversal
|
Stale, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106003
* #106002
* #106001
* #106000
* #105999
* #105995
* #105991
* #105207
* #105206
* #105205
* #105180
* #105176
This test can be migrated to MTPG, saving ~4x test time.
Differential Revision: [D47784765](https://our.internmc.facebook.com/intern/diff/D47784765/)
| 2 |
1,819 | 106,002 |
[FSDPExecOrder] Migrate one test to MTPG
|
Stale, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106003
* __->__ #106002
* #106001
* #106000
* #105999
* #105995
* #105991
* #105207
* #105206
* #105205
* #105180
* #105176
Only a single test can be migrated, because the rest of the tests capture warnings (`warning.warnings`), which MTPG seems to have issues with. cc @kumpera
Differential Revision: [D47784763](https://our.internmc.facebook.com/intern/diff/D47784763/)
| 2 |
1,820 | 106,001 |
[FSDP test][ez] remove setUp()
|
Stale, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106003
* #106002
* __->__ #106001
* #106000
* #105999
* #105995
* #105991
* #105207
* #105206
* #105205
* #105180
* #105176
Only calls the super method, so not needed
Differential Revision: [D47784764](https://our.internmc.facebook.com/intern/diff/D47784764/)
| 2 |
1,821 | 106,000 |
[ComposablexMTPG] Migrate some composable tests to MTPG
|
Stale, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106003
* #106002
* #106001
* __->__ #106000
* #105999
* #105995
* #105991
* #105207
* #105206
* #105205
* #105180
* #105176
Per title
Differential Revision: [D47784762](https://our.internmc.facebook.com/intern/diff/D47784762/)
| 2 |
1,822 | 105,999 |
[FSDPxMTPG] Migrate one more test
|
Stale, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106003
* #106002
* #106001
* #106000
* __->__ #105999
* #105995
* #105991
* #105207
* #105206
* #105205
* #105180
* #105176
After Rodrigo's work to enable MTPG in the backward pass, we can migrate this test. The remaining tests are still on multiprocess due to things like numerical issues when moving them to MTPG.
On an 8-GPU A100 node, this test's runtime is cut from ~92s to ~8s.
Differential Revision: [D47784179](https://our.internmc.facebook.com/intern/diff/D47784179/)
| 3 |
1,823 | 105,997 |
Flatbuffer TorchScript files don't load in PyTorch Android Lite 1.13.1
|
oncall: jit
|
### 🐛 Describe the bug
When attempting to load a flatbuffer traced pytorch file, I get the following exception:
```
Caused by: com.facebook.jni.CppException: Flatbuffer input file but the build hasn't enabled flatbuffer
Exception raised from _load_mobile_from_bytes at /home/agunapal/pytorch/torch/csrc/jit/mobile/import.cpp:638 (most recent call first):
(no backtrace available)
at org.pytorch.LiteNativePeer.initHybrid(Native Method)
at org.pytorch.LiteNativePeer.<init>(LiteNativePeer.java:28)
at org.pytorch.LiteModuleLoader.load(LiteModuleLoader.java:30)
at ai.sensory.face.model.ImageProcessingModel.<init>(ImageProcessingModel.java:49)
at ai.sensory.face.model.FaceLivenessModel.<init>(FaceLivenessModel.java:42)
at ai.sensory.face.CaptureActivity.onCreate(CaptureActivity.java:195)
at android.app.Activity.performCreate(Activity.java:7327)
at android.app.Activity.performCreate(Activity.java:7318)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1271)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3096)
... 11 more
```
### Versions
[1.13.1](https://mvnrepository.com/artifact/org.pytorch/pytorch_android_lite/1.13.1)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,824 | 105,993 |
[aarch64][xplat/caffe2] Fix aarch64 build
|
fb-exported, Stale, topic: not user facing
|
Summary: Add missing dep to pull in sleef headers.
Test Plan:
```
$ buck2 build -c fbcode.arch=aarch64 fbsource//xplat/caffe2:cpukernel_avx2
```
Reviewed By: r1mikey, ajaymh
Differential Revision: D47772650
| 4 |
1,825 | 105,982 |
vmap and rnn/lstm "accessing '.data' under vmap transform is not allowed"
|
triaged, module: functorch
|
### 🐛 Describe the bug
Quick disclaimer before I start: This is literally the first time I ever posted a bug report on GitHub; I've done my best to be clear and complete and follow the guidelines, but plz forgive me if anything here is incomplete/unclear :X
I have a use case where it'd be very ( very ) nice to be able to run a collection of models ( really a bunch of differently-weighted versions of the same model ) in a vectorized way rather than looping through each of them individually. The models here all have the same architecture, and are all ( at any given time ) doing inference on the same batch of data - they differ only in their weights. The error here only occurs with the LSTM component of the model. The rest of the model is just a bunch of linear layers, which I have tested independently, and are not subject to this error - only the LSTM does this. Below is minimal code to reproduce the problem ( written following guidance provided [here](https://pytorch.org/tutorials/intermediate/ensembling.html) ).
```python
import torch as pt
BSIZE = 32
SEQLEN = 16
LSTMSIZE = 64
INPUTSIZE = 8
NUMMODELS = 4
def BuildLSTM( inputSize=INPUTSIZE, lstmSize=LSTMSIZE ):
return pt.nn.LSTM( input_size=INPUTSIZE, hidden_size=LSTMSIZE, batch_first=True )
BASELSTM = BuildLSTM().to( 'meta' )
def fLSTM( params, buffers, inputBatch ):
return pt.func.functional_call( BASELSTM, (params,buffers), (inputBatch,) )
dummyBatch = pt.rand( (BSIZE, SEQLEN, INPUTSIZE) ) #(32,16,8)
LSTMstack = [ BuildLSTM() for _ in range( NUMMODELS ) ] #len = 4
PARAMS,BUFFERS = pt.func.stack_module_state( LSTMstack )
pt.vmap( fLSTM, in_dims=(0,0,None) )( PARAMS, BUFFERS, dummyBatch )
```
The exact error produced is:
```
Traceback (most recent call last):
File "vmaptest.py", line 20, in <module>
pt.vmap( fLSTM, in_dims=(0,0,None) )( PARAMS, BUFFERS, dummyBatch )
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/_functorch/vmap.py", line 434, in wrapped
return _flat_vmap(
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/_functorch/vmap.py", line 619, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "vmaptest.py", line 18, in fLSTM
return pt.func.functional_call( BASELSTM, (params,buffers), (inputBatch,) )
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/_functorch/functional_call.py", line 143, in functional_call
return nn.utils.stateless._functional_call(
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/nn/utils/stateless.py", line 262, in _functional_call
return module(*args, **kwargs)
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 760, in forward
self._init_flat_weights()
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 139, in _init_flat_weights
self.flatten_parameters()
File "/home/USERNAME/miniconda3/envs/ENVNAME/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 167, in flatten_parameters
if (not isinstance(fw.data, Tensor) or not (fw.data.dtype == dtype) or not fw.data.is_cuda or not torch.backends.cudnn.is_acceptable(fw.data)):
RuntimeError: accessing `data` under vmap transform is not allowed
```
Superficially, this seems quite similar to: https://github.com/pytorch/pytorch/issues/103161, minus the jacrev stuff. I'll add a couple additional observations in a reply to this issue to avoid making this post excessively long. Environment details below.
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 2600 Six-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
BogoMIPS: 6799.27
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 384 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 16 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.0.1
[pip3] torchmetrics==0.11.4
[conda] blas 2.16 mkl conda-forge
[conda] libblas 3.8.0 16_mkl conda-forge
[conda] libcblas 3.8.0 16_mkl conda-forge
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch-nightly
[conda] liblapack 3.8.0 16_mkl conda-forge
[conda] liblapacke 3.8.0 16_mkl conda-forge
[conda] mkl 2020.2 256
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch 2.0.1 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 2.0.2 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.11.4 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.0.0 py38 pytorch
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 1 |
1,826 | 105,980 |
[DONOTMERGE][ROCm]Test MI210 CI Nodes
|
module: rocm, open source, ciflow/trunk, topic: not user facing, ciflow/periodic, keep-going, ciflow/slow
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 34 |
1,827 | 105,966 |
Test
|
Stale, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105966
* #105687
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 2 |
1,828 | 105,961 |
Error in Profiler : RuntimeError: Expected !config.profile_memory to be true, but got false
|
oncall: profiler
|
### 🐛 Describe the bug
The moment I set the `profile_memory` flag, or any other flag after it, to `True` in the `ProfilerConfig` constructor, I get the following error:
`RuntimeError: Expected !config.profile_memory to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)`
### Full log
```
========================================================================================================= test session starts =========================================================================================================
platform linux -- Python 3.8.10, pytest-7.3.1, pluggy-1.0.0
rootdir: /workspace/work
plugins: hypothesis-5.35.1, rerunfailures-11.1.2, shard-0.1.2, xdist-3.2.1, xdoctest-1.0.2
collected 2 items
Running 2 items in this shard
nsys_profiler_issue.py .F [100%]
============================================================================================================== FAILURES ===============================================================================================================
____________________________________________________________________________ test_NsysProfilingScope[t_shape0-experimental0-False-False-False-True-False] _____________________________________________________________________________
t_shape = (3, 3, 3), record_input_shape = False, profile_memory = True, with_stack = False, with_flops = False, with_modules = False, experimental = <torch._C._profiler._ExperimentalConfig object at 0x7f59d53ddbb0>
is_unit_testing = True
@pytest.mark.parametrize("record_input_shape", [False])
@pytest.mark.parametrize("profile_memory", [False,True])
@pytest.mark.parametrize("with_stack", [False])
@pytest.mark.parametrize("with_flops", [False])
@pytest.mark.parametrize("with_modules", [False])
@pytest.mark.parametrize("experimental", [torch._C._profiler._ExperimentalConfig()])
@pytest.mark.parametrize("t_shape", [(3, 3, 3)])
@torch.no_grad()
def test_NsysProfilingScope(
t_shape,
record_input_shape,
profile_memory,
with_stack,
with_flops,
with_modules,
experimental,
is_unit_testing=True,
):
t = torch.rand(t_shape).cuda()
module = Module().eval().cuda()
module = torch.jit.script(module)
out = module(t)
> with NsysProfilerScope(
record_input_shape, profile_memory, with_stack, with_flops, with_modules,experimental, is_unit_testing
):
nsys_profiler_issue.py:108:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nsys_profiler_issue.NsysProfilerScope object at 0x7f591b092a90>
def __enter__(self):
> _enable_profiler(self.profiler_config, self.activities, self.scopes)
E RuntimeError: Expected !config.profile_memory to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
nsys_profiler_issue.py:64: RuntimeError
======================================================================================================= short test summary info =======================================================================================================
FAILED nsys_profiler_issue.py::test_NsysProfilingScope[t_shape0-experimental0-False-False-False-True-False] - RuntimeError: Expected !config.profile_memory to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
===================================================================================================== 1 failed, 1 passed in 2.27s =====================================================================================================
```
### Reproducible script
Filename : `nsys_profiler_issue.py`
```
import torch
from torch.autograd import ProfilerConfig, ProfilerState, _disable_profiler, _enable_profiler
import pytest
from typing import Any
class NsysProfilerScope:
"""
Denotes the scope of code that nsys should profile when run with the --capture-range cudaProfilerApi and --capture-range-end stop command line flags.
Users must ensure that this scope is entered and exited ONLY ONCE during program execution.
Therefore, it is recommended that this scope be used at the highest level function that runs any PyTorch model.
Usage:
with NsysProfilerScope():
<code to be profiled>
"""
class_counter = 0 # Required to make sure multiple instances of the class are not created.
def __init__(
self,
record_input_shape: bool = False,
profile_memory: bool = False,
with_stack: bool = False,
with_flops: bool = False,
with_modules: bool = False,
experimental: bool = None,
is_unit_testing: bool = False,
):
"""
Args:
record_input_shape : bool flag to display shapes of tensor on nsys profile. Default = False
profile_memory : bool flag to display tensor memory allocation / deallocation on nsys. Expensive to trace , hence , turned off by default
with_stack : enable record source and information for the ops. Default = False
with_flops : enable estimated FLOPS of operators. Default = False
with_modules : record module hierarchy. Only supported for torchscript modules. Default = False
is_unit_testing: helps in resetting class counter to perform unit testing on different configurations
"""
super().__init__()
## Check only single instance of class is created
if not is_unit_testing:
NsysProfilerScope.class_counter += 1
if NsysProfilerScope.class_counter > 1:
raise RuntimeError("Multiple instances of NsysProfilerScope have been created ")
self.profiler_config = ProfilerConfig(
ProfilerState.NVTX,
record_input_shape,
profile_memory,
with_stack,
with_flops,
with_modules,
experimental,
)
self.activities = set()
self.scopes = set()
self.is_unit_testing = is_unit_testing
def __enter__(self):
_enable_profiler(self.profiler_config, self.activities, self.scopes)
torch.cuda.synchronize()
torch.cuda.profiler.cudart().cudaProfilerStart()
def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any):
torch.cuda.synchronize()
torch.cuda.profiler.cudart().cudaProfilerStop()
_disable_profiler()
class Module(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, t: torch.Tensor) -> torch.Tensor:
u = t + t
return u
@pytest.mark.parametrize("record_input_shape", [False])
@pytest.mark.parametrize("profile_memory", [False,True])
@pytest.mark.parametrize("with_stack", [False])
@pytest.mark.parametrize("with_flops", [False])
@pytest.mark.parametrize("with_modules", [False])
@pytest.mark.parametrize("experimental", [torch._C._profiler._ExperimentalConfig()])
@pytest.mark.parametrize("t_shape", [(3, 3, 3)])
@torch.no_grad()
def test_NsysProfilingScope(
t_shape,
record_input_shape,
profile_memory,
with_stack,
with_flops,
with_modules,
experimental,
is_unit_testing=True,
):
t = torch.rand(t_shape).cuda()
module = Module().eval().cuda()
module = torch.jit.script(module)
out = module(t)
with NsysProfilerScope(
record_input_shape, profile_memory, with_stack, with_flops, with_modules,experimental, is_unit_testing
):
other = module(t)
torch.allclose(out, other)
if __name__ == "__main__":
pytest.main()
```
### Reproducible command
`pytest nsys_profiler_issue.py`
### Versions
**Versions**
Pytorch NGC Container 23.04
cc @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
| 1 |
1,829 | 105,954 |
Ensure PRs are rebased on top of a recent commit (CI check)
|
triaged, module: devx
|
**Goal**: Reduce breaks from land races
**Context**
Today our PR staleness checks only verify that CI was run on the PR within the past X days, but they don't check how old the merge base is (despite many of us thinking that's what they do!).
Let's align reality with our expectations and make this actually happen. CI checks seem like a good option for this
**Exit criteria**
A CI check that fails if it notices that the merge base of the PR and trunk is more than X business days old (3 seems like a good value to use).
If the check fails, we'd ideally post a comment on the PR sharing the commands the dev should run to rebase their PR.
Defining this to use business days avoids requiring rebases right after the weekend.
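A minimal sketch of what such a check could look like (a hypothetical script, not the actual CI job; it counts calendar days rather than business days and assumes `origin/main` as trunk):
```python
import datetime
import subprocess

MAX_AGE_DAYS = 3  # threshold suggested above

def merge_base_age_days(base_ref: str = "origin/main") -> float:
    # commit shared by the PR branch (HEAD) and trunk
    sha = subprocess.check_output(["git", "merge-base", "HEAD", base_ref], text=True).strip()
    # committer timestamp of that merge base, in seconds since the epoch
    ts = int(subprocess.check_output(["git", "show", "-s", "--format=%ct", sha], text=True).strip())
    now = datetime.datetime.now(datetime.timezone.utc)
    then = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
    return (now - then).total_seconds() / 86400

if __name__ == "__main__":
    age = merge_base_age_days()
    if age > MAX_AGE_DAYS:
        raise SystemExit(f"Merge base is {age:.1f} days old; please rebase onto recent trunk.")
```
The CI job would run this against the PR checkout and, on failure, post the rebase instructions as a PR comment.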
cc @kit1980 @huydhn @clee2000
| 2 |
1,830 | 105,944 |
ReplayRecordTests.test_fn_call_args and others fail on my local devserver
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
The failures look like this
```
__________________________________________________ ReplayRecordTests.test_fn_call_args ___________________________________________________
Traceback (most recent call last):
File "/data/users/ezyang/c/pytorch/test/dynamo/test_replay_record.py", line 186, in test_fn_call_args
self.check_replay(
File "/data/users/ezyang/c/pytorch/test/dynamo/test_replay_record.py", line 50, in check_replay
with self.assertLogs(logger="torch._dynamo", level=logging.ERROR) as log_orig:
File "/home/ezyang/local/c/pytorch-env/lib/python3.10/unittest/_log.py", line 84, in __exit__
self._raiseFailure(
File "/home/ezyang/local/c/pytorch-env/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
To execute this test, run the following from the base repo dir:
python test/dynamo/test_replay_record.py -k test_fn_call_args
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
I have checked and confirmed that an exception is being raised; there just is no ERROR log. It's not obvious to me why I would necessarily expect the exception to show up in the logs. Oddly, these tests seem to pass in CI.
I'm running on Python 3.10.11. Please reach out if you need help reproducing.
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,831 | 105,943 |
PT2 is not thread safe
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
The PT2 compilation stack is not thread safe. We do not test for this very carefully, and I suspect https://github.com/pytorch/pytorch/pull/105942 may not be the last fix we need for this.
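A sketch of the kind of concurrency stress that can expose this (illustrative only; this is not a test from the suite):
```python
import threading
import torch

@torch.compile
def f(x):
    return torch.sin(x) + torch.cos(x)

def worker():
    for _ in range(10):
        f(torch.randn(8))

# compile and run the same function from several threads at once
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```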
cc @msaroufim @wconstab @bdhirsh @anijain2305 @eellison who also suffered under fake tensor thread safety
### Versions
main
| 1 |
1,832 | 105,941 |
Differences in the results of conv2d calculations in PyTorch 1.8
|
needs reproduction, module: cudnn, triaged
|
### 🐛 Describe the bug
```python
import torch
inputs = torch.tensor([[[[1., 2.],
[3., 4.]],
[[5., 6.],
[7., 8.]]]], device='cuda:0', dtype=torch.float16)
weight = torch.tensor([[[[1.]],
[[2.]]]], device='cuda:0', dtype=torch.float16)
with torch.backends.cudnn.flags(deterministic=True, benchmark=True):
stride = (2, 2)
padding = (0, 0)
dilation = (1, 1)
groups = 1
out_gpu = torch.nn.functional.conv2d(
inputs,
weight,
None,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
)
out_cpu = torch.nn.functional.conv2d(
inputs.to('cpu').to(torch.float),
weight.to('cpu').to(torch.float),
None,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
)
print(f"{out_cpu = }")
print(f"{out_gpu = }")
```
## Output
```
out_cpu = tensor([[[[11.]]]])
out_gpu = tensor([[[[5.]]]], device='cuda:0', dtype=torch.float16)
```
The expected result is 11: with the 1x1 kernel and stride 2, the single output element is 1*1 + 5*2 = 11.
The result on CPU, and with other versions, is 11 as expected.
However, with PyTorch 1.6 and 1.7 on CUDA, the result is 5 for torch.float16, torch.float32, and torch.float64.
### Versions
I am using [nvcr.io/nvidia/pytorch:20.12-py3](http://nvcr.io/nvidia/pytorch:20.12-py3) on a 3070 Ti on Arch Linux (kernel 6.4.4-arch1-1).
```
Collecting environment information...
PyTorch version: 1.8.0a0+1606899
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.18.2
Libc version: glibc-2.31
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-6.4.4-arch1-1-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.1.105
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti
Nvidia driver version: 535.86.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 167
Model name: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
Stepping: 1
CPU MHz: 2127.380
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 4 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==3.7.9
[pip3] numpy==1.19.2
[pip3] pytorch-transformers==1.1.0
[pip3] torch==1.8.0a0+1606899
[pip3] torchtext==0.9.0a0
[pip3] torchvision==0.9.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] nomkl 3.0 0
[conda] numpy 1.19.2 py38h6163131_0
[conda] numpy-base 1.19.2 py38h75fe3a5_0
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] torch 1.8.0a0+1606899 pypi_0 pypi
[conda] torchtext 0.9.0a0 pypi_0 pypi
[conda] torchvision 0.9.0a0 pypi_0 pypi
```
## Possible related issues
- https://github.com/pytorch/pytorch/issues/55381
- https://github.com/pytorch/pytorch/issues/54036
If there is any duplication with these issues, please let me know.
Thank you in advance.
cc @csarofeen @ptrblck @xwang233
| 4 |
1,833 | 105,936 |
[FSDP] Support `OptimStateDictConfig.offload_to_cpu` for real
|
feature, triaged, actionable, module: fsdp
|
With https://github.com/pytorch/pytorch/pull/105848, I added the doc for `OptimStateDictConfig.offload_to_cpu` describing `offload_to_cpu: bool`; however, the bool does not actually work at the moment and only supports `True`. This issue is to track adding support for real.
https://github.com/pytorch/pytorch/blob/71d18f61059dbb617a8afc79b64598d2fbb89d63/torch/distributed/fsdp/api.py#L329-L339
cc @zhaojuanmao @mrshenli @rohan-varma @fegin @wz337
| 0 |
1,834 | 105,934 |
Flip default on `add_zero_attn` in `torch.nn.MultiheadAttention` to `True`
|
module: nn, triaged, oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
I'm working on implementing a modified version of Softmax to be used in Transformer models that could greatly improve the performance of these models across the board. Evan Miller outlines the change in the blog post linked below, and it is an extremely simple one. This is my first issue on the pytorch library and I am happy to begin working on it.
From what I know, this would involve:
1. Creating a new function within torch.nn.functional that resembles the torch.nn.functional.softmax code.
2. Adding similar functionality to the next layers of code that build on torch.nn.functional. The quantized version comes to mind.
3. Adding an nn.Module that, while similar to nn.Softmax, uses the new torch.nn.functional.softmax_one code to be written for point 1 (a rough sketch follows below).
Credit to Evan Miller for this enlightening discovery: https://www.evanmiller.org/attention-is-off-by-one.html
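A rough sketch of what items 1 and 3 could look like, following the `+1`-in-the-denominator formula from the blog post; the names `softmax_one`/`SoftmaxOne` and the stabilization trick are my choices, not an agreed-upon API:
```python
import torch
from torch import Tensor, nn
def softmax_one(x: Tensor, dim: int = -1) -> Tensor:
    # exp(x_i) / (1 + sum_j exp(x_j)), shifted by m = max(x, 0) for numerical stability
    m = x.max(dim=dim, keepdim=True).values.clamp_min(0)
    exp_x = (x - m).exp()
    return exp_x / (torch.exp(-m) + exp_x.sum(dim=dim, keepdim=True))
class SoftmaxOne(nn.Module):
    def __init__(self, dim: int = -1):
        super().__init__()
        self.dim = dim
    def forward(self, x: Tensor) -> Tensor:
        return softmax_one(x, self.dim)
```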
### Alternatives
A possible alternative could be me publishing my own nn.Module as a separate function.
### Additional context
This improvement is something that, while it would require retraining for Transformer models that want to use this new optimization, would not introduce any breaking changes to the torch library.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg
| 4 |
1,835 | 105,933 |
DISABLED test_cross_entropy_large_tensor_reduction_sum_cuda (__main__.TestNNDeviceTypeCUDA)
|
module: nn, module: rocm, triaged, module: flaky-tests, skipped
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_cross_entropy_large_tensor_reduction_sum_cuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15304850679).
Over the past 72 hours, it has flakily failed in 6 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_cross_entropy_large_tensor_reduction_sum_cuda`
Test file path: `test_nn.py`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
1,836 | 105,929 |
Dynamo silently ignores TorchDispatchMode
|
triaged, module: __torch_dispatch__, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
This test fails:
```
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py
index 4ac6b803ddb..43083783cc3 100644
--- a/test/dynamo/test_misc.py
+++ b/test/dynamo/test_misc.py
@@ -38,6 +38,7 @@ from torch._dynamo.testing import (
unsupported,
)
+from torch.utils._python_dispatch import TorchDispatchMode
from torch._dynamo.utils import CompileProfiler, ifdynstaticdefault
from torch.ao.quantization import MinMaxObserver
from torch.ao.quantization.fake_quantize import FakeQuantize
@@ -6040,6 +6041,22 @@ def ___make_guard_fn():
self.assertEqual(counter.frame_count, 1)
self.assertEqual(counter.op_count, 3)
+ def test_mode_guard(self):
+ class RewriteAddToMul(TorchDispatchMode):
+ def __torch_dispatch__(self, func, types, args=(), kwargs=None):
+ if func is torch.ops.aten.add.Tensor:
+ func = torch.ops.aten.mul.Tensor
+ return func(*args, **kwargs)
+
+ def fn(x):
+ return x + x
+
+ x = torch.tensor([3.0])
+ with RewriteAddToMul():
+ r = fn(x)
+ opt_r = torch.compile(fn)(x)
+ self.assertEqual(r, opt_r)
+
def test_tracing_nested_py_tree(self):
import torch.utils._pytree as pytree
```
Note that if you don't make torch.compile unconditionally bail out, and instead have it handle TorchDispatchMode, you need to add appropriate guards; this test doesn't check that those guards are correct.
### Versions
main
cc @Chillee @zou3519 @albanD @samdow @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 3 |
1,837 | 105,926 |
enabling fused A16W8 mm through prologue fusion WIP
|
Stale, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105926
Summary: for something like `torch.mm(x, weight.to(torch.float16)) * scale + bias`, even with `scale` and `bias` being multi-dimensional, this PR handles the prologue fusion; the epilogue fusion already happens correctly without needing to do anything.
codegen: https://gist.github.com/HDCharles/d8a1ff7d52fcafcb7a0d880596b2c0c1
benchmarks: https://gist.github.com/HDCharles/3deb02e26e3d53f106c15216ff0e608a
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,838 | 105,925 |
Possible speed up of nn.MultiheadAttention
|
oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
Hello,
Implementing MHA using the `opt_einsum` package speeds up the MultiheadAttention by ~30%:
```python
from math import sqrt
from opt_einsum import contract as einsum
# Q, K, V: head-split projections of shape (B, N, h, c) = (batch, sequence length, heads, per-head dim)
d = Q.shape[-1]
A = einsum('b n h c, b m h c -> b n m h', Q, K) / sqrt(d)
A = A.softmax(2)
y = einsum('b n m h, b m h c -> b n h c', A, V).reshape(B, N, -1)
```
On my machine (NVIDIA RTX A4000 Laptop GPU) it runs 30% faster and on a Tesla V100 it runs 65% faster when compared against calling the forward pass of `nn.MultiheadAttention` for both PyTorch 1.13 and 2.0.
I don't know if this implementation is feasible, but there's that.
### Alternatives
_No response_
### Additional context
The function used to measure speed:
```python
def timeit(func, N=1000):
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(N):
func()
end.record()
torch.cuda.synchronize()
print(start.elapsed_time(end))
```
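For completeness, a sketch of how such a comparison might be driven, reusing the `timeit` helper above; the shapes and the `batch_first` MHA configuration are my assumptions rather than the reporter's exact setup:
```python
from math import sqrt
import torch
from torch import nn
from opt_einsum import contract as einsum
def einsum_attention(Q, K, V):
    # the opt_einsum formulation from above, wrapped as a function
    d = Q.shape[-1]
    A = einsum('b n h c, b m h c -> b n m h', Q, K) / sqrt(d)
    A = A.softmax(2)
    return einsum('b n m h, b m h c -> b n h c', A, V).reshape(Q.shape[0], Q.shape[1], -1)
B, N, H, C = 32, 256, 8, 64
Q, K, V = (torch.randn(B, N, H, C, device='cuda') for _ in range(3))
mha = nn.MultiheadAttention(embed_dim=H * C, num_heads=H, batch_first=True).to('cuda')
x = torch.randn(B, N, H * C, device='cuda')
timeit(lambda: einsum_attention(Q, K, V))
timeit(lambda: mha(x, x, x))
```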
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 1 |
1,839 | 105,920 |
test_linalg: triangular_solve - set higher precision
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/periodic
|
This is a follow-up on https://github.com/pytorch/pytorch/pull/104425.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105920
| 6 |
1,840 | 105,918 |
Torch.jit : RuntimeError: Unable to extract string literal index for ModuleDict
|
oncall: jit
|
### 🐛 Describe the bug
Hi, I'm trying to use torch-mlir to compile a PyTorch wide-and-deep model from https://github.com/jrzaurin/pytorch-widedeep, but I'm running into issues when compiling the model. According to a reply from the torch-mlir community, the error is due to torch.jit.script, which cannot extract a string literal index for ModuleDict. When I call torch_mlir.compile, torch-mlir calls torch.jit.script on the module before importing it into torch-mlir.
1. Python Script
```
import pandas as pd
import numpy as np
import torch
from sklearn.model_selection import train_test_split
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy
from pytorch_widedeep.datasets import load_adult
import torch_mlir
df = load_adult(as_frame=True)
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df_train, df_test = train_test_split(df, test_size=0.2, stratify=df.income_label)
# Define the 'column set up'
wide_cols = [
"education",
"relationship",
"workclass",
"occupation",
"native-country",
"gender",
]
crossed_cols = [("education", "occupation"), ("native-country", "occupation")]
cat_embed_cols = [
"workclass",
"education",
"marital-status",
"occupation",
"relationship",
"race",
"gender",
"capital-gain",
"capital-loss",
"native-country",
]
continuous_cols = ["age", "hours-per-week"]
target = "income_label"
target = df_train[target].values
# prepare the data
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df_train)
tab_preprocessor = TabPreprocessor(
cat_embed_cols=cat_embed_cols, continuous_cols=continuous_cols # type: ignore[arg-type]
)
X_tab = tab_preprocessor.fit_transform(df_train)
# build the model
wide = Wide(input_dim=np.unique(X_wide).shape[0], pred_dim=1)
tab_mlp = TabMlp(
column_idx=tab_preprocessor.column_idx,
cat_embed_input=tab_preprocessor.cat_embed_input,
continuous_cols=continuous_cols,
)
model = WideDeep(wide=wide, deeptabular=tab_mlp)
print("dim: ",np.unique(X_wide).shape[0])
# train and validate
trainer = Trainer(model, objective="binary", metrics=[Accuracy])
trainer.fit(
X_wide=X_wide,
X_tab=X_tab,
target=target,
n_epochs=5,
batch_size=256,
)
# predict on test
X_wide_te = wide_preprocessor.transform(df_test)
X_tab_te = tab_preprocessor.transform(df_test)
preds = trainer.predict(X_wide=X_wide_te, X_tab=X_tab_te)
# Save and load
# Option 1: this will also save training history and lr history if the
# LRHistory callback is used
trainer.save(path="model_weights", save_state_dict=True)
# Option 2: save as any other torch model
torch.save(model.state_dict(), "model_weights/wd_model.pt")
# From here in advance, Option 1 or 2 are the same. I assume the user has
# prepared the data and defined the new model components:
# 1. Build the model
model_new = WideDeep(wide=wide, deeptabular=tab_mlp)
model_new.load_state_dict(torch.load("model_weights/wd_model.pt"))
# 2. Instantiate the trainer
trainer_new = Trainer(model_new, objective="binary")
# 3. Either start the fit or directly predict
preds = trainer_new.predict(X_wide=X_wide, X_tab=X_tab)
data = []
data.append(torch.from_numpy(X_wide))
data.append(torch.from_numpy(X_tab))
print(model_new)
module = torch_mlir.compile(model_new, data, output_type=torch_mlir.OutputType.LINALG_ON_TENSORS, use_tracing=False)
```
2. Error
```
Traceback (most recent call last):
File "/home/project/iree-demo/pytorch/test-case-generate/widedeep/wide_deep.py", line 105, in <module>
module = torch_mlir.compile(model_new, data, output_type=torch_mlir.OutputType.LINALG_ON_TENSORS, use_tracing=False)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch_mlir/__init__.py", line 404, in compile
scripted = torch.jit.script(model)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_script.py", line 1284, in script
return torch.jit._recursive.create_script_module(
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
Unable to extract string literal index. ModuleDict indexing is only supported with string literals. For example, 'i = "a"; self.layers[i](x)' will fail because i is not a literal. Enumeration of ModuleDict is supported, e.g. 'for k, v in self.items(): out = v(inp)':
File "/root/anaconda3/envs/torch-mlir/lib/python3.10/site-packages/pytorch_widedeep/models/tabular/embeddings_layers.py", line 173
def forward(self, X: Tensor) -> Tensor:
embed = [
self.embed_layers["emb_layer_" + self.embed_layers_names[col]](
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
X[:, self.column_idx[col]].long()
)
```
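As the error message suggests, enumerating the ModuleDict is scriptable while indexing it with a computed key is not. A hedged sketch of the pattern on a standalone toy module (not the actual pytorch-widedeep code), assuming the columns are embedded in the dict's iteration order:
```python
from typing import Dict, List
import torch
from torch import nn
class PerColumnEmbeddings(nn.Module):
    def __init__(self, cardinalities: Dict[str, int], dim: int = 8):
        super().__init__()
        self.embed_layers = nn.ModuleDict(
            {"emb_layer_" + name: nn.Embedding(card, dim) for name, card in cardinalities.items()}
        )
    def forward(self, X: torch.Tensor) -> torch.Tensor:
        # Iterate the ModuleDict (supported by TorchScript) instead of indexing
        # it with a non-literal string key.
        outputs: List[torch.Tensor] = []
        col = 0
        for name, layer in self.embed_layers.items():
            outputs.append(layer(X[:, col].long()))
            col += 1
        return torch.cat(outputs, dim=1)
scripted = torch.jit.script(PerColumnEmbeddings({"workclass": 10, "education": 17}))
print(scripted(torch.zeros(4, 2, dtype=torch.long)).shape)  # torch.Size([4, 16])
```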
### Versions
```
conda create -n torch-mlir python=3.10
pip install torch-2.1.0.dev20230523+cpu-cp310-cp310-linux_x86_64.whl
pip install torch_mlir-20230524.848-cp310-cp310-linux_x86_64.whl
pip install torchvision-0.16.0.dev20230523+cpu-cp310-cp310-linux_x86_64.whl
pip install pytorch-widedeep
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,841 | 105,917 |
Torch.jit.frontend.NotSupportedError: not supporting functions with variable number of arguments.
|
oncall: jit
|
### 🐛 Describe the bug
Hi, I'm trying to use torch-mlir to compile **a pytorch yolov5 model** from https://github.com/ultralytics/yolov5, but I'm running into issues when compiling this model. When I call torch_mlir.compile, torch-mlir calls torch.jit.script on the module before importing it into torch-mlir. The error is due to torch.jit.script not supporting functions with a variable number of arguments.
1. python script
```
import torch
import torch_mlir
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
module = torch_mlir.compile(model, torch.zeros(1,3,640,640), output_type="linalg-on-tensors")
```
2.Error
```
Traceback (most recent call last):
File "/home/project/iree-demo/pytorch/test-case-generate/yolo/yolo.py", line 35, in <module>
module = torch_mlir.compile(yolomodel, img, output_type="linalg-on-tensors")
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch_mlir/__init__.py", line 404, in compile
scripted = torch.jit.script(model)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_script.py", line 1284, in script
return torch.jit._recursive.create_script_module(
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_script.py", line 614, in _construct
init_fn(script_module)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 520, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_script.py", line 1466, in _recursive_compile_class
return _compile_and_register_class(obj, rcb, _qual_name)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/_recursive.py", line 49, in _compile_and_register_class
ast = get_jit_class_def(obj, obj.__name__)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/frontend.py", line 234, in get_jit_class_def
method_defs = [
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/frontend.py", line 235, in <listcomp>
get_jit_def(obj, name, self_name=self_name, is_classmethod=is_classmethod(obj))
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/frontend.py", line 297, in get_jit_def
return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/frontend.py", line 335, in build_def
param_list = build_param_list(ctx, py_def.args, self_name, pdt_arg_types)
File "/root/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/jit/frontend.py", line 363, in build_param_list
raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "/root/anaconda3/envs/yolo/lib/python3.10/warnings.py", line 477
def __exit__(self, *exc_info):
~~~~~~~~~ <--- HERE
if not self._entered:
raise RuntimeError("Cannot exit %r without entering first" % self)
'__torch__.warnings.catch_warnings' is being compiled since it was called from 'SPPF.forward'
File "/root/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 229
def forward(self, x):
x = self.cv1(x)
with warnings.catch_warnings():
~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
y1 = self.m(x)
```
According to a reply from the torch-mlir community, this error comes from torch.jit.script, so I'm asking here for help!
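For what it's worth, the blocker is the `warnings.catch_warnings()` context manager in `SPPF.forward`, which TorchScript cannot compile. A hedged sketch of a possible local workaround — replacing that forward with an equivalent version minus the warnings context; the body is reconstructed from memory of the yolov5 source and should be checked against `models/common.py` before use:
```python
import torch
def sppf_forward_scriptable(self, x):
    # Same computation as SPPF.forward, without the warnings.catch_warnings()
    # block (it only silences a max_pool2d warning and has no numerical effect).
    x = self.cv1(x)
    y1 = self.m(x)
    y2 = self.m(y1)
    return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
# Hypothetical usage: patch the class before calling torch.jit.script / torch_mlir.compile.
# from models.common import SPPF
# SPPF.forward = sppf_forward_scriptable
```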
### Versions
### Python Version as shown below
```
conda create -n torch-mlir python=3.10
pip install torch-2.1.0.dev20230523+cpu-cp310-cp310-linux_x86_64.whl
pip install torch_mlir-20230524.848-cp310-cp310-linux_x86_64.whl
pip install torchvision-0.16.0.dev20230523+cpu-cp310-cp310-linux_x86_64.whl
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,842 | 105,916 |
Missing coalesced flag from `torch.autograd.Function.backward`
|
module: sparse, triaged
|
### 🐛 Describe the bug
Sparse gradients change coalesced flag after backward
Code to reproduce:
```python
import torch
from typing import Any, Tuple
class CheckFn(torch.autograd.Function):
@staticmethod
def forward(ctx: Any, *args: Any, **__: Any) -> torch.Tensor:
x, w = args
ctx.w = w
return x
@staticmethod
def backward(ctx: Any, *args: Any) -> Tuple[None, torch.Tensor]:
w = ctx.w
shape = w.shape
i = torch.tensor([0, 1]).reshape([1, -1])
v = torch.tensor([[1, 2], [3, 4]], dtype=torch.float)
g = torch.sparse_coo_tensor(i, v, shape, check_invariants=False)._coalesced_(True)
print(g.is_coalesced()) # True
return None, g
if __name__ == '__main__':
x = torch.tensor([1, 2, 3, 4], dtype=torch.float).view([2, 2])
w = torch.tensor([1, 1, 1, 1], dtype=torch.float).view([2, 2])
w.requires_grad = True
y = CheckFn.apply(x, w)
y.sum().backward()
print(w.grad.is_coalesced()) # False
```
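For completeness, a hedged caller-side workaround (it re-coalesces the gradient after backward; it does not address why the flag is dropped in the first place):
```python
# continuing the repro above, after y.sum().backward()
w.grad = w.grad.coalesce()
print(w.grad.is_coalesced())  # True
```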
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-10310U CPU @ 1.70GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 12
CPU max MHz: 4400.0000
CPU min MHz: 400.0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cpu
[pip3] torchaudio==2.0.2+cpu
[pip3] torchvision==0.15.2+cpu
[conda] No relevant packages
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 0 |
1,843 | 105,914 |
torch.compile(cpu) does not handle float16 properly
|
triaged, oncall: pt2, module: cpu inductor
|
### 🐛 Describe the bug
```python
import torch
a = torch.randn(64, dtype=torch.float16)
@torch.compile
def f(a, dim=-1):
na = torch.linalg.vector_norm(a, dim=dim)
eps = 1e-8
return na.clamp_min(eps)
f(a)
```
This does work with `bfloat16`. The issue here is that `bfloat16` works because there exists the magic helper function
https://github.com/pytorch/pytorch/blob/5f6c6ff4cf27e4924aee4e3c951efe83bf95c106/torch/_inductor/codegen/cpp.py#L2516
This function inserts the necessary `to_dtype`s for `bfloat16` to work. This hack should either be removed or generalised to other dtypes.
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
1,844 | 105,912 |
Batching rule for aten::bincount.
|
triaged, module: functorch
|
### 🚀 The feature, motivation and pitch
I used `torch.bincount()` in a function that was passed to `torch.func.vmap()` and got this warning:
> UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::bincount. **Please file us an issue on GitHub** so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)
I'm filing an issue as requested : )
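In case it helps others until the batching rule lands, a workaround sketch of my own that stays vmap-friendly by expressing bincount as a one-hot comparison plus sum (valid for non-negative integers below `minlength`):
```python
import torch
from torch.func import vmap
def bincount_via_onehot(x: torch.Tensor, minlength: int) -> torch.Tensor:
    # equivalent to torch.bincount(x, minlength=minlength) for 0 <= x < minlength,
    # built only from ops that already have batching rules
    return (x.unsqueeze(-1) == torch.arange(minlength, device=x.device)).sum(dim=-2)
xs = torch.randint(0, 5, (8, 100))                      # batch of 8 index vectors
counts = vmap(lambda x: bincount_via_onehot(x, 5))(xs)  # shape (8, 5), no fallback warning
```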
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
1,845 | 105,907 |
Enable xpu backend in torchdynamo benchmarks
|
open source, Stale, module: dynamo, ciflow/inductor
|
Add xpu backend for torchdynamo benchmarks
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 3 |
1,846 | 105,904 |
Libtorch linking error: undefined reference
|
module: build, triaged
|
### 🐛 Describe the bug
After linking libtorch in my own project's CMakeLists, the linker reports an "undefined reference" error for a function defined within my own project, not for a function in libtorch. Before linking libtorch, this error did not occur. How can I solve this?
find_package(Torch REQUIRED)
target_link_libraries(${binary_name} ${TORCH_LIBRARIES})
### Versions
PyTorch version: 1.12.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.2
Libc version: glibc-2.31
Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.14.0_1-0-0-50-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3100.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==1.12.1+cpu
[pip3] torchtext==0.4.0
[pip3] torchvision==0.13.1+cpu
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 1.12.1+cpu pypi_0 pypi
[conda] torchtext 0.4.0 pypi_0 pypi
[conda] torchvision 0.13.1+cpu pypi_0 pypi
cc @malfet @seemethere
| 1 |
1,847 | 105,901 |
Default parameters missing of maxpool2d node generated by dynamo export
|
triaged, oncall: pt2, module: export
|
### 🐛 Describe the bug
When we use `dynamo.export` to capture the FX graph, the default parameters are missing as input nodes for `maxpool2d`. For `convolution`, however, the default parameters do appear as input nodes in the FX graph. Take the example code:
```
import torch
import torch._dynamo as torchdynamo
import copy
class ConvMaxpool2d(torch.nn.Module):
def __init__(self,):
super().__init__()
self.conv = torch.nn.Conv2d(3, 64, 7, bias=True)
self.relu = torch.nn.ReLU()
self.maxpool = torch.nn.MaxPool2d(3, stride=2)
def forward(self, x):
return self.maxpool(self.relu(self.conv(x)))
def run_max_pool2d():
batch_size = 116
model = ConvMaxpool2d().eval()
x = torch.randn(batch_size, 3, 224, 224).contiguous(memory_format=torch.channels_last)
example_inputs = (x,)
with torch.no_grad():
# Generate the FX Module
exported_model, guards = torchdynamo.export(
model,
*copy.deepcopy(example_inputs),
aten_graph=True,
)
print("exported_model is: {}".format(exported_model), flush=True)
if __name__ == "__main__":
run_max_pool2d()
```
The generated fx graph is:
```
def forward(self, x):
arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
_param_constant0 = self._param_constant0
_param_constant1 = self._param_constant1
convolution_default = torch.ops.aten.convolution.default(arg0, _param_constant0, _param_constant1, [1, 1], [0, 0], [1, 1], False, [0, 0], 1); arg0 = _param_constant0 = _param_constant1 = None
relu_default = torch.ops.aten.relu.default(convolution_default); convolution_default = None
max_pool2d_with_indices_default = torch.ops.aten.max_pool2d_with_indices.default(relu_default, [3, 3], [2, 2]); relu_default = None
getitem = max_pool2d_with_indices_default[0]
getitem_1 = max_pool2d_with_indices_default[1]; max_pool2d_with_indices_default = None
return pytree.tree_unflatten([getitem], self._out_spec)
```
As we can see:
- For `convolution`, even when we define the module as `torch.nn.Conv2d(3, 64, 7, bias=True)`, the generated graph still includes the default parameters of `convolution` as input nodes: `torch.ops.aten.convolution.default(arg0, _param_constant0, _param_constant1, [1, 1], [0, 0], [1, 1], False, [0, 0], 1);`
- For `maxpool2d`, however, the generated node is `torch.ops.aten.max_pool2d_with_indices.default(relu_default, [3, 3], [2, 2]);`, which drops the default parameters as input nodes.
This complicates the pattern definitions in the Inductor pattern matcher during graph lowering. If we define `torch.nn.MaxPool2d(3, stride=2)` with different parameters (for example, `torch.nn.MaxPool2d(3, stride=2, padding=1)`), it generates patterns with a different number of input nodes, so we have to write a pattern match for each generated variant.
### Versions
```
(inductor_quant) [root@CPX-4 inductor_quant]# python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+git9f2ad4f
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.19.5-1.el7.elrepo.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Genuine CPU
Stepping: 10
CPU MHz: 1200.331
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-27
NUMA node1 CPU(s): 28-55
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.1.0a0+gitd5fc795
[pip3] torchvision==0.16.0a0+08c9938
[conda] mkl 2023.0.0 intel_25398 intel
[conda] mkl-include 2023.0.0 intel_25398 intel
[conda] mkl-include 2023.0.0 <pip>
[conda] mkl-service 2.4.0 py38h3605609_14 intel
[conda] mkl-static 2023.0.0 <pip>
[conda] mkl_fft 1.3.1 py38hcab1719_22 intel
[conda] mkl_random 1.2.2 py38hbf47bc3_22 intel
[conda] mkl_umath 0.1.1 py38hf66a691_32 intel
[conda] numpy 1.24.3 <pip>
[conda] numpy 1.22.3 py38hf0956d0_5 intel
[conda] numpy-base 1.22.3 py38h45c9ace_5 intel
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 3 |
1,848 | 105,900 |
torch.jit.frontend.NotSupported when compiling stable-diffusion
|
oncall: jit
|
### 🐛 Describe the bug
```
import torch
from diffusers import UNet2DConditionModel as model
unet = model.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet", torchscript=True)
# torchscript=True follows the torch_tensorrt BERT example
unet = torch.jit.script(unet)
```
Then I got `torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported`.
I use torch.jit.script so the model can be passed to torch_tensorrt (torchtrt) `.ts.compile`.
Using torch.jit.trace runs into an ATen dtype-not-supported error, and using torchtrt.compile runs into a torchtrt expand-dim error.
If this bug is caused by a mismatch between torchtrt/torch.jit and diffusers.models, is there any way to compile the diffusers model without rewriting the UNet by hand instead of importing it, or to build the engine directly with the NVIDIA TensorRT Python API without going through torchtrt? Thanks for any possible solution!
P.S. I have already manually removed, in diffusers, all asserts and branches that use tensor results, like `assert input_shape[0] == xxx`.
### Versions
docker image: nvcr.io/nvidia/pytorch:22.11-py3
platform: DGX-A100 40GB(SXM)
python packages:
pytorch 1.13.0a0+936e930
torch_tensorrt 1.3.0a0
diffusers 0.19.0.dev0
other CPU/memory/... environment details are unrelated; if needed, I can list them here
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,849 | 105,884 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor, merging
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,850 | 105,883 |
Add oneDNN Graph fusion pass in Inductor CPP backend
|
triaged, module: mkldnn, open source, release notes: jit, module: inductor, module: dynamo, ciflow/inductor
|
[RFC: Integrating oneDNN Graph Compiler into Inductor C++/OpenMP Backend for Enhanced Graph Fusion and Performance #105582](https://github.com/pytorch/pytorch/issues/105582)
- [ ] need to update python binding file after #97957
Handling the opaque op is referred to #90356
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 7 |
1,851 | 105,878 |
FakeTensor detach() gives meta tensor other than FakeTensor under `torch._C._DisableTorchDispatch()`
|
triaged, tensor subclass, module: fakeTensor
|
### 🐛 Describe the bug
My repro:
```
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from torch.utils._mode_utils import no_dispatch
fake_mode = FakeTensorMode()
with fake_mode:
t = torch.ones([10])
print(t)
print(t.detach())
with no_dispatch():
t_no_dispatch = t.detach()
print(t_no_dispatch)
```
Output:
```
FakeTensor(..., size=(10,))
FakeTensor(..., size=(10,))
tensor(..., device='meta', size=(10,))
```
I feel like the expected behavior of detaching a fake tensor should be the same with or without `no_dispatch` (a thin wrapper around `_DisableTorchDispatch`); otherwise we cannot load fake tensors as parameters into an nn.Module under `no_dispatch`.
### Versions
PyTorch version: 2.1.0.dev20230524+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 10.4.0-4ubuntu1~22.04) 10.4.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4950.1948
CPU min MHz: 2200.0000
BogoMIPS: 7399.95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.14.2
[pip3] ema-pytorch==0.2.3
[pip3] flake8==6.0.0
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.5
[pip3] torch==2.1.0.dev20230524+cu118
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torch-struct==0.5
[pip3] torchaudio==2.1.0.dev20230524+cu118
[pip3] torchdata==0.7.0.dev20230524
[pip3] torchmetrics==1.0.1
[pip3] torchrec-nightly==2023.7.17
[pip3] torchtext==0.16.0.dev20230524+cpu
[pip3] torchvision==0.16.0.dev20230524+cu118
[pip3] vector-quantize-pytorch==1.6.30
cc @ezyang @msaroufim @albanD @wconstab @bdhirsh @anijain2305
| 3 |
1,852 | 105,872 |
PyTorch 2.0.x `CUDA error: operation not supported` when `Tensor.to` a different device
|
needs reproduction, module: cuda, triaged, module: regression
|
### 🐛 Describe the bug
I've created an Azure VM with 2 A10 GPUs (`standard_nv72ads_a10_v5`). When running Hugging Face code that uses both GPUs, I encountered `CUDA error: operation not supported`, and it turns out to be reproducible with this PyTorch snippet.
```python
import torch
a = torch.Tensor(torch.randn(5,5,5))
a = a.to("cuda:0")
# Move a to another device
b = a.to("cuda:1")
```
It shows Segmentation Fault, and with [faulthandler](https://docs.python.org/3/library/faulthandler.html) enabled, it shows the error:
```
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Notably when I downgrade PyTorch from 2.0.1 to 1.13.1, the same snippet works.
As PyTorch 2.0 should be backward compatible, I hope to understand how this could happen, since there are some new features of PyTorch 2.0 that I need.
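As a side note, a workaround sketch I would try on this setup — purely an assumption that the direct device-to-device path is what fails under the virtualized A10-24Q profile — is staging the copy through host memory:
```python
import torch
a = torch.randn(5, 5, 5, device="cuda:0")
b = a.to("cpu").to("cuda:1")  # host-staged copy instead of a direct cuda:0 -> cuda:1 transfer
```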
### Versions
Output of `collect_env.py`
```
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1040-azure-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10-24Q
GPU 1: NVIDIA A10-24Q
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 74F3 24-Core Processor
Stepping: 1
CPU MHz: 3194.042
BogoMIPS: 6388.08
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.1 MiB
L1i cache: 1.1 MiB
L2 cache: 18 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-17
NUMA node1 CPU(s): 18-35
NUMA node2 CPU(s): 36-53
NUMA node3 CPU(s): 54-71
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid
Versions of relevant libraries:
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
```
Output of NVIDIA's `deviceQuery`:
```
Detected 2 CUDA Capable device(s)
Device 0: "NVIDIA A10-24Q"
CUDA Driver Version / Runtime Version 12.0 / 9.2
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 24298 MBytes (25478168576 bytes)
MapSMtoCores for SM 8.6 is undefined. Default to use 64 Cores/SM
MapSMtoCores for SM 8.6 is undefined. Default to use 64 Cores/SM
(72) Multiprocessors, ( 64) CUDA Cores/MP: 4608 CUDA Cores
GPU Max Clock rate: 1695 MHz (1.70 GHz)
Memory Clock rate: 6251 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 6291456 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 3 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Enabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 2 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
Device 1: "NVIDIA A10-24Q"
CUDA Driver Version / Runtime Version 12.0 / 9.2
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 24298 MBytes (25478168576 bytes)
MapSMtoCores for SM 8.6 is undefined. Default to use 64 Cores/SM
MapSMtoCores for SM 8.6 is undefined. Default to use 64 Cores/SM
(72) Multiprocessors, ( 64) CUDA Cores/MP: 4608 CUDA Cores
GPU Max Clock rate: 1695 MHz (1.70 GHz)
Memory Clock rate: 6251 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 6291456 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 3 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Enabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 3 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from NVIDIA A10-24Q (GPU0) -> NVIDIA A10-24Q (GPU1) : Yes
> Peer access from NVIDIA A10-24Q (GPU1) -> NVIDIA A10-24Q (GPU0) : Yes
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.0, CUDA Runtime Version = 9.2, NumDevs = 2
Result = PASS
```
cc @ezyang @gchanan @zou3519 @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @ptrblck
| 6 |
1,853 | 105,863 |
torch.compile does not respect branching in forward()
|
triaged, oncall: pt2, module: guards
|
### 🐛 Describe the bug
Hey all,
I have a Module that has an if statement in its `forward()` function. When using torch.compile, the model is frozen into whichever branch it encountered first. I have created a minimal repro script that reproduces this problem without all the surrounding fuss.
```python
import torch
from torch import nn
from torch.optim import Adam
class SimpleNetwork(nn.Module):
def __init__(self):
super().__init__()
self.layers = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1),
nn.ReLU(),
nn.Conv2d(16, 32, 3, 2, 1),
nn.AdaptiveAvgPool2d(1))
self.return_input = True
def forward(self, x):
ret = self.layers(x)[0, 0]
if self.return_input:
return ret, x
else:
return ret
if __name__ == '__main__':
loss = nn.CrossEntropyLoss()
net = SimpleNetwork().to('cuda')
net_optim = torch.compile(net)
optim = Adam(net_optim.parameters(), lr=1e-4)
for i in range(10):
optim.zero_grad()
dummy_input = torch.rand((1, 3, 32, 32), device=torch.device('cuda', 0))
dummy_target = torch.randint(0, 32, (1, ), device=torch.device('cuda', 0))
# here the output is a tuple of (result, input)
out = net_optim(dummy_input)
l = loss(out[0], dummy_target)
l.backward()
optim.step()
net_optim.return_input = False
dummy_input = torch.rand((1, 3, 32, 32), device=torch.device('cuda', 0))
# the expected output here is just the result. This should NOT be a tuple because we set return_input to False
out = net_optim(dummy_input)
print(type(out))
```
At the very end the module still returns a tuple of (result, input) even though it should return the result only.
It can of course be that this is intended behavior (after all, how could torch.compile know about the other branch at compile time?). If that is the case, I would very much appreciate a hint as to how to reset the model compilation upon changing `return_input`.
Why not simply create a new compiled model by calling torch.compile again? Because that is not as straightforward as it seems. IIRC the correct order is `initialize model` -> `torch.compile` -> `initialize optimizers` -> `initialize DDP` (which is what I am using). Redoing the compile step would require me to redo the other steps as well which is just ugly.
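For what it's worth, one thing that might be worth trying on the repro above (an assumption on my side, not documented guidance) is clearing dynamo's caches before flipping the flag, which forces a fresh trace on the next call:
```python
import torch
# continuing the repro script above (net_optim and dummy_input already defined)
torch._dynamo.reset()             # drop previously compiled graphs and guards
net_optim.return_input = False
out = net_optim(dummy_input)      # re-traced, so the new branch is taken
print(type(out))
```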
Best,
Fabian
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230724
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800X 8-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 5083,9839
CPU min MHz: 2200,0000
BogoMIPS: 7600.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0.dev20230724
[pip3] torchaudio==2.1.0.dev20230724
[pip3] torchvision==0.16.0.dev20230724
[pip3] triton==2.1.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,854 | 105,860 |
Programming error enabling illegal memory access on GPU
|
module: error checking, triaged, module: mps
|
### 🐛 Describe the bug
Hi, in the following code I make a deliberate indexing logic error, as I wanted to index a tensor with respect to multiple indices at once. When I keep all the tensors on CPU, an indexing error is raised, which is fine. But when I send them to my macOS MPS device, no error is raised, and I can show that illegal memory is accessed instead. I understand the programming error I am deliberately making, but should an error be raised here, or is this expected behavior?
The minimal reproducible code is the following:
```
import torch as T

def access_non_legal_memory(batch_size=9):
    some_tensor = T.tensor([[0,0,1,1], [1,1,0,0]], device='mps')[None, :, :]
    non_legal_memory = some_tensor[T.arange(batch_size), T.unsqueeze(T.tensor([0,1,1,0,1,0,1,1,0], device='mps'), dim=0)]
    print(non_legal_memory)

if __name__=="__main__":
    access_non_legal_memory()
```
The output printed is:
tensor([[[0, 0, 1, 1],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[1, 0, 1, 1],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 1, 2, 3]]], device='mps:0')
And as I said, on CPU an error is raised, as it should be.
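For comparison, a minimal sketch of the CPU variant that does raise (same deliberate indexing error; the exact exception type and message may vary by version):

```python
import torch as T

def access_non_legal_memory_cpu(batch_size=9):
    # same deliberate indexing error, but on CPU, where an out-of-bounds error is raised
    some_tensor = T.tensor([[0, 0, 1, 1], [1, 1, 0, 0]])[None, :, :]
    return some_tensor[T.arange(batch_size), T.unsqueeze(T.tensor([0, 1, 1, 0, 1, 0, 1, 1, 0]), dim=0)]

if __name__ == "__main__":
    try:
        access_non_legal_memory_cpu()
    except (IndexError, RuntimeError) as e:
        print("CPU raised as expected:", e)
```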
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] pytorch-ignite==0.4.12
[pip3] torch==2.0.1
[pip3] torch-scatter==2.1.1
[pip3] torch-sparse==0.6.17
[pip3] torchvision==0.13.0
[conda] numpy 1.22.3 pypi_0 pypi
[conda] numpy-base 1.21.5 py310h742c864_3
[conda] pytorch-ignite 0.4.12 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torchvision 0.13.0 pypi_0 pypi
(PTorchEnv) dominikrichard@REC-P-778273 Deep %
cc @malfet @kulinseth @albanD @DenisVieriu97 @razarmehr @abhudev
| 0 |
1,855 | 105,859 |
Report model flop utilization (mfu) in benchmark
|
feature, triaged, oncall: pt2
|
### 🐛 Describe the bug
This should be easy to do by running the flop counter on the eager model (or even on fake tensors in dynamo analysis) to compute the theoretical maximum throughput. This would help us control for the eager model being extremely bad; it gives us an absolute yardstick to measure against, rather than speedup over eager (which can be gamed by slowing eager down).
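As a rough sketch of the counting side (assuming `torch.utils.flop_counter.FlopCounterMode` is available in the build; the MFU arithmetic, the measured time, and the peak-FLOPS figure below are illustrative assumptions, not part of the benchmark harness):

```python
import torch
from torch.utils.flop_counter import FlopCounterMode

model = torch.nn.Linear(1024, 1024)
x = torch.randn(64, 1024)

# count the flops of one eager forward pass
with FlopCounterMode(display=False) as flop_counter:
    model(x)
total_flops = flop_counter.get_total_flops()

# hypothetical numbers: measured wall time per iteration and the device's peak flops
measured_seconds_per_iter = 1e-3
peak_flops = 312e12  # e.g. an A100 BF16 peak, purely for illustration

mfu = total_flops / measured_seconds_per_iter / peak_flops
print(f"flops={total_flops}, mfu={mfu:.2%}")
```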
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,856 | 105,858 |
[WIP][Experiment] Avoid real computation for dynamo export
|
open source, Stale, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105858
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 2 |
1,857 | 105,853 |
Bug when dealing with fallbacks on CPU
|
triaged, module: complex, module: dynamic shapes, module: inductor, module: cpu inductor
|
### 🐛 Describe the bug
To repro, patch in https://github.com/pytorch/pytorch/pull/105850, change the line
https://github.com/pytorch/pytorch/blob/3045e84e679a23ba3d5a5071b79593724098f389/test/test_linalg.py#L4313
to `torch.compile(torch.matmul)(x, y)`, and run
```bash
python test/test_linalg.py -vk test_matmul_small_brute_force_1d_Nd_cpu_complex64
```
It fails with the following traceback
```
File "/home/lezcano/git/pytorch/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/home/lezcano/git/pytorch/pytorch/torch/_inductor/graph.py", line 658, in run_node
result = fallback_handler(n.target, add_to_fallback_set=False)(
File "/home/lezcano/git/pytorch/pytorch/torch/_inductor/lowering.py", line 1291, in handler
TensorBox.create, ir.FallbackKernel.create(kernel, *args, **kwargs)
File "/home/lezcano/git/pytorch/pytorch/torch/_inductor/ir.py", line 3430, in create
return generate_output(example_output, [])
File "/home/lezcano/git/pytorch/pytorch/torch/_inductor/ir.py", line 3427, in generate_output
assert output is None, f"FallbackKernel output type {type(output)} is not supported"
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: FallbackKernel output type <class 'torch.SymInt'> is not supported
```
It fails when creating a fallback for:
```
aten.sym_size
(TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cpu', torch.complex64, size=[0, s0, 0], stride=[s0, 1, 1]))
)), 1)
{}
```
This is odd, as `sym_size` does have a lowering.
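For anyone poking at this without the patch, a rough standalone approximation of the same call (my own guess at the shapes from the `FixedLayout` above; untested, and it may not hit the exact same assert):

```python
import torch

# mirrors the traceback's layout: a 1d x Nd complex matmul on CPU with a dynamic middle dim
x = torch.randn(2, dtype=torch.complex64)
y = torch.randn(0, 2, 0, dtype=torch.complex64)

compiled = torch.compile(torch.matmul, dynamic=True)
print(compiled(x, y).shape)
```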
### Versions
master
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
1,858 | 105,846 |
Strange backward behavior with sparse tensors
|
module: sparse, module: autograd, triaged
|
### 🐛 Describe the bug
Backward fails with sparse gradients. Error: `RuntimeError: reshape is not implemented for sparse tensors`
Code to reproduce:
```python
import torch
from typing import Any, Tuple

class CheckFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx: Any, *args: Any, **__: Any) -> torch.Tensor:
        x, w = args
        ctx.w = w
        return x

    @staticmethod
    def backward(ctx: Any, *args: Any) -> Tuple[None, torch.Tensor]:
        w = ctx.w
        shape = w.shape
        i = torch.tensor([0, 1]).reshape([1, -1])
        v = torch.tensor([[1, 2], [3, 4]], dtype=torch.float)
        g = torch.sparse_coo_tensor(i, v, shape, check_invariants=False)._coalesced_(True)
        print(f"g: {g.shape}")
        return None, g

if __name__ == '__main__':
    x = torch.tensor([1, 2, 3, 4], dtype=torch.float).view([2, 2])
    w = torch.tensor([1, 1, 1, 1], dtype=torch.float, requires_grad=True).view([2, 2])
    print(f"w: {w.shape}")
    y = CheckFn.apply(x, w)
    y.sum().backward()
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-10310U CPU @ 1.70GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 12
CPU max MHz: 4400.0000
CPU min MHz: 400.0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cpu
[pip3] torchaudio==2.0.2+cpu
[pip3] torchvision==0.15.2+cpu
[conda] No relevant packages
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ezyang @albanD @zou3519 @gqchen @soulitzer @Lezcano @Varal7
| 7 |
1,859 | 105,840 |
[FSDP] FSDP doesn't work (random accuracy performance) when using `param_init_fn` and `sync_module_states=True`
|
feature, triaged, module: fsdp
|
### 🐛 Describe the bug
Currently, when using FSDP, each of the N processes loads the model completely on CPU, leading to huge CPU RAM usage. When training models like Falcon-40B with FSDP on a DGX node with 8 GPUs, this leads to CPU RAM running out of memory: each process loads 160GB (40B x 4 bytes (FP32)) into CPU RAM, for a total requirement of 160*8=1280GB, which results in the script getting killed.
To combat this, we are trying to load the model only on rank 0 and keep it on the `meta` device when rank!=0, then use `param_init_fn` along with `sync_module_states=True` so that FSDP properly initializes the weights on the other ranks and broadcasts the params from rank 0. **This is trying to achieve what `zero.init()` from DeepSpeed does; it would be great for FSDP to support this out of the box.**
However, when using the above approach, the metrics in terms of accuracy and F1 score are random, i.e., the model isn't learning anything, even though the weights seem to change and the train loss decreases a little. A sketch of the intended pattern is shown below.
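(A minimal sketch only; the `to_empty`-based `param_init_fn` and the `build_model` helper are my assumptions of the intended usage, not the exact code from the linked repo.)

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

rank = dist.get_rank()

# rank 0 loads the real pretrained weights; the other ranks build the module on the meta device
if rank == 0:
    model = build_model()  # hypothetical helper that loads the pretrained checkpoint
else:
    with torch.device("meta"):
        model = build_model()

model = FSDP(
    model,
    device_id=torch.cuda.current_device(),
    sync_module_states=True,  # broadcast rank 0's params to the other ranks
    param_init_fn=(
        None
        if rank == 0
        # materialize the meta params on GPU; their values are then overwritten by the broadcast
        else (lambda module: module.to_empty(device=torch.device("cuda")))
    ),
)
```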
Code: https://github.com/pacman100/ram_efficient_fsdp
Steps:
1. pip install -r requirements.txt
2. bash run.sh
The FSDP config is in `config.yaml`
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: true
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: BertLayer
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Output:
```
[2023-07-24 16:34:14,815] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-07-24 16:34:19,709] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-07-24 16:34:19,736] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
DistributedType.FSDP
wandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin
Found cached dataset glue (/raid/sourab/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|██████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1269.59it/s]
wandb: Tracking run with wandb version 0.15.5
wandb: Run data is saved locally in /home/sourab/ram_efficient_fsdp/wandb/run-20230724_163421-hshg5m1t
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run whole-fire-12
wandb: ⭐️ View project at https://wandb.ai/smangrul/fsdp_glue_no_trainer
wandb: 🚀 View run at https://wandb.ai/smangrul/fsdp_glue_no_trainer/runs/hshg5m1t
Found cached dataset glue (/raid/sourab/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|██████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1229.52it/s]
Loading cached processed dataset at /raid/sourab/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-c8be3b6cea2b5568.arrow
Loading cached processed dataset at /raid/sourab/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-98f28c5bb15f064c.arrow
accelerator.process_index=1 model.bert.pooler.dense.weight=Parameter containing:
tensor(..., device='meta', size=(768, 768), requires_grad=True)
accelerator.process_index=1 model.classifier.weight=Parameter containing:
tensor(..., device='meta', size=(2, 768), requires_grad=True)
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
accelerator.process_index=0 model.bert.pooler.dense.weight=Parameter containing:
tensor([[-0.0013, -0.0381, -0.0158, ..., 0.0244, -0.0008, 0.0240],
[ 0.0020, 0.0151, 0.0033, ..., 0.0180, -0.0023, 0.0231],
[-0.0386, 0.0145, 0.0621, ..., 0.0374, -0.0105, -0.0395],
...,
[-0.0111, 0.0136, 0.0541, ..., 0.0666, 0.0017, -0.0090],
[ 0.0001, 0.0024, -0.0125, ..., 0.0046, -0.0014, -0.0079],
[ 0.0415, 0.0751, 0.0305, ..., 0.0317, 0.0479, 0.0080]],
requires_grad=True)
accelerator.process_index=0 model.classifier.weight=Parameter containing:
tensor([[-0.0025, 0.0011, -0.0052, ..., -0.0212, 0.0227, 0.0206],
[ 0.0151, -0.0045, 0.0243, ..., -0.0208, -0.0183, -0.0203]],
requires_grad=True)
FullyShardedDataParallel(
(_fsdp_wrapped_module): BertForSequenceClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x FullyShardedDataParallel(
(_fsdp_wrapped_module): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=2, bias=True)
)
)
accelerator.process_index=0 model.bert.pooler.dense.weight=Parameter containing:
tensor([[-0.0013, -0.0381, -0.0158, ..., 0.0244, -0.0008, 0.0240],
[ 0.0020, 0.0151, 0.0033, ..., 0.0180, -0.0023, 0.0231],
[-0.0386, 0.0145, 0.0621, ..., 0.0374, -0.0105, -0.0395],
...,
[-0.0111, 0.0136, 0.0541, ..., 0.0666, 0.0017, -0.0090],
[ 0.0001, 0.0024, -0.0125, ..., 0.0046, -0.0014, -0.0079],
[ 0.0415, 0.0751, 0.0305, ..., 0.0317, 0.0479, 0.0080]],
device='cuda:0', requires_grad=True)
accelerator.process_index=0 model.classifier.weight=Parameter containing:
tensor([[-0.0025, 0.0011, -0.0052, ..., -0.0212, 0.0227, 0.0206],
[ 0.0151, -0.0045, 0.0243, ..., -0.0208, -0.0183, -0.0203]],
device='cuda:0', requires_grad=True)
accelerator.process_index=1 model.bert.pooler.dense.weight=Parameter containing:
tensor([[-0.0013, -0.0381, -0.0158, ..., 0.0244, -0.0008, 0.0240],
[ 0.0020, 0.0151, 0.0033, ..., 0.0180, -0.0023, 0.0231],
[-0.0386, 0.0145, 0.0621, ..., 0.0374, -0.0105, -0.0395],
...,
[-0.0111, 0.0136, 0.0541, ..., 0.0666, 0.0017, -0.0090],
[ 0.0001, 0.0024, -0.0125, ..., 0.0046, -0.0014, -0.0079],
[ 0.0415, 0.0751, 0.0305, ..., 0.0317, 0.0479, 0.0080]],
device='cuda:1', requires_grad=True)
accelerator.process_index=1 model.classifier.weight=Parameter containing:
tensor([[-0.0025, 0.0011, -0.0052, ..., -0.0212, 0.0227, 0.0206],
[ 0.0151, -0.0045, 0.0243, ..., -0.0208, -0.0183, -0.0203]],
device='cuda:1', requires_grad=True)
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Memory before entering the train : 212
Memory consumed at the end of the train (end-begin): 659
Peak Memory consumed during the train (max-begin): 1995
Total Peak Memory consumed during the train (max): 2207
epoch 0: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
Memory before entering the eval : 872
Memory consumed at the end of the eval (end-begin): 94
Peak Memory consumed during the eval (max-begin): 209
Total Peak Memory consumed during the eval (max): 1081
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
Memory before entering the train : 966
Memory consumed at the end of the train (end-begin): -94
Peak Memory consumed during the train (max-begin): 1218
Total Peak Memory consumed during the train (max): 2184
epoch 1: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
Memory before entering the eval : 872
Memory consumed at the end of the eval (end-begin): 94
Peak Memory consumed during the eval (max-begin): 209
Total Peak Memory consumed during the eval (max): 1081
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
Memory before entering the train : 966
Memory consumed at the end of the train (end-begin): -94
Peak Memory consumed during the train (max-begin): 1297
Total Peak Memory consumed during the train (max): 2263
epoch 2: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}
Memory before entering the eval : 872
Memory consumed at the end of the eval (end-begin): 94
Peak Memory consumed during the eval (max-begin): 209
Total Peak Memory consumed during the eval (max): 1081
wandb: Waiting for W&B process to finish... (success).
wandb:
wandb: Run history:
wandb: accuracy ▁▁▁
wandb: eval_total_peak_memory ▁▁▁
wandb: f1 ▁▁▁
wandb: train_loss █▂▁
wandb: train_total_peak_memory ▃▁█
wandb:
wandb: Run summary:
wandb: accuracy 0.68382
wandb: eval_total_peak_memory 1081
wandb: f1 0.81223
wandb: train_loss 0.63513
wandb: train_total_peak_memory 2263
wandb:
wandb: 🚀 View run whole-fire-12 at: https://wandb.ai/smangrul/fsdp_glue_no_trainer/runs/hshg5m1t
wandb: Synced 6 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20230724_163421-hshg5m1t/logs
```
As you can see, the metrics remain the same across the 3 epochs, which corresponds to random performance.
**Expected Behaviour:**
The model learns when using `param_init_fn` and `sync_module_states=True` with FSDP, so that the pretrained model can be loaded only on rank 0 and kept on `meta` for rank!=0. This is required for FSDP to be usable with large models in practice.
### Versions
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA DGX Display
GPU 4: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] lion-pytorch==0.0.6
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==1.9.0
[pip3] pytorch-triton==2.1.0+9e3e10c5ed
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] lion-pytorch 0.0.6 pypi_0 pypi
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpy-base 1.25.0 py310hb5e798b_0
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 1.9.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-triton 2.1.0+9e3e10c5ed pypi_0 pypi
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torchaudio 2.0.2+cu118 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 19 |
1,860 | 105,839 |
MPS memory issue, MPS backend out of memory, but works if I empty the MPS cache
|
module: memory usage, triaged, module: mps
|
### 🐛 Describe the bug
There appears to be something wrong with the MPS cache: it appears that either it is not releasing memory when it ideally should, or the freeable memory in the cache is not taken into account when the check for available space occurs.
The issue occurs on the current nightly (see versions) and on 2.0.1.
This issue affects performance at best and terminates the application at worst.
Here's an example...
```
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
from torch import mps
import torch
import fp16fixes
import gc
fp16fixes.fp16_fixes()
pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16)
pipe_prior.to("mps")
prompt = "A car exploding into colorful dust"
out = pipe_prior(prompt)
image_emb = out.image_embeds
zero_image_emb = out.negative_image_embeds
pipe_prior = None
gc.collect()
mps.empty_cache()
pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.to("mps")
pipe.enable_attention_slicing()
image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    height=1024,
    width=1024,
    num_inference_steps=30,
).images
image[0].save("cat.png")
```
This works on an 8GB M1 Mac Mini without issue; the two models run at
```
100%|████████| 25/25 [00:07<00:00, 3.15it/s]
100%|████████| 30/30 [04:24<00:00, 8.82s/it]
```
Remove the `mps.empty_cache()` and it fails during the second model run
```
0%| | 0/30 [00:03<?, ?it/s]
Traceback (most recent call last):
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/8GB_M1_Diffusers_Scripts/sag/k2img.py", line 25, in <module>
image = pipe(
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py", line 272, in __call__
noise_pred = self.unet(
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 905, in forward
sample, res_samples = downsample_block(
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 1662, in forward
hidden_states = attn(
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 321, in forward
return self.processor(
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 1590, in __call__
attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice)
File "/Volumes/Sabrent Media/Documents/Source/Python/Diffusers/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 374, in get_attention_scores
attention_probs = attention_scores.softmax(dim=-1)
RuntimeError: MPS backend out of memory (MPS allocated: 3.90 GB, other allocations: 4.94 GB, max allowed: 9.07 GB). Tried to allocate 387.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
```
If I reduce the height and width values to 512 it will run to completion, but the second model runs at 40 seconds per iter with a lot of swap file access. With the cache emptied manually it runs at around 2 seconds per iter.
The fp16fixes file is required to work around some issues with using fp16 on MPS, which fails with a broadcast error on 2.0.1 and produces a bad image on the nightly I'm currently using. If I remove it, the issue described here still occurs on the nightly.
```
% cat fp16fixes.py
import torch

def fp16_fixes():
    if torch.backends.mps.is_available():
        torch.empty = torch.zeros

        _torch_layer_norm = torch.nn.functional.layer_norm

        def new_layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05):
            if input.device.type == "mps" and input.dtype == torch.float16:
                input = input.float()
                if weight is not None:
                    weight = weight.float()
                if bias is not None:
                    bias = bias.float()
                return _torch_layer_norm(input, normalized_shape, weight, bias, eps).half()
            else:
                return _torch_layer_norm(input, normalized_shape, weight, bias, eps)

        torch.nn.functional.layer_norm = new_layer_norm

        def new_torch_tensor_permute(input, *dims):
            result = torch.permute(input, tuple(dims))
            if input.device == "mps" and input.dtype == torch.float16:
                result = result.contiguous()
            return result

        torch.Tensor.permute = new_torch_tensor_permute
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230724
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.24.4
Libc version: N/A
Python version: 3.10.11 (main, Apr 8 2023, 02:11:11) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.1.0.dev20230724
[pip3] torchvision==0.15.2
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
1,861 | 105,838 |
Exporting the operator 'aten::grad' to ONNX opset version 18 is not supported.
|
module: onnx, module: autograd, triaged
|
### 🚀 The feature, motivation and pitch
I am attempting to convert a model from PyTorch to ONNX; inside the model there is a gradient calculation operation:
`torch.autograd.grad(params...)`
(I was using backward hooks to get gradients, but TorchScript does not support that.)
Then, when the code calls
jitscriptModule = torch.jit.script(model) # good, and torchscript pt model works well
torch.onnx.export(jitscriptModule, tensor, "converted.onnx") # bad, operator not supported occurs.
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::grad' to ONNX opset version 18 is not supported.
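A minimal sketch of the pattern that triggers this (my own reduction, not the original model; the exact scripting details of the real model may differ):

```python
import torch

class GradInForward(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.requires_grad_(True)
        y = (x * x).sum()
        # gradient computed inside forward; this becomes aten::grad in the scripted graph
        grads = torch.autograd.grad([y], [x], create_graph=True)
        g = grads[0]
        assert g is not None
        return g

model = GradInForward()
scripted = torch.jit.script(model)  # scripting works, and the scripted model runs fine
torch.onnx.export(scripted, torch.randn(3), "converted.onnx")  # raises UnsupportedOperatorError for aten::grad
```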
### Alternatives
Using pytorch 2.0.1
### Additional context
Will this feature be supported by a future PyTorch release?
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
1,862 | 105,835 |
Remove TORCH_API from OpaqueTensorImpl
|
triaged, open source, Stale
|
Removes the TORCH_API qualifier macro from the OpaqueTensorImpl class template, so that the class template can be inherited from by classes defined outside torch_cpu.dll and the class template implementation is generated correctly.
| 4 |
1,863 | 105,823 |
Add UT for NEON implementation of vec_reduce_all
|
module: cpu, triaged, open source, Stale, ciflow/trunk, release notes: sparse
|
All these changes are required to add a UT for the NEON implementation of `vec_reduce_all` that I introduced in #105590.
This enables reuse of the existing tests in aten/src/ATen/test/vec_test_all_types.cpp
As requested, I have made a separate PR for these changes so that this can land before #105590.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 10 |
1,864 | 105,822 |
Redundant kernels for torch.scatter() when using torch.compile()
|
triaged, module: functionalization, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
Hello,
I have a very simple program as follows:
```
import torch

def fn(out, src, index):
    out.scatter_(0, index, src)
    return out

out = torch.zeros(8, dtype=torch.int64, device='cuda')
src = torch.tensor([1001, 1002, 1003], dtype=torch.int64, device='cuda')
index = torch.tensor([1, 3, 5], dtype=torch.int64, device='cuda')

compiled_f = torch.compile(fn, backend='inductor',
                           options={'trace.enabled':True,
                                    'trace.graph_diagram':True})
out = compiled_f(out, src, index)
```
I found out that the triton code generated using torch.compile() is as follows:
```
from ctypes import c_void_p, c_long
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch import empty_strided, as_strided, device
from torch._inductor.codecache import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
aten = torch.ops.aten
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
async_compile = AsyncCompile()
# kernel path: /tmp/torchinductor_surakav/ve/cve56yx3hkxlcxgcjzkzrwkfqjk32wabeioxkqmj6wzktob6ndln.py
# Original ATen: aten.scatter
# aten.scatter => scatter
triton_poi_fused_scatter_0 = async_compile.triton('triton_', '''
import triton
import triton.language as tl
from torch._inductor.ir import ReductionHint
from torch._inductor.ir import TileHint
from torch._inductor.triton_heuristics import AutotuneHint, pointwise
from torch._inductor.utils import instance_descriptor
from torch._inductor import triton_helpers
@pointwise(size_hints=[8], filename=__file__, meta={'signature': {0: '*i64', 1: '*i64', 2: 'i32'}, 'device': 0, 'constants': {}, 'mutated_arg_names': [], 'autotune_hints': set(), 'configs': [instance_descriptor(divisible_by_16=(0, 1), equal_to_1=())]})
@triton.jit
def triton_(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 8
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tl.store(out_ptr0 + (x0), tmp0, xmask)
''')
import triton
import triton.language as tl
from torch._inductor.triton_heuristics import grid, start_graph, end_graph
from torch._C import _cuda_getCurrentRawStream as get_cuda_stream
# kernel path: /tmp/torchinductor_surakav/ot/cotufkhkaxlsx7zez6h2ksaw3znejjzmdja62ubpjuiga5xnawzq.py
# Original ATen: aten.scatter
# aten.scatter => scatter
triton_poi_fused_scatter_1 = async_compile.triton('triton_', '''
import triton
import triton.language as tl
from torch._inductor.ir import ReductionHint
from torch._inductor.ir import TileHint
from torch._inductor.triton_heuristics import AutotuneHint, pointwise
from torch._inductor.utils import instance_descriptor
from torch._inductor import triton_helpers
@pointwise(size_hints=[4], filename=__file__, meta={'signature': {0: '*i64', 1: '*i64', 2: '*i64', 3: 'i32'}, 'device': 0, 'constants': {}, 'mutated_arg_names': ['out_ptr0'], 'autotune_hints': set(), 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})
@triton.jit
def triton_(in_ptr0, in_ptr1, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 3
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tmp1 = tl.load(in_ptr1 + (x0), xmask)
tl.device_assert(((0 <= tmp0) & (tmp0 < 8)) | ~xmask, "index out of bounds: 0 <= tmp0 < 8")
tl.store(out_ptr0 + (tl.broadcast_to(tmp0, [XBLOCK])), tmp1, xmask)
''')
# kernel path: /tmp/torchinductor_surakav/3i/c3igscqoaoc2w6f2rm3zzbumbrsp3453ilgp42nn463kkpl5aed2.py
# Original ATen:
triton_poi_fused_2 = async_compile.triton('triton_', '''
import triton
import triton.language as tl
from torch._inductor.ir import ReductionHint
from torch._inductor.ir import TileHint
from torch._inductor.triton_heuristics import AutotuneHint, pointwise
from torch._inductor.utils import instance_descriptor
from torch._inductor import triton_helpers
@pointwise(size_hints=[8], filename=__file__, meta={'signature': {0: '*i64', 1: '*i64', 2: 'i32'}, 'device': 0, 'constants': {}, 'mutated_arg_names': ['out_ptr0'], 'autotune_hints': set(), 'configs': [instance_descriptor(divisible_by_16=(0, 1), equal_to_1=())]})
@triton.jit
def triton_(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 8
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask)
tl.store(out_ptr0 + (x0), tmp0, xmask)
''')
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, arg1_1, arg2_1 = args
args.clear()
assert_size_stride(arg0_1, (8, ), (1, ))
assert_size_stride(arg1_1, (3, ), (1, ))
assert_size_stride(arg2_1, (3, ), (1, ))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0) # no-op to ensure context
buf0 = empty_strided((8, ), (1, ), device='cuda', dtype=torch.int64)
stream0 = get_cuda_stream(0)
triton_poi_fused_scatter_0.run(arg0_1, buf0, 8, grid=grid(8), stream=stream0)
triton_poi_fused_scatter_1.run(arg2_1, arg1_1, buf0, 3, grid=grid(3), stream=stream0)
del arg1_1
del arg2_1
triton_poi_fused_2.run(buf0, arg0_1, 8, grid=grid(8), stream=stream0)
del arg0_1
return (buf0, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((8, ), (1, ), device='cuda:0', dtype=torch.int64)
arg1_1 = rand_strided((3, ), (1, ), device='cuda:0', dtype=torch.int64)
arg2_1 = rand_strided((3, ), (1, ), device='cuda:0', dtype=torch.int64)
return print_performance(lambda: call([arg0_1, arg1_1, arg2_1]), times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
```
It seems like there are two redundant kernels before and after the scatter that do nothing but load and store the same data (triton_poi_fused_scatter_0 and triton_poi_fused_2). I am wondering if this is a bug. Is there a way to eliminate these two redundant kernels?
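For comparison, a variant of the same snippet that avoids mutating the graph input by using the out-of-place `torch.scatter`; this is only a guess at something worth comparing, not a confirmed fix for the extra kernels:

```python
import torch

def fn(out, src, index):
    # out-of-place scatter: no in-place mutation of the graph input
    return torch.scatter(out, 0, index, src)

out = torch.zeros(8, dtype=torch.int64, device='cuda')
src = torch.tensor([1001, 1002, 1003], dtype=torch.int64, device='cuda')
index = torch.tensor([1, 3, 5], dtype=torch.int64, device='cuda')

compiled_f = torch.compile(fn, backend='inductor')
out = compiled_f(out, src, index)
```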
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git72b223c
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1038-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7V12 64-Core Processor
Stepping: 0
CPU MHz: 2445.440
BogoMIPS: 4890.88
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip rdpid
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0a0+git72b223c
[conda] magma-cuda118 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] numpy 1.25.0 pypi_0 pypi
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0a0+git72b223c dev_0 <develop>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,865 | 105,821 |
JIT input aliasing does not support aten::fill_
|
oncall: jit
|
### 🐛 Describe the bug
The following occurs when using TensorBoard to output the model structure of the Deformable-DETR code:
```
/root/autodl-tmp/read-Deformable-DETR-main/util/misc.py:300: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
max_size = _max_by_axis([list(img.shape) for img in tensor_list])
/root/autodl-tmp/read-Deformable-DETR-main/util/misc.py:286: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
maxes[index] = max(maxes[index], item)
/root/autodl-tmp/read-Deformable-DETR-main/util/misc.py:303: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
batch_shape = [len(tensor_list)] + max_size
/root/autodl-tmp/read-Deformable-DETR-main/util/misc.py:315: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
for img, pad_img, m in zip(tensor_list, tensor, mask):
/root/autodl-tmp/read-Deformable-DETR-main/models/position_encoding.py:50: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
/root/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py:2498: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
_verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:]))
/root/autodl-tmp/read-Deformable-DETR-main/models/deformable_transformer.py:243: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)
/root/autodl-tmp/read-Deformable-DETR-main/models/deformable_transformer.py:405: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
for lvl, (H_, W_) in enumerate(spatial_shapes):
/root/miniconda3/lib/python3.8/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/root/autodl-tmp/read-Deformable-DETR-main/models/ops/modules/ms_deform_attn.py:118: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (input_spatial_shapes[:, 0] * input_spatial_shapes[:, 1]).sum() == Len_in
/root/autodl-tmp/read-Deformable-DETR-main/models/ops/modules/ms_deform_attn.py:140: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if reference_points.shape[-1] == 2:
/root/autodl-tmp/read-Deformable-DETR-main/models/deformable_transformer.py:565: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if reference_points.shape[-1] == 4:
/root/autodl-tmp/read-Deformable-DETR-main/models/deformable_transformer.py:571: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert reference_points.shape[-1] == 2
/root/autodl-tmp/read-Deformable-DETR-main/models/deformable_detr.py:294: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if reference.shape[-1] == 4:
0INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::fill_ but it isn't a special case. Argument types: Tensor, bool,
Candidates:
aten::fill_.Scalar(Tensor(a!) self, Scalar value) -> (Tensor(a!))
aten::fill_.Tensor(Tensor(a!) self, Tensor value) -> (Tensor(a!))
Error occurs, No graph saved
Traceback (most recent call last):
File "main.py", line 406, in <module>
main(args)
File "main.py", line 343, in main
train_stats = train_one_epoch(
File "/root/autodl-tmp/read-Deformable-DETR-main/engine.py", line 51, in train_one_epoch
writer.add_graph(model, samples.tensors,use_strict_trace=False)
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 736, in add_graph
self._get_file_writer().add_graph(graph(model, input_to_model, verbose, use_strict_trace))
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 297, in graph
raise e
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 291, in graph
trace = torch.jit.trace(model, args, strict=use_strict_trace)
File "/root/miniconda3/lib/python3.8/site-packages/torch/jit/_trace.py", line 741, in trace
return trace_module(
File "/root/miniconda3/lib/python3.8/site-packages/torch/jit/_trace.py", line 958, in trace_module
module._c._create_method_from_trace(
RuntimeError: 0INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::fill_ but it isn't a special case. Argument types: Tensor, bool,
Candidates:
aten::fill_.Scalar(Tensor(a!) self, Scalar value) -> (Tensor(a!))
aten::fill_.Tensor(Tensor(a!) self, Tensor value) -> (Tensor(a!))
```
The deformable_detr code is as follows:
```
# ------------------------------------------------------------------------
# Deformable DETR
# Copyright (c) 2020 SenseTime. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# ------------------------------------------------------------------------
# Modified from DETR (https://github.com/facebookresearch/detr)
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# ------------------------------------------------------------------------
"""
Deformable DETR model and criterion classes.
"""
import torch
import torch.nn.functional as F
from torch import nn
import math
from util import box_ops
from util.misc import (NestedTensor, nested_tensor_from_tensor_list,
accuracy, get_world_size, interpolate,
is_dist_avail_and_initialized, inverse_sigmoid)
from .backbone import build_backbone
from .matcher import build_matcher
from .segmentation import (DETRsegm, PostProcessPanoptic, PostProcessSegm,
dice_loss, sigmoid_focal_loss)
from .deformable_transformer import build_deformable_transformer
import copy
# clone N copies of a module
def _get_clones(module, N):
return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
class DeformableDETR(nn.Module):
""" This is the Deformable DETR module that performs object detection """
def __init__(self, backbone, transformer, num_classes, num_queries,
num_feature_levels,
aux_loss=True, with_box_refine=False, two_stage=False):
# num_feature_levels, with_box_refine, two_stage are newly added in Deformable DETR
""" Initializes the model.
Parameters:
backbone: torch module of the backbone to be used. See backbone.py
transformer: torch module of the transformer architecture. See transformer.py
num_classes: number of object classes
num_queries: number of object queries, ie detection slot. This is the maximal number of objects
DETR can detect in a single image. For COCO, we recommend 100 queries.
aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.
with_box_refine: iterative bounding box refinement
two_stage: two-stage Deformable DETR
"""
super().__init__()
self.num_queries = num_queries
self.transformer = transformer
hidden_dim = transformer.d_model
# DETR adds 1 to the number of classes; here no +1 is added, and the later handling also differs a bit
self.class_embed = nn.Linear(hidden_dim, num_classes)
# since a bbox has 4 coordinates, output_dim here is 4;
# a multi-layer MLP can better fuse and extract the necessary information,
# whereas the classification head only needs a single Linear
self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
# multi level
self.num_feature_levels = num_feature_levels
# the case where two_stage is not used
if not two_stage:
# handled differently from DETR:
# hidden_dim is multiplied by 2 here and later split into query_embed and tgt,
# whereas DETR's tgt is initialized with zeros_like;
# the tgt here is shared across the network
self.query_embed = nn.Embedding(num_queries, hidden_dim * 2)
if num_feature_levels > 1:
# the backbone has 3 output levels (2, 3, 4); strides is [8,16,32]
num_backbone_outs = len(backbone.strides)
input_proj_list = []
for _ in range(num_backbone_outs):
# num_channels is [512,1024,2048]
in_channels = backbone.num_channels[_]
# each level's channel count is projected to the one expected by the transformer
input_proj_list.append(nn.Sequential(
nn.Conv2d(in_channels, hidden_dim, kernel_size=1),
nn.GroupNorm(32, hidden_dim),
))
for _ in range(num_feature_levels - num_backbone_outs):
# num_feature_levels=4 feature levels are needed in total, but num_backbone_outs=3,
# so one level cannot be provided by the backbone and is constructed here instead;
# in_channels is the last channel width (2048) on the first iteration;
# the 3x3 stride-2 conv halves the feature map
input_proj_list.append(nn.Sequential(
nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1),
nn.GroupNorm(32, hidden_dim),
))
in_channels = hidden_dim
self.input_proj = nn.ModuleList(input_proj_list)
else:
# uses only the first one, with 512 channels
self.input_proj = nn.ModuleList([
nn.Sequential(
nn.Conv2d(backbone.num_channels[0], hidden_dim, kernel_size=1),
nn.GroupNorm(32, hidden_dim),
)])
self.backbone = backbone
self.aux_loss = aux_loss
self.with_box_refine = with_box_refine
self.two_stage = two_stage
prior_prob = 0.01
# todo
bias_value = -math.log((1 - prior_prob) / prior_prob)
# weight initialization
self.class_embed.bias.data = torch.ones(num_classes) * bias_value
# weight initialization
nn.init.constant_(self.bbox_embed.layers[-1].weight.data, 0)
nn.init.constant_(self.bbox_embed.layers[-1].bias.data, 0)
# weight initialization
for proj in self.input_proj:
nn.init.xavier_uniform_(proj[0].weight, gain=1)
nn.init.constant_(proj[0].bias, 0)
# if two-stage, the last class_embed and bbox_embed is for region proposal generation
num_pred = (transformer.decoder.num_layers + 1) if two_stage else transformer.decoder.num_layers
if with_box_refine:
# here class_embed and bbox_embed are also 6 copies
# without with_box_refine there are also six of them,
# but with with_box_refine the six are distinct, independent modules;
# without it, the six are actually the same module, just like in DETR
# clone 6 copies
self.class_embed = _get_clones(self.class_embed, num_pred)
# clone 6 copies
self.bbox_embed = _get_clones(self.bbox_embed, num_pred)
# parameter initialization
nn.init.constant_(self.bbox_embed[0].layers[-1].bias.data[2:], -2.0)
# hack implementation for iterative bounding box refinement
self.transformer.decoder.bbox_embed = self.bbox_embed
else:
# the bias is initialized to 0 0 -2 -2
# todo why -2
nn.init.constant_(self.bbox_embed.layers[-1].bias.data[2:], -2.0)
# used differently from DETR, which has only one class_embed and one bbox_embed;
# here one is created per layer (but they are the same class_embed, the real network still has a single copy).
# the same design as DETR could have been kept, but it is done this way because the bbox computation later
# needs to add the offsets computed by the corresponding layer onto the outputs of the different decoder layers
self.class_embed = nn.ModuleList([self.class_embed for _ in range(num_pred)])
self.bbox_embed = nn.ModuleList([self.bbox_embed for _ in range(num_pred)])
self.transformer.decoder.bbox_embed = None
if two_stage:
# hack implementation for two-stage
# class_embed outputs the classes
self.transformer.decoder.class_embed = self.class_embed
# todo the original source seems to be missing this line; I added it
self.transformer.decoder.bbox_embed = self.bbox_embed
for box_embed in self.bbox_embed:
# todo why -2 is changed back to 0
nn.init.constant_(box_embed.layers[-1].bias.data[2:], 0.0)
def forward(self, samples):
"""The forward expects a NestedTensor, which consists of:
- samples.tensor: batched images, of shape [batch_size x 3 x H x W]
- samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels
It returns a dict with the following elements:
- "pred_logits": the classification logits (including no-object) for all queries.
Shape= [batch_size x num_queries x (num_classes + 1)]
- "pred_boxes": The normalized boxes coordinates for all queries, represented as
(center_x, center_y, height, width). These values are normalized in [0, 1],
relative to the size of each individual image (disregarding possible padding).
See PostProcess for information on how to retrieve the unnormalized bounding box.
- "aux_outputs": Optional, only returned when auxilary losses are activated. It is a list of
dictionnaries containing the two above keys for each decoder layer.
"""
if not isinstance(samples, NestedTensor):
samples = nested_tensor_from_tensor_list(samples)
# features has 3 items; DETR has only one;
# 3 because three levels are defined inside the backbone, not set by a config parameter
features, pos = self.backbone(samples)
# 经过input_proj之后,这里的feature的channel都是256
srcs = []
masks = []
# iterate over each feature level
for l, feat in enumerate(features):
# decompose into feature and mask
src, mask = feat.decompose()
# input_proj is a Conv2d; each feature level has its own convolution
srcs.append(self.input_proj[l](src))
masks.append(mask)
assert mask is not None
# example with 4 levels; the last one is used here
# Sequential(
# (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
# (1): GroupNorm(32, 256, eps=1e-05, affine=True)
# )
# Sequential(
# (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
# (1): GroupNorm(32, 256, eps=1e-05, affine=True)
# )
# Sequential(
# (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
# (1): GroupNorm(32, 256, eps=1e-05, affine=True)
# )
# Sequential(
# (0): Conv2d(2048, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
# (1): GroupNorm(32, 256, eps=1e-05, affine=True)
# )
# levels beyond those provided by the backbone
if self.num_feature_levels > len(srcs):
_len_srcs = len(srcs)
for l in range(_len_srcs, self.num_feature_levels):
if l == _len_srcs:
# this one can still take the backbone's features
src = self.input_proj[l](features[-1].tensors)
else:
# this one can only take the previous level's output; it no longer connects directly to the backbone features
src = self.input_proj[l](srcs[-1])
# this is the ground-truth mask
m = samples.mask
# create a new mask via interpolation
mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0]
# backbone[1] is the positional encoding, because this final src and the gt mask have not been positionally encoded
# the positional encoding is also a module; this simply calls its forward method
pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype)
srcs.append(src)
masks.append(mask)
pos.append(pos_l)
# if two_stage is used, the query_embeds passed to the transformer is None
query_embeds = None
if not self.two_stage:
# same as DETR, query_embed.weight is used
query_embeds = self.query_embed.weight
# DETR returns only two values here: the decoder output and the encoder output
# the last two only carry values in the two-stage case
# hs [6,bs,300,256]
# init_reference [bs,300,2] initial reference point coordinates generated by a Linear layer
# inter_references [6,bs,300,2] reference point coordinates after each decoder layer?
hs, init_reference, inter_references, \
enc_outputs_class, enc_outputs_coord_unact = self.transformer(srcs, masks,
pos,
query_embeds)
outputs_classes = []
outputs_coords = []
for lvl in range(hs.shape[0]):
if lvl == 0:
# init_reference holds the initial reference point coordinates produced by the Linear layer
reference = init_reference
else:
# the network outputs are offset corrections to the reference_points fed into the decoder, so the previous layer's reference_points are used
reference = inter_references[lvl - 1]
# first apply the inverse function to the reference points; the result lives in real image coordinates, and the corrections use this coordinate system too
# at the end a sigmoid is applied again to constrain values back to 0-1
# inverse of sigmoid [bs,300,2]
reference = inverse_sigmoid(reference)
# class output [bs,300,91]
outputs_class = self.class_embed[lvl](hs[lvl])
# [bs,300,4]
tmp = self.bbox_embed[lvl](hs[lvl])
if reference.shape[-1] == 4:
# the == 4 case: center coordinates plus height and width
tmp += reference
else:
# the == 2 case: only the center coordinates
# assert reference.shape[-1] == 2
# added to the first two dimensions (the center coordinates)
tmp[..., :2] += reference
# bbox output [bs,300,4]
outputs_coord = tmp.sigmoid()
outputs_classes.append(outputs_class)
outputs_coords.append(outputs_coord)
# each layer has its own class prediction; the difference from DETR is that DETR has only one head, shared by all six layers
# Deformable DETR has six of these heads, one used per layer
outputs_class = torch.stack(outputs_classes)
# each layer also has its own box prediction
outputs_coord = torch.stack(outputs_coords)
# predictions of the last decoder layer
out = {'pred_logits': outputs_class[-1], 'pred_boxes': outputs_coord[-1]}
# auxiliary loss
if self.aux_loss:
# out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord)
out = (outputs_class[-1], outputs_coord[-1])
if self.two_stage:
# coordinates pass through a sigmoid so they stay within 0-1
enc_outputs_coord = enc_outputs_coord_unact.sigmoid()
# out['enc_outputs'] = {'pred_logits': enc_outputs_class, 'pred_boxes': enc_outputs_coord}
return out
@torch.jit.unused
def _set_aux_loss(self, outputs_class, outputs_coord):
# this is a workaround to make torchscript happy, as torchscript
# doesn't support dictionary with non-homogeneous values, such
# as a dict having both a Tensor and a list.
return [{'pred_logits': a, 'pred_boxes': b}
for a, b in zip(outputs_class[:-1], outputs_coord[:-1])]
class SetCriterion(nn.Module):
""" This class computes the loss for DETR.
The process happens in two steps:
1) we compute hungarian assignment between ground truth boxes and the outputs of the model
2) we supervise each pair of matched ground-truth / prediction (supervise class and box)
"""
def __init__(self, num_classes, matcher, weight_dict, losses, focal_alpha=0.25):
""" Create the criterion.
Parameters:
num_classes: number of object categories, omitting the special no-object category
matcher: module able to compute a matching between targets and proposals
weight_dict: dict containing as key the names of the losses and as values their relative weight.
losses: list of all the losses to be applied. See get_loss for list of available losses.
focal_alpha: alpha in Focal Loss
"""
super().__init__()
self.num_classes = num_classes
self.matcher = matcher
self.weight_dict = weight_dict
self.losses = losses # ['labels','boxes','cardinality']
self.focal_alpha = focal_alpha
def loss_labels(self, outputs, targets, indices, num_boxes, log=True):
"""Classification loss (NLL)
targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
"""
assert 'pred_logits' in outputs
src_logits = outputs['pred_logits'] # [bs,300,91]
idx = self._get_src_permutation_idx(indices)
target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
target_classes = torch.full(src_logits.shape[:2], self.num_classes,
dtype=torch.int64, device=src_logits.device) # filled entirely with 91
target_classes[idx] = target_classes_o
# +1 here turns 91 into 92, [bs,300,92]; this one-hot step does not exist in the original DETR
target_classes_onehot = torch.zeros([src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1],
dtype=src_logits.dtype, layout=src_logits.layout, device=src_logits.device)
# target_classes is [bs,300] ->unsqueeze(-1)-> [bs,300,1]
# scatter_(dim,index,src) where src is all ones
# target_classes_onehot starts as 92 zeros
# most entries of target_classes are 91; 91 is the last of the 92 positions and gets set to 1
# the next statement then slices off that last position
target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1)
# [bs,300,91]
target_classes_onehot = target_classes_onehot[:, :, :-1]
loss_ce = sigmoid_focal_loss(src_logits, target_classes_onehot, num_boxes, alpha=self.focal_alpha, gamma=2) * \
src_logits.shape[1]
losses = {'loss_ce': loss_ce}
if log:
# TODO this should probably be a separate loss, not hacked in this one here
losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]
return losses
@torch.no_grad()
def loss_cardinality(self, outputs, targets, indices, num_boxes):
"""
Same as DETR
Compute the cardinality error, ie the absolute error in the number of predicted non-empty boxes
This is not really a loss, it is intended for logging purposes only. It doesn't propagate gradients
"""
# [bs,300,91]
pred_logits = outputs['pred_logits']
device = pred_logits.device
# number of ground-truth boxes in each image
tgt_lengths = torch.as_tensor([len(v["labels"]) for v in targets], device=device)
# Count the number of predictions that are NOT "no-object" (which is the last class)
card_pred = (pred_logits.argmax(-1) != pred_logits.shape[-1] - 1).sum(1)
# mean absolute difference
card_err = F.l1_loss(card_pred.float(), tgt_lengths.float())
# smaller should be better, but a smaller value does not guarantee correct predictions
# moreover, with Deformable DETR's 91-class (rather than 92-class) scheme this may not apply, so the value carries little meaning
losses = {'cardinality_error': card_err}
return losses
def loss_boxes(self, outputs, targets, indices, num_boxes):
"""
Same as DETR
Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
The target boxes are expected in format (center_x, center_y, h, w), normalized by the image size.
"""
assert 'pred_boxes' in outputs
idx = self._get_src_permutation_idx(indices)
# idx is a tuple: pred_boxes' first dim is the image within the batch and the second is each predicted box (100 per image); these two dims are indexed out directly
src_boxes = outputs['pred_boxes'][idx]
# take the ground-truth boxes
target_boxes = torch.cat([t['boxes'][i] for t, (_, i) in zip(targets, indices)], dim=0)
loss_bbox = F.l1_loss(src_boxes, target_boxes, reduction='none')
losses = {}
losses['loss_bbox'] = loss_bbox.sum() / num_boxes
# GIoU loss; the IoU computation needs top-left/bottom-right coordinates, so convert to that format first
loss_giou = 1 - torch.diag(box_ops.generalized_box_iou(
box_ops.box_cxcywh_to_xyxy(src_boxes),
box_ops.box_cxcywh_to_xyxy(target_boxes)))
losses['loss_giou'] = loss_giou.sum() / num_boxes
return losses
def loss_masks(self, outputs, targets, indices, num_boxes):
"""Compute the losses related to the masks: the focal loss and the dice loss.
targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w]
"""
assert "pred_masks" in outputs
src_idx = self._get_src_permutation_idx(indices)
tgt_idx = self._get_tgt_permutation_idx(indices)
src_masks = outputs["pred_masks"]
# TODO use valid to mask invalid areas due to padding in loss
target_masks, valid = nested_tensor_from_tensor_list([t["masks"] for t in targets]).decompose()
target_masks = target_masks.to(src_masks)
src_masks = src_masks[src_idx]
# upsample predictions to the target size
src_masks = interpolate(src_masks[:, None], size=target_masks.shape[-2:],
mode="bilinear", align_corners=False)
src_masks = src_masks[:, 0].flatten(1)
target_masks = target_masks[tgt_idx].flatten(1)
losses = {
"loss_mask": sigmoid_focal_loss(src_masks, target_masks, num_boxes),
"loss_dice": dice_loss(src_masks, target_masks, num_boxes),
}
return losses
def _get_src_permutation_idx(self, indices):
# permute predictions following indices
batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])
src_idx = torch.cat([src for (src, _) in indices])
return batch_idx, src_idx
def _get_tgt_permutation_idx(self, indices):
# permute targets following indices
batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])
tgt_idx = torch.cat([tgt for (_, tgt) in indices])
return batch_idx, tgt_idx
def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs):
# four kinds of loss in total; masks is only used for segmentation tasks
# cardinality is not a real loss computation either
loss_map = {
'labels': self.loss_labels,
'cardinality': self.loss_cardinality,
'boxes': self.loss_boxes,
'masks': self.loss_masks
}
assert loss in loss_map, f'do you really want to compute {loss} loss?'
return loss_map[loss](outputs, targets, indices, num_boxes, **kwargs)
def forward(self, outputs, targets):
""" This performs the loss computation.
Parameters:
outputs: dict of tensors, see the output specification of the model for the format
"pred_logtis": [2,300,91],
"pred_boxes": [2,300,4],
"aux_outputs": 前五层decoder的输出
list 内容是bs中的images信息
targets: list of dicts, such that len(targets) == batch_size.
The expected keys in each dict depends on the losses applied, see each loss' doc
"boxes"
"labels"
"image_id"
"area"
"iscrowd"
"orig_size"
"size"
"""
outputs_without_aux = {k: v for k, v in outputs.items() if k != 'aux_outputs' and k != 'enc_outputs'}
# two tensors: the first holds the indices of the DETR proposals, the second the indices of the image's ground truths
# Retrieve the matching between the outputs of the last layer and the targets
indices = self.matcher(outputs_without_aux, targets)
# total number of boxes in this batch
# Compute the average number of target boxes across all nodes, for normalization purposes
num_boxes = sum(len(t["labels"]) for t in targets)
# the device handling below just fetches the device of the tensors in the DETR outputs
num_boxes = torch.as_tensor([num_boxes], dtype=torch.float, device=next(iter(outputs.values())).device)
if is_dist_avail_and_initialized():
torch.distributed.all_reduce(num_boxes)
num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item()
# Compute all the requested losses
losses = {}
for loss in self.losses:
kwargs = {}
losses.update(self.get_loss(loss, outputs, targets, indices, num_boxes, **kwargs))
# In case of auxiliary losses, we repeat this process with the output of each intermediate layer.
if 'aux_outputs' in outputs:
for i, aux_outputs in enumerate(outputs['aux_outputs']):
indices = self.matcher(aux_outputs, targets)
for loss in self.losses:
if loss == 'masks':
# Intermediate masks losses are too costly to compute, we ignore them.
continue
kwargs = {}
if loss == 'labels':
# Logging is enabled only for the last layer
kwargs['log'] = False
l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_boxes, **kwargs)
l_dict = {k + f'_{i}': v for k, v in l_dict.items()}
losses.update(l_dict)
# this output only exists in the two-stage case
if 'enc_outputs' in outputs:
enc_outputs = outputs['enc_outputs'] # two items, pred_logits and pred_boxes, obtained by passing the encoder output through a Linear layer
bin_targets = copy.deepcopy(targets)
for bt in bin_targets:
# treat everything as the same class so classes need not be considered when assigning ground truths
# even though this label=0 is fake and the class loss is still computed afterwards, no harm is done
# these labels will gradually all be predicted as 0, with no other predictions
# however, the class_embed and bbox_embed used here are the DETR ones; not sure whether that is problematic
bt['labels'] = torch.zeros_like(bt['labels'])
indices = self.matcher(enc_outputs, bin_targets)
for loss in self.losses:
if loss == 'masks':
# Intermediate masks losses are too costly to compute, we ignore them.
continue
kwargs = {}
if loss == 'labels':
# Logging is enabled only for the last layer
kwargs['log'] = False
# compute the loss as usual
l_dict = self.get_loss(loss, enc_outputs, bin_targets, indices, num_boxes, **kwargs)
l_dict = {k + f'_enc': v for k, v in l_dict.items()}
losses.update(l_dict)
return losses
class PostProcess(nn.Module):
""" This module converts the model's output into the format expected by the coco api"""
@torch.no_grad()
def forward(self, outputs, target_sizes):
""" Perform the computation
Parameters:
outputs: raw outputs of the model
target_sizes: tensor of dimension [batch_size x 2] containing the size of each images of the batch
For evaluation, this must be the original image size (before any data augmentation)
For visualization, this should be the image size after data augment, but before padding
"""
# [bs,300,91], [bs,300,4]
out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']
assert len(out_logits) == len(target_sizes)
assert target_sizes.shape[1] == 2
# [bs,300,91]
prob = out_logits.sigmoid()
# [bs,100] [bs,300*91]
# pick the 100 highest-probability entries among the 300 predictions x 91 classes
topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), 100, dim=1)
# [bs,100]
scores = topk_values
# which prediction each entry belongs to [bs,100]
topk_boxes = topk_indexes // out_logits.shape[2]
# predicted class
labels = topk_indexes % out_logits.shape[2]
# center coords + height/width -> top-left/bottom-right [bs,300,4]
boxes = box_ops.box_cxcywh_to_xyxy(out_bbox)
# [bs,100,1] repeat -> [bs,100,4]; the id of the prediction each entry belongs to, repeated 4 times
# gather the corresponding boxes from boxes
# [bs,100,4]
boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1, 1, 4))
# image height and width
# and from relative [0, 1] to absolute [0, height] coordinates
img_h, img_w = target_sizes.unbind(1)
scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)
# scale boxes back to the image size
boxes = boxes * scale_fct[:, None, :]
results = [{'scores': s, 'labels': l, 'boxes': b} for s, l, b in zip(scores, labels, boxes)]
return results
class MLP(nn.Module):
""" Very simple multi-layer perceptron (also called FFN)"""
def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
super().__init__()
self.num_layers = num_layers
h = [hidden_dim] * (num_layers - 1)
self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
def forward(self, x):
for i, layer in enumerate(self.layers):
x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
return x
def build(args):
# 91 classes is the number of stuff categories in COCO
num_classes = 20 if args.dataset_file != 'coco' else 91
if args.dataset_file == "coco_panoptic":
num_classes = 250
device = torch.device(args.device)
backbone = build_backbone(args)
transformer = build_deformable_transformer(args)
model = DeformableDETR(
backbone,
transformer,
num_classes=num_classes,
num_queries=args.num_queries,
num_feature_levels=args.num_feature_levels,
aux_loss=args.aux_loss,
with_box_refine=args.with_box_refine,
two_stage=args.two_stage,
)
if args.masks:
model = DETRsegm(model, freeze_detr=(args.frozen_weights is not None))
matcher = build_matcher(args)
weight_dict = {'loss_ce': args.cls_loss_coef, 'loss_bbox': args.bbox_loss_coef}
weight_dict['loss_giou'] = args.giou_loss_coef
if args.masks:
weight_dict["loss_mask"] = args.mask_loss_coef
weight_dict["loss_dice"] = args.dice_loss_coef
# TODO this is a hack
if args.aux_loss:
aux_weight_dict = {}
for i in range(args.dec_layers - 1):
aux_weight_dict.update({k + f'_{i}': v for k, v in weight_dict.items()})
aux_weight_dict.update({k + f'_enc': v for k, v in weight_dict.items()})
weight_dict.update(aux_weight_dict)
losses = ['labels', 'boxes', 'cardinality']
if args.masks:
losses += ["masks"]
# num_classes, matcher, weight_dict, losses, focal_alpha=0.25
criterion = SetCriterion(num_classes, matcher, weight_dict, losses, focal_alpha=args.focal_alpha)
criterion.to(device)
postprocessors = {'bbox': PostProcess()}
if args.masks:
postprocessors['segm'] = PostProcessSegm()
if args.dataset_file == "coco_panoptic":
is_thing_map = {i: i <= 90 for i in range(201)}
postprocessors["panoptic"] = PostProcessPanoptic(is_thing_map, threshold=0.85)
return model, criterion, postprocessors
```
The calling code in engine.py is as follows:
``` writer.add_graph(model, samples.tensors,use_strict_trace=False)```
### Versions
Code source: https://github.com/xunull/read-Deformable-DETR. Only the code in the two files above was changed; all other files are unchanged.
Steps to run: 1. cd models/ops 2. python setup.py build install 3. cd ../../ 4. python main.py
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 6 |
1,866 | 105,804 |
Conversion Error to ComplexDouble on MPS
|
triaged, module: complex, module: mps
|
### 🐛 Describe the bug
## Description
I am trying to use PyTorch for a generative model for audio phase reconstruction. In the model, I use the inverse Short-time Fourier Transform as a step on MPS, which requires a complex tensor that cannot be created when the device is set to `mps`. It works on the CPU, but then the training time is unfeasible.
I found that there is an issue with MPS and double precision numbers here: https://github.com/pytorch/pytorch/issues/77781 but I was wondering if there is a workaround for this issue.
## Code
```python
device = 'mps'
mag = torch.randn(512,512).to(device)
phase = torch.randn(512,512).to(device)
out = mag*(torch.cos(mag)+1j* torch.sin(phase))
```
## Traceback
```
libc++abi: terminating due to uncaught exception of type c10::TypeError: Trying to convert ComplexDouble to the MPS backend but it does not have support for that dtype.
Exception raised from getMPSScalarType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:91 (most recent call first):
frame #0: at::native::mps::getMPSScalarType(c10::ScalarType) + 180 (0x1122cd940 in libtorch_cpu.dylib)
frame #1: invocation function for block in at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 156 (0x1122e8830 in libtorch_cpu.dylib)
frame #2: invocation function for block in at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, at::native::mps::MPSCachedGraph* () block_pointer) + 216 (0x1122e38c0 in libtorch_cpu.dylib)
frame #3: _dispatch_client_callout + 20 (0x184c80400 in libdispatch.dylib)
frame #4: _dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x184c8f97c in libdispatch.dylib)
frame #5: at::native::mps::MPSGraphCache::CreateCachedGraph(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, at::native::mps::MPSCachedGraph* () block_pointer) + 160 (0x1122d19cc in libtorch_cpu.dylib)
frame #6: at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 2352 (0x1122e7884 in libtorch_cpu.dylib)
frame #7: at::native::structured_mul_out_mps::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) + 128 (0x1122eb3dc in libtorch_cpu.dylib)
frame #8: at::(anonymous namespace)::wrapper_MPS_mul_Tensor(at::Tensor const&, at::Tensor const&) + 140 (0x10fa87e88 in libtorch_cpu.dylib)
frame #9: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&), &torch::autograd::VariableType::(anonymous namespace)::mul_Tensor(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) + 1412 (0x11070c2a0 in libtorch_cpu.dylib)
frame #10: at::_ops::mul_Tensor::call(at::Tensor const&, at::Tensor const&) + 284 (0x10e8c5878 in libtorch_cpu.dylib)
frame #11: torch::autograd::THPVariable_mul(_object*, _object*, _object*) + 396 (0x1027749e8 in libtorch_python.dylib)
frame #12: _object* torch::autograd::TypeError_to_NotImplemented_<&torch::autograd::THPVariable_mul(_object*, _object*, _object*)>(_object*, _object*, _object*) + 12 (0x1026d0a3c in libtorch_python.dylib)
<omitting python frames>
frame #30: start + 2236 (0x184ad7f28 in dyld)
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.1 (main, Apr 29 2023, 23:15:20) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-complex==0.1.2
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.2
[pip3] torchshow==0.5.1
[pip3] torchvision==0.15.2
[conda] Could not collect
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
1,867 | 105,803 |
Changed documentation of DataLoader to match class signature in sour…
|
triaged, open source, Stale
|
Aligned the documentation with the class signature.
Fixes #105799
| 6 |
1,868 | 105,802 |
Errors while trying to finetune compiled transformers model
|
needs reproduction, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
I'm switching to Pytorch 2.0.1 and want to compile the model for training times improvement. I'm trying to compile the model according to the [docs](https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/#hugging-face-models), but it does not work and throws an error.
The code looks this:
```python
model = AutoModelForSequenceClassification.from_pretrained(
model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
).to(device="cuda:0")
model = torch.compile(model, dynamic=True, fullgraph=True)
training_args = TrainingArguments(
output_dir="./temp",
per_device_train_batch_size=128,
per_device_eval_batch_size=128,
learning_rate=5e-5,
num_train_epochs=3,
optim="adamw_torch_fused",
logging_steps=1,
logging_strategy="steps",
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="f1",
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset["train"],
eval_dataset=dataset["test"],
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train()
```
Here is the [link](https://colab.research.google.com/drive/1VnPcPyQ0Jg55Zn0x8pNJq-tQRX2IwIYR?authuser=2#scrollTo=DVrNg-2xmY7v) to colab version ("Pytorch API" section)
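As a workaround sketch (not necessarily the proper fix), compiling only the forward method keeps `model` an `nn.Module`, so `Trainer` utilities such as `get_model_param_count` can still call `model.parameters()`:
```python
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
).to(device="cuda:0")
# Compile just the forward pass instead of wrapping the whole module,
# so the object passed to Trainer remains an nn.Module.
model.forward = torch.compile(model.forward, dynamic=True, fullgraph=True)
```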
### Error logs
```bash
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-25-4dd917c6010c>](https://localhost:8080/#) in <cell line: 31>()
29 )
30
---> 31 trainer.train()
2 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer_pt_utils.py](https://localhost:8080/#) in get_model_param_count(model, trainable_only)
1051 return p.numel()
1052
-> 1053 return sum(numel(p) for p in model.parameters() if not trainable_only or p.requires_grad)
1054
1055
AttributeError: 'function' object has no attribute 'parameters'
```
### Minified repro
_No response_
### Versions
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.35
Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.109+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 45 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ipiszy
| 11 |
1,869 | 105,801 |
[MPS] aten::erfinv bug fix: add storage offset buffers to handle slicing
|
triaged, open source, release notes: mps, ciflow/mps
|
A bug fix of a recently merged PR per comment: https://github.com/pytorch/pytorch/pull/101507#discussion_r1271393706
The following test would fail without this bug fix:
```
import torch
def test_erfinv():
for device in ['cpu', 'mps']:
x = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5], device=device)
y = x[2:].erfinv()
x2 = torch.tensor([0.3, 0.4, 0.5], device=device)
y2 = x2.erfinv()
print(y)
print(y2)
torch.testing.assert_close(y, y2)
print(f"{device} passes.")
test_erfinv()
```
| 3 |
1,870 | 105,799 |
inconsistent signature for dataloader in docs/source/data.rst
|
module: dataloader, triaged
|
### 📚 The doc issue
docs/source/data.rst indicates the signature of the dataloader with `prefetch_factor=2` but the class uses `None` as default. This is confusing and the signature should be updated.
### Suggest a potential alternative/fix
Replace the current signature in the docs
```
DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
batch_sampler=None, num_workers=0, collate_fn=None,
pin_memory=False, drop_last=False, timeout=0,
worker_init_fn=None, *, prefetch_factor=2,
persistent_workers=False)
```
with the current signature in the code
```
DataLoader(dataset, batch_size=1, shuffle=None, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=None, persistent_workers=False, pin_memory_device='')
```
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 2 |
1,871 | 105,795 |
Enable the RUF013 rule in ruff
|
open source, release notes: fx, module: inductor, module: dynamo, ciflow/inductor, suppress-bc-linter
|
As suggested by @Skylion007 in https://github.com/pytorch/pytorch/pull/105791#issuecomment-1646644405
This rule enforces `Optional[...]` as the type for parameters having a default value of None.
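For illustration, a minimal before/after of the pattern RUF013 flags (the function names here are made up):
```python
from typing import Optional

# Flagged by RUF013: the default is None but the annotation is not Optional
def load(path: str = None): ...

# Preferred form enforced by the rule
def load_fixed(path: Optional[str] = None): ...
```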
Apply all RUF013 patches from ruff. Apply all other ruff patches in the affected files - otherwise lintrunner fails the build.
Fixes parts of #105230
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 12 |
1,872 | 105,792 |
[BUG] Fix grad_ready_order_indices' error type in DDP
|
triaged, open source, Stale, release notes: distributed (c10d)
|
The type of grad_ready_order_indices_ should be std::vector<size_t> instead of std::vector<int>.
In https://github.com/pytorch/pytorch/blob/afd955f3de4050e9617245fbd40769859b10515b/torch/csrc/distributed/c10d/reducer.hpp#L527, the type of grad_ready_order_indices_ is std::vector<int>.
In https://github.com/pytorch/pytorch/blob/afd955f3de4050e9617245fbd40769859b10515b/torch/csrc/distributed/c10d/reducer.cpp#L649, the index is directly inserted into grad_ready_order_indices_, and the type of index is size_t.
A type mismatch occurred during this process, and an overflow may occur.
| 5 |
1,873 | 105,790 |
torch.sparse.mm() with reduce operator for GPU support and COO matrices
|
module: sparse, feature, triaged, module: reductions
|
### 🚀 The feature, motivation and pitch
I am working on a ML program and need to use sparse matrix multiplication with "reduce" operator.
Here is an example:
```
a = torch.sparse_coo_tensor(torch.tensor([[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]), [1., 2., 3., 4., 5., 6.], size=[3, 2])
b = torch.tensor([[7, 9], [8, 10]], dtype=torch.float, requires_grad=True)
# a and b are the input. a is a sparse-COO matrix, and b is a typical dense matrix with grad
a_csr = a.to_sparse_csr()
c = torch.sparse.mm(a_csr, b, "max")
print(c) # c is what I want
```
Currently the code above is the only way to do this for my work, but it runs quite slowly because GPU is not supported, and the COO-to-CSR conversion is redundant.
What I hope to be possible:
1. enable GPU support to the reduce operator, and
2. enable COO matrix with the reduce operator.
Thanks a lot for your help!
### Alternatives
_No response_
### Additional context
According to the docs [here](https://pytorch.org/docs/stable/generated/torch.sparse.mm.html#torch.sparse.mm), currently `reduce` is implemented only for CSR storage format on CPU device.
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 0 |
1,874 | 105,782 |
DDP error: [c10d] The client socket has timed out after 900s while trying to connect to (XX.XX.XX.XX, 8514).
|
oncall: distributed
|
### 🐛 Describe the bug
code:
python -m torch.distributed.launch --use_env --nproc_per_node=${GPUS_PER_NODE} --nnodes=${WORKER_CNT} --node_rank=${RANK} \
--master_addr="172.22.48.199" --master_port=${MASTER_PORT} cn_clip/training/main.py \
error:
[W socket.cpp:601] [c10d] The IPv6 network addresses of (XX.XX.XX.XX, 8514) cannot be retrieved (gai error: -2 - Name or service not known).
[E socket.cpp:860] [c10d] The client socket has timed out after 900s while trying to connect to (XX.XX.XX.XX, 8514).
Traceback (most recent call last):
File "/mnt/e/wsldata/conda/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/mnt/e/wsldata/conda/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/launch.py", line 196, in <module>
main()
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 241, in launch_agent
result = agent.run()
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 723, in run
result = self._invoke_run(role)
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 858, in _invoke_run
self._initialize_workers(self._worker_group)
File "/mnt/e/wsldata/conda/envs/cnclip/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.35
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 516.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 1
Stepping: 4
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 896 KiB (28 instances)
L1i cache: 896 KiB (28 instances)
L2 cache: 28 MiB (28 instances)
L3 cache: 19.3 MiB (1 instance)
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.0+cu117
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.0+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 0 |
1,875 | 105,781 |
[WIP] Fix Prims as_strided_scatter
|
Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
Took over from https://github.com/pytorch/pytorch/pull/98483 and try to reland.
| 2 |
1,876 | 105,779 |
Mode to warm up PT2 with a regular eager mode execution
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
When we originally were designing the warmup policy of PT2, we had a lot of discussion about whether or not we should do background compilation vs wait for compilation on the very first round. We never ended up doing background compilation because it is a bit of work for a Python compilation stack (GIL contention means that you'd have to do it out of process... out of process is complicated...) Another problem was that PT2 could potentially apply memory saving optimizations, that might end up being mandatory: your model would OOM if not run under PT2. (In practice, this is rarely the case today, and even when it is true people have a preference for keeping their eager mode runnable.)
I think a more limited form of warmup would be useful for PT2, and possibly even should be the default setting. Instead of immediately attempting to compile, we instead only do a Dynamo level analysis and then bail to the regular CPython interpreter. Only when the frame is deemed sufficiently hot do we actually perform compilation.
This "wait and see" strategy has several benefits:
1. We can record how long a frame takes in eager mode. If the walltime of the frame is very low, there is not much benefit to compiling it, and we can choose to skip it (even if, in principle, we could compile it.) If we do end up compiling it, we can compare how long eager took with compile without user intervention, and automatically report how much speed up PT2 gave. This is a great benefit in a production setting, since we can monitor the benefit of PT2 and observe the effect of optimizations in aggregate.
2. If a frame is dynamic (sizes/ints vary from run to run), we can avoid compiling an overly specialized frame; by waiting, we are more likely to predict which sizes truly are dynamic and properly compile them. We can also partially solve the problem described in https://github.com/pytorch/pytorch/issues/105634#issuecomment-1645624378 by using the *largest* observed size for tuning (in this case, the user may need to assist in saying when warmup should end.)
3. If a frame will eventually be skipped, we can detect if this would happen, by generating "ghost" guards which we check if they match. If we see that we never manage a cache hit, we can give up before we've actually spent time doing expensive inductor lowering compilation. (This is also incomplete, as we won't generate guards from AOTAutograd/Inductor, but my guess is these are not the bulk of the recompilation reasons.)
(1) by itself is compelling enough that I would add this as an option. But I am really tantalized by the prospect of automatic telemetry about how much speedup PT2 is giving.
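To make the idea concrete, here is a purely user-level sketch of the "wait and see" policy (the threshold and the walltime heuristic are invented for illustration; this is not the proposed Dynamo-internal mechanism):
```python
import time
import torch

def compile_when_hot(fn, threshold=3, min_avg_seconds=1e-3):
    """Run fn eagerly until it has been called `threshold` times, then switch to
    a torch.compile'd version, but only if the eager walltime suggests it is worth it."""
    state = {"calls": 0, "eager_time": 0.0, "compiled": None}

    def wrapper(*args, **kwargs):
        if state["compiled"] is not None:
            return state["compiled"](*args, **kwargs)
        start = time.perf_counter()
        out = fn(*args, **kwargs)
        state["eager_time"] += time.perf_counter() - start
        state["calls"] += 1
        if state["calls"] >= threshold:
            if state["eager_time"] / state["calls"] > min_avg_seconds:
                state["compiled"] = torch.compile(fn)
            else:
                state["compiled"] = fn  # too cheap to be worth compiling
        return out

    return wrapper
```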
cc @msaroufim @wconstab @bdhirsh @anijain2305 @kumpera whose questions inspired me to think more carefully about this
### Versions
main
| 3 |
1,877 | 105,778 |
https://pytorch.org/docs/stable/backends.html does not describe torch.backends.cpu
|
module: docs, module: cpu, triaged, actionable
|
### 📚 The doc issue
I was looking for the possible return values from torch.backends.cpu.get_cpu_capability() but the backends doc page did not list cpu, let alone get_cpu_capabilities
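As a stopgap, the value can at least be queried at runtime; note the example return values in the comment are my assumption from the ATen sources, not something the docs currently confirm:
```python
import torch

# Prints the vectorization level the current build selected,
# e.g. "DEFAULT", "AVX2" or "AVX512" on x86 builds (assumed value set).
print(torch.backends.cpu.get_cpu_capability())
```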
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
1,878 | 105,774 |
Add automated minifier script for Dynamo benchmarks
|
triaged, open source, Stale, topic: not user facing, module: dynamo, ciflow/inductor
|
### Description
This tool runs the minifier across a list of models from the Hugging Face, timm, or TorchBench suites. It leverages the existing TorchDynamo benchmark runner scripts and allows for end-to-end functionality.
### Purpose
This script will be useful for any developer that wants to run the existing Dynamo benchmarks (or even a single model) on a backend where support may be limited. It identifies existing model coverage for a backend, and also quickly identifies the specific operators that are failing for each failing model. Additionally, the auto-minifier makes it easy to reproduce failures across a set of models, and is therefore a useful addition to model pipelines. The tool covers the whole cycle, as outlined below, allowing modifications to be made more easily than if it were integrated with the existing benchmarking scripts.
1) It first runs model training/inference for all models. This is to identify the models that are failing compilation or evaluation, and ensure we don't run the minifier on models that pass training and eval loops.
2) It then classifies model failures into either compilation or accuracy errors, with the purpose of running the minifier with the proper granularity. If the model contains a compilation error, `TORCHDYNAMO_REPRO_LEVEL` is set to 1, while accuracy errors set this variable to 4. Possible classification errors are caught for models by comparing to the benchmark results csv file.
3) It then runs the minifier on each failed model. If a `repro.py` script is produced, it is renamed to include the model name, and is saved to the output directory: e.g. `artifact_dir/gpt2/gpt2_repro.py`
4) Optionally, the FX IR can be traced for the minified model, and will be similarly saved to an output file. Similarly, all HLO files can be dumped for the minified model. All logs are saved to the output directory by default, and a csv file is created with the benchmark results.
### Functionality
- For Dynamo benchmarks, runs model training or inference across a given model suite
- Compatible with existing benchmark scripts, but easier to modify given it's a standalone script
- Classify failures into "compiler" and "accuracy", ensuring the correct granularity is used when running the minifier
- Produced minified graph (repro.py script) for models that failed accuracy check, and also ones that failed compilation check
- Dump minified FX IR to file for each model
- Dump HLO files to file for each model
- Support for GPUs on PyTorch 2.0 implemented and tested
- Functional for Hugging Face, TIMM, and TorchBench model suites
- Works with existing model runner scripts, and uses the same model list
- Basic run information saved to a csv file
### How to run
1) Create a model list. The existing huggingface, timm, and torchbench model lists also work. This should be a text file, with the model name and the batch size specified in a comma-separated list.
2) Specify whether you're running training or inference (eval loop) using either `--training` or `--inference` flag.
3) Specify the device, backend, precision, output directory, and whether to run in precision or accuracy mode. This should be the same as running the existing benchmark scripts.
4) (Optional) Capture FX IR of the minified graphs by passing the `--print-fx` flag
5) (Optional) Dump HLO files of the minified graphs by passing the `--gen_hlo_files` flag
6) Run the script. An example command is provided below.
`python auto-minifier.py --model-suite huggingface --model-list huggingface_models_list.txt --device gpu --mode inference --output-dir hf_artifacts --accuracy --print-fx`
### Testing
- Tested across NVIDIA A100, A10G, and T4 GPUs for Hugging Face, timm, and TorchBench model suites
- Tested across multiple compiler backends, including eager, XLA, and inductor
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 6 |
1,879 | 105,768 |
torch.compile uses more memory when using less parameters
|
module: memory usage, module: convolution, triaged, oncall: pt2
|
### 🐛 Describe the bug
When running a residual block, I noticed that compiled networks that have a 1x1 skip convolution use less memory when applied to all layers rather than just layers where the number of channels change. Up to a certain input size, the program OOMs only when 1x1 convs happen in the layers with changing channel counts. I would expect the opposite because applying 1x1 convs to all layers would add parameters thus increasing memory usage.
Also, this is not the case if there is no normalization and activation happens after the convolution. Is this just a quirk of the compile function? I know that pre-activation is not the norm but it seems to have produced better results here: https://towardsdatascience.com/resnet-with-identity-mapping-over-1000-layers-reached-image-classification-bb50a42af03e
Code to reproduce:
```python
import torch.nn.functional as F
import torch.nn as nn
import torch._dynamo
import torch, gc
class ResBlock(nn.Module):
def __init__(self, in_c: int, nc: int, conv_shortcut: bool = False):
'''
in_c: number of input channels
nc: number of output channels
'''
super().__init__()
self.norm = nn.GroupNorm(8, in_c)
self.conv = nn.Conv2d(in_c, nc, 3, padding=1)
self.skip = nn.Conv2d(in_c, nc, 1) if (in_c != nc or conv_shortcut) else None
def forward(self, x):
h = self.conv(F.relu(self.norm(x)))
#h = F.relu(self.conv(self.norm(x))) # bug also happens in this case
#h = F.relu(self.norm(self.conv(x))) # bug also happens in this case
#h = self.conv(F.relu(x)) # bug also happens in this case
#h = F.relu(self.conv(x)) # bug does NOT happens in this case
x = x if self.skip is None else self.skip(x)
return h + x
device = 'cuda'
for ENABLE_CONV_SHORTCUT in [True, False]:
print('conv shortcut:', ENABLE_CONV_SHORTCUT)
with torch.no_grad():
with torch.amp.autocast('cuda'):
model = nn.Sequential(
ResBlock(256, 256, conv_shortcut=ENABLE_CONV_SHORTCUT),
ResBlock(256, 512, conv_shortcut=ENABLE_CONV_SHORTCUT),
ResBlock(512, 512, conv_shortcut=ENABLE_CONV_SHORTCUT),
ResBlock(512, 1024, conv_shortcut=ENABLE_CONV_SHORTCUT),
ResBlock(1024, 1024, conv_shortcut=ENABLE_CONV_SHORTCUT),
ResBlock(1024, 2048, conv_shortcut=ENABLE_CONV_SHORTCUT),
ResBlock(2048, 2048, conv_shortcut=ENABLE_CONV_SHORTCUT),
).to(device)
model = torch.compile(model)
print('N params:', sum([p.numel() for p in model.parameters()]))
x = torch.zeros((8, 256, 64, 64)).to(device)
for _ in range(2):
__ = model(x)
torch.cuda.synchronize()
print('max mem alloc (GiB):', torch.cuda.max_memory_allocated() / (1024**3))
print('max mem reser (GiB):', torch.cuda.max_memory_reserved() / (1024**3))
# clear gpu mem
del model, x, __
torch._C._cuda_clearCublasWorkspaces()
torch._dynamo.reset()
gc.collect()
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()
print()
```
Output (OS: Ubuntu 22.04, CPU: i7-12700H, GPU: RTX 3060 Mobile Max-Q):
```
1x1 conv shortcut in all layers: True
N params: 83245568
max mem alloc (GiB): 1.231997013092041
max mem reser (GiB): 1.888671875
1x1 conv shortcut in all layers: False
N params: 77671168
max mem alloc (GiB): 1.3362345695495605
max mem reser (GiB): 2.07421875
```
### Error logs
_No response_
### Minified repro
_No response_
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:20:04) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 4700.0000
CPU min MHz: 400.0000
BogoMIPS: 5376.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] libmagma 2.7.1 hc72dce7_3 conda-forge
[conda] libmagma_sparse 2.7.1 hc72dce7_4 conda-forge
[conda] magma 2.7.1 h2c23e93_0
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.25.0 py310h5f9d8c6_0
[conda] numpy-base 1.25.0 py310hb5e798b_0
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu117 pytorch
[conda] triton 2.0.0 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,880 | 105,765 |
I just need to ssh into a CI machine
|
ciflow/trunk, release notes: releng, topic: not user facing, test-config/default
| null | 1 |
1,881 | 105,758 |
Improve error messaging on EmbeddingBag.cu
|
fb-exported, Stale, ciflow/trunk, release notes: cuda, topic: improvements
|
Test Plan: Sandcastle
Differential Revision: D47679132
| 7 |
1,882 | 105,751 |
Revisit checkpoint naming mismatch with torch name (and ONNX initializer name as a consequence)
|
module: onnx, triaged
|
Addresses https://github.com/pytorch/pytorch/blob/main/torch/onnx/_internal/fx/serialization.py#L138-L152 within `save_model_with_external_data` utility.
The Hugging Face model, for example, may have a checkpoint with keys such as `transformer.h.0.attn.bias` while the FX model has the name `h.0.attn.bias`.
Track why this discrepancy happens and fix it, if applicable
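To make the mismatch concrete, a hypothetical key-remapping helper (the prefix and the function name are illustrative only, not part of the exporter API):
```python
def strip_prefix(state_dict: dict, prefix: str = "transformer.") -> dict:
    # Drop a module prefix so checkpoint keys such as "transformer.h.0.attn.bias"
    # line up with FX/ONNX initializer names such as "h.0.attn.bias".
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }
```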
| 1 |
1,883 | 105,742 |
`torch.unique()` messes around with order even if `sorted=False`
|
module: docs, triaged, actionable, module: sorting and selection
|
### 🐛 Describe the bug
Hi,
When I use `torch.unique` without specifying `dim` (in which case it is known that PyTorch will sort the output, cf. [here](https://pytorch.org/docs/stable/generated/torch.unique.html)) and when specifying `sorted=False`, I find that PyTorch still messes around with the order.
**Code for reproducing:**
```python
import torch
unique_classes, counts = torch.unique(
input=torch.tensor([0, 0, 2, 1, 1, 1]).long(),
sorted=False,
return_counts=True
)
print(unique_classes, counts)
```
yields the output
```python
(tensor([1, 2, 0]), tensor([3, 1, 2]))
```
**Expected behavior:**
Since the element `0` appears before `1` or `2` in the input tensor, I would expect `unique_classes` to be
```python
tensor([0, 2, 1])
```
instead of
```python
tensor([1, 2, 0])
```
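In the meantime, here is a small workaround sketch that recovers the first-occurrence order from the sorted output (O(n*k), illustration only):
```python
import torch

x = torch.tensor([0, 0, 2, 1, 1, 1]).long()
uniq, counts = torch.unique(x, return_counts=True)
# Index of the first occurrence of each (sorted) unique value in the input.
first_idx = torch.stack([(x == u).nonzero()[0, 0] for u in uniq])
order = first_idx.argsort()
print(uniq[order], counts[order])  # tensor([0, 2, 1]) tensor([2, 1, 3])
```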
Best,
Imahn
### Versions
conda 23.3.1
PyTorch 2.0.1 (GPU version)
cc @svekars @carljparker @ezyang @gchanan @zou3519
| 7 |
1,884 | 105,734 |
[FAILING] Make guard after freeze a hard error
|
Stale, release notes: fx, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105734
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| 3 |
1,885 | 105,731 |
Pypi is missing dependencies
|
oncall: binaries
|
### 🐛 Describe the bug
Asking Pypi.org for a list of dependencies for torch does not provide a complete list of dependencies. See the list from pypi's json:
```
~$ curl -s 'https://pypi.org/pypi/torch/json' | jq '.info.requires_dist|.[]'
"filelock"
"typing-extensions"
"sympy"
"networkx"
"jinja2"
"opt-einsum (>=3.3) ; extra == 'opt-einsum'"
```
And now compare it with the actual list of dependencies in the wheel file's metadata:
```
~$ grep -i requires-dist torch-2.0.1-cp39-cp39-manylinux1_x86_64/torch-2.0.1.dist-info/METADATA
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: nvidia-cuda-nvrtc-cu11 (==11.7.99) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-runtime-cu11 (==11.7.99) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-cupti-cu11 (==11.7.101) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cudnn-cu11 (==8.5.0.96) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cublas-cu11 (==11.10.3.66) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cufft-cu11 (==10.9.0.58) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-curand-cu11 (==10.2.10.91) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusolver-cu11 (==11.4.0.1) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusparse-cu11 (==11.7.4.91) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nccl-cu11 (==2.14.3) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvtx-cu11 (==11.7.91) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: triton (==2.0.0) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```
I don't know why all the nvidia-* packages, as well as triton, are missing from the requires_dist field in the JSON of this package. Should they be there? This is a serious problem for people who are trying to build a consistent private Python package mirror on an isolated network. We need to be able to programmatically determine all the dependencies of a package so that when we mirror the package, we also mirror its requirements. If we cannot do this accurately, installing this package will fail on machines that cannot access pypi.org and rely on the mirror I'm building.
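For what it's worth, a minimal sketch (one possible mirroring helper, not an official tool; the wheel file name is the one inspected above and is assumed to be downloaded locally) that reads the authoritative `Requires-Dist` entries from the wheel's METADATA instead of trusting the PyPI JSON:
```python
import zipfile
from email.parser import Parser

def wheel_requirements(wheel_path):
    """Return the Requires-Dist entries declared in a wheel's METADATA."""
    with zipfile.ZipFile(wheel_path) as wheel:
        # every wheel contains exactly one *.dist-info/METADATA member
        metadata_name = next(
            name for name in wheel.namelist()
            if name.endswith(".dist-info/METADATA")
        )
        metadata = Parser().parsestr(wheel.read(metadata_name).decode("utf-8"))
    return metadata.get_all("Requires-Dist") or []

for req in wheel_requirements("torch-2.0.1-cp39-cp39-manylinux1_x86_64.whl"):
    print(req)
```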
### Versions
This is not relevant to this bug, as it does not rely on any specific python environment.
cc @seemethere @malfet
| 3 |
1,886 | 105,729 |
[FSDP] using CPUOffload causes training to hang
|
triaged, module: fsdp
|
### 🐛 Describe the bug
I tried to use FSDP with CPU offload to run Vicuna training.
However, after I run this command, the code appears to hang for 3-4 hours and prints nothing; the last warning before it stalls is
```
Please use torch.distributed.reduce_scatter_tensor instead.
"torch.distributed._reduce_scatter_base is a private function and will "
```
### Versions
```
torchrun --nproc_per_node=8 --master_port=20002 fastchat/train/train_mem.py \
--model_name_or_path /home/ma-user/work/jianqiao/alpca-lora/weights \
--data_path data/sharegpt_clean_gpt4_openchat_v2_maxlen1024.json \
--bf16 False \
--fp16 True \
--output_dir output_vicuna \
--num_train_epochs 3 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 1200 \
--save_total_limit 10 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard offload auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 False \
--model_max_length 1024 \
--gradient_checkpointing True \
--lazy_preprocess True \
--data_key items
```
Could anybody help to solve this issue?
Thanks.
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 1 |
1,887 | 105,728 |
Compile error PyTorch 2.0.1 / GCC 13.1.0
|
module: build, triaged
|
### 🐛 Describe the bug
I came across a strange error when compiling PyTorch (only C++ API, Python disabled) on A64FX using GCC 13.1.0.
When compiling the file `aten/src/ATen/native/mkldnn/Pooling.cpp`, the build turns on `-Werror` without also turning on `-Wno-dangling-reference`, so that
```
aten/src/ATen/core/IListRef_inl.h:171:17: error: possibly dangling reference to a temporary [-Werror=dangling-reference]
171 | const auto& ivalue = (*it).get();
```
This seems to be triggered by line 534 in `caffe2/CMakeLists.txt`:
```
# Required workaround for LLVM 9 includes.
if(NOT MSVC)
set_source_files_properties(${TORCH_SRC_DIR}/csrc/jit/tensorexpr/llvm_jit.cpp PROPERTIES COMPILE_FLAGS -Wno-noexcept-type)
# Force -Werror on several files
set_source_files_properties(${CMAKE_CURRENT_LIST_DIR}/../aten/src/ATen/native/mkldnn/Pooling.cpp PROPERTIES COMPILE_FLAGS "-Werror")
endif()
```
Is this special treatment of the file `Pooling.cpp` really necessary? If so, one should maybe add `-Wno-dangling-reference`.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.4 (Green Obsidian) (aarch64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-15)
Clang version: 12.0.1 (Red Hat 12.0.1-4.module+el8.5.0+715+58f51d49)
CMake version: version 3.20.2
Libc version: glibc-2.17
Python version: 3.6.8 (default, Nov 9 2021, 14:47:09) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] (64-bit runtime)
Python platform: Linux-4.18.0-305.25.1.el8_4.aarch64-aarch64-with-centos-8.4-Green_Obsidian
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 1
Core(s) per cluster: 12
Socket(s): -
Cluster(s): 4
NUMA node(s): 4
Vendor ID: FUJITSU
Model: 0
Model name: A64FX
Stepping: 0x1
BogoMIPS: 200.00
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
NUMA node2 CPU(s): 24-35
NUMA node3 CPU(s): 36-47
Flags: fp asimd evtstrm sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm fcma dcpop sve
Versions of relevant libraries:
[pip3] numpy==1.14.3
[conda] Could not collect
cc @malfet @seemethere @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 6 |
1,888 | 105,726 |
There is a big precision error between A100 and 3090 when using torch.matmul with fp16 precision
|
module: cuda, triaged, module: half
|
### 🐛 Describe the bug
There are two inputs:
attention_scores_clamp_softmax.npy and attention_values.npy
Test data:
They were copied with different names, and their md5 values are the same:
2c93eb5671266aeb561724e0d487e2e5 38_attention_scores_clamp_softmax.npy
6316a816fdecde5084160cb9ccd10889 38_attention_values.npy
2c93eb5671266aeb561724e0d487e2e5 41_attention_scores_clamp_softmax.npy
6316a816fdecde5084160cb9ccd10889 41_attention_values.npy
2c93eb5671266aeb561724e0d487e2e5 42_attention_scores_clamp_softmax.npy
6316a816fdecde5084160cb9ccd10889 42_attention_values.npy
Hardware & Software information:
38 Server is a Server with 3090 NVIDIA Card
Driver: 530.30.02
CUDA: 12.1
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
41 Server is a server with an A100 NVIDIA card
Driver: 530.30.02
CUDA: 12.1
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
42 Server is a server with an A100 NVIDIA card
Driver: 525.125.06
CUDA: 12.0
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
1st, run generate_output_38.py on 3090 Server
2nd, run generate_output_41.py on A100 Server
3rd, run generate_output_42.py on A100 Server
4th, copy all files to 38 Server and 41 Server
5th, run cmp_output_fp16.py on 3090 Server
6th, run cmp_output_fp16.py on A100 Server
7th, run cmp_output_fp16.py on A100 Server
Then get results below:
On 41 Server(A100):
$ python cmp_output_fp16.py
matmul_out_38 == matmul_out_42 False
matmul_out_41 == matmul_out_42 True
allclose torch output between 38 and 42 False
allclose torch output between 38 and 41 False
allclose torch output between 41 and 42 True
On 38 Server(3090):
$ python cmp_output_fp16.py
matmul_out_38 == matmul_out_42 False
matmul_out_41 == matmul_out_42 True
allclose torch output between 38 and 42 False
allclose torch output between 38 and 41 False
allclose torch output between 41 and 42 True
On 42 Server(A100):
$ python cmp_output_fp16.py
matmul_out_38 == matmul_out_42 False
matmul_out_41 == matmul_out_42 True
allclose torch output between 38 and 42 False
allclose torch output between 38 and 41 False
allclose torch output between 41 and 42 True
It seems there is a big gap between the values generated on the 3090 and the A100.
Then, checking the md5 value of each output shows that the tensors from 41 and 42 (A100) are the same:
5d77b2f48b27e099b10e5bdee5f1a38b 38_output.npy
4654e5fe4740dc4480953585379ea638 41_output.npy
4654e5fe4740dc4480953585379ea638 42_output.npy
Changing the atol and rtol gives the following result (test_cmp_output_fp16_threshold.py):
with np.allclose(matmul_out_38, matmul_out_42, rtol=1e-05, atol=1e-03), the 3090 and A100 outputs compare as close (np.allclose returns True).
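A possible next diagnostic step (a sketch, not part of the original report; it assumes the two input `.npy` files and the three output files are available locally) is to compare each device's fp16 result against an fp64 CPU reference instead of comparing the GPUs against each other:
```python
import numpy as np

x = np.load('38_attention_scores_clamp_softmax.npy').astype(np.float64)
y = np.load('38_attention_values.npy').astype(np.float64)
reference = np.matmul(x, y)  # fp64 CPU reference

for name in ('38_output.npy', '41_output.npy', '42_output.npy'):
    out = np.load(name).astype(np.float64)
    err = np.max(np.abs(out - reference))
    print(f"{name}: max abs error vs fp64 reference = {err:.3e}")
```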
Attachment 1 generate_output_38.py
```
import torch
import numpy as np
x = torch.from_numpy(np.load('38_attention_scores_clamp_softmax.npy')).cuda().half()
y = torch.from_numpy(np.load('38_attention_values.npy')).cuda().half()
class MatMulNetwork(torch.nn.Module):
def __init__(self):
super(MatMulNetwork, self).__init__()
def forward(self, x, y):
return torch.matmul(x, y)
net = MatMulNetwork()
net.cuda()
net.half()
net.eval()
output = net(x, y)
np.save("38_output.npy", output.cpu().numpy())
print(output)
```
Attachment 2 cmp_output_fp16.py
```
import numpy as np
import torch
torch_matmul_38 = "38_output.npy"
torch_matmul_42 = "42_output.npy"
torch_matmul_41 = "41_output.npy"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
matmul_out_38 = np.load(torch_matmul_38)
matmul_out_42 = np.load(torch_matmul_42)
matmul_out_41 = np.load(torch_matmul_41)
print(f"matmul_out_38 == matmul_out_42 {np.array_equal(matmul_out_38, matmul_out_42)}")
print(f"matmul_out_41 == matmul_out_42 {np.array_equal(matmul_out_41, matmul_out_42)}")
print(f"allclose torch output between 38 and 42 {np.allclose(matmul_out_38, matmul_out_42, rtol=1e-05, atol=1e-08)}")
print(f"allclose torch output between 38 and 41 {np.allclose(matmul_out_38, matmul_out_41, rtol=1e-05, atol=1e-08)}")
print(f"allclose torch output between 41 and 42 {np.allclose(matmul_out_41, matmul_out_42, rtol=1e-05, atol=1e-08)}")
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 800.000
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342 defaults
[conda] mkl-service 2.4.0 py310h5eee18b_1 defaults
[conda] mkl_fft 1.3.6 py310h1128e8f_1 defaults
[conda] mkl_random 1.2.2 py310h1128e8f_1 defaults
[conda] numpy 1.25.0 py310h5f9d8c6_0 defaults
[conda] numpy-base 1.25.0 py310hb5e798b_0 defaults
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu117 pytorch
cc @ptrblck
| 8 |
1,889 | 105,725 |
[pytorch] replace __FILE__ with __FILE_NAME__ for exceptions
|
fb-exported, Stale, topic: not user facing
|
Summary: Replace uses of the `__FILE__` macro with `__FILE_NAME__` for PyTorch exceptions. This prevents embedding build-system paths in the build output, which would otherwise cause the output to vary depending on configuration.
Test Plan:
Build and inspect assert strings before change:
```
$ buck2 build --show-output //fbobjc/Libraries/FBPyTorchCore:torch_core_msg_opsApple
$ strings buck-out/v2/gen/fbsource/c969ccb4ab4ae099/fbobjc/Libraries/FBPyTorchCore/__torch_core_msg_opsApple__/libtorch_core_msg_opsApple.pic.a | grep FAIL | grep buck | head -n 3
*current_device == options.device() INTERNAL ASSERT FAILED at "buck-out/v2/gen/fbsource/c969ccb4ab4ae099/fbobjc/Libraries/FBPyTorchCore/__torch_core_msg_ops_aten__/out/RegisterCompositeExplicitAutogradNonFunctional.cpp":99, please report a bug to PyTorch.
*current_device == options.device() INTERNAL ASSERT FAILED at "buck-out/v2/gen/fbsource/c969ccb4ab4ae099/fbobjc/Libraries/FBPyTorchCore/__torch_core_msg_ops_aten__/out/RegisterCompositeExplicitAutogradNonFunctional.cpp":118, please report a bug to PyTorch.
*current_device == options.device() INTERNAL ASSERT FAILED at "buck-out/v2/gen/fbsource/c969ccb4ab4ae099/fbobjc/Libraries/FBPyTorchCore/__torch_core_msg_ops_aten__/out/RegisterCompositeExplicitAutogradNonFunctional.cpp":151, please report a bug to PyTorch.
```
After change:
```
$ buck2 build --show-output //fbobjc/Libraries/FBPyTorchCore:torch_core_msg_opsApple
$ strings buck-out/v2/gen/fbsource/c969ccb4ab4ae099/fbobjc/Libraries/FBPyTorchCore/__torch_core_msg_opsApple__/libtorch_core_msg_opsApple.pic.a | grep FAIL | head -n 10
isBool() INTERNAL ASSERT FAILED at "ivalue.h":666, please report a bug to PyTorch.
new_refcount != 1 INTERNAL ASSERT FAILED at "intrusive_ptr.h":268, please report a bug to PyTorch.
false INTERNAL ASSERT FAILED at "ivalue.h":1239, please report a bug to PyTorch.
owning_ptr == NullType::singleton() || owning_ptr->refcount_.load() == 0 || owning_ptr->weakcount_.load() INTERNAL ASSERT FAILED at "intrusive_ptr.h":474, please report a bug to PyTorch.
target_->refcount_ == 0 && target_->weakcount_ == 0 INTERNAL ASSERT FAILED at "intrusive_ptr.h":315, please report a bug to PyTorch.
refcount_.load() == 0 || refcount_.load() >= 2147483647 INTERNAL ASSERT FAILED at "intrusive_ptr.h":125, please report a bug to PyTorch.
weakcount_.load() == 1 || weakcount_.load() == 0 || weakcount_.load() == 2147483647 - 1 || weakcount_.load() == 2147483647 INTERNAL ASSERT FAILED at "intrusive_ptr.h":131, please report a bug to PyTorch.
isDouble() INTERNAL ASSERT FAILED at "ivalue.h":542, please report a bug to PyTorch.
isTensorList() INTERNAL ASSERT FAILED at "ivalue_inl.h":2019, please report a bug to PyTorch.
isTensorList() INTERNAL ASSERT FAILED at "ivalue_inl.h":2023, please report a bug to PyTorch.
$ strings buck-out/v2/gen/fbsource/c969ccb4ab4ae099/fbobjc/Libraries/FBPyTorchCore/__torch_core_msg_opsApple__/libtorch_core_msg_opsApple.pic.a | grep FAIL | grep buck
```
Reviewed By: milend
Differential Revision: D47637795
| 3 |
1,890 | 105,723 |
Parameter ... has been marked as ready twice
|
oncall: distributed
|
### 🐛 Describe the bug
I have the following code in a file named `debug.py`
```python
import os
import torch
import argparse
class CustomModel(torch.nn.Module):
def __init__(self):
super(CustomModel, self).__init__()
self.w1 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True)
self.w2 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True)
self.w3 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True)
def forward(self, x, order):
if order == 0:
return self.w1 * x
elif order == 1:
return self.w2 * x
else:
return self.w3 * x
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)
args = parser.parse_args()
local_rank = args.local_rank
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
torch.distributed.init_process_group(backend="nccl")
model = CustomModel()
model.to(device)
# setup distributed
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank],
output_device=local_rank,
find_unused_parameters=True)
x = torch.tensor(1.)
y1 = model(x, 0)
y2 = model(x, 1)
y3 = model(x, 2)
y1.backward()
if __name__ == "__main__":
main()
```
I run this code with the following command
```shell
TORCH_DISTRIBUTED_DEBUG=DETAIL python -m torch.distributed.launch --nproc_per_node=3 debug.py
```
Then, I got the error
<pre>
Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 0 with name .w1 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
</pre>
If I change the line `y1.backward()` to `y2.backward()`, the parameter that `has been marked as ready twice` change to `.w2`
As the error suggests, there might be parameters shared across multiple forward-backward passes, or the same set of parameters used by multiple backward passes. However, I find that neither of these two suggestions matches the code provided above.
The error disappeared if I set `find_unused_parameters=False`. However, in my actual code, this setting caused another error, which was `Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel...`.
I am able to change my code to fix the problem, but the focus of my question is why such simple code as the above produces the error. Why was there a parameter that `has been marked as ready twice`?
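For reference, a minimal sketch (an assumption about the intended usage, reusing `model` and `x` from the snippet above; the optimizer is not shown) of the one-forward-per-backward pattern that default DDP expects:
```python
# one forward followed by one backward per iteration, as DDP expects by default
for order in (0, 1, 2):
    y = model(x, order)
    y.backward()
    # an optimizer step / zero_grad would normally go here
```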
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla P100-PCIE-12GB
GPU 1: Tesla P100-PCIE-12GB
GPU 2: Tesla P100-PCIE-12GB
Nvidia driver version: 470.199.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz
Stepping: 4
CPU MHz: 2100.000
CPU max MHz: 3000,0000
CPU min MHz: 800,0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 16 MiB
L3 cache: 22 MiB
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
1,891 | 105,716 |
Dynamo test pipeline failed on MaxPool2d test when changed to use f-string
|
module: ci, module: tests, triaged, module: dynamo
|
### 🐛 Describe the bug
Tests
[linux-bionic-py3.11-clang9 / test (dynamo, 1, 2, linux.2xlarge)](https://hud.pytorch.org/pr/pytorch/pytorch/105436#15182560189) ([gh](https://github.com/pytorch/pytorch/actions/runs/5604098622/jobs/10251876391))
[linux-bionic-py3.8-clang9 / test (dynamo, 1, 2, linux.2xlarge)](https://hud.pytorch.org/pr/pytorch/pytorch/105436#15182563911) ([gh](https://github.com/pytorch/pytorch/actions/runs/5604098622/jobs/10251879196))
fails when the string format call `cls_name = 'MaxPool{}d'.format(num_dim)` in `_test_maxpool_indices` is changed to an f-string. For some reason the name becomes `MaxPool1d` even when it should be `MaxPool2d`.
I could not reproduce it in either a Linux environment or on an M1 Mac. It only happens in CI.
https://github.com/pytorch/pytorch/pull/105436#issuecomment-1642786366
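For clarity, the change in question only touches the string construction (sketch below, with `num_dim` chosen as an example value); the two spellings should be equivalent, which is what makes the CI-only failure surprising:
```python
num_dim = 2
cls_name_format = 'MaxPool{}d'.format(num_dim)   # original spelling
cls_name_fstring = f'MaxPool{num_dim}d'          # spelling that failed in CI
assert cls_name_format == cls_name_fstring == 'MaxPool2d'
```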
### Versions
```
PyTorch version: 2.1.0a0+git2d16f88
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.11.4 (main, Jun 20 2023, 17:23:00) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] numpy==1.25.1
[pip3] pytorch==2.1.0a0+git2d16f88
[conda] Could not collect
```
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
1,892 | 105,712 |
Fail to build c++ extension in pytorch 2.0.0
|
module: windows, module: cpp-extensions, triaged
|
### 🐛 Describe the bug
I am using PyTorch 2.0.0, VS2019 and CUDA 11.8, and I am trying to build my custom C++ extension. The code built successfully with PyTorch 1.11.0, VS2019 and CUDA 11.3, but the same code now fails to build with PyTorch 2.0.0, VS2019 and CUDA 11.8. I am not sure what the problem is. The error messages are shown below. Please kindly help me. Thank you very much.
```
D:\NP\self_code\det\extension\sigmoid_focal_loss\src>python setup.py build
running build
running build_ext
D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\utils\cpp_extension.py:359: UserWarning: Error checking compiler version for cl: [WinError 2] 系统找不到指定的文件。
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'sigmoid_focal_loss_cuda' extension
Emitting ninja build file D:\NP\self_code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages
\torch\include\torch\csrc\api\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\TH -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\in
clude" -ID:\Programs\Anaconda3\envs\NP\include -ID:\Programs\Anaconda3\envs\NP\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visua
l Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" -c D:\NP\self_code\det\extension\sigmoid_focal_loss\src\sigmoid_focal_loss.cpp /FoD:\NP\self_code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\s
igmoid_focal_loss.obj -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=sigmoid_focal_loss_cuda -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++17
FAILED: D:/NP/self_code/det/extension/sigmoid_focal_loss/src/build/temp.win-amd64-cpython-38/Release/sigmoid_focal_loss.obj
cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch
\include\torch\csrc\api\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\TH -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include"
-ID:\Programs\Anaconda3\envs\NP\include -ID:\Programs\Anaconda3\envs\NP\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Stud
io\2019\Community\VC\Tools\MSVC\14.29.30133\include" -c D:\NP\self_code\det\extension\sigmoid_focal_loss\src\sigmoid_focal_loss.cpp /FoD:\NP\self_code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\sigmoid
_focal_loss.obj -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=sigmoid_focal_loss_cuda -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++17
注意: 包含文件: D:\NP\self_code\det\extension\sigmoid_focal_loss\src\pytorch_cpp_helper.hpp
注意: 包含文件: D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\torch\csrc\api\include\torch/types.h
注意: 包含文件: D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\ATen/ATen.h
注意: 包含文件: D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\ATen/Context.h
注意: 包含文件: D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\ATen/CPUGeneratorImpl.h
注意: 包含文件: D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\ATen/core/Generator.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\stdint.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vcruntime.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\sal.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\concurrencysal.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\vadefs.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\mutex
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\yvals_core.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\xkeycheck.h
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\__msvc_chrono.hpp
注意: 包含文件: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\yvals.h
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\yvals.h(12): fatal error C1083: 无法打开包括文件: “crtdbg.h”: No such file or directory
[2/2] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output D:\NP\self_code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\sigmoid_focal
_loss_cuda.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_h
as_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -ID:\Programs\Anaconda3\envs
\NP\lib\site-packages\torch\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\torch\csrc\api\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\TH -ID:\Programs\Anaconda3\envs\NP\lib\site-pack
ages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\Programs\Anaconda3\envs\NP\include -ID:\Programs\Anaconda3\envs\NP\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Communit
y\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" -c D:\NP\self_code\det\extension\sigmoid_focal_loss\src\sigmoid_focal_loss_cuda.cu -o D:\NP
\self_code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\sigmoid_focal_loss_cuda.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERAT
ORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=sigmoid_focal_loss_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89
FAILED: D:/NP/self_code/det/extension/sigmoid_focal_loss/src/build/temp.win-amd64-cpython-38/Release/sigmoid_focal_loss_cuda.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output D:\NP\self_code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\sigmoid_focal_loss_
cuda.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_dif
ferent_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -ID:\Programs\Anaconda3\envs\NP\li
b\site-packages\torch\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\torch\csrc\api\include -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\include\TH -ID:\Programs\Anaconda3\envs\NP\lib\site-packages\t
orch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\Programs\Anaconda3\envs\NP\include -ID:\Programs\Anaconda3\envs\NP\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\T
ools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" -c D:\NP\self_code\det\extension\sigmoid_focal_loss\src\sigmoid_focal_loss_cuda.cu -o D:\NP\self_
code\det\extension\sigmoid_focal_loss\src\build\temp.win-amd64-cpython-38\Release\sigmoid_focal_loss_cuda.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__
--expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=sigmoid_focal_loss_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89
C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/include\crtdefs.h(10): fatal error C1083: 无法打开包括文件: “corecrt.h”: No such file or directory
sigmoid_focal_loss_cuda.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build
subprocess.run(
File "D:\Programs\Anaconda3\envs\NP\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "setup.py", line 8, in <module>
setup(name='sigmoid_focal_loss',
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\__init__.py", line 107, in setup
return distutils.core.setup(**attrs)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\dist.py", line 1244, in run_command
super().run_command(command)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\command\build.py", line 131, in run
self.run_command(cmd_name)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\dist.py", line 1244, in run_command
super().run_command(command)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 345, in run
self.build_extensions()
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\utils\cpp_extension.py", line 843, in build_extensions
build_ext.build_extensions(self)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 467, in build_extensions
self._build_extensions_serial()
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 493, in _build_extensions_serial
self.build_extension(ext)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\Cython\Distutils\build_ext.py", line 127, in build_extension
super(build_ext, self).build_extension(ext)
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 548, in build_extension
objects = self.compiler.compile(
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\utils\cpp_extension.py", line 815, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\utils\cpp_extension.py", line 1574, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "D:\Programs\Anaconda3\envs\NP\lib\site-packages\torch\utils\cpp_extension.py", line 1909, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
```
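One hedged diagnostic sketch (an assumption, not a confirmed root cause): `crtdbg.h` and `corecrt.h` live in the Windows SDK's UCRT include directory, so a C1083 for those headers often means the build is not running inside a Visual Studio developer environment and the SDK paths are missing from `INCLUDE`:
```python
import os

# rough check of whether the Windows SDK UCRT headers are visible to cl.exe
include_dirs = [d for d in os.environ.get("INCLUDE", "").split(";") if d]
has_ucrt = any("ucrt" in d.lower() for d in include_dirs)
print("INCLUDE entries:", len(include_dirs))
print("Windows SDK UCRT on INCLUDE:", has_ucrt)
```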
### Versions
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 专业版
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.17 (default, Jul 5 2023, 20:44:21) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
'wmic' 不是内部或外部命令,也不是可运行的程序
或批处理文件。
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[conda] blas 2.117 mkl conda-forge
[conda] blas-devel 3.9.0 17_win64_mkl conda-forge
[conda] libblas 3.9.0 17_win64_mkl conda-forge
[conda] libcblas 3.9.0 17_win64_mkl conda-forge
[conda] liblapack 3.9.0 17_win64_mkl conda-forge
[conda] liblapacke 3.9.0 17_win64_mkl conda-forge
[conda] mkl 2022.1.0 h6a75c08_874 conda-forge
[conda] mkl-devel 2022.1.0 h57928b3_875 conda-forge
[conda] mkl-include 2022.1.0 h6a75c08_874 conda-forge
[conda] numpy 1.23.2 py38h223ccf5_0 conda-forge
[conda] pytorch 2.0.0 py3.8_cuda11.8_cudnn8_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] pytorch-cuda 11.8 h24eeafa_5 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] pytorch-mutex 1.0 cuda https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torchaudio 2.0.0 pypi_0 pypi
[conda] torchvision 0.15.0 pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @malfet @zou3519
| 4 |
1,893 | 105,706 |
Unable to build documentation
|
module: onnx, triaged, module: doc infra
|
### 📚 The doc issue
I followed the instructions in [CONTRIBUTING.md](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-documentation).
**The following error occurred while building the documentation:**
```
❯ make html --debug
GNU Make 4.3
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Updating makefiles....
Updating goal targets....
File 'html' does not exist.
File 'figures' does not exist.
Must remake target 'figures'.
Successfully remade target file 'figures'.
File 'onnx' does not exist.
Must remake target 'onnx'.
Traceback (most recent call last):
File "/docs/source/scripts/onnx/build_onnx_diagnostics_rules_md.py", line 37, in <module>
main()
File "/docs/source/scripts/onnx/build_onnx_diagnostics_rules_md.py", line 33, in main
gen_docs(args.out_dir)
File "/docs/source/scripts/onnx/build_onnx_diagnostics_rules_md.py", line 16, in gen_docs
full_description_markdown = rule.full_description_markdown
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Rule' object has no attribute 'full_description_markdown'
make: *** [Makefile:22: onnx] Error 1
```
**System information is as follows:**
```
❯ uname -a
Linux DY-Debian 6.1.0-10-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.37-1 (2023-07-03) x86_64 GNU/Linux
```
```
❯ python -V
Python 3.11.2
```
```
❯ pip freeze
absl-py==1.4.0
alabaster==0.7.13
anyio==3.7.1
appdirs==1.4.4
asgiref==3.6.0
asttokens==2.2.1
astunparse==1.6.3
attrs==22.2.0
av==10.0.0
Babel==2.10.3
backcall==0.2.0
bcrypt==3.2.2
beautifulsoup4==4.11.2
beniget==0.4.1
blinker==1.5
Bottleneck==1.3.5
Brlapi==0.8.4
Brotli==1.0.9
bytecode==0.14.0
cachetools==5.3.1
capstone==4.0.2
certifi==2022.9.24
chardet==5.1.0
charset-normalizer==3.0.1
click==8.1.3
colorama==0.4.6
colored-traceback==0.3.0
contourpy==1.0.7
coverage==6.5.0
cryptography==38.0.4
cupshelpers==1.0
cycler==0.11.0
dbus-python==1.3.2
debugpy==1.6.3+git20221103.a2a3328
decorator==5.1.1
defusedxml==0.7.1
distro==1.8.0
dnspython==2.4.0
docutils==0.18.1
entrypoints==0.4
et-xmlfile==1.0.1
executing==1.2.0
Flask==2.2.2
fonttools==4.38.0
fs==2.4.16
fuse-python==1.0.5
future==0.18.2
gast==0.5.2
google-auth==2.22.0
google-auth-oauthlib==0.4.6
gpg==1.18.0
grpcio==1.56.0
h11==0.14.0
html5lib==1.1
httpcore==0.17.3
httplib2==0.20.4
hypothesis==6.67.1
idna==3.3
imagesize==1.4.1
iniconfig==1.1.1
intervaltree==3.0.2
invoke==2.0.0
ipykernel==6.17.0
ipython==8.5.0
itsdangerous==2.1.2
jdcal==1.0
jedi==0.18.2
Jinja2==3.1.2
jupyter_client==7.4.9
jupyter_core==4.12.0
kiwisolver==1.4.4
lazr.restfulclient==0.14.5
lazr.uri==1.0.6
llvmlite==0.39.1
louis==3.24.0
lxml==4.9.2
lz4==4.0.2+dfsg
Mako==1.2.4.dev0
Markdown==3.4.3
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.6.0
matplotlib-inline==0.1.6
mdit-py-plugins==0.3.5
mdurl==0.1.2
more-itertools==8.10.0
mpmath==0.0.0
myst-parser==0.18.1
nest-asyncio==1.5.4
numba==0.56.4
numexpr==2.8.4
numpy==1.24.2
oauthlib==3.2.2
odfpy==1.4.2
olefile==0.46
openpyxl==3.0.9
packaging==23.0
pandas==1.5.3
paramiko==2.12.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.4.0
pluggy==1.0.0+repack
plumbum==0.0.0
ply==3.11
prompt-toolkit==3.0.36
protobuf==3.19.6
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.0.0
pwntools==4.12.0.dev0
py==1.11.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycairo==1.20.1
pycups==2.0.1
pycurl==7.45.2
pydevd==2.9.5
pyelftools==0.29
Pygments==2.14.0
PyGObject==3.42.2
pyinotify==0.9.6
PyJWT==2.6.0
pylibacl==0.7.0
PyNaCl==1.5.0
pyOpenSSL==23.0.0
pyparsing==3.0.9
PyQt5==5.15.9
PyQt5-sip==12.11.1
pyserial==3.5
PySimpleSOAP==1.16.2
pysmbc==1.0.23
PySocks==1.7.1
pytest==7.2.1
python-apt==2.6.0
python-dateutil==2.8.2
python-debian==0.1.49
python-debianbts==4.0.1
python-dotenv==0.21.0
python-etcd==0.4.5
pythran==0.11.0
-e git+https://github.com/pytorch/pytorch_sphinx_theme.git@32a6550fcfd59b7f5d9bae4f2181dc5c715de691#egg=pytorch_sphinx_theme
pytz==2022.7.1
pyxattr==0.8.1
pyxdg==0.28
PyYAML==6.0
pyzmq==24.0.1
ranger-fm==1.9.3
reportbug==12.0.0
requests==2.28.1
requests-oauthlib==1.3.1
ROPGadget==7.2
rpyc==5.3.0
rsa==4.9
scapy==2.5.0
scipy==1.10.1
simplejson==3.18.3
six==1.16.0
sniffio==1.3.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soupsieve==2.3.2
Sphinx==5.0.0
sphinx-copybutton==0.5.0
sphinx-panels==0.4.1
sphinxcontrib-applehelp==1.0.4
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-katex==0.8.6
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
stack-data==0.6.2
sympy==1.11.1
tables==3.7.0
tensorboard==2.10.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
torch==1.13.0a0+gitunknown
torchaudio==0.13.1
torchvision==0.14.1a0
tornado==6.2
tqdm==4.64.1
traitlets==5.5.0
typing_extensions==4.4.0
ufoLib2==0.14.0
unicorn==2.0.1.post1
urllib3==1.26.12
wadllib==1.3.6
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==2.2.2
xdg==5
```
```
❯ git status
On branch main
Your branch is up to date with 'origin/main'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
.venv/
nothing added to commit but untracked files present (use "git add" to track)
```
```
❯ git rev-parse HEAD
a8f568e99b1e59a97c70ae0a351cdd19716ef085
```
- katex installed via apt, already in $PATH
- pip requirements already satisfied
- system is Debian 12 with KDE Plasma
### Suggest a potential alternative/fix
_No response_
cc @ezyang @zou3519 @holly1238 @svekars
| 0 |
1,894 | 105,702 |
Slightly improve AOTAutograd logging with ViewAndMutationMeta
|
Merged, Reverted, Stale, ciflow/trunk, ciflow/inductor, release notes: AO frontend, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105702
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| 11 |
1,895 | 105,700 |
Don't use weak ref finalization for freeing resources when code objects die
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Today, we use weakref on codegen'ed code objects to trigger deallocation of compiled functions when the cache is cleared (using CleanupManager). I find this to be a bit fragile in the presence of pytest/exceptions, because if an exception is raised inside our compilation stack, the exception will hold references to the frame, and thus the code object, and then if this exception is kept around, this prevents the compiled function from ever being deallocated. It seems better to treat the cache entries as owning their respective code objects etc, and directly trigger deallocation when they are cleared.
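A tiny self-contained illustration of the effect described above (a generic Python sketch, not dynamo's actual CleanupManager machinery): a saved exception keeps its frames, and therefore their code objects, alive, so weakref-based cleanup never fires until the exception itself is released.
```python
import gc
import weakref

def make_failure():
    def generated():                 # stands in for a codegen'ed function
        raise RuntimeError("boom")
    try:
        generated()
    except RuntimeError as err:
        return err, weakref.ref(generated.__code__)

exc, code_ref = make_failure()
print(code_ref())   # still alive: exc.__traceback__ references the frames
del exc
gc.collect()        # clear any remaining frame/traceback cycles
print(code_ref())   # None: the code object has now been collected
```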
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 0 |
1,896 | 105,697 |
Support symmetry in einsum
|
triaged, module: linear algebra
|
### 🚀 The feature, motivation and pitch
PyTorch is an incredibly powerful tool, particularly due to its capability to parallelize and leverage GPU resources for operations like `einsum`. One area where I'd love to see development is in the support for tensor symmetries.
For example, let's consider a tensor contraction like `A[a,b,c,d,e] * B[d,e,f,g,h] = C[a,b,c,f,g,h]`. If we have symmetries such as `A[a,b,c,d,e] = A[b,a,c,d,e]` or `B[d,e,f,g,h]=B[d,e,f,h,g]`, it would be great if we could use these symmetries to boost computational efficiency significantly.
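As a concrete illustration, here is the contraction described above written plainly (made-up sizes; it does not exploit the symmetry, which is exactly the missed opportunity):
```python
import torch

A = torch.randn(4, 4, 5, 6, 7)
A = 0.5 * (A + A.transpose(0, 1))   # enforce A[a,b,c,d,e] == A[b,a,c,d,e]
B = torch.randn(6, 7, 8, 9, 9)
B = 0.5 * (B + B.transpose(3, 4))   # enforce B[d,e,f,g,h] == B[d,e,f,h,g]
C = torch.einsum('abcde,defgh->abcfgh', A, B)
print(C.shape)                      # torch.Size([4, 4, 5, 8, 9, 9])
# C inherits both symmetries: in (a,b) from A and in (g,h) from B
```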
### Alternatives
The `ctf` package does offer some functionality along these lines, but it seems limited to symmetric/antisymmetric operations for adjacent indices. Here's the GitHub repository for `ctf`:
https://github.com/cyclops-community/ctf
For more specifics, the documentation can be found here:
https://solomon2.web.engr.illinois.edu/ctf_python/ctf.html#module-ctf.core
That said, from my observations, the performance isn't quite as efficient as one might hope. You can check out the issue I'm referring to here:
https://github.com/cyclops-community/ctf/issues/136
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 1 |
1,897 | 105,694 |
An experiment profiling
|
keep-going
|
Fixes #ISSUE_NUMBER
| 2 |
1,898 | 105,687 |
[WIP] low mem max_pool2d_with_indices
|
Stale, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #105966
* __->__ #105687
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 2 |
1,899 | 105,686 |
Error using torch.compile with HF transformers and model `mosaicml/mpt-7b`
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
Below is a reproducible example of what is failing. It seems to relate to the `rearrange` function from the `einops` package.
Tagging @ezyang (at the suggestion of @ani300)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import torch._dynamo
from torch._inductor.compile_fx import compile_fx
model_path = "mosaicml/mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained(
model_path, trust_remote_code=True
)
compiled_forward = torch._dynamo.optimize(
lambda model, inputs: compile_fx(
model,
inputs,
config_patches={
"triton.cudagraphs": False,
"size_asserts": False,
},
),
dynamic=True,
)(model.forward)
with torch.no_grad():
tokenized_inputs = tokenizer(
"Testing PT2 Compile",
return_tensors="pt",
)
print(tokenized_inputs)
output = model.forward(
input_ids=tokenized_inputs['input_ids'],
attention_mask=tokenized_inputs['attention_mask']
)
print(output)
output = compiled_forward(
input_ids=tokenized_inputs['input_ids'],
attention_mask=tokenized_inputs['attention_mask']
)
print(output)
```
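As a side note, a smaller sketch (an assumption, reusing `model` and `tokenized_inputs` from the repro above, and not verified against this exact model) that usually exercises the same path through the public entry point:
```python
compiled_forward = torch.compile(model.forward, dynamic=True)
with torch.no_grad():
    output = compiled_forward(
        input_ids=tokenized_inputs['input_ids'],
        attention_mask=tokenized_inputs['attention_mask'],
    )
```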
### Error logs
```bash
$ python issue.py
Instantiating an MPTForCausalLM model from /localhome/tpa/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b/72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7/modeling_mpt.py
You are using config.init_device='cpu', but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:07<00:00, 3.96s/it]
{'input_ids': tensor([[38571, 10622, 19, 3631, 587]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])}
CausalLMOutputWithPast(loss=None, logits=tensor([[[30.7642, 10.5882, 32.5046, ..., 10.4191, 10.4177, 10.4317],
[23.1704, 4.3510, 23.5814, ..., 4.3556, 4.3414, 4.3513],
[37.4781, 14.9682, 38.8808, ..., 14.9036, 14.8895, 14.9042],
[14.8190, -6.1388, 14.6889, ..., -6.2073, -6.2138, -6.2110],
[36.7066, 15.7330, 39.1332, ..., 15.6551, 15.6470, 15.6626]]]), past_key_values=None, hidden_states=None, attentions=None)
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Traceback (most recent call last):
File "/localhome/tpa/torch/issue.py", line 44, in <module>
output = compiled_forward(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 306, in _fn
return fn(*args, **kwargs)
File "/localhome/tpa/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b/72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7/modeling_mpt.py", line 270, in forward
outputs = self.transformer(input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id, return_dict=return_dict, output_attentions=output_attentions, output_hidden_states=output_hidden_states, use_cache=use_cache)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
return forward_call(*args, **kwargs)
File "/localhome/tpa/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b/72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7/modeling_mpt.py", line 143, in forward
def forward(self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]]=None, attention_mask: Optional[torch.ByteTensor]=None, prefix_mask: Optional[torch.ByteTensor]=None, sequence_id: Optional[torch.LongTensor]=None, return_dict: Optional[bool]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, use_cache: Optional[bool]=None, inputs_embeds: Optional[torch.Tensor]=None):
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 466, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 546, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 129, in _fn
return fn(*args, **kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 365, in _convert_frame_assert
return _compile(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 191, in time_wrapper
r = func(*args, **kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 435, in _compile
out_code = transform_code_object(code, transform)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1002, in transform_code_object
transformations(instructions, code_options)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 420, in transform
tracer.run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2060, in run
super().run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 679, in step
getattr(self, inst.opname)(inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 387, in wrapper
return inner_fn(self, inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1163, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 556, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 333, in call_function
return tx.inline_user_function_return(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 592, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2165, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2262, in inline_call_
tracer.run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 679, in step
getattr(self, inst.opname)(inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 387, in wrapper
return inner_fn(self, inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1151, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 556, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 339, in call_function
return super().call_function(tx, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 293, in call_function
return super().call_function(tx, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 124, in call_function
return tx.inline_user_function_return(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 592, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2165, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2262, in inline_call_
tracer.run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 679, in step
getattr(self, inst.opname)(inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 387, in wrapper
return inner_fn(self, inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1163, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 556, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 333, in call_function
return tx.inline_user_function_return(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 592, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2165, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2262, in inline_call_
tracer.run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 679, in step
getattr(self, inst.opname)(inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 387, in wrapper
return inner_fn(self, inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1151, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 556, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 339, in call_function
return super().call_function(tx, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 293, in call_function
return super().call_function(tx, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 124, in call_function
return tx.inline_user_function_return(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 592, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2165, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2262, in inline_call_
tracer.run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 679, in step
getattr(self, inst.opname)(inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 387, in wrapper
return inner_fn(self, inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1163, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 556, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 293, in call_function
return super().call_function(tx, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 124, in call_function
return tx.inline_user_function_return(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 592, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2165, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2262, in inline_call_
tracer.run()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 679, in step
getattr(self, inst.opname)(inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 387, in wrapper
return inner_fn(self, inst)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1163, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 556, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 673, in call_function
tensor_variable = wrap_fx_proxy(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1136, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1210, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1357, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1325, in get_fake_value
return wrap_fake_exception(
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 904, in wrap_fake_exception
return fn()
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1326, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1391, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1378, in run_node
return node.target(*args, **kwargs)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/einops/einops.py", line 483, in rearrange
return reduce(cast(Tensor, tensor), pattern, reduction='rearrange', **axes_lengths)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/einops/einops.py", line 412, in reduce
return _apply_recipe(recipe, tensor, reduction_type=reduction)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/einops/einops.py", line 235, in _apply_recipe
_reconstruct_from_shape(recipe, backend.shape(tensor))
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function rearrange at 0x7fdcfa58e560>(*(FakeTensor(..., size=(1, s0, 4096)), 'b s (h d) -> b h s d'), **{'h': 32}):
unhashable type: 'SymInt'
from user code:
File "/localhome/tpa/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b/72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7/attention.py", line 201, in forward
(context, attn_weights, past_key_value) = self.attn_fn(query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights)
File "/localhome/tpa/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b/72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7/attention.py", line 21, in scaled_multihead_dot_product_attention
q = rearrange(query, 'b s (h d) -> b h s d', h=n_heads)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
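As a point of comparison (not part of the original report), the reshape that trips dynamo here can be written with native tensor ops instead of `einops.rearrange`, which avoids the einops shape-caching path that ends up hashing a `SymInt`. A minimal sketch, assuming `query` has shape `(b, s, h * d)` as in the MPT attention code shown in the traceback:

```python
import torch

# Hedged sketch of 'b s (h d) -> b h s d' without einops.rearrange.
def split_heads(query: torch.Tensor, n_heads: int) -> torch.Tensor:
    b, s, hd = query.shape
    d = hd // n_heads
    return query.view(b, s, n_heads, d).transpose(1, 2)  # -> (b, h, s, d)

q = torch.randn(1, 7, 4096)
assert split_heads(q, 32).shape == (1, 32, 7, 128)
```

Whether this sidesteps the failure for the full model depends on the rest of the attention code, but `view`/`transpose` are traced through symbolic sizes directly.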
### Minified repro
```
$ python /localhome/tpa/torch/torch_compile_debug/run_2023_07_20_20_08_37_462675-pid_1406827/minifier/minifier_launcher.py
Loading inputs: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1333.64it/s]
Traceback (most recent call last):
File "/localhome/tpa/torch/torch_compile_debug/run_2023_07_20_20_08_37_462675-pid_1406827/minifier/minifier_launcher.py", line 48, in <module>
run_repro(mod, load_args, accuracy=False, command='minify',
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 566, in run_repro
COMMAND_FNS[options.command](options, mod, load_args)
File "/localhome/tpa/anaconda3/envs/pt-env/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 399, in repro_minify
raise RuntimeError(
RuntimeError: Compiler name is None - this likely means that a custom compiler was called by torchdynamo. Please remove this error, import your custom compiler function, and replace the backend=None line in run_repro to backend=<my_imported_custom_function>
```
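For context on what the minifier error above means by a "custom compiler": a dynamo backend is just a callable that receives the captured `torch.fx.GraphModule` plus example inputs and returns a callable. A minimal runnable sketch (unrelated to the backend actually used in this report):

```python
import torch

def my_custom_backend(gm: torch.fx.GraphModule, example_inputs):
    # Trivial dynamo backend: just run the captured graph eagerly.
    return gm.forward

@torch.compile(backend=my_custom_backend)
def f(x):
    return x.sin() + 1

print(f(torch.randn(4)))
```

Per the error message, the generated `minifier_launcher.py` would need that same callable imported and passed as `backend=` to `run_repro` in place of `None`.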
### Versions
```bash
Collecting environment information...
PyTorch version: 2.1.0.dev20230720+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-209-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
GPU 2: Tesla V100-PCIE-16GB
GPU 3: Tesla V100-PCIE-16GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
Stepping: 7
Frequency boost: enabled
CPU MHz: 800.003
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 40 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.1.0.dev20230720+cpu
[conda] numpy 1.25.1 pypi_0 pypi
[conda] torch 2.1.0.dev20230720+cpu pypi_0 pypi
```
Conda env:
```
conda create -n pt-env python=3.10
conda activate pt-env
pip install --index-url "https://download.pytorch.org/whl/nightly/cpu" torch
pip install transformers einops
```
pip freeze:
```
$ pip freeze
certifi==2023.5.7
charset-normalizer==3.2.0
einops==0.6.1
filelock==3.9.0
fsspec==2023.4.0
huggingface-hub==0.16.4
idna==3.4
Jinja2==3.1.2
MarkupSafe==2.1.2
mpmath==1.2.1
networkx==2.6.3
numpy==1.25.1
packaging==23.1
PyYAML==6.0.1
regex==2023.6.3
requests==2.31.0
safetensors==0.3.1
sympy==1.11.1
tokenizers==0.13.3
torch==2.1.0.dev20230720+cpu
tqdm==4.65.0
transformers==4.31.0
typing_extensions==4.4.0
urllib3==2.0.4
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
1,900 | 105,682 |
`torch.autocast(bfloat16)` runs bwd matmuls in fp16
|
triaged, module: amp (automated mixed precision)
|
<details>
<summary> Repro code </summary>
```python
"""
python test/test_amp.py
"""
import itertools
import weakref
from functools import partial

import torch
import torch.nn as nn
import torch.overrides
from torch.utils._python_dispatch import TorchDispatchMode
from torch.utils._pytree import tree_map
from torch.utils.weak import WeakIdRef

dtype_abbrs = {
    torch.bfloat16: "bf16",
    torch.float64: "f64",
    torch.float32: "f32",
    torch.float16: "f16",
    torch.complex32: "c32",
    torch.complex64: "c64",
    torch.complex128: "c128",
    torch.int8: "i8",
    torch.int16: "i16",
    torch.int32: "i32",
    torch.int64: "i64",
    torch.bool: "b8",
    torch.uint8: "u8",
}


def main():
    torch.manual_seed(42)
    model = nn.Transformer(num_encoder_layers=1, num_decoder_layers=1, device="cuda")
    src = torch.rand((10, 32, 512), device="cuda")
    tgt = torch.rand((20, 32, 512), device="cuda")
    amp_ctx = torch.autocast(device_type="cuda", dtype=torch.bfloat16)
    logging_ctx = LoggingMode(collect_logs=True)
    with amp_ctx, logging_ctx:
        model(src, tgt).sum().backward()
    print(logging_ctx.str_logs())


class Lit:
    def __init__(self, s):
        self.s = s

    def __repr__(self):
        return self.s


class LoggingMode(TorchDispatchMode):
    next_id: int

    def __init__(self, with_type: bool = True, collect_logs: bool = False):
        self.memo = {}
        self.next_id = 0
        self.with_type = with_type
        self.collect_logs = collect_logs
        self.logs = []
        try:
            self.rank = torch.distributed.get_rank()
        except:
            self.rank = 0

    def _shortid(self, t: torch.Tensor) -> int:
        o = WeakIdRef(t)
        weak_self = weakref.ref(self)

        def del_memo():
            self = weak_self()
            if self is None:
                return
            self.memo.pop(o, None)

        weakref.finalize(t, del_memo)
        if o not in self.memo:
            self.memo[o] = self.next_id
            self.next_id += 1
        return self.memo[o]

    def _fmt(self, a: object, with_type: bool = False) -> str:
        if isinstance(a, torch.Tensor):
            maybe_type = ""
            if with_type and self.with_type:
                maybe_type = f": {dtype_abbrs[a.dtype]}[{', '.join(map(str, a.shape))}]"
            return Lit(f"${self._shortid(a)}{maybe_type}")
        else:
            return a

    def str_logs(self):
        return "\n".join(self.logs)

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        rs = func(*args, **kwargs)
        fmt_args = ", ".join(
            itertools.chain(
                (repr(tree_map(self._fmt, a)) for a in args),
                (f"{k}={tree_map(self._fmt, v)}" for k, v in kwargs.items()),
            )
        )
        fmt_rets = repr(tree_map(partial(self._fmt, with_type=True), rs))
        log_msg = f"{fmt_rets} = {torch.overrides.resolve_name(func)}({fmt_args})"
        if self.collect_logs:
            self.logs.append(log_msg)
        elif self.rank == 0:
            print(log_msg)
        return rs


if __name__ == "__main__":
    main()
```
</details>
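A smaller hedged variant of the same instrumentation (not from the original report; the class name is mine, and a CUDA device is assumed as in the repro): it prints only the input dtypes of `aten.mm` calls, which is enough to see whether backward matmuls come out in `float16` or `bfloat16`.

```python
import torch
import torch.nn as nn
from torch.utils._python_dispatch import TorchDispatchMode

class MMDtypeLogger(TorchDispatchMode):
    """Print the input dtypes of every aten.mm call seen during dispatch."""
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.ops.aten.mm.default:
            print("aten.mm inputs:", [a.dtype for a in args if isinstance(a, torch.Tensor)])
        return func(*args, **kwargs)

lin = nn.Linear(512, 512, device="cuda")
x = torch.randn(32, 512, device="cuda", requires_grad=True)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16), MMDtypeLogger():
    lin(x).sum().backward()
```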
<details>
<summary> Logged Ops </summary>
For example, searching for `$254: f16[512, 2048] = aten.mm.default($253, $252)` turns up a backward matmul running entirely in float16, even though the autocast context was entered with `dtype=torch.bfloat16`:
```
$1: bf16[1536] = aten._to_copy.default($0, dtype=torch.bfloat16)
$3: bf16[1536, 512] = aten._to_copy.default($2, dtype=torch.bfloat16)
$5: bf16[10, 32, 512] = aten._to_copy.default($4, dtype=torch.bfloat16)
$6: bf16[512, 1536] = aten.t.default($3)
$7: bf16[320, 512] = aten.view.default($5, [320, 512])
$8: bf16[320, 1536] = aten.addmm.default($1, $7, $6)
$9: bf16[10, 32, 1536] = aten.view.default($8, [10, 32, 1536])
$10: bf16[10, 32, 3, 512] = aten.view.default($9, [10, 32, 3, 512])
$11: bf16[1, 10, 32, 3, 512] = aten.unsqueeze.default($10, 0)
$12: bf16[3, 10, 32, 1, 512] = aten.transpose.int($11, 0, -2)
$13: bf16[3, 10, 32, 512] = aten.squeeze.dim($12, -2)
$14: bf16[3, 10, 32, 512] = aten.clone.default($13, memory_format=torch.contiguous_format)
$15: bf16[10, 32, 512] = aten.select.int($14, 0, 0)
$16: bf16[10, 32, 512] = aten.select.int($14, 0, 1)
$17: bf16[10, 32, 512] = aten.select.int($14, 0, 2)
$18: bf16[10, 256, 64] = aten.view.default($15, [10, 256, 64])
$19: bf16[256, 10, 64] = aten.transpose.int($18, 0, 1)
$20: bf16[10, 256, 64] = aten.view.default($16, [10, 256, 64])
$21: bf16[256, 10, 64] = aten.transpose.int($20, 0, 1)
$22: bf16[10, 256, 64] = aten.view.default($17, [10, 256, 64])
$23: bf16[256, 10, 64] = aten.transpose.int($22, 0, 1)
$24: bf16[32, 8, 10, 64] = aten.view.default($19, [32, 8, 10, 64])
$25: bf16[32, 8, 10, 64] = aten.view.default($21, [32, 8, 10, 64])
$26: bf16[32, 8, 10, 64] = aten.view.default($23, [32, 8, 10, 64])
($27: bf16[32, 8, 10, 64], $28: f32[32, 8, 32], $29: i64[], $30: i64[]) = aten._scaled_dot_product_efficient_attention.default($24, $25, $26, True, 0.1)
$31: bf16[32, 8, 10, 64] = aten.detach.default($27)
$32: bf16[10, 32, 8, 64] = aten.permute.default($27, [2, 0, 1, 3])
$33: bf16[10, 32, 8, 64] = aten.clone.default($32, memory_format=torch.contiguous_format)
$34: bf16[320, 512] = aten.view.default($33, [320, 512])
$36: bf16[512] = aten._to_copy.default($35, dtype=torch.bfloat16)
$38: bf16[512, 512] = aten._to_copy.default($37, dtype=torch.bfloat16)
$39: bf16[512, 512] = aten.t.default($38)
$40: bf16[320, 512] = aten.addmm.default($36, $34, $39)
$41: bf16[10, 32, 512] = aten.view.default($40, [10, 32, 512])
($42: bf16[10, 32, 512], $43: b8[10, 32, 512]) = aten.native_dropout.default($41, 0.1, True)
$44: f32[10, 32, 512] = aten.add.Tensor($4, $42)
($47: f32[10, 32, 512], $48: f32[10, 32, 1], $49: f32[10, 32, 1]) = aten.native_layer_norm.default($44, [512], $45, $46, 1e-05)
$51: bf16[2048] = aten._to_copy.default($50, dtype=torch.bfloat16)
$53: bf16[2048, 512] = aten._to_copy.default($52, dtype=torch.bfloat16)
$54: bf16[10, 32, 512] = aten._to_copy.default($47, dtype=torch.bfloat16)
$55: bf16[512, 2048] = aten.t.default($53)
$56: bf16[320, 512] = aten.view.default($54, [320, 512])
$57: bf16[320, 2048] = aten.addmm.default($51, $56, $55)
$58: bf16[10, 32, 2048] = aten.view.default($57, [10, 32, 2048])
$59: bf16[10, 32, 2048] = aten.relu.default($58)
$60: bf16[10, 32, 2048] = aten.detach.default($59)
($61: bf16[10, 32, 2048], $62: b8[10, 32, 2048]) = aten.native_dropout.default($59, 0.1, True)
$64: bf16[512] = aten._to_copy.default($63, dtype=torch.bfloat16)
$66: bf16[512, 2048] = aten._to_copy.default($65, dtype=torch.bfloat16)
$67: bf16[2048, 512] = aten.t.default($66)
$68: bf16[320, 2048] = aten.view.default($61, [320, 2048])
$69: bf16[320, 512] = aten.addmm.default($64, $68, $67)
$70: bf16[10, 32, 512] = aten.view.default($69, [10, 32, 512])
($71: bf16[10, 32, 512], $72: b8[10, 32, 512]) = aten.native_dropout.default($70, 0.1, True)
$73: f32[10, 32, 512] = aten.add.Tensor($47, $71)
($76: f32[10, 32, 512], $77: f32[10, 32, 1], $78: f32[10, 32, 1]) = aten.native_layer_norm.default($73, [512], $74, $75, 1e-05)
($81: f32[10, 32, 512], $82: f32[10, 32, 1], $83: f32[10, 32, 1]) = aten.native_layer_norm.default($76, [512], $79, $80, 1e-05)
$85: bf16[1536] = aten._to_copy.default($84, dtype=torch.bfloat16)
$87: bf16[1536, 512] = aten._to_copy.default($86, dtype=torch.bfloat16)
$89: bf16[20, 32, 512] = aten._to_copy.default($88, dtype=torch.bfloat16)
$90: bf16[512, 1536] = aten.t.default($87)
$91: bf16[640, 512] = aten.view.default($89, [640, 512])
$92: bf16[640, 1536] = aten.addmm.default($85, $91, $90)
$93: bf16[20, 32, 1536] = aten.view.default($92, [20, 32, 1536])
$94: bf16[20, 32, 3, 512] = aten.view.default($93, [20, 32, 3, 512])
$95: bf16[1, 20, 32, 3, 512] = aten.unsqueeze.default($94, 0)
$96: bf16[3, 20, 32, 1, 512] = aten.transpose.int($95, 0, -2)
$97: bf16[3, 20, 32, 512] = aten.squeeze.dim($96, -2)
$98: bf16[3, 20, 32, 512] = aten.clone.default($97, memory_format=torch.contiguous_format)
$99: bf16[20, 32, 512] = aten.select.int($98, 0, 0)
$100: bf16[20, 32, 512] = aten.select.int($98, 0, 1)
$101: bf16[20, 32, 512] = aten.select.int($98, 0, 2)
$102: bf16[20, 256, 64] = aten.view.default($99, [20, 256, 64])
$103: bf16[256, 20, 64] = aten.transpose.int($102, 0, 1)
$104: bf16[20, 256, 64] = aten.view.default($100, [20, 256, 64])
$105: bf16[256, 20, 64] = aten.transpose.int($104, 0, 1)
$106: bf16[20, 256, 64] = aten.view.default($101, [20, 256, 64])
$107: bf16[256, 20, 64] = aten.transpose.int($106, 0, 1)
$108: bf16[32, 8, 20, 64] = aten.view.default($103, [32, 8, 20, 64])
$109: bf16[32, 8, 20, 64] = aten.view.default($105, [32, 8, 20, 64])
$110: bf16[32, 8, 20, 64] = aten.view.default($107, [32, 8, 20, 64])
($111: bf16[32, 8, 20, 64], $112: f32[32, 8, 32], $113: i64[], $114: i64[]) = aten._scaled_dot_product_efficient_attention.default($108, $109, $110, True, 0.1)
$115: bf16[32, 8, 20, 64] = aten.detach.default($111)
$116: bf16[20, 32, 8, 64] = aten.permute.default($111, [2, 0, 1, 3])
$117: bf16[20, 32, 8, 64] = aten.clone.default($116, memory_format=torch.contiguous_format)
$118: bf16[640, 512] = aten.view.default($117, [640, 512])
$120: bf16[512] = aten._to_copy.default($119, dtype=torch.bfloat16)
$122: bf16[512, 512] = aten._to_copy.default($121, dtype=torch.bfloat16)
$123: bf16[512, 512] = aten.t.default($122)
$124: bf16[640, 512] = aten.addmm.default($120, $118, $123)
$125: bf16[20, 32, 512] = aten.view.default($124, [20, 32, 512])
($126: bf16[20, 32, 512], $127: b8[20, 32, 512]) = aten.native_dropout.default($125, 0.1, True)
$128: f32[20, 32, 512] = aten.add.Tensor($88, $126)
($131: f32[20, 32, 512], $132: f32[20, 32, 1], $133: f32[20, 32, 1]) = aten.native_layer_norm.default($128, [512], $129, $130, 1e-05)
[$135: f32[512, 512], $136: f32[1024, 512]] = aten.split_with_sizes.default($134, [512, 1024])
[$138: f32[512], $139: f32[1024]] = aten.split_with_sizes.default($137, [512, 1024])
$140: bf16[512] = aten._to_copy.default($138, dtype=torch.bfloat16)
$141: bf16[512, 512] = aten._to_copy.default($135, dtype=torch.bfloat16)
$142: bf16[20, 32, 512] = aten._to_copy.default($131, dtype=torch.bfloat16)
$143: bf16[512, 512] = aten.t.default($141)
$144: bf16[640, 512] = aten.view.default($142, [640, 512])
$145: bf16[640, 512] = aten.addmm.default($140, $144, $143)
$146: bf16[20, 32, 512] = aten.view.default($145, [20, 32, 512])
$147: bf16[1024] = aten._to_copy.default($139, dtype=torch.bfloat16)
$148: bf16[1024, 512] = aten._to_copy.default($136, dtype=torch.bfloat16)
$149: bf16[10, 32, 512] = aten._to_copy.default($81, dtype=torch.bfloat16)
$150: bf16[512, 1024] = aten.t.default($148)
$151: bf16[320, 512] = aten.view.default($149, [320, 512])
$152: bf16[320, 1024] = aten.addmm.default($147, $151, $150)
$153: bf16[10, 32, 1024] = aten.view.default($152, [10, 32, 1024])
$154: bf16[10, 32, 2, 512] = aten.view.default($153, [10, 32, 2, 512])
$155: bf16[1, 10, 32, 2, 512] = aten.unsqueeze.default($154, 0)
$156: bf16[2, 10, 32, 1, 512] = aten.transpose.int($155, 0, -2)
$157: bf16[2, 10, 32, 512] = aten.squeeze.dim($156, -2)
$158: bf16[2, 10, 32, 512] = aten.clone.default($157, memory_format=torch.contiguous_format)
$159: bf16[10, 32, 512] = aten.select.int($158, 0, 0)
$160: bf16[10, 32, 512] = aten.select.int($158, 0, 1)
$161: bf16[20, 256, 64] = aten.view.default($146, [20, 256, 64])
$162: bf16[256, 20, 64] = aten.transpose.int($161, 0, 1)
$163: bf16[10, 256, 64] = aten.view.default($159, [10, 256, 64])
$164: bf16[256, 10, 64] = aten.transpose.int($163, 0, 1)
$165: bf16[10, 256, 64] = aten.view.default($160, [10, 256, 64])
$166: bf16[256, 10, 64] = aten.transpose.int($165, 0, 1)
$167: bf16[32, 8, 20, 64] = aten.view.default($162, [32, 8, 20, 64])
$168: bf16[32, 8, 10, 64] = aten.view.default($164, [32, 8, 10, 64])
$169: bf16[32, 8, 10, 64] = aten.view.default($166, [32, 8, 10, 64])
($170: bf16[32, 8, 20, 64], $171: f32[32, 8, 32], $172: i64[], $173: i64[]) = aten._scaled_dot_product_efficient_attention.default($167, $168, $169, True, 0.1)
$174: bf16[32, 8, 20, 64] = aten.detach.default($170)
$175: bf16[20, 32, 8, 64] = aten.permute.default($170, [2, 0, 1, 3])
$176: bf16[20, 32, 8, 64] = aten.clone.default($175, memory_format=torch.contiguous_format)
$177: bf16[640, 512] = aten.view.default($176, [640, 512])
$179: bf16[512] = aten._to_copy.default($178, dtype=torch.bfloat16)
$181: bf16[512, 512] = aten._to_copy.default($180, dtype=torch.bfloat16)
$182: bf16[512, 512] = aten.t.default($181)
$183: bf16[640, 512] = aten.addmm.default($179, $177, $182)
$184: bf16[20, 32, 512] = aten.view.default($183, [20, 32, 512])
($185: bf16[20, 32, 512], $186: b8[20, 32, 512]) = aten.native_dropout.default($184, 0.1, True)
$187: f32[20, 32, 512] = aten.add.Tensor($131, $185)
($190: f32[20, 32, 512], $191: f32[20, 32, 1], $192: f32[20, 32, 1]) = aten.native_layer_norm.default($187, [512], $188, $189, 1e-05)
$194: bf16[2048] = aten._to_copy.default($193, dtype=torch.bfloat16)
$196: bf16[2048, 512] = aten._to_copy.default($195, dtype=torch.bfloat16)
$197: bf16[20, 32, 512] = aten._to_copy.default($190, dtype=torch.bfloat16)
$198: bf16[512, 2048] = aten.t.default($196)
$199: bf16[640, 512] = aten.view.default($197, [640, 512])
$200: bf16[640, 2048] = aten.addmm.default($194, $199, $198)
$201: bf16[20, 32, 2048] = aten.view.default($200, [20, 32, 2048])
$202: bf16[20, 32, 2048] = aten.relu.default($201)
$203: bf16[20, 32, 2048] = aten.detach.default($202)
($204: bf16[20, 32, 2048], $205: b8[20, 32, 2048]) = aten.native_dropout.default($202, 0.1, True)
$207: bf16[512] = aten._to_copy.default($206, dtype=torch.bfloat16)
$209: bf16[512, 2048] = aten._to_copy.default($208, dtype=torch.bfloat16)
$210: bf16[2048, 512] = aten.t.default($209)
$211: bf16[640, 2048] = aten.view.default($204, [640, 2048])
$212: bf16[640, 512] = aten.addmm.default($207, $211, $210)
$213: bf16[20, 32, 512] = aten.view.default($212, [20, 32, 512])
($214: bf16[20, 32, 512], $215: b8[20, 32, 512]) = aten.native_dropout.default($213, 0.1, True)
$216: f32[20, 32, 512] = aten.add.Tensor($190, $214)
($219: f32[20, 32, 512], $220: f32[20, 32, 1], $221: f32[20, 32, 1]) = aten.native_layer_norm.default($216, [512], $217, $218, 1e-05)
($224: f32[20, 32, 512], $225: f32[20, 32, 1], $226: f32[20, 32, 1]) = aten.native_layer_norm.default($219, [512], $222, $223, 1e-05)
$227: f32[] = aten.sum.default($224, dtype=torch.float32)
$228: f32[] = aten.ones_like.default($227, pin_memory=False, memory_format=torch.preserve_format)
$229: f32[20, 32, 512] = aten.expand.default($228, [20, 32, 512])
($230: f32[20, 32, 512], $231: f32[512], $232: f32[512]) = aten.native_layer_norm_backward.default($229, $219, [512], $225, $226, $222, $223, [True, True, True])
$233: f32[512] = aten.detach.default($231)
$234: f32[512] = aten.detach.default($233)
$235: f32[512] = aten.detach.default($232)
$236: f32[512] = aten.detach.default($235)
($237: f32[20, 32, 512], $238: f32[512], $239: f32[512]) = aten.native_layer_norm_backward.default($230, $216, [512], $220, $221, $217, $218, [True, True, True])
$240: f32[512] = aten.detach.default($238)
$241: f32[512] = aten.detach.default($240)
$242: f32[512] = aten.detach.default($239)
$243: f32[512] = aten.detach.default($242)
$244: bf16[20, 32, 512] = aten._to_copy.default($237, dtype=torch.bfloat16)
$245: bf16[20, 32, 512] = aten.native_dropout_backward.default($244, $215, 1.1111111111111112)
$246: bf16[640, 512] = aten.view.default($245, [640, 512])
$247: bf16[512, 2048] = aten.t.default($210)
$248: f16[512, 2048] = aten._to_copy.default($247, dtype=torch.float16)
$249: f16[640, 512] = aten._to_copy.default($246, dtype=torch.float16)
$250: f16[640, 2048] = aten.mm.default($249, $248)
$251: bf16[512, 640] = aten.t.default($246)
$252: f16[640, 2048] = aten._to_copy.default($211, dtype=torch.float16)
$253: f16[512, 640] = aten._to_copy.default($251, dtype=torch.float16)
$254: f16[512, 2048] = aten.mm.default($253, $252)
$255: f16[2048, 512] = aten.t.default($254)
$256: f32[1, 512] = aten.sum.dim_IntList($246, [0], True, dtype=torch.float32)
$257: f32[512] = aten.view.default($256, [512])
$258: bf16[512] = aten._to_copy.default($257, dtype=torch.bfloat16)
$259: bf16[640, 2048] = aten._to_copy.default($250, dtype=torch.bfloat16)
$260: bf16[2048, 512] = aten._to_copy.default($255, dtype=torch.bfloat16)
$261: bf16[20, 32, 2048] = aten.view.default($259, [20, 32, 2048])
$262: bf16[512, 2048] = aten.t.default($260)
$263: f32[512, 2048] = aten._to_copy.default($262, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$264: f32[512, 2048] = aten.detach.default($263)
$265: f32[512, 2048] = aten.detach.default($264)
$266: f32[512] = aten._to_copy.default($258, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$267: f32[512] = aten.detach.default($266)
$268: f32[512] = aten.detach.default($267)
$269: bf16[20, 32, 2048] = aten.native_dropout_backward.default($261, $205, 1.1111111111111112)
$270: bf16[20, 32, 2048] = aten.detach.default($203)
$271: bf16[20, 32, 2048] = aten.threshold_backward.default($269, $270, 0)
$272: bf16[640, 2048] = aten.view.default($271, [640, 2048])
$273: bf16[2048, 512] = aten.t.default($198)
$274: f16[2048, 512] = aten._to_copy.default($273, dtype=torch.float16)
$275: f16[640, 2048] = aten._to_copy.default($272, dtype=torch.float16)
$276: f16[640, 512] = aten.mm.default($275, $274)
$277: bf16[2048, 640] = aten.t.default($272)
$278: f16[640, 512] = aten._to_copy.default($199, dtype=torch.float16)
$279: f16[2048, 640] = aten._to_copy.default($277, dtype=torch.float16)
$280: f16[2048, 512] = aten.mm.default($279, $278)
$281: f16[512, 2048] = aten.t.default($280)
$282: f32[1, 2048] = aten.sum.dim_IntList($272, [0], True, dtype=torch.float32)
$283: f32[2048] = aten.view.default($282, [2048])
$284: bf16[2048] = aten._to_copy.default($283, dtype=torch.bfloat16)
$285: bf16[640, 512] = aten._to_copy.default($276, dtype=torch.bfloat16)
$286: bf16[512, 2048] = aten._to_copy.default($281, dtype=torch.bfloat16)
$287: bf16[20, 32, 512] = aten.view.default($285, [20, 32, 512])
$288: bf16[2048, 512] = aten.t.default($286)
$289: f32[20, 32, 512] = aten._to_copy.default($287, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$290: f32[20, 32, 512] = aten.add.Tensor($237, $289)
$291: f32[2048, 512] = aten._to_copy.default($288, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$292: f32[2048, 512] = aten.detach.default($291)
$293: f32[2048, 512] = aten.detach.default($292)
$294: f32[2048] = aten._to_copy.default($284, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$295: f32[2048] = aten.detach.default($294)
$296: f32[2048] = aten.detach.default($295)
($297: f32[20, 32, 512], $298: f32[512], $299: f32[512]) = aten.native_layer_norm_backward.default($290, $187, [512], $191, $192, $188, $189, [True, True, True])
$300: f32[512] = aten.detach.default($298)
$301: f32[512] = aten.detach.default($300)
$302: f32[512] = aten.detach.default($299)
$303: f32[512] = aten.detach.default($302)
$304: bf16[20, 32, 512] = aten._to_copy.default($297, dtype=torch.bfloat16)
$305: bf16[20, 32, 512] = aten.native_dropout_backward.default($304, $186, 1.1111111111111112)
$306: bf16[640, 512] = aten.view.default($305, [640, 512])
$307: bf16[512, 512] = aten.t.default($182)
$308: f16[512, 512] = aten._to_copy.default($307, dtype=torch.float16)
$309: f16[640, 512] = aten._to_copy.default($306, dtype=torch.float16)
$310: f16[640, 512] = aten.mm.default($309, $308)
$311: bf16[512, 640] = aten.t.default($306)
$312: f16[640, 512] = aten._to_copy.default($177, dtype=torch.float16)
$313: f16[512, 640] = aten._to_copy.default($311, dtype=torch.float16)
$314: f16[512, 512] = aten.mm.default($313, $312)
$315: f16[512, 512] = aten.t.default($314)
$316: f32[1, 512] = aten.sum.dim_IntList($306, [0], True, dtype=torch.float32)
$317: f32[512] = aten.view.default($316, [512])
$318: bf16[512] = aten._to_copy.default($317, dtype=torch.bfloat16)
$319: bf16[640, 512] = aten._to_copy.default($310, dtype=torch.bfloat16)
$320: bf16[512, 512] = aten._to_copy.default($315, dtype=torch.bfloat16)
$321: bf16[512, 512] = aten.t.default($320)
$322: f32[512, 512] = aten._to_copy.default($321, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$323: f32[512, 512] = aten.detach.default($322)
$324: f32[512, 512] = aten.detach.default($323)
$325: f32[512] = aten._to_copy.default($318, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$326: f32[512] = aten.detach.default($325)
$327: f32[512] = aten.detach.default($326)
$328: bf16[20, 32, 8, 64] = aten.view.default($319, [20, 32, 8, 64])
$329: bf16[32, 8, 20, 64] = aten.permute.default($328, [1, 2, 0, 3])
$330: bf16[32, 8, 20, 64] = aten.detach.default($174)
($331: bf16[32, 8, 20, 64], $332: bf16[32, 8, 10, 64], $333: bf16[32, 8, 10, 64]) = aten._scaled_dot_product_efficient_attention_backward.default($329, $167, $168, $169, $330, $171, $172, $173, 0.1)
$334: bf16[32, 8, 10, 64] = aten.clone.default($333, memory_format=torch.contiguous_format)
$335: bf16[256, 10, 64] = aten._unsafe_view.default($334, [256, 10, 64])
$336: bf16[32, 8, 10, 64] = aten.clone.default($332, memory_format=torch.contiguous_format)
$337: bf16[256, 10, 64] = aten._unsafe_view.default($336, [256, 10, 64])
$338: bf16[32, 8, 20, 64] = aten.clone.default($331, memory_format=torch.contiguous_format)
$339: bf16[256, 20, 64] = aten._unsafe_view.default($338, [256, 20, 64])
$340: bf16[10, 256, 64] = aten.transpose.int($335, 0, 1)
$341: bf16[10, 256, 64] = aten.clone.default($340, memory_format=torch.contiguous_format)
$342: bf16[10, 32, 512] = aten._unsafe_view.default($341, [10, 32, 512])
$343: bf16[10, 256, 64] = aten.transpose.int($337, 0, 1)
$344: bf16[10, 256, 64] = aten.clone.default($343, memory_format=torch.contiguous_format)
$345: bf16[10, 32, 512] = aten._unsafe_view.default($344, [10, 32, 512])
$346: bf16[20, 256, 64] = aten.transpose.int($339, 0, 1)
$347: bf16[20, 256, 64] = aten.clone.default($346, memory_format=torch.contiguous_format)
$348: bf16[20, 32, 512] = aten._unsafe_view.default($347, [20, 32, 512])
$349: bf16[2, 10, 32, 512] = aten.select_backward.default($342, [2, 10, 32, 512], 0, 1)
$350: bf16[2, 10, 32, 512] = aten.select_backward.default($345, [2, 10, 32, 512], 0, 0)
$351: bf16[2, 10, 32, 512] = aten.add.Tensor($349, $350)
$352: bf16[2, 10, 32, 1, 512] = aten.unsqueeze.default($351, 3)
$353: bf16[1, 10, 32, 2, 512] = aten.transpose.int($352, 0, -2)
$354: bf16[10, 32, 2, 512] = aten.squeeze.dim($353, 0)
$355: bf16[10, 32, 2, 512] = aten.clone.default($354, memory_format=torch.contiguous_format)
$356: bf16[10, 32, 1024] = aten._unsafe_view.default($355, [10, 32, 1024])
$357: bf16[320, 1024] = aten.view.default($356, [320, 1024])
$358: bf16[1024, 512] = aten.t.default($150)
$359: f16[1024, 512] = aten._to_copy.default($358, dtype=torch.float16)
$360: f16[320, 1024] = aten._to_copy.default($357, dtype=torch.float16)
$361: f16[320, 512] = aten.mm.default($360, $359)
$362: bf16[1024, 320] = aten.t.default($357)
$363: f16[320, 512] = aten._to_copy.default($151, dtype=torch.float16)
$364: f16[1024, 320] = aten._to_copy.default($362, dtype=torch.float16)
$365: f16[1024, 512] = aten.mm.default($364, $363)
$366: f16[512, 1024] = aten.t.default($365)
$367: f32[1, 1024] = aten.sum.dim_IntList($357, [0], True, dtype=torch.float32)
$368: f32[1024] = aten.view.default($367, [1024])
$369: bf16[1024] = aten._to_copy.default($368, dtype=torch.bfloat16)
$370: bf16[320, 512] = aten._to_copy.default($361, dtype=torch.bfloat16)
$371: bf16[512, 1024] = aten._to_copy.default($366, dtype=torch.bfloat16)
$372: bf16[10, 32, 512] = aten.view.default($370, [10, 32, 512])
$373: bf16[1024, 512] = aten.t.default($371)
$374: f32[10, 32, 512] = aten._to_copy.default($372, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$375: f32[1024, 512] = aten._to_copy.default($373, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$376: f32[1024] = aten._to_copy.default($369, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$377: bf16[640, 512] = aten.view.default($348, [640, 512])
$378: bf16[512, 512] = aten.t.default($143)
$379: f16[512, 512] = aten._to_copy.default($378, dtype=torch.float16)
$380: f16[640, 512] = aten._to_copy.default($377, dtype=torch.float16)
$381: f16[640, 512] = aten.mm.default($380, $379)
$382: bf16[512, 640] = aten.t.default($377)
$383: f16[640, 512] = aten._to_copy.default($144, dtype=torch.float16)
$384: f16[512, 640] = aten._to_copy.default($382, dtype=torch.float16)
$385: f16[512, 512] = aten.mm.default($384, $383)
$386: f16[512, 512] = aten.t.default($385)
$387: f32[1, 512] = aten.sum.dim_IntList($377, [0], True, dtype=torch.float32)
$388: f32[512] = aten.view.default($387, [512])
$389: bf16[512] = aten._to_copy.default($388, dtype=torch.bfloat16)
$390: bf16[640, 512] = aten._to_copy.default($381, dtype=torch.bfloat16)
$391: bf16[512, 512] = aten._to_copy.default($386, dtype=torch.bfloat16)
$392: bf16[20, 32, 512] = aten.view.default($390, [20, 32, 512])
$393: bf16[512, 512] = aten.t.default($391)
$394: f32[20, 32, 512] = aten._to_copy.default($392, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$395: f32[20, 32, 512] = aten.add.Tensor($297, $394)
$396: f32[512, 512] = aten._to_copy.default($393, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$397: f32[512] = aten._to_copy.default($389, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$398: f32[1536] = aten.cat.default([$397, $376])
$399: f32[1536] = aten.detach.default($398)
$400: f32[1536] = aten.detach.default($399)
$401: f32[1536, 512] = aten.cat.default([$396, $375])
$402: f32[1536, 512] = aten.detach.default($401)
$403: f32[1536, 512] = aten.detach.default($402)
($404: f32[20, 32, 512], $405: f32[512], $406: f32[512]) = aten.native_layer_norm_backward.default($395, $128, [512], $132, $133, $129, $130, [True, True, True])
$407: f32[512] = aten.detach.default($405)
$408: f32[512] = aten.detach.default($407)
$409: f32[512] = aten.detach.default($406)
$410: f32[512] = aten.detach.default($409)
$411: bf16[20, 32, 512] = aten._to_copy.default($404, dtype=torch.bfloat16)
$412: bf16[20, 32, 512] = aten.native_dropout_backward.default($411, $127, 1.1111111111111112)
$413: bf16[640, 512] = aten.view.default($412, [640, 512])
$414: bf16[512, 512] = aten.t.default($123)
$415: f16[512, 512] = aten._to_copy.default($414, dtype=torch.float16)
$416: f16[640, 512] = aten._to_copy.default($413, dtype=torch.float16)
$417: f16[640, 512] = aten.mm.default($416, $415)
$418: bf16[512, 640] = aten.t.default($413)
$419: f16[640, 512] = aten._to_copy.default($118, dtype=torch.float16)
$420: f16[512, 640] = aten._to_copy.default($418, dtype=torch.float16)
$421: f16[512, 512] = aten.mm.default($420, $419)
$422: f16[512, 512] = aten.t.default($421)
$423: f32[1, 512] = aten.sum.dim_IntList($413, [0], True, dtype=torch.float32)
$424: f32[512] = aten.view.default($423, [512])
$425: bf16[512] = aten._to_copy.default($424, dtype=torch.bfloat16)
$426: bf16[640, 512] = aten._to_copy.default($417, dtype=torch.bfloat16)
$427: bf16[512, 512] = aten._to_copy.default($422, dtype=torch.bfloat16)
$428: bf16[512, 512] = aten.t.default($427)
$429: f32[512, 512] = aten._to_copy.default($428, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$430: f32[512, 512] = aten.detach.default($429)
$431: f32[512, 512] = aten.detach.default($430)
$432: f32[512] = aten._to_copy.default($425, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$433: f32[512] = aten.detach.default($432)
$434: f32[512] = aten.detach.default($433)
$435: bf16[20, 32, 8, 64] = aten.view.default($426, [20, 32, 8, 64])
$436: bf16[32, 8, 20, 64] = aten.permute.default($435, [1, 2, 0, 3])
$437: bf16[32, 8, 20, 64] = aten.detach.default($115)
($438: bf16[32, 8, 20, 64], $439: bf16[32, 8, 20, 64], $440: bf16[32, 8, 20, 64]) = aten._scaled_dot_product_efficient_attention_backward.default($436, $108, $109, $110, $437, $112, $113, $114, 0.1)
$441: bf16[32, 8, 20, 64] = aten.clone.default($440, memory_format=torch.contiguous_format)
$442: bf16[256, 20, 64] = aten._unsafe_view.default($441, [256, 20, 64])
$443: bf16[32, 8, 20, 64] = aten.clone.default($439, memory_format=torch.contiguous_format)
$444: bf16[256, 20, 64] = aten._unsafe_view.default($443, [256, 20, 64])
$445: bf16[32, 8, 20, 64] = aten.clone.default($438, memory_format=torch.contiguous_format)
$446: bf16[256, 20, 64] = aten._unsafe_view.default($445, [256, 20, 64])
$447: bf16[20, 256, 64] = aten.transpose.int($442, 0, 1)
$448: bf16[20, 256, 64] = aten.clone.default($447, memory_format=torch.contiguous_format)
$449: bf16[20, 32, 512] = aten._unsafe_view.default($448, [20, 32, 512])
$450: bf16[20, 256, 64] = aten.transpose.int($444, 0, 1)
$451: bf16[20, 256, 64] = aten.clone.default($450, memory_format=torch.contiguous_format)
$452: bf16[20, 32, 512] = aten._unsafe_view.default($451, [20, 32, 512])
$453: bf16[20, 256, 64] = aten.transpose.int($446, 0, 1)
$454: bf16[20, 256, 64] = aten.clone.default($453, memory_format=torch.contiguous_format)
$455: bf16[20, 32, 512] = aten._unsafe_view.default($454, [20, 32, 512])
$456: bf16[3, 20, 32, 512] = aten.select_backward.default($449, [3, 20, 32, 512], 0, 2)
$457: bf16[3, 20, 32, 512] = aten.select_backward.default($452, [3, 20, 32, 512], 0, 1)
$458: bf16[3, 20, 32, 512] = aten.add.Tensor($456, $457)
$459: bf16[3, 20, 32, 512] = aten.select_backward.default($455, [3, 20, 32, 512], 0, 0)
$460: bf16[3, 20, 32, 512] = aten.add.Tensor($458, $459)
$461: bf16[3, 20, 32, 1, 512] = aten.unsqueeze.default($460, 3)
$462: bf16[1, 20, 32, 3, 512] = aten.transpose.int($461, 0, -2)
$463: bf16[20, 32, 3, 512] = aten.squeeze.dim($462, 0)
$464: bf16[20, 32, 3, 512] = aten.clone.default($463, memory_format=torch.contiguous_format)
$465: bf16[20, 32, 1536] = aten._unsafe_view.default($464, [20, 32, 1536])
$466: bf16[640, 1536] = aten.view.default($465, [640, 1536])
$467: bf16[1536, 640] = aten.t.default($466)
$468: f16[640, 512] = aten._to_copy.default($91, dtype=torch.float16)
$469: f16[1536, 640] = aten._to_copy.default($467, dtype=torch.float16)
$470: f16[1536, 512] = aten.mm.default($469, $468)
$471: f16[512, 1536] = aten.t.default($470)
$472: f32[1, 1536] = aten.sum.dim_IntList($466, [0], True, dtype=torch.float32)
$473: f32[1536] = aten.view.default($472, [1536])
$474: bf16[1536] = aten._to_copy.default($473, dtype=torch.bfloat16)
$475: bf16[512, 1536] = aten._to_copy.default($471, dtype=torch.bfloat16)
$476: bf16[1536, 512] = aten.t.default($475)
$477: f32[1536, 512] = aten._to_copy.default($476, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$478: f32[1536, 512] = aten.detach.default($477)
$479: f32[1536, 512] = aten.detach.default($478)
$480: f32[1536] = aten._to_copy.default($474, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$481: f32[1536] = aten.detach.default($480)
$482: f32[1536] = aten.detach.default($481)
($483: f32[10, 32, 512], $484: f32[512], $485: f32[512]) = aten.native_layer_norm_backward.default($374, $76, [512], $82, $83, $79, $80, [True, True, True])
$486: f32[512] = aten.detach.default($484)
$487: f32[512] = aten.detach.default($486)
$488: f32[512] = aten.detach.default($485)
$489: f32[512] = aten.detach.default($488)
($490: f32[10, 32, 512], $491: f32[512], $492: f32[512]) = aten.native_layer_norm_backward.default($483, $73, [512], $77, $78, $74, $75, [True, True, True])
$493: f32[512] = aten.detach.default($491)
$494: f32[512] = aten.detach.default($493)
$495: f32[512] = aten.detach.default($492)
$496: f32[512] = aten.detach.default($495)
$497: bf16[10, 32, 512] = aten._to_copy.default($490, dtype=torch.bfloat16)
$498: bf16[10, 32, 512] = aten.native_dropout_backward.default($497, $72, 1.1111111111111112)
$499: bf16[320, 512] = aten.view.default($498, [320, 512])
$500: bf16[512, 2048] = aten.t.default($67)
$501: f16[512, 2048] = aten._to_copy.default($500, dtype=torch.float16)
$502: f16[320, 512] = aten._to_copy.default($499, dtype=torch.float16)
$503: f16[320, 2048] = aten.mm.default($502, $501)
$504: bf16[512, 320] = aten.t.default($499)
$505: f16[320, 2048] = aten._to_copy.default($68, dtype=torch.float16)
$506: f16[512, 320] = aten._to_copy.default($504, dtype=torch.float16)
$507: f16[512, 2048] = aten.mm.default($506, $505)
$508: f16[2048, 512] = aten.t.default($507)
$509: f32[1, 512] = aten.sum.dim_IntList($499, [0], True, dtype=torch.float32)
$510: f32[512] = aten.view.default($509, [512])
$511: bf16[512] = aten._to_copy.default($510, dtype=torch.bfloat16)
$512: bf16[320, 2048] = aten._to_copy.default($503, dtype=torch.bfloat16)
$513: bf16[2048, 512] = aten._to_copy.default($508, dtype=torch.bfloat16)
$514: bf16[10, 32, 2048] = aten.view.default($512, [10, 32, 2048])
$515: bf16[512, 2048] = aten.t.default($513)
$516: f32[512, 2048] = aten._to_copy.default($515, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$517: f32[512, 2048] = aten.detach.default($516)
$518: f32[512, 2048] = aten.detach.default($517)
$519: f32[512] = aten._to_copy.default($511, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$520: f32[512] = aten.detach.default($519)
$521: f32[512] = aten.detach.default($520)
$522: bf16[10, 32, 2048] = aten.native_dropout_backward.default($514, $62, 1.1111111111111112)
$523: bf16[10, 32, 2048] = aten.detach.default($60)
$524: bf16[10, 32, 2048] = aten.threshold_backward.default($522, $523, 0)
$525: bf16[320, 2048] = aten.view.default($524, [320, 2048])
$526: bf16[2048, 512] = aten.t.default($55)
$527: f16[2048, 512] = aten._to_copy.default($526, dtype=torch.float16)
$528: f16[320, 2048] = aten._to_copy.default($525, dtype=torch.float16)
$529: f16[320, 512] = aten.mm.default($528, $527)
$530: bf16[2048, 320] = aten.t.default($525)
$531: f16[320, 512] = aten._to_copy.default($56, dtype=torch.float16)
$532: f16[2048, 320] = aten._to_copy.default($530, dtype=torch.float16)
$533: f16[2048, 512] = aten.mm.default($532, $531)
$534: f16[512, 2048] = aten.t.default($533)
$535: f32[1, 2048] = aten.sum.dim_IntList($525, [0], True, dtype=torch.float32)
$536: f32[2048] = aten.view.default($535, [2048])
$537: bf16[2048] = aten._to_copy.default($536, dtype=torch.bfloat16)
$538: bf16[320, 512] = aten._to_copy.default($529, dtype=torch.bfloat16)
$539: bf16[512, 2048] = aten._to_copy.default($534, dtype=torch.bfloat16)
$540: bf16[10, 32, 512] = aten.view.default($538, [10, 32, 512])
$541: bf16[2048, 512] = aten.t.default($539)
$542: f32[10, 32, 512] = aten._to_copy.default($540, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$543: f32[10, 32, 512] = aten.add.Tensor($490, $542)
$544: f32[2048, 512] = aten._to_copy.default($541, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$545: f32[2048, 512] = aten.detach.default($544)
$546: f32[2048, 512] = aten.detach.default($545)
$547: f32[2048] = aten._to_copy.default($537, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$548: f32[2048] = aten.detach.default($547)
$549: f32[2048] = aten.detach.default($548)
($550: f32[10, 32, 512], $551: f32[512], $552: f32[512]) = aten.native_layer_norm_backward.default($543, $44, [512], $48, $49, $45, $46, [True, True, True])
$553: f32[512] = aten.detach.default($551)
$554: f32[512] = aten.detach.default($553)
$555: f32[512] = aten.detach.default($552)
$556: f32[512] = aten.detach.default($555)
$557: bf16[10, 32, 512] = aten._to_copy.default($550, dtype=torch.bfloat16)
$558: bf16[10, 32, 512] = aten.native_dropout_backward.default($557, $43, 1.1111111111111112)
$559: bf16[320, 512] = aten.view.default($558, [320, 512])
$560: bf16[512, 512] = aten.t.default($39)
$561: f16[512, 512] = aten._to_copy.default($560, dtype=torch.float16)
$562: f16[320, 512] = aten._to_copy.default($559, dtype=torch.float16)
$563: f16[320, 512] = aten.mm.default($562, $561)
$564: bf16[512, 320] = aten.t.default($559)
$565: f16[320, 512] = aten._to_copy.default($34, dtype=torch.float16)
$566: f16[512, 320] = aten._to_copy.default($564, dtype=torch.float16)
$567: f16[512, 512] = aten.mm.default($566, $565)
$568: f16[512, 512] = aten.t.default($567)
$569: f32[1, 512] = aten.sum.dim_IntList($559, [0], True, dtype=torch.float32)
$570: f32[512] = aten.view.default($569, [512])
$571: bf16[512] = aten._to_copy.default($570, dtype=torch.bfloat16)
$572: bf16[320, 512] = aten._to_copy.default($563, dtype=torch.bfloat16)
$573: bf16[512, 512] = aten._to_copy.default($568, dtype=torch.bfloat16)
$574: bf16[512, 512] = aten.t.default($573)
$575: f32[512, 512] = aten._to_copy.default($574, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$576: f32[512, 512] = aten.detach.default($575)
$577: f32[512, 512] = aten.detach.default($576)
$578: f32[512] = aten._to_copy.default($571, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$579: f32[512] = aten.detach.default($578)
$580: f32[512] = aten.detach.default($579)
$581: bf16[10, 32, 8, 64] = aten.view.default($572, [10, 32, 8, 64])
$582: bf16[32, 8, 10, 64] = aten.permute.default($581, [1, 2, 0, 3])
$583: bf16[32, 8, 10, 64] = aten.detach.default($31)
($584: bf16[32, 8, 10, 64], $585: bf16[32, 8, 10, 64], $586: bf16[32, 8, 10, 64]) = aten._scaled_dot_product_efficient_attention_backward.default($582, $24, $25, $26, $583, $28, $29, $30, 0.1)
$587: bf16[32, 8, 10, 64] = aten.clone.default($586, memory_format=torch.contiguous_format)
$588: bf16[256, 10, 64] = aten._unsafe_view.default($587, [256, 10, 64])
$589: bf16[32, 8, 10, 64] = aten.clone.default($585, memory_format=torch.contiguous_format)
$590: bf16[256, 10, 64] = aten._unsafe_view.default($589, [256, 10, 64])
$591: bf16[32, 8, 10, 64] = aten.clone.default($584, memory_format=torch.contiguous_format)
$592: bf16[256, 10, 64] = aten._unsafe_view.default($591, [256, 10, 64])
$593: bf16[10, 256, 64] = aten.transpose.int($588, 0, 1)
$594: bf16[10, 256, 64] = aten.clone.default($593, memory_format=torch.contiguous_format)
$595: bf16[10, 32, 512] = aten._unsafe_view.default($594, [10, 32, 512])
$596: bf16[10, 256, 64] = aten.transpose.int($590, 0, 1)
$597: bf16[10, 256, 64] = aten.clone.default($596, memory_format=torch.contiguous_format)
$598: bf16[10, 32, 512] = aten._unsafe_view.default($597, [10, 32, 512])
$599: bf16[10, 256, 64] = aten.transpose.int($592, 0, 1)
$600: bf16[10, 256, 64] = aten.clone.default($599, memory_format=torch.contiguous_format)
$601: bf16[10, 32, 512] = aten._unsafe_view.default($600, [10, 32, 512])
$602: bf16[3, 10, 32, 512] = aten.select_backward.default($595, [3, 10, 32, 512], 0, 2)
$603: bf16[3, 10, 32, 512] = aten.select_backward.default($598, [3, 10, 32, 512], 0, 1)
$604: bf16[3, 10, 32, 512] = aten.add.Tensor($602, $603)
$605: bf16[3, 10, 32, 512] = aten.select_backward.default($601, [3, 10, 32, 512], 0, 0)
$606: bf16[3, 10, 32, 512] = aten.add.Tensor($604, $605)
$607: bf16[3, 10, 32, 1, 512] = aten.unsqueeze.default($606, 3)
$608: bf16[1, 10, 32, 3, 512] = aten.transpose.int($607, 0, -2)
$609: bf16[10, 32, 3, 512] = aten.squeeze.dim($608, 0)
$610: bf16[10, 32, 3, 512] = aten.clone.default($609, memory_format=torch.contiguous_format)
$611: bf16[10, 32, 1536] = aten._unsafe_view.default($610, [10, 32, 1536])
$612: bf16[320, 1536] = aten.view.default($611, [320, 1536])
$613: bf16[1536, 320] = aten.t.default($612)
$614: f16[320, 512] = aten._to_copy.default($7, dtype=torch.float16)
$615: f16[1536, 320] = aten._to_copy.default($613, dtype=torch.float16)
$616: f16[1536, 512] = aten.mm.default($615, $614)
$617: f16[512, 1536] = aten.t.default($616)
$618: f32[1, 1536] = aten.sum.dim_IntList($612, [0], True, dtype=torch.float32)
$619: f32[1536] = aten.view.default($618, [1536])
$620: bf16[1536] = aten._to_copy.default($619, dtype=torch.bfloat16)
$621: bf16[512, 1536] = aten._to_copy.default($617, dtype=torch.bfloat16)
$622: bf16[1536, 512] = aten.t.default($621)
$623: f32[1536, 512] = aten._to_copy.default($622, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$624: f32[1536, 512] = aten.detach.default($623)
$625: f32[1536, 512] = aten.detach.default($624)
$626: f32[1536] = aten._to_copy.default($620, dtype=torch.float32, layout=torch.strided, device=cuda:0)
$627: f32[1536] = aten.detach.default($626)
$628: f32[1536] = aten.detach.default($627)
```
</details>
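The same check can also be done programmatically instead of by searching the dump by hand; a hedged snippet, assuming the `LoggingMode` instance from the repro code (called `logging_ctx` in `main()`) has already collected its logs:

```python
# Log lines look like "$254: f16[512, 2048] = aten.mm.default($253, $252)",
# so the dtype of the mm output is the abbreviation right after the colon.
mm_lines = [line for line in logging_ctx.logs if "aten.mm.default" in line]
fp16_mm = [line for line in mm_lines if ": f16[" in line.split(" = ")[0]]
print(f"{len(fp16_mm)} of {len(mm_lines)} aten.mm calls produced float16 outputs")
```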
Version: main
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 0 |