Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
1,301 | 107,945 |
Torch 1.13 Onnx Scope constant name not correct!
|
module: onnx, triaged
|
### 🐛 Describe the bug
I'm using PyTorch 1.13 to convert the Stable Diffusion UNet to an ONNX model, and I noticed that torch 1.13 preserves the original module names in the ONNX graph. However, the constant names do not always correspond to the `state_dict` weight names in torch. For example, some weight names in the torch `state_dict` are:
```python
down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight
down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias
```
In ONNX there is no constant named down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, but there is a constant named down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias. The nodes corresponding to down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj in ONNX are:
```python
/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Cast (Cast)
Inputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/norm3/LayerNormalization_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 640], dtype=float32)
]
Outputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Cast_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 640], dtype=float16)
]
Attributes: OrderedDict([('to', 10)])
/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/MatMul (MatMul)
Inputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Cast_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 640], dtype=float16)
Constant (onnx::MatMul_9242): (shape=[640, 5120], dtype=float16)
]
Outputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/MatMul_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 5120], dtype=float16)
]
/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Add (Add)
Inputs: [
Constant (down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias): (shape=[5120], dtype=float16)
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/MatMul_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 5120], dtype=float16)
]
Outputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Add_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 5120], dtype=float16)
]
/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Slice (Slice)
Inputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Add_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 5120], dtype=float16)
Constant (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Constant_1_output_0): (shape=[1], dtype=int64)
Constant (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Mul_output_0): (shape=[1], dtype=int64)
Constant (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Constant_output_0): (shape=[1], dtype=int64)
]
Outputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Slice_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 2560], dtype=float16)
]
/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Slice_1 (Slice)
Inputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Add_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 5120], dtype=float16)
Constant (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Mul_output_0): (shape=[1], dtype=int64)
Constant (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Mul_1_output_0): (shape=[1], dtype=int64)
Constant (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Constant_output_0): (shape=[1], dtype=int64)
]
Outputs: [
Variable (/down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/Slice_1_output_0): (shape=['2B', '(floor(H/2 - 1/2) + 1)*(floor(W/2 - 1/2) + 1)', 2560], dtype=float16)
]
```
So, shouldn't the constant input of the /down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/MatMul (MatMul) node be named down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight? It is actually named onnx::MatMul_9242, while the bias name is correct in the /down_blocks.1/attentions.1/transformer_blocks.0/ff/net.0/proj/Add (Add) node. Why does this happen?
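Not part of the original report: my guess is that the exporter folds the transposed `nn.Linear` weight into a fresh constant for `MatMul` (hence the generated `onnx::MatMul_9242` name), while the bias is consumed unchanged and keeps its `state_dict` name. A minimal sketch for comparing the two name sets; the file paths are assumptions, not taken from the report:
```python
# Sketch only: "unet.onnx" and "unet_state_dict.pt" are hypothetical paths.
import onnx
import torch

onnx_model = onnx.load("unet.onnx")
onnx_names = {init.name for init in onnx_model.graph.initializer}

state_dict = torch.load("unet_state_dict.pt")
missing = [k for k in state_dict if k not in onnx_names]
renamed = sorted(n for n in onnx_names if n.startswith("onnx::MatMul"))

print(f"{len(missing)} state_dict keys have no same-named ONNX initializer")
print("examples of renamed MatMul constants:", renamed[:5])
```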
### Versions
PyTorch version 1.13.1
| 0 |
1,302 | 107,929 |
Export to onnx error: RuntimeError: ArrayRef: invalid index Index = 3; Length = 3
|
module: onnx, triaged
|
### 🐛 Describe the bug
When I try to export the model to ONNX, I get the following error:
RuntimeError: ArrayRef: invalid index Index = 3; Length = 3
``` bash
_C._jit_pass_onnx_graph_shape_type_inference(
Traceback (most recent call last):
File "export.py", line 41, in <module>
torch.onnx.export(model, # model being run
File "/home/leo/storage/sharedFolderVirtualbox/experiment/speaker_diarization-embeddings/embeddings/lib/python3.8/site-packages/torch/onnx/utils.py", line 516, in export
_export(
File "/home/leo/storage/sharedFolderVirtualbox/experiment/speaker_diarization-embeddings/embeddings/lib/python3.8/site-packages/torch/onnx/utils.py", line 1582, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/leo/storage/sharedFolderVirtualbox/experiment/speaker_diarization-embeddings/embeddings/lib/python3.8/site-packages/torch/onnx/utils.py", line 1182, in _model_to_graph
_C._jit_pass_onnx_assign_output_shape(
RuntimeError: ArrayRef: invalid index Index = 3; Length = 3
```
The code is as follows:
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
import torch
model = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
signal, fs = torchaudio.load('shortTeaching2.wav')
print( signal )
print( signal.shape )
#exit( 0 )
# Print output shape
embeddings = model.encode_batch(signal)
#print( embeddings )
#print( embeddings.shape )
#exit( 0 )
# Create dummy input
symbolic_names = {0: "batch_size", 1: "max_seq_len"}
x = torch.randn( 1, 1920000 )
# Export the model
torch.onnx.export(model, # model being run
x, # model input (or a tuple for multiple inputs)
"embeddings.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
verbose=False,
input_names = ['signal'], # the model's input names
output_names = ['embeddings'], # the model's output names
dynamic_axes={'signal' : symbolic_names, # variable length axes
})
```
When I stepped into the Python code with pdb, I found the error is raised from this function:
_C._jit_pass_onnx_assign_output_shape
with these local variables:
```bash
(Pdb) p output_wrapped
(tensor([[ 0.0677, 0.0583, 0.1144, ..., -0.1774, -0.0198, -0.1925]]), tensor([0.2038]), tensor([3869]), ['id04606'])
(Pdb) p output_tensors
[tensor([[ 0.0677, 0.0583, 0.1144, ..., -0.1774, -0.0198, -0.1925]]), tensor([0.2038]), tensor([3869])]
(Pdb) p out_desc
<torch.IODescriptor object at 0x7fecd75d08b0>
```
Then I debugged into the PyTorch C++ code with gdb and checked `desc` and `tensors`, which are passed into ONNXAssignOutputShape:
```bash
(gdb) p desc
$24 = (const torch::jit::python::IODescriptor &) @0x74b81c0: {structure = "(vvv[s])",
strings = std::vector of length 1, capacity 1 = {"id04606"},
metadata = std::vector of length 3, capacity 3 = {{sizes = std::vector of length 2, capacity 2 = {
1, 7205}, type = c10::ScalarType::Float, device = {type_ = c10::DeviceType::CPU,
index_ = -1 '\377'}, requires_grad = false}, {
sizes = std::vector of length 1, capacity 1 = {1}, type = c10::ScalarType::Float, device = {
type_ = c10::DeviceType::CPU, index_ = -1 '\377'}, requires_grad = false}, {
sizes = std::vector of length 1, capacity 1 = {1}, type = c10::ScalarType::Long, device = {
type_ = c10::DeviceType::CPU, index_ = -1 '\377'}, requires_grad = false}},
grad_enabled = true}
(gdb) p tensors
$23 = std::vector of length 3, capacity 3 = {{<at::TensorBase> = {impl_ = {
target_ = 0x12ffadf0}}, <No data fields>}, {<at::TensorBase> = {impl_ = {
target_ = 0x12ff5540}}, <No data fields>}, {<at::TensorBase> = {impl_ = {
target_ = 0x12ffdac0}}, <No data fields>}}
```
After `unflatten`:
``` bash
PyObject* py_obj = unflatten(outputs, desc);
(gdb) p PyTuple_Size(py_obj)
$39 = 4
```
so it later references the graph outputs from index 0 through 3 (0, 1, 2, 3).
At pytorch/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:2295, graph->outputs() contains only 3 elements, but the code tries to access a 4th element with outputs_index=3, which makes the TORCH_CHECK fail.
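Not from the original report: given that the Python-level outputs are a 4-tuple including a string ('id04606') while the graph has only 3 tensor outputs, a possible workaround sketch is to export a small wrapper that returns tensors only. That `encode_batch` returns just the embedding tensor is an assumption here.
```python
# Possible workaround sketch (my suggestion): export a wrapper whose outputs
# are tensors only, so the flattened outputs match the graph outputs.
# `model`, `x`, and `symbolic_names` are as defined in the script above.
import torch

class EmbeddingWrapper(torch.nn.Module):
    def __init__(self, classifier):
        super().__init__()
        self.classifier = classifier

    def forward(self, signal):
        # Assumed to return only the embedding tensor, no strings/scores.
        return self.classifier.encode_batch(signal)

wrapped = EmbeddingWrapper(model)
torch.onnx.export(wrapped, x, "embeddings.onnx",
                  opset_version=17,
                  input_names=["signal"], output_names=["embeddings"],
                  dynamic_axes={"signal": symbolic_names})
```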
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git134d415
Is debug build: True
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.8.17 (default, Jul 5 2023, 21:04:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER
Nvidia driver version: 535.86.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-9900KF CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 13
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.1.0a0+git134d415
[pip3] torchaudio==2.1.0a0+47eaab4
[pip3] triton==2.0.0
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 anaconda
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-include 2023.1.0 h06a4308_46343
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.1.0a0+git134d415 dev_0 <develop>
[conda] torchaudio 2.1.0a0+47eaab4 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 0 |
1,303 | 107,925 |
DISABLED test_conv_weight_layout_convert_cuda (__main__.FreezingCudaTests)
|
module: rocm, triaged, module: flaky-tests, skipped, module: inductor
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_conv_weight_layout_convert_cuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16195333580).
Over the past 72 hours, it has flakily failed in 6 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_conv_weight_layout_convert_cuda`
Test file path: `inductor/test_inductor_freezing.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,304 | 107,922 |
onnx export error
|
module: onnx, triaged
|
### 🐛 Describe the bug
code:
```python
import torch
from torch import nn
import torchaudio
class DataCov(nn.Module):
def __init__(self):
super(DataCov, self).__init__()
self.transform = nn.Sequential(
torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
)
def forward(self, x1):
x1 = self.transform(x1)
return x1
def export():
model = DataCov().to(torch.float32)
model.eval()
input = torch.rand((1, 1, 12 * 48000), dtype=torch.float32)
torch.onnx.dynamo_export(model, (input), "DataCov.onnx", verbose=False,
input_names=['input1'], output_names=['output1'], opset_version=18)
if __name__ == '__main__':
export()
```
linux error:
```
Traceback (most recent call last):
File "/root/autodl-tmp/./main.py", line 27, in <module>
export()
File "/root/autodl-tmp/./main.py", line 22, in export
torch.onnx.dynamo_export(model, (input), "DataCov.onnx", verbose=False,
^^^^^^^^^^
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/torch/__init__.py", line 1827, in __getattr__
return importlib.import_module(f".{name}", __name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/test_onnx/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/torch/onnx/__init__.py", line 48, in <module>
from ._internal.exporter import ( # usort:skip. needs to be last to avoid circular import
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/torch/onnx/_internal/exporter.py", line 65, in <module>
from torch.onnx._internal.fx import diagnostics
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/torch/onnx/_internal/fx/diagnostics.py", line 10, in <module>
import onnxscript # type: ignore[import]
^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/onnxscript/__init__.py", line 7, in <module>
from .backend.onnx_export import export2python as proto2python
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/onnxscript/backend/onnx_export.py", line 14, in <module>
import onnxscript.onnx_types
File "/root/miniconda3/envs/test_onnx/lib/python3.11/site-packages/onnxscript/onnx_types.py", line 177, in <module>
class FLOAT8E4M3FN(TensorType, dtype=onnx.TensorProto.FLOAT8E4M3FN):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: FLOAT8E4M3FN
```
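The Linux failure happens while importing `onnxscript`, when it asks `onnx` for `TensorProto.FLOAT8E4M3FN`; that enum only exists in newer `onnx` releases (1.14+, as far as I know), so the installed `onnx` is likely older than what the installed `onnxscript` expects. A quick check (my own sketch, not from the report):
```python
import onnx

print(onnx.__version__)
# Raises AttributeError on onnx releases that predate float8 support.
print(onnx.TensorProto.FLOAT8E4M3FN)
```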
windows error:
```
C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\onnx\_internal\exporter.py:130: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
Traceback (most recent call last):
File "C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\onnx\_internal\exporter.py", line 1091, in dynamo_export
).export()
^^^^^^^^
File "C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\onnx\_internal\exporter.py", line 892, in export
graph_module = self.options.fx_tracer.generate_fx(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\onnx\_internal\fx\dynamo_graph_extractor.py", line 199, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\_dynamo\eval_frame.py", line 1018, in inner
check_if_dynamo_supported()
File "C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\_dynamo\eval_frame.py", line 533, in check_if_dynamo_supported
raise RuntimeError("Windows not yet supported for torch.compile")
RuntimeError: Windows not yet supported for torch.compile
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\work\pytorch_to_onnx\main.py", line 27, in <module>
export()
File "C:\work\pytorch_to_onnx\main.py", line 22, in export
torch.onnx.dynamo_export(model, (input), "DataCov.onnx", verbose=False,
File "C:\Users\dell\miniconda3\envs\onnx_export\Lib\site-packages\torch\onnx\_internal\exporter.py", line 1102, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at {sarif_report_path}. SARIF is a standard format for the output of static analysis tools. SARIF log can be loaded in VS Code SARIF viewer extension, or SARIF web viewer(https://microsoft.github.io/sarif-web-component/).Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
### Versions
linux versions:
Collecting environment information...
PyTorch version: 2.1.0.dev20230824
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 2901.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0.dev20230824
[pip3] torchaudio==2.1.0.dev20230824
[pip3] torchvision==0.16.0.dev20230824
[conda] blas 1.0 mkl https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] brotlipy 0.7.0 py311h9bf148f_1002 pytorch-nightly
[conda] cffi 1.15.1 py311h9bf148f_3 pytorch-nightly
[conda] cpuonly 2.0 0 pytorch-nightly
[conda] cryptography 38.0.4 py311h46ebde7_0 pytorch-nightly
[conda] filelock 3.9.0 py311_0 pytorch-nightly
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch-nightly
[conda] mkl 2021.4.0 h06a4308_640 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py311h9bf148f_0 pytorch-nightly
[conda] mkl_fft 1.3.1 py311hc796f24_0 pytorch-nightly
[conda] mkl_random 1.2.2 py311hbba84a0_0 pytorch-nightly
[conda] mpmath 1.2.1 py311_0 pytorch-nightly
[conda] numpy 1.24.3 py311hc206e33_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] numpy-base 1.24.3 py311hfd5febd_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] pillow 9.3.0 py311h3fd9d12_2 pytorch-nightly
[conda] pysocks 1.7.1 py311_0 pytorch-nightly
[conda] pytorch 2.1.0.dev20230824 py3.11_cpu_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cpu pytorch-nightly
[conda] requests 2.28.1 py311_0 pytorch-nightly
[conda] torchaudio 2.1.0.dev20230824 py311_cpu pytorch-nightly
[conda] torchvision 0.16.0.dev20230824 py311_cpu pytorch-nightly
[conda] urllib3 1.26.14 py311_0 pytorch-nightly
onnxscript-preview in /root/miniconda3/envs/test_onnx/lib/python3.11/site-packages (0.1.0.dev20230814)
windows version:
Collecting environment information...
PyTorch version: 2.1.0.dev20230824
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 专业版
GCC version: (Rev5, Built by MSYS2 project) 13.1.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 17:59:51) [MSC v.1935 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.1.0.dev20230824
[pip3] torchaudio==2.1.0.dev20230824
[pip3] torchvision==0.16.0.dev20230824
[conda] blas 1.0 mkl defaults
[conda] cpuonly 2.0 0 pytorch-nightly
[conda] mkl 2023.1.0 h6b88ed4_46357 defaults
[conda] mkl-service 2.4.0 py311h2bbff1b_1 defaults
[conda] mkl_fft 1.3.6 py311hf62ec03_1 defaults
[conda] mkl_random 1.2.2 py311hf62ec03_1 defaults
[conda] mpmath 1.2.1 py311_0 pytorch-nightly
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpy-base 1.25.2 py311hd01c5d8_0 defaults
[conda] pytorch 2.1.0.dev20230824 py3.11_cpu_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cpu pytorch-nightly
[conda] torchaudio 2.1.0.dev20230824 py311_cpu pytorch-nightly
[conda] torchvision 0.16.0.dev20230824 py311_cpu pytorch-nightly
onnxscript-preview in c:\users\dell\miniconda3\envs\onnx_export\lib\site-packages (0.1.0.dev20230814)
| 3 |
1,305 | 107,914 |
[BUG] "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'
|
module: nn, triaged, module: bfloat16, actionable
|
### 🐛 Describe the bug
Attempting to apply `weight_norm` to a bfloat16 layer in a CUDA environment results in an error like
```RuntimeError: "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'```
There is no error when applying `weight_norm` on CPU; it only fails in a CUDA environment.
# error
```
"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'
File "[user-dir]/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py", line 25, in compute_weight
return _weight_norm(v, g, self.dim)
File "[user-dir]/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py", line 50, in apply
setattr(module, name, fn.compute_weight(module))
File "[user-dir]/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py", line 109, in weight_norm
WeightNorm.apply(module, name, dim)
File "/home/jsb193/workspace/github/llm/LLM42/train/train_dpo.py", line 48, in <module>
nn.utils.weight_norm(module, "weight")
RuntimeError: "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'
```
# example
```python
import torch
import torch.nn as nn
module = torch.nn.Linear(20, 40)
module = module.to(torch.bfloat16)
module = module.to("cuda")
nn.utils.weight_norm(module, "weight")
```
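Not from the report: one possible workaround sketch that reparametrizes the weight with ordinary tensor ops via `torch.nn.utils.parametrize`, so no bfloat16-specific fused kernel is needed. `ManualWeightNorm` and its dim handling are my own assumptions, not PyTorch API.
```python
import torch
import torch.nn as nn
from torch.nn.utils import parametrize

class ManualWeightNorm(nn.Module):
    """Reparametrize weight as g * v / ||v|| along `dim`, using plain ops."""
    def __init__(self, weight, dim=0):
        super().__init__()
        self.dim = dim
        reduce_dims = [d for d in range(weight.dim()) if d != dim]
        # g starts at the current per-slice norm, like weight_norm does.
        self.g = nn.Parameter(weight.norm(2, dim=reduce_dims, keepdim=True).detach())

    def forward(self, v):
        reduce_dims = [d for d in range(v.dim()) if d != self.dim]
        return self.g * v / v.norm(2, dim=reduce_dims, keepdim=True)

module = nn.Linear(20, 40).to(torch.bfloat16).to("cuda")
parametrize.register_parametrization(module, "weight", ManualWeightNorm(module.weight))
print(module(torch.randn(4, 20, dtype=torch.bfloat16, device="cuda")).shape)
```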
### Versions
pytorch: 2.0.1+cu117
python: 3.10.11
os: ubuntu-18.04
gpu: GeForce3090
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
1,306 | 107,909 |
Provide a `reset_parameters()` method for MultiheadAttention to support FSDP meta device initialization
|
module: nn, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
The [MultiheadAttention](https://github.com/pytorch/pytorch/blob/2fbe6ef2f866fe6ce42a950f2053f2f6b4bdab90/torch/nn/modules/activation.py) layer has a protected [`_reset_parameters()`](https://github.com/pytorch/pytorch/blob/2fbe6ef2f866fe6ce42a950f2053f2f6b4bdab90/torch/nn/modules/activation.py#L1020) method that is responsible for initializing the params. In light of the newly introduced approach to init modules on the meta device and materializing them with FSDP (https://github.com/pytorch/pytorch/issues/104187), it would be great if the `MultiheadAttention` module could expose the `_reset_parameters` as public so FSDP can call it internally.
### Alternatives
Currently, the user has to modify the source code or patch it like so:
```py
MultiheadAttention.reset_parameters = MultiheadAttention._reset_parameters
```
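A rough sketch of the intended flow (my reading of the meta-device init workflow, not an official recipe; the manual materialization below only approximates what FSDP would do per module once `reset_parameters()` is public):
```python
import torch
from torch import nn

# Same patch as above, applied so the method is public.
nn.MultiheadAttention.reset_parameters = nn.MultiheadAttention._reset_parameters

with torch.device("meta"):
    mha = nn.MultiheadAttention(embed_dim=512, num_heads=8)

mha.to_empty(device="cpu")   # allocate real (uninitialized) storage
mha.reset_parameters()       # re-initialize via the now-public method
print(mha.in_proj_weight.device, tuple(mha.in_proj_weight.shape))
```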
### Additional context
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
I'd be happy to send a PR if there are no objections.
| 3 |
1,307 | 107,908 |
Rzou/out dtype
|
module: dynamo, ciflow/inductor
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 2 |
1,308 | 107,903 |
Support values backward on sparse CSR, CSC, BSR, and BSC tensors
|
module: autograd, open source, release notes: sparse, topic: new features
|
Fixes https://github.com/pytorch/pytorch/issues/107286
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107903
* #107150
* #107777
* #107638
cc @ezyang @albanD @zou3519 @gqchen @nikitaved @soulitzer @Lezcano @Varal7
| 1 |
1,309 | 107,898 |
[FakeTensor] fake tensor mode not working with inference mode on Tensor.item()
|
triaged, oncall: pt2, module: fakeTensor, module: dynamic shapes
|
### 🐛 Describe the bug
As titled. My repro:
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
with FakeTensorMode():
with torch.inference_mode():
torch.tensor(32.).item()
```
stacktrace:
```
Traceback (most recent call last):
File "repro.py", line 21, in <module>
torch.tensor(32.).item()
File "site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "site-packages/torch/_subclasses/fake_tensor.py", line 1233, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "site-packages/torch/_subclasses/fake_tensor.py", line 1437, in dispatch
r = func.decompose(*args, **kwargs)
File "site-packages/torch/_ops.py", line 467, in decompose
return self._op_dk(dk, *args, **kwargs)
File "site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "site-packages/torch/_subclasses/fake_tensor.py", line 1233, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "site-packages/torch/_subclasses/fake_tensor.py", line 1470, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "site-packages/torch/_subclasses/fake_tensor.py", line 501, in local_scalar_dense
raise DataDependentOutputException(func)
torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default
```
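For comparison (my own addition, not in the report): the same call without `inference_mode`, which, if I read the title right, succeeds because `torch.tensor(32.)` is tracked as a constant by `FakeTensorMode`, so `.item()` can be answered from the constant instead of raising.
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    print(torch.tensor(32.).item())
```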
### Versions
PyTorch version: 2.1.0.dev20230817+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4950.1948
CPU min MHz: 2200.0000
BogoMIPS: 7399.84
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.14.2
[pip3] ema-pytorch==0.2.3
[pip3] flake8==6.0.0
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-triton==2.1.0+9e3e10c5ed
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.5
[pip3] torch==2.1.0.dev20230817+cpu
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torch-struct==0.5
[pip3] torchaudio==2.1.0.dev20230817+cpu
[pip3] torchmetrics==1.0.1
[pip3] torchrec-nightly==2023.7.17
[pip3] torchvision==0.16.0.dev20230817+cpu
[pip3] vector-quantize-pytorch==1.6.30
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.14.2 pypi_0 pypi
[conda] ema-pytorch 0.2.3 pypi_0 pypi
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] numpy 1.21.2 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-triton 2.1.0+9e3e10c5ed pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.5 pypi_0 pypi
[conda] torch 2.1.0.dev20230817+cpu pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230817+cpu pypi_0 pypi
[conda] torchmetrics 1.0.1 pypi_0 pypi
[conda] torchrec-nightly 2023.7.17 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230817+cpu pypi_0 pypi
[conda] vector-quantize-pytorch 1.6.30 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 9 |
1,310 | 107,897 |
wip add a test
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107897
* #107834
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 1 |
1,311 | 107,896 |
[feature request] [ux proposal] Min-max linear normalization to be supported in F.normalize (or in a new function)
|
module: nn, triaged, topic: new features
|
### 🚀 The feature, motivation and pitch
Min-max normalization is useful as one of several common image / arbitrary tensor normalizations.
E.g. it is supported in OpenCV: https://docs.opencv.org/4.x/d2/de8/group__core__array.html#ggad12cefbcb5291cf958a85b4b67b6149fa9f0c1c342a18114d47b516a88e29822e in the cv2.normalize function: https://docs.opencv.org/4.x/d2/de8/group__core__array.html#ga87eef7ee3970f86906d69a92cbf064bd
Also related: NumPy provides the https://numpy.org/doc/stable/reference/generated/numpy.ptp.html function, which computes the difference `amax() - amin()`.
My implementation is simple (although it would be best to default to `eps = torch.finfo(x.dtype).min` to accommodate float16 inputs), but it would be nice to have it supported by the core convenience function F.normalize:
```python
def normalize_min_max_(x, dim, eps = 1e-12):
# workaround for https://github.com/pytorch/pytorch/issues/61582
amin, amax = x.amin(dim = dim, keepdim = True), x.amax(dim = dim, keepdim = True)
return x.sub_(amin).div_(amax.sub_(amin).add_(eps))
```
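Hypothetical usage of the helper above (my own example): rescale each image in a batch to [0, 1] over its spatial dimensions. Note the helper works in place, hence the `.clone()`.
```python
import torch

x = torch.randn(8, 3, 64, 64)
y = normalize_min_max_(x.clone(), dim=(-2, -1))
# Per-channel minima are ~0 and maxima ~1 (slightly below 1 due to eps).
print(y.amin(dim=(-2, -1)).min().item(), y.amax(dim=(-2, -1)).max().item())
```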
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
1,312 | 107,894 |
Fail to build C++ test_aot_inductor
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
When building `test_aot_inductor` with `BUILD_AOT_INDUCTOR_TEST=1 python setup.py build`, the build fails with the following error (discovered by @muchulee8)
```
[4826/7070] Generating libaot_inductor_output.so
FAILED: test_aot_inductor/libaot_inductor_output.so /home/huydo/github/pytorch/build/test_aot_inductor/libaot_inductor_output.so
cd /home/huydo/github/pytorch/build/test_aot_inductor && python /home/huydo/github/pytorch/test/cpp/aot_inductor/test.py
/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:135: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
Traceback (most recent call last):
File "/home/huydo/github/pytorch/test/cpp/aot_inductor/test.py", line 25, in <module>
lib_path, module = torch._export.aot_compile(Net().cuda(), (x, y))
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_export/__init__.py", line 483, in aot_compile
so_path = torch._inductor.aot_compile(ep.graph_module, list(all_args), options)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/__init__.py", line 48, in aot_compile
result = compile_fx_aot(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 865, in compile_fx_aot
return compile_fx(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 959, in compile_fx
return compile_fx(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 977, in compile_fx
return compile_fx(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1146, in compile_fx
return aot_autograd(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3891, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3429, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2212, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2392, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1573, in aot_dispatch_base
compiled_fw = compiler(fw_module, flat_args)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1088, in fw_compiler_base
return inner_compile(
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 244, in wrapper
compiled(real_inputs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 373, in __call__
return self.get_current_callable()(inputs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 400, in _run_from_cache
return compiled_graph.compiled_artifact(inputs)
File "/tmp/torchinductor_huydo/xt/cxti37vajc246zxrtlz254qxq2su6l34jgktkil7nadig6e4z263.py", line 72, in call
triton_poi_fused_add_cos_sin_0.run(arg2_1, arg3_1, buf0, 2048, grid=grid(2048), stream=stream0)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/triton_heuristics.py", line 383, in run
self.autotune_to_one_config(*args, grid=grid)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/triton_heuristics.py", line 308, in autotune_to_one_config
timings = self.benchmark_all_configs(*args, **kwargs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/triton_heuristics.py", line 284, in benchmark_all_configs
timings = {
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/triton_heuristics.py", line 285, in <dictcomp>
launcher: self.bench(launcher, *args, **kwargs)
File "/home/huydo/py3.10/lib/python3.10/site-packages/torch/_inductor/triton_heuristics.py", line 241, in bench
if launcher.n_spills > config.triton.spill_threshold:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
```
### Versions
The build is done on devgpu:
```
Collecting environment information...
PyTorch version: 2.1.0a0+git88c400e
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.34
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk13_zion_7455_gb24de3bdb045-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.9.5
[pip3] pytorch-metric-learning==1.7.3
[pip3] torch==2.1.0a0+gitda67b41
[pip3] torchaudio==0.13.1+cpu
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.14.1
[pip3] triton==2.0.0.post1
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-metric-learning 1.7.3 pypi_0 pypi
[conda] torch 2.1.0a0+gitda67b41 pypi_0 pypi
[conda] torchaudio 0.13.1+cpu pypi_0 pypi
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchvision 0.14.1 pypi_0 pypi
[conda] triton 2.0.0.post1 pypi_0 pypi
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,313 | 107,893 |
DISABLED test_conv_with_as_strided_cpu (__main__.FreezingCpuTests)
|
module: rocm, module: cpu, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_inductor_freezing.py%3A%3AFreezingCpuTests%3A%3Atest_conv_with_as_strided_cpu)).
Skipping for now as this has shown up on a few PRs e.g https://github.com/pytorch/pytorch/pull/107812. Long term I do not think we want these CPU tests running on ROCm either way. Disabling with issue for now and will assess further.
cc: @pragupta @jithunnair-amd
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
1,314 | 107,880 |
[POC][HSDP] Add option to disable all-reduce only
|
release notes: distributed (fsdp), topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107880
* #107784
* #106080
* #106068
With >= 4 GPUs:
```
python -m pytest test/distributed/fsdp/test_fsdp_hybrid_shard.py -k test_fsdp_hybrid_shard_accumulation
```
| 1 |
1,315 | 107,879 |
FakeMode should not fakify non persistent buffer
|
triaged, oncall: pt2, module: fakeTensor
|
I initially came across this information while working with ONNX fake mode export, where the FakeMode fakifies all tensors except for constant tensors.
Referring to https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.register_buffer, non-persistent buffers cannot be accessed through state_dict(). This implies that when loading the state_dict() of a model in FakeMode, the non-persistent buffers are permanently absent. Is there a way to retain non-persistent buffers in FakeMode, similar to how constant tensors are retained?
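A small illustration of the state_dict() point above (my own example, not from the exporter code): non-persistent buffers appear in named_buffers() but not in state_dict(), so reloading a state_dict cannot restore them.
```python
import torch
from torch import nn

class WithBuffers(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("kept", torch.zeros(2))
        self.register_buffer("dropped", torch.zeros(2), persistent=False)

m = WithBuffers()
print(list(m.state_dict().keys()))              # ['kept']
print([name for name, _ in m.named_buffers()])  # ['kept', 'dropped']
```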
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @BowenBao
| 4 |
1,316 | 107,873 |
[BE] Consolidation of SymNode methods constant_int, maybe_as_int, etc
|
triaged, oncall: pt2
|
This is something of a preexisting problem, but I wonder why I didn't just have `maybe_as_int` take care of everything...
_Originally posted by @ezyang in https://github.com/pytorch/pytorch/pull/107089#discussion_r1294920333_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,317 | 107,870 |
[clang-tidy] Get rid of WarningsAsErrors
|
fb-exported, topic: not user facing
|
Summary: WarningsAsErrors is a very dangerous option, as has been reported several times.
Test Plan: N/A
Differential Revision: D48646569
| 5 |
1,318 | 107,865 |
Graph break: call_function partial in skip_files
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Log looks like:
```
[rank0]:[2023-08-23 20:16:32,832] [12/21] torch._dynamo.symbolic_convert: [DEBUG] TRACE CALL_FUNCTION 2 [UserFunctionVariable(), NNModuleVariable(), UserDefinedObjectVariable(PyTorchEnv), TupleVariable(), ConstDictVariable(), SkipFilesVariable(), UserFunctionVariable(), NNModuleVariable()]
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert: [DEBUG] empty checkpoint
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert: [DEBUG] FAILED INLINING <code object call_module_impl at 0x7fb4e45a8570, file "<torch_package_0>.dper3/core/environment.py", line 1267>
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.output_graph: [DEBUG] restore_graphstate: removed 0 nodes
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert: [DEBUG] break_graph_if_unsupported triggered compile
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] Graph break: call_function partial in skip_files /data/users/ftruzzi/fbsource/buck-out/v2/gen/fbcode/4f72fc365136a62e/third-party-buck/platform010/build/python/cinder.3.8/__python_runtime__/python_runtime/lib/python3.8/functools.py from user code at:
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] File "<torch_package_0>.dper3/core/environment.py", line 1279, in <resume in call_module>
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] return call_module_impl(self, m, *args, **kwargs)
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] File "<torch_package_0>.dper3/core/environment.py", line 1270, in call_module_impl
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] m, env, args, kwargs, partial(torch.nn.Module.__call__, m)
[rank0]:[2023-08-23 20:16:32,833] [12/21] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]
```
@voznesenskym didn't you have a fix for this? Can we land it?
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 1 |
1,319 | 107,864 |
`C10_HOST_DEVICE` for `std::isnan(c10::complex<T>)`?
|
module: cuda, triaged
|
### 🐛 Describe the bug
I'm developing a C++ & CUDA extension (as part of the [phytorch](https://github.com/kosiokarchev/phytorch) package), and I had written a function `std::isnan(c10::complex<T>)`. However, since v2.0.0, torch includes a similar function in c10/util/complex_utils.h ([link](https://github.com/pytorch/pytorch/blame/75cfc0be21383636d300d702e5eeb66245f93048/c10/util/complex_utils.h#L41)).
My problem is that, unlike most other functions that relate to `c10::complex<T>`, `isnan` is not declared with `__host__ __device__`, so I cannot use it in CUDA code, but I also cannot override it in my code (or can I?), so the compiler now throws an error. Therefore, I'm left with either having to rename my function (or move it to a different namespace), or (instruct my users to) patch the torch headers, which is very undesirable.
Will it be possible to add `C10_HOST_DEVICE` to `isnan` in `complex_utils.h`? Or would you recommend some other way of "using" it in GPU code? Thanks!
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.8 (Green Obsidian) (x86_64)
GCC version: (conda-forge gcc 11.3.0-19) 11.3.0
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 525.89.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6238R CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] phytorch==0.0.post1.dev91+gd60c96f.d20230407
[pip3] phytorchx==0.1.post1.dev0+ge54312c.d20230816
[pip3] pytorch-lightning==2.0.1
[pip3] torch==2.0.0
[pip3] torch-scatter==2.1.1+pt20cu118
[pip3] torchdiffeq==0.2.2
[pip3] torchmetrics==0.11.4
[pip3] torchviz==0.0.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] cudatoolkit 11.7.0 hd8887f6_10 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.24.2 py310h8deb116_0 conda-forge
[conda] phytorch 0.0.post1.dev91+gd60c96f.d20230407 pypi_0 pypi
[conda] phytorchx 0.1.post1.dev0+ge54312c.d20230816 pypi_0 pypi
[conda] pytorch 2.0.0 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_3 pytorch
[conda] pytorch-lightning 2.0.1 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-scatter 2.1.1+pt20cu118 pypi_0 pypi
[conda] torchdiffeq 0.2.2 pyhd8ed1ab_0 conda-forge
[conda] torchmetrics 0.11.4 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchviz 0.0.2 pypi_0 pypi
cc @ptrblck
| 0 |
1,320 | 107,855 |
About the multi-node example not working properly
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
My setup: 2 machines with different IPs and 2 available GPUs on each machine.
When I use the multigpu_torchrun.py example and launch it with these two commands:
`torchrun --nproc_per_node=2 --nnodes=2 --node_rank=0 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=172.xx.1.150:29603 multi_node_torchrun.py 50 10`
and
`torchrun --nproc_per_node=2 --nnodes=2 --node_rank=1 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=172.xx.1.150:29603 multi_node_torchrun.py 50 10`
When I start them, the program gets stuck at `self.model = DDP(self.model, device_ids=[self.local_rank])` and stops running. However, `nvidia-smi` shows that the processes on both machines have been created and are already occupying memory. I wonder why.
Looking through the history I was able to find similar issues that mention synchronization deadlocks, but I don't think that is the root cause here, since I am using the official example.
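For reference, here is a minimal sketch of the per-rank setup involved (illustrative only, not the exact tutorial code; the interface name `eth0` is a placeholder that would need to be replaced with the NIC that actually routes between the two machines):
```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Placeholder interface name: NCCL/Gloo must use the NIC that connects the two
# machines; picking the wrong one is a common cause of hangs after rendezvous.
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")
os.environ.setdefault("GLOO_SOCKET_IFNAME", "eth0")

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(10, 10).to(local_rank)
# This is the call that hangs in my runs.
model = DDP(model, device_ids=[local_rank])
```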
### Versions
os: ubuntu20.04
cuda: 11.8
python: 3.8.17
torch: 1.12.1
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
1,321 | 107,854 |
"file_descriptor" multiprocessing sharing strategy works incorrectly in dataloading
|
module: dataloader, triaged, module: data
|
### 🐛 Describe the bug
Issue originally found in a YOLO model, but I managed to write a small reproducer based on `_MultiProcessingDataLoaderIter` (torch/utils/data/dataloader.py):
```python
import threading
import queue
import random
import torch
import torch.multiprocessing as multiprocessing

def producer(data_queue, index_queue):
    torch.set_num_threads(1)
    while True:
        try:
            r = index_queue.get(timeout=5.0)
        except queue.Empty:
            continue
        data = torch.rand([16, 3, 640, 640])
        data_queue.put(data)
        del data, r

def reader(data_queue):
    def do_one_step():
        try:
            data = data_queue.get(timeout=5.0)
        except queue.Empty:
            return

    while True:
        do_one_step()

if __name__ == "__main__":
    # multiprocessing.set_sharing_strategy('file_system')
    NUM_WORKERS = 8
    workers = []
    index_queues = []
    data_queue = multiprocessing.Queue()
    for i in range(NUM_WORKERS):
        index_queue = multiprocessing.Queue()
        index_queue.cancel_join_thread()
        w = multiprocessing.Process(
            target=producer,
            args=(data_queue, index_queue))
        w.daemon = True
        w.start()
        index_queues.append(index_queue)
        workers.append(w)
    readers = []
    for i in range(1):
        reader_thread = threading.Thread(
            target=reader,
            args=(data_queue,))
        reader_thread.daemon = True
        reader_thread.start()
        readers.append(reader_thread)
    index_tensor = torch.randint(high=10000, size=[100000])
    for i in range(100000):
        index_queues[i % NUM_WORKERS].put(index_tensor[i])
    readers[0].join()
```
The script fails (most of the time) with the following error:
```
$ python experiment.py
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "experiment.py", line 27, in reader
    do_one_step()
  File "experiment.py", line 22, in do_one_step
    data = data_queue.get(timeout=5.0)
  File "/usr/lib/python3.8/multiprocessing/queues.py", line 116, in get
    return _ForkingPickler.loads(res)
  File "/home/user/.venvs/torch/py3.8/pt2.0.1+cpu/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 100, in rebuild_tensor
    t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
  File "/home/user/.venvs/torch/py3.8/pt2.0.1+cpu/lib/python3.8/site-packages/torch/_utils.py", line 149, in _rebuild_tensor
    return t.set_(storage._untyped_storage, storage_offset, size, stride)
RuntimeError: Trying to resize storage that is not resizable
```
This is caused by the storage from `index_tensor` incorrectly being retrieved from the cache when the data tensor is rebuilt.
When the sharing strategy is set to "file_system", the issue goes away.
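For completeness, the workaround corresponds to the commented-out line in the reproducer; as a minimal sketch:
```python
import torch.multiprocessing as multiprocessing

# Workaround: switch from the default "file_descriptor" sharing strategy to
# "file_system" before any tensors are put on the queues.
multiprocessing.set_sharing_strategy("file_system")
```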
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 12
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2194.843
BogoMIPS: 4389.68
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 429 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] numpy==1.23.5
[pip3] torch==2.0.1+cpu
[conda] Could not collect
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 0 |
1,322 | 107,853 |
Throw error if setting static grads to `None` in `zero_grad()`
|
ciflow/trunk, module: dynamo, release notes: dynamo, release notes: optimizer
|
If a tensor is marked a static address for dynamo, this implies that the pointer will be the same across calls to the optimizer step. For this constraint to hold, we should not set grads marked as static to None.
After some discussion, I opted to throw an error.
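A rough sketch of the behavior being enforced (the helper name `torch._dynamo.mark_static_address` and the exact failure mode are assumptions for illustration, not taken from this PR's tests):
```python
import torch

param = torch.nn.Parameter(torch.randn(4))
opt = torch.optim.SGD([param], lr=0.1)
param.sum().backward()

# Assumed helper: marks the grad's storage pointer as stable across steps.
torch._dynamo.mark_static_address(param.grad)

opt.zero_grad(set_to_none=False)  # fine: zeros in place, pointer unchanged
# opt.zero_grad(set_to_none=True)  # would free the grad, so it now raises
```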
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 8 |
1,323 | 107,842 |
nn.AdaptiveMaxPool2d returns identical results within a batch
|
high priority, module: nn, module: cuda, triaged, module: correctness (silent), bug
|
### 🐛 Describe the bug
When using nn.AdaptiveMaxPool2d(output_size=(1, 1)) as global max pooling after LayerNorm, it returns identical results within a batch. The code is as follows.
```python
import torch
from torch import nn
import torch.nn.functional as F

class LayerNorm2d(nn.LayerNorm):
    # normalization along channels
    def forward(self, x):
        # batch_size, channel, height, width = x.size()
        x = x.permute(0, 2, 3, 1)
        x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        x = x.permute(0, 3, 1, 2)
        return x

class TestConv(nn.Module):
    def __init__(self, in_channel, out_channel):
        super().__init__()
        self.norm = LayerNorm2d(in_channel)
        self.conv = nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x):
        x = self.norm(x)
        x = self.conv(x)
        return x

print(torch.__version__)
model = TestConv(2, 1).cuda().eval()
pooling = nn.AdaptiveMaxPool2d(output_size=(1, 1))
data_in = torch.rand((2, 2, 3, 3)).cuda()
data_out1 = model(data_in)
data_out2 = pooling(data_out1)
print(data_out1)
print(data_out2)
```
In approximately 90% of cases the elements of data_out2 are identical, which I believe is incorrect.
For example, the above code may produce:
```
1.13.1+cu117
tensor([[[[-0.6695, -0.5291, -0.1605],
[ 0.9633, 0.1471, -0.0945],
[-0.0090, 0.7883, 0.7220]]],
[[[ 0.2343, -0.4815, -0.1492],
[-0.6429, 0.2614, -0.0867],
[ 0.8783, 0.7894, 0.7118]]]], device='cuda:0', grad_fn=<ConvolutionBackward0>)
tensor([[[[0.9633]]],
[[[0.9633]]]], device='cuda:0', grad_fn=<AdaptiveMaxPool2DBackward0>)
```
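For comparison, a small sketch of an equivalent global max pooling written with `amax`, which can be used to cross-check the expected per-sample maxima (a cross-check only, not a fix for the kernel):
```python
# Reference global max over the spatial dimensions, computed per sample.
ref = data_out1.amax(dim=(2, 3), keepdim=True)
print(ref)        # expected per-sample maxima
print(data_out2)  # AdaptiveMaxPool2d output for comparison
```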
### Versions
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A4000
GPU 1: NVIDIA RTX A4000
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA RTX A4000
GPU 4: NVIDIA RTX A4000
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7302 16-Core Processor
Stepping: 0
CPU MHz: 1500.000
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 5989.01
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-3,16-19
NUMA node1 CPU(s): 4-7,20-23
NUMA node2 CPU(s): 8-11,24-27
NUMA node3 CPU(s): 12-15,28-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.13.1
[pip3] torchtext==0.13.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39h6c91a56_3
[conda] numpy-base 1.21.5 py39ha15fc14_3
[conda] numpydoc 1.4.0 py39h06a4308_0
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchaudio 0.13.1 pypi_0 pypi
[conda] torchvision 0.14.1 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck
| 3 |
1,324 | 107,841 |
Got Expand nodes with static shape input when exporting onnx model with dynamic shape
|
module: onnx, triaged, onnx-needs-info
|
### 🐛 Describe the bug
I'm trying to export an ONNX model by running the code below. It was supposed to be an ONNX model with dynamic shapes, but when checking the exported model I found two Expand nodes with static shape inputs, which seem to have been converted from `graph_attn_bias[:, 1:, 0, :] = t` and `graph_attn_bias[:, 0, :, :] = t`:
```python
import onnx
import torch
import torch.nn as nn


class Embedding(nn.Embedding):
    def __init__(
        self,
        num_embeddings: int,
        embedding_dim: int,
        padding_idx: int = None,
    ):
        super(Embedding, self).__init__(
            num_embeddings, embedding_dim, padding_idx=padding_idx
        )
        self._normal_init()
        if padding_idx is not None:
            self.weight.data[self.padding_idx].zero_()

    def _normal_init(self, std=0.02):
        nn.init.normal_(self.weight, mean=0.0, std=std)


class EdgeFeature(nn.Module):
    """
    Compute attention bias for each head.
    """

    def __init__(
        self,
        pair_dim,
        num_edge,
        num_spatial,
    ):
        super(EdgeFeature, self).__init__()
        self.pair_dim = pair_dim
        self.edge_encoder = Embedding(num_edge, pair_dim, padding_idx=0)
        self.shorest_path_encoder = Embedding(num_spatial, pair_dim, padding_idx=0)
        self.vnode_virtual_distance = Embedding(1, pair_dim)

    def forward(self, shortest_path, edge_feat, graph_attn_bias):
        shortest_path = shortest_path
        edge_input = edge_feat
        graph_attn_bias[:, 1:, 1:, :] = self.shorest_path_encoder(shortest_path)
        # reset spatial pos here
        t = self.vnode_virtual_distance.weight.view(1, 1, self.pair_dim)
        graph_attn_bias[:, 1:, 0, :] = t
        graph_attn_bias[:, 0, :, :] = t
        edge_input = self.edge_encoder(edge_input).mean(-2)
        graph_attn_bias[:, 1:, 1:, :] = graph_attn_bias[:, 1:, 1:, :] + edge_input
        return graph_attn_bias


if __name__ == "__main__":
    edge_feature = EdgeFeature(
        pair_dim=256,
        num_edge=64,
        num_spatial=512,
    ).float()
    attn_bias = torch.rand((64, 20, 20, 256), dtype=torch.float32)
    shortest_path = torch.ones((64, 19, 19), dtype=torch.int64)
    edge_feat = torch.ones((64, 19, 19, 3), dtype=torch.int64)
    torch.onnx.export(edge_feature,
        (shortest_path, edge_feat, attn_bias),
        "edge_feature.onnx",
        input_names=["shortest_path", "edge_feat", "attn_bias"],
        # verbose=True,
        opset_version=14,
        output_names=["graph_attn_bias"],
        dynamic_axes={
            "attn_bias": {0: "batch_size", 1: "seq_len_1", 2: "seq_len_1"},
            "shortest_path": {0: "batch_size", 1: "seq_len", 2: "seq_len"},
            "edge_feat": {0: "batch_size", 2: "seq_len", 3: "seq_len"},
            "graph_attn_bias": {0: "batch_size", 1: "seq_len_1", 2: "seq_len_1"}
        }
    )

    from onnxsim import simplify
    model = onnx.load("edge_feature.onnx")
    # convert model
    model_simp, check = simplify(model)
    assert check, "Simplified ONNX model could not be validated"
    onnx.save(model_simp, "edge_feature_modified.onnx")
```

### Versions
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 8000
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz
Stepping: 4
CPU MHz: 3400.000
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 6800.00
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-5,12-17
NUMA node1 CPU(s): 6-11,18-23
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.1+cu116
[pip3] torch-tensorrt==1.2.0
[pip3] torchaudio==0.13.1+cu116
[pip3] torchvision==0.14.1+cu116
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.13.1+cu116 pypi_0 pypi
[conda] torch-tensorrt 1.2.0 pypi_0 pypi
[conda] torchaudio 0.13.1+cu116 pypi_0 pypi
[conda] torchvision 0.14.1+cu116 pypi_0 pypi
| 3 |
1,325 | 107,832 |
[Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation
|
fb-exported, Merged, Reverted, ciflow/trunk, topic: not user facing, module: inductor, ciflow/inductor
|
Fixes #104391
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 16 |
1,326 | 107,830 |
FSDP custom args per module
|
triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP gives a set of arguments which are global across all wrapped modules. Instead, I would like to wrap some modules with custom args. For example, I have a module that I want to sync only across some GPUs, so I would like to specify custom process_groups which only hit a subset of ranks.
Currently, PyTorch FSDP can handle this internally but does not expose an API for it. For example, in this [PR](https://github.com/mosaicml/composer/pull/2460), I monkeypatch `auto_wrap` to allow returning kwargs instead of just a bool, which are then used to override the FSDP args for that wrap. This lets me change the process group for specific modules. We have re-applied this monkeypatch for every PyTorch release since 1.13, because it is critical for us but not supported. It would be amazing if this were supported.
Our implementation extends auto_wrap to return either a bool or a set of override kwargs, as sketched below. I am happy to upstream this if it is acceptable.
I imagine this would be useful to squeeze out more performance in general, e.g. some things I might wrap in FULL_SHARD and some in SHARD_GRAD_OP.
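To make the proposal concrete, here is a sketch of the kind of API extension described above (the dict-returning policy and the module/process-group names are hypothetical, not an existing PyTorch API):
```python
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy

# Hypothetical: returning a dict of kwargs instead of a bool would let the
# wrapper override FSDP arguments for that specific module.
def custom_auto_wrap_policy(module, recurse, nonwrapped_numel):
    if isinstance(module, MyExpertLayer):              # placeholder module type
        return {
            "process_group": expert_process_group,     # placeholder subset-of-ranks PG
            "sharding_strategy": ShardingStrategy.SHARD_GRAD_OP,
        }
    return isinstance(module, MyTransformerBlock)      # plain bool keeps today's behavior

model = FSDP(model, auto_wrap_policy=custom_auto_wrap_policy)
```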
### Alternatives
Monkeypatching pytorch https://github.com/mosaicml/composer/pull/2460
### Additional context
_No response_
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
| 2 |
1,327 | 107,824 |
torch.compile() fails when an `autograd.Function` gets called and torch.no_grad() is *not* being used
|
oncall: distributed, triaged, ezyang's list, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
I've been implementing TensorParallel with functional collectives in order to get good performance with torch.compile() on a model at work. My implementation of TensorParallel is based on Megatron-LM/fairscale, using `autograd.Function` as the basis for doing the minimum number of `allReduce` calls in both `forward()` and `backward()`. My TP helpers look like this:
```python
import torch
import torch.distributed as dist
import torch.distributed._functional_collectives as distfunc


class _CopyToModelParallelRegion(torch.autograd.Function):
    """Pass the input to the model parallel region."""

    @staticmethod
    def symbolic(graph, input_):
        return input_

    @staticmethod
    def forward(ctx, input_):
        return input_

    @staticmethod
    def backward(ctx, grad_output):
        return _all_reduce(grad_output)


class _ReduceFromModelParallelRegion(torch.autograd.Function):
    """All-reduce the input from the model parallel region."""

    @staticmethod
    def symbolic(graph, input_):
        return _all_reduce(input_)

    @staticmethod
    def forward(ctx, input_):
        return _all_reduce(input_)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


def _all_reduce(input_: torch.Tensor) -> torch.Tensor:
    """All-reduce the input tensor across model parallel group."""
    world_size = torch.distributed.get_world_size()
    if world_size == 1:
        return input_
    return distfunc.all_reduce(input_, "sum", list(range(world_size)))


def copy_to_tensor_model_parallel_region(input_):
    return _CopyToModelParallelRegion.apply(input_)


def reduce_from_tensor_model_parallel_region(input_):
    return _ReduceFromModelParallelRegion.apply(input_)
```
When using these in a module that supports TP, compiling the module with torch.compile() under `with torch.no_grad()` works without issues and achieves great performance. However, if I don't use `with torch.no_grad()`, the result is a Dynamo error with the following stack trace (repro at the bottom):
```
(/dccstor/aviros_pytorch/conda_envs/pytorch-dev) [avirosmartin@cccxl015 pytorch]$ torchrun --standalone --nnodes=1 --nproc-per-node=2 repro_compile_autograf.py
[2023-08-23 18:04:25,043] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
[2023-08-23 18:04:25,044] torch.distributed.run: [WARNING]
[2023-08-23 18:04:25,044] torch.distributed.run: [WARNING] *****************************************
[2023-08-23 18:04:25,044] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-08-23 18:04:25,044] torch.distributed.run: [WARNING] *****************************************
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
tensor([[-0.2395, 1.7112, -0.2051, ..., -3.9962, -2.2627, 1.5794],
[ 0.6823, 0.9069, -1.0604, ..., -5.6853, -0.8283, 1.1720],
[-0.1123, 0.2991, -0.7974, ..., -4.0582, -1.8296, 1.6378],
...,
[-1.5834, 1.2646, -0.9437, ..., -2.6262, 1.2763, -2.2860],
[-1.4105, -2.2290, -2.1080, ..., 0.4712, -2.1415, -3.8139],
[ 0.2763, 1.1615, 2.2424, ..., -2.4191, -2.1367, 1.7405]])tensor([[-0.2395, 1.7112, -0.2051, ..., -3.9962, -2.2627, 1.5794],
[ 0.6823, 0.9069, -1.0604, ..., -5.6853, -0.8283, 1.1720],
[-0.1123, 0.2991, -0.7974, ..., -4.0582, -1.8296, 1.6378],
...,
[-1.5834, 1.2646, -0.9437, ..., -2.6262, 1.2763, -2.2860],
[-1.4105, -2.2290, -2.1080, ..., 0.4712, -2.1415, -3.8139],
[ 0.2763, 1.1615, 2.2424, ..., -2.4191, -2.1367, 1.7405]])
Traceback (most recent call last):
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 114, in <module>
repro()
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 104, in repro
out = opt_model(input)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 636, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 581, in _compile
raise InternalTorchDynamoError(str(e)).with_traceback(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 564, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 486, in compile_inner
Traceback (most recent call last):
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 114, in <module>
out_code = transform_code_object(code, transform)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
repro()
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 104, in repro
transformations(instructions, code_options)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 453, in transform
out = opt_model(input)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
tracer.run()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 2074, in run
return self._call_impl(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
super().run()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
return forward_call(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/eval_frame.py", line 333, in _fn
and self.step()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
getattr(self, inst.opname)(inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
return self._call_impl(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
self.call_function(fn, args, {})
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
return forward_call(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/eval_frame.py", line 493, in catch_errors
self.push(fn.call_function(self, args, kwargs))
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return callback(frame, cache_entry, hooks, frame_state)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 636, in _convert_frame
return tx.inline_user_function_return(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 133, in _fn
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 581, in _compile
raise InternalTorchDynamoError(str(e)).with_traceback(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 564, in _compile
return cls.inline_call_(parent, func, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 486, in compile_inner
tracer.run()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
out_code = transform_code_object(code, transform)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
and self.step()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
transformations(instructions, code_options)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/convert_frame.py", line 453, in transform
getattr(self, inst.opname)(inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
tracer.run()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 2074, in run
return inner_fn(self, inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
super().run()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
self.push(fn.call_function(self, args, kwargs))
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/misc.py", line 583, in call_function
and self.step()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
return self.obj.call_apply(tx, args, kwargs).add_options(self)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/misc.py", line 353, in call_apply
getattr(self, inst.opname)(inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
).call_function(tx, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 966, in call_function
return inner_fn(self, inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
) = speculate_subgraph(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 152, in speculate_subgraph
self.call_function(fn, args, {})
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
output = f.call_function(tx, args, sub_kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/torch.py", line 727, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
tensor_variable = wrap_fx_proxy(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/builder.py", line 1187, in wrap_fx_proxy
return super().call_function(tx, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
return wrap_fx_proxy_cls(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/builder.py", line 1317, in wrap_fx_proxy_cls
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
example_value = _clone_input(example_value)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/builder.py", line 1268, in _clone_input
return cls.inline_call_(parent, func, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
value = clone_input(value)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/utils.py", line 592, in clone_input
result.copy_(x.clone())
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1101, in __torch_dispatch__
tracer.run()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
return func(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_ops.py", line 435, in __call__
and self.step()
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
return self._op(*args, **kwargs or {})
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_stats.py", line 20, in wrapper
getattr(self, inst.opname)(inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
return inner_fn(self, inst)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
return self.dispatch(func, types, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1364, in dispatch
self.call_function(fn, args, {})
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/misc.py", line 583, in call_function
) = self.validate_and_convert_non_fake_tensors(func, converter, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1584, in validate_and_convert_non_fake_tensors
return self.obj.call_apply(tx, args, kwargs).add_options(self)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/misc.py", line 353, in call_apply
args, kwargs = tree_map_only(
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 397, in tree_map_only
).call_function(tx, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 966, in call_function
return tree_map(map_only(ty)(fn), pytree)
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 327, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 327, in <listcomp>
) = speculate_subgraph(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 152, in speculate_subgraph
return tree_unflatten([fn(i) for i in flat_args], spec)
output = f.call_function(tx, args, sub_kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 378, in inner
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/torch.py", line 727, in call_function
return f(x)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1574, in validate
tensor_variable = wrap_fx_proxy(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/builder.py", line 1187, in wrap_fx_proxy
raise Exception(
torch._dynamo.exc.InternalTorchDynamoError: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.copy_.default(tensor([...], size=(50, 100)), FakeTensor(..., size=(50, 100)))
from user code:
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 95, in forward
return reduce_from_tensor_model_parallel_region(out_par)
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 51, in reduce_from_tensor_model_parallel_region
return _ReduceFromModelParallelRegion.apply(input_)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
return wrap_fx_proxy_cls(
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/builder.py", line 1317, in wrap_fx_proxy_cls
example_value = _clone_input(example_value)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/variables/builder.py", line 1268, in _clone_input
value = clone_input(value)
File "/dccstor/aviros_pytorch/pytorch/torch/_dynamo/utils.py", line 592, in clone_input
result.copy_(x.clone())
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1101, in __torch_dispatch__
return func(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1364, in dispatch
) = self.validate_and_convert_non_fake_tensors(func, converter, args, kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1584, in validate_and_convert_non_fake_tensors
args, kwargs = tree_map_only(
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 397, in tree_map_only
return tree_map(map_only(ty)(fn), pytree)
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 327, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 327, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/dccstor/aviros_pytorch/pytorch/torch/utils/_pytree.py", line 378, in inner
return f(x)
File "/dccstor/aviros_pytorch/pytorch/torch/_subclasses/fake_tensor.py", line 1574, in validate
raise Exception(
torch._dynamo.exc.InternalTorchDynamoError: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.copy_.default(tensor([...], size=(50, 100)), FakeTensor(..., size=(50, 100)))
from user code:
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 95, in forward
return reduce_from_tensor_model_parallel_region(out_par)
File "/dccstor/aviros_pytorch/pytorch/repro_compile_autograf.py", line 51, in reduce_from_tensor_model_parallel_region
return _ReduceFromModelParallelRegion.apply(input_)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
[2023-08-23 18:04:50,118] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 3399902) of binary: /dccstor/aviros_pytorch/conda_envs/pytorch-dev/bin/python
Traceback (most recent call last):
File "/dccstor/aviros_pytorch/conda_envs/pytorch-dev/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
File "/dccstor/aviros_pytorch/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/dccstor/aviros_pytorch/pytorch/torch/distributed/run.py", line 806, in main
run(args)
File "/dccstor/aviros_pytorch/pytorch/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/dccstor/aviros_pytorch/pytorch/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/dccstor/aviros_pytorch/pytorch/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
repro_compile_autograf.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2023-08-23_18:04:50
host : cccxl015.pok.ibm.com
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 3399903)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-08-23_18:04:50
host : cccxl015.pok.ibm.com
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 3399902)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
Repro (to be run together with the TP code above):
```python
class FeedForwardBlock(torch.nn.Module):
    def __init__(
        self,
        emb_dim,
        hidden_grow_factor=4,
    ):
        super(FeedForwardBlock, self).__init__()
        self.hidden_grow_factor = hidden_grow_factor
        self.hidden_dim = int(hidden_grow_factor * emb_dim)
        self.w1 = torch.nn.Linear(emb_dim, self.hidden_dim)
        self.a = torch.nn.SiLU()
        self.w2 = torch.nn.Linear(self.hidden_dim, emb_dim)
        self.reset_params()

    def reset_params(self):
        for layer in ["w1", "w2"]:
            torch.nn.init.trunc_normal_(
                getattr(self, layer).weight, mean=0.0, std=(2**0.5 / self.w1.weight.numel() ** 0.5) ** 0.5
            )

    def forward(self, x):
        out = self.a(self.w1(x))
        return self.w2(out)


class TPFeedForwardBlock(FeedForwardBlock):
    def __init__(
        self,
        emb_dim,
        hidden_grow_factor=4,
        world_size=1,
        rank=0,
    ):
        hidden_dim = int(hidden_grow_factor * emb_dim)
        assert hidden_dim % world_size == 0, "Hidden dim must be divisible by world size"
        super(TPFeedForwardBlock, self).__init__(emb_dim, hidden_grow_factor / world_size)
        self.rank = rank
        self.world_size = world_size

    def forward(self, x):
        x_par = copy_to_tensor_model_parallel_region(x)
        out_par = FeedForwardBlock.forward(self, x_par)
        return reduce_from_tensor_model_parallel_region(out_par)


def repro():
    model = TPFeedForwardBlock(100, 4, dist.get_world_size(), dist.get_rank())
    opt_model = torch.compile(model)
    input = torch.randn((50, 100))
    out = opt_model(input)
    print(out)


if __name__ == "__main__":
    dist.init_process_group(backend="gloo")
    # This works
    with torch.no_grad():
        repro()
    # This fails
    repro()
```
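A possible mitigation sketch (an assumption on my part, not a verified fix): graph-break around the collective wrappers so they run eagerly, e.g. with `torch._dynamo.disable`, at the cost of losing compilation of those calls:
```python
import torch._dynamo

# Run the autograd.Function wrappers outside the compiled graph so Dynamo does
# not have to trace through apply(); this introduces graph breaks.
copy_to_tensor_model_parallel_region = torch._dynamo.disable(
    copy_to_tensor_model_parallel_region
)
reduce_from_tensor_model_parallel_region = torch._dynamo.disable(
    reduce_from_tensor_model_parallel_region
)
```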
### Versions
Environment (this also fails on latest nightlies):
```
Collecting environment information...
PyTorch version: 2.1.0a0+gitc9f947d
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (conda-forge gcc 11.3.0-19) 11.3.0
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.26.3
Libc version: glibc-2.28
Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz
Stepping: 4
CPU MHz: 4000.000
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6583.80
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0a0+gitc9f947d
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h84fe81f_48680 conda-forge
[conda] mkl-include 2023.1.0 h84fe81f_48680 conda-forge
[conda] numpy 1.24.3 py310ha4c1d20_0 conda-forge
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0a0+gitc9f947d dev_0 <develop>
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 6 |
1,328 | 107,821 |
`torch.distributions.Pareto.sample` sometimes gives `inf`
|
module: distributions, triaged, module: NaNs and Infs
|
### 🐛 Describe the bug
On my local checkout of PyTorch, the following script fails when it reaches `seed=6`:
```python
import torch

param = {
    'scale': torch.tensor([
        [1.7549, 1.1252, 1.1057, 1.2407, 0.8933],
        [0.7661, 0.9723, 0.2869, 1.1382, 0.1948],
        [0.3116, 0.5687, 0.0172, 0.2296, 0.1637],
        [0.6526, 0.8761, 0.0142, 2.0804, 1.3138],
        [0.8522, 2.6757, 0.9873, 0.6828, 1.1830]],
        dtype=torch.double, requires_grad=True),
    'alpha': torch.tensor([
        [0.3219, 2.0156, 0.5705, 1.0555, 0.0485],
        [0.1138, 0.1064, 0.4582, 0.3166, 0.0073],
        [0.3978, 0.3402, 0.4575, 2.7605, 0.3148],
        [0.0442, 1.2770, 1.3061, 1.4474, 0.6836],
        [0.1606, 1.1500, 0.2150, 0.2591, 0.1284]],
        dtype=torch.double, requires_grad=True)
}
dist = torch.distributions.Pareto(**param)

for seed in range(100_000):
    torch.manual_seed(seed)
    print(seed)
    samples = dist.sample(sample_shape=(20,))
    assert not samples.isinf().any()
```
The reason I found this is that in my PR #107246, I modified the tests in `test/distributions/test_distributions.py` such that the `Pareto` distribution here gets regenerated for each of the unit tests, instead of just being generated once for all the tests:
https://github.com/pytorch/pytorch/blob/36399d067a56d9875fd9c2bf61434126398ffb87/test/distributions/test_distributions.py#L380-L383
This caused the [`test_cdf_icdf_inverse`](https://github.com/pytorch/pytorch/blob/36399d067a56d9875fd9c2bf61434126398ffb87/test/distributions/test_distributions.py#L2985) test to fail on just one CI job because an unlucky seed produced the `scale` and `alpha` values from the code snippet above and `Pareto.sample()` gave an output with infinities. Then when the infinities went through `dist.cdf` and `dist.icdf`, they turned into `nan`'s, causing comparisons to fail. So up to now, that test has not failed from this issue, just due to lucky seeds being used.
In my PR, I think I have a workaround: I've added `0.1` to `scale` and `alpha` to try to get the test to pass, because values further from 0 are evidently less likely to produce infinities. So if that passes, this issue is not blocking my PR.
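For context, my understanding (an assumption, not checked against the implementation) is that the infinities come from the Pareto inverse transform `scale * U**(-1/alpha)`: with `alpha` close to 0 the exponent `1/alpha` is large, so an unlucky uniform draw overflows even in double precision. A minimal illustration:
```python
import torch

alpha = torch.tensor(0.0073, dtype=torch.double)  # smallest alpha in the repro
u = torch.tensor(1e-5, dtype=torch.double)        # an "unlucky" uniform draw
print(u.pow(-1.0 / alpha))                        # overflows to inf
```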
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+git51d0d12
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (conda-forge gcc 9.5.0-19) 9.5.0
Clang version: Could not collect
CMake version: version 3.21.3
Libc version: glibc-2.35
Python version: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:23:11) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2060
GPU 1: NVIDIA GeForce RTX 2060
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7400.28
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.1.0a0+gitb0bc323
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2023.2.0 h84fe81f_49572 conda-forge
[conda] mkl-include 2023.2.0 h84fe81f_49572 conda-forge
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.1.0a0+gitb0bc323 dev_0 <develop>
```
cc @fritzo @neerajprad @alicanb @nikitaved
| 1 |
1,329 | 107,820 |
`add_image_with_boxes` method from `torch.utils.tensorboard.writer.SummaryWriter` is broken
|
triaged, module: tensorboard, oncall: visualization
|
### 🐛 Describe the bug
The method `add_image_with_boxes` of PyTorch's SummaryWriter raises an error when the `labels` option is not None:
```python
import torch
from torch.utils.tensorboard.writer import SummaryWriter
writer = SummaryWriter()
image = torch.randn(3,100,100)
bbox = torch.tensor([[0, 1, 0, 1]])
writer.add_image_with_boxes(
    "test",
    image,
    box_tensor=bbox,
    labels=["test label"],
)
```
It fails with the following error:
```
Traceback (most recent call last):
  File "/home/clementpinard/workspace/test.py", line 8, in <module>
    writer.add_image_with_boxes(
  File "/home/clementpinard/workspace/.venv/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py", line 713, in add_image_with_boxes
    image_boxes(
  File "/home/clementpinard/workspace/.venv/lib/python3.10/site-packages/torch/utils/tensorboard/summary.py", line 457, in image_boxes
    image = make_image(
  File "/home/clementpinard/workspace/.venv/lib/python3.10/site-packages/torch/utils/tensorboard/summary.py", line 489, in make_image
    image = draw_boxes(image, rois, labels=labels)
  File "/home/clementpinard/workspace/.venv/lib/python3.10/site-packages/torch/utils/tensorboard/summary.py", line 468, in draw_boxes
    disp_image = _draw_single_box(
  File "/home/clementpinard/workspace/.venv/lib/python3.10/site-packages/torch/utils/tensorboard/summary.py", line 57, in _draw_single_box
    text_width, text_height = font.getsize(display_str)
AttributeError: 'ImageFont' object has no attribute 'getsize'
```
This is most likely a Pillow deprecation that was not caught. Applying this simple fix to torch/utils/tensorboard/summary.py repairs the function:
```diff
diff --git a/torch/utils/tensorboard/summary.py b/torch/utils/tensorboard/summary.py
index e07d2c6b880..bf3b9820e21 100644
--- a/torch/utils/tensorboard/summary.py
+++ b/torch/utils/tensorboard/summary.py
@@ -108,7 +108,7 @@ def _draw_single_box(
     if display_str:
         text_bottom = bottom
         # Reverse list and print from bottom to top.
-        text_width, text_height = font.getsize(display_str)
+        text_width, text_height = font.font.getsize(display_str)
         margin = np.ceil(0.05 * text_height)
         draw.rectangle(
             [
```
I am not sure since when this call has been deprecated; a cleverer fix would probably be to try both so that older versions of Pillow keep working (see the sketch below). I can open the corresponding PR if you want.
Since this issue is very small, I would also like to take this opportunity to make a related feature request: as you can see here https://github.com/pytorch/pytorch/blob/main/torch/utils/tensorboard/summary.py#L583, the bounding box is always red. It would be nice to be able to specify a list of colors, the same way we specify a list of labels, to make them easier to read. This would mean adding a "colors" option to the method `add_image_with_boxes`, but also to the functions `image_boxes`, `make_image` and `draw_boxes`, and maybe a clever way to decide whether to write the text in black or white depending on the color. Nothing too complex, actually.
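For reference, a version-agnostic sketch of the compatibility fix hinted at above, using `getbbox` (the Pillow ≥ 10 replacement for `getsize`):
```python
try:
    text_width, text_height = font.getsize(display_str)   # Pillow < 10
except AttributeError:
    # getsize was removed in Pillow 10; getbbox returns (left, top, right, bottom)
    left, top, right, bottom = font.getbbox(display_str)
    text_width, text_height = right - left, bottom - top
```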
### Versions
I don't believe it's really necessary, but here it goes :
```
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6426Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 75 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.25.1 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
| 1 |
1,330 | 107,811 |
doc stuff
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107811
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
| 1 |
1,331 | 107,804 |
[WIP/CI Test] Try to tighten up VT stack invariant
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107804
* #107803
cc @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 2 |
1,332 | 107,800 |
[feature request] [discussion] Include basic `ctypes` bindings for `cudart`/`cublasLt`/`cublas`/`nvrtc`/`cudnn` with stock PyTorch
|
feature, module: cuda, triaged, module: cublas
|
### 🚀 The feature, motivation and pitch
This would make it easier to experiment with new function variants (like quantized int8 GEMMs) earlier.
Example of such ctypes bindings: https://github.com/OpenBMB/cpm_kernels/tree/master/cpm_kernels/library
Including in core some bindings like this would be great! (maybe under some `torch.cuda.ctypes.cublasLt` or something similar)
Examples of such C bindings: https://github.com/TimDettmers/bitsandbytes/blob/18e827d666fa2b70a12d539ccedc17aa51b2c97c/csrc/ops.cu#L434
Another set of bindings now lives in `bitsandbytes`, but having them directly available in Python would make experimentation and benchmarking more approachable.
There might be problems with versions, but maybe then some bindings could be versioned as well: `torch.cuda.ctypes.cublasLtV8` or something like that, so that the user is responsible for using the correct bindings for their experiments.
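For illustration, a minimal sketch of the kind of binding meant here (the library name/path and the bare error handling are assumptions, and this is not a proposed API):
```python
import ctypes

# Load the CUDA runtime and query the device count through ctypes.
cudart = ctypes.CDLL("libcudart.so")
count = ctypes.c_int()
status = cudart.cudaGetDeviceCount(ctypes.byref(count))
assert status == 0, f"cudaGetDeviceCount failed with status {status}"
print(f"{count.value} CUDA device(s) visible")
```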
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @csarofeen @xwang233
| 2 |
1,333 | 107,797 |
Fake Tensor error 'lengths' argument should be a 1D CPU int64 tensor, but got 1D meta Long tensor
|
triaged, oncall: pt2, module: fakeTensor, mlperf
|
### 🐛 Describe the bug
Problem found here https://github.com/mlcommons/algorithmic-efficiency/issues/498
```python
import torch
import torch.nn as nn
class BatchRNN(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, inputs, input_paddings):
        batch_size, seq_len, _ = inputs.size()
        lengths = torch.randint(1, seq_len + 1, (batch_size,))
        packed_inputs = torch.nn.utils.rnn.pack_padded_sequence(
            inputs, lengths, batch_first=True, enforce_sorted=False)
        return packed_inputs
model = BatchRNN()
model(torch.randn(1, 32, 32), torch.randn(1, 32, 32))
model = torch.compile(model)
model(torch.randn(1, 32, 32), torch.randn(1, 32, 32))
```
### Error logs
```
(sam) ubuntu@ip-172-31-9-217:~$ python rnn.py
Traceback (most recent call last):
File "/home/ubuntu/rnn.py", line 23, in <module>
model(torch.randn(1, 32, 32), torch.randn(1, 32, 32))
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 605, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 132, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 370, in _convert_frame_assert
return _compile(
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 536, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 447, in compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in transform
tracer.run()
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2071, in run
super().run()
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1167, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 716, in call_function
tensor_variable = wrap_fx_proxy(
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1163, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1237, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1351, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1319, in get_fake_value
return wrap_fake_exception(
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 898, in wrap_fake_exception
return fn()
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1320, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1385, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1372, in run_node
return node.target(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 264, in pack_padded_sequence
_VF._pack_padded_sequence(input, lengths, batch_first)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1233, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1523, in dispatch
r = func(*args, **kwargs)
File "/opt/conda/envs/sam/lib/python3.10/site-packages/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function pack_padded_sequence at 0x7ff611d61900>(*(FakeTensor(..., size=(1, 32, 32)), FakeTensor(..., size=(1,), dtype=torch.int64)), **{'batch_first': True, 'enforce_sorted': False}):
'lengths' argument should be a 1D CPU int64 tensor, but got 1D meta Long tensor
from user code:
File "/home/ubuntu/rnn.py", line 15, in forward
packed_inputs = torch.nn.utils.rnn.pack_padded_sequence(
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Minified repro
_No response_
### Versions
`torch ==2.1.0.dev20230814+cu121`
cc @ezyang @wconstab @bdhirsh @anijain2305
| 3 |
1,334 | 107,795 |
Back out "[inductor] make thread order consistent with loop order (#106827)"
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: D48295371 causes a batch fusion failure
Test Plan: Without revert, f469732293. With revert diff f472266199.
Differential Revision: D48593029
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 4 |
1,335 | 107,780 |
Add caffe2 ideep/onednn tests to OSS CI
|
oncall: releng, module: ci, triaged, module: mkldnn
|
### 🐛 Describe the bug
PR https://github.com/pytorch/pytorch/pull/97957, which updates ideep, has failed in these tests internally:
https://github.com/pytorch/pytorch/tree/main/caffe2/python/ideep
The caffe2 module is still used internally a lot. Hence we want the OSS CI to execute the ideep tests from time to time.
Perhaps we should be adding these tests as periodic.
### Versions
2.1.0
cc @seemethere @malfet @pytorch/pytorch-dev-infra @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 0 |
1,336 | 107,778 |
Add ceil to core IR
|
fb-exported
|
Summary: This is similar to floor.
Test Plan: After adding to IR, we can enable _check_ir_validity for deeplab
Differential Revision: D48602187
| 4 |
1,337 | 107,774 |
DISABLED test_conv_stride_constraints (__main__.CPUReproTests)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_cpu_repro.py%3A%3ACPUReproTests%3A%3Atest_conv_stride_constraints)).
I will take a further look at this locally; currently unsure why this shows up on ROCm for a CPU test.
cc: @jithunnair-amd
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang
| 1 |
1,338 | 107,771 |
libtorch infer error : CUDNN_STATUS_INTERNAL_ERROR
|
oncall: jit
|
### 🐛 Describe the bug
# code for model export
```
traced_script_module = torch.jit.trace(model, \
(data_dict['voxels'], data_dict['voxel_num_points'], data_dict['voxel_coords']))
traced_script_module.save(args.output_path)
```
# code for model infer
```
std::vector<torch::Tensor> output =
net_.forward(torch_inputs).toTensorVector();
```
# question
model init is ok:
```
torch::Device device(device_type_, device_id_);
net_ = torch::jit::load(model_file_, device);
net_.eval();
```
The error occurs when running the forward function: the attached screenshot shows it failing with `CUDNN_STATUS_INTERNAL_ERROR`.

### Versions
pytorch 1.7.0
libtorch 1.7.0
cudnn 8.0.4.30
cuda 11.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
1,339 | 107,770 |
libtorch vs (onnx+tensorRT) show different object detection results
|
module: onnx, triaged
|
### 🐛 Describe the bug
# libtorch
```
traced_script_module = torch.jit.trace(model, image)
traced_script_module.save(args.output_path)
```
# onnx
```
torch.onnx.export(model, image,
onnx_model_save_path, opset_version=11, verbose=False, export_params=True,
operator_export_type=OperatorExportTypes.ONNX,
input_names=['image'], output_names=['orient','conf','dim'])
```
# question
We used the two approaches above to export the trained model (YOLO) and then used it for inference in an object detection task.
However, we obtained different results, and the model exported via ONNX shows better and more stable performance.
Could you please give some possible reasons?
### Versions
torch 2.0.1
onnx 1.14.0
onnxconverter-common 1.14.0
onnxmltools 1.11.2
onnxruntime-gpu 1.15.1
| 0 |
1,340 | 107,769 |
Enable Mypy Checking in torch/_inductor/bounds.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/bounds.py
After the fix:
`mypy --follow-imports=skip torch/_inductor/bounds.py` → Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,341 | 107,767 |
DISABLED test_make_fx_symbolic_exhaustive_special_bessel_y0_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: ProxyTensor
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_fx_symbolic_exhaustive_special_bessel_y0_cpu_float32&suite=TestProxyTensorOpInfoCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16128051710).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_fx_symbolic_exhaustive_special_bessel_y0_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_proxy_tensor.py`
| 12 |
1,342 | 107,764 |
Add Half support for range, logspace, logit, median, nanmedian, kthvalue, poisson, cummax, cummin, prod, cumprod, histc, logcumsumexp, vander, cross, atan2, logaddexp, logaddexp2, hypot, and nextafter on CPU
|
module: cpu, open source, module: half, ciflow/trunk, topic: not user facing, ciflow/periodic, ciflow/mps, module: inductor, ciflow/inductor
|
Add Half support for range, logspace, logit, median, nanmedian, kthvalue, poisson, cummax, cummin, prod, cumprod, histc, logcumsumexp, vander, cross, atan2, logaddexp, logaddexp2, hypot, and nextafter on CPU.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @Xia-Weiwen @ngimel
| 1 |
1,343 | 107,762 |
DISABLED test_make_fx_symbolic_exhaustive_special_bessel_j1_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: ProxyTensor
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_fx_symbolic_exhaustive_special_bessel_j1_cpu_float32&suite=TestProxyTensorOpInfoCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16122457339).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_fx_symbolic_exhaustive_special_bessel_j1_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_proxy_tensor.py`
| 10 |
1,344 | 107,751 |
conv cudnn support integers
|
module: cudnn, triaged
|
### 🚀 The feature, motivation and pitch
Support integer dtypes in cuDNN convolutions.
I tried implementing this by adding
```cpp
else if (scalar_type == at::kChar) {
return CUDNN_DATA_INT8;
}
```
to https://github.com/pytorch/pytorch/blob/c093fdf9245875213cae65eba9e246a70748f9c0/aten/src/ATen/cudnn/Descriptors.cpp#L25C1-L26
but ran into
```python
output = F.conv2d(input, kernel)
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
```
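For reference, the failing call looks like this (a sketch with illustrative shapes; the actual model's shapes differ):
```python
import torch
import torch.nn.functional as F

# int8 input and weight on CUDA; with the Descriptors.cpp patch above this
# call fails with CUDNN_STATUS_BAD_PARAM
input = torch.randint(-128, 128, (1, 3, 8, 8), dtype=torch.int8, device="cuda")
kernel = torch.randint(-128, 128, (4, 3, 3, 3), dtype=torch.int8, device="cuda")
output = F.conv2d(input, kernel)
```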
@ezyang @dzdang do you have some pointers on what specifically I am missing?
related PR https://github.com/pytorch/pytorch/pull/3666
https://github.com/pytorch/pytorch/pull/74673
cc: @cchan @ezhang887
### Alternatives
_No response_
### Additional context
_No response_
cc @csarofeen @ptrblck @xwang233
| 0 |
1,345 | 107,739 |
DISABLED test_make_fx_symbolic_exhaustive_special_airy_ai_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: ProxyTensor
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_fx_symbolic_exhaustive_special_airy_ai_cpu_float32&suite=TestProxyTensorOpInfoCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16113161016).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_fx_symbolic_exhaustive_special_airy_ai_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_proxy_tensor.py`
| 9 |
1,346 | 107,714 |
[ONNX] Retire FXSymbolicTracer in FX exporter
|
module: onnx, triaged, onnx-triaged
|
FXSymbolicTracer was a pioneer in exploring fake-tensor export, and it does successfully export large-scale transformers like BLOOM and GPT2. However, it does not use dynamo export but `torch.fx._symbolic_trace.Tracer`, which is not actively maintained. Besides, there are patches that exist only to cover FXSymbolicTracer, and they are not exposed in the public API and never will be. Now that the fake mode API in the FX exporter has become mature, we should retire FXSymbolicTracer.
| 0 |
1,347 | 107,708 |
Bump Triton version
|
ciflow/trunk, topic: not user facing, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107708
| 8 |
1,348 | 107,705 |
DISABLED test_multilayer_var_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests)
|
triaged, skipped, module: dynamic shapes
|
Platforms: macos
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_torchinductor_dynamic_shapes.py%3A%3ADynamicShapesCpuTests%3A%3Atest_multilayer_var_dynamic_shapes_cpu)).
cc @ezyang
| 7 |
1,349 | 107,703 |
Hardtanh docs are inaccurate/incomplete, since hardtanh behaves like clamp
|
module: docs, triaged
|
### 📚 The doc issue
The issue is more evident when `input` < `max_val` < `min_val`, for example:
```python
import torch
torch.ops.aten.hardtanh(torch.tensor(2), min_val=4, max_val=3)
```
outputs:
```
tensor(3)
```
However, according to the [hardtanh docs](https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html) it should return `min_val`, i.e.
```
tensor(4)
```
### Suggest a potential alternative/fix
The documentation should use `min` and `max`, like clamp's documentation. Something like:
```
HardTanh(x) = min(max(x, min_val), max_val)
```
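For comparison, a quick check showing that `clamp` with the same out-of-order bounds gives the same result, which is why documenting hardtanh in clamp terms seems accurate:
```python
import torch

# Both print tensor(3): the max bound is applied last, exactly like clamp.
print(torch.ops.aten.hardtanh(torch.tensor(2), min_val=4, max_val=3))
print(torch.clamp(torch.tensor(2), min=4, max=3))
```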
cc @svekars @carljparker
| 0 |
1,350 | 107,702 |
Inconsistencies when handling scalars that are out of the range relative to the input tensor's dtype
|
triaged, module: int overflow
|
### 🐛 Describe the bug
```python
import torch
# Both should fail, or both should output 3.
# However ATen converts -129 to 127 uint8, applying that as the min. But it fails if the dtype is int8
torch.ops.aten.clamp.out(torch.tensor([3], dtype=torch.uint8), -129, None, out=torch.tensor([], dtype=torch.uint8))
# tensor([127], dtype=torch.uint8)
torch.ops.aten.clamp.out(torch.tensor([3], dtype=torch.int8), -129, None, out=torch.tensor([], dtype=torch.int8))
# RuntimeError: value cannot be converted to type int8_t without overflow
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Oct 18 2022, 12:41:40) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[conda] Could not collect
| 0 |
1,351 | 107,701 |
arange.out produces incorrect output when out tensor has dtype long
|
triaged, module: python frontend
|
### 🐛 Describe the bug
```python
import torch
print(torch.ops.aten.arange.out(2.3, out=torch.tensor([], dtype=torch.uint8)))
print(torch.ops.aten.arange.out(2.3, out=torch.tensor([], dtype=torch.int8)))
print(torch.ops.aten.arange.out(2.3, out=torch.tensor([], dtype=torch.int16)))
print(torch.ops.aten.arange.out(2.3, out=torch.tensor([], dtype=torch.int32)))
print(torch.ops.aten.arange.out(2.3, out=torch.tensor([], dtype=torch.int64)))
```
Outputs:
```
tensor([0, 1, 2], dtype=torch.uint8)
tensor([0, 1, 2], dtype=torch.int8)
tensor([0, 1, 2], dtype=torch.int16)
tensor([0, 1, 2], dtype=torch.int32)
tensor([0, 1])
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Oct 18 2022, 12:41:40) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[conda] Could not collect
cc @albanD
| 2 |
1,352 | 107,700 |
where.self_out doesn't fail gracefully when inputs have different dtypes
|
triaged, module: type promotion, module: advanced indexing, module: edge cases
|
### 🐛 Describe the bug
This error seems to happen whenever `input`, `other`, and `out` don't all have the same dtype:
```python
import torch
cond = torch.zeros(2, dtype=torch.bool)
input = torch.zeros(2, dtype=torch.int)
other = torch.zeros(2, dtype=torch.long)
out = torch.zeros(2, dtype=torch.int)
torch.ops.aten.where.self_out(cond, input, other, out=out)
```
Throws this error:
```
RuntimeError: !needs_dynamic_casting<func_t>::check(iter) INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/cpu/Loops.h":310, please report a bug to PyTorch.
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Oct 18 2022, 12:41:40) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @nairbv @mruberry
| 2 |
1,353 | 107,699 |
index.Tensor_out & index_put.out errors or segfaults with indices list containing only null tensors
|
high priority, triaged, module: advanced indexing
|
### 🐛 Describe the bug
Indices containing **only** null tensors cause an error or a segfault.
This doesn't happen if `indices` also contains tensors that aren't null.
```python
import torch
input = torch.tensor([[1, 2], [3, 4]], dtype=torch.float)
indices = [None]
values = torch.tensor([10, 20], dtype=torch.float)
out = torch.tensor([[0, 0], [0, 0]], dtype=torch.float)
# Run any of the following two lines
torch.ops.aten.index.Tensor_out(input, indices, out=out)
torch.ops.aten.index_put.out(input, indices, values, False, out=out)
```
sometimes it segfaults:
```
zsh: segmentation fault
```
sometimes it throws this error:
```
RuntimeError: ntensor >= 3 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/cpu/IndexKernelUtils.h":10, please report a bug to PyTorch.
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Oct 18 2022, 12:41:40) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[conda] Could not collect
cc @ezyang @gchanan @zou3519
| 1 |
1,354 | 107,697 |
Enable thp(transparent huge pages) for buffer sizes >=2MB
|
triaged, open source
|
The 2MB THP pages provide better allocation latencies compared to the standard 4KB pages. This change has shown substantial improvement for batch-mode use cases where the tensor sizes are larger than 100MB.
Only enabled if the THP_MEM_ALLOC_ENABLE environment variable is set.
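A minimal sketch of how one might opt in (assuming the flag is read when the CPU allocator is first used; the exact timing has not been verified here):
```python
import os

# Set the opt-in flag before torch allocates anything.
os.environ["THP_MEM_ALLOC_ENABLE"] = "1"

import torch

# A buffer well above the 2MB threshold, so it is eligible for 2MB huge pages.
buf = torch.empty(64 * 1024 * 1024)  # ~256MB of float32
```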
re-landing https://github.com/pytorch/pytorch/pull/93888
cc: @izaitsevfb
| 5 |
1,355 | 107,695 |
New variables in torch._ops.py pollute the torch.ops namespace
|
triaged, module: library
|
### 🐛 Describe the bug
For example, `torch.ops.dl_open_guard` exists, which prevents people from creating operators under a "dl_open_guard" namespace (e.g. `torch.ops.dl_open_guard.my_conv2d.default`).
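For context, a custom namespace is normally populated like this (a sketch; `my_ns` is an illustrative name — the point is that a namespace colliding with one of the module-level names in torch/_ops.py can't be used the same way):
```python
import torch

lib = torch.library.Library("my_ns", "DEF")
lib.define("my_conv2d(Tensor x) -> Tensor")
print(torch.ops.my_ns.my_conv2d)  # resolves to the newly defined op
```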
### Versions
main
cc @anjali411
| 0 |
1,356 | 107,694 |
masked_fill_ outputs incorrect results for 'mps' tensor after transpose
|
triaged, module: mps
|
### 🐛 Describe the bug
I found a bug with the 'mps' backend: the `masked_fill_` operation produces incorrect output after `transpose`.
The following code reproduces the behavior on my Mac:
```py
import torch
x = torch.tensor([[
False, False, True, True,],
[True, True, True, True]], device='mps:0')
def transform(x):
    return torch.zeros_like(x, dtype=torch.float32).masked_fill_(x, 1.0)
print(x)
print(transform(x))
print(x.t())
print(transform(x.t()))
```
The output I get from the script is
```
tensor([[False, False, True, True],
[ True, True, True, True]], device='mps:0')
tensor([[0., 0., 1., 1.],
[1., 1., 1., 1.]], device='mps:0')
tensor([[False, True],
[False, True],
[ True, True],
[ True, True]], device='mps:0')
tensor([[0., 1.],
[1., 1.],
[0., 1.],
[1., 1.]], device='mps:0')
```
`transform(x)` works as expected, but `transform(x.t())` results in incorrect output.
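A possible workaround to try (an untested assumption on my side: forcing a contiguous mask may avoid the strided path that appears to be mishandled):
```python
def transform_contig(x):
    # make the boolean mask contiguous before masked_fill_
    return torch.zeros_like(x, dtype=torch.float32).masked_fill_(x.contiguous(), 1.0)

print(transform_contig(x.t()))
```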
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.11.4 (main, Jul 25 2023, 17:36:13) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[conda] No relevant packages
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
1,357 | 107,693 |
Inconsistencies when casting to integral types
|
triaged, module: type promotion, module: arm, module: int overflow
|
### 🐛 Describe the bug
Inconsistencies when casting to integral types
```python
import torch
print("== Casting way out of bounds float to integral types")
print(torch.tensor(1e20, dtype=torch.float).to(dtype=torch.bool))
print(torch.tensor(1e20, dtype=torch.float).to(dtype=torch.uint8)) # seems to be clamping
print(torch.tensor(1e20, dtype=torch.float).to(dtype=torch.int8)) # nonsense
print(torch.tensor(1e20, dtype=torch.float).to(dtype=torch.int16)) # nonsense
print(torch.tensor(1e20, dtype=torch.float).to(dtype=torch.int32)) # seems to be clamping
print(torch.tensor(1e20, dtype=torch.float).to(dtype=torch.int64)) # seems to be clamping
print("\n== Casting slightly out of bounds values")
print("==== uint8")
print(torch.tensor(256, dtype=torch.int64).to(dtype=torch.uint8)) # seems to wraparound
print(torch.tensor(256, dtype=torch.float).to(dtype=torch.uint8)) # seems to wraparound
print("==== int8")
print(torch.tensor(128, dtype=torch.int64).to(dtype=torch.int8)) # seems to wraparound
print(torch.tensor(128, dtype=torch.float).to(dtype=torch.int8)) # seems to wraparound
print("==== int16")
print(torch.tensor(32768, dtype=torch.int64).to(dtype=torch.int16)) # seems to wraparound
print(torch.tensor(32768, dtype=torch.float).to(dtype=torch.int16)) # seems to wraparound
print("==== int32")
print(torch.tensor(2147483648, dtype=torch.int64).to(dtype=torch.int32)) # seems to wraparound
print(torch.tensor(2150000000, dtype=torch.int64).to(dtype=torch.int32)) # seems to wraparound
print(torch.tensor(3000000000, dtype=torch.int64).to(dtype=torch.int32)) # seems to wraparound
print(torch.tensor(9000000000, dtype=torch.int64).to(dtype=torch.int32)) # seems to wraparound
print(torch.tensor(2147483648, dtype=torch.float).to(dtype=torch.int32)) # seems to be clamping
print(torch.tensor(2150000000, dtype=torch.float).to(dtype=torch.int32)) # seems to be clamping
print(torch.tensor(3000000000, dtype=torch.float).to(dtype=torch.int32)) # seems to be clamping
print(torch.tensor(9000000000, dtype=torch.float).to(dtype=torch.int32)) # seems to be clamping
```
The results show that casting a very large float to an integral type does not behave the same across all integral types. They also show that casting an int64 value to int32 behaves differently from casting the same value given as a float to int32.
```
== Casting way out of bounds float to integral types
tensor(True)
tensor(255, dtype=torch.uint8)
tensor(-1, dtype=torch.int8)
tensor(-1, dtype=torch.int16)
tensor(2147483647, dtype=torch.int32)
tensor(9223372036854775807)
== Casting slightly out of bounds values
==== uint8
tensor(0, dtype=torch.uint8)
tensor(0, dtype=torch.uint8)
==== int8
tensor(-128, dtype=torch.int8)
tensor(-128, dtype=torch.int8)
==== int16
tensor(-32768, dtype=torch.int16)
tensor(-32768, dtype=torch.int16)
==== int32
tensor(-2147483648, dtype=torch.int32)
tensor(-2144967296, dtype=torch.int32)
tensor(-1294967296, dtype=torch.int32)
tensor(410065408, dtype=torch.int32)
tensor(2147483647, dtype=torch.int32)
tensor(2147483647, dtype=torch.int32)
tensor(2147483647, dtype=torch.int32)
tensor(2147483647, dtype=torch.int32)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Oct 18 2022, 12:41:40) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @nairbv @mruberry @malfet
| 4 |
1,358 | 107,691 |
torch._dynamo.exc.Unsupported: call_function BuiltinVariable(zip) [ListVariable(), ListVariable(), ListVariable(), UserDefinedObjectVariable(KJTList)] {}
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Steps to reproduce:
pytorch version: https://github.com/pytorch/pytorch/commit/796ce672296c9ae8d7387b18403810aa2f1048a1
torchrec version: 005727ef06cdc808aa7d263ab4f2837938a77ed2
1. Get a working torchrec install (you need fbgemm-gpu and torchrec, installed from source; explicitly uninstall fbgemm-gpu-nightly and torchrec-nightly if they exist)
2. Patch torchrec with https://gist.github.com/ezyang/e8a60401b52a3c7f630c1f7c072ec9f1
3. For ease of diagnosis, patch pytorch with https://github.com/pytorch/pytorch/pull/107683 (but I don't think it is strictly necessary)
4. Patch
```
diff --git a/torch/_dynamo/config.py b/torch/_dynamo/config.py
index b9e828750c7..17eab15f945 100644
--- a/torch/_dynamo/config.py
+++ b/torch/_dynamo/config.py
@@ -205,7 +205,7 @@ enforce_cond_guards_match = True
# run without graph-breaks, but also without comm/compute overlap.
# set torch._dynamo.config.log_level to INFO or DEBUG for more info
# about optimize_ddp behavior.
-optimize_ddp = True
+optimize_ddp = False
# Whether to skip guarding on FSDP-managed modules
skip_fsdp_guards = True
```
```
5. Run `TORCH_LOGS=+dynamo MASTER_ADDR=127.0.0.1 MASTER_PORT=29501 RANK=0 LOCAL_RANK=0 WORLD_SIZE=1 python train_dlrm.py`
```
Traceback (most recent call last):
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 164, in <module>
main()
File "/data/users/ezyang/a/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 52, in main
train()
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 157, in train
print(model(next(train_iterator).to(device)))
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/model_parallel.py", line 266, in forward
return self._dmp_wrapped_module(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/parallel/distributed.py", line 1519, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/parallel/distributed.py", line 1355, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 379, in _convert_frame_assert
return _compile(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 554, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 476, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 443, in transform
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 331, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 331, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 716, in call_function
).call_function(tx, [self] + list(args), kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builtin.py", line 635, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/base.py", line 306, in call_function
unimplemented(f"call_function {self} {args} {kwargs}")
File "/data/users/ezyang/a/pytorch/torch/_dynamo/exc.py", line 172, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function BuiltinVariable(zip) [ListVariable(), ListVariable(), ListVariable(), UserDefinedObjectVariable(KJTList)] {}
from user code:
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 893, in forward
logits = self.model(batch.dense_features, batch.sparse_features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 571, in forward
embedded_sparse = self.sparse_arch(sparse_features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 99, in forward
sparse_features: KeyedTensor = self.embedding_bag_collection(features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/types.py", line 700, in forward
return self.compute_and_output_dist(ctx, dist_input)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/embeddingbag.py", line 704, in compute_and_output_dist
for lookup, dist, sharding_ctx, features in zip(
```
Full debug log https://gist.github.com/ezyang/84f0c66e349682f689f3fb0b38433b0e
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 0 |
1,359 | 107,685 |
Error in ONNX during Export GLU with Opset 18
|
module: onnx, triaged
|
### 🐛 Describe the bug
When attempting to export a PyTorch model containing the `glu` function to ONNX format
using opset version 18, an error is encountered during the ONNX shape inference process.
The error message received is as follows:
```
/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py:1672: UserWarning: The exported ONNX model failed ONNX shape inference. The model will not be executable by the ONNX Runtime. If this is unintended and you believe there is a bug, please report an issue at https://github.com/pytorch/pytorch/issues. Error reported by strict ONNX shape inference: [ShapeInferenceError] Shape inference error(s): (op_type:Split, node name: /Split): [ShapeInferenceError] Neither 'split' input nor 'num_outputs' attribute has been given
(op_type:Sigmoid, node name: /Sigmoid): [TypeInferenceError] Input 0 expected to have type but instead is null
(op_type:Mul, node name: /Mul): [TypeInferenceError] Input 0 expected to have type but instead is null
(Triggered internally at ../torch/csrc/jit/serialization/export.cpp:1403.)
_C._check_onnx_proto(proto)
```
Code to reproduce
```python
import torch
from torch import nn
class Model(nn.Module):
    def forward(self, x):
        x = nn.functional.glu(x, dim=1)
        return x
model = Model()
model.eval()
x = torch.rand(1024, 512)
torch.onnx.export(
model, (x,),
"model.onnx",
verbose=False,
opset_version=18,
)
```
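For what it's worth, an equivalent hand-written GLU one could try exporting, to check whether explicit chunking sidesteps the missing `num_outputs` (a sketch; I have not verified that it avoids the exporter issue):
```python
import torch
from torch import nn

class ManualGLU(nn.Module):
    def forward(self, x):
        a, b = x.chunk(2, dim=1)  # same split that glu performs along dim=1
        return a * torch.sigmoid(b)
```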
### Versions
PyTorch version: 2.1.0.dev20230812+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5320T CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 6
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp_epp avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.9 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 50 MiB (40 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.9.4
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230812+cu121
[pip3] torch-tensorrt==1.5.0.dev0
[pip3] torchaudio==2.1.0.dev20230813+cu121
[pip3] torchdata==0.7.0a0
[pip3] torchmetrics==1.0.3
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0.dev20230813+cu121
[pip3] triton==2.0.0
[conda] Could not collect
| 7 |
1,360 | 107,684 |
[Dynamo] 'NoneType' object is not subscriptable from torchrec (bad error message)
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
UPDATE: This is actually user error: the patch causes an attribute to no longer exist, but Dynamo gives a bad error message instead of correctly surfacing the problem in the user code.
Steps to reproduce:
pytorch version: https://github.com/pytorch/pytorch/commit/796ce672296c9ae8d7387b18403810aa2f1048a1
torchrec version: 005727ef06cdc808aa7d263ab4f2837938a77ed2
1. Get working torchrec install (you need fbgemm-gpu and torchrec, install by source; explicitly uninstall fbgemm-gpu-nightly and torchrec-nightly if they exist)
2. Patch torchrec with https://gist.github.com/ezyang/d1a15bb53e0cfc6e05c4e6b57a49bc85
3. For ease of diagnosis, patch pytorch with https://github.com/pytorch/pytorch/pull/107683 (but I don't think it is strictly necessary)
4. Run `TORCH_LOGS=+dynamo MASTER_ADDR=127.0.0.1 MASTER_PORT=29501 RANK=0 LOCAL_RANK=0 WORLD_SIZE=1 python train_dlrm.py`
Stack trace
```
Traceback (most recent call last):
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 164, in <module>
main()
File "/data/users/ezyang/a/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 52, in main
train()
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 157, in train
print(model(next(train_iterator).to(device)))
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/model_parallel.py", line 266, in forward
return self._dmp_wrapped_module(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/parallel/distributed.py", line 1519, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/parallel/distributed.py", line 1355, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return hijacked_callback(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 626, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 379, in _convert_frame_assert
return _compile(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 571, in _compile
raise InternalTorchDynamoError(str(e)).with_traceback(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 554, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 476, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 443, in transform
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 331, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 331, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 716, in call_function
).call_function(tx, [self] + list(args), kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 716, in call_function
).call_function(tx, [self] + list(args), kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/nn_module.py", line 716, in call_function
).call_function(tx, [self] + list(args), kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 168, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/variables/builtin.py", line 621, in call_function
self.as_python_constant()(
torch._dynamo.exc.InternalTorchDynamoError: 'NoneType' object is not subscriptable
from user code:
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 893, in forward
logits = self.model(batch.dense_features, batch.sparse_features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 571, in forward
embedded_sparse = self.sparse_arch(sparse_features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 99, in forward
sparse_features: KeyedTensor = self.embedding_bag_collection(features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/types.py", line 699, in forward
dist_input = self.input_dist(ctx, *input, **kwargs).wait().wait()
File "/data/users/ezyang/a/torchrec/torchrec/distributed/embeddingbag.py", line 669, in input_dist
awaitables.append(input_dist(features_by_shard))
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/sharding/tw_sharding.py", line 240, in forward
return self._dist(sparse_features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/dist_data.py", line 454, in forward
self._splits_cumsum[rank] : self._splits_cumsum[rank + 1]
```
Full debug logs: https://gist.github.com/ezyang/3340a5fa3330e135e13250b3b4246804
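For context, the failing user code boils down to indexing an attribute that the patch left as `None`; a distilled sketch of that pattern (hypothetical module, not the real torchrec class):
```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._splits_cumsum = None  # stands in for the attribute removed by the patch

    def forward(self, x, rank: int = 0):
        # Plain eager raises TypeError: 'NoneType' object is not subscriptable;
        # under torch.compile this surfaced as InternalTorchDynamoError instead.
        return x[self._splits_cumsum[rank] : self._splits_cumsum[rank + 1]]

torch.compile(M())(torch.randn(8))
```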
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
1,361 | 107,683 |
Actually raise an error on all graph breaks with fullgraph=True
|
Stale, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107683
Previously, if you had a graph break inside a fullgraph=True compile but Dynamo was able to compile a partial graph, we would let the partial graph through anyway (instead of erroring). Now an error is raised.
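A minimal illustration of the behavior change (my own sketch; `print` is assumed to still cause a graph break):
```python
import torch

def f(x):
    x = x * 2
    print("side effect")  # graph break inside the compiled region
    return x + 1

# Previously, fullgraph=True could still let the partial graph through;
# with this change, the graph break raises an error.
torch.compile(f, fullgraph=True)(torch.randn(4))
```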
Fixes https://github.com/pytorch/pytorch/issues/107639
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @Xia-Weiwen
| 3 |
1,362 | 107,680 |
torch.nn.functional.cross_entropy different loss when providing one_hot_target and class weights
|
module: nn, module: loss, triaged
|
### 🐛 Describe the bug
```python
inputs = torch.tensor([[2.3, 2.1, 4.5],
[4.2, 3.2, 5.1]])
target = torch.tensor([1, 2])
one_hot_target = torch.tensor([[0,1,0], [0,0,1]]).float()
weights = torch.tensor([25., 25., 100.])
torch.nn.functional.cross_entropy(inputs, target, reduction="mean", weight=weights) #output: 0.8705
torch.nn.functional.cross_entropy(inputs, one_hot_target, reduction="mean", weight=weights) #output: 54.4052
```
This seems to occur only with `mean` reduction when `weight` is provided; `sum` and `none` produce the same result for both target formats.
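For what it's worth, the gap is consistent with the documented `mean` normalization: with class-index targets the weighted sum is divided by the sum of the selected class weights, while with probability (one-hot) targets it is divided by the batch size. A small check (my own sketch) reproducing both numbers:
```python
import torch
import torch.nn.functional as F

inputs = torch.tensor([[2.3, 2.1, 4.5],
                       [4.2, 3.2, 5.1]])
target = torch.tensor([1, 2])
weights = torch.tensor([25., 25., 100.])

# Per-sample weighted NLL terms (identical for both target formats)
per_sample = -weights[target] * F.log_softmax(inputs, dim=1)[torch.arange(2), target]

print(per_sample.sum() / weights[target].sum())  # ~0.8705 (class-index 'mean')
print(per_sample.sum() / len(target))            # ~54.4052 (one-hot 'mean')
```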
### Versions
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1041-azure-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
CPU MHz: 2445.425
BogoMIPS: 4890.85
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1+cpu
[pip3] torch-scatter==2.1.1
[conda] numpy 1.25.2 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
1,363 | 107,679 |
Enable Mypy Checking in torch/_inductor/debug.py
|
open source, topic: not user facing, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/debug.py
After fix:
Running `mypy --follow-imports=skip torch/_inductor/debug.py` reports "Success: no issues found in 1 source file".
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
1,364 | 107,678 |
[Torch.fx] Torch fx failed to trace torch extension library
|
triaged, module: fx
|
### 🐛 Describe the bug
Hi,
I'm currently focusing on point cloud detection workloads and rely heavily on torch extension libraries such as mmdet3d. I recently attempted to trace a straightforward torch cpp extension using torch.fx, but unfortunately I encountered an error.
For reference, the code I'm working with is directly from the official PyTorch documentation, specifically this [tutorial on cpp extensions](https://pytorch.org/tutorials/advanced/cpp_extension.html).
Is there any known limitation or workaround when it comes to using torch.fx with torch cpp extensions? I'd appreciate any guidance or recommendations on this matter.
``` python
class LLTM(torch.nn.Module):
def __init__(self, input_features, state_size):
super(LLTM, self).__init__()
self.input_features = input_features
self.state_size = state_size
self.weights = torch.nn.Parameter(
torch.empty(3 * state_size, input_features + state_size))
self.bias = torch.nn.Parameter(torch.empty(3 * state_size))
self.reset_parameters()
def reset_parameters(self):
stdv = 1.0 / math.sqrt(self.state_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, +stdv)
def forward(self, input, state):
return LLTMFunction.apply(input, self.weights, self.bias, *state)
```
Here is the tracing code.
``` python
import time
import torch
batch_size = 16
input_features = 32
state_size = 128
X = torch.randn(batch_size, input_features)
h = torch.randn(batch_size, state_size)
C = torch.randn(batch_size, state_size)
rnn = LLTM(input_features, state_size)
compile_rnn = torch.compile(rnn, backend="inductor")
new_h, new_C = compile_rnn(X, (h, C))
import torch.fx
tracer = torch.fx.symbolic_trace(rnn)
print(tracer)
```
Then I got these errors:
```
Traceback (most recent call last):
File "/home/leosys/Documents/test_fx/lltm.py", line 62, in <module>
tracer = torch.fx.symbolic_trace(rnn)
File "/home/leosys/.conda/envs/gencom/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 1150, in symbolic_trace
graph = tracer.trace(root, concrete_args)
File "/home/leosys/.conda/envs/gencom/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/home/leosys/.conda/envs/gencom/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
File "/home/leosys/.conda/envs/gencom/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 817, in trace
(self.create_arg(fn(*args)),),
File "/home/leosys/Documents/test_fx/lltm.py", line 41, in forward
return LLTMFunction.apply(input, self.weights, self.bias, *state)
File "/home/leosys/.conda/envs/gencom/lib/python3.9/site-packages/torch/fx/proxy.py", line 409, in __iter__
return self.tracer.iter(self)
File "/home/leosys/.conda/envs/gencom/lib/python3.9/site-packages/torch/fx/proxy.py", line 309, in iter
raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
```
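One possible workaround (a sketch, assuming the `LLTM`/`LLTMFunction` definitions from the tutorial snippets above): mark the extension call as an fx leaf with `torch.fx.wrap` and pass the state as two explicit tensors, so no Proxy has to be iterated:
```python
import torch
import torch.fx

@torch.fx.wrap
def lltm_apply(input, weights, bias, old_h, old_cell):
    # Recorded as a single call_function node; fx does not trace into the C++ op
    return LLTMFunction.apply(input, weights, bias, old_h, old_cell)

class LLTMTraceable(LLTM):
    def forward(self, input, old_h, old_cell):
        # The state is passed as two tensors, so no Proxy unpacking is needed
        return lltm_apply(input, self.weights, self.bias, old_h, old_cell)

traced = torch.fx.symbolic_trace(LLTMTraceable(input_features, state_size))
print(traced.graph)
```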
### Versions
PyTorch version: 2.1.0.dev20230821
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800X 8-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 4850.1948
CPU min MHz: 2200.0000
BogoMIPS: 7585.37
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230820+cu118
[pip3] torch-geometric==2.3.1
[pip3] torchaudio==2.1.0.dev20230821+cu118
[pip3] torchvision==0.16.0.dev20230821+cu118
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch-nightly
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] numpy-base 1.25.2 py39hb5e798b_0
[conda] pytorch 2.1.0.dev20230821 py3.9_cuda11.8_cudnn8.7.0_0 pytorch-nightly
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.1.0.dev20230820+cu118 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230821+cu118 pypi_0 pypi
[conda] torchtriton 2.1.0+e6216047b8 py39 pytorch-nightly
[conda] torchvision 0.16.0.dev20230821+cu118 pypi_0 pypi
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 2 |
1,365 | 107,674 |
Add mha to Autocast CPU
|
triaged, open source, module: amp (automated mixed precision), ciflow/trunk, topic: not user facing, intel
|
Fixes #106751.
This PR adds `_native_multi_head_attention` to Autocast CPU policy.
Behavior: within the scope of `torch.cpu.amp.autocast(dtype=torch.bfloat16)`, `_native_multi_head_attention` will be forced to run with the bf16 data type.
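Rough usage sketch of what this enables (my own example; the fast path is assumed to be taken in inference mode):
```python
import torch

mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True).eval()
x = torch.randn(2, 8, 64)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out, _ = mha(x, x, x)
print(out.dtype)  # expected: torch.bfloat16
```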
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 5 |
1,366 | 107,668 |
torch.dot gives wrong result on macOS
|
high priority, triaged, module: macos, module: correctness (silent)
|
### 🐛 Describe the bug
When I tried to create a demo, I found the result really baffling:
(1) The demo I created:
```
Last login: Tue Aug 22 14:42:20 on ttys000
(base) crk@crkdeMacBook-Air ~ % conda activate kk
(kk) crk@crkdeMacBook-Air ~ % python
Python 3.8.17 (default, Jul 5 2023, 15:35:58)
[Clang 14.0.6 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.arange(4.0)
>>> x
tensor([0., 1., 2., 3.])
>>> x.requires_grad_(True)
tensor([0., 1., 2., 3.], requires_grad=True)
>>> y = 2 * torch.dot(x, x)
>>> y
tensor(0., grad_fn=<MulBackward0>)
>>>
```
I'm very confused: when I try the same code on Colab, the result of y is
`tensor(28., grad_fn=<MulBackward0>)`
I really want to know why.
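For reference, `torch.dot(x, x)` is just the sum of squares, so the Colab value is the mathematically expected one:
```python
import torch

x = torch.arange(4.0)     # tensor([0., 1., 2., 3.])
print(2 * (x * x).sum())  # 2 * (0 + 1 + 4 + 9) = tensor(28.)
```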
### Versions
My environment is below:
```
(kk) crk@crkdeMacBook-Air ~ % conda list
# packages in environment at /Users/crk/miniconda3/envs/kk:
#
# Name Version Build Channel
appnope 0.1.3 pyhd8ed1ab_0 conda-forge
asttokens 2.2.1 pyhd8ed1ab_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 pyhd8ed1ab_3 conda-forge
backports.functools_lru_cache 1.6.5 pyhd8ed1ab_0 conda-forge
ca-certificates 2023.7.22 hf0a4a13_0 conda-forge
comm 0.1.4 pyhd8ed1ab_0 conda-forge
contourpy 1.1.0 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
debugpy 1.6.7 py38h313beb8_0
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
executing 1.2.0 pyhd8ed1ab_0 conda-forge
fonttools 4.42.0 pypi_0 pypi
glob2 0.7 pypi_0 pypi
importlib-resources 6.0.1 pypi_0 pypi
ipykernel 6.25.1 pyh5fb750a_0 conda-forge
ipython 8.12.0 pyhd1c38e8_0 conda-forge
jedi 0.19.0 pyhd8ed1ab_0 conda-forge
jupyter_client 7.3.4 pyhd8ed1ab_0 conda-forge
jupyter_core 5.3.0 py38hca03da5_0
kiwisolver 1.4.4 pypi_0 pypi
libcxx 14.0.6 h848a8c0_0
libffi 3.4.4 hca03da5_0
libsodium 1.0.18 h27ca646_1 conda-forge
matplotlib 3.7.2 pypi_0 pypi
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
ncurses 6.4 h313beb8_0
nest-asyncio 1.5.6 pyhd8ed1ab_0 conda-forge
numpy 1.24.4 pypi_0 pypi
opencv-python 4.8.0.76 pypi_0 pypi
openssl 3.1.2 h53f4e23_0 conda-forge
packaging 23.1 pyhd8ed1ab_0 conda-forge
pandas 2.0.3 pypi_0 pypi
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pexpect 4.8.0 pyh1a96a4e_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.0.0 pypi_0 pypi
pip 23.2.1 py38hca03da5_0
platformdirs 3.10.0 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.39 pyha770c72_0 conda-forge
prompt_toolkit 3.0.39 hd8ed1ab_0 conda-forge
psutil 5.9.0 py38h1a28f6b_0
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pygments 2.16.1 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.9 pypi_0 pypi
python 3.8.17 hb885b13_0
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.8 2_cp38 conda-forge
pytz 2023.3 pypi_0 pypi
pyzmq 25.1.0 py38h313beb8_0
readline 8.2 h1a28f6b_0
setuptools 68.0.0 py38hca03da5_0
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.41.2 h80987f9_0
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
tk 8.6.12 hb8d0fd4_0
torch 1.9.0 pypi_0 pypi
torchaudio 0.9.0 pypi_0 pypi
torchvision 0.10.0 pypi_0 pypi
tornado 6.1 py38hea4295b_1 conda-forge
tqdm 4.66.0 pypi_0 pypi
traitlets 5.9.0 pyhd8ed1ab_0 conda-forge
typing-extensions 4.7.1 hd8ed1ab_0 conda-forge
typing_extensions 4.7.1 pyha770c72_0 conda-forge
tzdata 2023.3 pypi_0 pypi
wcwidth 0.2.6 pyhd8ed1ab_0 conda-forge
wheel 0.38.4 py38hca03da5_0
xz 5.4.2 h80987f9_0
zeromq 4.3.4 hbdafb3b_1 conda-forge
zipp 3.16.2 pypi_0 pypi
zlib 1.2.13 h5a0b063_0
(kk) crk@crkdeMacBook-Air ~ %
```
cc @ezyang @gchanan @zou3519 @malfet @albanD
| 4 |
1,367 | 107,664 |
adding _int_mm to out_dtype mm WIP
|
Stale, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107664
Summary: it looks like out_dtype doesn't work with max-autotune in
torch.compile
Test Plan: python pytorch/torch/_inductor/ir.py -k "int_mm"
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
1,368 | 107,663 |
`RuntimeError: expected scalar type BFloat16 but found Float` with `torch.nn.TransformerEncoder`
|
module: nn, triaged, oncall: transformer/mha, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
A runtime error occurs when running `torch.nn.TransformerEncoder` inside an AMP autocast scope. The issue occurs both when `enable_nested_tensor` is `True` and when it is `False`.
```python
import torch
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first = True)
model = torch.nn.TransformerEncoder(encoder_layer, num_layers=6, enable_nested_tensor=True)
model.eval()
src_rand = torch.rand(16, 41, 512)
mask_rand = torch.zeros(16, 41)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
out = model(src_rand, src_key_padding_mask = mask_rand)
```
Error message:
```
Traceback (most recent call last):
File "/workspace/test/test1.py", line 13, in <module>
out = model(torch.FloatTensor(src), src_key_padding_mask = mask, is_causal = True)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 387, in forward
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 678, in forward
return torch._transformer_encoder_layer_fwd(
RuntimeError: expected scalar type BFloat16 but found Float
```
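An untested sketch of a possible workaround while this is open: skip autocast and run the module explicitly in bfloat16 so the fast path sees a single dtype (whether the CPU fast path accepts bf16 here is an assumption):
```python
# Reuses model, src_rand, and mask_rand from the repro above
model_bf16 = model.to(torch.bfloat16)
with torch.no_grad():
    out = model_bf16(src_rand.to(torch.bfloat16),
                     src_key_padding_mask=mask_rand.to(torch.bfloat16))
```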
### Versions
```
Collecting environment information... PyTorch version: 2.1.0.dev20230820+cpu Is debug build: False CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 6
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 72 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.1.0.dev20230820+cpu
[pip3] torchaudio==2.1.0.dev20230821+cpu
[pip3] torchvision==0.16.0.dev20230821+cpu
[conda] mkl-include 2023.2.0 pypi_0 pypi
[conda] mkl-static 2023.2.0 pypi_0 pypi
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.1.0.dev20230820+cpu pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230821+cpu pypi_0 pypi
[conda] torchvision 0.16.0.dev20230821+cpu pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 2 |
1,369 | 107,661 |
A backward bug of dtensor seems to be caused by new_empty_strided
|
high priority, oncall: distributed, triaged, module: dtensor
|
### 🐛 Describe the bug
```python
device_mesh = DeviceMesh("cpu", [1,2,3,4])
local_tensor = torch.randn(8,8, requires_grad=True, device="cpu")
my_dtensor = distribute_tensor(local_tensor, device_mesh, [Shard(0)])
# the bug happens when calling backward()
my_dtensor.to_local().sum().backward()
```
```
Traceback (most recent call last):
File "/home/hzh/mytest/test_dist.py", line 101, in main_worker
my_dtensor.to_local().sum().backward()
File "/root/miniconda3/envs/hzh/lib/python3.9/site-packages/torch/_tensor.py", line 491, in backward
torch.autograd.backward(
File "/root/miniconda3/envs/hzh/lib/python3.9/site-packages/torch/autograd/__init__.py", line 250, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/root/miniconda3/envs/hzh/lib/python3.9/site-packages/torch/distributed/_tensor/api.py", line 258, in __torch_dispatch__
res = op_dispatch.operator_dispatch(
File "/root/miniconda3/envs/hzh/lib/python3.9/site-packages/torch/distributed/_tensor/dispatch.py", line 114, in operator_dispatch
out, _, _ = _operator_dispatch(op_call, args, kwargs, sharding_propagator)
File "/root/miniconda3/envs/hzh/lib/python3.9/site-packages/torch/distributed/_tensor/dispatch.py", line 254, in _operator_dispatch
local_results = op_call(*local_tensor_args, **local_tensor_kwargs)
File "/root/miniconda3/envs/hzh/lib/python3.9/site-packages/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: The size of tensor a (8) must match the size of tensor b (2) at non-singleton dimension 0
```
This bug is directly caused by the size check in the `copy_` operator during backward, and it can be traced back to the preceding call to the `new_empty_strided` operator.
https://github.com/pytorch/pytorch/blob/f13101640f548f8fa139c03dfa6711677278c391/torch/csrc/autograd/utils/grad_layout_contract.h#L65-L70
https://github.com/pytorch/pytorch/blob/a506d0ad8f21dd4090594c0aaca62518a5438081/aten/src/ATen/native/native_functions.yaml#L2317
When `new_empty_strided` is called with `self` being a DTensor, it is dispatched into `DTensor.__torch_dispatch__`. There, the `size` and `stride` parameters are used directly to create a local tensor, which is not what I want.
https://github.com/pytorch/pytorch/blob/a506d0ad8f21dd4090594c0aaca62518a5438081/torch/distributed/_tensor/dispatch.py#L128-L146
For the example above, `self` is a DTensor with `placements = [Shard(0)]`, `shape = [8, 8]` and `_local_tensor.shape = [2, 8]`. When I call `new_empty_strided(self, [8, 8], [8, 1])`, I want a DTensor with `shape = [8, 8]` and `_local_tensor.shape = [2, 8]`, just like `self`. But in fact it returns a local tensor with `shape = [8, 8]` after `op_call`, and finally returns a **self-contradictory** DTensor with `placements = [Shard(0)]`, `shape = [8, 8]` and `_local_tensor.shape = [8, 8]`, which is generated simply by calling `DTensor.__new__()`.
https://github.com/pytorch/pytorch/blob/a506d0ad8f21dd4090594c0aaca62518a5438081/torch/distributed/_tensor/dispatch.py#L265
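A small probe of the inconsistency (my own sketch, reusing `my_dtensor` from the snippet above):
```python
# my_dtensor: placements=[Shard(0)], global shape [8, 8], local shard [2, 8]
bad = my_dtensor.new_empty_strided([8, 8], [8, 1])
print(bad.shape)             # DTensor reports the global shape [8, 8]
print(bad.to_local().shape)  # but the local shard is also [8, 8] -> self-contradictory
```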
### Versions
```
PyTorch version: 2.1.0a0+git849fbc6
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.17
Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.1.0a0+git849fbc6
```
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 5 |
1,370 | 107,639 |
fullgraph=True doesn't actually raise error when you don't manage full graph inside DDP
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
This is same repro as https://github.com/pytorch/pytorch/issues/107637
In the logs https://gist.github.com/ezyang/e4bf7345326092138969387ba364f3ea#file-dist-log-L752-L761 you can see that we don't raise an error when the first graph fails to compile; we just keep going.
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 1 |
1,371 | 107,637 |
[DDP PT2] TypeError: convert_frame_assert.<locals>._convert_frame_assert() missing 2 required positional arguments: 'hooks' and 'frame_state'
|
triaged, oncall: pt2, module: dynamo, mlperf
|
### 🐛 Describe the bug
Steps to reproduce:
pytorch version: 796ce672296c9ae8d7387b18403810aa2f1048a1
torchrec version: 005727ef06cdc808aa7d263ab4f2837938a77ed2
1. Get working torchrec install (you need fbgemm-gpu and torchrec, install by source; explicitly uninstall fbgemm-gpu-nightly and torchrec-nightly if they exist)
2. Patch torchrec with
```
diff --git a/examples/golden_training/train_dlrm.py b/examples/golden_training/train_dlrm.py
index 4004fbc..39fb5e4 100644
--- a/examples/golden_training/train_dlrm.py
+++ b/examples/golden_training/train_dlrm.py
@@ -127,6 +127,8 @@ def train(
)
sharder = EmbeddingBagCollectionSharder(qcomm_codecs_registry=qcomm_codecs_registry)
+ train_model.forward = torch.compile(fullgraph=True)(train_model.forward)
+
model = DistributedModelParallel(
module=train_model,
device=device,
```
In torchrec/examples/golden_training/ run `TORCH_LOGS=dynamo MASTER_ADDR=127.0.0.1 MASTER_PORT=29501 RANK=0 LOCAL_RANK=0 WORLD_SIZE=1 python train_dlrm.py`
Backtrace
```
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 163, in <module>
main()
File "/data/users/ezyang/a/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 52, in main
train()
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 159, in train
train_pipeline.progress(train_iterator)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 1002, in progress
losses, output = cast(Tuple[torch.Tensor, Out], self._model(self._batch_i))
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/model_parallel.py", line 266, in forward
return self._dmp_wrapped_module(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/parallel/distributed.py", line 1519, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/parallel/distributed.py", line 1355, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/torchrec/models/dlrm.py", line 893, in forward
logits = self.model(batch.dense_features, batch.sparse_features)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return hijacked_callback(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 626, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 379, in _convert_frame_assert
return _compile(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 554, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 476, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 443, in transform
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 439, in wrapper
self.output.compile_subgraph(self, reason=reason)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/output_graph.py", line 857, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/ezyang/local/a/pytorch-env/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/output_graph.py", line 957, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/output_graph.py", line 1024, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/output_graph.py", line 1009, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/ezyang/a/pytorch/torch/_dynamo/backends/distributed.py", line 291, in compile_fn
return self.backend_compile_fn(gm, example_inputs)
torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
TypeError: convert_frame_assert.<locals>._convert_frame_assert() missing 2 required positional arguments: 'hooks' and 'frame_state'
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
I spent some time shuffling around looking for the call to convert_frame_assert that was missing these extra arguments but I couldn't find it. Very mysterious.
Note that this model is using DistributedDataParallel.
Full dynamo debug log: https://gist.github.com/ezyang/e4bf7345326092138969387ba364f3ea
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
### Versions
main
| 3 |
1,372 | 107,636 |
Support integer implementations for max_pool1d/2d/3d (cpu and cuda)
|
module: cpu, triaged, open source
|
Fixes #107412
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
cc: @cchan @ezhang887
also supports `torch.compile`
```python
import torch
f = lambda y:torch.nn.functional.max_pool1d(y, kernel_size=3)
f_compiled = torch.compile(f)
x = torch.randn(3,3,3, device="cpu").to(torch.int8)
print(x)
print(f_compiled(x))
```
output
```
tensor([[[ 0, 0, 1],
[ 0, 0, 0],
[ 0, -1, 0]],
[[ 0, 1, 0],
[-1, -1, 0],
[ 0, 1, 0]],
[[ 1, 0, 0],
[ 1, 1, 0],
[-1, 0, -1]]], dtype=torch.int8)
tensor([[[1],
[0],
[0]],
[[1],
[0],
[1]],
[[1],
[1],
[0]]], dtype=torch.int8)
```
| 9 |
1,373 | 107,633 |
nvfuser does not respect CMAKE_INSTALL_PREFIX when build (cmake) libtorch
|
triaged, module: nvfuser
|
### 🐛 Describe the bug
I followed the libtorch build instructions from https://github.com/pytorch/pytorch/blob/main/docs/libtorch.rst#building-libtorch-using-cmake:
```
git clone -b main --recurse-submodule https://github.com/pytorch/pytorch.git
mkdir pytorch-build
cd pytorch-build
cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_BUILD_TYPE:STRING=Release -DPYTHON_EXECUTABLE:PATH=`which python3` -DCMAKE_INSTALL_PREFIX:PATH=../pytorch-install ../pytorch
cmake --build . --target install
```
However, "libnvfuser_codegen.so" is missing from the installation dir "pytorch-install". Looks like it's bc nvfuser_codegen's install destination is hard-coded to <ProjectRoot>/torch/lib. In nvfuser/CMakeLists.txt:
```
set(TORCH_INSTALL_LIB_DIR ${TORCH_ROOT}/torch/lib)
...
install(TARGETS ${NVFUSER_CODEGEN} EXPORT NvfuserTargets DESTINATION "${TORCH_INSTALL_LIB_DIR}")
```
see https://github.com/pytorch/pytorch/blob/2b32a74ab084a9379c9e46a792c95348a6fc0971/third_party/nvfuser/CMakeLists.txt#L21
Please make nvfuser lib install destination settings consistent with other libs.
### Versions
since 2.0.0
cc @kevinstephano @jjsjann123
| 1 |
1,374 | 107,632 |
add user frame to shape guard
|
Stale, release notes: fx, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107632
Differential Revision: [D48534785](https://our.internmc.facebook.com/intern/diff/D48534785/)
| 2 |
1,375 | 107,631 |
torch.fx.Interpreter modules don't get compiled
|
triaged, module: dynamo
|
As a small example:
```
import torch
import torch.fx
class InterpreterModule(torch.fx.GraphModule):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.interpreter = torch.fx.Interpreter(self)
def __call__(self, *args, **kwargs):
return self.interpreter.run(*args, enable_io_processing=False)
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = x + x
x = x * x
return x
gm = torch.fx.symbolic_trace(Mod())
imodule = InterpreterModule({}, gm.graph)
opt_mod = torch.compile(imodule)
for i in range(1000):
opt_mod(torch.ones(2, 3))
```
Turning on the logging, I see that we avoid compiling over `torch.fx.Interpreter` because it's getting skipfile'd. When I try adding `torch.fx.Interpreter` to the skipfile, it skips again with `GraphModule.graph`, so I gave up here and decided to record an issue instead.
Not sure how easy/hard this is to support with dynamo, but it would be nice. Generally we like using `torch.fx.Interpreter` instead of the torch.fx Python codegen for two reasons:
1. You get better stack traces in the case of exceptions
2. It's easier to debug/interact with intermediates
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 4 |
1,376 | 107,630 |
torch._dynamo.exc.InternalTorchDynamoError: 'NoneType' object has no attribute 'guards'
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Steps to reproduce:
pytorch version: 796ce672296c9ae8d7387b18403810aa2f1048a1
torchrec version: 005727ef06cdc808aa7d263ab4f2837938a77ed2
1. Get working torchrec install (you need fbgemm-gpu and torchrec, install by source; explicitly uninstall fbgemm-gpu-nightly and torchrec-nightly if they exist)
2. Patch torchrec with
```
diff --git a/examples/golden_training/train_dlrm.py b/examples/golden_training/train_dlrm.py
index 4004fbc..e22677d 100644
--- a/examples/golden_training/train_dlrm.py
+++ b/examples/golden_training/train_dlrm.py
@@ -151,8 +151,13 @@ def train(
num_embeddings=num_embeddings,
)
)
- for _ in tqdm(range(int(num_iterations)), mininterval=5.0):
- train_pipeline.progress(train_iterator)
+
+ @torch.compile()
+ def f():
+ for _ in tqdm(range(int(num_iterations)), mininterval=5.0):
+ train_pipeline.progress(train_iterator)
+
+ f()
if __name__ == "__main__":
```
3. In torchrec/examples/golden_training/ run `TORCH_LOGS=dynamo MASTER_ADDR=127.0.0.1 MASTER_PORT=29501 RANK=0 LOCAL_RANK=0 WORLD_SIZE=1 python train_dlrm.py`
Backtrace
```
Traceback (most recent call last):
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 164, in <module>
main()
File "/data/users/ezyang/a/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 52, in main
train()
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 160, in train
f()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 157, in f
for _ in tqdm(range(int(num_iterations)), mininterval=5.0):
File "/data/users/ezyang/a/torchrec/examples/golden_training/train_dlrm.py", line 158, in <resume in f>
train_pipeline.progress(train_iterator)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 987, in progress
self._fill_pipeline(dataloader_iter)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 979, in _fill_pipeline
self._init_pipelined_modules(self._batch_i)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 1028, in _init_pipelined_modules
self._pipelined_modules, self._model = _rewrite_model(
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 860, in _rewrite_model
arg_info_list, num_found = _get_node_args(node, feature_processor_nodes)
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 707, in _get_node_args
pos_arg_info_list, num_found = _get_node_args_helper(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 626, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 379, in _convert_frame_assert
return _compile(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 571, in _compile
raise InternalTorchDynamoError(str(e)).with_traceback(
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 554, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 476, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/convert_frame.py", line 443, in transform
tracer.run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/a/pytorch/torch/_dynamo/symbolic_convert.py", line 246, in inner
self.output.guards.update(value.guards)
torch._dynamo.exc.InternalTorchDynamoError: 'NoneType' object has no attribute 'guards'
from user code:
File "/data/users/ezyang/a/torchrec/torchrec/distributed/train_pipeline.py", line 656, in _get_node_args_helper
and child_node in feature_processor_arguments
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
Full dynamo debug log: https://gist.github.com/ezyang/deae63fd778eb63265111a3190561562
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 3 |
1,377 | 107,627 |
ModuleNotFoundError: No module named 'torchgen.code_template'
|
module: build, triaged, module: android, oncall: mobile
|
### 🐛 Describe the bug
I am doing android benchmarking, with reference to https://pytorch.org/tutorials/recipes/mobile_perf.html#benchmarking
**Configuration Version:**
Android NDK: 21.1.6352462
Android SDK: 33.0.0
Android API: 34
Cmake: 3.27.0
**Problem**
When running
```
BUILD_PYTORCH_MOBILE=1 ANDROID_ABI=arm64-v8a
./scripts/build_android.sh -DBUILD_BINARY=ON
```
I got an error in cmake/VulkanCodegen.cmake:54:
```
Traceback (most recent call last):
File "/usr/local/google/home/ericaliuuu/repo/pytorch/cmake/../tools/gen_vulkan_spv.py", line 14, in <module>
from torchgen.code_template import CodeTemplate
ModuleNotFoundError: No module named 'torchgen.code_template'
CMake Error at cmake/VulkanCodegen.cmake:54 (message):
Failed to gen spv.h and spv.cpp with precompiled shaders for Vulkan backend
Call Stack (most recent call first):
caffe2/CMakeLists.txt:6 (include)
```
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: version 3.27.0
Libc version: glibc-2.37
Python version: 3.11.4 (main, Jun 7 2023, 10:13:09) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.3.11-1rodete1-amd64-x86_64-with-glibc2.37
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.41
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 3 MiB (12 instances)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
```
cc @malfet @seemethere
| 2 |
1,378 | 107,623 |
[ao] updating embedding_bag support for fx and eager
|
release notes: quantization
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107623
Summary: our docs were saying dynamic embedding bag wasn't supported, but it actually is (at least at the same level as embeddings are); it just wasn't previously tested/listed.
Test Plan: python test/test_quantization.py -k "test_embedding"
Reviewers:
Subscribers:
Tasks:
Tags:
| 1 |
1,379 | 107,618 |
Add dynamo support for `autograd.Function` with multiple return values.
|
open source, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110417
* __->__ #107618
* #109433
* #110290
Fix: #106389
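For context, a minimal sketch (not taken from the PR) of the kind of multi-output `autograd.Function` this should let dynamo trace without falling back:
```python
import torch

# Illustrative sketch only; any multi-output autograd.Function exercises this path.
class SplitScale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2.0, x * 3.0  # two return values

    @staticmethod
    def backward(ctx, g1, g2):
        # single input -> single gradient, accumulated over both outputs
        return g1 * 2.0 + g2 * 3.0

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    a, b = SplitScale.apply(x)
    return a + b

print(f(torch.randn(3, requires_grad=True)))
```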
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 2 |
1,380 | 107,615 |
[Inductor] Autotuner Model Training
|
open source, module: inductor, module: dynamo, ciflow/inductor
|
```sh
# Model Training
## XGB BASELINE
python3 inductor_autotuner/model/data_gen.py --data_dir /scratch/bohanhou/fresh/data --model_type 0 --output_dir /scratch/bohanhou/fresh/new_experiments/xgb_baseline/
python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/xgb_baseline/train.py --data_dir new_experiments/xgb_baseline/ --output_dir new_experiments/xgb_baseline/
python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/xgb_baseline/train.py --data_dir new_experiments/xgb_baseline/ --output_dir new_experiments/xgb_baseline/ --full_train
python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/test.py --data-dir new_experiments/xgb_baseline/ --model-name xgb_baseline.pkl
## NN Pointwise
python3 inductor_autotuner/model/data_gen.py --data_dir /scratch/bohanhou/fresh/data --model_type 1 --output_dir /scratch/bohanhou/fresh/new_experiments/nn_pointwise/
CUDA_VISIBLE_DEVICES=0 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/nn_pointwise/train.py --data_dir new_experiments/nn_pointwise/ --output_dir new_experiments/nn_pointwise/
CUDA_VISIBLE_DEVICES=1 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/nn_pointwise/train.py --data_dir new_experiments/nn_pointwise/ --output_dir new_experiments/nn_pointwise/ --full_train
CUDA_VISIBLE_DEVICES=1 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/test.py --data-dir new_experiments/nn_pointwise/ --model-name nn_pointwise_False_1400_0.09660947996426718_0.05548533260289992.pkl
CUDA_VISIBLE_DEVICES=1 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/test.py --data-dir new_experiments/nn_pointwise/ --model-name nn_pointwise_True_1400_0.0420697318410453_0.02314407822228018.pkl
## NN L2R
python3 inductor_autotuner/model/data_gen.py --data_dir /scratch/bohanhou/fresh/data --model_type 2 --output_dir /scratch/bohanhou/fresh/new_experiments/nn_l2r/
CUDA_VISIBLE_DEVICES=2 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/nn_l2r/train.py --data_dir new_experiments/nn_l2r/ --output_dir new_experiments/nn_l2r/
CUDA_VISIBLE_DEVICES=3 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/nn_l2r/train.py --data_dir new_experiments/nn_l2r/ --output_dir new_experiments/nn_l2r/ --full_train
CUDA_VISIBLE_DEVICES=1 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/test.py --data-dir new_experiments/nn_l2r/ --model-name nn_l2r_False_700_0.011643019712382935_0.8144301772117615.pkl
## NN L2R SMALL
python3 inductor_autotuner/model/data_gen.py --data_dir /scratch/bohanhou/fresh/data --model_type 3 --output_dir /scratch/bohanhou/fresh/new_experiments/nn_l2r_small/
CUDA_VISIBLE_DEVICES=4 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/nn_l2r_small/train.py --data_dir new_experiments/nn_l2r_small/ --output_dir new_experiments/nn_l2r_small/
CUDA_VISIBLE_DEVICES=5 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/nn_l2r_small/train.py --data_dir new_experiments/nn_l2r_small/ --output_dir new_experiments/nn_l2r_small/ --full_train
CUDA_VISIBLE_DEVICES=5 python3 pytorch/benchmarks/dynamo/inductor_autotuner/model/test.py --data-dir new_experiments/nn_l2r_small/ --model-name nn_l2r_small_False_750_0.016344008184915007_0.8171488046646118.pkl
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107615
* #107489
* #107488
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 1 |
1,381 | 107,611 |
Previous version not found
|
oncall: releng, triaged
|
### 📚 The doc issue
I'm trying to download a specific version of pytorch (1.8.0 with CUDA 11.1) and I found this [page](https://pytorch.org/get-started/previous-versions/) where I could download it using the following command:
`pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html`
No matter what I try, I get the following error message:
> ERROR: Could not find a version that satisfies the requirement torch==1.8.0+cu111 (from versions: 2.0.0, 2.0.0+cpu, 2.0.0+cu117, 2.0.0+cu118, 2.0.1, 2.0.1+cpu, 2.0.1+cu117, 2.0.1+cu118)
> ERROR: No matching distribution found for torch==1.8.0+cu111
Could somebody tell me how to fix this issue? I'm kind of a newbie with PyTorch.
Thanks :)
### Suggest a potential alternative/fix
_No response_
| 3 |
1,382 | 107,607 |
[Fix] add validation logics to TCPStore queries
|
triaged, open source, module: c10d, release notes: distributed (c10d)
|
This PR fixes #106294.
Due to the lack of a request validation mechanism, TCPStore in torch mistakenly treats nmap scan messages as valid query messages, which leads to DDP OOM. The simple solution enforces that the very first query from a client is a validation query carrying a predefined magic number. If the validation fails, the server terminates the connection.
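Roughly, the handshake idea looks like this (an illustrative Python sketch only — the actual change lives in the C++ TCPStore, and the magic number and wire format here are made up):
```python
import socket
import struct

VALIDATION_MAGIC = 0x3C85F7CE  # hypothetical value, for illustration only

def client_connect(host: str, port: int) -> socket.socket:
    """Client side: send the validation query before any real query."""
    s = socket.create_connection((host, port))
    s.sendall(struct.pack("<I", VALIDATION_MAGIC))
    return s

def server_validate(conn: socket.socket) -> bool:
    """Server side: the first 4 bytes must match, otherwise drop the connection."""
    data = conn.recv(4)
    if len(data) < 4 or struct.unpack("<I", data)[0] != VALIDATION_MAGIC:
        conn.close()  # e.g. an nmap probe never sends the magic number
        return False
    return True
```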
| 9 |
1,383 | 107,605 |
Support AMD Ryzen Unified Memory Architecture (UMA)
|
module: rocm, triaged
|
### 🚀 The feature, motivation and pitch
**Background:**
I am using an Asus Zenbook S13 OLED, which runs an AMD Ryzen 6800U APU. The APU comes with a 680M integrated graphics card. The graphics card uses shared memory from the system, and its default is 512MB (please see the screenshot below).

In a Windows environment the memory size changes dynamically, depending on the amount of GPU memory required. But in a Linux environment it shows 512MB of memory (the result of setting Auto in the BIOS), so when I use Stable Diffusion, PyTorch hits an OOM situation. Since the BIOS of the notebook doesn't allow users to modify the amount of dedicated memory, would it be possible for PyTorch to support UMA?
Here is the quote from [AMD Ryzen UMA](https://www.amd.com/en/support/kb/faq/pa-280#faq-Recommendations)
> The UMA Frame Buffer Size when set to Auto (default setting) allows the system to manage the amount of shared memory for graphics. In this configuration, the size of the UMA frame buffer should scale depending on the amount of available system memory, enabling the system to perform in an optimal state. Therefore, it is recommended to leave the setting on Auto, which is ideal for most types of video processing workloads.
### Alternatives
_No response_
### Additional context
Another developer created [torch-apu-helper](https://github.com/pomoke/torch-apu-helper) that uses `CUDAPluggableAllocator` to take advantage of the shared memory on PyTorch. However when I try the code snippet with Stable Diffusion I got the following error:
```
RuntimeError: CUDAPluggableAllocator does not yet support getDeviceStats. If you need it, please file an issue describing your use case.
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 0 |
1,384 | 107,595 |
torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(defaultdict) items [] {}
|
good first issue, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
This occurs in torchrec's FusedEmbeddingBagCollection.
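A hypothetical minimal repro sketch (assumption: this mirrors the torchrec pattern of iterating a `defaultdict` held on an object inside the compiled region):
```python
import collections
import torch

class Holder:
    def __init__(self):
        self.d = collections.defaultdict(list)
        self.d["a"].append(torch.ones(2))

holder = Holder()

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    out = x
    # the .items() call on the defaultdict is what the error message names
    for _, vals in holder.d.items():
        for v in vals:
            out = out + v
    return out

f(torch.zeros(2))
```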
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 3 |
1,385 | 107,593 |
dynamo: don't graph break on ctx.mark_dirty
|
triaged, module: dynamo
|
### 🐛 Describe the bug
Can we add dynamo support for `torch.autograd.Function::ctx.mark_dirty` without a graph break? It currently graph breaks.
Repro: https://gist.github.com/vkuzo/210b2fdf0e0f14cc01e3b6af6a4c176e
Logs: https://gist.github.com/vkuzo/171e765c492ae5175c558409892c5181
This will be needed for `Float8` training UX.
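For reference, a minimal sketch (not the gist above) of the pattern that currently graph-breaks — an `autograd.Function` that mutates its input and reports it via `ctx.mark_dirty`:
```python
import torch

class InplaceDouble(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        x.mul_(2.0)        # mutate the input in place
        ctx.mark_dirty(x)  # tell autograd the input was mutated
        return x

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * 2.0

@torch.compile(backend="eager")
def f(x):
    return InplaceDouble.apply(x) + 1.0

f(torch.randn(4))  # the mark_dirty call is where the graph break happens today
```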
### Versions
master
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 0 |
1,386 | 107,591 |
`repeat_interleave` does not support tensor indexes on different devices while `repeat` does
|
module: onnx, triaged
|
### 🐛 Describe the bug
Hi, the ONNX export of a model using `repeat_interleave` with dynamic shapes fails because `Tensor.repeat_interleave(repeats: Tuple[Union[Tensor, int]])` requires its tensors to be on the same device. The `Tensor.repeat` operator does not have this requirement. Is this difference intended?
```python
import torch
x = torch.ones(2, 32, 64, 64).to("cuda")
n_repeat = torch.tensor(4).to(torch.int32)
res = x.repeat_interleave(n_repeat, 0)
```
raises `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)`,
while
```python
import torch
x = torch.ones(2, 32, 64, 64).to("cuda")
n_repeat = torch.tensor(4).to(torch.int32)
res = x.repeat(n_repeat, 1, 1, 1)
```
do not.
This is an issue for the ONNX export of `repeat_interleave` on a CUDA device where one of the indexes of the repeats is dynamic (for example [here](https://github.com/huggingface/transformers/blob/2df24228d68872d79304b932a68cf56de3061f5b/src/transformers/models/sam/modeling_sam.py#L510)) cc @justinchuby
I wonder if the issue comes from the dispatch being on `repeat_interleave_cuda` (instead of ), which in turn calls `compute_cuda_kernel`, which assumes that `repeat_ptr` points to data on the device?
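As an aside, a workaround sketch for the eager error is to move the repeats tensor onto the input's device (this does not address the dynamic-shape ONNX export case):
```python
import torch

x = torch.ones(2, 32, 64, 64).to("cuda")
n_repeat = torch.tensor(4, dtype=torch.int32)
# Keep the repeats tensor on the same device as the input.
res = x.repeat_interleave(n_repeat.to(x.device), dim=0)
print(res.shape)  # torch.Size([8, 32, 64, 64])
```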
### Versions
PyTorch version: 2.1.0.dev20230820+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.16 (main, May 15 2023, 23:46:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] mypy-protobuf==3.4.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230820+cu118
[pip3] torch-ort==1.15.0
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.1.0.dev20230820+cu118 pypi_0 pypi
[conda] torch-ort 1.15.0 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 2 |
1,387 | 107,590 |
Select on a coalesced COO tensor returns COO tensor with coalesce flag set to False.
|
module: sparse, feature, triaged
|
## Issue description
As in the title. The result of `select` ought to always be coalesced when its input is a coalesced COO tensor, because the ordering of indices in the select result is the same as in the input tensor's indices.
## Code example
```python
>>> a = torch.tensor([[0, 1, 2], [3, 4, 5], [5, 6, 7]]).to_sparse()
>>> a.is_coalesced()
True
>>> a.select(0, 0)
tensor(indices=tensor([[1, 2]]),
values=tensor([1, 2]),
size=(3,), nnz=2, layout=torch.sparse_coo)
>>> a.select(0, 0).is_coalesced()
False
```
The expected result is `True`.
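Until `select` propagates the flag, a possible workaround sketch is to re-coalesce the result (cheap here, since the values are already ordered):
```python
import torch

a = torch.tensor([[0, 1, 2], [3, 4, 5], [5, 6, 7]]).to_sparse()
s = a.select(0, 0).coalesce()  # restores the coalesced flag
assert s.is_coalesced()
```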
## System Info
- PyTorch version: main
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 0 |
1,388 | 107,587 |
Run transformers.OPTForCausalLM(config=config) occurs 'GraphModule' object has no attribute 'compile_subgraph_reason'
|
triaged, oncall: pt2, module: export
|
### 🐛 Describe the bug
I am trying to convert the whole OPT model (forward/backward) without any graph break. This seems to be the same as the closed issue https://github.com/pytorch/pytorch/issues/97319, which is not solved in the nightly version!
test.py
```
import transformers
import torch._dynamo
torch._dynamo.config.suppress_errors = False
def make_data(model, device):
batch_size = 1
seq_len = 16
input = torch.randint(
low=0, high=model.config.vocab_size, size=(batch_size, seq_len), device=device
)
label = torch.randint(low=0, high=model.config.vocab_size, size=(batch_size, seq_len),
device=device)
return input, label
device = torch.device('cuda')
config = transformers.AutoConfig.from_pretrained("facebook/opt-125m")
config.tie_word_embeddings = False
model = transformers.OPTForCausalLM(config=config)
model.to(device)
optimized_model = torch.compile(model, backend='inductor',options={'trace.enabled':True,'trace.graph_diagram':True})
data = make_data(model, device)
model.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast(enabled=True, dtype=torch.float16):
torch._dynamo.explain(model, data[0])
```
Running it produces:
```
Traceback (most recent call last):
File "/home/training/test.py", line 26, in <module>
torch._dynamo.explain(model, data[0])
File "/usr/lib/python3.9/unittest/mock.py", line 1336, in patched
return func(*newargs, **newkeywargs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 702, in explain
return inner(*extra_args, **extra_kwargs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 661, in inner
opt_f(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/training/torch-byteir-training/transformers/models/opt/modeling_opt.py", line 944, in forward
outputs = self.model.decoder(
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/training/torch-byteir-training/transformers/models/opt/modeling_opt.py", line 650, in forward
causal_attention_mask = self._prepare_decoder_attention_mask(
File "/usr/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 637, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 371, in _convert_frame_assert
return _compile(
File "/usr/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 567, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 466, in compile_inner
out_code = transform_code_object(code, transform)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 433, in transform
tracer.run()
File "/usr/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2071, in run
super().run()
File "/usr/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2159, in RETURN_VALUE
self.output.compile_subgraph(
File "/usr/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 829, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/usr/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 953, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 181, in time_wrapper
r = func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 1020, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 1005, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/lib/python3.9/site-packages/torch/_dynamo/repro/after_dynamo.py", line 95, in debug_wrapper
compiled_gm = compiler_fn(copy.deepcopy(gm), example_inputs)
File "/usr/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 644, in dynamo_graph_accumulating_compiler
gm, graphs, op_count, ops_per_graph, break_reasons = _explain_graph_detail(
File "/usr/lib/python3.9/site-packages/torch/_dynamo/backends/debugging.py", line 232, in _explain_graph_detail
if gm.compile_subgraph_reason.graph_break:
File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
torch._dynamo.exc.BackendCompilerFailed: backend='dynamo_graph_accumulating_compiler' raised:
AttributeError: 'GraphModule' object has no attribute 'compile_subgraph_reason'
```
The graph break occurs at:
```
# File "/home/training/torch-byteir-training/transformers/models/opt/modeling_opt.py", line 650, in forward
causal_attention_mask = self._prepare_decoder_attention_mask(
attention_mask, input_shape, inputs_embeds, past_key_values_length
)
```
### Versions
--extra-index-url https://download.pytorch.org/whl/nightly/cu118
--pre
torch==2.1.0.dev20230820+cu118
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
1,389 | 107,586 |
Add support for float8_e4m3fnuz and _e5m2fnuz
|
module: cpu, triaged, open source, NNC, release notes: quantization, release notes: linalg_frontend, skip-pr-sanity-checks
|
This PR relates to the feature in [this feature submission](https://docs.google.com/document/d/1pF2T1xz54IPg1jG7FhykbrpbcJZVelQw0v8vBaoLkfs/edit). It has been based on #104242 which adds similar float8 types.
These new types added in this PR are described in the paper at https://arxiv.org/abs/2206.02915. A brief description and comparison of the types with other float8 types can also be found in the [OpenXLA RFC](https://github.com/openxla/stablehlo/blob/main/rfcs/20230321-fp8_fnuz.md).
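For illustration, once landed these should be usable from Python like the existing float8 dtypes (a sketch, assuming the dtype names follow the `_fnuz` suffix used in the PR title):
```python
import torch

x = torch.randn(4)
a = x.to(torch.float8_e4m3fnuz)
b = x.to(torch.float8_e5m2fnuz)
print(a.dtype, b.dtype)  # torch.float8_e4m3fnuz torch.float8_e5m2fnuz
```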
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @EikanWang @albanD
| 13 |
1,390 | 107,583 |
Enable Mypy Checking in torch/_inductor/triton_heuristics.py
|
triaged, open source, Stale, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/triton_heuristics.py
After the fix, running `mypy --follow-imports=skip torch/_inductor/triton_heuristics.py` reports: `Success: no issues found in 1 source file`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @Xia-Weiwen @ngimel
| 4 |
1,391 | 107,582 |
[FakeTensor] `to` doesn't error with `allow_non_fake_inputs=False`
|
triaged, module: fakeTensor
|
### 🐛 Describe the bug
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
number = 0.5
const = torch.tensor(number)
with FakeTensorMode(allow_non_fake_inputs=False):
x = const.sin() # Fails as expected `Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'.`
x = const.to(torch.float) # This passes
print(x) # tensor(0.5)
x = const.to(torch.int) # This passes
print(x) # FakeTensor(..., size=(), dtype=torch.int32)
```
cc: @zou3519
### Versions
main
| 0 |
1,392 | 107,581 |
[LibTorch/iOS] Building with METAL support script is freezing
|
triaged, oncall: mobile, module: ios
|
### 🐛 Describe the bug
I am trying to build the library with Metal support on iOS. I am executing this command after cloning the repo:
`IOS_ARCH=arm64 USE_PYTORCH_METAL=1 ./scripts/build_ios.sh`
However, it just freezes here at 86%:
```
Consolidate compiler generated dependencies of target torch_cpu
[ 86%] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp.o
```
### Versions
LibTorch-Lite (1.13.0.1):
LibTorch-Lite/Core (= 1.13.0.1)
LibTorch-Lite/Core (1.13.0.1):
LibTorch-Lite/Torch
LibTorch-Lite/Torch (1.13.0.1)
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 6 |
1,393 | 107,580 |
Doc is unclear on how to install pytorch with Cuda via pip
|
triaged, topic: docs
|
### 📚 The doc issue

I've been looking for how to install torch with CUDA via pip for almost a day, and the doc does not help with how to do so.
### Suggest a potential alternative/fix
Explain clearly how to install pytorch using pip, with or without CUDA.
```
To install pytorch with CUDA using pip, you first need to install CUDA on your system if it is compatible with it and then install pytorch with the following command in your shell:
`pip install ...........`
```
| 9 |
1,394 | 107,575 |
Hello, I am continuing to pretrain a llama2-13B model, but the saved state_dict is about a 50GB file
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
@record
def training_function(args):
# get some base rank info
# metric = evaluate.load("glue", "mrpc")
world_size = os.getenv("WORLD_SIZE")
rank = os.getenv("RANK")
local_rank = os.getenv("LOCAL_RANK")
assert world_size is not None, f"WORLD_SIZE is needed {world_size}"
assert rank is not None, f"RANK is needed {rank}"
assert local_rank is not None, f"RANK is needed {local_rank}"
world_size = int(world_size)
rank = int(rank)
local_rank = int(local_rank)
# Instantiate the model (we build the model here so that the seed also control new weights initialization)
# Load the pre-trained model and setup its configuration
model = LlamaForCausalLM.from_pretrained(
args.model_name_and_path,
load_in_8bit=True if args.quantization else None,
device_map="auto" if args.quantization else None,
return_dict=True
)
model.train()
# Initialize accelerator
gradient_accumulation_steps = args.gradient_accumulation_steps
if args.with_tracking:
accelerator = Accelerator(
gradient_accumulation_steps=gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with="tensorboard", project_dir=args.project_dir)
else:
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps, mixed_precision=args.mixed_precision)
# We need to initialize the trackers we use, and also store our configuration
if args.with_tracking:
run = os.path.split(__file__)[-1].split(".")[0]
tracker_cfg = {
'lr': args.lr,
'per_device_batch_size': args.batch_size,
'seed': args.seed,
'num_epoch': args.num_epochs
}
accelerator.init_trackers(run, tracker_cfg)
accelerator.print(f"***** Total GPUS: {accelerator.num_processes} *****")
# set seed
set_seed(args.seed)
# epoch or steps
if hasattr(args.checkpointing_steps, "isdigit"):
if args.checkpointing_steps == "epoch":
checkpointing_steps = args.checkpointing_steps
elif args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
raise ValueError(
f"Argument `checkpointing_steps` must be either a number or `epoch`. `{args.checkpointing_steps}` passed."
)
else:
checkpointing_steps = None
if hasattr(args.lora_save_steps, "isdigit"):
if args.lora_save_steps == "epoch":
lora_save_steps = args.lora_save_steps
elif args.lora_save_steps.isdigit():
lora_save_steps = int(args.lora_save_steps)
else:
raise ValueError(
f"Argument `lora_save_steps` must be either a number or `epoch`. `{args.lora_save_steps}` passed."
)
else:
lora_save_steps = None
# Load the tokenizer and add special tokens
if args.token_name:
accelerator.print(f'Use tokenizer: [{args.token_name}]')
tokenizer = LlamaTokenizer.from_pretrained(args.token_name, )
model.config.vocab_size = tokenizer.vocab_size
model.config.pad_token_id = tokenizer.pad_token_id
model.config.bos_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.resize_token_embeddings(tokenizer.vocab_size)
else:
tokenizer = LlamaTokenizer.from_pretrained(args.model_name_and_path)
tokenizer.add_special_tokens(
{
"pad_token": "<PAD>",
}
)
# get processed dataset
# vol = os.getenv('LPAI_INPUT_DATA_0')
# dataset_all = load_from_disk(f'{vol}/NLP/bigDatasets/pretrain_D/tokened_grouped_eval')
dataset_all = load_from_disk(args.dataset_path)
dataset_all = dataset_all.train_test_split(test_size=0.01)
dataset_train = dataset_all['train']
dataset_val = dataset_all['test']
train_sampler = None
val_sampler = None
# print(f'dist.get_rank() ->{dist.get_rank()}, dist.get_world_size() -> {dist.get_world_size()}')
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding=True,
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
if args.enable_fsdp:
train_sampler = DistributedSampler(
dataset_train,
rank=dist.get_rank(),
# rank=accelerator.device,
seed=args.seed,
num_replicas=dist.get_world_size(),
shuffle=True,
)
if args.run_validation:
val_sampler = DistributedSampler(
dataset_val,
rank=dist.get_rank(),
# rank=accelerator.device,
num_replicas=dist.get_world_size(),
)
train_dataloader = torch.utils.data.DataLoader(
dataset_train,
batch_size=args.batch_size,
num_workers=args.num_workers_dataloader,
pin_memory=True,
sampler=train_sampler if train_sampler else None,
drop_last=True,
# collate_fn=default_data_collator,
collate_fn=collate_fn,
)
if args.run_validation:
eval_dataloader = torch.utils.data.DataLoader(
dataset_val,
batch_size=args.val_batch_size,
num_workers=args.num_workers_dataloader,
pin_memory=True,
sampler=val_sampler if val_sampler else None,
drop_last=True,
# collate_fn=default_data_collator,
collate_fn=collate_fn,
)
if args.use_peft:
model.to(torch.bfloat16)
if args.resume_from_lora:
assert args.resume_from_checkpoint is None, 'lora adapter or checkpoint just need use any one'
if args.resume_from_lora is not None or args.resume_from_lora != "":
accelerator.print(f"Resumeing from lora_adapter: {args.resume_from_lora}")
model=PeftModel.from_pretrained(model, args.resume_from_lora, is_trainable=True)
model.print_trainable_parameters()
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
accelerator.print(f"Resumeing from lora_adapter: {path}")
model=PeftModel.from_pretrained(model, path, is_trainable=True)
model.print_trainable_parameters()
else:
# target_modules = args.lora_modules.split(',')
# accelerator.print(target_modules)
# model.to(torch.bfloat16)
peft_config = LoraConfig(
r=args.lora_r,
lora_alpha=32,
# target_modules=target_modules,
target_modules=["q_proj", "v_proj", "o_proj", "k_proj", "gate_proj", "up_proj"],
bias="none",
task_type="CAUSAL_LM",
lora_dropout=0.05,
inference_mode=False
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
mixed_precision_policy, wrapping_policy = get_policies(fsdp_config, rank)
my_auto_wrapping_policy = fsdp_auto_wrap_policy(model, LlamaDecoderLayer)
bfSixteen = MixedPrecision(
param_dtype=torch.bfloat16,
# Gradient communication precision.
reduce_dtype=torch.bfloat16,
# Buffer precision.
buffer_dtype=torch.bfloat16,
cast_forward_inputs=True,
)
torch.cuda.set_device(local_rank)
model = FSDP(
model,
auto_wrap_policy= my_auto_wrapping_policy if args.use_peft else wrapping_policy,
# mixed_precision=mixed_precision_policy if not fsdp_config.pure_bf16 else None,
mixed_precision=bfSixteen,
sharding_strategy=fsdp_config.sharding_strategy,
backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
device_id=torch.cuda.current_device(),
# forward_prefetch=True,
limit_all_gathers=True,
)
if fsdp_config.fsdp_activation_checkpointing:
policies.apply_fsdp_checkpointing(model)
# We need to keep track of how many total steps we have iterated over
overall_step = 0
# We also need to keep track of the stating epoch so files are named properly
starting_epoch = 0
optimizer = AdamW(params=model.parameters(), lr=args.lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=500,
num_training_steps=(len(train_dataloader) * args.num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
total_batch_size = args.batch_size * accelerator.num_processes * args.gradient_accumulation_steps
num_training_steps=int((len(train_dataloader) * args.num_epochs) // gradient_accumulation_steps)
accelerator.print("***** Running training *****")
accelerator.print(f" Num examples = {len(dataset_train)}")
accelerator.print(f" Num train_loader = {len(train_dataloader)}")
accelerator.print(f" Num eval_loader = {len(eval_dataloader)}")
accelerator.print(f" Num Epochs = {args.num_epochs}")
accelerator.print(f" Instantaneous batch size per device = {args.batch_size}")
accelerator.print(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
accelerator.print(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
accelerator.print(f" Total optimization steps = {num_training_steps}")
progress_bar = tqdm(range(num_training_steps), disable=not accelerator.is_local_main_process)
# Potentially load in the weights and states from a previous save
if args.resume_from_lora:
assert args.resume_from_checkpoint is None, 'lora adapter or checkpoint just need use any one'
if args.resume_from_lora is not None or args.resume_from_lora != "":
path = os.path.basename(args.resume_from_lora)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("lora_epoch_", "")) + 1
resume_step = None
resume_step_r = starting_epoch * len(train_dataloader)
else:
resume_step_r = int(training_difference.replace("lora_step_", ""))
starting_epoch = resume_step_r // len(train_dataloader)
resume_step = resume_step_r - (starting_epoch * len(train_dataloader))
accelerator.print(f'***** In use_peft resume_step {resume_step}')
if args.resume_from_checkpoint:
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
accelerator.print(f"***** Resumed from checkpoint: {args.resume_from_checkpoint} *****")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
accelerator.print(f"***** Resumed from checkpoint: {path} *****")
accelerator.load_state(args.resume_from_checkpoint)
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
resume_step_r = starting_epoch * len(train_dataloader)
else:
resume_step_r = int(training_difference.replace("step_", ""))
starting_epoch = resume_step_r // len(train_dataloader)
resume_step = resume_step_r - (starting_epoch * len(train_dataloader))
accelerator.print(f'***** In checkpoint resume_step {resume_step}')
# overall_step = resume_step // args.gradient_accumulation_steps
# update the progress_bar if load from checkpoint
# Create a gradient scaler for fp16
if train_config.use_fp16 and train_config.enable_fsdp:
scaler = ShardedGradScaler()
elif train_config.use_fp16 and not train_config.enable_fsdp:
scaler = torch.cuda.amp.GradScaler()
if train_config.enable_fsdp:
world_size = int(os.environ["WORLD_SIZE"])
# Now we train the model
for epoch in range(starting_epoch, args.num_epochs):
model.train()
train_sampler.set_epoch(epoch)
accelerator.print(f'***** Ins loop epoch and distribute set epoch *****')
if args.with_tracking:
total_loss = 0
if (args.resume_from_checkpoint or args.resume_from_lora) and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
accelerator.print(f'accelerator.skip_first_batches -> resume_step {resume_step}')
# active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
active_dataloader = skip_first_batches(train_dataloader, resume_step)
overall_step += resume_step_r
progress_bar.update(overall_step)
accelerator.print(f'***** oversteps {overall_step} skip train loader *****')
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
first_f = False
for step, batch in enumerate(active_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
# batch.to(accelerator.device)
with accelerator.accumulate(model):
if not first_f:
log_ids = batch['input_ids']
accelerator.print(tokenizer.batch_decode(log_ids))
first_f = True
for key in batch.keys():
if args.enable_fsdp:
batch[key] = batch[key].to(accelerator.device)
# batch[key] = batch[key].to(local_rank)
else:
batch[key] = batch[key].to('cuda:0')
loss = model(**batch).loss
# loss = loss / gradient_accumulation_steps
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
accelerator.backward(loss)
accelerator.log({"loss": loss.item()}, step=step)
accelerator.print(f"step: {overall_step} loss: {loss.item()}")
if accelerator.sync_gradients:
accelerator.clip_grad_norm_(model.parameters(), 1.0)
# if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
if accelerator.sync_gradients:
overall_step += 1
progress_bar.update(1)
if isinstance(checkpointing_steps, int):
output_dir = f"step_{overall_step}"
if overall_step % checkpointing_steps == 0:
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
# 0.0
# keep_fp32_wrapper
# create singleton saving policies to avoid making over and over
fullstate_save_policy = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(
model, StateDictType.FULL_STATE_DICT, fullstate_save_policy
):
accelerator.print(f'***** model.state_dict() {model.state_dict().keys()} *****')
cpu_state = model.state_dict()
# pwd_dir = Path('./').absolute() / args.output_dir
pwd_dir = os.path.join(args.output_dir, f'epoch_model.pth')
# torch.save(cpu_state, args.output_dir)
# accelerator.print(f'***** pwd dir: {pwd_dir} *****')
accelerator.save(cpu_state, pwd_dir)
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.57.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
Stepping: 6
CPU MHz: 2593.904
BogoMIPS: 5187.80
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq spec_ctrl arch_capabilities
Versions of relevant libraries:
[pip3] flake8==3.8.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.5
[pip3] pytorch-lightning==1.6.5
[pip3] pytorch-metric-learning==1.5.2
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-wpe==0.0.1
[pip3] torch==2.0.1
[pip3] torch-audiomentations==0.11.0
[pip3] torch-complex==0.4.3
[pip3] torch-optimizer==0.1.0
[pip3] torch-pitch-shift==1.2.2
[pip3] torch-stoi==0.1.2
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==0.7.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.21.5 pypi_0 pypi
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] pytorch-metric-learning 1.5.2 pypi_0 pypi
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] pytorch-wpe 0.0.1 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-audiomentations 0.11.0 pypi_0 pypi
[conda] torch-complex 0.4.3 pypi_0 pypi
[conda] torch-optimizer 0.1.0 pypi_0 pypi
[conda] torch-pitch-shift 1.2.2 pypi_0 pypi
[conda] torch-stoi 0.1.2 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchmetrics 0.7.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 2 |
1,395 | 107,574 |
[xla hash update] update the pinned xla hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor, merging
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 5 |
1,396 | 107,573 |
caching keys+values in TransformerDecoderLayer for faster inference
|
triaged, oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
In autoregressive generation, at step k we only need to compute the representation of the new token k+1, conditioned on all the previous ones.
This can be done in $O(k d)$ per step if we cache the previously computed keys and values.
However, the current nn.TransformerDecoderLayer (and Encoder) does not support this.
As a result, the most efficient inference currently possible costs $O(k^2 d)$ per step.
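For illustration, a minimal single-head sketch of the requested behaviour (this is not the nn.TransformerDecoderLayer API; `cached_self_attention`, `w_qkv`, and the `cache` dict are made-up names): each step projects only the newest token and appends its key/value to the cache, so the per-step cost stays $O(k d)$.
```python
import torch
import torch.nn.functional as F

def cached_self_attention(x_new, w_qkv, cache):
    # x_new: (B, 1, d) embedding of the newest token; cache holds the keys and
    # values of the k previous tokens as (B, k, d) tensors.
    d = x_new.shape[-1]
    q, k_new, v_new = (x_new @ w_qkv).split(d, dim=-1)
    cache["k"] = torch.cat([cache["k"], k_new], dim=1) if "k" in cache else k_new
    cache["v"] = torch.cat([cache["v"], v_new], dim=1) if "v" in cache else v_new
    # Attend the single new query against all cached keys: O(k*d) per step.
    attn = F.softmax(q @ cache["k"].transpose(1, 2) / d ** 0.5, dim=-1)
    return attn @ cache["v"]

B, d = 2, 64
w_qkv = torch.randn(d, 3 * d)   # illustrative fused q/k/v projection, no heads or bias
cache = {}
for _ in range(5):              # five autoregressive steps
    out = cached_self_attention(torch.randn(B, 1, d), w_qkv, cache)
print(out.shape, cache["k"].shape)  # torch.Size([2, 1, 64]) torch.Size([2, 5, 64])
```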
### Alternatives
Add an option to feed in previously calculated keys and values, and skip the corresponding recomputation whenever they are provided.
### Additional context
_No response_
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 2 |
1,397 | 107,568 |
RuntimeError: Unsupported value kind: Tensor while torch.jit.script nn.Module
|
oncall: jit
|
### 🐛 Describe the bug
I am trying to script a Lovasz loss function that I found online. There were several errors, and I have fixed them, but now I am at a dead end. The error is RuntimeError: Unsupported value kind: Tensor, which does not sound right, because TorchScript does support Tensor. Please help me!
This is my code:
```
import torch
import torch.nn as nn
import torch.nn.functional as F


class LovaszLoss(nn.Module):
    def __init__(self, per_image=False):
        super().__init__()
        self.per_image = per_image

    def forward(self, logit: torch.Tensor, labels: torch.Tensor):
        return lovasz_hinge(logit, labels, per_image=self.per_image)


def lovasz_hinge(logits, labels, per_image: bool = True, ignore=None):
    r"""
    Binary Lovasz hinge loss
      logits: [B, H, W] Variable, logits at each pixel (between -\infty and +\infty)
      labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
      per_image: compute the loss per image instead of per batch
      ignore: void class id
    """
    if per_image:
        loss_list = []
        for log, lab in zip(logits, labels):
            loss_list.append(lovasz_hinge_flat(
                *flatten_binary_scores(log.unsqueeze(0), lab.unsqueeze(0), ignore)))
        loss = torch.cat(loss_list, 0).mean()
    else:
        loss = lovasz_hinge_flat(
            *flatten_binary_scores(logits, labels, ignore))
    return loss


def lovasz_hinge_flat(logits, labels):
    r"""
    Binary Lovasz hinge loss
      logits: [P] Variable, logits at each prediction (between -\infty and +\infty)
      labels: [P] Tensor, binary ground truth labels (0 or 1)
      ignore: label to ignore
    """
    if len(labels) == 0:
        # only void pixels, the gradients should be 0
        return logits.sum() * 0.0
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    perm = perm.data
    gt_sorted = labels[perm]
    grad = lovasz_grad(gt_sorted)
    loss = torch.dot(F.elu(errors_sorted) + 1, grad)
    return loss


def flatten_binary_scores(scores, labels, ignore=None):
    """Flattens predictions in the batch (binary case); removes labels equal to 'ignore'."""
    scores = scores.view(-1)
    labels = labels.view(-1)
    if ignore is None:
        return scores, labels
    valid = labels != ignore
    vscores = scores[valid]
    vlabels = labels[valid]
    return vscores, vlabels


def lovasz_grad(gt_sorted):
    """Computes the gradient of the Lovasz extension w.r.t. sorted errors.
    See Alg. 1 in the paper.
    """
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1 - gt_sorted).float().cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:  # cover 1-pixel case
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard


if __name__ == "__main__":
    x = torch.rand((1, 3, 256, 256))
    y = torch.rand((1, 3, 256, 256))
    loss = LovaszLoss()
    loss = torch.jit.script(loss)
    print(loss(x, y))
```
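One way to narrow this down (a debugging sketch built on the code above, not part of the original report) is to script each helper function in isolation and see which one raises the error:
```python
# Assumes the functions above are already defined in the current module.
for fn in (lovasz_grad, flatten_binary_scores, lovasz_hinge_flat, lovasz_hinge):
    try:
        torch.jit.script(fn)
        print(f"{fn.__name__}: scripted OK")
    except RuntimeError as e:
        print(f"{fn.__name__}: failed with: {e}")
```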
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 535.86.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
CPU max MHz: 5000,0000
CPU min MHz: 800,0000
BogoMIPS: 5199.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1,5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.7
[pip3] torch==2.0.1
[pip3] torchmetrics==1.0.3
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[pip3] tritonclient==2.36.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] pytorch-lightning 2.0.7 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
[conda] tritonclient 2.36.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,398 | 107,567 |
Hacked up SHAPE_ENV provenance
|
release notes: fx, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107567
* #107439
* #107471
* #107562
* #107532
* #107530
* #107516
* #107505
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
1,399 | 107,566 |
[pt2] enable meta tests for `foreach` ops
|
open source, Stale
|
Stack from [ghstack](https://github.com/ezyang/ghstack):
* __->__ #107566
* #107560
| 3 |
1,400 | 107,561 |
Dynamo guards on unused Tensor variables
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
```
import torch

@torch.compile(backend="eager")
def f(x, y):
    return x * 2

f(torch.randn(2), torch.randn(3))
```
Produces these guards:
```
(/home/ezyang/local/b/pytorch-env) [ezyang@devgpu005.nha1 ~/local/b/pytorch (fe888068)]$ TORCH_LOGS=guards python n.py
[2023-08-20 17:50:05,934] [0/0] torch._dynamo.guards.__guards: [DEBUG] GUARDS:
[2023-08-20 17:50:05,934] [0/0] torch._dynamo.guards.__guards: [DEBUG] hasattr(L['x'], '_dynamo_dynamic_indices') == False # _dynamo/variables/builder.py:1237 in wrap_fx_proxy_cls
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] hasattr(L['y'], '_dynamo_dynamic_indices') == False # _dynamo/variables/builder.py:1237 in wrap_fx_proxy_cls
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] ___is_grad_enabled() # _dynamo/output_graph.py:345 in init_ambient_guards
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] not ___are_deterministic_algorithms_enabled() # _dynamo/output_graph.py:341 in init_ambient_guards
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] ___is_torch_function_enabled() # _dynamo/output_graph.py:349 in init_ambient_guards
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:347 in init_ambient_guards
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[2], stride=[1]) # _dynamo/variables/builder.py:1237 in wrap_fx_proxy_cls
[2023-08-20 17:50:05,935] [0/0] torch._dynamo.guards.__guards: [DEBUG] check_tensor(L['y'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[3], stride=[1]) # _dynamo/variables/builder.py:1237 in wrap_fx_proxy_cls
```
Guarding on y is unnecessary because it is unused. In fact, we know to prune it from the graph we compile, but we don't know to avoid doing guard tests on it.
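A hedged way to observe the practical cost (a sketch; `counters["stats"]["unique_graphs"]` is an internal counter whose layout is an assumption and may change across versions): changing only the shape of the unused `y` should fail its `check_tensor` guard and record a second graph.
```python
import torch
from torch._dynamo.utils import counters

@torch.compile(backend="eager")
def f(x, y):
    return x * 2

f(torch.randn(2), torch.randn(3))
f(torch.randn(2), torch.randn(4))  # only the unused y changed shape
# Run with TORCH_LOGS=recompiles to see the failing check_tensor(L['y'], ...) guard.
print(counters["stats"]["unique_graphs"])  # 2 if the guard on y forces a recompile; ideally 1
```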
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
| 3 |