Serial Number: int64 (values 1 to 6k)
Issue Number: int64 (values 75.6k to 112k)
Title: string (length 3 to 357)
Labels: string (length 3 to 241, nullable)
Body: string (length 9 to 74.5k, nullable)
Comments: int64 (values 0 to 867)
1,201
108,401
Crash on converting circular padding to onnx
module: onnx, triaged
### ๐Ÿ› Describe the bug converting a network with circular padding to onnx model suffers a crash. here is the code ``` def circular_padding(x,sz=1): x = F.pad(x,(sz,sz,0,0), mode='circular') # unable to convert x = F.pad(x,(0,0,sz,sz)) # x = torch.cat([x[:,:,:,-sz:].clone(),x,x[:,:,:,:sz].clone()],dim = -1).clone() # a workaround return x class Decoder(nn.Module): def __init__(self,args): super(Decoder, self).__init__() self.conv = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=0, bias=False) def forward(x): x = circular_padding(x) x = self.conv(x) return x if __name__ == '__main__': model = Decoder() # remove all paddings for all conv, and add a circularpadding before the conv device = torch.device('cpu') input = input_val.to(device) input_names = ["input1"] output_names = ["output"] torch.onnx.export(model, input, 'model.onnx', verbose=True, input_names=input_names, output_names=output_names,opset_version=11) ``` and here is the output: ``` Traceback (most recent call last): File "convert.py", line 40, in <module> main() File "convert.py", line 37, in main convert_torch_to_onnx(model, onnx_filename, input) File "convert.py", line 10, in convert_torch_to_onnx torch.onnx.export(model, input, onnx_filename, verbose=True, input_names=input_names, output_names=output_names,opset_version=11) File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export _export( File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export graph, params_dict, torch_out = _model_to_graph( File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph graph = _optimize_graph( File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph graph = _C._jit_pass_onnx(graph, operator_export_type) File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function return symbolic_fn(graph_context, *inputs, **attrs) File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/symbolic_opset11.py", line 864, in pad return opset9._pad_circular(g, input, pad) File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py", line 1919, in _pad_circular left = symbolic_helper._slice_helper( File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py", line 729, in _slice_helper return _slice10(g, input, axes, starts, ends, steps, dynamic_slice) File "/usr/local/lib/miniconda3/envs/SDC/lib/python3.8/site-packages/torch/onnx/symbolic_opset10.py", line 352, in _slice if ends[0] > _constants.INT64_MAX: TypeError: '>' not supported between instances of 'NoneType' and 'int' ``` ### Versions Collecting environment information... 
PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.8.15 (default, Nov 4 2022, 20:59:55) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.7.64 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB GPU 1: NVIDIA A100-SXM4-80GB Nvidia driver version: 515.65.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz Stepping: 6 CPU MHz: 3400.000 CPU max MHz: 3400.0000 CPU min MHz: 800.0000 BogoMIPS: 5200.00 Virtualization: VT-x L1d cache: 3 MiB L1i cache: 2 MiB L2 cache: 80 MiB L3 cache: 96 MiB NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-127 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.23.4 [pip3] torch==2.0.1 [pip3] torch-tb-profiler==0.4.1 [pip3] torchaudio==2.0.1 [pip3] 
torchvision==0.15.1 [pip3] triton==2.0.0 [conda] numpy 1.23.4 pypi_0 pypi [conda] torch 2.0.1 pypi_0 pypi [conda] torch-tb-profiler 0.4.1 pypi_0 pypi [conda] torchaudio 2.0.1 pypi_0 pypi [conda] torchvision 0.15.1 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi
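The commented-out line in the snippet above already hints at a cat/slice workaround. As a reference, here is a minimal, self-contained sketch of that idea (the helper name and tensor shapes are illustrative, not taken from the issue); it builds the wrap-around columns explicitly instead of relying on `F.pad(..., mode='circular')`:

```python
import torch
import torch.nn.functional as F

def circular_pad_last_dim(x: torch.Tensor, sz: int = 1) -> torch.Tensor:
    # Wrap-around padding of the last dimension built from slices and cat,
    # avoiding the circular-pad branch of the ONNX symbolic that crashes here.
    left = x[..., -sz:]    # last columns wrapped to the front
    right = x[..., :sz]    # first columns wrapped to the back
    return torch.cat([left, x, right], dim=-1)

x = torch.arange(2 * 3 * 4 * 5, dtype=torch.float32).reshape(2, 3, 4, 5)
manual = circular_pad_last_dim(x, sz=1)
builtin = F.pad(x, (1, 1, 0, 0), mode="circular")
assert torch.equal(manual, builtin)  # identical results in eager mode
```

Since the two versions agree in eager mode, the helper can be swapped in just for export if desired.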
0
1,202
108,400
quantized module serialization through prepack function registration
module: cpu, triaged, open source, release notes: quantization
Fixes #108399

# Motivation

PyTorch can currently serialize and deserialize the parameters of a quantized model only for the officially supported QEngines; with the current code structure there is no way to extend this to an out-of-tree QEngine. `torch.save`/`torch.load` would fail with a ScriptModule on the QuantizedXPU backend. Serializing a quantized module (e.g. `ConvPackedParamsBase, LinearPrepackParamsBase`) depends on the QEngine-specific `prepack&unpack` implementation. We propose a mechanism that allows out-of-tree QEngines to register their `prepack` implementation in PyTorch so the overall serialization process can complete.

# Modification

1. Add the `register_prepack`/`get_device_prepack_fn` utility pair for `qconv` and `qlinear` to register and query the pointer to each QEngine's prepacking function.
2. Add the qengine `QXPU`, the engine for quantized XPU.

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
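The `register_prepack`/`get_device_prepack_fn` pair described above is implemented in C++; the block below is only a rough Python analogue of what such a registry could look like, and every name other than the two quoted from the PR is hypothetical:

```python
# Rough Python analogue of the proposed registry (the actual change is in C++).
from typing import Callable, Dict

_PREPACK_REGISTRY: Dict[str, Callable] = {}

def register_prepack(engine: str, fn: Callable) -> None:
    # Called once by each backend, including out-of-tree QEngines.
    _PREPACK_REGISTRY[engine] = fn

def get_device_prepack_fn(engine: str) -> Callable:
    # Used by the serialization/deserialization path to find the right prepack.
    if engine not in _PREPACK_REGISTRY:
        raise RuntimeError(f"no prepack function registered for qengine {engine!r}")
    return _PREPACK_REGISTRY[engine]

# A made-up XPU backend registering a no-op prepack at import time.
register_prepack("qxpu", lambda weight, bias: (weight, bias))
print(get_device_prepack_fn("qxpu"))
```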
13
1,203
108,399
Generalize weight prepacking during quantized model deserialization
oncall: quantization, triaged
### 🚀 The feature, motivation and pitch

## Motivation

PyTorch can currently serialize and deserialize the parameters of a quantized model only for the officially supported QEngines; with the current code structure there is no way to extend this to an out-of-tree QEngine. `torch.save`/`torch.load` would fail with a ScriptModule on the QuantizedXPU backend. Serializing a quantized module (e.g. `ConvPackedParamsBase, LinearPrepackParamsBase`) depends on the QEngine-specific `prepack&unpack` implementation. We want to propose a mechanism that allows out-of-tree QEngines to register their `prepack` implementation in PyTorch so the overall serialization process can complete.

## Current behaviour

The current pickling method calls the `prepack` of the different backends based on compilation macros and the global context `QEngine`. For each `QEngine`, the code that uses the prepacking function to deserialize weights has to be added in https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp#L481, e.g. `PackedLinearWeightsQnnp::prepack`, `PackedLinearWeightsOnednn::prepack`, `PackedLinearWeight::prepack`. There is no way to extend this to out-of-tree QEngines. It is also inconvenient for the officially supported qengines: for any new qengine, deserialization code must be added, and maintainers need to know some details of the pickling method.

## Plan

Since the `prepack` function has the same signature for each `QEngine`, we suggest generalizing the process of calling the prepacking function, looking it up at runtime based on the global context `QEngine`. The proposal: the library maintains a dictionary whose keys are `QEngine` values and whose values are pointers to the corresponding prepacking functions. Each engine only needs to register its prepack function in the dictionary. When deserializing a weight file, the `__setstate__` method can use the `QEngine` as the key to find the proper prepacking function pointer. Maintainers of each officially supported `QEngine` only need to register the prepacking function (through `register_prepack_fn`); no code needs to be added in https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp#L481 anymore. Maintainers of out-of-tree `QEngine`s can register the prepacking function within their own library; the function pointer is inserted into the map at registration and found at runtime, with no modification needed in PyTorch. (Maintainers still need to add the `QEngine` enum value in PyTorch.)

![quantsave0](https://github.com/pytorch/pytorch/assets/12457857/8c86a538-0b53-4640-9cb9-41ac83cfd08a)

@ezyang @arthuryuan1987 @gujinghui

### Alternatives

_No response_

### Additional context

_No response_

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
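To make the dictionary-based plan concrete, here is a hedged Python sketch of the lookup side (class and registry names are hypothetical; the real mechanism would live in C++ next to the packed-parameter classes) showing deserialization picking the prepack function from the current global qengine instead of hard-coding per-backend branches:

```python
import pickle
import torch

# Hypothetical registry; in the proposal each QEngine would add itself via
# something like register_prepack_fn.  A no-op prepack is registered for
# whatever engine the current build reports, just so the demo runs.
PREPACK_FNS = {torch.backends.quantized.engine: lambda w, b: (w, b)}

class LinearPackedParams:
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def __getstate__(self):
        # Serialize only the unpacked tensors; packing is backend specific.
        return (self.weight, self.bias)

    def __setstate__(self, state):
        weight, bias = state
        engine = torch.backends.quantized.engine  # current global QEngine
        prepack = PREPACK_FNS[engine]             # looked up, not hard-coded per backend
        self.weight, self.bias = prepack(weight, bias)

restored = pickle.loads(pickle.dumps(LinearPackedParams(torch.randn(4, 4), torch.zeros(4))))
print(restored.weight.shape)
```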
6
1,204
108,384
[vision hash update] update the pinned vision hash
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml). Update the pinned vision hash.
4
1,205
108,381
FSDP always puts parameters to fp32 when loading state_dict, even if state_dict has bf16 params
oncall: distributed
### 🐛 Describe the bug

When a model trained without FSDP is loaded with FSDP and the state_dict to load is bf16, loading into an FSDP sharded_state_dict casts the parameters to fp32. In my particular case I am converting LLaMA weights to PyTorch FSDP format for FSDP training, but the checkpoint I'm working with is in bf16. This could cause issues when users want to benchmark numerical equivalence.

### Versions

main

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
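A quick way to see the dtype behaviour outside of FSDP is to load a bf16 state_dict into an fp32 module; the single-process sketch below only illustrates that `load_state_dict` copies into the existing fp32 storage (it does not reproduce the FSDP sharded path, which needs a distributed setup):

```python
import torch
import torch.nn as nn

# Save a bf16 state_dict and load it into a freshly constructed fp32 module.
# load_state_dict copies into the existing fp32 parameters, so the values are
# upcast; FSDP shows the analogous behaviour when it materializes its shards.
model = nn.Linear(4, 4)
bf16_state = {k: v.to(torch.bfloat16) for k, v in model.state_dict().items()}

fresh = nn.Linear(4, 4)
fresh.load_state_dict(bf16_state)
print({k: v.dtype for k, v in fresh.state_dict().items()})  # all torch.float32
```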
0
1,206
108,378
NCCL ISend is not asynchronous
oncall: distributed, module: c10d
### ๐Ÿ› Describe the bug NCCL backend isend will block if no matching irecv from peer; Run the below script with 2 workers will result in: rank 1 finishes, but rank 0 hang. However, if you switch from NCCL backend and GLOO backend, both processes do not hang. ```python import torch torch.distributed.init_process_group(backend="nccl") torch.cuda.set_device(torch.distributed.get_rank() % torch.distributed.get_world_size()) t = torch.randn(16, 256, 128).cuda() if torch.distributed.get_rank() == 0: req = torch.distributed.isend(tensor=t, dst=1) print(f"{torch.distributed.get_rank()} finishes") ``` ### Versions PyTorch version: 2.1.0a0+git2b1058c Is debug build: False CUDA used to build PyTorch: 12.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 10.5.0-1ubuntu1~20.04) 10.5.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.31 Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.4.0-156-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 12.2.128 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 535.54.03 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 256 On-line CPU(s) list: 0-255 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 2 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7742 64-Core Processor Stepping: 0 Frequency boost: enabled CPU MHz: 1497.754 CPU max MHz: 2250.0000 CPU min MHz: 1500.0000 BogoMIPS: 4500.20 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 64 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-63,128-191 NUMA node1 CPU(s): 64-127,192-255 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a 
misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] numpy==1.25.2 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] torch==2.1.0a0+git2b1058c [pip3] torchaudio==2.0.1+cu118 [pip3] torchvision==0.15.1+cu118 [pip3] triton==2.1.0 [conda] blas 1.0 mkl [conda] mkl 2023.1.0 h213fc3f_46343 [conda] mkl-service 2.4.0 py310h5eee18b_1 [conda] mkl_fft 1.3.6 py310h1128e8f_1 [conda] mkl_random 1.2.2 py310h1128e8f_1 [conda] numpy 1.25.2 py310h5f9d8c6_0 [conda] numpy-base 1.25.2 py310hb5e798b_0 [conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi [conda] torch 2.1.0a0+git2b1058c dev_0 <develop> [conda] torchaudio 2.0.1+cu118 pypi_0 pypi [conda] torchvision 0.15.1+cu118 pypi_0 pypi [conda] triton 2.1.0 pypi_0 pypi cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
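For contrast with the hanging repro above, a sketch of the non-hanging variant posts the matching `irecv` on rank 1 and waits on both requests; it assumes the same two-process NCCL launch (e.g. via torchrun) as the original script:

```python
import torch
import torch.distributed as dist

# Same setup as the repro, but rank 1 posts the matching irecv so the NCCL
# isend on rank 0 has a peer to complete against.
dist.init_process_group(backend="nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank % torch.cuda.device_count())

t = torch.randn(16, 256, 128).cuda()
if rank == 0:
    req = dist.isend(tensor=t, dst=1)
else:
    req = dist.irecv(tensor=t, src=0)
req.wait()
print(f"{rank} finishes")
```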
2
1,207
108,376
[ONNX] dort to inline onnx model before running ort
open source, release notes: onnx
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #110313 * #110477 * #110178 * __->__ #108376
1
1,208
108,359
[dynamorunner] init FSDP on meta device to fix OOM on large models
ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108359 By default init models with `meta` device when using FSDP. Example command (use own HF token): ``` HUGGING_FACE_HUB_TOKEN=hf_xxx python benchmarks/dynamo/torchbench.py --float16 -dcuda --output=./performance.csv --training --backend=eager --only llama_v2_7b_16h --fsdp --performance --timing --print-memory --multiprocess ``` cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
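Outside the benchmark harness, the meta-device pattern this PR enables looks roughly like the hedged sketch below (model, sizes, and the `materialize` helper are illustrative; it assumes a recent PyTorch where `torch.device` works as a context manager and `Module.to_empty` accepts `recurse`):

```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Build the (potentially huge) model on the meta device so no real memory is
# allocated up front, then let FSDP materialize each module's storage.
# Assumes torch.distributed/NCCL is already initialized (e.g. via torchrun).
with torch.device("meta"):
    model = nn.Sequential(nn.Linear(8192, 8192), nn.Linear(8192, 8192))

def materialize(module: nn.Module) -> None:
    # Allocate real (uninitialized) storage for this module's own parameters.
    module.to_empty(device=torch.cuda.current_device(), recurse=False)

fsdp_model = FSDP(
    model,
    param_init_fn=materialize,
    device_id=torch.cuda.current_device(),
)
```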
6
1,209
108,352
Use fmt::print to format exception messages
open source, release notes: jit
This PR replaces the printf-based way of constructing exceptions with fmt::print, which is easier to use and removes redundant code. Our use of fmt::print also provides a compile-time check of the formatting.
1
1,210
108,345
Cross Entropy doesn't work with the specific batch, but works with each sample from this batch
module: nn, module: cuda, triaged, actionable
### ๐Ÿ› Describe the bug I've found the strange error of cross_entropy loss with one specific batch. ```python import torch labels = torch.load('labels.pt').to('cuda:0') # labels.pt is attached logits = torch.ones(190, 250002, 50).float().to('cuda:0') loss = torch.nn.functional.cross_entropy(logits, labels) ``` The code above falls down with this CUDA error: `../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [188,0,0], thread: [32,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [32,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [33,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [34,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [37,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [38,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [40,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [42,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [46,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [183,0,0], thread: [14,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [180,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [180,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [180,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [180,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [18,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [19,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [184,0,0], thread: [20,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [181,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [181,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [181,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [185,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [185,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [185,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [185,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [185,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [12,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [175,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [34,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [37,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [40,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [43,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [45,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [47,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [48,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [173,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [173,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [173,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [173,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [173,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [173,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [12,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [15,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [16,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [17,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [18,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [21,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [22,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [23,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [24,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [25,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [26,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [28,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [178,0,0], thread: [30,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [174,0,0], thread: [34,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [174,0,0], thread: [38,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [174,0,0], thread: [39,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [174,0,0], thread: [43,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [174,0,0], thread: [48,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [179,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [179,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [179,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [179,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [15,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [17,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [18,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [21,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [22,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [23,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [24,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [25,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [26,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [187,0,0], thread: [27,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [5,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [14,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [15,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [16,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [21,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [23,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [24,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [26,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [27,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [186,0,0], thread: [28,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [177,0,0], thread: [16,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [14,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [17,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [19,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [20,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [21,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [23,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [24,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [27,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [29,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [30,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [172,0,0], thread: [31,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [12,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [14,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [15,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [16,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [182,0,0], thread: [17,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [4,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [7,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [10,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [11,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [12,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [176,0,0], thread: [15,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [1,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [2,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [3,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [6,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [8,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [9,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [12,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [13,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [14,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [19,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [21,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [22,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [23,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [24,0,0] Assertion `input_index >= 0` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [25,0,0] Assertion `input_index >= 0` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:107: nll_loss2d_forward_kernel: block: [189,0,0], thread: [27,0,0] Assertion `input_index >= 0` failed.` But, if I pass `cross_entropy` each sample from this batch individually, it works fine. Moreover, the call of `cross_entropy` with entire batch works on cpu. [labels.pt.zip](https://github.com/pytorch/pytorch/files/12485430/labels.pt.zip) ### Versions Versions of relevant libraries: [pip3] imagenetv2-pytorch==0.1 [pip3] lion-pytorch==0.1.2 [pip3] numpy==1.24.4 [pip3] torch==2.0.1 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 [conda] Could not collect cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck
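The repeated `input_index >= 0` assertions typically point at target values outside the valid class range; a hedged sanity check such as the one below (helper name and the corrupted example are illustrative, assuming the default `ignore_index=-100`) can locate the offending entries before the CUDA kernel asserts:

```python
import torch
import torch.nn.functional as F

def check_targets(labels: torch.Tensor, num_classes: int, ignore_index: int = -100) -> None:
    # Flag class indices that the CUDA NLL/cross-entropy kernels cannot handle.
    bad = (labels != ignore_index) & ((labels < 0) | (labels >= num_classes))
    if bad.any():
        print("invalid label values:", labels[bad].unique().tolist())
    else:
        print("all labels are in range")

num_classes = 250002
labels = torch.randint(0, num_classes, (190, 50))
labels[0, 0] = -1                   # deliberately corrupt one target
check_targets(labels, num_classes)  # reports -1 before any CUDA kernel asserts

clean = torch.randint(0, num_classes, (2, 5))
loss = F.cross_entropy(torch.ones(2, num_classes, 5), clean)  # in-range targets work
print(loss)
```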
8
1,211
108,342
ONNX export constant folding messes up with shared weight deduplication
module: onnx, triaged
### 🐛 Describe the bug

Hi, it appears that using `do_constant_folding=True` in the ONNX export undoes some weight deduplication. For example, an `nn.Linear` weight goes from

![image](https://github.com/pytorch/pytorch/assets/9808326/5e0a9e87-8c2a-4f23-ae7c-4c4548e95641) & ![image](https://github.com/pytorch/pytorch/assets/9808326/752aabad-eeda-44eb-bb36-326c692ba242)

to

![image](https://github.com/pytorch/pytorch/assets/9808326/19c8e6c4-00f5-4973-bf0c-356670348c78)

effectively transposing the weight. Because `DeduplicateInitializersByDataPtr` relies on the tensor size, the deduplication pass fails when the shared weight has a different size (e.g. an embedding weight). It seems to me that initializer deduplication should happen before constant folding, and constant folding should be done only for non-shared weights. WDYT @justinchuby @BowenBao? Thank you!

Repro:

```
pip install optimum
optimum-cli export onnx -m bigscience/bloom-560m bloom_onnx --no-post-process
```

and inspect the output with netron.

### Versions

Both on 2.0.1 and nightly
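One way to check whether constant folding is what breaks the deduplication is to export a small weight-tied module twice and count the initializers; the sketch below is generic (model, file names, and opset are placeholders, not the bloom repro) and assumes the `onnx` package is installed for inspection:

```python
import onnx
import torch
import torch.nn as nn

class Tied(nn.Module):
    # Two linear layers sharing one weight tensor, mimicking weight tying.
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(8, 8, bias=False)
        self.b = nn.Linear(8, 8, bias=False)
        self.b.weight = self.a.weight

    def forward(self, x):
        return self.b(self.a(x))

x = torch.randn(1, 8)
for fold in (False, True):
    path = f"tied_fold_{fold}.onnx"
    torch.onnx.export(Tied(), x, path, do_constant_folding=fold, opset_version=17)
    graph = onnx.load(path).graph
    print(f"do_constant_folding={fold}: {len(graph.initializer)} initializer(s)")
```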
2
1,212
108,341
Added attention mechanism error; need to modify torch.use_deterministic_algorithms(True)
triaged, module: determinism
### 🐛 Describe the bug RuntimeError: adaptive_avg_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. This error occurs when adding an attention mechanism while using YOLOv5; I need to set torch.use_deterministic_algorithms(False), but then the run cannot be reproduced. Can you help me? ### Versions def init_seeds(seed=0, deterministic=False): # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) # for Multi-GPU, exception safe # torch.backends.cudnn.benchmark = True # AutoBatch problem https://github.com/ultralytics/yolov5/issues/9287 if deterministic and check_version(torch.__version__, '1.12.0'): # https://github.com/ultralytics/yolov5/pull/8213 torch.use_deterministic_algorithms(True) torch.backends.cudnn.deterministic = True os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8' os.environ['PYTHONHASHSEED'] = str(seed) cc @mruberry @kurtamohler
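The error message itself suggests the `warn_only` escape hatch; independent of the YOLOv5 `init_seeds` helper, a minimal hedged example of keeping determinism on while only warning for unsupported ops could look like this:

```python
import os
import torch
import torch.nn.functional as F

# Keep deterministic algorithms where they exist, but only warn for ops such as
# adaptive_avg_pool2d_backward that lack a deterministic CUDA implementation.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
torch.use_deterministic_algorithms(True, warn_only=True)
torch.backends.cudnn.deterministic = True

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 3, 16, 16, device=device, requires_grad=True)
F.adaptive_avg_pool2d(x, (4, 4)).sum().backward()  # warns on CUDA instead of raising
print(x.grad.shape)
```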
0
1,213
108,332
RuntimeError: dims.value().size() == self->getMaybeRFactorDomain().size()
oncall: jit
### ๐Ÿ› Describe the bug When I was doing the inference with a model export by `torch.jit.script()` (the pytorch version used to export the model is before v2.0, I don't know what is it exactly). And I met this errors: ``` python tools/recognize.py --world-s ize 1 --manifest-in data/manifests/librilight_chunk_cuts_small.jsonl.gz --manifest-out librilight_cuts_test2.jsonl.gz --nn-model-filename exp/exp/jit_script.pt --tokens exp/data/lang_bpe_500/tokens.txt 2023-08-31 15:27:31,194 INFO [recognize.py:323] Decoding started 2023-08-31 15:27:31,197 INFO [recognize.py:336] {'subsampling_factor': 4, 'frame_shift_ms': 10, 'beam_size': 4, 'world_size': 1, 'master_port': 12354, 'manifes t_in': PosixPath('data/manifests/librilight_chunk_cuts_small.jsonl.gz'), 'manifest_out': PosixPath('librilight_cuts_test2.jsonl.gz'), 'log_dir': PosixPath('log s'), 'nn_model_filename': 'exp/exp/jit_script.pt', 'tokens': 'exp/data/lang_bpe_500/tokens.txt', 'decoding_method': 'greedy_search', 'max_duration': 600.0, 're turn_cuts': True, 'num_mel_bins': 80, 'num_workers': 8, 'manifest_out_dir': PosixPath('.'), 'suffix': '.jsonl.gz', 'cuts_filename': 'librilight_cuts_test2', 'b lank_id': 0, 'unk_id': 2, 'vocab_size': 500} 2023-08-31 15:27:31,268 INFO [recognize.py:341] device: cuda:0 2023-08-31 15:27:31,269 INFO [recognize.py:343] Loading jit model 2023-08-31 15:27:44,860 INFO [recognize.py:299] cuts processed until now is 20 /star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501: UserWarning: operator() profile_node %626 : int = prim::profile_ivalue(%624) does not have profile information (Triggered internally at ../third_party/nvfuser/csrc/graph_fuser.cpp:104.) return forward_call(*args, **kwargs) /star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501: UserWarning: FALLBACK path has been taken in$ide: compileCudaFusionGroup. This is an indication that codegen Failed for some reason. To debug try disable codegen fallback path via setting the env variable `export PYTORCH_NVFUSER_DISABLE=fallback` To report the issue, try enable logging via setting the envvariable ` export PYTORCH_JIT_LOG_LEVEL=manager.cpp` (Triggered internally at ../third_party/nvfuser/csrc/manager.cpp:243.) return forward_call(*args, **kwargs) /star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501: UserWarning: FALLBACK path has been taken inside: runCudaFusionGroup. This is an indication that codegen Failed for some reason. To debug try disable codegen fallback path via setting the env variable `export PYTORCH_NVFUSER_DISABLE=fallback` (Triggered internally at ../third_party/nvfuser/csrc/manager.cpp:335.) 
return forward_call(*args, **kwargs) Traceback (most recent call last): File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 433, in <module> main() File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 426, in main run(rank=0, world_size=world_size, args=args, in_cuts=in_cuts) File "/star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 365, in run decode_dataset( File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 287, in decode_dataset hyps, timestamps, scores = decode_one_batch( File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 184, in decode_one_batch encoder_out, encoder_out_lens = model.encoder( File "/star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: shape '[1, 0, 2]' is invalid for input of size 7659520 ``` When I set the `export PYTORCH_NVFUSER_DISABLE=fallback` the logs became: ``` python tools/recognize.py --world-size 1 --manifest-in data/manifests/librilight_chunk_cuts_small.jsonl.gz --manifest-out librilight_cuts_test2.jsonl.gz --nn-model-filename exp/exp/jit_script.pt --tokens exp/data/lang_bpe_500/tokens.txt 2023-08-31 15:33:15,659 INFO [recognize.py:323] Decoding started 2023-08-31 15:33:15,663 INFO [recognize.py:336] {'subsampling_factor': 4, 'frame_shift_ms': 10, 'beam_size': 4, 'world_size': 1, 'master_port': 12354, 'manifest_in': PosixPath('data/manifests/librilight_chunk_cuts_small.jsonl.gz'), 'manifest_out': PosixPath('librilight_cuts_test2.jsonl.gz'), 'log_dir': PosixPath('logs'), 'nn_model_filename': 'exp/exp/jit_script.pt', 'tokens': 'exp/data/lang_bpe_500/tokens.txt', 'decoding_method': 'greedy_search', 'max_duration': 600.0, 'return_cuts': True, 'num_mel_bins': 80, 'num_workers': 8, 'manifest_out_dir': PosixPath('.'), 'suffix': '.jsonl.gz', 'cuts_filename': 'librilight_cuts_test2', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500} 2023-08-31 15:33:15,737 INFO [recognize.py:341] device: cuda:0 2023-08-31 15:33:15,737 INFO [recognize.py:343] Loading jit model 2023-08-31 15:33:29,322 INFO [recognize.py:299] cuts processed until now is 20 /star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/nn/modules/module.py:1501: UserWarning: operator() profile_node %626 : int = prim::profile_ivalue(%624) does not have profile information (Triggered internally at ../third_party/nvfuser/csrc/graph_fuser.cpp:104.) 
return forward_call(*args, **kwargs) Traceback (most recent call last): File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 433, in <module> main() File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 426, in main run(rank=0, world_size=world_size, args=args, in_cuts=in_cuts) File "/star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 365, in run decode_dataset( File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 287, in decode_dataset hyps, timestamps, scores = decode_one_batch( File "/star-kw/kangwei/code/text_search/examples/libriheavy/tools/recognize.py", line 184, in decode_one_batch encoder_out, encoder_out_lens = model.encoder( File "/star-kw/kangwei/dev_tools/anaconda/envs/textsearch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) RuntimeError: dims.value().size() == self->getMaybeRFactorDomain().size() INTERNAL ASSERT FAILED at "../third_party/nvfuser/csrc/parser.cpp":3399, please report a bug to PyTorch. ``` This is the issue in my repo https://github.com/k2-fsa/text_search/issues/53, if you need to reproduce the issue, pls let me know, I can upload some testdata here. ### Versions Collecting environment information... PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.27.2 Libc version: glibc-2.27 Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.4.54-1.0.0.std7c.el7.2.x86_64-x86_64-with-glibc2.27 Is CUDA available: True CUDA runtime version: 10.1.243 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Tesla V100-PCIE-32GB GPU 1: Tesla V100-PCIE-32GB GPU 2: Tesla V100-PCIE-32GB GPU 3: Tesla V100-PCIE-32GB GPU 4: Tesla V100-PCIE-32GB GPU 5: Tesla V100-PCIE-32GB GPU 6: Tesla V100-PCIE-32GB GPU 7: Tesla V100-PCIE-32GB Nvidia driver version: 455.32.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.2 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 72 On-line CPU(s) list: 0-71 Thread(s) per core: 2 Core(s) per socket: 18 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz Stepping: 7 CPU MHz: 3299.999 BogoMIPS: 5200.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 1024K L3 cache: 25344K NUMA node0 CPU(s): 0-17,36-53 NUMA node1 CPU(s): 18-35,54-71 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg f ma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_ l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase 
tsc_adjust bmi1 hle avx2 smep bmi2 er ms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_oc cup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.25.2 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] triton==2.0.0 [conda] numpy 1.25.2 pypi_0 pypi [conda] torch 2.0.1 pypi_0 pypi [conda] torchaudio 2.0.2 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
0
1,214
108,324
[inductor][cpu] Perf regression
triaged, oncall: pt2, module: cpu inductor
<p>perf regression found - compare with 2023_08_22 nightly</p> <p>Repro</p> bash [inductor_single_test.sh](https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh) multiple inference performance suite model float32 first dynamic cpp 0 <p>new_perf_regression</p> <table border="1" class="dataframe table"> <thead> <tr style="text-align: right;"> <th>name</th> <th>batch_size_new</th> <th>speed_up_new</th> <th>inductor_new</th> <th>eager_new</th> <th>compilation_latency_new</th> <th>batch_size_old</th> <th>speed_up_old</th> <th>inductor_old</th> <th>eager_old</th> <th>compilation_latency_old</th> <th>Ratio Speedup(New/old)</th> <th>Eager Ratio(old/new)</th> <th>Inductor Ratio(old/new)</th> <th>Compilation_latency_Ratio(old/new)</th> </tr> </thead> <tbody> <tr> <td>doctr_det_predictor</td> <td>1</td> <td>1.069458</td> <td>0.148391054</td> <td>0.158697999828732</td> <td>33.396333</td> <td>1</td> <td>1.503562</td> <td>0.106578389</td> <td>0.160247215721618</td> <td>37.416409</td> <td>0.71</td> <td>1.01</td> <td>0.72</td> <td>1.12</td> </tr> <tr> <td>pytorch_unet</td> <td>1</td> <td>0.862569</td> <td>0.310560677</td> <td>0.267880012599213</td> <td>18.169774</td> <td>1</td> <td>1.057315</td> <td>0.24839536899999998</td> <td>0.262632149574235</td> <td>27.68669</td> <td>0.82</td> <td>0.98</td> <td>0.8</td> <td>1.52</td> </tr> <tr> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> </tr> <tr> <td>doctr_det_predictor</td> <td>1</td> <td>0.652977</td> <td>3.336484332</td> <td>2.178647529656364</td> <td>38.657023</td> <td>1</td> <td>1.20895</td> <td>1.828299074</td> <td>2.2103221655123</td> <td>36.253952</td> <td>0.54</td> <td>1.01</td> <td>0.55</td> <td>0.94</td> </tr> <tr> <td>pytorch_unet</td> <td>1</td> <td>0.915661</td> <td>5.48157092</td> <td>5.01926071017812</td> <td>20.518048</td> <td>1</td> <td>0.998196</td> <td>4.898655984</td> <td>4.889818808604864</td> <td>29.142998</td> <td>0.92</td> <td>0.97</td> <td>0.89</td> <td>1.42</td> </tr> <tr> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> </tr> </tbody> </table> bash [inductor_single_test.sh](https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh) multiple inference performance torchbench llama float32 first static default 0 <p>new_perf_regression</p> <table border="1" class="dataframe table"> <thead> <tr style="text-align: right;"> <th>name</th> <th>batch_size_new</th> <th>speed_up_new</th> <th>inductor_new</th> <th>eager_new</th> <th>compilation_latency_new</th> <th>batch_size_old</th> <th>speed_up_old</th> <th>inductor_old</th> <th>eager_old</th> <th>compilation_latency_old</th> <th>Ratio Speedup(New/old)</th> <th>Eager Ratio(old/new)</th> <th>Inductor Ratio(old/new)</th> <th>Compilation_latency_Ratio(old/new)</th> </tr> </thead> <tbody> <tr> <td>llama</td> <td>32</td> <td>0.578757</td> <td>0.053369041000000006</td> <td>0.030887706062037</td> <td>35.195648</td> <td>32</td> <td>1.143321</td> <td>0.027263855</td> <td>0.031171337962455</td> <td>40.613965</td> <td>0.51</td> <td>1.01</td> <td>0.51</td> <td>1.15</td> </tr> <tr> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> <td>*</td> </tr> </tbody> </table> <p>SW 
info</p> <table border="1" class="dataframe table"> <thead> <tr style="text-align: right;"> <th>SW</th> <th>Nightly commit</th> <th>Main commit</th> </tr> </thead> <tbody> <tr> <td>Pytorch</td> <td>f54acf0</td> <td>bad3f2d</td> </tr> <tr> <td>Torchbench</td> <td>/</td> <td>770d5cf7</td> </tr> <tr> <td>torchaudio</td> <td>dc83b38</td> <td>66f661d</td> </tr> <tr> <td>torchtext</td> <td>c11d758</td> <td>60bea66</td> </tr> <tr> <td>torchvision</td> <td>58366ab</td> <td>a6dea86</td> </tr> <tr> <td>torchdata</td> <td>1d231d1</td> <td>757c032</td> </tr> <tr> <td>dynamo_benchmarks</td> <td>f228c8b</td> <td>/</td> </tr> </tbody> </table> cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
4
1,215
108,312
[dynamo][stream]support device-agnostic stream in dynamo and capture stream/event method in fx graph
triaged, module: mkldnn, open source, ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
This PR implements two things: 1. support device-agnostic stream and runtime APIs captured by dynamo; 2. support stream methods (including events) captured by dynamo. Details for the first item: previously, the stream captured in dynamo was tightly bound to CUDA. Here we implement a global singleton container named `StreamMethodContainer` through which different backends register their associated stream methods with dynamo. When the backend's product is imported, the stream operations can be registered directly by calling ``` device_stream_method = {'current_stream': method_1, 'create_stream_context': method_2, 'set_stream': method_3, 'set_stream_by_id': method_4} torch._dynamo.stream.register_stream_method(device_name, device_stream_method) ``` Stream methods need to be passed to this API according to the precise semantics represented by the dict keys in `device_stream_method`. After registration, these methods can be used by dynamo to capture stream operations in users' scripts, for example getting the current stream or setting a specific stream. Additionally, the wrapped stream variable and the stream context variable are changed to be device-agnostic; the proxy functions of these variables are assigned from the associated methods in the container. All of this is illustrated below. ![image](https://github.com/pytorch/pytorch/assets/74231238/37ac7350-c539-4167-9886-c3744ecab65d) cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @yanboliang
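To make the registration flow concrete, here is a minimal sketch of how an out-of-tree backend might use this API; `my_backend` and its four helper functions are hypothetical, and only `torch._dynamo.stream.register_stream_method` and the expected dict keys come from this PR description:

```python
import torch
import my_backend  # hypothetical device runtime exposing stream helpers

# Map the semantics dynamo expects onto the backend's own stream utilities.
device_stream_method = {
    "current_stream": my_backend.current_stream,      # return the active stream
    "create_stream_context": my_backend.stream,       # context manager switching streams
    "set_stream": my_backend.set_stream,              # make a given stream current
    "set_stream_by_id": my_backend.set_stream_by_id,  # look a stream up by id and set it
}

# Typically called once at import time of the backend package, so dynamo can
# capture stream operations for "my_device" instead of assuming CUDA.
torch._dynamo.stream.register_stream_method("my_device", device_stream_method)
```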
25
1,216
108,304
[vision hash update] update the pinned vision hash
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml). Update the pinned vision hash.
4
1,217
108,303
[optim] Make casting to match params a hook (2nd try)
ciflow/trunk, release notes: optimizer
Reland of #106725, but instead of registering the hook in the constructor, do it lazily through `@property`. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108303
1
1,218
108,301
[ez] Make internal linter happy
fb-exported
Summary: Title Test Plan: CI Reviewed By: angelayi Differential Revision: D48851508
3
1,219
108,300
Can't run Test/Inductor test: test_compiled_optimizers.py
module: windows, triaged, oncall: pt2
### ๐Ÿ› Describe the bug When I run in terminal: `pytest test_compiled_optimizers.py` I encounter the following error: ``` =============================================== test session starts ================================================ platform win32 -- Python 3.11.1, pytest-7.4.0, pluggy-1.3.0 rootdir: ...\Python Projects\PyTorchContribute\pytorch configfile: pytest.ini plugins: anyio-3.7.0, hypothesis-6.82.7, pytorch-0.2.1 collected 0 items / 1 error ====================================================== ERRORS ====================================================== ____________________________ ERROR collecting test/inductor/test_compiled_optimizers.py ____________________________ ImportError while importing test module '...\Python Projects\PyTorchContribute\pytorch\test\inductor\test_compiled_optimizers.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: test_compiled_optimizers.py:23: in <module> from .test_torchinductor import check_model, check_model_cuda, requires_cuda ...\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\assertion\rewrite.py:178: in exec_module exec(co, module.__dict__) test_torchinductor.py:29: in <module> from torch._dynamo.testing import ( E ImportError: cannot import name 'expectedFailureCodegenDynamic' from 'torch._dynamo.testing' (C:\Users\1idan\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\_dynamo\testing.py) During handling of the above exception, another exception occurred: ...\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) test_compiled_optimizers.py:25: in <module> from test_torchinductor import check_model, check_model_cuda, requires_cuda E ModuleNotFoundError: No module named 'test_torchinductor' ------------------------------------------------- Captured stderr -------------------------------------------------- <class 'ModuleNotFoundError'>: No module named 'test_torchinductor' ============================================= short test summary info ============================================== ERROR test_compiled_optimizers.py !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ================================================= 1 error in 6.13s ================================================= ``` after add at the beginning of test script: `sys.path.append(r'...\Python Projects\PyTorchContribute\pytorch\test\inductor')` I got the following error: `<class 'ImportError'>: cannot import name 'expectedFailureCodegenDynamic' from 'torch._dynamo.testing' (C:\Users\1idan\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\_dynamo\testing.py)` I need assistance in resolving this issue as I want to run the tests and contribute to pytorch. Thanks. ### Versions PyTorch version: 2.1.0.dev20230902 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: ??Microsoft Windows 11 Home GCC version: (MinGW.org GCC-6.3.0-1) 6.3.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: N/A Python version: 3.11.0 | packaged by Anaconda, Inc. 
| (main, Mar 1 2023, 18:18:21) [MSC v.1916 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22621-SP0 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce MX330 Nvidia driver version: 532.09 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=1391 DeviceID=CPU0 Family=198 L2CacheSize=5120 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=1690 Name=11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz ProcessorType=3 Revision= Versions of relevant libraries: [pip3] flake8==6.0.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.12.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] mypy==0.981 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.25.2 [pip3] torch==2.1.0.dev20230902 [pip3] torchaudio==2.2.0.dev20230902 [pip3] torchvision==0.16.0.dev20230902 [conda] blas 1.0 mkl [conda] mkl 2023.1.0 h6b88ed4_46357 [conda] mkl-service 2.4.0 py311h2bbff1b_1 [conda] mkl_fft 1.3.6 py311hf62ec03_1 [conda] mkl_random 1.2.2 py311hf62ec03_1 [conda] mpmath 1.2.1 py311_0 pytorch-nightly [conda] numpy 1.25.2 py311hdab7c0b_0 [conda] numpy-base 1.25.2 py311hd01c5d8_0 [conda] pytorch 2.1.0.dev20230902 py3.11_cuda11.8_cudnn8_0 pytorch-nightly [conda] pytorch-cuda 11.8 h24eeafa_5 pytorch-nightly [conda] pytorch-mutex 1.0 cuda pytorch-nightly [conda] requests 2.28.1 py311_0 pytorch-nightly [conda] torchaudio 2.2.0.dev20230902 pypi_0 pypi [conda] torchvision 0.16.0.dev20230902 pypi_0 pypi cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
9
1,220
108,296
torchbench mfu mem bandwidth+ refactors
topic: not user facing, module: dynamo, ciflow/inductor
``` (sourcetorch) ubuntu@ip-172-31-1-136:~/pytorch$ ./benchmarks/dynamo/torchbench.py --inference --performance --no-skip --inductor --freezing --only stable_diffusion cuda eval stable_diffusion Average memory bandwidth per image over 1 samples: 107.47 GB/s Model FLOPS: 0.48736161619492085 TF/s ``` EDIT: I'm angrily refactoring; we can probably just merge by rebasing at https://github.com/pytorch/pytorch/pull/108296/commits/6019d5a559d9a7ef167e5b151614057f5619f5e4 or I might just do https://github.com/pytorch/pytorch/pull/108296/commits/34145b169ac234ca64ca2cd669101a699245521d in a separate PR EDIT 2: These numbers are quite low, so either that is well known for stable diffusion or there is a bug in my measurement; I will baseline with llama instead EDIT 3: I will research and summarize other implementations of the metric, try to find a denominator for MFU, and finally run a larger sweep. It is expected for most blueberries models that MFU will be low since we are doing bs=1, so we need to find other things cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
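For reference, the usual definition of the MFU metric being added here (this is the standard formulation rather than something spelled out in this PR, so treat it as my gloss):

$$\mathrm{MFU} = \frac{\text{model FLOPs per iteration} \,/\, \text{iteration latency (s)}}{\text{peak accelerator FLOP/s}}, \qquad \text{achieved bandwidth} = \frac{\text{bytes read} + \text{bytes written per iteration}}{\text{iteration latency (s)}}$$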
7
1,221
108,294
Pass 'dynamic' flag from `torch.compile` to backends
open source, release notes: onnx
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108294 * #111593 Previously, `dynamic` was only set as a field of `_TorchCompileWrapper`, but was never accessible from `compiler_fn`.
1
1,222
108,279
[do not land] [FSDP] deepspeed benchmark
null
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108279
1
1,223
108,277
Transformer performance drop due to slow PyTorch GEMMs
module: performance, module: cuda, triaged
### ๐Ÿ› Describe the bug While trying to debug our transformer performance, we see the following problem. When looking at the performance of different matrix multiplications using `torch.nn.functional.linear()`, we see a huge drop in performance for GEMMs of size (2048, 4, 82432)x(82432, 20608) through (2048, 4, 89088)x(89088, 22272) as displayed in the below figures. ![image](https://github.com/pytorch/pytorch/assets/74274091/0a59c531-2cf7-4da3-b93c-f7fa7eec2892) ![image](https://github.com/pytorch/pytorch/assets/74274091/1c64a607-d177-4a67-bf6c-e5231b41d0ac) In the area of bad performance, there is a large drop in L2 cache hit rate, as well as a shifting in the tiling dimensions of the kernel call (128x256x64 to 256x128x64). When investigating this using Cutlass, we saw that it does not share the performance drop PyTorch does for these GEMM sizes. Why do we see this poor performance in GEMMs, and how can we avoid this poor performance? full kernel names outside of dip: `ampere_fp16_s16816gemm_fp16_128x256_ldg8_f2f_stages_64x3_tn` within dip: `ampere_fp16_s16816gemm_fp16_256x128_ldg8_f2f_stages_64x3_tn` full script used to produce figures: ``` import time import torch def benchmark_mm_b(m, n, k, b=None, num_iterations=100): B = torch.randn(k, n).half().to("cuda:0") if b is None: A = torch.randn(m, n).half().to("cuda:0") b=1 C = torch.empty(m, k).half().to("cuda:0") else: A = torch.randn(b,m,n).half().to("cuda:0") C = torch.empty(b,m, k).half().to("cuda:0") num_warmup_iterations = 50 for i in range(num_warmup_iterations + num_iterations): if i == num_warmup_iterations: start_time = time.time() with torch.no_grad(): torch.nn.functional.linear(A, B, out=C) torch.cuda.synchronize() elapsed_time = (time.time() - start_time) / num_iterations if b is None: print(f"Elapsed time for {m}x{n}x{k}: {elapsed_time:.3f}") print(f"Throughput (in TFLOP/s) for {m}x{n}x{k}: {(2 * m * n * k) / (elapsed_time * 10**12):.3f}") else: print(f"Elapsed time for {m}x{n}x{k}, b={b}: {elapsed_time:.4f}") print(f"Throughput (in TFLOP/s) for {m}x{n}x{k}, b={b}: " f"{(2 * b * m * n * k) / (elapsed_time * 10**12):.3f}") print("-" * 80) for h in range(20608-128, 22272+128+64, 64): benchmark_mm_b(4, 4*h, h, b=2048) ``` ### Versions ``` Collecting environment information... 
PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-1037-aws-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.7.64 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 525.85.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz Stepping: 7 CPU MHz: 2999.998 BogoMIPS: 5999.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 1.5 MiB L1i cache: 1.5 MiB L2 cache: 48 MiB L3 cache: 71.5 MiB NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 [conda] cudatoolkit 11.7.0 hd8887f6_10 conda-forge [conda] numpy 1.24.3 pypi_0 pypi [conda] torch 1.13.0+cu116 pypi_0 pypi [conda] torchaudio 0.13.0+cu116 pypi_0 pypi [conda] torchvision 0.14.0+cu116 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi ``` cc @ptrblck
2
1,224
108,268
ONNX-FX based exporter documentation/tutorial topics for PyTorch 2.1
module: onnx, triaged, release notes: onnx
```[tasklist] ### Tasks - [ ] https://github.com/pytorch/tutorials/issues/2543 - [ ] Tutorial: Using diagnostics to find and report bugs (create new diagnostic, generate SARIF and view SARIF?) - [x] Tutorial: Using dynamic shapes - [x] Tutorial: Using fake tensor to export large-scale models - [ ] [Tutorial: Register custom operators](https://microsoft.sharepoint.com/:w:/t/ONNX2/Ee7O0MS48bZEr7ENbFt35-sBQO5xJsgeuZY4f-CU-ltl3w?e=gLvas8) - [ ] Deep dive: ONNX-FX architecture - [ ] Deep dive: Design comparison with TorchScript-based ONNX exporter - [ ] https://github.com/pytorch/pytorch/issues/108274 - [ ] https://github.com/pytorch/tutorials/issues/2552 ```
0
1,225
108,253
Back out "Serialize pytree to json string (#106116)"
fb-exported, module: dynamo, ciflow/inductor, module: export
Summary: Original commit changeset: 7341758b5249 Original Phabricator Diff: D47848392 Test Plan: Backout of backward compat breaking diff. Differential Revision: D48834000 cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
3
1,226
108,247
[REDO][WIP][cuDNN][cuDNN V8 API] Add experimental cuDNN MHA/Flash Attention support
null
This is a copy of https://github.com/pytorch/pytorch/pull/101916 which has gone stale, I am going to rebase and update to try and get some numbers for speed comparison """ Initial implementation of forward pass cuDNN Flash Attention; current major restrictions are: Only packed data layout (e.g., QKV tensors as chunks of the same tensor) supported Only SM 9.0 and SM 8.0 support Only head dim and sequence length divisible by 64 supported Gated by TORCH_CUDNN_MHA_ENABLED=1 environment variable. The plan is to eventually avoid pattern matching against strides as the support matrix from the cuDNN side is improved """
2
1,227
108,246
pack_padded_sequence on GPU device
triaged, oncall: pt2
### ๐Ÿ› Describe the bug `packed_data = pack_padded_sequence(fts, lengths, batch_first=True, enforce_sorted=False)` gives me the error : _'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor_ I get it, and looking at all other bugs I should do this: `packed_data = pack_padded_sequence(fts, mask.cpu(), batch_first=True, enforce_sorted=False)` but gives me: _Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)_ How am I supposed to get `pack_padded_sequence ` working on GPU. It seems that those two issues are unsolvable? ### Versions ``` PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.27.2 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti Nvidia driver version: 535.98 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 Stepping: 4 BogoMIPS: 7199.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities Virtualization: VT-x Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 192 KiB (6 instances) L1i cache: 192 KiB (6 instances) L2 cache: 6 MiB (6 instances) L3 cache: 8.3 MiB (1 instance) Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Retbleed: Mitigation; IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Versions of relevant 
libraries: [pip3] numpy==1.25.2 [pip3] torch==2.0.1 [pip3] torch-geometric==2.3.1 [pip3] triton==2.0.0 [conda] No relevant packages ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
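For reference, a minimal sketch of the usual pattern (my example, with stand-in tensors): the data can stay on the GPU, and only the `lengths` tensor has to be a 1D int64 tensor on the CPU:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-ins for the reporter's tensors: a padded batch on the GPU and
# per-sequence lengths, which pack_padded_sequence requires on the CPU.
fts = torch.randn(4, 10, 8, device=device)   # (batch, max_len, features)
lengths = torch.tensor([10, 7, 5, 3])        # 1D int64 tensor, left on the CPU

packed = pack_padded_sequence(fts, lengths, batch_first=True, enforce_sorted=False)
# The packed data itself stays on the GPU; only lengths/batch_sizes live CPU-side.
unpacked, unpacked_lengths = pad_packed_sequence(packed, batch_first=True)
print(unpacked.device)  # cuda:0 when CUDA is available
```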
1
1,228
108,245
Integrate cutlass headers and scripts in pytorch package
oncall: releng, triaged, topic: binaries, oncall: pt2
### 🐛 Describe the bug Feature request: https://github.com/pytorch/pytorch/issues/106991 PR: https://github.com/pytorch/pytorch/pull/108015 Requirements for the cutlass integration: PyTorch release: to properly release cutlass together with PyTorch, we need to find a way to pack cutlass into the PyTorch package distribution. This includes both the C++ header files (third_party/cutlass/include) as well as the Python scripts and modules (third_party/cutlass/tools/library/scripts). OS support - Linux only. Support CUDA 11.8 and 12.1. OSS use case: it's mainly for performance reasons. Cutlass is the most efficient open-source library on H100 for now (Triton is catching up). The template interface it provides makes it easier to do epilogue fusions. If necessary, we could also do some model-level benchmark testing, though it would require quite some implementation work. We could work with NVIDIA to package this into a cutlass Python dependency. It also looks like there are some additional Python requirements for these scripts. One option is also to always copy-paste these files into our tree into a subfolder with the correct LICENSE header etc. Questions: - Can we build this as a special PyPI or conda package, so that we don't need to integrate cutlass into each of our binaries but depend on it dynamically? See https://github.com/pytorch/builder/blob/main/conda/pytorch-cuda/meta.yaml for an example of such an integration - Need to confirm whether we need the whole of cutlass/tools/library/, rather than only third_party/cutlass/tools/library/scripts. @ipiszy could you please confirm this with a test? - Since this code is the property of NVIDIA, do we have the rights to distribute it in this form? cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @ptrblck @seemethere @malfet @pytorch/pytorch-dev-infra @jansel @ipiszy @albanD @cpuhrsch ### Versions 2.2.0
10
1,229
108,244
Pytorch versions without the abi3 flag
oncall: binaries, triaged, module: python frontend
### 🚀 The feature, motivation and pitch In some projects I work on, I see non-deterministic behavior when I try to import PyTorch together with other libraries (for example, TensorRT). The most prominent issue is the import order, which leads to different behaviors. I can't share the code with you because of my company policy, but when I tried to figure out what the problem is, I came to the conclusion that it might be the abi3 flag which you use (and the other libraries do not). Is there a reason why you don't publish wheel files without this flag? Can you publish versions without the abi3 flag, or explain how I can do that myself? Thanks! ### Alternatives _No response_ ### Additional context _No response_ cc @seemethere @malfet @albanD
2
1,230
108,241
Unrecognized attribute: axes for operator ReduceMean during onnx model conversion
module: onnx, triaged
### ๐Ÿ› Describe the bug I am trying to convert demucs model to onnx , Generally demucs onnx is not supported as of now i have written all preprocessing and postprocessing steps like stft outside model. While converting demucs to onnx i am facing below issue Using opset version: 18 . Using torchnightly version installed from ```pip3 install --pre torch torchaudio torchvision --index-url https://download.pytorch.org/whl/nightly/cu121``` ``` Traceback (most recent call last): File "/../lib/python3.9/site-packages/torch/onnx/utils.py", line 1686, in _export _C._check_onnx_proto(proto) RuntimeError: Unrecognized attribute: axes for operator ReduceMean ==> Context: Bad node spec for node. Name: /ReduceMean OpType: ReduceMean The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/../demucs/onnxfile_create_script.py", line 67, in <module> torch.onnx.export(model, args=(mix, x), f="htdemucs_ft_onnx.onnx", opset_version=18, do_constant_folding=False) File "/home/../lib/python3.9/site-packages/torch/onnx/utils.py", line 516, in export _export( File "/home/../demucs/lib/python3.9/site-packages/torch/onnx/utils.py", line 1688, in _export raise errors.CheckerError(e) from e torch.onnx.errors.CheckerError: Unrecognized attribute: axes for operator ReduceMean ==> Context: Bad node spec for node. Name: /ReduceMean OpType: ReduceMean ``` ### Versions ``` Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] torch==2.1.0.dev20230830+cu121 [pip3] torchaudio==2.1.0.dev20230830+cu121 [pip3] torchvision==0.16.0.dev20230830+cu121 [pip3] triton==2.0.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.7.0 hd8887f6_10 conda-forge [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge [conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge [conda] mkl_random 1.2.2 py39hde0f152_0 conda-forge [conda] numpy 1.24.3 py39h14f4228_0 [conda] numpy-base 1.24.3 py39h31eccc5_0 [conda] pytorch-mutex 1.0 cpu pytorch [conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi [conda] torch 2.1.0.dev20230830+cu121 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230830+cu121 pypi_0 pypi [conda] torchvision 0.16.0.dev20230830+cu121 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi ```
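Context and a possible workaround (my suggestion, not verified against this model): in ONNX opset 18, `ReduceMean` takes `axes` as an input rather than an attribute, which is what the checker is objecting to; exporting at opset 17 keeps the attribute form. Reusing the reporter's call with only the opset changed:

```python
import torch

# `model`, `mix` and `x` are the reporter's objects from the snippet above;
# the only change is opset_version, where ReduceMean still takes `axes` as an attribute.
torch.onnx.export(
    model,
    args=(mix, x),
    f="htdemucs_ft_onnx.onnx",
    opset_version=17,
    do_constant_folding=False,
)
```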
2
1,231
108,238
RuntimeError: NYI: Named tensors are not supported with the tracer
triaged, open source, topic: not user facing
Addresses #48054 (RuntimeError: NYI: Named tensors are not supported with the tracer) and #49538 (jit tracer doesn't work with unflatten layer). #31591 (when I try to export a PyTorch model to ONNX, I get RuntimeError: output of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer) was closed but still exists; multiple comments on it still show the error, and it is also addressed here. Likely fixes the following issues as well (but untested): #63297 (Named tensor in tracer) and #2323 ([Bug] torch.onnx.errors.UnsupportedOperatorError when converting mask2former to onnx). This PR fixes zero-dimensioned tensors when used with jit.trace: they are currently assigned an empty set for names, {}, which is not the same as "no name", so jit.trace bails with "NYI: Named tensors are not supported with the tracer". This happens when I am trying to save a non-trivial model as ONNX, but the simplest repro I have seen is #48054 above, which has been added as test/jit/test_zero_dim_tensor_trace.py. Test plan: new unit test added, broken scenarios tested locally, CI. Fixes #48054 @pytorchbot label "topic: not user facing"
6
1,232
108,234
[FSDP] New rate limiter and memory reuse system
release notes: distributed (fsdp)
### 1. Motivation Today's FSDP uses CPU-GPU synchronization to limit issuance rate of all-gather. This stalls the CPU thread from issuance new tasks when waiting for GPU computation to finish. See "CPU Thread Activities". <img width="910" alt="Screenshot 2023-09-28 at 1 08 16 AM" src="https://github.com/pytorch/pytorch/assets/6676466/1d07138f-b8fc-4e06-8bfb-69b8575c2df4"> And when the CPU thread resumes, there is an 235 us overhead between the time the synchronization is met and the time a new all-gather is launched. See "Overhead (zoomed in)". This overhead would turn into GPU utilization gap. <img width="1011" alt="Screenshot 2023-09-28 at 1 12 24 AM" src="https://github.com/pytorch/pytorch/assets/6676466/d381ea12-c5b3-4c2b-aa96-deb33475f409"> ### 2. Proposal Replace CPU synchronization with stream wait in limitation of all-gather issuance. We ask the unshard stream to wait for the resharding event of the "-2" all-gather, which marks the compute completion of the "-2" FSDP instance. In order for the CUDA caching allocator to work with the stream wait, we ask it to hold the all-gather buffer till the "+2" all-gather. We do so by stashing the all-gather buffer into the existing "free event queue", and free it only when we are about to issue the "+2" all-gather (after "+2" all-gather has established wait relationship). CPU timeline as follows: ![fsdp_rate_limiter](https://github.com/pytorch/pytorch/assets/6676466/a5089bc0-d9ab-4345-b12e-7ae3c4a62b43) ### 2.1 Additional Changes **2.1.1 Moved `_free_storage` from reshard to +2 unshard.** ![free_storage](https://github.com/pytorch/pytorch/assets/6676466/7b912f65-95bf-4494-bbaf-fd7f2945e5b4) **2.1.2 Consolidated `BACKWARD_PRE` and `BACKWARD_POST` options for backward prefetch.** i.e. we would just use BACKWARD_POST, for example, the completion of layer 12 backward would trigger all-gather of layer 10 (with a stream wait, of course) ### 2.2 Backward wait system The backward wait is implemented as follows: ![backward](https://github.com/pytorch/pytorch/assets/6676466/47e9e5fc-ff2b-4494-968f-4f0d75086dbc) Location 1: We need to allocate two buffers to achieve compute-communication overlap. This is auto-satisfied by the state of the free-event queue at the end of forward -- which still has two last layers stashed. In fact, by such stashing, we save two all-gathers (for the last two layers) compared to today's FSDP. Location 2: What happen here are: Compute on unshard buffer B done -> Unshard stream wait for compute stream -> Free unshard buffer B -> Switch to unshard stream -> Allocate unshard buffer B+1 -> Perform all-gather on B+1 Location 3: Switch back to default stream, call `split_and_view` to get individual parameter -> Default stream wait for unshard stream. We switch back to default stream so that the backward op for `split_and_view` would be also on default stream. Otherwise, autograd would use a separate stream for this backward op and call `record_stream` on the param tensors. ### 3. Evaluation ![380210060_682184966891666_4956717331232594368_n](https://github.com/pytorch/pytorch/assets/6676466/7b67e75e-068b-49d7-bf18-e0ff7206b6ea) Cc: @awgu @albanD @janeyx99
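As a standalone illustration of the primitive this design relies on (a minimal sketch of CUDA stream/event mechanics, not FSDP's actual code):

```python
import torch

assert torch.cuda.is_available()

compute_stream = torch.cuda.current_stream()
unshard_stream = torch.cuda.Stream()

x = torch.randn(4096, 4096, device="cuda")
w = torch.randn(4096, 4096, device="cuda")

# Do work on the compute stream and record an event marking its completion
# (the analogue of the "-2" resharding event in the proposal).
y = x @ w
compute_done = torch.cuda.Event()
compute_done.record(compute_stream)

# Instead of a CPU-side synchronize (which stalls the CPU thread), make the
# unshard stream wait on the event; the CPU keeps issuing work immediately.
unshard_stream.wait_event(compute_done)
with torch.cuda.stream(unshard_stream):
    gathered = y * 2  # stands in for the next all-gather

# Keep y's memory valid for the unshard stream; the proposal does this by
# stashing the buffer until the "+2" all-gather, record_stream() is the
# simpler (but costlier) alternative shown here.
y.record_stream(unshard_stream)
torch.cuda.synchronize()
```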
2
1,233
108,231
DistributedSampler class: Change total_size into num_samples
oncall: distributed, triaged
https://github.com/pytorch/pytorch/blob/68b518c13e128997a0c7c9ab8ce9508cc4062e3a/torch/utils/data/distributed.py#L118C42-L118C52 This line can be: indices = indices[self.rank:self.**num_samples**:self.num_replicas] cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
4
1,234
108,226
torch.nn.functional.pad() with value type bool
oncall: jit
### ๐Ÿ› Describe the bug ``` @torch.jit.script def fn(): a = torch.zeros(10, dtype=torch.bool) a = F.pad(a, (1,1), value=True) return a ``` Error: ``` aten::pad(Tensor self, SymInt[] pad, str mode="constant", float? value=None) -> Tensor: Expected a value of type 'Optional[float]' for argument 'value' but instead found type 'bool'. : File ..., line ... def fn(): a = torch.zeros(10, dtype=torch.bool) a = F.pad(a, (1,1), value=True) ~~~~~ <--- HERE return a ``` ### Versions Collecting environment information... PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.27.1 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 45 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Vendor ID: GenuineIntel Model name: 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz CPU family: 6 Model: 141 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 Stepping: 1 BogoMIPS: 5222.40 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm avx512_vp2intersect flush_l1d arch_capabilities Hypervisor vendor: VMware Virtualisation type: full L1d cache: 384 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 10 MiB (8 instances) L3 cache: 192 MiB (8 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-7 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.4 [pip3] pytorch-lightning==2.0.6 [pip3] recurrent-memory-transformer-pytorch==0.5.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchmetrics==1.0.3 [pip3] triton==2.0.0 [conda] Could not collect cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
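A possible workaround until the schema accepts a bool `value` (my suggestion, not from the issue): avoid the `Optional[float]`-typed argument entirely and build the constant padding with `torch.cat`, which scripts cleanly:

```python
import torch

@torch.jit.script
def fn() -> torch.Tensor:
    a = torch.zeros(10, dtype=torch.bool)
    # Instead of F.pad(a, (1, 1), value=True), which trips the Optional[float]
    # schema check under scripting, concatenate explicit True pads on both sides.
    pad = torch.ones(1, dtype=torch.bool)
    return torch.cat([pad, a, pad])

print(fn())
```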
1
1,235
108,225
[docs] F.interpolate(uint8_input, mode = 'bicubic', ...) overshoot behavior: adjust the note in docs to explain that for uint8 saturating store is done and no manual clamp is needed or mention that bicubic is not supported for uint8 inputs
module: docs, module: nn, triaged, actionable
### 🐛 Describe the bug The docs at https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html say that to avoid overshoot the user must manually do `.clamp(0, 255)`. <img width="459" alt="image" src="https://github.com/pytorch/pytorch/assets/1041752/5add9591-f605-4fb4-8275-050518727012"> This makes sense if the input is float32 (or `int32` when it gets supported, https://github.com/pytorch/pytorch/issues/5580). But if the input/output is uint8 (which is now finally supported), the overshoot would simply overflow and a clamp after the fact would not help. I would propose that for uint8 inputs/outputs F.interpolate should do a saturating/clamping cast/store (so that negative values are clamped to 0 and values >255 are clamped to 255). ### Versions N/A cc @svekars @carljparker @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
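To illustrate the current guidance for the float path (my example; the uint8 bicubic call itself is omitted since, as the title notes, it may not even be supported):

```python
import torch
import torch.nn.functional as F

img = torch.randint(0, 256, (1, 3, 8, 8), dtype=torch.uint8)

# Float path: bicubic interpolation can overshoot the source value range,
# which is why the docs currently tell users to clamp manually.
out = F.interpolate(img.float(), scale_factor=2, mode="bicubic", align_corners=False)
print(out.min().item(), out.max().item())  # can fall outside [0, 255]
out_u8 = out.clamp(0, 255).round().to(torch.uint8)

# For a true uint8 input/output path there is nothing left to clamp after the
# store, which is why this issue asks for a saturating cast (or a doc note
# saying bicubic is unsupported for uint8).
```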
7
1,236
108,224
qnnpack quantized model can not be traced
oncall: jit, oncall: quantization, triaged
### ๐Ÿ› Describe the bug Hi, quantization team guys QAT config of qnnpack backend can be converted to quantized model successfuly, but can not be traced using torch.jit.trace. The error seems to be introduced by xnnpack. code: ``` import torch from torch.ao.quantization.qconfig_mapping import QConfigMapping from torch.ao.quantization.backend_config.qnnpack import get_qnnpack_backend_config from torch.ao.quantization.qconfig import default_symmetric_qnnpack_qat_qconfig # from torch.ao.quantization.backend_config.x86 import get_x86_backend_config # from torch.ao.quantization.qconfig import get_default_qat_qconfig from torch.ao.quantization.quantize_fx import prepare_qat_fx from torch.ao.quantization.quantize_fx import convert_fx import torch.nn as nn class Net(nn.Module): def __init__(self, ): super().__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=1, padding=0, stride=1, groups=1,bias=True) self.conv2 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=1, padding=0, stride=1, groups=1,bias=True) self.beta = nn.Parameter(torch.zeros((1, 3, 1, 1)), requires_grad=True) def forward(self, inp): x_1 = self.conv1(inp) x_2 = self.conv2(inp) x = x_1 * x_2 y = inp + x * self.beta #error is introduced by this line. return y if __name__=='__main__': torch.manual_seed(3) print('The default quantized engine is {}:'.format(torch.backends.quantized.engine)) torch.backends.quantized.engine='qnnpack' print('After change the quantization backend,The default quantized engine is {}:'.format(torch.backends.quantized.engine)) net=Net() backend_config=get_qnnpack_backend_config() qconfig=default_symmetric_qnnpack_qat_qconfig # backend_config=get_x86_backend_config() # qconfig=get_default_qat_qconfig() qconfig_mapping = QConfigMapping().set_global(qconfig) net.train() net_prepare=prepare_qat_fx(net,qconfig_mapping,torch.randn(1,3,10,10),backend_config=backend_config) net_prepare(torch.randn(1,3,10,10)) net_prepare(torch.randn(1,3,10,10)) net_converted=convert_fx(net_prepare,qconfig_mapping=qconfig_mapping,backend_config=backend_config) print('net_converted:') print(net_converted) net_converted_traced=torch.jit.trace(net_converted,torch.randn(1,3,10,10)) torch.jit.save(net_converted_traced,'net_converted_traced.pth') ``` result: ``` The default quantized engine is x86: After change the quantization backend,The default quantized engine is qnnpack: net_converted: GraphModule( (conv1): QuantizedConv2d(3, 3, kernel_size=(1, 1), stride=(1, 1), scale=0.0169826652854681, zero_point=0) (conv2): QuantizedConv2d(3, 3, kernel_size=(1, 1), stride=(1, 1), scale=0.01659565418958664, zero_point=0) ) def forward(self, inp): conv1_input_scale_0 = self.conv1_input_scale_0 conv1_input_zero_point_0 = self.conv1_input_zero_point_0 quantize_per_tensor = torch.quantize_per_tensor(inp, conv1_input_scale_0, conv1_input_zero_point_0, torch.qint8); inp = conv1_input_scale_0 = conv1_input_zero_point_0 = None conv1 = self.conv1(quantize_per_tensor) conv2 = self.conv2(quantize_per_tensor) _scale_0 = self._scale_0 _zero_point_0 = self._zero_point_0 mul_2 = torch.ops.quantized.mul(conv1, conv2, _scale_0, _zero_point_0); conv1 = conv2 = _scale_0 = _zero_point_0 = None beta = self.beta _scale_1 = self._scale_1 _zero_point_1 = self._zero_point_1 quantize_per_tensor_4 = torch.quantize_per_tensor(beta, _scale_1, _zero_point_1, torch.qint8); beta = _scale_1 = _zero_point_1 = None _scale_2 = self._scale_2 _zero_point_2 = self._zero_point_2 mul_3 = torch.ops.quantized.mul(mul_2, quantize_per_tensor_4, _scale_2, 
_zero_point_2); mul_2 = quantize_per_tensor_4 = _scale_2 = _zero_point_2 = None _scale_3 = self._scale_3 _zero_point_3 = self._zero_point_3 add_1 = torch.ops.quantized.add(quantize_per_tensor, mul_3, _scale_3, _zero_point_3); quantize_per_tensor = mul_3 = _scale_3 = _zero_point_3 = None dequantize_6 = add_1.dequantize(); add_1 = None return dequantize_6 # To see more debug info, please use `graph_module.print_readable()` Traceback (most recent call last): net_converted_traced=torch.jit.trace(net_converted,torch.randn(1,3,10,10)) File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/jit/_trace.py", line 794, in trace return trace_module( File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/jit/_trace.py", line 1056, in trace_module module._c._create_method_from_trace( File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped return self._wrapped_call(self, *args, **kwargs) File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/fx/graph_module.py", line 281, in __call__ raise e File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/fx/graph_module.py", line 271, in __call__ return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward result = self.forward(*input, **kwargs) File "<eval_with_key>.6", line 19, in forward File "/home/sfadmin/anaconda3/envs/torch2.01/lib/python3.9/site-packages/torch/_ops.py", line 502, in __call__ return self._op(*args, **kwargs or {}) RuntimeError: xnnp_mul(): xnn setup operator failed(2)! ``` ### Versions Collecting environment information... 
PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.31 Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.3.58 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB MIG 7g.40gb Device 0: GPU 1: NVIDIA A100-PCIE-40GB MIG 3g.20gb Device 0: MIG 2g.10gb Device 1: MIG 2g.10gb Device 2: GPU 2: NVIDIA A100-PCIE-40GB MIG 3g.20gb Device 0: MIG 2g.10gb Device 1: MIG 2g.10gb Device 2: GPU 3: NVIDIA A100-PCIE-40GB MIG 3g.20gb Device 0: MIG 2g.10gb Device 1: MIG 2g.10gb Device 2: Nvidia driver version: 470.82.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-89 Off-line CPU(s) list: 90-95 Thread(s) per core: 1 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz Stepping: 7 CPU MHz: 2811.906 CPU max MHz: 4000.0000 CPU min MHz: 1200.0000 BogoMIPS: 6000.00 Virtualization: VT-x L1d cache: 768 KiB L1i cache: 768 KiB L2 cache: 24 MiB L3 cache: 35.8 MiB NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.22.3 [pip3] torch==2.0.1+cu117 [pip3] torch-pruning==1.0.0 [pip3] torch-scatter==2.1.0+pt112cu113 [pip3] torch-tb-profiler==0.4.0 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.15.2 [pip3] torchviz==0.0.2 [pip3] triton==2.0.0 [conda] numpy 1.22.3 pypi_0 pypi [conda] torch 
2.0.1+cu117 pypi_0 pypi [conda] torch-pruning 1.0.0 pypi_0 pypi [conda] torch-scatter 2.1.0+pt112cu113 pypi_0 pypi [conda] torch-tb-profiler 0.4.0 pypi_0 pypi [conda] torchsummary 1.5.1 pypi_0 pypi [conda] torchvision 0.15.2 pypi_0 pypi [conda] torchviz 0.0.2 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @Xia-Weiwen @leslie-fang-intel
4
1,237
108,212
Is there a standard procedure to check the consistency of environment across all nodes in PyTorch DDP training?
oncall: distributed
### ๐Ÿš€ The feature, motivation and pitch In a distributed environment, maintaining consistency across all nodes (e.g., driver versions, NCCL versions) can help reduce some training issues caused by the environment, such as performance or compatibility problems, and make troubleshooting easier. I am wondering if PyTorch provides any standard hooks or other methods to support this kind of check? In my current setup, I have multiple nodes with different hardware and software configurations, and I want to ensure that the environment is consistent across all nodes before starting the DDP training. Is there any built-in functionality in PyTorch to perform these checks, or do I need to implement a custom solution to compare the environments? Any suggestions or best practices on how to handle this situation would be greatly appreciated. Thank you in advance for your help! ### Alternatives _No response_ ### Additional context _No response_ cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
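As far as I know there is no built-in check, so here is a hedged sketch of a custom one using `all_gather_object` (the set of fields compared and the function name are my choices):

```python
import torch
import torch.distributed as dist

def check_env_consistency() -> None:
    # Each rank collects a small environment fingerprint.
    info = {
        "torch": torch.__version__,
        "cuda": torch.version.cuda,
        "cudnn": torch.backends.cudnn.version(),
        "nccl": torch.cuda.nccl.version() if torch.cuda.is_available() else None,
        "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,
    }
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, info)  # works for picklable Python objects
    if dist.get_rank() == 0:
        mismatched = [i for i, g in enumerate(gathered) if g != gathered[0]]
        if mismatched:
            raise RuntimeError(f"Environment mismatch on ranks {mismatched}: {gathered}")

# Call after dist.init_process_group(...) and before constructing the DDP model.
```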
0
1,238
108,211
[Compile] Running Llama2 with torch.compile and FSDP results in Type mismatch assert in LlamaRotaryEmbedding
high priority, oncall: distributed, triaged, module: fsdp, oncall: pt2
### ๐Ÿ› Describe the bug Running Torch.compile with Llama7B and FSDP mixed precision, results in assert during first forward pass of training: (you can repro by going to https://github.com/lessw2020/llama-recipes/tree/rotary_embeddings and run "bash run.sh") ~~~ File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 42, in assert_eq assert a == b, f"{a} != {b}" AssertionError: torch.float32 != torch.bfloat16 ~~~ from this section (full trace below): ~~~ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 123, in forward self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype), ~~~ Effectively there is a type mismatch but at least in adding some debugging to the Rotary cache and the incoming tensors, everything is all fp32. Here's the full stack trace: ~~~ Training Epoch0: 0%| | 0/48 [01:11<?, ?it/s] Traceback (most recent call last): File "/data/home/less/llama_rotary/llama_finetuning.py", line 262, in <module> fire.Fire(main) File "/data/home/less/miniconda3/lib/python3.9/site-packages/fire/core.py", line 141, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/data/home/less/miniconda3/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire component, remaining_args = _CallAndUpdateTrace( File "/data/home/less/miniconda3/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "/data/home/less/llama_rotary/llama_finetuning.py", line 245, in main results = train( File "/data/home/less/llama_rotary/utils/train_utils.py", line 92, in train loss = model(**batch).loss File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn return fn(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 17, in inner return fn(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 839, in forward output = self._fsdp_wrapped_module(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 807, in forward outputs = self.model( File 
"/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 694, in forward layer_outputs = decoder_layer( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 839, in forward output = self._fsdp_wrapped_module(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 409, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/optimum/bettertransformer/models/decoder_models.py", line 387, in forward return llama_forward(self, *args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/optimum/bettertransformer/models/attention.py", line 616, in llama_forward cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 122, in forward print(f"{x.dtype=}") File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 488, in catch_errors return callback(frame, cache_entry, hooks, frame_state) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 625, in _convert_frame result = inner_convert(frame, cache_entry, hooks, frame_state) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 139, in _fn return fn(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 380, in _convert_frame_assert return _compile( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 555, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 
189, in time_wrapper r = func(*args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 477, in compile_inner out_code = transform_code_object(code, transform) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object transformations(instructions, code_options) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 444, in transform tracer.run() File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run super().run() File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run and self.step() File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step getattr(self, inst.opname)(inst) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1191, in LOAD_ATTR result = BuiltinVariable(getattr).call_function( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 618, in call_function result = handler(tx, *args, **kwargs) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1116, in call_getattr obj.var_getattr(tx, name).clone(source=source).add_options(options) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/user_defined.py", line 482, in var_getattr return VariableBuilder(tx, source)(subobj).add_options(options) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 223, in __call__ vt = self._wrap(value).clone(**self.options()) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 368, in _wrap return type_dispatch(self, value) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 879, in wrap_tensor return self.tx.output.register_attr_or_module( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 728, in register_attr_or_module return wrap_name(name) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 634, in wrap_name return wrap_fx_proxy( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1187, in wrap_fx_proxy return wrap_fx_proxy_cls( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1302, in wrap_fx_proxy_cls example_value = wrap_to_fake_tensor_and_record( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1583, in wrap_to_fake_tensor_and_record fake_e = wrap_fake_exception( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 916, in wrap_fake_exception return fn() File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1584, in <lambda> lambda: tx.fake_mode.from_tensor( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1720, in from_tensor return self.fake_tensor_converter( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 371, in __call__ return self.from_real_tensor( File 
"/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 324, in from_real_tensor out = self.meta_converter( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 595, in __call__ r = self.meta_tensor( File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 493, in meta_tensor assert_metadata_eq(assert_eq, t, r, skip_symbolic=True) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 79, in assert_metadata_eq return go(m1, m2) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 74, in go go(m1._base, m2._base) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 51, in go assert_eq(m1.dtype, m2.dtype) File "/data/home/less/miniconda3/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 46, in assert_eq assert a == b, f"{a} != {b}" AssertionError: torch.float32 != torch.bfloat16 from user code: File "/data/home/less/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 124, in <resume in forward> self.cos_cached[:, :, :seq_len, ...].to(dtype=torch.bfloat16), # x.dtype), Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information ~~~ ### Versions Collecting environment information... PyTorch version: 2.1.0.dev20230825+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 525.85.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz Stepping: 7 CPU MHz: 1250.736 BogoMIPS: 5999.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 1.5 MiB L1i cache: 1.5 MiB L2 cache: 48 MiB L3 cache: 71.5 MiB NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, 
RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.1 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] st-moe-pytorch==0.0.22 [pip3] torch==2.1.0.dev20230825+cu121 [pip3] torchaudio==2.1.0.dev20230825+cu121 [pip3] torchinfo==1.8.0 [pip3] torchvision==0.16.0.dev20230825+cu121 [pip3] vit-pytorch==1.4.1 [conda] numpy 1.24.1 pypi_0 pypi [conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi [conda] st-moe-pytorch 0.0.22 pypi_0 pypi [conda] torch 2.1.0.dev20230825+cu121 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230825+cu121 pypi_0 pypi [conda] torchinfo 1.8.0 pypi_0 pypi [conda] torchvision 0.16.0.dev20230825+cu121 pypi_0 pypi [conda] vit-pytorch 1.4.1 pypi_0 pypi cc @ezyang @gchanan @zou3519 @kadeng @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @msaroufim @wconstab @bdhirsh @anijain2305
3
1,239
108,210
Using distributed RPC and DDP together triggers error.
oncall: distributed
### ๐Ÿ› Describe the bug I'm trying to use distributed.all_reduce and distributed.rpc.rpc_sync togather. It sometimes succeeds, sometimes hang, and sometimes crashes with thread pool error. This this is an minimum reproduction of the case where it crashes ```python import os import time import torch import torch.distributed as dist import torch.distributed.rpc as rpc os.environ['TP_SOCKET_IFNAME']='lo' torch.distributed.init_process_group("nccl") local_rank = int(os.environ["RANK"]) world_size = int(os.environ["WORLD_SIZE"]) options=rpc.TensorPipeRpcBackendOptions( init_method="env://", _transports=["uv"], _channels=["basic"], ) rpc.init_rpc(f"worker_{local_rank}", rank=local_rank, world_size=world_size, rpc_backend_options=options ) torch.cuda.set_device(local_rank) print('ranksize', local_rank, world_size) a = [{"rank": local_rank, "world_size": world_size}] print('Rank ', local_rank, a) b = torch.rand(3).cuda() dist.all_reduce(b) # works if this line is removed def worker(a, b): return a + b if local_rank > 0: print(local_rank, 'reduce') dist.all_reduce(b) print('Rank ', local_rank, a) if local_rank == 0: time.sleep(1) print('enter rpc') ret = rpc.rpc_sync("worker_1", print, args=(torch.ones(2), 3)) print(f"worker_0: add result: {ret}") dist.all_reduce(b) print('Rank ', local_rank, a) ``` Running this with `python -m torch.distributed.run --nproc_per_node 2 test.py` would crash with: ``` ranksize 0 2 Rank 0 [{'rank': 0, 'world_size': 2}] ranksize 1 2 Rank 1 [{'rank': 1, 'world_size': 2}] 1 reduce Rank 1 [{'rank': 1, 'world_size': 2}] enter rpc [E thread_pool.cpp:111] Exception in thread pool task: unknown FATAL: exception not rethrown ``` ### Versions Collecting environment information... PyTorch version: 2.0.1+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.27.0 Libc version: glibc-2.31 Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: A100-SXM-80GB GPU 1: A100-SXM-80GB GPU 2: A100-SXM-80GB GPU 3: A100-SXM-80GB GPU 4: A100-SXM-80GB GPU 5: A100-SXM-80GB GPU 6: A100-SXM-80GB GPU 7: A100-SXM-80GB Nvidia driver version: 450.191.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz Stepping: 6 CPU MHz: 2999.994 CPU max MHz: 3500.0000 CPU min MHz: 800.0000 BogoMIPS: 4600.00 Virtualization: VT-x L1d cache: 3 MiB L1i cache: 2 MiB L2 cache: 80 MiB L3 cache: 108 MiB NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 
32-63,96-127 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.7.10 [pip3] flake8-quotes==3.3.2 [pip3] mypy==1.4.1 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.4 [pip3] torch==2.0.1+cu118 [pip3] torchaudio==2.0.2+cu118 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2023.1.0 h6d00ec8_46342 [conda] mkl-service 2.4.0 py310h5eee18b_1 [conda] mkl_fft 1.3.6 py310h1128e8f_1 [conda] mkl_random 1.2.2 py310h1128e8f_1 [conda] numpy 1.24.4 pypi_0 pypi [conda] pytorch-cuda 11.8 h7e8668a_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torch 2.0.1+cu118 pypi_0 pypi [conda] torchaudio 2.0.2+cu118 pypi_0 pypi [conda] torchvision 0.15.2+cu118 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
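For context on the interaction above, below is a minimal teardown sketch that is sometimes used when mixing RPC with a default process group: finish all collectives, shut RPC down explicitly, then destroy the process group. This is only a hedged suggestion and has not been verified to avoid this particular crash.
```python
import torch.distributed as dist
import torch.distributed.rpc as rpc

# Hedged suggestion only: ensure collectives have completed on every rank,
# then shut down RPC before tearing down the default process group.
dist.barrier()
rpc.shutdown()                 # blocks until all RPC agents agree to shut down
dist.destroy_process_group()
```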
1
1,240
108,209
ninja: build stopped: subcommand failed.
needs reproduction, module: cpp-extensions, triaged
### ๐Ÿ› Describe the bug ``` ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build subprocess.run( File "/home/hhw/.conda/envs/geo/lib/python3.8/subprocess.py", line 512, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "setup.py", line 35, in <module> setup( File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/__init__.py", line 107, in setup return distutils.core.setup(**attrs) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup return run_commands(dist) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands dist.run_commands() File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands self.run_command(cmd) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/dist.py", line 1234, in run_command super().run_command(command) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command cmd_obj.run() File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/command/develop.py", line 34, in run self.install_for_development() File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/command/develop.py", line 111, in install_for_development self.run_command('build_ext') File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command self.distribution.run_command(command) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/dist.py", line 1234, in run_command super().run_command(command) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command cmd_obj.run() File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run _build_ext.run(self) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 345, in run self.build_extensions() File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions build_ext.build_extensions(self) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 467, in build_extensions self._build_extensions_serial() File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 493, in _build_extensions_serial self.build_extension(ext) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 246, in build_extension _build_ext.build_extension(self, ext) File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 548, in build_extension objects = self.compiler.compile( File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 491, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1250, in 
_write_ninja_file_and_compile_objects _run_ninja_build( File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension ``` ### Versions ``` ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/hhw/.conda/envs/geo/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build subprocess.run( File "/home/hhw/.conda/envs/geo/lib/python3.8/subprocess.py", line 512, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. ``` The above exception was the direct cause of the following exception: [vision3d-main.zip](https://github.com/pytorch/pytorch/files/12471232/vision3d-main.zip) cc @malfet @zou3519
2
1,241
108,197
AdaptiveMaxPool documentation is not detailed
module: docs, module: nn, triaged, actionable, module: pooling
### ๐Ÿ“š The doc issue The documentation does not give insights into how the assignments are performed for AdaptiveMaxPool2d (and other AdaptiveMaxPoolXd). While it is clear what happens in simple cases where the input shape is divisible by the input shape, the behavior when it is not divisible (i.e., where it is actually necessary) is not clarified. For example, for a 3x3 input and a 2x2 output, it is not obvious that each output is the maximum of 2x2 inputs. (It could also be maxima over 2x2, 2x1, 1x2, and 1x1 input elements.) My source for the behavior is: https://github.com/pytorch/pytorch/blob/51861cc9b19d9c483598e39932661822a826d3a2/aten/src/ATen/native/AdaptiveMaxPooling2d.cpp ### Suggest a potential alternative/fix The following could be added: For each element of the batch and each channel, given an input shape of $H_{in}\times W_{in}$ and a desired output of shape $H_{out}\times W_{out}$, `AdaptiveMaxPool2d` is defined as follows: $$Y[i, j] = \max\left\\{ X[a, b] \ |\ \left\lfloor \frac{i \cdot H_{in}}{H_{out}} \right\rfloor \leq a < \left\lceil \frac{(i+1) \cdot H_{in}}{H_{out}} \right\rceil \ ,\ \left\lfloor \frac{j \cdot W_{in}}{W_{out}} \right\rfloor \leq b < \left\lceil \frac{(j+1) \cdot W_{in}}{W_{out}} \right\rceil \right\\}$$ cc @svekars @carljparker @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
1
1,242
108,193
[inductor] benchmark fusion
module: inductor, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #109245 * __->__ #108193 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
29
1,243
108,190
[FSDP] incorrect backward prefetch order when using BackwardPrefetch.BACKWARD_POST
triaged, module: fsdp
### ๐Ÿ› Describe the bug how to reproduce * pytest test/distributed/fsdp/test_fsdp_backward_prefetch.py * the unit test wraps nn.Transformer with FSDP for backward_prefetch = BackwardPrefetch.BACKWARD_POST * state._exec_order_data.handles_post_forward_order equals forward order: encoder 0...5 -> decoder 0...5 -> root * post-backward hook (AccumulateGrad) order: decoder 5, 4...0 -> encoder 5...0 -> root * current prefetch order: decoder 4...0 -> encoder 5...0 -> None -> decoder 5. decoder 5 backward is the 1st forward but it's prefetched at the the last. it means decoder 5 will be block waiting for all-gather without prefetch * ideal prefetch order: decoder 5...0 -> encoder 5...0 -> None the default backward_prefetch is BackwardPrefetch.BACKWARD_PRE. if more customers use BACKWARD_POST, we should revisit the priority for fixing ### Versions [conda] torch 2.1.0a0+gitd3da6ce dev_0 <develop> cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
0
1,244
108,186
Add FakeTensor support to torch._utils._rebuild_tensor
triaged, open source, module: fakeTensor, release notes: dynamo
Partially fixes https://github.com/pytorch/pytorch/issues/105077 Repro: ```python import tempfile import torch from torch._subclasses import fake_tensor class TheModelClass(torch.nn.Module): def __init__(self): super(TheModelClass, self).__init__() self.fc1 = torch.nn.Linear(5, 10) def forward(self, x): return self.fc1(x) with tempfile.NamedTemporaryFile() as state_dict_file: # Create state_dict to be loaded later model = TheModelClass() torch.save(model.state_dict(), state_dict_file.name) fake_mode = fake_tensor.FakeTensorMode() with fake_mode: # This is where the bug is triggered state_dict = torch.load(state_dict_file.name) ``` Error: ```bash Traceback (most recent call last): File "issue_gh_torch_105077.py", line 22, in <module> state_dict = torch.load(state_dict_file.name) File "/opt/pytorch/torch/serialization.py", line 1014, in load return _load(opened_zipfile, File "/opt/pytorch/torch/serialization.py", line 1422, in _load result = unpickler.load() File "/opt/pytorch/torch/_utils.py", line 205, in _rebuild_tensor_v2 tensor = _rebuild_tensor(storage, storage_offset, size, stride) File "/opt/pytorch/torch/_utils.py", line 184, in _rebuild_tensor return t.set_(storage._untyped_storage, storage_offset, size, stride) File "/opt/pytorch/torch/utils/_stats.py", line 20, in wrapper return fn(*args, **kwargs) File "/opt/pytorch/torch/_subclasses/fake_tensor.py", line 1288, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/opt/pytorch/torch/_subclasses/fake_tensor.py", line 1468, in dispatch self.invalidate_written_to_constants(func, flat_arg_fake_tensors, args, kwargs) File "/opt/pytorch/torch/_subclasses/fake_tensor.py", line 1733, in invalidate_written_to_constants _, new_kwargs = normalize_function( File "/opt/pytorch/torch/fx/operator_schemas.py", line 297, in normalize_function torch_op_schemas = get_signature_for_torch_op(target) File "/opt/pytorch/torch/fx/operator_schemas.py", line 167, in get_signature_for_torch_op signatures = [_torchscript_schema_to_signature(schema) for schema in schemas] File "/opt/pytorch/torch/fx/operator_schemas.py", line 167, in <listcomp> signatures = [_torchscript_schema_to_signature(schema) for schema in schemas] File "/opt/pytorch/torch/fx/operator_schemas.py", line 70, in _torchscript_schema_to_signature arg_type = _torchscript_type_to_python_type(arg.type) File "/opt/pytorch/torch/fx/operator_schemas.py", line 64, in _torchscript_type_to_python_type return eval(ts_type.annotation_str, _type_eval_globals) File "<string>", line 1, in <module> NameError: name 'Storage' is not defined ``` This PR adds the ability to create fake tensors during `torch.load` by wrapping the `torch.tensor.set_` call around a `torch.utils._mode_utils.no_dispatch()` to skip fake mode dispatcher for it and thus create a real tensor. It later calls `fake_mode.from_tensor(t)` to finally create the fake tensor. cc @eellison @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
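A rough sketch of the approach described above, with a hypothetical helper name (this is not the actual patch): disable the fake-mode dispatcher while rebuilding the real tensor, then convert the result into a fake tensor.
```python
import torch
from torch.utils._mode_utils import no_dispatch
from torch._subclasses.fake_tensor import FakeTensorMode

def rebuild_as_fake(fake_mode: FakeTensorMode, storage, storage_offset, size, stride):
    # Build a real tensor with the fake-mode dispatcher bypassed, then convert it.
    t = torch.empty(0, dtype=storage.dtype)
    with no_dispatch():
        t.set_(storage._untyped_storage, storage_offset, size, stride)
    return fake_mode.from_tensor(t)
```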
1
1,245
108,183
[Performance] Pass in head_size_og to FlashAttentionV2
triaged, module: multi-headed-attention
# Summary We currently pad the inputs to FlashAttention V2 to the nearest multiple of 8 in order to make sure the inputs are aligned, which likely costs extra MAD instructions. This change was made to keep the operator compliant with Autograd's aliasing rules. A solution would be to pass the original head size (head_size_og) down from the top-level SDPA call to the flash_api.cpp calls, saving some wasted compute when we pad to a larger multiple.
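To illustrate the padding being discussed (illustrative only; the function and variable names below are not from the codebase):
```python
import torch
import torch.nn.functional as F

def pad_head_dim(q, multiple=8):
    # Pad the head dimension up to the next multiple of 8 and keep the original
    # size so the kernel output can be sliced back afterwards.
    head_size_og = q.size(-1)
    pad = (-head_size_og) % multiple
    return (F.pad(q, (0, pad)) if pad else q), head_size_og

q = torch.randn(2, 8, 128, 61)
q_padded, head_size_og = pad_head_dim(q)   # head dim padded from 61 to 64
# ... attention kernel would run on q_padded here ...
out = q_padded[..., :head_size_og]         # slice back to the original head size
```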
0
1,246
108,181
'test_index_add_correctness' - "AssertionError: RuntimeError not raised by "
module: tests, triaged, oncall: pt2
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
0
1,247
108,178
Properly skip fast path in TransformerEncoder/MHA if autocast is enabled on CPU
null
Fix #107663 Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108178
1
1,248
108,175
Enable FlashAttentionV2 on Windows
triaged, module: multi-headed-attention
# Summary This PR: https://github.com/pytorch/pytorch/issues/108174 will update the FlashAttention kernel within PyTorch core to V2. Currently this kernel does not support windows. This Issue is used to track support. See: https://github.com/Dao-AILab/flash-attention/issues/345
0
1,249
108,174
FlashAttentionV2 will OOM when building on ci/cd with default settings
module: cuda, module: ci, triaged
# Summary Changes were needed to be able to build FlashAttentionV2 on the CI/CD runners that are used when building CUDA with Arch_list = 8.6. The default MAX_JOBS causes the Docker container building PyTorch to get killed with exit code 137 from using more than its allocated 16 GB of memory. This was resolved by setting MAX_JOBS=2. ## Investigate This issue is to investigate other potential solutions that maintain build speed. PR that added this functionality: https://github.com/pytorch/pytorch/pull/105602 cc @ptrblck @seemethere @malfet @pytorch/pytorch-dev-infra
4
1,250
108,170
TorchInductor Opinfo fixes for rng ops
ciflow/trunk, topic: not user facing, module: inductor, keep-going
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108170 Tests rng ops both with - fallback_random=True, assertEqual=True - fallback_random=False, assertEqual=False cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
42
1,251
108,169
[inductor] Replace empty_strided with empty in simple cases
topic: not user facing, module: inductor, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108169 * #108172 * #108168 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
2
1,252
108,158
TORCHELASTIC_RESTART_COUNT doesn't seem to be broadcasted to all worker
triaged, module: elastic, oncall: r2p
### ๐Ÿ› Describe the bug I use torchrun elastic to run my training, I use multi-worker when each worker is a multi-GPU machine. I wanted to use the env var `TORCHELASTIC_RESTART_COUNT ` to track the elastic restarts. Intuitively, I thought the count should increment whenever the whole group restarts. But I have noticed the below behaviors instead: 1. When one worker restarts, the whole group restarts, but the count doesn't get incremented on all workers. 2. The count often (not always) gets incremented on the worker that has rank 0 GPU 3. The count also sometimes gets incremented on a random other worker. How is count designed to be used? I can't seem to get a consistent behavior across workers on when the group restarts. I'm not particularly interested in the count, but I'm interested in "if the current worker has restarted". Currently the count can still be 0 even right after a restart. ### Versions ``` Collecting environment information... PyTorch version: 2.0.1+cu118.post4 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.10.7 (main, Sep 9 2022, 04:02:34) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA RTX A5000 Nvidia driver version: 470.82.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 24 On-line CPU(s) list: 0-23 Thread(s) per core: 2 Core(s) per socket: 12 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz Stepping: 1 CPU MHz: 1200.000 CPU max MHz: 3500.0000 CPU min MHz: 1200.0000 BogoMIPS: 5986.65 Virtualization: VT-x L1d cache: 384 KiB L1i cache: 384 KiB L2 cache: 3 MiB L3 cache: 30 MiB NUMA node0 CPU(s): 0-23 Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti 
intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d Versions of relevant libraries: [pip3] fft-conv-pytorch==1.1.3 [pip3] gpytorch==1.9.1 [pip3] mypy==0.960 [pip3] mypy-extensions==1.0.0 [pip3] mypy-protobuf==3.3.0 [pip3] numpy==1.22.4 [pip3] pytorch-lightning==1.7.3 [pip3] pytorch3d==0.7.4 [pip3] torch==2.0.1+cu118.post4 [pip3] torch-geometric==2.3.0 [pip3] torch-tb-profiler==0.4.1 [pip3] torch-tensorrt==1.4.0 [pip3] torchmetrics==0.11.0 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] Could not collect ``` cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @dzhulgakov
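For reference, this is how the counter is typically read inside a worker launched by torchrun; as described in the issue above, its value is not currently consistent across workers after a group restart.
```python
import os

# TORCHELASTIC_RESTART_COUNT is set by torchrun/torchelastic for each worker process.
restart_count = int(os.environ.get("TORCHELASTIC_RESTART_COUNT", "0"))
if restart_count > 0:
    print(f"this worker believes the group has restarted {restart_count} time(s)")
```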
0
1,253
108,155
Automate release only changes in https://github.com/pytorch/pytorch/pull/108053
oncall: releng, module: ci, triaged
We can use the following approach: update the branch used in composite actions from trunk to release (for example, this can be done by running ``for i in .github/workflows/*.yml; do sed -i -e s#@master#@release/2.0# $i; done``). Example: https://github.com/pytorch/pytorch/commit/51b42d98d696a9a474bc69f9a4c755058809542f Then create a new script, post_cut_release_branch.sh, in https://github.com/pytorch/pytorch/tree/main/scripts/release to generate the above PR automatically. cc @seemethere @malfet @pytorch/pytorch-dev-infra
1
1,254
108,152
`Tensor.uniform_` uses illegal argument name `from`
module: distributions, triaged, module: python frontend
### ๐Ÿ› Describe the bug The autogenerated `torch.Tensor.uniform_` has an illegal python signature. From `torch/_C/_TensorBase.py`: ```python def uniform_(self, from=0, to=1): # real signature unknown; restored from __doc__ ``` `from` is an illegal name for keyword arguments in python. The problem originates from the C++ code that generates this function: https://github.com/pytorch/pytorch/blob/80c7fdf49febf90f8fe0d3840145db85c9972b5e/aten/src/ATen/native/vulkan/ops/Random.cpp#L15-L19 https://github.com/pytorch/pytorch/blob/80c7fdf49febf90f8fe0d3840145db85c9972b5e/aten/src/ATen/native/mps/operations/Distributions.mm#L218 This leads to issues for other tools like pycharm, which doesn't understand the signature: ![image](https://github.com/pytorch/pytorch/assets/39696536/029cc557-118a-4e08-8ed1-43fd82dbe909) ### Versions <details> ``` PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.27.2 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: 12.2.128 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: GenuineIntel Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz CPU family: 6 Model: 141 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 1 CPU max MHz: 4800,0000 CPU min MHz: 800,0000 BogoMIPS: 4992.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 384 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 10 MiB (8 instances) L3 cache: 24 
MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-15 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-annotations==3.0.1 [pip3] flake8-black==0.3.6 [pip3] flake8-bugbear==23.7.10 [pip3] flake8-comprehensions==3.14.0 [pip3] flake8-docstrings==1.7.0 [pip3] flake8-pyi==23.6.0 [pip3] flake8-rst==0.8.0 [pip3] flake8-rst-docstrings==0.3.0 [pip3] mypy==1.5.1 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.25.2 [pip3] torch==2.0.1 [pip3] torchinfo==1.8.0 [pip3] triton==2.0.0 [conda] Could not collect ``` </details> cc @fritzo @neerajprad @alicanb @nikitaved @albanD
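A quick illustration of the problem described above: the positional call works, but the keyword form shown in the generated signature cannot even be written in valid Python.
```python
import torch

t = torch.empty(3)
t.uniform_(0.0, 1.0)             # calling with positional arguments works
# t.uniform_(from=0.0, to=1.0)   # cannot be written: `from` is a Python keyword,
#                                # so this line is a SyntaxError at parse time
```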
2
1,255
108,145
Problems hit when upgrading the version of HF used in CI
triaged, oncall: pt2, module: dynamic shapes
@angelayi upgraded the HF version used on CI to overcome a common Export+AOTInductor failure on HF, https://github.com/pytorch/pytorch/pull/107400. However, the upgrading revealed several other issues, - [ ] test_hf_bert_ddp_inductor failure, `python test/distributed/test_dynamo_distributed.py -k test_hf_bert_ddp_inductor` - [ ] several benchmarks failures with dynamic shapes ON, also reported in https://github.com/pytorch/pytorch/issues/107200 - [ ] increased graph break count, see https://github.com/pytorch/pytorch/pull/107400/files . Most of the affected benchmarks overlap with those failed at dynamic shapes, so likely caused by the same issue. Performance [drop]( https://hud.pytorch.org/benchmark/compilers?startTime=Tue%2C%2022%20Aug%202023%2015%3A56%3A44%20GMT&stopTime=Tue%2C%2029%20Aug%202023%2015%3A56%3A44%20GMT&granularity=hour&suite=torchbench&mode=training&dtype=amp&lBranch=angelayi/hf_version&lCommit=c08e3fad00a2702ed7a1ee06a3c832d4d00bf74b&rBranch=main&rCommit=138e2895d08a6517c5718b2a0118c1b23ff4664c) because of additional graph breaks. cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
1
1,256
108,138
Enable Mypy Checking in torch/_inductor/scheduler.py
triaged, open source, module: inductor, ciflow/inductor
Fixes #105230 Summary: As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/kernel/scheduler.py After Fix: mypy --follow-imports=skip torch/_inductor/kernel/scheduler.py Success: no issues found in 1 source file cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
5
1,257
108,128
nccl:all_reduce is not profiled correctly
oncall: distributed, oncall: profiler
### ๐Ÿ› Describe the bug I am reading the source code or PyTorch DDP and using PyTorch profiler to measure the performance of NCCL allreduce operation. I understand the ncclAllReduce is an async call. In https://github.com/pytorch/pytorch/blob/6648880aca3914cc7995535fa0813ae580582492/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L1710, work is created in collective method(in my case, it's ncclAllReduce), and add a callback to the work at https://github.com/pytorch/pytorch/blob/6648880aca3914cc7995535fa0813ae580582492/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L1809. In the callback, it records the function end. Then at https://github.com/pytorch/pytorch/blob/00eed6f36759323c70c6e05ccb1561367921eee0/torch/csrc/distributed/c10d/reducer.cpp#L1565, the Future waits for the CUDA stream to be completed, then the callback will be triggered and record the end time of nccl:all_reduce. Questions: 1. Is my above understanding correct? 2. If I understand the code correctly, why the duration of nccl:all_reduce the profiler captured looks incorrect? E.g., the duration is very short, and it's not the end time of CUDA allreduce kernel. ![image](https://github.com/pytorch/pytorch/assets/97820520/ad7f8f92-ac8d-42ce-9982-c22e536df63e) ### Versions https://github.com/pytorch/pytorch.git cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
1
1,258
108,110
[inductor] Minifier fails on resnet50_quantized_qat
triaged, oncall: pt2, module: inductor
``` TORCHDYNAMO_REPRO_AFTER=dynamo ./benchmarks/dynamo/torchbench.py --training --performance --no-skip --inductor --only resnet50_quantized_qat ... torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information Minifier script written to /data/users/jansel/pytorch/torch_compile_debug/run_2023_08_28_17_07_24_508190-pid_3449892/minifier/minifier_launcher.py. Run this script to find the smallest traced graph which reproduces this error. ``` The generated script contains invalid syntax: ```python python /data/users/jansel/pytorch/torch_compile_debug/run_2023_08_28_17_07_24_508190-pid_3449892/minifier/minifier_launcher.py File "/data/users/jansel/pytorch/torch_compile_debug/run_2023_08_28_17_07_24_508190-pid_3449892/minifier/minifier_launcher.py", line 27 (activation_post_process): MovingAverageMinMaxObserver(min_val=-5.017982006072998, max_val=4.760481357574463) ^ SyntaxError: invalid syntax ``` ```python class Repro(torch.nn.Module): def __init__(self): super().__init__() self.L__mod___activation_post_process_0 = FusedMovingAvgObsFakeQuantize( fake_quant_enabled=tensor([1], device='cuda:0'), observer_enabled=tensor([1], device='cuda:0'), scale=tensor([0.0770], device='cuda:0'), zero_point=tensor([65], device='cuda:0', dtype=torch.int32), dtype=torch.quint8, quant_min=0, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=True (activation_post_process): MovingAverageMinMaxObserver(min_val=-5.017982006072998, max_val=4.760481357574463) ) self.L__mod___conv1 = ConvBnReLU2d( 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (weight_fake_quant): FusedMovingAvgObsFakeQuantize( ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
0
1,259
108,108
[BC BREAKING] Change default behavior of scaled_dot_product_attention's causal masking alignment
module: bc-breaking, module: nn, oncall: transformer/mha, topic: bc breaking
## Current Behavior: The `torch.nn.functional.scaled_dot_product_attention` function currently applies causal masking to the top left corner of the attention matrix. This well defined when seqlen_q equals seqlen_kv, but an alignment choice needs to made when they are not. #### Current alignment choice If seqlen_q != seqlen_kv and causal=True, the causal mask is aligned to the the top-left corner. For example, if seqlen_q = 2 and seqlen_kv = 5, the causal mask (1 = keep, 0 = masked out) is: ``` 1 0 0 0 0 1 1 0 0 0 ``` If seqlen_q = 5 and seqlen_kv = 2, the causal mask is: ``` 1 0 1 1 1 1 1 1 1 1 ``` ## Proposal: I propose changing the default behavior of the scaled_dot_product_attention function's causal masking to align with the bottom right corner of the attention matrix. This change would be backward incompatible for inputs when seqlen_q does not equal seqlen_kv, as it would shift the masking pattern. For example, if seqlen_q = 2 and seqlen_kv = 5, the causal mask (1 = keep, 0 = masked out) is: ``` 1 1 1 1 0 1 1 1 1 1 ``` If seqlen_q = 5 and seqlen_kv = 2, the causal mask is: ``` 0 0 0 0 0 0 1 0 1 1 ``` _If the row of the mask is all zero, the output will be zero._ ## Benefits: This choice of mask is beneficial when performing autoregressive decoding for LLMs. For example lets say that your prefill prompt consisted of 4 tokens and you had a max KV cache size of 8. Then at step 1 of decoding, your query sequence length would be size 1 (index position 5) and your KV input would be of size 5. The query's seqlen is 1 but it really represents the 5 token in the sequence. So the mask for this attention should be: ``` 1 1 1 1 1 0 0 0 ``` With the existing implementation of causal, the produced attention mask would be. ``` 1 0 0 0 0 0 0 0 ``` ## Drawbacks: Backward compatibility: The proposed change would break backward compatibility for cases where seqlen_q doesn't equal seqlen_kv. Existing code: Users who rely on the current masking alignment would need to update their code if they want to adopt the new behavior. ## Implementation: To implement this change, the following steps need to be taken: - Modify the `scaled_dot_product_attention_math` function to align the causal masking with the bottom right corner of the attention matrix. - Change the default call `_scaled_dot_product_efficient_attention` to https://github.com/pytorch/pytorch/blob/3488837ec118cc8f8bf9fe10a8c5a32726ba0a7b/aten/src/ATen/native/transformers/cuda/attention.cu#L734 to use the bottom right - After updating to FlashAttentionV2 we would apply the changes from this commit: https://github.com/Dao-AILab/flash-attention/commit/9e5e8bc91e30af5cdc321362b553f6c0da332e30 - Update the fused CPU implementation to obey the above semantics. - Update the function's documentation and any relevant examples to reflect the new masking behavior. ## Additional Context: Both Xformers and FlashAttention have made this behavior the new default cc @ezyang @gchanan @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1
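The proposed bottom-right alignment can be written as keeping position (i, j) iff j - i <= seqlen_kv - seqlen_q; a small sketch (not the library implementation) that reproduces the masks shown above:
```python
import torch

def causal_mask_bottom_right(seqlen_q, seqlen_kv):
    # 1 = keep, 0 = masked out, aligned to the bottom-right corner.
    i = torch.arange(seqlen_q).unsqueeze(-1)
    j = torch.arange(seqlen_kv)
    return (j - i) <= (seqlen_kv - seqlen_q)

print(causal_mask_bottom_right(2, 5).int())  # [[1,1,1,1,0],[1,1,1,1,1]]
print(causal_mask_bottom_right(5, 2).int())  # rows 0-2 all zero, then [1,0], [1,1]
```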
0
1,260
108,107
[inductor] soft_actor_critic training is slower than eager
triaged, oncall: pt2, module: inductor
Our [perf dashboard](https://hud.pytorch.org/benchmark/torchbench/inductor_with_cudagraphs?startTime=Mon,%2021%20Aug%202023%2023:38:56%20GMT&stopTime=Mon,%2028%20Aug%202023%2023:38:56%20GMT&granularity=hour&mode=training&dtype=amp&lBranch=main&lCommit=138e2895d08a6517c5718b2a0118c1b23ff4664c&rBranch=main&rCommit=134d415615e39fb4a54420e58a508236a0614610) shows that soft_actor_critic is ~5% slower than eager for AMP training. Running locally with float32 precision is even worse ``` TORCHINDUCTOR_MAX_AUTOTUNE=1 ./benchmarks/dynamo/torchbench.py --training --performance --no-skip --inductor --only soft_actor_critic ``` with a speedup of `0.82x`. This is one of the only slowdowns inductor generates. We should investigate what is going on here and fix it. Looking at float32 might be easier to debug since the slowdown is more dramatic. cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
0
1,261
108,102
_sampled_addmm_kernel cause 'misaligned address' with new triton pin
module: sparse, triaged
### ๐Ÿ› Describe the bug This issue tracks the test failure for ``` python test/test_sparse_csr.py -k test_triton_scaled_dot_product_attention_block_size_16_cuda_bfloat16 ``` with new triton commit: 0410652666ddcad54aa4c403276b478a36379b90 Right after calling `_sampled_addmm_kernel` kernel, printing the tensor arguments result in 'misaligend address' error. Note that test `test_triton_sampled_addmm` passes even though it runs 18 parametrized tests calling `_sampled_addmm_kernel` kernel. So the issue only happens with very specific input to the kernel. We disable the test to unblock the triton pin upgrade. Use this issue to track it. ### Versions .. cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
4
1,262
108,101
[fx] Show original user stack trace on GraphModule exception if it's available
fb-exported, release notes: fx
Summary: Previously we only showed the FX-generated source code, which is not very helpful to end-users. This PR changes things by showing the original user stack trace if it's available. We do this by recording a map of lines -> nodes during codegen and using that to figure out which node to show a stack trace for. This works only if recording stack traces (either you set `record_stack_traces=True`, or are using torch.export/make_fx). It also doesn't persist across GraphModule pickling, as we have no way of persisting the node references. In cases where we don't have stack traces available, or we don't have the line -> node mapping, things should degrade gracefully to the previous behavior. Test Plan: added a unit test Differential Revision: D48758566
8
1,263
108,099
DISABLED test_multilayer_var_cpu (__main__.CpuTests)
triaged, module: macos, skipped
Platforms: mac, macos This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_torchinductor.py%3A%3ACpuTests%3A%3Atest_multilayer_var_cpu)). Here is an example https://hud.pytorch.org/pytorch/pytorch/commit/60bb02a907aa1f5e6f2150d50ea28a48afaa8e95 cc @malfet @albanD @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
1
1,264
108,095
[inductor] minifier fails on moco
triaged, oncall: pt2, module: dynamic shapes, module: inductor
Run without minifier, the failure seems to be tensor being assigned to symint compile error: ```python CUDA_LAUNCH_BLOCKING=1 ./benchmarks/dynamo/torchbench.py --inference --performance --no-skip --inductor --freezing --only moco ... torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: TypeError: can't assign a SymInt to a torch.cuda.LongTensor While executing %setitem_1 : [num_users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod), kwargs = {}) Original traceback: File "/data/users/jansel/torchbenchmark/torchbenchmark/models/moco/moco/builder.py", line 66, in <resume in _dequeue_and_enqueue> self.queue_ptr[0] = ptr ``` If I enable the minifier, the error somehow changes to an IMA at runtime and the compile error goes away: ```python TORCHDYNAMO_REPRO_AFTER=dynamo CUDA_LAUNCH_BLOCKING=1 ./benchmarks/dynamo/torchbench.py --inference --performance --no-skip --inductor --freezing --only moco ... File "/tmp/torchinductor_jansel/4j/c4jliihk5stxxkbqkwmftdi3lzst3kortckftv6e3hikrpuh5wyo.py", line 665, in call triton_poi_fused_convolution_0.run(arg162_1, buf0, 96, 50176, grid=grid(96, 50176), stream=stream0) File "/data/users/jansel/pytorch/torch/_inductor/triton_heuristics.py", line 426, in run return launcher( File "<string>", line 6, in launcher torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered While executing %submod_0 : [num_users=2] = call_module[target=submod_0](args = (%l_im_q_,), kwargs = {}) Original traceback: None Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information Minifier script written to /data/users/jansel/pytorch/torch_compile_debug/run_2023_08_28_13_50_54_266476-pid_245453/minifier/minifier_launcher.py. Run this script to find the smallest traced graph which reproduces this error. ``` If I run the generated script, the failure can't be reproed: ```python python /data/users/jansel/pytorch/torch_compile_debug/run_2023_08_28_13_50_54_266476-pid_245453/minifier/minifier_launcher.py ... RuntimeError: Input graph did not fail the tester ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
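A heavily simplified sketch of the pattern that seems to trigger the compile error, modeled on moco's queue-pointer update; this is not verified to reproduce the failure exactly and is shown only to make the reported traceback concrete.
```python
import torch

class QueuePtr(torch.nn.Module):
    def __init__(self, queue_size=1024):
        super().__init__()
        self.queue_size = queue_size
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    def forward(self, keys):
        ptr = (int(self.queue_ptr) + keys.shape[0]) % self.queue_size
        self.queue_ptr[0] = ptr   # integer-to-buffer assignment, as in the traceback above
        return keys

compiled = torch.compile(QueuePtr(), dynamic=True)
compiled(torch.randn(8, 16))
```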
3
1,265
108,090
[Optimizer Perf] Improve speed of _init_group to c++
module: performance, module: optimizer, triaged, needs research
### ๐Ÿš€ The feature, motivation and pitch It has been noticed on many models that[ _init_group](https://github.com/pytorch/pytorch/blob/main/torch/optim/adamw.py#L90-L146) takes 33% of the whole `optimizer.step()` on **every step** for popular optimizers like adam, adamw, etc. since it has to loop through each single param for each step ## Proposal, do this in c++ just like how `_group_tensors_by_device_and_dtype` in https://github.com/pytorch/pytorch/pull/100007 was moved in c++, lets move _init_group there too cc: @janeyx99 @crcrpar cc: @clchan @ezhang887 ### Alternatives _No response_ ### Additional context _No response_ cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
4
1,266
108,079
Aliased Input/Output Requirement in `aot_export_joint_simple`
triaged, oncall: pt2, module: aotdispatch
### ๐Ÿ› Describe the bug When compiling a simple module with the function [`aot_export_joint_simple`](https://github.com/pytorch/pytorch/blob/054f3f1d8f9eb63ef8437991eba5b8f2aeee920f/torch/_functorch/aot_autograd.py#L4084), the following error is encountered: ```python Traceback (most recent call last): File "~/test.py", line 16, in <module> gm = aot_export_joint_simple(Aliased(), [torch.rand((5, 5))], trace_joint=False) File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 4134, in aot_export_joint_simple raise RuntimeError(f"aot_export_joint_simple does not support outputs that alias inputs. {str(metadata)}") RuntimeError: aot_export_joint_simple does not support outputs that alias inputs. ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False)], output_info=[OutputAliasInfo(output_type=<OutputType.alias_of_input: 2>, raw_type=<class 'torch.Tensor'>, base_idx=0, dynamic_dims=set())], requires_grad_info=[False], num_intermediate_bases=0, keep_input_mutations=False, traced_tangents=[], num_symints_saved_for_bw=None) ``` The model can be found below: ```python import torch from torch._functorch.aot_autograd import aot_export_joint_simple class Aliased(torch.nn.Module): def forward(self, x): return (x[None],) gm = aot_export_joint_simple(Aliased(), [torch.rand((5, 5))], trace_joint=False) ``` If we comment out these lines: https://github.com/pytorch/pytorch/blob/054f3f1d8f9eb63ef8437991eba5b8f2aeee920f/torch/_functorch/aot_autograd.py#L4133-L4134 Then, the module compiles and returns a graph in which the inputs and outputs are valid and no longer aliased, as so: ```python graph(): %arg0_1 : [num_users=1] = placeholder[target=arg0_1] %unsqueeze : [num_users=1] = call_function[target=torch.ops.aten.unsqueeze.default](args = (%arg0_1, 0), kwargs = {}) return (unsqueeze,) ``` Would it be possible to have the input/output alias check in this function be controlled by a flag, since it does not seem required for the output graph to be valid? Additionally, are there any suggestions for utilities to de-alias inputs from outputs? ### Versions PyTorch Version: `2.1.0.dev20230825+cu121` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
6
1,267
108,070
Update to newest CUTLASS version 3.2.1
topic: not user facing
# Summary This PR updates the third_party/ dependency CUTLASS from v3.1.0 to v3.2.1.
10
1,268
108,059
testing out_dtype_int_mm
release notes: fx, module: inductor, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108059 Summary: Test Plan: python /home/cdhernandez/local/pytorch/test/test_out_dtype_op.py -k "int_mm" &> log.log Reviewers: Subscribers: Tasks: Tags: cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
1
1,269
108,052
[POC] Add `frozen_offload`
null
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108052 * #107784 * #106080 * #106068 This is a proof-of-concept for an internal use case request. `frozen_offload` only applies to a module with all frozen parameters. By default, the parameters are offloaded to CPU (as views into a single contiguous tensor for copy efficiency), and for the forward computation the parameters are moved to GPU. ![Screenshot 2023-08-28 at 10 10 01 AM](https://github.com/pytorch/pytorch/assets/31054793/866d1245-f1a2-44c9-ad41-03759acc854c)
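A hedged sketch of the described mechanism (a hypothetical helper, not the PR's implementation): flatten the frozen parameters into one pinned CPU tensor and re-materialize them on GPU in a forward pre-hook. Moving them back to CPU after the forward is omitted here.

```python
import torch

def offload_frozen(module: torch.nn.Module, device="cuda"):
    # Collect the frozen parameters and pack them into one pinned CPU buffer.
    params = [p for p in module.parameters() if not p.requires_grad]
    flat_cpu = torch.cat([p.detach().reshape(-1) for p in params]).cpu().pin_memory()

    # Re-point each parameter at a CPU view of the flat buffer.
    offset = 0
    for p in params:
        n = p.numel()
        p.data = flat_cpu[offset:offset + n].view_as(p)
        offset += n

    def to_gpu_pre_hook(mod, args):
        # One H2D copy of the whole flat buffer, then carve out GPU views.
        flat_gpu = flat_cpu.to(device, non_blocking=True)
        off = 0
        for p in params:
            n = p.numel()
            p.data = flat_gpu[off:off + n].view_as(p)
            off += n

    module.register_forward_pre_hook(to_gpu_pre_hook)
    return module
```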
1
1,270
108,049
enhance argument checking for at::embedding_bag
open source, ciflow/trunk, topic: not user facing
Fix https://github.com/pytorch/pytorch/issues/107432 and https://github.com/pytorch/pytorch/issues/106362. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #108049
1
1,271
108,047
DDP training can not accept subnet address in IPV6
oncall: distributed, triaged
### ๐Ÿ› Describe the bug When I pass a normal ipv6 address[fe80::4315:8136:2e6:13f8] to torch.distributed.run, starting command is shown below: ```shell python -m torch.distributed.launch --rdzv_backend=static --nnodes=1 --nproc_per_node=4 --rdzv_endpoint=[fe80::121b:54ff:fe0f:41d3]:29500 testipv6.py ``` It can not be connected even I can ping this address successfully. Fisrtly, I thought IPV6 address is not supported in elastic launch. But when I pass [::1] to the script, the address is connected and the training started and finished. Thus, I turned on the debug log by: ```shell export TORCH_CPP_LOG_LEVEL=INFO export TORCH_DISTRIBUTED_DEBUG=DETAIL ``` The it gave me the error like below: ``` [I socket.cpp:454] [c10d - debug] The server socket will attempt to listen on an IPv6 address. [I socket.cpp:504] [c10d - debug] The server socket is attempting to listen on [::]:29500. [I socket.cpp:578] [c10d] The server socket has started to listen on [::]:29500. [I TCPStore.cpp:252] [c10d - debug] The server has started on port = 29500. [I socket.cpp:691] [c10d - debug] The client socket will attempt to connect to an IPv6 address of (fe80::4315:8136:2e6:13f8, 29500). [I socket.cpp:763] [c10d - trace] The client socket is attempting to connect to [xxx]:29500. [W socket.cpp:665] [c10d] The client socket has failed to connect to [xxx]:29500 (errno: 22 - Invalid argument). [I socket.cpp:700] [c10d - debug] The client socket will attempt to connect to an IPv4 address of (fe80::4315:8136:2e6:13f8, 29500). [W socket.cpp:665] [c10d] The IPv4 network addresses of (fe80::4315:8136:2e6:13f8, 29500) cannot be retrieved (gai error: -9 - Address family for hostname not supported). [W socket.cpp:665] [c10d] The IPv4 network addresses of (fe80::4315:8136:2e6:13f8, 29500) cannot be retrieved (gai error: -9 - Address family for hostname not supported). [W socket.cpp:665] [c10d] The IPv4 network addresses of (fe80::4315:8136:2e6:13f8, 29500) cannot be retrieved (gai error: -9 - Address family for hostname not supported). [W socket.cpp:665] [c10d] The IPv4 network addresses of (fe80::4315:8136:2e6:13f8, 29500) cannot be retrieved (gai error: -9 - Address family for hostname not supported). [W socket.cpp:665] [c10d] The IPv4 network addresses of (fe80::4315:8136:2e6:13f8, 29500) cannot be retrieved (gai error: -9 - Address family for hostname not supported). ``` From the err log shown above, I can not understand why the ipv6 address is an Invalid argument. Thus I found the lastest code right at the line which caused the error. The lateset code in socket.cpp is shown below: ```c++ SocketConnectOp::ConnectResult SocketConnectOp::tryConnectCore( const ::addrinfo& addr) { int r = ::connect(socket_->handle(), addr.ai_addr, addr.ai_addrlen); ``` Then, I found that the error is caused by the function connect. To debug easily, I extract this code into a cpp file, to find out why the address I passed is an invalid argument. The test .cpp script is shown below: ```c++ #include <iostream> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <unistd.h> #include <cstring> #include <net/if.h> int main() { int sockfd = socket(AF_INET6, SOCK_STREAM, 0); if (sockfd == -1) { std::cerr << "Socket creation failed." 
<< std::endl; return 1; } sockaddr_in6 serverAddr; serverAddr.sin6_family = AF_INET6; serverAddr.sin6_port = htons(29499); // Port number //serverAddr.sin6_scope_id = if_nametoindex("eno1"); if (inet_pton(AF_INET6, "fe80::4315:8136:2e6:13f8", &serverAddr.sin6_addr) <= 0) { std::cout << "Invalid IPv6 address." << std::endl; return 1; } if (connect(sockfd, (struct sockaddr*)&serverAddr, sizeof(serverAddr)) == -1) { perror("Connect failed"); return 1; } std::cout << "Connected to the server!" << std::endl; // Close the socket close(sockfd); return 0; } ``` In this script, I tried a different ipv6 address "2001:0db8:85a3:0000:0000:8a2e:0370:7334", this time, it gave me a different new error ``` Connect failed: Network is unreachable ``` Suddenly I realized that the IP address I passed may have something special that caused this error. Then, I found the IPV6 address is a subnet address started with fe80. The I tried other ip addresses not started with fe80, the invalid argument error disappeared, the new error is "network is unreachable". I think the current code can not support a subnet ipv6 address. Therefore, I add this line to the code: ```c++ serverAddr.sin6_scope_id = if_nametoindex("eno1"); ``` Run the .cpp script again, the result is shown as below: ``` Connect failed: Connection refused ``` At this step, I realized that I passed the verfication step, and the socket tried to connect the ip but been refused. Thus, I changed the origin code in torch in socket.cpp like this: ![image](https://github.com/pytorch/pytorch/assets/55902565/85716358-6b21-4083-8594-39627fe62fe8) Then, I compile the code, and run the elastic launch command again. This time, my training script started run and finished. Thus, I concluded that the source code does not support a subnet ipv6 address ### Versions [pip3] flake8==6.0.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.11.1 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] mypy==0.960 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.1 [pip3] torch==2.1.0.dev20230810+cpu [pip3] torchvision==0.15.1 [conda] numpy 1.23.1 pypi_0 pypi [conda] torchvision 0.15.1 pypi_0 pypi cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
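The same point the C++ experiment makes can be illustrated in Python (interface name `eno1` and the address are taken from the report and are environment-specific): a link-local (fe80::/10) IPv6 address needs a scope/zone id, which is what `sin6_scope_id` supplies in the C API.

```python
import socket

addr = "fe80::4315:8136:2e6:13f8"
port = 29500

with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
    try:
        # Without a scope id, connecting to a link-local address typically
        # fails with EINVAL or ENETUNREACH.
        s.connect((addr, port))
    except OSError as e:
        print("no scope id:", e)

with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
    scope = socket.if_nametoindex("eno1")
    try:
        # The 4-tuple form carries (host, port, flowinfo, scope_id),
        # mirroring sin6_scope_id in the C struct.
        s.connect((addr, port, 0, scope))
    except OSError as e:
        print("with scope id:", e)  # e.g. connection refused if nothing listens
```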
6
1,272
108,046
Enhanced Available Backend Discovery and Selection in PyTorch 2
triaged, enhancement, module: python frontend
### ๐Ÿš€ The feature, motivation and pitch New utility functions are proposed to make backend selection more intuitive and efficient in PyTorch 2: Avoid this type of constructs: DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") DEVICE = torch.device("cuda" if torch.cuda.is_available() else ("mps" if torch.has_mps else "cpu")) By introducing for example: torch.backends.get_available() -> List[str]: Returns a sorted list of available backends based on a set of predefined compatibility and performance metrics. torch.backends.select_optimal() -> str: Automatically selects the most compatible and efficient backend for the current system configuration. (The first one from the sorted get_available() list.) The feature aims to categorize and sort available computational backends, such as CUDA, MPS, and CPU, based on their computational power or other suitable criteria. ### Alternatives _No response_ ### Additional context _No response_ cc @albanD
5
1,273
108,043
RuntimeError with nn.ConstantPad when using torch.compile in max-autotune mode
triaged, oncall: pt2, module: inductor
### ๐Ÿ› Describe the bug Using torch.compile with the max-autotune mode and a tensor with negative dimensions leads to a RuntimeError. Specifically, I get the error "Trying to create tensor with negative dimension -1: [-1]". After a series of tests, I discovered that this error is triggered only when the input size is greater than dim=2. ๐Ÿ”„ To Reproduce Here's the code snippet to reproduce the issue: ```python import torch.nn as nn class CustomModel(nn.Module): def __init__(self): super(CustomModel, self).__init__() self.layer1 = nn.ConstantPad2d(padding=0, value=1) def forward(self, inputs): return self.layer1(inputs) ip_size = [0, 0, 1] input_tensor = torch.randn(ip_size) cuda_inputs = input_tensor.clone().to('cuda') mymodel = CustomModel() no_op_info = mymodel(input_tensor) mymodel.to('cuda') op_info = torch.compile(mymodel.forward, mode='max-autotune')(cuda_inputs) ``` ๐Ÿ“ƒ Stack Trace ``` File "/home/.conda/envs/stable/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 300, in static_input buffer = torch.zeros(needed_size, dtype=x.dtype, device=x.device) RuntimeError: Trying to create tensor with negative dimension -1: [-1] ``` If this issue is known or intended behavior, please feel free to close it. ### Versions PyTorch version: 2.0.1+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 GPU 1: NVIDIA GeForce RTX 2070 GPU 2: NVIDIA GeForce RTX 2070 GPU 3: NVIDIA GeForce RTX 2070 Nvidia driver version: 535.86.10 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz CPU family: 6 Model: 63 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 Stepping: 2 CPU max MHz: 3200.0000 CPU min MHz: 1200.0000 BogoMIPS: 4794.42 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d Virtualization: VT-x L1d cache: 512 KiB (16 instances) L1i cache: 512 KiB (16 instances) L2 cache: 4 MiB (16 instances) L3 cache: 40 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31 Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional 
cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1+cu118 [pip3] torchaudio==2.0.2+cu118 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] numpy 1.24.3 pypi_0 pypi [conda] torch 2.0.1+cu118 pypi_0 pypi [conda] torchaudio 2.0.2+cu118 pypi_0 pypi [conda] torchvision 0.15.2+cu118 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @ezyang @gchanan @zou3519 @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
3
1,274
108,042
[xla hash update] update the pinned xla hash
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml). Update the pinned xla hash.
4
1,275
108,041
Undefined Symbol: pybind11::detail::type_caster<at::Tensor, void>::load(pybind11::handle, bool)
needs reproduction, triaged, module: pybind
### ๐Ÿ› Describe the bug This bug report is a derivation of what was reported on the pybind/pybind11 GitHub repo: [pybind issue #4696](https://github.com/pybind/pybind11/issues/4696) The traceback I got was: Traceback (most recent call last): File "run.py", line 25, in import cfvpy.tasks File "/home/usrname/rebel-main/cfvpy/tasks.py", line 15, in import cfvpy.selfplay File "/home/usrname/rebel-main/cfvpy/selfplay.py", line 27, in import cfvpy.rela ImportError: /home/usrname/rebel-main/cfvpy/rela.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN8pybind116detail11type_casterIN2at6TensorEvE4loadENS_6handleEb To reproduce, please follow the steps as outlined by me in a comment of the pybind issue above: [reproduction procedure](https://github.com/pybind/pybind11/issues/4696#issuecomment-1678574353) And one of pybind's contributors, rwgk, suggested that this might be a bug/issue of PyTorch, [rwgk's comment](https://github.com/pybind/pybind11/issues/4696#issuecomment-1679013574) ### Versions PyTorch: 1.4.1 pybind: [a1b71df](https://github.com/pybind/pybind11/tree/a1b71df137e015d44f7e31f7b6d4807253fb7871)
5
1,276
108,037
Script to compare measured (trace) runtimes with estimated runtimes
fb-exported, topic: not user facing, module: inductor, ciflow/inductor
Reviewed By: xuzhao9 Differential Revision: D48523883 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
33
1,277
108,030
DISABLED test_autocast_flash_attention (__main__.ActivationCheckpointingViaTagsTests)
module: rocm, triaged, module: flaky-tests, skipped, module: dynamo
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_autocast_flash_attention) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16234790976). Over the past 72 hours, it has flakily failed in 9 workflow(s). **Debugging instructions (after clicking on the recent samples link):** To find relevant log snippets: 1. Click on the workflow logs linked above 2. Grep for `test_autocast_flash_attention` Test file path: `dynamo/test_activation_checkpointing.py` cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
1
1,278
108,023
[FSDP] Ignored modules on meta device seem to be initialized on CUDA device
oncall: distributed, triaged, actionable, module: fsdp
### ๐Ÿ› Describe the bug When passing in a module as `ignored_modules`, should we also ensure FSDP does not initialize it via `to_empty` + `reset_parameters`? If `ignored_modules` contract is that FSDP ignores the module entirely, this might be what's expected especially if users want to apply different initialization / sharding to the ignored modules. One way this could cause issues in practice is if I have a very large embedding on a meta device, wrap with FSDP and ignore it, and want to initialize + shard the embedding separately after FSDP wrap. In this case, seems like the full Embedding would be allocated by FSDP. The following test should reproduce ``` @skip_if_lt_x_gpu(2) def test_fsdp_ignored_module_meta(self): class CPUGPUModule(nn.Module): def __init__(self): super().__init__() self.a = nn.Linear(1, 1) self.b = nn.Linear(1, 1) with torch.device("meta"): m= CPUGPUModule() m = FSDP(m, device_id=self.rank, ignored_modules=[m.a], use_orig_params=True) print(f"RV: {next(m.a.parameters()).device}") ``` ### Versions main cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
2
1,279
108,022
ShapeEnv produce_guards AssertionError Triggered when tensor is resized
triaged, has workaround, oncall: pt2, module: dynamic shapes, module: dynamo
### ๐Ÿ› Describe the bug Using `torch.atan` with a 5D input tensor triggers an AssertionError, specifically `assert len(constraint) == t.dim()`, when compiling the model with `torch.compile()` ### ๐Ÿ”„ To Reproduce Here's the minimal code required to reproduce the issue: ```python import torch from torch import nn class CustomModel(nn.Module): def __init__(self): super(CustomModel, self).__init__() def forward(self, inputs): return torch.atan(**inputs) ip_size = [1, 3, 4, 1, 1] input_tensor = torch.randn(ip_size) out_size = [1] out_tensor = torch.randn(out_size) cuda_inputs = input_tensor.clone().to('cuda') cuda_out = out_tensor.clone().to('cuda') mymodel = CustomModel() no_op_info= mymodel({'input': input_tensor, 'out': out_tensor}) mymodel.to('cuda') op_info = torch.compile(mymodel)({'input': cuda_inputs, 'out': cuda_out}) print(op_info) ``` ### ๐Ÿ“ƒ Stack Trace ``` return self.create_fn(self.source.select(local_builder, global_builder), self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/guihuan/.conda/envs/night/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 599, in SHAPE_ENV guards = output_graph.shape_env.produce_guards( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/guihuan/.conda/envs/night/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 2633, in produce_guards assert len(constraint) == t.dim() AssertionError: ``` If this is an intended behavior or a known bug, feel free to close this issue. Should more information be required for debugging, I'm available to provide it. ### Versions PyTorch version: 2.1.0.dev20230826+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 GPU 1: NVIDIA GeForce RTX 2070 GPU 2: NVIDIA GeForce RTX 2070 GPU 3: NVIDIA GeForce RTX 2070 Nvidia driver version: 535.86.10 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz CPU family: 6 Model: 63 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 Stepping: 2 CPU max MHz: 3200.0000 CPU min MHz: 1200.0000 BogoMIPS: 4794.42 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d Virtualization: VT-x L1d cache: 512 KiB (16 instances) L1i 
cache: 512 KiB (16 instances) L2 cache: 4 MiB (16 instances) L3 cache: 40 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31 Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] torch==2.1.0.dev20230826+cu118 [pip3] torchaudio==2.1.0.dev20230826+cu118 [pip3] torchvision==0.16.0.dev20230826+cu118 [conda] numpy 1.24.1 pypi_0 pypi [conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi [conda] torch 2.1.0.dev20230826+cu118 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230826+cu118 pypi_0 pypi [conda] torchvision 0.16.0.dev20230826+cu118 pypi_0 pypi cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
5
1,280
108,017
(#66813) Adding support for slots in subclasses of Module.
triaged, open source, release notes: nn, topic: improvements
Fixes #66813
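A minimal illustration (inferred from the referenced issue's title, not taken from this PR's tests) of the pattern the change targets: an `nn.Module` subclass that declares `__slots__` for a non-parameter attribute.

```python
import torch
import torch.nn as nn

class Scaler(nn.Module):
    __slots__ = ("scale",)  # non-parameter attribute stored in a slot

    def __init__(self, scale: float):
        super().__init__()
        self.scale = scale
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x) * self.scale

print(Scaler(2.0)(torch.randn(1, 4)).shape)
```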
2
1,281
108,016
Failure in initiating PyTorch DDP-style code (multi-machine multi-card environment)
oncall: distributed, triaged
### ๐Ÿ› Describe the bug I implemented a DDP-style pytorch program, and it can can be successfully run in single-machine multi-card Environment, the program (demo3_ddp.py) is as follows, -------------------------------------- ``` #! /usr/bin/env `# -*- coding:utf-8 -*-` import os import os.path as osp import socket from contextlib import closing from datetime import datetime import argparse import torch.multiprocessing as mp import torchvision import torchvision.transforms as transforms import torch import torch.nn as nn import torch.distributed as dist #from apex.parallel import DistributedDataParallel as DDP #from apex import amp from DDP.model import ConvNet def train(gpu, args): ############################################################ rank = args.node_rank * args.gpus_per_node + gpu assert (torch.distributed.is_nccl_available() == True) if args.init_method == 'ENV': dist.init_process_group( backend='nccl', init_method='env://', world_size=args.world_size, rank=rank ) elif args.init_method == 'TCP': dist.init_process_group( backend='nccl', init_method='tcp://{}:{}'.format(args.master_addr, args.master_port), world_size=args.world_size, rank=rank ) elif args.init_method == 'SFILE': dist.init_process_group( backend='nccl', init_method='file://{}'.format(args.shard_file), world_size=args.world_size, rank=rank ) else: print(f'No implementation for init_method={args.init_method}') exit(-1) print(f'process/rank {rank} is initialized.') ############################################################ torch.manual_seed(0) model = ConvNet() ''' torch.cuda.set_device(gpu_id) # set single gpu for current process torch.cuda.set_device('cuda:' + str(gpu_ids)) # set multiple gpus for current process ''' torch.cuda.set_device(gpu) # gpu: local rank, set GPU for each process in the current node model.cuda(gpu) ############################################################### # model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) # especially for BN operation in DDP mode # Wrap the model model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu], output_device=gpu) ############################################################### ''' if rank == 0: torch.save(model.state_dict(), osp.join(args.checkpoint_path, 'model.pt')) dist.barrier() # ensure save model operation finished. 
map_location = {"cuda:0": f"cuda:{gpu}"} model.load_state_dict(torch.load(osp.join(args.checkpoint_path, 'model.pt'), map_location=map_location)) # each process load model from rank:0 ''' batch_size = args.batch_size # define loss function (criterion) and optimizer criterion = nn.CrossEntropyLoss().cuda(gpu) optimizer = torch.optim.SGD(model.parameters(), 1e-4) # Data loading code trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))]) train_dataset = torchvision.datasets.MNIST( root='./data', train=True, transform=trans, download=True ) ################################################################ train_sampler = torch.utils.data.distributed.DistributedSampler( train_dataset, num_replicas=args.world_size, rank=rank ) ################################################################ train_loader = torch.utils.data.DataLoader( dataset=train_dataset, batch_size=batch_size, ############################## shuffle=False, # ############################## num_workers=0, pin_memory=True, ############################# sampler=train_sampler) # ############################# os.makedirs(args.checkpoint_path, exist_ok=True) start = datetime.now() total_step = len(train_loader) for epoch in range(args.epochs): # set update seed for epoch, ensure the acquired data for each process is different across epochs train_sampler.set_epoch(epoch) for i, (images, labels) in enumerate(train_loader): print(f'Batch, input.shape: {images.shape}') images = images.cuda(non_blocking=True) labels = labels.cuda(non_blocking=True) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() #if (i + 1) % 100 == 0 and gpu == 0: if (i + 1) % 100 == 0 and rank == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format( epoch + 1, args.epochs, i + 1, total_step, loss.item()) ) # save model per epoch if rank == 0: # note that dist.barrior() is not required, since the all-reduce operation has ensured the synchronism between processes. torch.save(model.state_dict(), osp.join(args.checkpoint_path, f'model_{epoch}.pt')) #if gpu == 0: #rank = dist.get_rank() if rank == 0: # only print message in the first process print("Training complete in: " + str(datetime.now() - start)) def evaluation(gpu, args): ''' Not implemented, consider...... ''' ############################################################ rank = args.node_rank * args.gpus_per_node + gpu if args.init_method == 'ENV': dist.init_process_group( backend='nccl', init_method='env://', world_size=args.world_size, rank=rank ) elif args.init_method == 'TCP': dist.init_process_group( backend='nccl', init_method='tcp://172.16.21.159:8887', world_size=args.world_size, rank=rank ) print(f'process/rank {rank} is initialized.') ############################################################ torch.manual_seed(0) model = ConvNet() ''' torch.cuda.set_device(gpu_id) # set single gpu for current process torch.cuda.set_device('cuda:' + str(gpu_ids)) # set multiple gpus for current process ''' torch.cuda.set_device(gpu) # gpu: local rank, set GPU for each process in the current node model.cuda(gpu) ############################################################### # Wrap the model model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu]) ############################################################### ''' if rank == 0: torch.save(model.state_dict(), osp.join(args.checkpoint_path, 'model.pt')) dist.barrier() # ensure save model operation finished. 
map_location = {"cuda:0": f"cuda:{gpu}"} model.load_state_dict(torch.load(osp.join(args.checkpoint_path, 'model.pt'), map_location=map_location)) # each process load model from rank:0 ''' batch_size = args.batch_size # define loss function (criterion) and optimizer criterion = nn.CrossEntropyLoss().cuda(gpu) optimizer = torch.optim.SGD(model.parameters(), 1e-4) # Data loading code train_dataset = torchvision.datasets.MNIST( root='./data', train=True, transform=transforms.ToTensor(), download=True ) ################################################################ train_sampler = torch.utils.data.distributed.DistributedSampler( train_dataset, num_replicas=args.world_size, rank=rank ) ################################################################ train_loader = torch.utils.data.DataLoader( dataset=train_dataset, batch_size=batch_size, ############################## shuffle=False, # ############################## num_workers=0, pin_memory=True, ############################# sampler=train_sampler) # ############################# os.makedirs(args.checkpoint_path, exist_ok=True) start = datetime.now() total_step = len(train_loader) for epoch in range(args.epochs): # set update seed for epoch, ensure the acquired data for each process is different across epochs train_sampler.set_epoch(epoch) for i, (images, labels) in enumerate(train_loader): print(f'Batch, input.shape: {images.shape}') images = images.cuda(non_blocking=True) labels = labels.cuda(non_blocking=True) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() #if (i + 1) % 100 == 0 and gpu == 0: if (i + 1) % 100 == 0 and rank == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format( epoch + 1, args.epochs, i + 1, total_step, loss.item()) ) # save model per epoch if rank == 0: # note that dist.barrior() is not required, since the all-reduce operation has ensured the synchronism between processes. 
torch.save(model.state_dict(), osp.join(args.checkpoint_path, f'model_{epoch}.pt')) #if gpu == 0: #rank = dist.get_rank() if rank == 0: # only print message in the first process print("Training complete in: " + str(datetime.now() - start)) def get_open_port(): with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: s.bind(('', 0)) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) return s.getsockname()[1] def main(): parser = argparse.ArgumentParser() parser.add_argument('--init_method', type=str, default='ENV', choices=['TCP', 'ENV']) parser.add_argument('-nodes', '--nnodes', default=1, type=int, metavar='N', help='the number of nodes') parser.add_argument('-gpus', '--gpus_per_node', default=1, type=int, help='number of gpus per node') parser.add_argument('-nr', '--node_rank', default=0, type=int, help='ranking of the current nodes') parser.add_argument('--master_addr', type=str, default='', help='master addr') parser.add_argument('--master_port', type=str, default='36185', help='') parser.add_argument('--epochs', default=200, type=int, metavar='N', help='number of total epochs to run') parser.add_argument('--batch_size', default=100, type=int, help='') parser.add_argument('--checkpoint_path', type=str, default='./DDP/checkpoints/model.checkpoints', help='') args = parser.parse_args() free_port = get_open_port() print(f'free_port: {free_port}') ######################################################### args.world_size = args.gpus_per_node * args.nnodes # #GPU = #Process os.environ['MASTER_ADDR'] = args.master_addr # IP addr of process-0 (node-0) os.environ['MASTER_PORT'] = args.master_port # PORT of process-0 (node-0) ''' using mp.spawn mp.spawn(train, nprocs=args.gpus_per_node, args=(args,)) => [train(i, args) for i in [0, args.gpus_per_node - 1]] ''' mp.spawn(train, nprocs=args.gpus_per_node, args=(args,)) # start all processes in the current node using torch.mp ######################################################### ''' # using mp.Process processes = [] for local_rank in range(args.gpus_per_node): p = mp.Process(target=train, args=(local_rank, args)) p.start() processes.append(p) for p in processes: p.join() ''' if __name__ == '__main__': print(f'torch.cuda.is_available(): {torch.cuda.is_available()}') print(f'torch.cuda.device_count(): {torch.cuda.device_count()}') main() ``` --------------------------------- wherein the ConvNet model (model.py) is implemented as follows, --------------------------------- ``` #! 
/usr/bin/env # -*- coding:utf-8 -*- import torch import torch.nn as nn import torchvision class ConvNet(nn.Module): def __init__(self, num_classes=10): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.fc = nn.Linear(7*7*32, num_classes) def forward(self, x): print(f'Forward, input.shape: {x.shape}') out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out class ResNet(nn.Module): def __init__(self, num_classes=10): super(ResNet, self).__init__() net = torchvision.models.resnet101(num_classes=num_classes) net.conv1 = torch.nn.Conv2d(1, 64, (7, 7), (2, 2), (3, 3), bias=False) self.network = net def forward(self, x): return self.network(x) ``` ---------------------------- My experimental environment is: machine 1, two RTX3090; machine 2, two RTX 4090; the program is initiated using the following commands on two machines respectively, ``` $ python -m DDP.demo3_ddp -nodes 2 -gpus 2 -nr 0 --master_addr 'your_master_machine_ip' $ python -m DDP.demo3_ddp -nodes 2 -gpus 2 -nr 1 --master_addr 'your_master_machine_ip' ``` then, the results are as follows, on the first (master) machine, the output information is as follows, torch.cuda.is_available(): True torch.cuda.device_count(): 2 free_port: 37781 process/rank 0 is initialized. process/rank 1 is initialized. On the second machine, the output information is as follows, torch.cuda.is_available(): True torch.cuda.device_count(): 2 free_port: 56899 Process SpawnProcess-1: Process SpawnProcess-2: ... torch.multiprocessing.spawn.ProcessRaisedException: -- Process 1 terminated with the following error: Traceback (most recent call last): File "/home/tme/anaconda3/envs/pt1.10py3.9/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/qinkeke/workspace/pycharm/DDP_project/DDP/demo3_ddp.py", line 70, in train dist.init_process_group( File "/home/tme/anaconda3/envs/pt1.10py3.9/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 611, in init_process_group default_pg._set_sequence_number_for_group() RuntimeError: Socket Timeout It seems that the process group can not be successfully initiated. Can any one help me fix this issue? ### Versions Torch1.11 Python3.8 cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
1
1,282
108,014
NameError: name 's1' is not defined
needs reproduction, triaged, oncall: pt2, module: dynamic shapes, module: inductor
### ๐Ÿ› Describe the bug I am running a DDP job on 8 GPUs. torch.compile fails with the following error. Single GPU compilation runs fine. ### Error logs ``` File "/usr/local/lib/python3.8/dist-packages/torch/fx/graph_module.py", line 284, in __call__ raise e File "/usr/local/lib/python3.8/dist-packages/torch/fx/graph_module.py", line 274, in __call__ return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "<eval_with_key>.191", line 11, in forward submod_1 = self.compiled_submod_1(getitem, getitem_1, l_self_pos_enc_class_pe); getitem = getitem_1 = l_self_pos_enc_class_pe = None File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/backends/distributed.py", line 335, in forward x = self.submod(*args) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 328, in _fn return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/external_utils.py", line 17, in inner return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 3905, in forward return compiled_fn(full_args) File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1482, in g return f(*args) File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 2527, in runtime_wrapper all_outs = call_func_with_args( File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1506, in call_func_with_args out = normalize_as_list(f(args)) File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1482, in g return f(*args) File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 539, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 3010, in forward fw_outs = call_func_with_args( File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/aot_autograd.py", line 1506, in call_func_with_args out = normalize_as_list(f(args)) File "/usr/local/lib/python3.8/dist-packages/torch/_inductor/codecache.py", line 374, in __call__ return self.get_current_callable()(inputs) File "/usr/local/lib/python3.8/dist-packages/torch/_inductor/compile_fx.py", line 628, in run return model(new_inputs) File "/usr/local/lib/python3.8/dist-packages/torch/_inductor/codecache.py", line 401, in _run_from_cache return compiled_graph.compiled_artifact(inputs) File "/tmp/torchinductor_root/qe/cqecvn34iq32lm3fql55scrqfzzt5hcnj4g2gnpyqjgnpdbaf527.py", line 161, in call assert_size_stride(primals_3, (8, (s1 // 6), 64*(s2 // 6)), (64*(s1 // 6)*(s2 // 6), 64*(s2 // 6), 1)) NameError: name 's1' is not defined ``` ### Minified repro TORCHDYNAMO_REPRO_AFTER="aot" doesn't seem to do anything TORCHDYNAMO_REPRO_AFTER="dynamo" generates minifier_launcher.py which fails: ``` File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/output_graph.py", line 957, in 
compile_and_call_fx_graph compiled_fn = self.call_user_compiler(gm) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/utils.py", line 189, in time_wrapper r = func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/output_graph.py", line 1024, in call_user_compiler raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/output_graph.py", line 1009, in call_user_compiler compiled_fn = compiler_fn(gm, self.example_inputs()) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper compiled_gm = compiler_fn(gm, example_inputs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 288, in dynamo_minifier_backend minifier( File "/usr/local/lib/python3.8/dist-packages/torch/_functorch/fx_minifier.py", line 189, in minifier raise RuntimeError("Input graph did not fail the tester") torch._dynamo.exc.BackendCompilerFailed: backend='debug_wrapper' raised: RuntimeError: Input graph did not fail the tester ``` ### Versions Versions of relevant libraries: [pip3] numpy==1.24.4 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] torch==2.1.0.dev20230825+cu118 [pip3] torchaudio==2.1.0.dev20230825+cu118 [pip3] torchdata==0.7.0.dev20230825 [pip3] torchtext==0.16.0.dev20230826+rocm5.6 [pip3] torchvision==0.16.0.dev20230826+cu118 [pip3] triton==2.0.0 cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
2
1,283
108,013
Installation with rocm5.6 results in error: assert len(weights) == expected_node_count AssertionError
needs reproduction, module: build, module: rocm, triaged
### ๐Ÿ› Describe the bug I'm trying to install pytorch from wheel with rocm5.6, but yields error. I use the following command: pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/rocm5.6/ here is the resulting stack trace error: ERROR: Exception: Traceback (most recent call last): File "/home/rubing/venv/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 165, in exc_logging_wrapper status = run_func(*args) File "/home/rubing/venv/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper return func(self, options, args) File "/home/rubing/venv/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 389, in run to_install = resolver.get_installation_order(requirement_set) File "/home/rubing/venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 188, in get_installation_order weights = get_topological_weights( File "/home/rubing/venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 276, in get_topological_weights assert len(weights) == expected_node_count ### Versions Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.35 Is CUDA available: N/A CUDA runtime version: 11.5.119 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Vendor ID: AuthenticAMD Model name: AMD Ryzen 5 3600X 6-Core Processor CPU family: 23 Model: 113 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 4408.5928 CPU min MHz: 2200.0000 BogoMIPS: 7585.68 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es Virtualization: AMD-V L1d cache: 192 KiB (6 instances) L1i cache: 192 KiB (6 instances) L2 cache: 3 MiB (6 instances) L3 cache: 32 MiB (2 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-11 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not 
affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] No relevant packages [conda] Could not collect cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
1
1,284
108,001
Intra-graph communication reordering pass on Inductor scheduler IR (based on #100762)
fb-exported, module: inductor, ciflow/inductor
Differential Revision: D47657156 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @chenyang78 @kadeng @muchulee8 @aakhundov
3
1,285
107,999
`upsample_bilinear2d_backward_out_cuda` is nondeterministic
triaged, module: determinism
### ๐Ÿ› Describe the bug torch.use_deterministic_algorithms(True) nn.Upsample(scale_factor=scale_factor, mode='bilinear') RuntimeError: upsample_bilinear2d_backward_out_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. ### Versions nn.Upsample(scale_factor=scale_factor, mode='bilinear') cc @mruberry @kurtamohler
3
1,286
107,990
[vision hash update] update the pinned vision hash
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml). Update the pinned vision hash.
4
1,287
107,984
fix: adam(w) ignore stride mismatch when dim is size 1
module: optimizer, triaged, open source, release notes: foreach_frontend
Fixes #106951: when a dim is of size 1, a stride mismatch does not matter. As discussed in #78050, strides are inconsistent and may sometimes change for size-1 dims during autograd. Thus we fix the fused_adam(w) check to account for this case. cc: @clchan @ezhang887 cc for review @janeyx99 @albanD cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
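A small illustration of why the stride of a size-1 dim is irrelevant: the index along that dim is always 0, so its stride never contributes to the memory offset.

```python
import torch

base = torch.randn(4, 1)
# Give the size-1 dim an arbitrary stride; the data accessed is identical.
alias = base.as_strided(size=(4, 1), stride=(1, 7))

print(base.stride(), alias.stride())  # (1, 1) vs (1, 7)
print(torch.equal(base, alias))       # True: same elements despite the mismatch
```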
11
1,288
107,980
DISABLED test_predispatch_with_for_out_dtype_nested_dynamic_shapes (__main__.DynamicShapesExportTests)
triaged, module: flaky-tests, skipped, module: dynamo
Platforms: asan, linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_predispatch_with_for_out_dtype_nested_dynamic_shapes&suite=DynamicShapesExportTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16222572004). Over the past 3 hours, it has been determined flaky in 54 workflow(s) with 162 failures and 54 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_predispatch_with_for_out_dtype_nested_dynamic_shapes` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `dynamo/test_dynamic_shapes.py` cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
2
1,289
107,972
enable in-place buffer mutation
ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #107972 Differential Revision: [D48693776](https://our.internmc.facebook.com/intern/diff/D48693776/) cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
4
1,290
108,065
Batching rule for aten::_scaled_dot_product_attention_math not yet implemented
triaged, actionable, module: vmap, oncall: transformer/mha, module: functorch
I got the following warning: `UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::_scaled_dot_product_attention_math. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ..\aten\src\ATen\functorch\BatchedFallback.cpp:84.)` Since you asked for it ("If you see any of these cases, please let us know by opening an issue at our GitHub!"), here is the issue! cc @zou3519 @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @Chillee @samdow @kshitij12345 @janeyx99
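A hedged minimal reproduction (shapes are arbitrary): `vmap` over `scaled_dot_product_attention` falls back to the sequential batching path and, when the math backend is used, can emit exactly this warning.

```python
import torch
import torch.nn.functional as F

def attn(q, k, v):
    return F.scaled_dot_product_attention(q, k, v)

# Extra leading dimension of size 8 to vmap over; each slice is (2, 16, 32).
q = k = v = torch.randn(8, 2, 16, 32)
out = torch.vmap(attn)(q, k, v)
print(out.shape)  # torch.Size([8, 2, 16, 32])
```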
0
1,291
107,967
[PyTorch] torch.empty_permuted: rename param name from 'physical_layout' to 'dim_order'
fb-exported, module: inductor, ciflow/inductor
Summary: Aligns well with the new `tensor.dim_order()` fn. Since this is a positional arg and not a kwarg, and the dtype is the same, this change shouldn't be BC-breaking. Test Plan: CI Differential Revision: D48176693 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
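For reference, a hedged usage sketch (not taken from the PR) of the positional argument being renamed: it describes the physical memory order of the allocated tensor, here an NHWC-style layout for a logically NCHW shape.

```python
import torch

# Second positional arg is the order of dims in memory; (0, 2, 3, 1) allocates
# a logically-NCHW tensor with a channels-last physical layout.
x = torch.empty_permuted((2, 3, 4, 5), (0, 2, 3, 1))
print(x.shape)   # torch.Size([2, 3, 4, 5])
print(x.stride())  # channels-last strides: (60, 1, 15, 3)
```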
4
1,292
107,961
aten.lift throws error in dynamo backends -> RuntimeError: !at::functionalization::impl::isFunctionalTensor(self) INTERNAL ASSERT FAILED at "../aten/src/ATen/FunctionalizeFallbackKernel.cpp":167
triaged, module: functionalization
### ๐Ÿ› Describe the bug Minimal Example: ```python import torch def foo(x): return torch.ops.aten.lift(x) opt = torch.compile(foo, backend="inductor") opt(torch.randn(10)) ``` Using any backend (other than vanilla `eager` mode) in Dynamo seems to trigger this assertion error. The offending assertion is [here](https://github.com/pytorch/pytorch/blob/3a3cf0e09d475df9237c95ebd14debf650e0c038/aten/src/ATen/FunctionalizeFallbackKernel.cpp#L176) and the `isFunctionalTensor` check is [here](https://github.com/pytorch/pytorch/blob/3a3cf0e09d475df9237c95ebd14debf650e0c038/aten/src/ATen/FunctionalTensorWrapper.cpp#L575). It seems that the tensor is not functionalized properly? I'm a newbie to dynamo so I'm not sure exactly what the issue is but this seems to affect all the dynamo backends - even though backends like `inductor` actually attempt to [lower](https://github.com/pytorch/pytorch/blob/3a3cf0e09d475df9237c95ebd14debf650e0c038/torch/_inductor/lowering.py#L647) this op into a no-op first. Dynamo newbie so I'm not sure if this even an intended use case, but would like to know what the issue is here. cc @bdhirsh @ezyang @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @brucekimrokcmu @stellaraccident ### Versions ``` PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.0 (arm64) GCC version: Could not collect Clang version: 14.0.3 (clang-1403.0.22.14.1) CMake version: version 3.26.4 Libc version: N/A Python version: 3.10.12 (main, Jul 5 2023, 15:02:25) [Clang 14.0.6 ] (64-bit runtime) Python platform: macOS-13.0-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M2 Max Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchdata==0.6.1 [pip3] torchtext==0.15.2 [pip3] torchvision==0.15.2 [conda] cpuonly 2.0 0 pytorch-nightly [conda] numpy 1.24.3 pypi_0 pypi [conda] pytorch-mutex 1.0 cpu pytorch-nightly [conda] torch 2.0.1 pypi_0 pypi [conda] torchaudio 2.0.2 pypi_0 pypi [conda] torchdata 0.6.1 pypi_0 pypi [conda] torchtext 0.15.2 pypi_0 pypi [conda] torchvision 0.15.2 pypi_0 pypi ```
2
1,293
107,960
Torch compile: libcuda.so cannot be found
triaged, dependency issue, oncall: pt2
### ๐Ÿ› Describe the bug Using `torch.compile` with a Colab T4 GPU fails and gives a very cryptic error running on nightly 2.1 <details> <summary> Error logs </summary> ```python --------------------------------------------------------------------------- BackendCompilerFailed Traceback (most recent call last) [<ipython-input-8-3e6b92348d53>](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in <cell line: 1>() ----> 1 audio = pipe("brazilian samba drums").audios[0] 52 frames [/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in decorate_context(*args, **kwargs) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) 116 117 return decorate_context [/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/audioldm2/pipeline_audioldm2.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in __call__(self, prompt, audio_length_in_s, num_inference_steps, guidance_scale, negative_prompt, num_waveforms_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, generated_prompt_embeds, negative_generated_prompt_embeds, attention_mask, negative_attention_mask, max_new_tokens, return_dict, callback, callback_steps, cross_attention_kwargs, output_type) 925 926 # predict the noise residual --> 927 noise_pred = self.unet( 928 latent_model_input, 929 t, [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _wrapped_call_impl(self, *args, **kwargs) 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1517 else: -> 1518 return self._call_impl(*args, **kwargs) 1519 1520 def _call_impl(self, *args, **kwargs): [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _call_impl(self, *args, **kwargs) 1525 or _global_backward_pre_hooks or _global_backward_hooks 1526 or _global_forward_hooks or _global_forward_pre_hooks): -> 1527 return forward_call(*args, **kwargs) 1528 1529 try: [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _fn(*args, **kwargs) 326 dynamic_ctx.__enter__() 327 try: --> 328 return fn(*args, **kwargs) 329 finally: 330 set_eval_frame(prior) [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _wrapped_call_impl(self, *args, **kwargs) 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1517 else: -> 1518 return self._call_impl(*args, **kwargs) 1519 1520 def _call_impl(self, *args, **kwargs): [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _call_impl(self, *args, **kwargs) 1525 or _global_backward_pre_hooks or _global_backward_hooks 1526 or _global_forward_hooks or 
_global_forward_pre_hooks): -> 1527 return forward_call(*args, **kwargs) 1528 1529 try: [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in catch_errors(frame, cache_entry, frame_state) 486 487 with compile_lock, _disable_current_modes(): --> 488 return callback(frame, cache_entry, hooks, frame_state) 489 490 catch_errors._torchdynamo_orig_callable = callback # type: ignore[attr-defined] [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _convert_frame(frame, cache_entry, hooks, frame_state) 623 counters["frames"]["total"] += 1 624 try: --> 625 result = inner_convert(frame, cache_entry, hooks, frame_state) 626 counters["frames"]["ok"] += 1 627 return result [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _fn(*args, **kwargs) 137 cleanup = setup_compile_debug() 138 try: --> 139 return fn(*args, **kwargs) 140 finally: 141 cleanup.close() [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _convert_frame_assert(frame, cache_entry, hooks, frame_state) 378 ) 379 --> 380 return _compile( 381 frame.f_code, 382 frame.f_globals, [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_size, frame, frame_state, compile_id) 553 with compile_context(CompileContext(compile_id)): 554 try: --> 555 guarded_code = compile_inner(code, one_graph, hooks, transform) 556 return guarded_code 557 except ( [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_inner(code, one_graph, hooks, transform) 475 for attempt in itertools.count(): 476 try: --> 477 out_code = transform_code_object(code, transform) 478 orig_code_map[out_code] = code 479 break [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in transform_code_object(code, transformations, safe) 1026 propagate_line_nums(instructions) 1027 -> 1028 transformations(instructions, code_options) 1029 return clean_and_assemble_instructions(instructions, keys, code_options)[1] 1030 
[/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in transform(instructions, code_options) 442 try: 443 with tracing(tracer.output.tracing_context): --> 444 tracer.run() 445 except (exc.RestartAnalysis, exc.SkipFrame): 446 raise [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in run(self) 2072 2073 def run(self): -> 2074 super().run() 2075 2076 def match_nested_cell(self, name, cell): [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in run(self) 722 self.instruction_pointer is not None 723 and not self.output.should_exit --> 724 and self.step() 725 ): 726 pass [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in step(self) 686 self.f_code.co_filename, self.lineno, self.f_code.co_name 687 ) --> 688 getattr(self, inst.opname)(inst) 689 690 return inst.opname != "RETURN_VALUE" [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in RETURN_VALUE(self, inst) 2160 ) 2161 log.debug("RETURN_VALUE triggered compile") -> 2162 self.output.compile_subgraph( 2163 self, 2164 reason=GraphCompileReason( [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_subgraph(self, tx, partial_convert, reason) 855 if count_calls(self.graph) != 0 or len(pass2.graph_outputs) != 0: 856 output.extend( --> 857 self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 858 ) 859 [/usr/lib/python3.10/contextlib.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in inner(*args, **kwds) 77 def inner(*args, **kwds): 78 with self._recreate_cm(): ---> 79 return func(*args, **kwds) 80 return inner 81 [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_and_call_fx_graph(self, tx, rv, root) 955 ) 956 --> 957 compiled_fn = self.call_user_compiler(gm) 958 compiled_fn = disable(compiled_fn) 959 [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in call_user_compiler(self, gm) 1022 unimplemented_with_warning(e, 
self.root_tx.f_code, msg) 1023 except Exception as e: -> 1024 raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 1025 e.__traceback__ 1026 ) from None [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in call_user_compiler(self, gm) 1007 if config.verify_correctness: 1008 compiler_fn = WrapperBackend(compiler_fn) -> 1009 compiled_fn = compiler_fn(gm, self.example_inputs()) 1010 _step_logger()(logging.INFO, f"done compiler function {name}") 1011 assert callable(compiled_fn), "compiler_fn did not return callable" [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in debug_wrapper(gm, example_inputs, **kwargs) 115 raise 116 else: --> 117 compiled_gm = compiler_fn(gm, example_inputs) 118 119 return compiled_gm [/usr/local/lib/python3.10/dist-packages/torch/__init__.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in __call__(self, model_, inputs_) 1566 from torch._inductor.compile_fx import compile_fx 1567 -> 1568 return compile_fx(model_, inputs_, config_patches=self.config) 1569 1570 def get_compiler_config(self): [/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_fx(model_, example_inputs_, inner_compile, config_patches, decompositions) 1148 tracing_context 1149 ), compiled_autograd.disable(): -> 1150 return aot_autograd( 1151 fw_compiler=fw_compiler, 1152 bw_compiler=bw_compiler, [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compiler_fn(gm, example_inputs) 53 # NB: NOT cloned! 
54 with enable_aot_logging(), patch_config: ---> 55 cg = aot_module_simplified(gm, example_inputs, **kwargs) 56 counters["aot_autograd"]["ok"] += 1 57 return disable(cg) [/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions, keep_inference_input_mutations, inference_compiler) 3889 3890 with compiled_autograd.disable(): -> 3891 compiled_fn = create_aot_dispatcher_function( 3892 functional_call, 3893 full_args, [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) [/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in create_aot_dispatcher_function(flat_fn, flat_args, aot_config) 3427 # You can put more passes here 3428 -> 3429 compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 3430 if aot_config.is_export: 3431 [/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in aot_wrapper_dedupe(flat_fn, flat_args, aot_config, compiler_fn, fw_metadata) 2210 2211 if ok: -> 2212 return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2213 2214 # export path: ban duplicate inputs for now, add later if requested. [/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in aot_wrapper_synthetic_base(flat_fn, flat_args, aot_config, fw_metadata, needs_autograd, compiler_fn) 2390 # Happy path: we don't need synthetic bases 2391 if synthetic_base_info is None: -> 2392 return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2393 2394 # export path: ban synthetic bases for now, add later if requested. [/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in aot_dispatch_base(flat_fn, flat_args, aot_config, fw_metadata) 1571 if torch._guards.TracingContext.get(): 1572 torch._guards.TracingContext.get().fw_metadata = fw_metadata -> 1573 compiled_fw = compiler(fw_module, flat_args) 1574 1575 # This boxed_call handling happens inside create_runtime_wrapper as well. 
[/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) [/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in fw_compiler_base(model, example_inputs, is_inference) 1090 } 1091 -> 1092 return inner_compile( 1093 model, 1094 example_inputs, [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_aot.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in debug_wrapper(gm, example_inputs, **kwargs) 78 # Call the compiler_fn - which is either aot_autograd or inductor 79 # with fake inputs ---> 80 inner_compiled_fn = compiler_fn(gm, example_inputs) 81 except Exception as e: 82 # TODO: Failures here are troublesome because no real inputs, [/usr/local/lib/python3.10/dist-packages/torch/_inductor/debug.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in inner(*args, **kwargs) 226 def inner(*args, **kwargs): 227 with DebugContext(): --> 228 return fn(*args, **kwargs) 229 230 return wrap_compiler_debug(inner, compiler_name="inductor") [/usr/lib/python3.10/contextlib.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in inner(*args, **kwds) 77 def inner(*args, **kwds): 78 with self._recreate_cm(): ---> 79 return func(*args, **kwds) 80 return inner 81 [/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in newFunction(*args, **kwargs) 52 @wraps(old_func) 53 def newFunction(*args, **kwargs): ---> 54 return old_func(*args, **kwargs) 55 56 return newFunction [/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_fx_inner(gm, example_inputs, cudagraphs, num_fixed, is_backward, graph_id, cpp_wrapper, aot_mode, is_inference, boxed_forward_device_index, user_visible_outputs, layout_opt) 339 } 340 --> 341 compiled_graph: CompiledFxGraph = fx_codegen_and_compile( 342 *graph_args, **graph_kwargs # type: ignore[arg-type] 343 ) [/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in fx_codegen_and_compile(gm, example_inputs, cudagraphs, num_fixed, is_backward, graph_id, cpp_wrapper, aot_mode, is_inference, user_visible_outputs, layout_opt) 563 else: 564 context.output_strides.append(None) --> 565 compiled_fn = graph.compile_to_fn() 566 567 if graph.disable_cudagraphs: [/usr/local/lib/python3.10/dist-packages/torch/_inductor/graph.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_to_fn(self) 965 return 
AotCodeCache.compile(self, code, cuda=self.cuda) 966 else: --> 967 return self.compile_to_module().call 968 969 def get_output_names(self): [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) [/usr/local/lib/python3.10/dist-packages/torch/_inductor/graph.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in compile_to_module(self) 936 linemap = [(line_no, node.stack_trace) for line_no, node in linemap] 937 key, path = PyCodeCache.write(code) --> 938 mod = PyCodeCache.load_by_key_path(key, path, linemap=linemap) 939 self.cache_key = key 940 self.cache_path = path [/usr/local/lib/python3.10/dist-packages/torch/_inductor/codecache.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in load_by_key_path(cls, key, path, linemap) 1137 mod.__file__ = path 1138 mod.key = key -> 1139 exec(code, mod.__dict__, mod.__dict__) 1140 sys.modules[mod.__name__] = mod 1141 # another thread might set this first [/tmp/torchinductor_root/7n/c7nibse6wgoaypjbdhgrkgsgbfcxqvdo7jque3pr52vtf6vor5yi.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in <module> 7289 7290 -> 7291 async_compile.wait(globals()) 7292 del async_compile 7293 [/usr/local/lib/python3.10/dist-packages/torch/_inductor/codecache.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in wait(self, scope) 1416 pbar.set_postfix_str(key) 1417 if isinstance(result, (Future, TritonFuture)): -> 1418 scope[key] = result.result() 1419 pbar.update(1) 1420 [/usr/local/lib/python3.10/dist-packages/torch/_inductor/codecache.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in result(self) 1275 return self.kernel 1276 # If the worker failed this will throw an exception. -> 1277 self.future.result() 1278 kernel = self.kernel = _load_kernel(self.kernel_name, self.source_code) 1279 latency = time() - t0 [/usr/lib/python3.10/concurrent/futures/_base.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in result(self, timeout) 456 raise CancelledError() 457 elif self._state == FINISHED: --> 458 return self.__get_result() 459 else: 460 raise TimeoutError() [/usr/lib/python3.10/concurrent/futures/_base.py](https://f1ggesi86gr-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230823-060135-RC00_559378898#) in __get_result(self) 401 if self._exception: 402 try: --> 403 raise self._exception 404 finally: 405 # Break a reference cycle with the exception in self._exception BackendCompilerFailed: backend='inductor' raised: AssertionError: libcuda.so cannot found! 
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True ``` </details> ### Minified repro https://colab.research.google.com/drive/1XwD2UpPoi6RFLHA9tcXL7BdbOgkKvQ_7?usp=sharing ### Versions PyTorch version: 2.1.0.dev20230825+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.27.2 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.109+-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 525.105.17 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.20GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 Stepping: 0 BogoMIPS: 4399.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB (1 instance) L1i cache: 32 KiB (1 instance) L2 cache: 256 KiB (1 instance) L3 cache: 55 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0,1 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Versions of relevant libraries: [pip3] numpy==1.23.5 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] torch==2.1.0.dev20230825+cu121 [pip3] torchaudio==2.1.0.dev20230825+cu121 [pip3] torchdata==0.6.1 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.15.2 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] Could not collect cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
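A minimal sketch of the fallback that the error message itself points to, assuming a Linux/Colab runtime where `ctypes.util.find_library` can probe for the driver library; the detection logic is an assumption, not an official fix for the missing `libcuda.so`:
```python
import ctypes.util

import torch
import torch._dynamo

# On Colab the driver library often lives under /usr/lib64-nvidia, which may not be on
# the loader path; if it cannot be located, let torch.compile fall back to eager instead
# of raising BackendCompilerFailed from the inductor backend.
if ctypes.util.find_library("cuda") is None:
    torch._dynamo.config.suppress_errors = True

model = torch.nn.Linear(8, 8).cuda()
compiled_model = torch.compile(model)  # inductor is the default backend
out = compiled_model(torch.randn(4, 8, device="cuda"))
```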
3
1,294
107,957
Improve Error Message in MultiMarginLoss for Inconsistent Target Size
triaged, open source, ciflow/inductor
As suggested in #106251, improve the error message of MultiMarginLoss for an inconsistent target size, using the format below: ```MultiMarginLoss: The size of input tensor ({input size}) must match the size of target tensor ({target size}) at non-singleton dimension {dimension}``` cc @GwiHwan-Go @ezyang @mikaylagawarecki
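A minimal Python-level sketch of the proposed check, assuming a hypothetical helper (`check_multi_margin_shapes`) and an explicit `dim` argument; the real change belongs in the ATen implementation of MultiMarginLoss:
```python
import torch

def check_multi_margin_shapes(input: torch.Tensor, target: torch.Tensor, dim: int = 0) -> None:
    # Raise with the proposed wording when the sizes disagree at the compared dimension.
    if input.size(dim) != target.size(dim):
        raise RuntimeError(
            f"MultiMarginLoss: The size of input tensor ({input.size(dim)}) must match "
            f"the size of target tensor ({target.size(dim)}) at non-singleton dimension {dim}"
        )

# Example: a batch of 4 inputs against 3 targets raises with the message above.
check_multi_margin_shapes(torch.randn(4, 3), torch.tensor([0, 1, 2]))
```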
3
1,295
107,955
PyTorch profile issues summary
triage review, module: regression, oncall: profiler
### ๐Ÿ› Describe the bug Recenly, I planed to profile the whole training process of my recipe. After reading several [official docs](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html?highlight=profile), I'm confident it should be easy. However, a plenty of issues and some [unsatisfactory answer](https://github.com/pytorch/kineto/issues/782#issuecomment-1640960718) make me wonder what the hell is the solution. 1. Distributed view cannot work with PyTorch 2.0 (works in PyTorch 1.11) Like this [issue](https://github.com/pytorch/kineto/issues/782), when DDP is enabled, it doesn't show in Tensorboard as the doc says. But kernels like `ncclKernel_AllReduce_RING_*` actually exist. Switching to use PyTorch <= 1.11 works. 2. Dataloader timing doesn't work in PyTorch 2.0 (works in PyTorch) When using PyTorch 2.0, I cannot get the profile info about data loading, which is a feature regression. ![image](https://github.com/pytorch/pytorch/assets/11533479/c046f1ce-3664-4849-851b-f8703a73a361) In PyTorch 1.11, it works like below ![image](https://github.com/pytorch/pytorch/assets/11533479/91c91b46-610e-4cd3-9e65-e391ec76eb2d) 3. Hang with PyTorch 1.1 if not profile with schedule(repeat=0) without early break Let's change the example code in doc to ``` import torch import torch.nn import torch.optim import torch.profiler import torch.utils.data import torchvision.datasets import torchvision.models import torchvision.transforms as T transform = T.Compose( [T.Resize(224), T.ToTensor(), T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True) device = torch.device("cuda") model = torchvision.models.resnet18().cuda(device) criterion = torch.nn.CrossEntropyLoss().cuda(device) optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9) model.train() def train(data): inputs, labels = data[0].to(device=device), data[1].to(device=device) outputs = model(inputs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() with torch.profiler.profile( schedule=torch.profiler.schedule(wait=100, warmup=100, active=300, repeat=0), <- change for long running on_trace_ready=torch.profiler.tensorboard_trace_handler('./log/resnet18'), record_shapes=True, profile_memory=True, with_stack=True ) as prof: for step, batch_data in enumerate(train_loader): # if step >= (1 + 1 + 3) * 2: <- remove early break # break train(batch_data) prof.step() ``` It hangs after `wait + warmup + active` steps and has 100% CPU usage and memory infinite increasing (memory leak) ![ๅพฎไฟกๆˆชๅ›พ_20230825183430](https://github.com/pytorch/pytorch/assets/11533479/605e78b6-abe7-4c9c-8229-4e8788c0ab64) ![image](https://github.com/pytorch/pytorch/assets/11533479/c4f1c24d-35d9-4f48-9c96-9996ff7355cd) 4. [profiler.export_stacks doesn't return stack trace unless experimental_config is provided](https://github.com/pytorch/pytorch/issues/100253#top) 5. Warning and occasional crash with PyTorch 1.11 ``` [W CPUAllocator.cpp:219] Memory block of unknown size was allocated before the profiling started, profiler results will not include the deallocation event Segmentation fault (core dumped) ``` 6. Hang if CUPTI cannot be find with PyTorch 1.11. Warning in 2.0 of missing CUPTI is not very eye-catching. Welcome everyone to collect more issues. Currenly, profile is completely unusable. 
I hope the PyTorch team can put more effort into avoiding feature regressions and making profiling more stable to use. 7. Verbose warning: https://github.com/pytorch/pytorch/pull/107216 ### Versions Collecting environment information... PyTorch version: 2.0.1+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.27.2 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.5.119 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 GPU 1: NVIDIA GeForce RTX 4090 GPU 2: NVIDIA GeForce RTX 4090 GPU 3: NVIDIA GeForce RTX 4090 Nvidia driver version: 535.54.03 cuDNN version: Probably one of the following: /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 128 On-line CPU(s) list: 0-127 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz CPU family: 6 Model: 106 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 Stepping: 6 CPU max MHz: 3500.0000 CPU min MHz: 800.0000 BogoMIPS: 4600.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 3 MiB (64 instances) L1i cache: 2 MiB (64 instances) L2 cache: 80 MiB (64 instances) L3 cache: 108 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-127 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass 
disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] pytorch-triton==2.1.0+e6216047b8 [pip3] torch==2.0.1+cu118 [pip3] torch-tb-profiler==0.4.1 [pip3] torchaudio==2.0.2+cu118 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] numpy 1.24.3 pypi_0 pypi [conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi [conda] torch 2.0.1+cu118 pypi_0 pypi [conda] torch-tb-profiler 0.4.1 pypi_0 pypi [conda] torchaudio 2.0.2+cu118 pypi_0 pypi [conda] torchtts 0.1 dev_0 <develop> [conda] torchvision 0.15.2+cu118 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
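For item 3, a possible mitigation (not a fix for the hang itself) is to bound the profiled region explicitly and use `repeat=1` so the schedule completes a single cycle before the loop exits. A self-contained sketch with a dummy workload and small, illustrative step counts:
```python
import torch
import torch.profiler

wait, warmup, active = 2, 2, 6  # small values so the demo finishes quickly
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(64, 64).to(device)

with torch.profiler.profile(
    schedule=torch.profiler.schedule(wait=wait, warmup=warmup, active=active, repeat=1),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./log/demo"),
) as prof:
    for step in range(10_000):  # long-running loop, as in the report
        model(torch.randn(32, 64, device=device)).sum().backward()
        prof.step()
        if step >= wait + warmup + active:  # leave once one full cycle has been captured
            break
```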
3
1,296
107,954
add torch.float16 support for xla amp
triaged, module: xla, open source, module: amp (automated mixed precision)
The "bfloat16 only" limitation only applies to Google TPUs, not to other backends such as GPUs and other accelerators. Fixes #107953 cc @bdhirsh @mcarilli @ptrblck @leslie-fang-intel @jgong5
3
1,297
107,953
add torch.float16 support for xla amp
triaged, module: xla, module: bfloat16, module: amp (automated mixed precision)
### ๐Ÿš€ The feature, motivation and pitch https://github.com/pytorch/pytorch/blob/main/torch/amp/autocast_mode.py#L308 The "bfloat16 only" limitation is only applied to google TPU but not other backends like GPU and other accelerators. And XLA itself is kind of a device abstraction, so for the long run, maybe these checks should be moved to lower level. ### Alternatives _No response_ ### Additional context _No response_ cc @bdhirsh @mcarilli @ptrblck @leslie-fang-intel @jgong5
1
1,298
107,950
DISABLED test_redundant_clone_for_layout_convert_cuda (__main__.FreezingCudaTests)
module: rocm, triaged, module: flaky-tests, skipped, module: inductor
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_redundant_clone_for_layout_convert_cuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16199575184). Over the past 72 hours, it has flakily failed in 6 workflow(s). **Debugging instructions (after clicking on the recent samples link):** To find relevant log snippets: 1. Click on the workflow logs linked above 2. Grep for `test_redundant_clone_for_layout_convert_cuda` Test file path: `inductor/test_inductor_freezing.py` cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
2
1,299
107,948
Exporting the operator 'aten::linalg_inv' to ONNX opset version 18 is not supported.
module: onnx, triaged
While executing the code below to export the model to ONNX, ``` crops_flat = torch.rand([2, 3, 256, 256]) new_intrinsic_matrix_flat = torch.rand([2, 3, 3]) print(crops_flat.shape, type(crops_flat)) print(new_intrinsic_matrix_flat.shape, type(new_intrinsic_matrix_flat)) torch.onnx.export(model, (([crops_flat, new_intrinsic_matrix_flat])), "model.onnx", input_names=['crops_flat', 'new_intrinsic_matrix_flat'], output_names=['output'], dynamic_axes={'crops_flat': {0: 'batch_size'}, 'new_intrinsic_matrix_flat': {0: 'batch_size'}}, opset_version=18, verbose=True ) ``` I got this error. ``` ============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 ============= verbose: False, log level: Level.ERROR ======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ======================== ERROR: missing-standard-symbolic-function ========================================= Exporting the operator 'aten::linalg_inv' to ONNX opset version 18 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues. None <Set verbose=True to see more details> Traceback (most recent call last): File "D:\amnt\Quidich_pose_estimation_poc\runbox\release\runbox\standalone_metrabs_torch.py", line 106, in <module> multiperson_model_pt = load_multiperson_model().cuda() File "D:\amnt\Quidich_pose_estimation_poc\runbox\release\runbox\standalone_metrabs_torch.py", line 52, in load_multiperson_model model_pytorch = load_crop_model() File "D:\amnt\Quidich_pose_estimation_poc\runbox\release\runbox\standalone_metrabs_torch.py", line 82, in load_crop_model torch.onnx.export(model, File "C:\Users\harsh\anaconda3\envs\runbox\lib\site-packages\torch\onnx\utils.py", line 506, in export _export( File "C:\Users\harsh\anaconda3\envs\runbox\lib\site-packages\torch\onnx\utils.py", line 1548, in _export graph, params_dict, torch_out = _model_to_graph( File "C:\Users\harsh\anaconda3\envs\runbox\lib\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph graph = _optimize_graph( File "C:\Users\harsh\anaconda3\envs\runbox\lib\site-packages\torch\onnx\utils.py", line 665, in _optimize_graph graph = _C._jit_pass_onnx(graph, operator_export_type) File "C:\Users\harsh\anaconda3\envs\runbox\lib\site-packages\torch\onnx\utils.py", line 1901, in _run_symbolic_function raise errors.UnsupportedOperatorError( torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::linalg_inv' to ONNX opset version 18 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues. Process finished with exit code 1 ``` Can anyone help me with this? ### Versions Here is the env ``` PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Home Single Language GCC version: Could not collect Clang version: Could not collect CMake version: version 3.26.4 Libc version: N/A Python version: 3.10.12 | packaged by Anaconda, Inc. 
| (main, Jul 5 2023, 19:01:18) [MSC v.1916 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22631-SP0 Is CUDA available: True CUDA runtime version: 11.7.64 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti Nvidia driver version: 536.25 cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudnn_ops_train64_8.dll HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=2400 DeviceID=CPU0 Family=205 L2CacheSize=1024 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=2400 Name=Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz ProcessorType=3 Revision= Versions of relevant libraries: [pip3] msgpack-numpy==0.4.8 [pip3] numpy==1.23.5 [pip3] pytorch-lightning==2.0.7 [pip3] pytorch3d==0.7.4 [pip3] torch==2.0.1+cu117 [pip3] torchaudio==2.0.2+cu117 [pip3] torchmetrics==0.11.4 [pip3] torchvision==0.15.2+cu117 [conda] msgpack-numpy 0.4.8 pypi_0 pypi [conda] numpy 1.23.5 pypi_0 pypi [conda] pytorch-lightning 2.0.7 pypi_0 pypi [conda] pytorch3d 0.7.4 pypi_0 pypi [conda] torch 2.0.1+cu117 pypi_0 pypi [conda] torchaudio 2.0.2+cu117 pypi_0 pypi [conda] torchmetrics 0.11.4 pypi_0 pypi [conda] torchvision 0.15.2+cu117 pypi_0 pypi ```
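One commonly suggested workaround, sketched below, is to register a custom symbolic that maps `aten::linalg_inv` to the ONNX Runtime contrib op `com.microsoft::Inverse`; this assumes the exported model will be run with ONNX Runtime, which implements that contrib op, and is not an official export path:
```python
import torch
from torch.onnx import register_custom_op_symbolic

def linalg_inv_symbolic(g, self):
    # Emit the ONNX Runtime contrib op; "com.microsoft" is a custom domain, not standard ONNX.
    return g.op("com.microsoft::Inverse", self)

register_custom_op_symbolic("aten::linalg_inv", linalg_inv_symbolic, opset_version=18)

# After registration, the torch.onnx.export(...) call from the snippet above finds a
# symbolic for aten::linalg_inv instead of raising UnsupportedOperatorError.
```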
19
1,300
107,946
Remove parameter `self` in `typeConvertIndices`
triaged, module: advanced indexing, better-engineering
### ๐Ÿš€ The feature, motivation and pitch In the file `aten/src/ATen/TensorIndexing.h`, there's a function `typeConvertIndices`: ```c++ static inline c10::List<c10::optional<Tensor>> typeConvertIndices( const Tensor& /*self*/, std::vector<Tensor>&& indices) { c10::List<c10::optional<Tensor>> converted_inds; converted_inds.reserve(indices.size()); for (const auto& i : indices) { converted_inds.push_back(std::move(i)); } return converted_inds; } ``` This function is later invoked by `dispatch_index_put_`: ```c++ static inline Tensor dispatch_index_put_( Tensor& self, std::vector<Tensor>&& indices, const Tensor& value) { return self.index_put_( impl::typeConvertIndices(self, std::move(indices)), value); } ``` Upon inspection, I'm unsure about the necessity of passing the `self` parameter to `typeConvertIndices` as it seems to be unused within the function. Is it possible to simplify the function to: ```c++ static inline c10::List<c10::optional<Tensor>> typeConvertIndices(std::vector<Tensor>&& indices) { c10::List<c10::optional<Tensor>> converted_inds; converted_inds.reserve(indices.size()); for (const auto& i : indices) { converted_inds.push_back(std::move(i)); } return converted_inds; } ``` by removing the `self` parameter? ### Alternatives _No response_ ### Additional context _No response_
3