Serial Number (int64) | Issue Number (int64) | Title (string) | Labels (string) | Body (string) | Comments (int64) |
---|---|---|---|---|---|
901 | 109,420 |
Supporting Block_Ptrs in inductor code gen
|
triaged, oncall: pt2, module: inductor
|
### Summary
More of a curiosity and a tracking issue for me or someone else to pick up.
Triton has introduced block pointers: https://triton-lang.org/main/getting-started/tutorials/08-experimental-block-pointer.html
Personally I think these are easier to work with and reason about than raw strides. I could be missing something fundamental to Inductor's loop-level IR, but it would be interesting to see what would be needed to support their usage in, for example:
https://github.com/pytorch/pytorch/blob/main/torch/_inductor/kernel/mm.py#L31-L91
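For reference, a minimal sketch (adapted from the linked tutorial, not Inductor-generated code; kernel and launch parameters here are purely illustrative) of what a block-pointer-based tile load/store looks like:
```python
import torch
import triton
import triton.language as tl

@triton.jit
def copy_tile_kernel(in_ptr, out_ptr, M, N, stride_m, stride_n,
                     BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    # A block pointer describes a (BLOCK_M, BLOCK_N) tile of a logically 2D (M, N) tensor;
    # shape, strides, and tile offsets live in one object instead of hand-built index math.
    in_block = tl.make_block_ptr(base=in_ptr, shape=(M, N), strides=(stride_m, stride_n),
                                 offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
                                 block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    out_block = tl.make_block_ptr(base=out_ptr, shape=(M, N), strides=(stride_m, stride_n),
                                  offsets=(pid_m * BLOCK_M, pid_n * BLOCK_N),
                                  block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    tile = tl.load(in_block, boundary_check=(0, 1))
    tl.store(out_block, tile, boundary_check=(0, 1))

x = torch.randn(512, 384, device="cuda")
y = torch.empty_like(x)
grid = (triton.cdiv(512, 64), triton.cdiv(384, 64))
copy_tile_kernel[grid](x, y, 512, 384, x.stride(0), x.stride(1), BLOCK_M=64, BLOCK_N=64)
```
The appeal is that boundary handling and strided addressing are expressed declaratively on the block pointer rather than via explicit offset arrays and masks.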
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 10 |
902 | 109,419 |
Move eval_frame global variables into module state
|
triaged, open source, module: dynamo
|
Partially addresses https://github.com/pytorch/pytorch/issues/108942
Updated PR from pytorch repo instead of my fork. This was previously https://github.com/pytorch/pytorch/pull/108943
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
903 | 109,413 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
904 | 109,412 |
[export] Support tracing constant attribute mutations
|
module: dynamo, ciflow/inductor
|
This adds support for tracing (specifically when exporting) code that contains mutations on constant (int, str, bool) attributes. Basically, I just set the object's attribute to be the new constant. Since they're constants, they shouldn't result in any operations appearing in the graph. I'm unsure if this is the proper way to add support -- any suggestion is greatly appreciated!
Some datapoints on why we need this fix:
* 2 of the HF models (GPT2ForSequenceClassification, LayoutLMForSequenceClassification) run into a tracing failure when they're setting some config to be a string value ([example code](https://github.com/huggingface/transformers/blob/0a55d9f7376f72ad3ff296d4249840021b03bcc4/src/transformers/models/longformer/modeling_longformer.py#L1953-L1972)).
* An internal model contains code where the user prints to stdout based on a boolean attribute `self._print_once`, and then sets `self._print_once = False` inside the forward function. This also fails when tracing (see the sketch below).
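A minimal sketch of the kind of code in question (illustrative only, not taken from this PR's tests; it assumes the `torch.export.export` entry point):
```python
import torch

class Once(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._flag = True  # constant (bool) attribute

    def forward(self, x):
        if self._flag:
            self._flag = False  # mutation of a constant attribute; should not appear in the graph
            return x + 1
        return x - 1

# Export traces the taken branch; with this change the constant mutation is absorbed
# instead of failing the trace.
ep = torch.export.export(Once(), (torch.randn(2),))
print(ep.graph)
```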
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 8 |
905 | 109,407 |
Revert D49284640: Multisect successfully blamed "D49284640: [inductor][Optimus]Improve logging for group batch fusion" for test or build failures
|
fb-exported, topic: not user facing, module: inductor, ciflow/inductor
|
Summary:
This diff is reverting D49284640
D49284640: [inductor][Optimus]Improve logging for group batch fusion by jackiexu1992 has been identified to be causing the following test or build failures:
Tests affected:
- [aps_models/examples/dlrm/tests:deterministic_ne_test - test_determinitic_ne (aps_models.examples.dlrm.tests.deterministic_ne_test.TestDLRMDeterministicNE)](https://www.internalfb.com/intern/test/281475072657704/)
Here's the Multisect link:
https://www.internalfb.com/multisect/3056925
Here are the tasks that are relevant to this breakage:
We're generating a revert to back out the changes in this diff, please note the backout may land if someone accepts it.
If you believe this diff has been generated in error you may Commandeer and Abandon it.
Test Plan: NA
Reviewed By: mengluy
Differential Revision: D49338482
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 4 |
906 | 109,403 |
timeout for send() / recv()
|
release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109403
| 1 |
907 | 109,401 |
Interleaved isend and irecv causes hang
|
oncall: distributed, triaged
|
There seems to be a bug with `isend()` and `irecv()` which prevents asynchronicity when the calls are interleaved. In this case we are calling:
rank 0: sync send() [completed] -> Async recv() -> Async send() [completed] -> Async recv()
rank 1: sync recv() [completed] -> Async recv() [completed] -> Async send() -> Async send()
Minimal example:
```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import os
import datetime

def setup_dist(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    torch.cuda.set_device(rank)
    dist.init_process_group("nccl", rank=rank, world_size=world_size, timeout=datetime.timedelta(seconds=10))

def main(rank, world_size):
    setup_dist(rank, world_size)
    fwd_handles = []
    bwd_handles = []

    if rank == 0:
        dist.isend(torch.randn([1]).to("cuda"), dst=1).wait()
        bwd_handles.append(dist.irecv(torch.randn([1]).to("cuda"), src=1))
    elif rank == 1:
        dist.irecv(torch.randn([1]).to("cuda"), src=0).wait()

    if rank == 0:
        fwd_handles.append(dist.isend(torch.randn([1]).to("cuda"), dst=1))
        bwd_handles.append(dist.irecv(torch.randn([1]).to("cuda"), src=1))
    elif rank == 1:
        fwd_handles.append(dist.irecv(torch.randn([1]).to("cuda"), src=0))
        bwd_handles.append(dist.isend(torch.randn([1]).to("cuda"), dst=0))
        bwd_handles.append(dist.isend(torch.randn([1]).to("cuda"), dst=0))

    print(f"Rank {rank} waiting for {fwd_handles=}, {bwd_handles=}")
    for handle in fwd_handles:
        handle.wait()
    print(f"Rank {rank} done waiting for fwd_handles, now running {bwd_handles=}")
    for handle in bwd_handles:
        handle.wait()

if __name__ == "__main__":
    n_gpus = 2
    world_size = n_gpus
    mp.spawn(main, args=(world_size,), nprocs=world_size, join=True)
```
output
```
Rank 1 waiting for fwd_handles=[<torch.distributed.distributed_c10d.Work object at 0x7f672134b9b0>], bwd_handles=[<torch.distributed.distributed_c10d.Work object at 0x7f6720f0d330>, <torch.distributed.distributed_c10d.Work object at 0x7f67210ea8b0>]
Rank 1 done waiting for fwd_handles, now running bwd_handles=[<torch.distributed.distributed_c10d.Work object at 0x7f6720f0d330>, <torch.distributed.distributed_c10d.Work object at 0x7f67210ea8b0>]
Rank 0 waiting for fwd_handles=[<torch.distributed.distributed_c10d.Work object at 0x7faec809e5b0>], bwd_handles=[<torch.distributed.distributed_c10d.Work object at 0x7faec809d2b0>, <torch.distributed.distributed_c10d.Work object at 0x7faec8036e70>]
Rank 0 done waiting for fwd_handles, now running bwd_handles=[<torch.distributed.distributed_c10d.Work object at 0x7faec809d2b0>, <torch.distributed.distributed_c10d.Work object at 0x7faec8036e70>]
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu @penguinwu
| 2 |
908 | 109,392 |
[FSDP] Implement additional check for turn on 2D TP + FSDP extension
|
triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
Currently, since we only support TP + FSDP, we automatically turn on the extension when we detect that FSDP's DeviceMesh has a parent mesh. This check is sufficient for now.
However, an additional check will be required once we support other types of composition, for example FSDP + PiPPy. Creating an issue to keep track of this.
### Alternatives
_No response_
### Additional context
_No response_
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
| 0 |
909 | 109,386 |
Make Fx Generating Incorrect Graph For GPTQ model
|
triaged, module: fx, module: ProxyTensor
|
### 🐛 Describe the bug
Hi, I'm trying to generate the fx graph for the [Falcon-7b-GPTQ](https://huggingface.co/TheBloke/falcon-7b-instruct-GPTQ) model, but the generated graph is not correct. If we run the fx graph with the inputs, the results from the fx graph and from running the model directly are different. Also, running the same fx graph again produces different results, and after 2-3 executions the fx graph produces NaNs, while eager PyTorch execution gives the same results every time.
To reproduce the issue, run the following code:
```python
from auto_gptq import AutoGPTQForCausalLM
import torch
from torch.fx.experimental.proxy_tensor import make_fx

torch.manual_seed(0)

model_name_or_path = "TheBloke/falcon-7b-instruct-GPTQ"
model_basename = "model"
use_triton = False

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           model_basename=model_basename,
                                           use_safetensors=True,
                                           trust_remote_code=True,
                                           # device_map={"cpu":""},
                                           device="cuda:0",
                                           # device="cpu",
                                           use_triton=use_triton,
                                           quantize_config=None)

compilation_input_ids = torch.randint(
    low=1, high=10000, size=(1, 100)
).to(device="cuda")

fx_g = make_fx(model)(compilation_input_ids)
print("PyTorch Result: ", model.forward(compilation_input_ids)["logits"])
print("Fx graph Result: ", fx_g(compilation_input_ids)["logits"])
```
To run this code you would also need to install:
```
GITHUB_ACTIONS=true pip install auto-gptq
pip install einops
```
If you see an error like this:
```
File "/home/vivek/.cache/huggingface/modules/transformers_modules/TheBloke/falcon-7b-instruct-GPTQ/d6ce55f4e840bbbd596d1a65f64888f0a3c3326b/modelling_RW.py", line 279, in forward
attn_output = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.
```
Then change `value_layer_` to `value_layer_.to(torch.float32)` here: https://huggingface.co/TheBloke/falcon-7b-instruct-GPTQ/blob/main/modelling_RW.py#L280. You will find this file locally at `~/.cache/huggingface/modules/transformers_modules/TheBloke/falcon-7b-instruct-GPTQ/d6ce55f4e840bbbd596d1a65f64888f0a3c3326b/modelling_RW.py`.
After these changes and installation, you should be able to reproduce the issue.
### Versions
```
Collecting environment information...
PyTorch version: 2.2.0.dev20230913+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-1027-gcp-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0b1
[pip3] pytorch-lightning==2.1.0rc0
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0.dev20230913+cu121
[pip3] torchmetrics==1.0.3
[pip3] torchopt==0.7.2
[pip3] torchvision==0.17.0.dev20230913+cu121
[conda] Could not collect
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 4 |
910 | 109,385 |
FSDP crashes when submodule calls method that isn't `forward()`
|
triaged, module: fsdp
|
### 🐛 Describe the bug
I am getting various runtime errors from an FSDP module that wraps multiple child modules when, in the forward pass, we invoke a submodule's non-forward method. The autowrap policy wraps each submodule separately. The minimal example below should make this clearer.
Run with (at least 2 GPUs): `torchrun --standalone --nnodes 1 --nproc-per-node 2 <script.py>`
```python
from functools import partial

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy
from torch.distributed.fsdp.wrap import _module_wrap_policy

# This code is a "dummy" VisionTransformer that just implements the offending partial logic from a default pretrained ViT.
# Our actual code uses a model loaded from `timm` (PyTorch Image Models); full Gist is linked below.
class VisionTransformer(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.embed_dim = 1024

        # Mimics the "Patch Embedding" for a ViT-Large w/ Patch Size = 14
        self.patch_proj = nn.Conv2d(in_channels=3, out_channels=self.embed_dim, kernel_size=14, stride=14, bias=True)

    def forward_features(self, imgs: torch.Tensor) -> torch.Tensor:
        patches = self.patch_proj(imgs)  # [bsz, embed_dim, 16 = (224 / 14), 16 = (224 / 14)]
        patch_embeddings = patches.flatten(2).transpose(1, 2)  # [bsz, 256 = 16 * 16, 1024]
        return patch_embeddings

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        return self.forward_features(imgs).sum(dim=1)

class LinearProjector(nn.Module):
    def __init__(self, in_dim: int, out_dim: int) -> None:
        super().__init__()
        self.projector = nn.Linear(in_dim, out_dim)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        return self.projector(patch_embeddings)

# === Actual Network ===
class Net(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.vit, self.projector = VisionTransformer(), LinearProjector(in_dim=1024, out_dim=256)

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        patch_embeddings = self.vit.forward_features(imgs)  # [ERRORS HERE]
        return self.projector(patch_embeddings)

# === Main ===
def bug_fsdp() -> None:
    dist.init_process_group(backend="nccl", init_method="env://")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    # Initialize Network
    net = Net()

    # FSDP w/ custom module-based autowrap policy
    auto_wrap_policy = partial(_module_wrap_policy, module_classes={VisionTransformer, LinearProjector})
    net = FSDP(
        net,
        auto_wrap_policy=auto_wrap_policy,
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        device_id=torch.cuda.current_device(),
        limit_all_gathers=True,
    )

    # Run a forward pass w/ dummy input (bsz = 4)
    dummy_input = torch.randn(4, 3, 224, 224)
    net(dummy_input)  # CRASH!

if __name__ == "__main__":
    bug_fsdp()
```
This results in the following error message:
```
Traceback (most recent call last):
File "/mnt/fsx/skaramcheti/code/prismatic-vlms/x-reference/bugs-fixes/bug_fsdp.py", line 74, in <module>
bug_fsdp()
File "/mnt/fsx/skaramcheti/code/prismatic-vlms/x-reference/bugs-fixes/bug_fsdp.py", line 70, in bug_fsdp
net(dummy_input) # CRASH!
File "/home/ubuntu/mambaforge/envs/fsdp-debug/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/mambaforge/envs/fsdp-debug/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 748, in forward
output = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/ubuntu/mambaforge/envs/fsdp-debug/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/fsx/skaramcheti/code/prismatic-vlms/x-reference/bugs-fixes/bug_fsdp.py", line 46, in forward
patch_embeddings = self.vit.forward_features(imgs) # [ERRORS HERE]
File "/mnt/fsx/skaramcheti/code/prismatic-vlms/x-reference/bugs-fixes/bug_fsdp.py", line 22, in forward_features
patches = self.patch_proj(imgs) # [bsz, embed_dim, 16 = (224 / 14), 16 = (224 / 14)]
File "/home/ubuntu/mambaforge/envs/fsdp-debug/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/mambaforge/envs/fsdp-debug/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/ubuntu/mambaforge/envs/fsdp-debug/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
---
Further context: I'm working on a project where we take the patch features from a (frozen) Vision Transformer backbone and transform them into a different latent space where they're used to decode other modalities (e.g., depth).
This gist provides an annotated example that reflects our setup a bit better: https://gist.github.com/siddk/db3e8808bed2a9cb90ae62b5338de68d
---
**Some other things I tried (to help speed along debugging) -- all of this is in the linked Gist**:
- **Setting `use_orig_params=True`** results in a different error at the same Conv2D call (`RuntimeError: weight should have at least three dimensions`)
- **Freezing the ViT** (as required in our original setup) results in yet another error at the Conv2D call (`RuntimeError: GET was unable to find an engine to execute this computation`)
Interestingly, if we monkey patch the `vit` instance such that `vit.forward = vit.forward_features` and call `self.vit(imgs)` in `Net.forward()` -- **all of these bugs disappear!**
### Versions
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.25.2 py310ha4c1d20_0 conda-forge
[conda] pytorch 2.0.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu118 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu118 pytorch
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
| 2 |
911 | 109,383 |
cuda rng state for 2.0.1 cannot be used for 2.1.0
|
module: cuda, triaged, module: random
|
### 🐛 Describe the bug
Running `torch.cuda.random.get_rng_state().size()` in 2.0.1 and 2.1.0 (nightly) gives different results (816 vs 16). This breaks backwards compatibility of checkpoint loading/saving between the two versions because [rng state can't be set](https://github.com/pytorch/pytorch/blob/v2.0.1/torch/cuda/random.py#L43) when the expected tensor size is different.
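A quick way to see the mismatch (a hedged sketch; the exact sizes are those quoted above):
```python
import torch

# On 2.0.1 this reportedly prints torch.Size([816]); on 2.1.0 it prints torch.Size([16]).
state = torch.cuda.random.get_rng_state()
print(state.size())

# An RNG state checkpointed under the other version then cannot be restored, because
# set_rng_state expects a tensor of the size produced by the running version:
# torch.cuda.random.set_rng_state(old_state)  # fails when the sizes differ
```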
### Versions
pytorch 2.0.1 and 2.1.0 (8/27 nightly and rc4)
cc @ptrblck @pbelevich
| 4 |
912 | 109,381 |
DISABLED test_ddp_activation_checkpointing (__main__.TestMultiProc)
|
module: rocm, triaged, skipped
|
Platforms: rocm
First known bad: https://hud.pytorch.org/pytorch/pytorch/commit/34ddf08f2737025fb1447070fae3087889b2e8bb
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/distributed%2Ftest_dynamo_distributed.py%3A%3ATestMultiProc%3A%3Atest_ddp_activation_checkpointing)).
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
913 | 109,379 |
DISABLED test_compute_local_shape_and_global_offset_1D (__main__.UtilTest)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compute_local_shape_and_global_offset_1D&suite=UtilTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16828471140).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compute_local_shape_and_global_offset_1D`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/_tensor/test_utils.py`
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 3 |
914 | 109,374 |
TCPStore() RuntimeError: unmatched '}' in format string
|
oncall: distributed
|
### 🐛 Describe the bug
When I was running: `python train.py --exp-dir=./logs --exp-name=first_test --modality=video --mode=offline --root-dir=./lrs3_trained --sp-model-path=./spm_unigram_1023.model`
I got the following errors:
```
Traceback (most recent call last):
File "train.py", line 147, in <module>
cli_main()
File "train.py", line 141, in cli_main
trainer.fit(model, data_module)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 582, in fit
call._call_and_handle_interrupt(
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\trainer\call.py", line 36, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\strategies\launchers\subprocess_script.py", line 90, in launch
return function(*args, **kwargs)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 624, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 997, in _run
self.strategy.setup_environment()
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\strategies\ddp.py", line 153, in setup_environment
self.setup_distributed()
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\pytorch_lightning\strategies\ddp.py", line 204, in setup_distributed
_init_dist_connection(self.cluster_environment, self._process_group_backend, timeout=self._timeout)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\lightning_lite\utilities\distributed.py", line 237, in _init_dist_connection
torch.distributed.init_process_group(torch_distributed_backend, rank=global_rank, world_size=world_size, **kwargs)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\torch\distributed\c10d_logger.py", line 82, in wrapper
func_return = func(*args, **kwargs)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\torch\distributed\distributed_c10d.py", line 1142, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\torch\distributed\rendezvous.py", line 241, in _env_rendezvous_handler
store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
File "C:\ProjectForCoding\tftorchvenv\lib\site-packages\torch\distributed\rendezvous.py", line 172, in _create_c10d_store
return TCPStore(
RuntimeError: unmatched '}' in format string
```
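For context, a hedged minimal sketch of the call path that ends in `TCPStore` (a single-process `gloo` group via the `env://` rendezvous rather than the original Lightning/DDP setup; on an unaffected build this simply initializes and tears down a process group):
```python
import os
import torch.distributed as dist

# Hypothetical standalone sketch: init_process_group with env:// rendezvous goes through
# _env_rendezvous_handler -> _create_c10d_store -> TCPStore, the frame where the
# "unmatched '}' in format string" error is raised in the traceback above.
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
dist.init_process_group(backend="gloo", rank=0, world_size=1)
dist.destroy_process_group()
```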
### Versions
PyTorch version: 2.1.0.dev20230718+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
OS: Microsoft Windows 11 pro
GCC version: (MinGW.org GCC-6.3.0-1) 6.3.0
CMake version: version 3.27.5
Python version: 3.8.10
Is CUDA available: True
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
cuDNN version: 8.1.0
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==2.0.9
[pip3] torch==2.1.0.dev20230718+cu121
[pip3] torchaudio==2.1.0.dev20230719+cu121
[pip3] torchelastic==0.2.2
[pip3] torchmetrics==0.7.0
[pip3] torchvision==0.16.0.dev20230718+cu121
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 1 |
915 | 109,365 |
[PT2.1/ PT2.2(Nightly)][torch.compile][dynamic shape enabled]: TorchDynamo failed with Dynamic shape gives runtime error in 'pow' operation.
|
triaged, module: dynamo
|
### 🐛 Describe the bug
When the dynamic flag is enabled in torch.compile and the 'pow' operation is executed on the CPU, TorchDynamo encounters a runtime error. With the dynamic flag disabled in torch.compile, the same operation generates and executes the FX graph on the CPU correctly.
Also tried the latest nightly version (2.2.0.dev20230914+cpu).
Code block:
```python
import torch
import torch.nn as nn
import os
from typing import List

def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    print(gm.code)
    print(gm.graph)
    gm.graph.print_tabular()
    return gm.forward

class OpWrapperModule(torch.nn.Module):
    def __init__(self, op):
        super().__init__()
        self.op = op

    def forward(self, inputs):
        result = self.op(**inputs)
        return result

def test_dynamic_shape_pow(is_dyn):
    print("Starting the test.................")
    sizes = [[340695, 80],
             [340658, 80],
             [340688, 80],
             [340658, 80],
             [340663, 80]]
    exponent = 2
    dev = torch.device("cpu")
    dtype = torch.float32
    op = torch.pow
    input_tensors = []
    for s in sizes:
        input_tensors.append(torch.rand(s, dtype=dtype))
    model = OpWrapperModule(op)
    model = torch.compile(model, backend=my_compiler, dynamic=is_dyn)
    model.to(dev)
    for s in input_tensors:
        s = s.to(dev)
        inputs = {"input": s, "exponent": exponent}
        result = model(inputs)

dyn_shape = os.getenv("DYNAMIC_SHAPE")
if dyn_shape is not None and dyn_shape == "0":
    test_dynamic_shape_pow(False)
else:
    test_dynamic_shape_pow(True)
```
**To execute:**
- Add the above code to a file `test_pow.py`
- To execute with dynamic shapes: `DYNAMIC_SHAPE=1 python test_pow.py`
- To check the same with static shapes: `DYNAMIC_SHAPE=0 python test_pow.py`
**Traceback for the issue:**
```
Traceback (most recent call last):
File "test_pow.py", line 48, in <module>
test_dynamic_shape_pow(True)
File "test_pow.py", line 42, in test_dynamic_shape_pow
result = model(inputs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 347, in _fn
return fn(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 509, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 633, in _convert_frame
result = inner_convert(frame, cache_entry, hooks, frame_state)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 140, in _fn
return fn(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 380, in _convert_frame_assert
return _compile(
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 561, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 197, in time_wrapper
r = func(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 483, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 449, in transform
tracer.run()
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2089, in run
super().run()
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 739, in run
and self.step()
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 702, in step
getattr(self, inst.opname)(inst)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
return inner_fn(self, inst)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1170, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/variables/torch.py", line 731, in call_function
tensor_variable = wrap_fx_proxy(
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 1224, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 1311, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1395, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1356, in get_fake_value
return wrap_fake_exception(
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 931, in wrap_fake_exception
return fn()
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1357, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1436, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1417, in run_node
return node.target(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1299, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1505, in dispatch
return decomposition_table[func](*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_prims_common/wrappers.py", line 229, in _fn
result = fn(*args, **kwargs)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_prims_common/wrappers.py", line 132, in _fn
result = fn(**bound.arguments)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_refs/__init__.py", line 1029, in _ref
output = prim(a, b)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/_refs/__init__.py", line 1194, in pow
elif b == 2.0:
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/__init__.py", line 352, in __bool__
return self.node.bool_()
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1003, in bool_
return self.guard_bool("", 0)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/fx/experimental/symbolic_shapes.py", line 985, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/home/vshekhawat/venv_Exp/lib/python3.8/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3554, in evaluate_expr
assert orig_expr == hint, f"{orig_expr} != {hint}"
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method pow of type object at 0x7f1c76ede6a0>(*(), **{'input': FakeTensor(..., size=(s0, s1)), 'exponent': s2}):
False != True
```
### Versions
Collecting environment information...
PyTorch version: 2.2.0.dev20230914+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.21.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 12
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2194.843
BogoMIPS: 4389.68
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 429 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.2.0.dev20230914+cpu
[pip3] torchaudio==2.2.0.dev20230914+cpu
[pip3] torchvision==0.17.0.dev20230914+cpu
[conda] Could not collect
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
916 | 109,362 |
DISABLED test_nondeterministic_alert_median_cuda_float64 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_median_cuda_float64&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16813760825).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_median_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
917 | 109,355 |
Update Android to R21e
|
triaged, open source, ciflow/trunk, topic: not user facing
|
R19c is too old; R21e is the LTS version.
| 16 |
918 | 109,354 |
Minimize protobuf dependency
|
open source, ciflow/binaries, topic: not user facing
| null | 5 |
919 | 109,345 |
Adding T4 GPUs to inductor nightly benchmarks
|
open source, topic: not user facing
|
1. Forks the nightly benchmarks workflow to support multiple hardware types.
2. Adds g4dn instances with T4 Turing GPUs to the list of tested devices (Inference only)
| 4 |
920 | 109,341 |
DISABLED test_nondeterministic_alert_kthvalue_cuda_float64 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_kthvalue_cuda_float64&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16805573167).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 12 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_kthvalue_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
921 | 109,339 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
922 | 109,321 |
DISABLED test_backend_match_guard_multi_threads (__main__.MiscTests)
|
triaged, module: flaky-tests, module: macos, skipped, oncall: pt2, module: dynamo
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_backend_match_guard_multi_threads&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16801698712).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 12 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_backend_match_guard_multi_threads`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @malfet @albanD @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 6 |
923 | 109,316 |
Add decomp rule for div and trunc
|
fb-exported, ciflow/inductor
|
Differential Revision: D49290277
| 4 |
924 | 109,311 |
Add a multiprocess CI job to torchbench dynamo runner
|
release notes: releng, module: dynamo, ciflow/inductor
|
Currently no torchbench CI job runs `python benchmarks/dynamo/torchbench.py` with the `--multiprocess` flag, so we don't run CI tests for distributed torchbench models, e.g. simple_gpt (https://github.com/pytorch/benchmark/pull/1867).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
925 | 109,310 |
DISABLED test_nondeterministic_alert_histc_cuda (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_histc_cuda&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16792643971).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_histc_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
926 | 109,309 |
Support the `ExitStack` context manager (or a simplified version)
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
I'm trying to compile a piece of code with `fullgraph=True` that uses multiple context managers.
I saw that https://github.com/pytorch/pytorch/pull/98725 should support this sort of behaviour (cc: @yanboliang as author)
https://github.com/Lightning-AI/lightning/pull/18557 includes some real pieces of code that show this pattern
### Error logs
_No response_
### Minified repro
```python
from contextlib import ExitStack

class A:
    def __enter__(self): pass
    def __exit__(self, exc_type, exc_val, exc_tb): pass

class B:
    def __enter__(self): pass
    def __exit__(self, exc_type, exc_val, exc_tb): pass

def init_context():
    stack = ExitStack()
    stack.enter_context(A())
    stack.enter_context(B())
    return stack

def fn():
    with init_context():
        return 1 + 2

import torch

fn = torch.compile(fn, fullgraph=True)
out = fn()
```
However, this fails with
```python
Traceback (most recent call last):
File "/home/carmocca/git/lightning/kk2.py", line 25, in <module>
out = fn()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 564, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 486, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 453, in transform
tracer.run()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 594, in call_function
return self.obj.call_method(tx, self.name, args, kwargs).add_options(self)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 329, in call_method
raise unimplemented(f"call_method {self} {name} {args} {kwargs}")
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 172, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_method GenericContextWrappingVariable() enter_context [GenericContextWrappingVariable()] {}
from user code:
File "/home/carmocca/git/lightning/kk2.py", line 20, in fn
with init_context():
File "/home/carmocca/git/lightning/kk2.py", line 14, in init_context
stack.enter_context(A())
```
I also tried my own simplified implementation:
```python
class ExitStack:
    def __init__(self, context_managers) -> None:
        self._context_managers = context_managers

    def __enter__(self):
        for ctx_manager in self._context_managers:
            ctx_manager.__enter__()

    def __exit__(self, exc_type, exc_value, traceback):
        for ctx_manager in reversed(self._context_managers):
            ctx_manager.__exit__(exc_type, exc_value, traceback)

class A:
    def __enter__(self): pass
    def __exit__(self, exc_type, exc_val, exc_tb): pass

class B:
    def __enter__(self): pass
    def __exit__(self, exc_type, exc_val, exc_tb): pass

def init_context():
    stack = ExitStack([A(), B()])
    return stack

def fn():
    with init_context():
        return 1 + 2

import torch

fn = torch.compile(fn, fullgraph=True)
out = fn()
```
Which fails with
```python
Traceback (most recent call last):
File "/home/carmocca/git/lightning/kk2.py", line 33, in <module>
out = fn()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 564, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 486, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 453, in transform
tracer.run()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1003, in SETUP_WITH
self.setup_or_before_with(inst)
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1755, in setup_or_before_with
unimplemented(f"{inst.opname} {ctx}")
File "/home/carmocca/git/venv/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 172, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: SETUP_WITH UserDefinedObjectVariable(ExitStack)
from user code:
File "/home/carmocca/git/lightning/kk2.py", line 28, in fn
with init_context():
```
The test suite includes tests for similar classes (https://github.com/yanboliang/pytorch/blob/main/test/dynamo/test_ctx_manager.py), so I would expect that supporting this basic version is feasible.
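As a point of comparison, a hedged sketch of the workaround shape that leans only on the generic context manager support from #98725: entering the user-defined managers directly instead of through an `ExitStack`. Whether this compiles cleanly under `fullgraph=True` is an assumption on my part, not something verified in this issue:
```python
import torch

class A:
    def __enter__(self): pass
    def __exit__(self, exc_type, exc_val, exc_tb): pass

class B:
    def __enter__(self): pass
    def __exit__(self, exc_type, exc_val, exc_tb): pass

def fn():
    # Nested generic context managers, without ExitStack's enter_context indirection.
    with A(), B():
        return 1 + 2

compiled = torch.compile(fn, fullgraph=True)
out = compiled()
```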
### Versions
2.1.0.rc0
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
927 | 109,303 |
Create static analysis tool to improve ONNX export success
|
module: onnx, triaged, onnx-triaged, release notes: onnx
|
### 🚀 The feature, motivation and pitch
Although the ONNX exporter strives to export the model without code changes on the user side, some limitations might arise and changes to the user script can become inevitable.
One way to ease the changes on the user side is to create a static analysis tool that identifies unsupported patterns and recommends code changes, possibly even making the changes for the user.
Integration with IDEs, such as Visual Studio Code, is a plus.
Supporting `torch.onnx.dynamo_export` is a must, but supporting `torch.onnx.export` would be a great plus.
Tools to consider during planning: https://github.com/pytorch/test-infra/tree/main/tools/torchfix and https://libcst.readthedocs.io/en/latest/
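As a rough illustration of the direction (a minimal sketch using the stdlib `ast` module; the set of "unsupported" call names below is purely made up for demonstration):
```python
import ast

ILLUSTRATIVE_UNSUPPORTED_CALLS = {"item", "tolist", "numpy"}  # placeholder patterns only

def find_suspect_calls(source: str, filename: str = "<model.py>"):
    """Flag method calls that a real checker might mark as export-unfriendly."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr in ILLUSTRATIVE_UNSUPPORTED_CALLS:
                findings.append((filename, node.lineno, node.func.attr))
    return findings

for fname, line, attr in find_suspect_calls("x = t.item()\ny = t.tolist()"):
    print(f"{fname}:{line}: call to .{attr}() may need changes before export")
```
A real tool would likely build on libcst (linked above) so it can also apply codemods, but the core idea of matching known-problematic patterns is the same.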
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
928 | 109,294 |
Attribute 'kernel_shape' is expected to have field 'ints' when exporting a module with `List[Tensor]` inputs/outputs
|
module: onnx, triaged
|
### 🐛 Describe the bug
Hi folks 👋
I have a model that performs much better when I keep some of its states during inference.
To do so, I have to accept a `List[Tensor]` as an argument and return the new state in the output as well. I wasn't able to use `Tuple[Tensor, ...]` because the number of Tensors in the state might vary according to the model's configuration.
This is the error I have:
```sh
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/Users/.../repos/vendor/onnx-issue/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1636, in _export
_C._check_onnx_proto(proto)
RuntimeError: Attribute 'kernel_shape' is expected to have field 'ints'
==> Context: Bad node spec for node. Name: /layer2/Conv OpType: Conv
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/.../repos/vendor/onnx-issue/example.py", line 77, in <module>
export()
File "/Users/.../repos/vendor/onnx-issue/example.py", line 44, in export
torch.onnx.export(
File "/Users/.../repos/vendor/onnx-issue/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/Users/.../repos/vendor/onnx-issue/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1638, in _export
raise errors.CheckerError(e) from e
torch.onnx.errors.CheckerError: Attribute 'kernel_shape' is expected to have field 'ints'
==> Context: Bad node spec for node. Name: /layer2/Conv OpType: Conv
```
To simplify its reproducibility I wrote a minimal setup: https://github.com/rcelha/onnx-issue-example/blob/main/example.py
---
First I thought the issue was with varying size of tensors in the `aux` parameter, but I have the same issue in this other example: https://github.com/rcelha/onnx-issue-example/blob/main/example_same_len_aux.py
Here is a version of a similar model that works (without the `aux` argument): https://github.com/rcelha/onnx-issue-example/blob/main/example_working_module.py
---
Also, I noticed that it does export the model if my Conv1d has `bias=False`, but it then fails later when running the model:
```sh
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
2023-09-14 14:43:02.125355 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Conv node. Name:'/layer1/Conv' Status Message: kernel_shape is not compatible with W shape. kernel_shape: {100} W: {16,1,1} channels_last: 0
Traceback (most recent call last):
File "/Users/.../repos/vendor/onnx-issue/example.py", line 78, in <module>
test_inference()
File "/Users/.../repos/vendor/onnx-issue/example.py", line 70, in test_inference
ort_outputs = ort_session.run(None, ort_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/.../repos/vendor/onnx-issue/.venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 217, in run
return self._sess.run(output_names, input_feed, run_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Conv node. Name:'/layer1/Conv' Status Message: kernel_shape is not compatible with W shape. kernel_shape: {100} W: {16,1,1} channels_last: 0
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 16.0.6
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[conda] Could not collect
| 0 |
929 | 109,292 |
aten::squeeze exported to ONNX as an `If` node
|
module: onnx, triaged
|
### 🐛 Describe the bug
Hi,
I am exporting llama (from the transformers implementation) to ONNX, and I don't understand why `If` nodes are inserted for the export of this `squeeze` operator, where the first two dims are always of shape `1` and `1`: https://github.com/huggingface/transformers/blob/7c63e6fc8c34dcf8b0121eaee776f41ccf3b1137/src/transformers/models/llama/modeling_llama.py#L182
It appears we are going into this path: https://github.com/pytorch/pytorch/blob/b6a1d3fb97ca8eeccf15a4c495fdd1af4b197f88/torch/onnx/symbolic_opset11.py#L937
because the symbolic helper `_get_tensor_sizes` give us [here](https://github.com/pytorch/pytorch/blob/b6a1d3fb97ca8eeccf15a4c495fdd1af4b197f88/torch/onnx/symbolic_helper.py#L578) `x_type.varyingSizes() = [None, None, None, None]` .
Is there a way to hint that the shapes are constant hard-coded to `1`? The intermediate captured graph is (the aten::sub is not in the original code base - added for debugging):
```
%4407 : Float(1, 1, 17, 4, strides=[68, 68, 4, 1], requires_grad=0, device=cpu) = aten::sub(%cos.1, %4405, %4406), scope: transformers.models.llama.modeling_llama.LlamaForCausalLM::/transformers.models.llama.modeling_llama.LlamaModel::model/transformers.models.llama.modeling_llama.LlamaDecoderLayer::layers.0/transformers.models.llama.modeling_llama.LlamaAttention::self_attn # /home/fxmarty/hf_internship/transformers/src/transformers/models/llama/modeling_llama.py:183:0
%4408 : int = prim::Constant[value=1](), scope: transformers.models.llama.modeling_llama.LlamaForCausalLM::/transformers.models.llama.modeling_llama.LlamaModel::model/transformers.models.llama.modeling_llama.LlamaDecoderLayer::layers.0/transformers.models.llama.modeling_llama.LlamaAttention::self_attn # /home/fxmarty/hf_internship/transformers/src/transformers/models/llama/modeling_llama.py:185:0
%4409 : Float(1, 17, 4, strides=[68, 4, 1], requires_grad=0, device=cpu) = aten::squeeze(%4407, %4408), scope: transformers.models.llama.modeling_llama.LlamaForCausalLM::/transformers.models.llama.modeling_llama.LlamaModel::model/transformers.models.llama.modeling_llama.LlamaDecoderLayer::layers.0/transformers.models.llama.modeling_llama.LlamaAttention::self_attn # /home/fxmarty/hf_internship/transformers/src/transformers/models/llama/modeling_llama.py:185:0
```
A workaround is to use `cos = cos[0, 0]` - wondering if there is anything better.
Thank you!
cc @justinchuby
### Versions
torch 2.0.1, opset_version = 12
| 2 |
930 | 109,290 |
DISABLED test_nondeterministic_alert_bincount_cuda (__main__.TestTorchDeviceTypeCUDA)
|
triaged, module: flaky-tests, skipped
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_bincount_cuda&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16782561380).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_bincount_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
931 | 109,289 |
PyTorch 2.1 smoke test requirements
|
triaged
|
PyTorch 2.1 is a big release with a lot of new features, so we need to make sure that:
- [ ] CUDA
- [x] pypi binaries with slimmed dependencies are usable in standard AWS containers (amazonlinux:2 regression in 1.13)
- [x] pypi binaries with slimmed dependencies are usable with stock Ubuntu-20.04: https://github.com/pytorch/pytorch/issues/91067 . Test: https://github.com/pytorch/builder/actions/runs/6190494544/job/16806853909#step:11:2278
- [ ] PyTorch+CUDA11.8 and CUDA 12.1 has a working FFT on 4090
- [x] Check cuda 1.12.1 update issue: https://github.com/pytorch/pytorch/issues/94772 with small wheels
- [ ] `torch.compile`
- [x] Basic test works (for example see test mentioned in https://github.com/openai/triton/pull/1176 ) in PyTorch docker container
- [ ] `torch.compile` produces a binary which can be used on 3090
- [x] `torch.compile` raises an error if used on Windows. Test: https://github.com/pytorch/builder/actions/runs/6171808682/job/16752011341#step:9:13394
- [x] `torch.compile` works on 3.11 : Test: https://github.com/pytorch/builder/actions/runs/6223171830/job/16889860866
- MPS
- [x] Resnet is usable out of the box (i.e. https://github.com/pytorch/builder/blob/main/test/smoke_test/smoke_test.py passes for MPS device). Test: https://github.com/pytorch/builder/actions/runs/6171808682/job/16752006835#step:9:1349
### Versions
2.1
| 2 |
932 | 109,285 |
[inductor][cpu] perf regression
|
triaged, oncall: pt2, module: cpu inductor
|
<p>new_perf_regression 2023-09-10 compared with 2023-09-04 nightly release</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>hf_T5_generate</td>
<td>1.0</td>
<td>1.02661</td>
<td>1.409398686</td>
<td>1.44690278503446</td>
<td>687.487129</td>
<td>1.0</td>
<td>1.205255</td>
<td>1.2150767370000002</td>
<td>1.4644773126529351</td>
<td>1519.766411</td>
<td>0.85</td>
<td>1.01</td>
<td>0.86</td>
<td>2.21</td>
</tr>
<tr>
<td>cait_m36_384</td>
<td>1.0</td>
<td>0.445781</td>
<td>0.777321322</td>
<td>0.346515076242482</td>
<td>128.702354</td>
<td>2.0</td>
<td>0.901154</td>
<td>0.684335916</td>
<td>0.616692048047064</td>
<td>124.246723</td>
<td>0.49</td>
<td>1.78</td>
<td>0.88</td>
<td>0.97</td>
</tr>
<tr>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
</tr>
<tr>
<td>lennard_jones</td>
<td>1.0</td>
<td>1.367114</td>
<td>4.8845e-05</td>
<td>6.677668332999999e-05</td>
<td>12.23025</td>
<td>1.0</td>
<td>1.785091</td>
<td>3.7474e-05</td>
<td>6.6894500134e-05</td>
<td>12.156179</td>
<td>0.77</td>
<td>1.0</td>
<td>0.77</td>
<td>0.99</td>
</tr>
<tr>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
</tr>
</tbody>
</table>
<p>SW info</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>SW</th>
<th>Nightly commit</th>
<th>Main commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pytorch</td>
<td>3316374</td>
<td>8ff0036</td>
</tr>
<tr>
<td>Torchbench</td>
<td>/</td>
<td>ffbbebb9</td>
</tr>
<tr>
<td>torchaudio</td>
<td>475b6ae</td>
<td>ede4309</td>
</tr>
<tr>
<td>torchtext</td>
<td>142d029</td>
<td>45e4b8c</td>
</tr>
<tr>
<td>torchvision</td>
<td>8636bf3</td>
<td>4ac707a</td>
</tr>
<tr>
<td>torchdata</td>
<td>eb9bf61</td>
<td>d76d92c</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>0200b11</td>
<td>/</td>
</tr>
</tbody>
</table>
<p>Repro</p>
<a href=https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh>
inductor_single_run.sh </a>
<code>
bash inductor_single_run.sh single inference performance torchbench hf_T5_generate float32 first static cpp 0
bash inductor_single_run.sh single inference performance timm_models cait_m36_384 float32 first static cpp 0
bash inductor_single_run.sh multiple inference performance torchbench lennard_jones float32 first static cpp 0
</code>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
933 | 109,284 |
Adding index function for lists
|
triaged, open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
Fixes #109031
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 9 |
934 | 109,281 |
add _amp_foreach_non_finite_check_and_unscale_cpu_ and _amp_update_scale_cpu_ kernels on CPU
|
module: cpu, open source, ciflow/trunk, ciflow/periodic, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #109994
* #109993
* __->__ #109281
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 3 |
935 | 109,276 |
DISABLED test_nondeterministic_alert_MaxUnpool3d_cuda_float64 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool3d_cuda_float64&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16776175972).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 11 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool3d_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
936 | 109,268 |
DISABLED test_nondeterministic_alert_MaxUnpool3d_cuda_float32 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool3d_cuda_float32&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16768147456).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool3d_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
937 | 109,266 |
Add a unittest for ModuleWrapPolicy callable
|
oncall: distributed, good first issue, triaged, pt_distributed_rampup
|
### 🚀 The feature, motivation and pitch
We made ModuleWrapPolicy a callable in https://github.com/pytorch/pytorch/pull/109117, and should add an appropriate unittest.
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 2 |
938 | 109,265 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
939 | 109,263 |
[profiler] Show shapes for lists of tensors in chrome traces
|
oncall: profiler
|
### 🚀 The feature, motivation and pitch
When profiling with record_shapes=True, chrome traces show shapes for tensors, but not for lists of tensors. It would be helpful if we also saw shape info for lists of tensors.
Suggestions:
* Build a local script that you can use for testing (see the sketch after this list). It can be basic - just run a few basic PyTorch ops, then run them under the profiler with the `record_shapes=True` option turned on. Then export the chrome trace and view it in `chrome://tracing`. Example script: https://gist.github.com/davidberard98/a9711961a78f50b1b0eff530322848df - note that `torch.cat` takes a list of tensors, so it's a good op for testing here. Verify that when you click on the op, it shows some shape information.
* See https://github.com/pytorch/pytorch/pull/100593 for a similar change (focus on the changes in profiler_kineto.cpp). Note that the referenced PR adds support for both collecting and displaying the information; in this case, collection is already done; you'll just need to display the info
* To avoid making traces too much larger, you should probably limit to, say, the first 30 elements if list of tensors has >30 elements.
* For now you can probably just display the information in the chrome trace, and not add it into the KinetoEvent. (You could do that too, if you want)
* Add a test
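A minimal sketch of such a test script (uses only the public `torch.profiler` API; the file name and ops are arbitrary):
```python
import torch
from torch.profiler import profile, ProfilerActivity

tensors = [torch.randn(8, 16) for _ in range(3)]

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    out = torch.cat(tensors, dim=0)  # takes a List[Tensor] input
    out = out.relu()

prof.export_chrome_trace("trace.json")  # open in chrome://tracing and click the cat op
```
Today, clicking the `aten::cat` op shows shapes only for the plain tensor arguments; the goal here is to also show the shapes of the tensors inside the list argument.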
### Alternatives
_No response_
### Additional context
_No response_
cc @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov
| 0 |
940 | 109,262 |
Add .item() and .tolist() support in Dynamo/Inductor without graph break
|
release notes: fx, module: inductor, ciflow/inductor
|
Main changes:
1. Allow DynamicScalar to flow around in the system (by implementing `ir.DynamicScalar` and not throw error in `size_hint()` or `evaluate_expr()` when returned value has unbacked symint)
2. Make DynamicScalar object synonymous to its underlying Symbol object, to make downstream SymPy reasoning in various part of the system easier (NOTE: this is a semi-hacky change, and needs discussion on whether it's okay to do)
3. Change various inequality checks to use the SymPy equivalent version, to recognize the fact that now there is unbacked symint involved in those comparisons
4. Make the downstream kernel depend on the buffer that represents the symint, so that the symint buffer does not get DCE'd away
5. Pass needed unbacked symints into Triton kernel
6. For Triton size hint, replace unbacked symints with default value 32 (NOTE: this is the only place we do this kind of direct replacement to integer value)
7. Remove the `sym_constrain_range_for_size` decomposition empty stub, and implement the `sym_constrain_range_for_size` inductor lowering (this is needed for user side `constrain_as_size` annotation to work)
8. Gate these changes behind `torch._dynamo.config.capture_scalar_outputs` flag
Will break this down into several PRs if we are good with the general approach.
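For context, a minimal sketch of the kind of user code this targets (whether it compiles without a graph break depends on the flag and the changes above):
```python
import torch

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile
def f(x):
    n = x.sum().item()     # previously forced a graph break
    return torch.zeros(n)  # the unbacked symint flows into a size

print(f(torch.arange(3)))  # integer tensor, so .item() yields an int
```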
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
941 | 109,261 |
[dynamo] Disable DDPOptimizer or error out if DDPOptimizer + static_graph is detected
|
triaged, module: ddp, oncall: pt2
|
### 🐛 Describe the bug
AFAIK, DDPOptimizer doesn't work with DDP when static_graph=True.
Until we have a fix / better solution, we could probably either warn, error out, or automatically disable DDPOptimizer if we detect static_graph=True. That would help with debugging why DDPOptimizer is erroring out.
### Versions
N/A
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
942 | 109,260 |
[FSDP] Simplify `_fully_sharded_module_to_handle`
|
triaged, module: fsdp
|
It looks like `_fully_sharded_module_to_handle` contains at most 1 element, so there is no reason for it to be a `Dict`.
This can also be simplified to only insert the state on the fully sharded module if not already inserted (by the `None` check):
https://github.com/pytorch/pytorch/blob/dbddf1816a847382a9e110a1ed2dfb245495df41/torch/distributed/_composable/fully_shard.py#L115-L124
cc @zhaojuanmao @mrshenli @rohan-varma @fegin @penguinwu
| 0 |
943 | 109,258 |
Decompose div.Tensor_mode
|
ciflow/inductor, module: export
| null | 2 |
944 | 109,256 |
Increase tolerances for atanh opinfo test
|
topic: not user facing, module: inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #109264
* __->__ #109256
* #109165
* #109164
The deviation is pretty small after all
| 4 |
945 | 109,249 |
Remove det_singular OpInfo
|
open source
|
Fixes #93045
The singular linalg_det test cases are flaky. The consensus seems to be that we can remove the associated OpInfo; see:
https://github.com/pytorch/pytorch/issues/93045#issuecomment-1477674083
@lezcano
This change is a duplicate of a different stale PR: https://github.com/pytorch/pytorch/pull/102581
| 2 |
946 | 109,245 |
metric table
|
module: inductor, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109245
* #108193
In dynamo/inductor, it sometimes helps to gather metrics/statistics for each model at different levels: model level, graph level, kernel level, or pairs of fusion nodes. This kind of thing would be very easy to do with Scuba, but we only have Scuba in fbcode. This PR builds metric tables to solve part of the problem.
Q: why not log to stdout/stderr directly?
A: sometimes we need more structured data. E.g., it would be helpful to gather all the stats in a CSV and then do post-processing (like calculating a geomean etc.). Also, the metric table will tag each row with the model name, which is helpful.
Q: what's the difference with speedup_inductor.csv?
A: speedup_inductor.csv is a special case that gathers statistics at the model level: i.e., we have one row for each model. But recording statistics at a finer-grained level, like per graph, is also helpful.
Example use cases:
- As a follow-up on the benchmark fusion PR, I want to gather all the 'slow' fusions and analyze them. With the metric table, I can easily log slow fusions for each model into a CSV file. Here is the log gathered for huggingface:
https://gist.github.com/shunting314/964e73cc98368b301414ec7b7ad4c702 .
- To help understand the effect of the 'loop ordering after fusion' PR, it would be helpful to gather stats like how many fusions happen for each graph. Previously we logged the metric to stderr directly, but logging these metrics in a structured way is useful.
- gather number of registers, register spills, shared memory usage for each kernel in each model with runnable kernel code logged.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
947 | 109,240 |
AOTAutograd should put keep mutations in the graph during training
|
triaged, oncall: pt2, module: aotdispatch
|
Tracking issue for future work in aot autograd.
Today, if you compile a model that has input mutations, and you're compiling an inference graph (because e.g. you're running your model under `no_grad()`), then AOTAutograd will do the following:
(1) functionalize the graph; `x.mul_(2)` becomes `x_updated = x.mul(2)`
(2) detect that there were input mutations, which it represents as extra `x.copy_(x_updated)` calls
(3) Put those copy_() nodes directly in the graph so that inductor can fuse them
However, this is not true in the training case. Instead, AOTAutograd will:
(1) return `x_updated` as an additional graph output
(2) run an opaque epilogue (in eager mode), that manually performs the mutation: `x.copy_(x_updated)`
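To make this concrete, here is a minimal sketch of the kind of input mutation being discussed (illustrative only); under `no_grad()` the copy_() can be kept in the compiled graph, while in the training case (e.g. batch-norm buffer updates while autograd is active) it currently happens in the opaque epilogue:
```python
import torch

def step(param, grad):
    param.add_(grad, alpha=-0.1)  # in-place mutation of a graph input
    return param.sum()            # functionalized into param.add(...) plus a copy_() back to param

compiled_step = torch.compile(step)
p, g = torch.randn(4), torch.randn(4)

with torch.no_grad():             # inference graph: the copy_() can be kept in the graph for inductor
    compiled_step(p, g)
```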
Historically, getting input mutations into the graph for inference was important because of optimizers: optimizers are (usually) run under no_grad so we get an inference graphs, and captured graphs of optimizers have a large number of input mutations, due to each parameter (a graph input) being mutated. So we made sure that when you compile optimizer code as an inference graph, we keep the input mutations in the graph for inductor to fuse.
It would still be useful to keep input mutations in the graph for the training case too, though. One example is batch norm: today, inductor can't actually see and speed up the buffer mutations that occur from batch norm during training, because they are not kept in the graph.
The main difficulty in getting this to work is that we need to make sure that when AOTAutograd invokes the partitioner to split out separate fw/bw graphs, the "correct" thing happens (if there are copy_() nodes in the joint graph due to input mutations, those nodes should always show up at the very end of the partitioned forward graph).
cc @ezyang @msaroufim @wconstab @anijain2305
| 21 |
948 | 109,237 |
AOTAutograd should track view chains so it can replay them, instead of using as_strided.
|
triaged, oncall: pt2, module: aotdispatch
|
This is a tracker issue for doing the longer term thing described in https://github.com/pytorch/pytorch/issues/109053.
The point of this is mainly perf: As mentioned in this internal issue, calling as_strided() means that as_strided_backward() will show up in the backward graph, which is known to be slow (internal post: https://fb.workplace.com/groups/147416452503510/posts/1211736186131257/)
cc @ezyang @msaroufim @wconstab @anijain2305
| 0 |
949 | 109,236 |
Report name of defining class alongside function name in Dynamo logs
|
module: logging, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Here's a common log message I see:
```
[rank0]:[2023-09-13 11:54:28,851] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line <listcomp> /data/users/ezyang/b/torchrec/torchrec/distributed/dist_data.py:173 (inline depth: 15)
```
This message is annoying because all it says is `<listcomp>`, but what I really wanted to know was what class (and function) I was in. In principle, we should be able to figure this out, not from the code object (which doesn't really know what class it was defined in) but from the line number.
A shitty implementation that would do better might look like this: when processing `dist_data.py`, read and parse the Python file, giving a stream of tokens. For every line that precedes an increase of indentation, check if it is `class XXX` or `def XXX`; if so, associate all nested lines with `XXX`. This can be done recursively so you get `XXX.YYY`. The line-to-class/method table can be cached so we only have to do this once per file.
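A minimal sketch of that line-to-scope table, using the stdlib `ast` module instead of a raw token stream (function name and the example path are illustrative):
```python
import ast
from functools import lru_cache

@lru_cache(maxsize=None)  # compute once per file, as suggested above
def line_to_scope(path):
    """Map each source line to the dotted class/def scope that contains it."""
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    table = {}

    def visit(node, prefix):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                name = f"{prefix}.{child.name}" if prefix else child.name
                for line in range(child.lineno, (child.end_lineno or child.lineno) + 1):
                    table[line] = name  # deeper defs re-assign these lines when visited below
                visit(child, name)
            else:
                visit(child, prefix)

    visit(tree, "")
    return table

# e.g. line_to_scope("torchrec/distributed/dist_data.py").get(173) could then be reported
# next to "<listcomp>" in the TRACE line above.
```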
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @williamwen42
### Versions
main
| 0 |
950 | 109,235 |
ONNX exporter issue: fails to add conversions exporting T5 Transformer model
|
module: onnx, triaged
|
### 🐛 Describe the bug
To reproduce, run the following script:
```
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
def input_example(self, max_batch=1, max_dim=64, seq_len=16):
sample = next(self.parameters())
input_ids = torch.randint(low=0, high=max_dim, size=(max_batch, seq_len), device=sample.device)
labels = torch.randint(low=0, high=max_dim, size=(max_batch, seq_len), device=sample.device)
attention_mask = torch.randint(low=0, high=1, size=(max_batch, seq_len), device=sample.device)
return tuple([input_ids, attention_mask, labels])
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
torch.onnx.export(model,
input_example(model),
"t5.onnx",
verbose=True,
opset_version=16,
do_constant_folding=True,
)
```
```
python t5_repro.py
```
Observe result:
```
....
/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py:1655: UserWarning: The exported ONNX model failed ONNX shape inference.The model will not be executable by the ONNX Runt
ime.If this is unintended and you believe there is a bug,please report an issue at https://github.com/pytorch/pytorch/issues.Error reported by strict ONNX shape inference: [ShapeIn
ferenceError] (op_type:Where, node name: /encoder/block.0/layer.0/SelfAttention/Where): Y has inconsistent type tensor(float) (Triggered internally at /opt/pytorch/pytorch/torch/cs
rc/jit/serialization/export.cpp:1410.)
```
Resulting ONNX does fail in ORT if attempted.
Expected behavior
Working ONNX file should be generated.
The problem is that some arguments to transformer functions come in as float when they should have been long integers. Torch eager mode and TorchScript do implicit conversion on the fly, while the ONNX exporter does not add explicit casts, so that code fails later in ORT because ONNX has no implicit conversion mechanism.
### Versions
PyTorch version: 2.1.0a0+4136153
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000
....
Versions of relevant libraries:
[pip3] k2==1.24.3.dev20230725+cuda12.1.torch2.1.0a0
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.9.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+4136153
[pip3] torch-cluster==1.6.1
[pip3] torch-geometric==2.3.1
[pip3] torch-sparse==0.6.17
[pip3] torch-spline-conv==1.2.2
[pip3] torch-tensorrt==1.5.0.dev0
> pip show onnxruntime-gpu
Name: onnxruntime-gpu
Version: 1.15.0
| 0 |
951 | 109,224 |
DISABLED test_nondeterministic_alert_MaxUnpool3d_cuda_float16 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool3d_cuda_float16&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16755371891).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool3d_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
952 | 109,213 |
inductor/test_max_autotune having timeout issues
|
module: tests, triaged, oncall: pt2, module: inductor
|
These tests sometimes take <1 min, and other times cause the entire test file to time out, meaning they are taking upwards of 20 minutes. Sometimes they succeed on reruns, which has been reducing the impact, but if this happens to more than just 1 or 2 tests, then the job will fail since it will run out of rerun chances.
They seem to only impact `linux-focal-cuda12.1-py3.10-gcc9-sm86 / test (default)` tests
Visible on hud https://hud.pytorch.org/hud/pytorch/pytorch/bfa8429c6a2ee9e0f4050066304743f6b13f14f7/1?per_page=100 (check show unstable)
The tests were introduced in https://github.com/pytorch/pytorch/pull/108015/files
This is unlikely to be related to parallelization, since the serial mem-leak-check job also had problems (https://github.com/pytorch/pytorch/actions/runs/6169966074/job/16746129137), but I submitted https://github.com/pytorch/pytorch/pull/109209 just to be sure.
Here is a list of tests I have noticed do this:
```
inductor/test_max_autotune.py::TestDoBench::test_max_autotune_cutlass_backend_addmm_dynamic_False_max_autotune_gemm_backends_CUTLASS
inductor/test_max_autotune.py::TestDoBench::test_max_autotune_cutlass_backend_addmm_dynamic_False_max_autotune_gemm_backends_ATen, Triton, CUTLASS
inductor/test_max_autotune.py::TestDoBench::test_max_autotune_cutlass_backend_regular_mm_dynamic_False_max_autotune_gemm_backends_CUTLASS
inductor/test_max_autotune.py::TestDoBench::test_max_autotune_cutlass_backend_regular_mm_dynamic_False_max_autotune_gemm_backends_ATen, Triton, CUTLASS
inductor/test_max_autotune.py::TestDoBench::test_max_autotune_cutlass_backend_mm_bias_dynamic_False_max_autotune_gemm_backends_CUTLASS
inductor/test_max_autotune.py::TestDoBench::test_max_autotune_cutlass_backend_mm_bias_dynamic_False_max_autotune_gemm_backends_ATen, Triton, CUTLASS
```
If this doesn't get fixed, I'm going to need to revert the change or disable all these tests.
Example log: https://ossci-raw-job-status.s3.amazonaws.com/log/16755850617
Ctrl-F for `KeyboardInterrupt`; the relevant logs should be above it.
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
953 | 109,210 |
[TEST] Release only changes
|
with-ssh, release notes: releng, ciflow/inductor
|
Fixes #ISSUE_NUMBER
| 1 |
954 | 109,208 |
Apply release only changes to core
|
topic: not user facing
|
Utility script to run after the branch cut has been completed.
Execute: ``RELEASE_VERSION=2.1 apply-release-changes.sh``
Similar to: https://github.com/pytorch/audio/pull/3590
Test PR: https://github.com/pytorch/pytorch/pull/109210
Automate generation of PRs:
https://github.com/pytorch/pytorch/pull/108053
https://github.com/pytorch/pytorch/pull/108688
https://github.com/pytorch/pytorch/pull/108064
| 3 |
955 | 109,205 |
TST: add a numpy reproducibility test
|
open source, topic: not user facing, module: dynamo
|
Setting the global `torch._dynamo.config.use_numpy_random_stream` should make compiled numpy programs traced by torch.dynamo use NumPy's PRNG (at the expense of a graph break, most likely).
ATM, this only adds a test which passes but is going to start failing once gh-109109 lands, so we will need to reevaluate once that one lands.
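A sketch of the kind of reproducibility test meant here (assumes the `use_numpy_random_stream` flag from this PR exists and that gh-109109 has landed):
```python
import numpy as np
import torch

torch._dynamo.config.use_numpy_random_stream = True  # global flag discussed above

@torch.compile
def sample():
    return np.random.uniform(size=4)  # expected to graph-break and use NumPy's PRNG

np.random.seed(0)
expected = np.random.uniform(size=4)

np.random.seed(0)
np.testing.assert_allclose(sample(), expected)
```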
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
956 | 109,204 |
Pytorch ROCM windows builds
|
module: windows, module: rocm, triaged
|
### 🚀 The feature, motivation and pitch
More guides are showing up for ROCm on Windows, such as this CUDA program whose cuBLAS dependencies needed to be compiled against AMD's equivalent, hipBLAS:
https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/Windows-v1.43-ROCm
As of July 27th, AMD officially supports the HIP SDK on Windows:
https://www.amd.com/en/developer/rocm-hub/hip-sdk.html
It would be very good for PyTorch on Windows to work with a greater variety of AMD devices.
### Alternatives
_No response_
### Additional context
_No response_
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 0 |
957 | 109,198 |
[torch.optim/C++] Add Adamax optimizer
|
triaged, open source, release notes: optim
|
Adds Adamax optimizer for #107224.
cc @albanD @jbschlosser @soumith @iramazanli @vincentqb @janeyx99
| 1 |
958 | 109,195 |
DISABLED test_nondeterministic_alert_MaxUnpool2d_cuda_float64 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool2d_cuda_float64&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16744750754).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool2d_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
959 | 109,194 |
`torch.mean` not supported for `torch.sparse_coo_tensor`, but `torch.sum` is supported (`scipy.sparse.coo_matrix` does support both `mean` and `sum`)
|
module: sparse, triaged, enhancement
|
### 🚀 The feature, motivation and pitch
I guess torch.mean could easily be supported, at least by doing torch.sum and then dividing by the size of the reduced dimension?
This can be used for estimating, e.g., the average edge weight in a graph or similar.
```python
import torch
a = torch.eye(10)
print(a.to_sparse().sum(dim = -1))
# tensor(indices=tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]),
# values=tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]),
# size=(10,), nnz=10, layout=torch.sparse_coo)
print(a.to_sparse().mean(dim = -1))
# NotImplementedError: Could not run 'aten::mean.dim' with arguments from the 'SparseCPU' backend. ...
```
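In the meantime, a minimal sketch of the suggested workaround (sparse sum scaled by the reciprocal of the reduced dimension's size; like a dense mean, the implicit zeros count toward the denominator):
```python
import torch

a = torch.eye(10).to_sparse()
dim = -1
mean_like = a.sum(dim=dim) * (1.0 / a.size(dim))  # stays sparse; sum / dim size
print(mean_like)
```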
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 5 |
960 | 109,193 |
F.conv2d(input, weight, bias, self.stride, RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
|
needs reproduction, module: cudnn, module: nn, module: cuda, triaged
|
### 🐛 Describe the bug
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([1, 512, 64, 64], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(512, 256, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.35
Python version: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7452 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2350.0000
CPU min MHz: 1500.0000
BogoMIPS: 4699.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpui
d extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce
topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xs
aves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_
vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.0
[pip3] torch==1.13.1
[pip3] torchvision==0.14.1
[pip3] triton==2.0.0
[conda] numpy 1.23.0 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
[conda] torchvision 0.14.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
961 | 109,191 |
Gradients across different ranks are not synchronized when using DDP
|
oncall: distributed, triaged
|
# 🐛 Bug
Gradients across different ranks are not synchronized when using DDP.
You can run the following code:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn.functional as functional
from torch.nn.parallel import DistributedDataParallel as DDP
import os
import numpy as np
import random
class TwoLayerMLP(nn.Module):
def __init__(self, model_dim, feedford_dim):
super(TwoLayerMLP, self).__init__()
self.linear1 = nn.Linear(model_dim, feedford_dim, bias=False)
self.linear2 = nn.Linear(feedford_dim, model_dim, bias=False)
def forward(self, input):
a1 = functional.relu(self.linear1(input))
a2 = self.linear2(a1)
return input + a2
def setup_seed(seed):
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
random.seed(seed)
# torch.backends.cudnn.deterministic = True
def run_fn(rank, world_size):
print(f"Running DDP on rank {rank}.")
# create default process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
setup_seed(1024)
model_dim = 2
feedford_dim = 8
micro_batch_size = 3
inputs_1 = torch.Tensor([[ 0.5920, -0.6301],
[-0.8856, 1.2261],
[-0.4671, -1.0279]]).to(rank)
labels_1 = torch.Tensor([[-1.0387, 0.1039],
[ 0.5989, -1.4801],
[-0.8618, -0.9181]]).to(rank)
inputs_2 = torch.Tensor([[-0.0355, 0.4145],
[ 0.6798, -0.2936],
[ 0.1872, -0.2724]]).to(rank)
labels_2 = torch.Tensor([[-0.5524, -0.8358],
[-2.8240, 0.2564],
[ 0.5045, -1.1290]]).to(rank)
inputs_3 = torch.Tensor([[-0.6166, -0.3604],
[ 0.1046, 1.4810],
[-0.2449, 1.1106]]).to(rank)
labels_3 = torch.Tensor([[-0.3063, -1.3320],
[ 0.7281, 0.1859],
[ 0.5624, -1.4094]]).to(rank)
inputs_1 = inputs_1 + rank
labels_1 = labels_1 + rank
inputs_2 = inputs_2 + rank
labels_2 = labels_2 + rank
inputs_3 = inputs_3 + rank
labels_3 = labels_3 + rank
# inputs_1.requires_grad_(True)
# inputs_2.requires_grad_(True)
# inputs_3.requires_grad_(True)
inputs = [inputs_1, inputs_2, inputs_3]
labels = [labels_1, labels_2, labels_3]
loss_fn = nn.MSELoss()
model = TwoLayerMLP(model_dim, feedford_dim).to(rank)
ddp_model = DDP(model, device_ids=[rank])
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
outputs = [0] * micro_batch_size
outputs[0] = ddp_model(inputs[0])
with ddp_model.no_sync():
for i in range(1, micro_batch_size):
outputs[i] = ddp_model(inputs[i])
for i in range(1, micro_batch_size):
loss = loss_fn(outputs[i], labels[i])
loss.backward()
loss = loss_fn(outputs[0], labels[0])
loss.backward()
print("rank", rank, "backward:")
for name, param in ddp_model.named_parameters():
print("rank", rank, name, ":", param.grad)
optimizer.step()
def main():
world_size = 2
mp.spawn(run_fn,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__=="__main__":
# Environment variables which need to be
# set when using c10d's default "env"
# initialization mode.
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"
main()
print("Done!")
```
And the results are:
```python
Running DDP on rank 1.
Running DDP on rank 0.
rank 1 backward:
rank 0 backward:
rank 1 module.linear1.weight : tensor([[-0.1659, 1.0537],
[-0.4481, -1.6936],
[ 0.2067, -1.4908],
[ 0.0000, 0.0000],
[ 0.1104, -0.0408],
[-0.4700, -0.1127],
[-0.1041, -0.0192],
[-0.0166, -0.3176]], device='cuda:1')
rank 1 module.linear2.weight : tensor([[ 0.3232, -0.8505, 0.4239, 0.0000, 0.0077, 0.6049, 0.9509, -0.0285],
[ 2.0952, 2.1533, 1.7861, 0.0000, 0.0142, -0.1346, -0.2149, 0.0685]],
device='cuda:1')
rank 0 module.linear1.weight : tensor([[-0.4369, 0.6137],
[ 0.3596, -0.9361],
[ 0.7006, -0.9808],
[-0.2947, 0.4298],
[ 0.2228, -0.1902],
[-0.2892, 0.0564],
[-0.1001, -0.0041],
[ 0.1462, -0.3997]], device='cuda:0')
rank 0 module.linear2.weight : tensor([[ 0.5414, -0.6772, 0.6198, -0.0058, 0.2271, 0.8106, 0.9740, -0.2846],
[ 0.7560, 1.8476, 0.5504, 0.0230, 0.0383, -0.1884, -0.1909, 0.7888]],
device='cuda:0')
Done!
```
We can find that gradients across different ranks are not synchronized.
### Versions
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3060 Ti
GPU 1: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Xeon Processor (Skylake, IBRS)
Stepping: 4
CPU MHz: 2399.971
BogoMIPS: 4799.94
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 16 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-3
NUMA node1 CPU(s): 4-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 8 |
962 | 109,181 |
DISABLED test_nondeterministic_alert_MaxUnpool2d_cuda_float32 (__main__.TestTorchDeviceTypeCUDA)
|
triaged, module: flaky-tests, skipped
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool2d_cuda_float32&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16736765453).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool2d_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
963 | 109,180 |
FSDP vs. MiCS
|
oncall: distributed, module: docs, triaged, module: fsdp
|
### π The doc issue
I have read the documentation and paper about PyTorch FSDP (https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/). I know there are two different types of model sharding strategies: one is full sharding, and another is hybrid sharding. Also, I have read about MiCS in DeepSpeed (paper: https://www.vldb.org/pvldb/vol16/p37-zhang.pdf code: https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/mics.py).
What is the difference between PyTorch FSDP's hybrid sharding and MiCS? It seems that both of them first perform a reduce-scatter of the gradients in the sharding group and then an AllReduce operation in the replication group. The documentation leaves me a bit confused.
### Suggest a potential alternative/fix
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @svekars @carljparker @fegin
| 1 |
964 | 109,175 |
SparseSemiStructuredTensors are constructed differently from the original dense ones
|
module: sparse, triaged
|
### π Describe the bug
I'm working on semi-structured sparsity using the recently introduced features of SparseSemiStructuredTensors.
However, I have encountered inconsistent results between the dense (i.e., HalfTensor) and sparse (i.e., SparseSemiStructuredTensors) formats of 2:4 pruned weights.
My models almost always perform worse when using sparse formats than dense formats.
Moreover, when I reconstructed a dense tensor from the sparse tensor, I found that several weights were indexed differently.
There may be an error in how the indices are encoded, resulting in this inconsistent behavior.
Here's a minimal code for reproducing the problem.
```
import torch
from torch.sparse import to_sparse_semi_structured
a = torch.randn(3072, 768).cuda().half()/100
pruning_inds = a.abs().view(-1, 4).argsort(dim=1)[:, :2]
a.view(-1, 4).scatter_(1, pruning_inds, torch.zeros_like(pruning_inds).half())
b = to_sparse_semi_structured(a).to_dense()
print(((a-b)**2).sum())
print(a[a!=b][:32])
print(b[a!=b][:32])
```
Here's the result of running the above code. It clearly shows that the remaining weights are positioned incorrectly.
```
tensor(239.8750, device='cuda:0', dtype=torch.float16)
tensor([ 0.0000, -0.0087, 0.0000, 0.0162, 0.0000, -0.0073, 0.0163, 0.0000,
0.0000, 0.0155, 0.0000, -0.0177, 0.0128, 0.0000, 0.0000, -0.0084,
-0.0058, 0.0000, 0.0000, 0.0124, 0.0000, 0.0093, 0.0000, 0.0060,
0.0078, 0.0145, 0.0000, -0.0056, 0.0000, 0.0157, 0.0146, 0.0000],
device='cuda:0', dtype=torch.float16)
tensor([-0.0087, 0.0000, 0.0162, 0.0000, -0.0073, 0.0000, 0.0000, 0.0163,
0.0155, 0.0000, -0.0177, 0.0000, 0.0000, 0.0128, -0.0084, 0.0000,
0.0000, -0.0058, 0.0124, 0.0000, 0.0093, 0.0000, 0.0060, 0.0000,
0.0000, 0.0078, 0.0145, 0.0000, -0.0056, 0.0000, 0.0157, 0.0146],
device='cuda:0', dtype=torch.float16)
```
Thanks.
### Versions
```
Collecting environment information...
PyTorch version: 2.2.0a0+gitb193f29
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
GPU 2: NVIDIA RTX A5000
GPU 3: NVIDIA RTX A5000
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 36
On-line CPU(s) list: 0-35
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1200.079
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 576 KiB
L1i cache: 576 KiB
L2 cache: 18 MiB
L3 cache: 24.8 MiB
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.2.0a0+gitb193f29
[pip3] torchaudio==2.2.0.dev20230906+cu121
[pip3] torchvision==0.16.0.dev20230906+cu121
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-include 2023.1.0 h06a4308_46343
[conda] numpy 1.25.2 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.2.0a0+gitb193f29 dev_0 <develop>
[conda] torchaudio 2.2.0.dev20230906+cu121 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230906+cu121 pypi_0 pypi
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 9 |
965 | 109,166 |
NAN appears in the backward results of masked.cumprod on macos
|
needs reproduction, module: autograd, triaged, module: NaNs and Infs, module: half, module: mps
|
### π Describe the bug
While adding fp16 support for cumprod on CPU, we hit a CI failure on macOS for masked.cumprod. I don't have a macOS machine to reproduce this issue, so I printed the results in the UT defined in test_mps.py. https://github.com/pytorch/pytorch/actions/runs/6166735824/job/16736964541.
`mps_grad_inputs has nan: tensor(True, device='mps:0')` — NaN appears in the backward results of masked.cumprod on macOS.
### Versions
latest main branch.
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @kulinseth @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
966 | 109,162 |
DISABLED test_nondeterministic_alert_MaxUnpool2d_cuda_float16 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool2d_cuda_float16&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16731933745).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool2d_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
967 | 109,161 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
968 | 109,150 |
[pytorch][PR] [pytorch-vulkan] add aten::randn_like & aten::normal_
|
fb-exported, module: vulkan, release notes: vulkan, ciflow/periodic
|
Summary:
**COPIED FROM D48814024 (having issue on Pytorch sync).**
Implemented an aten::normal_ shader and used it to implement aten::randn_like.
Ops definitions:
https://pytorch.org/docs/stable/generated/torch.randn_like.html
https://pytorch.org/docs/stable/generated/torch.Tensor.normal_.html
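For reference, a small usage sketch of the two ops this change targets (a hedged example, not part of the diff; it assumes a Vulkan-enabled build and falls back to CPU otherwise):
```python
# Hedged usage sketch: exercises the new aten::randn_like / aten::normal_
# paths when a Vulkan-capable build is available, otherwise runs on CPU.
import torch

x = torch.ones(2, 3)
if torch.is_vulkan_available():
    x = x.to("vulkan")
y = torch.randn_like(x)   # aten::randn_like
x.normal_(0.0, 1.0)       # aten::normal_
```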
Test Plan:
```
[ttingchulin@53491.od /data/sandcastle/boxes/fbsource (randn)]$ LD_LIBRARY_PATH=third-party/swiftshader/lib/linux-x64/ buck run fbcode/mode/dev-nosan //xplat/caffe2:pt_vulkan_api_test_bin -- --gtest_filter="*<test>*" eg. -- --gtest_filter="*randn_like*"
[==========] Running 2 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 2 tests from VulkanAPITest
[ RUN ] VulkanAPITest.randn_like
[ OK ] VulkanAPITest.randn_like (230 ms)
[ RUN ] VulkanAPITest.randn_like_large
[ OK ] VulkanAPITest.randn_like_large (570 ms)
[----------] 2 tests from VulkanAPITest (801 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test suite ran. (801 ms total)
[ PASSED ] 2 tests.
[ttingchulin@53491.od /data/sandcastle/boxes/fbsource (randn)]$ LD_LIBRARY_PATH=third-party/swiftshader/lib/linux-x64/ buck run fbcode/mode/dev-nosan //xplat/caffe2:pt_vulkan_api_test_bin -- --gtest_filter="*<test>*" eg. -- --gtest_filter="*normal_*"
[==========] Running 3 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 3 tests from VulkanAPITest
[ RUN ] VulkanAPITest.normal_
[ OK ] VulkanAPITest.normal_ (222 ms)
[ RUN ] VulkanAPITest.normal_large
[ OK ] VulkanAPITest.normal_large (136 ms)
[ RUN ] VulkanAPITest.normal_error
[ OK ] VulkanAPITest.normal_error (37 ms)
[----------] 3 tests from VulkanAPITest (396 ms total)
[----------] Global test environment tear-down
[==========] 3 tests f.
```
Differential Revision: D49208229
| 3 |
969 | 109,149 |
CPU Publish: Fix Assign device error, when module has multiple devices
|
fb-exported, release notes: quantization, release notes: AO frontend
|
Summary:
Fix the assign-device error when a module has multiple devices.
If fc_fp16_quantization is enabled for a CPU model and the module REMOTE_OTHER has multiple devices ({device(type='meta'), device(type='cpu')}), we fail on this assertion:
https://www.internalfb.com/code/fbsource/[7ad02231251739135fda7567c5d5323b5132e0ae]/fbcode/caffe2/torch/ao/quantization/fx/utils.py?lines=232
Since CPU models run on CPU devices, a condition was added before the assertion: if CPU is among the module's devices, set the device to CPU.
Please see debug details:
https://docs.google.com/document/d/1pMPCeJyMPA15NhFc2uAyNDkS9azR40uaNyOP0DIgHjU/edit
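A hedged sketch of the added condition (simplified; the real helper in the linked fx/utils.py derives the device set from the module's parameters and buffers, and the names below are illustrative only):
```python
# Hedged sketch, not the actual diff: prefer CPU when a CPU model reports
# multiple devices (e.g. {meta, cpu}) instead of failing the assertion.
import torch

def pick_unique_device(devices):
    if len(devices) > 1 and torch.device("cpu") in devices:
        return torch.device("cpu")
    assert len(devices) <= 1, f"expected at most one device, got {devices}"
    return next(iter(devices), None)

print(pick_unique_device({torch.device("meta"), torch.device("cpu")}))  # cpu
```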
Test Plan:
AIMP_DISAGG_CPU=true buck run mode/opt -c python.package_style=inplace -c fbcode.enable_gpu_sections=true lego/scripts:lego_cli -- run-locally --model_entity_id 959168967 --config_version 28 --publish_context OFFLINE_PUBLISH --lego_pipeline aiplatform.modelstore.model_generation.lego.lego_pipeline_builder.gmpp_lego_pipeline --gmpp_config '{"gmpp_pipeline_descriptor": "aiplatform.modelstore.model_generation.v1.ads_pipelines.aimp_pyper_pipeline.model_generation_pipeline", "worker_process_number":12, "worker_thread_per_process_number": 6, "use_work_assignment": true}' 2>&1 | tee /tmp/gmpp_lc.txt
Snapshot:
https://www.internalfb.com/manifold/explorer/ads_storage_fblearner/tree/user/facebook/fblearner/predictor/959168967/47
Differential Revision: D49110166
| 15 |
970 | 109,138 |
[dynamo] `torch.no_grad` doesn't preserve the name of the wrapped function (eager mode does)
|
triaged, module: dynamo
|
### π Describe the bug
```python
import torch
def cool_name(x):
return x.sin()
def fn(x):
return torch.no_grad(cool_name)
x = torch.ones([])
result = fn(x)
print(result.__name__)
result = torch.compile(fn, backend="eager", fullgraph=True)(x)
print(result.__name__)
```
Output:
```
cool_name
set_grad_enabled
```
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
971 | 109,137 |
[ONNX] Provide an option to not generate `report_dynamo_export.sarif`
|
module: onnx, triaged
|
### π The feature, motivation and pitch
When bulk converting or running tests, it would be nice to have an option to not create `report_dynamo_export.sarif`, because the files just get overwritten anyway.
cc @BowenBao
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
972 | 109,135 |
Decomp div.Tensor_mode
|
fb-exported, ciflow/inductor, module: export
|
Differential Revision: D49201183
| 6 |
973 | 109,134 |
FSDP should have tests for partial state_dict and optim state_dict loading
|
triaged, module: fsdp, module: distributed_checkpoint
|
### π The feature, motivation and pitch
See the context in https://github.com/pytorch/pytorch/pull/109116. It is common for users to load a pre-trained state_dict and train only the new parameters. FSDP should support use cases where the state_dict and optimizer state_dict are partial, and we need test cases covering them; a small illustration of the scenario follows.
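A minimal illustration of "partial" loading with a plain nn.Module (hedged: the FSDP-wrapped equivalents are exactly what the requested tests should cover):
```python
# Only a subset of parameters is loaded from a checkpoint; the rest are
# trained from scratch.
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
pretrained = {"0.weight": torch.zeros(4, 4), "0.bias": torch.zeros(4)}
missing, unexpected = model.load_state_dict(pretrained, strict=False)
print(missing)  # the layer-1 keys that still need to be trained
```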
### Alternatives
_No response_
### Additional context
_No response_
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @penguinwu
| 0 |
974 | 109,132 |
[inductor] Parameterize ir.Scan on combine_fn
|
open source, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110911
* __->__ #109132
* #106581
* #109601
This replaces `tl.cumsum` and `tl.cumprod` with calls to `tl.associative_scan`
where the combine function is generated from inductor IR.
So before we had:
```python
@triton.jit
def triton_(in_ptr0, out_ptr0, xnumel, rnumel, XBLOCK : tl.constexpr):
xnumel = 20
rnumel = 30
RBLOCK: tl.constexpr = 32
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
xmask = xindex < xnumel
rindex = tl.arange(0, RBLOCK)[None, :]
rmask = rindex < rnumel
r1 = rindex
x0 = xindex
tmp0 = tl.load(in_ptr0 + (r1 + (30*x0)), rmask & xmask, other=0).to(tl.float32)
tmp1 = tl.broadcast_to(tmp0, [XBLOCK, RBLOCK])
tmp2 = tl.where(rmask & xmask, tmp1, 0)
tmp3 = tl.cumsum(tmp2, 1)
tl.store(out_ptr0 + (r1 + (30*x0)), tmp3, rmask & xmask)
```
Now we have:
```python
@triton.jit
def _triton_helper_fn0(arg0, arg1):
tmp0 = tmp0 + tmp1
return tmp0
@triton.jit
def triton_(in_ptr0, out_ptr0, xnumel, rnumel, XBLOCK : tl.constexpr):
xnumel = 20
rnumel = 30
RBLOCK: tl.constexpr = 32
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
xmask = xindex < xnumel
rindex = tl.arange(0, RBLOCK)[None, :]
rmask = rindex < rnumel
r1 = rindex
x0 = xindex
tmp0 = tl.load(in_ptr0 + (r1 + (30*x0)), rmask & xmask, other=0).to(tl.float32)
tmp1 = tl.broadcast_to(tmp0, [XBLOCK, RBLOCK])
tmp2 = tl.where(rmask & xmask, tmp1, 0)
tmp3 = tl.associative_scan(tmp2, 1, _triton_helper_fn0)
tl.store(out_ptr0 + (r1 + (30*x0)), tmp3, rmask & xmask)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
975 | 109,131 |
Introduce 'backend' concept to torch.export.export API
|
module: onnx, feature, triaged, topic: new features, module: export
|
### π The feature, motivation and pitch
Currently, ONNX exporter is exposed through `torch.onnx.export`, which was the only βexportβ API on PyTorch repo until PyTorch 2.0 introduced Dynamo and its great backends.
With the introduction of `torch.export` on PyTorch 2.0, there are 2 `export` APIs, namely `torch.onnx.export` and `torch.export.export`, which might confuse users, especially the existing ONNX users.
We propose a way for `torch.export.export` to support multiple backends, similar in principle to the `torch.compile` API, allowing backends to produce, for example, either a `GraphModule` with the default `backend="dynamo"` or an `onnx.ModelProto` when `backend="onnx"` is specified, unifying both FX and ONNX export APIs into one.
Backend implementations remain separate, as they are today, and users could register extra backends through `register_backend` or list the available ones through `list_backends`.
Below is an example of how a model would be exported with the new API:
```python
import torch
def f(x, y):
return x[0] + y
inp = ([torch.ones(1, 3)], torch.ones(1, 3))
exported_program = torch.export.export(f, inp, backend="dynamo", options={}) # ``backend`` and ``options`` are optionals
output = exported_program(*inp)
```
The PR https://github.com/pytorch/pytorch/pull/109649 roughly implements the backend concept
The ONNX exporter could leverage dynamo under the hood and be invoked as below:
```python
import torch
def func(x):
y = x + 1
z = y.relu()
return (y, z)
export_options = torch.onnx.ExportOptions() # TODO: Maybe we can flatten ExportOption in favor of a dict as dynamo?
exported_onnx = torch.export.export(func, (torch.randn(1, 1, 2),), backend="onnx", options={"export_options": export_options},
)
```
The PR https://github.com/pytorch/pytorch/pull/109650 adds onnx as a backend using the new API.
### Additional context
This idea was briefly brainstormed with @msaroufim @kit1980 and @malfet who referred @suo as an important stakeholder.
| 1 |
976 | 109,130 |
DISABLED test_nondeterministic_alert_MaxUnpool1d_cuda_float64 (__main__.TestTorchDeviceTypeCUDA)
|
module: tests, triaged, module: flaky-tests, skipped, oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool1d_cuda_float64&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16719710151).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool1d_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
cc @mruberry @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
977 | 109,129 |
[2D] Add deprecation warning to enable_2d_with_fsdp
|
release notes: distributed (fsdp), release notes: distributed (dtensor)
|
Fixes #ISSUE_NUMBER
| 1 |
978 | 109,124 |
Decomp div.Tensor_mode and trunc
|
fb-exported, ciflow/inductor
|
Differential Revision: D49198219
| 3 |
979 | 109,113 |
The API "torch::jit::_load_for_mobile" is limited to create an object living on the stack.
|
oncall: jit
|
### π Describe the bug
To load a jit module using LibTorch-Lite under iOS, the standard way is the following method:
```
auto model = torch::jit::_load_for_mobile(modelPath, torch::kCPU);
```
However, this method is highly limited. It can only create a stack object.
By stack object, I mean the difference between:
```
void test() {
Foo a;
Foo *b = new Foo();
}
```
In the above example, "a" is a stack object and "b" is a heap object, which means I can hold on to "b" and repeatedly invoke methods on it, whereas "a" is automatically released when leaving the code block of the "test" function.
The same problem applies to the "_load_for_mobile" API. Since "_load_for_mobile" can only create a stack object, the developer can't hold on to the loaded jit module. This means that each time I want to run a prediction, I have to load the jit module on the fly, which adds overhead before every prediction.
Do we have any method to load a jit module into heap memory?
### Versions
LibTorch-Lite (1.13.0.1)
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
980 | 109,112 |
Unable to install the latest version of PyTorch using mamba.
|
oncall: binaries
|
### π Describe the bug
I am encountering an issue while using mamba to install the latest version of PyTorch. The problem is that after mamba completes the download of the PyTorch package, it automatically terminates the current terminal process, causing it to switch to another terminal.

### Versions
conda 23.3.1
cc @seemethere @malfet
| 2 |
981 | 109,108 |
Cannot construct `torch.sparse_coo_tensor` (but `scipy.sparse.coo_matrix` works fine): `TypeError: only integer tensors of a single element can be converted to an index`
|
module: sparse, triaged, module: scipy compatibility
|
### π Describe the bug
Also, note the strange warning when using `weights_only=True`.
[bug.pt.gz](https://github.com/pytorch/pytorch/files/12586855/bug.pt.gz)
```python
import gzip
import torch
import scipy
bug = torch.load(gzip.open('bug.pt.gz', 'rb'), weights_only=True)
# /home/vadimkantorov/.local/lib/python3.8/site-packages/torch/_utils.py:841: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
# return self.fget.__get__(instance, owner)()
scipy.sparse.coo_matrix((bug['values'], (bug['rowinds'], bug['colinds'])), shape=bug['shape'])
# ok
torch.sparse_coo_tensor((bug['rowinds'], bug['colinds']), bug['values'], size=bug['shape'])
# TypeError: only integer tensors of a single element can be converted to an index
```
The same error appears in https://github.com/pytorch/pytorch/issues/43949 in a completely different context. The error message is quite mysterious.
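A possible workaround sketch, continuing the snippet above (hedged assumption: the failure comes from passing the row/column indices as a tuple of tensors rather than a single (2, nnz) index tensor):
```python
# Hedged workaround: build the (2, nnz) int64 indices tensor explicitly.
indices = torch.stack([bug['rowinds'], bug['colinds']]).long()
coo = torch.sparse_coo_tensor(indices, bug['values'], size=bug['shape'])
```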
### Versions
2.1.0.dev20230802+cpu
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 9 |
982 | 109,103 |
DDP - "No backend type associated with device type cpu" with new Model Phi 1.5 despite everything loaded on GPUs
|
oncall: distributed
|
### π Describe the bug
When training a new Model [PHI 1.5](https://huggingface.co/microsoft/phi-1_5) with Transformers via accelerate/axolotl, I get the following error
`No backend type associated with device type cpu`.
(I am running on 2 RTX 4090s without P2P).
Training with only 1 GPU runs fine and training for Llama-based models also runs fine with the same setup and accelerate.
Debugging shows, that `DistributedDataParallel` has:
```
device: (type='cuda', index=0)
device_type: 'cuda'
```
Full stacktrace:
```python
Exception has occurred: RuntimeError
No backend type associated with device type cpu
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1980, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 2064, in _default_broadcast_coalesced
self._distributed_broadcast_coalesced(bufs, bucket_size, authoritative_rank)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 2043, in _sync_module_buffers
self._default_broadcast_coalesced(authoritative_rank=authoritative_rank)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 2039, in _sync_buffers
self._sync_module_buffers(authoritative_rank)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1408, in _pre_forward
self._sync_buffers()
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1513, in forward
inputs, kwargs = self._pre_forward(*inputs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2760, in compute_loss
outputs = model(**inputs)
File "/workspace/axolotl/src/axolotl/utils/trainer.py", line 294, in compute_loss
return super().compute_loss(model, inputs, return_outputs=return_outputs)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1874, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1574, in train
return inner_training_loop(
File "/workspace/axolotl/src/axolotl/train.py", line 120, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/workspace/axolotl/scripts/finetune.py", line 274, in do_cli
train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/workspace/axolotl/scripts/finetune.py", line 278, in <module>
fire.Fire(do_cli)
RuntimeError: No backend type associated with device type cpu
```
### Versions
Collecting environment information...
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
BogoMIPS: 8983.47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+b5021ba
[pip3] torch-tensorrt==1.5.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 2 |
983 | 109,100 |
FSDP do not support `ignored_parameters` when `auto_wrap_policy` is specified
|
oncall: distributed, triaged, module: fsdp
|
### π Describe the bug
I have a model which contains some params that need to be ignored (otherwise flat_param will raise an error); the construction code looks like:
```
not_trainable = []
for name, p in model.named_parameters():
if p.requires_grad == False:
not_trainable.append(p)
sharding_strategy=torch.distributed.fsdp.ShardingStrategy._HYBRID_SHARD_ZERO2
model = FSDP(model, sharding_strategy=sharding_strategy,
ignored_parameters = not_trainable,
auto_wrap_policy=size_based_auto_wrap_policy)
```
I came across an error: the params in `not_trainable` cause an error when executing `_init_flat_param`.
If `auto_wrap_policy` is not passed, the code works.
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.2.0
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 4 |
984 | 109,098 |
Can't initialize NVML
|
triaged
|
### π Describe the bug
>>> import torch
>>> torch.__version__
'2.0.1+cu117'
>>> torch.cuda.device_count()
/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
0
>>> torch.cuda.is_available()
False
### Versions
torch 2.0.1+cu117 (same snippet as above)
why is that?
| 1 |
985 | 109,094 |
Parameters of cuda module zero out when used in multiprocessing
|
module: docs, module: multiprocessing, module: cuda, triaged
|
### π Describe the bug
Inspecting the parameters of a module after it has been used in a process or pool reveals that the parameters become zero. I cannot find documentation saying this is intended behavior or a known antipattern.
Example:
```
import torch.nn as nn
import torch.multiprocessing as mp
def do_nothing(module):
return
if __name__ == "__main__":
mp.set_start_method('spawn')
module = nn.Linear(4, 4)
module.cuda()
module.share_memory() # as far as I can tell this is a no-op?
print(next(module.parameters()))
p = mp.Process(target=do_nothing, args=(module,))
p.start()
p.join()
print(next(module.parameters()))
```
Output:
```
Parameter containing:
tensor([[ 0.2924, 0.0857, -0.1374, 0.0929],
[-0.4918, -0.2784, -0.3998, -0.0934],
[ 0.1911, -0.3074, 0.0238, -0.1824],
[-0.0655, 0.1752, 0.0798, -0.4360]], device='cuda:0',
requires_grad=True)
Parameter containing:
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]], device='cuda:0', requires_grad=True)
```
A workaround is to keep the module on cpu, then transfer it onto cuda inside each process.
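A minimal sketch of that workaround, reusing the spawn setup from the repro above (assumption: the forward passes happen inside the worker):
```python
import torch.nn as nn
import torch.multiprocessing as mp

def worker(module):
    module.cuda()  # move to CUDA inside the child, not in the parent
    # ... run forward passes here ...

if __name__ == "__main__":
    mp.set_start_method('spawn')
    module = nn.Linear(4, 4)  # stays on CPU in the parent
    module.share_memory()
    p = mp.Process(target=worker, args=(module,))
    p.start()
    p.join()
```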
First found on torch 1.12.1. Persists in torch 2.0.1.
Thanks!
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:29:51) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 536.23
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3501
DeviceID=CPU0
Family=107
L2CacheSize=3072
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3501
Name=AMD Ryzen 5 5600 6-Core Processor
ProcessorType=3
Revision=8450
Versions of relevant libraries:
[pip3] numpy==1.23.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_win64_mkl conda-forge
[conda] cudatoolkit 11.6.0 hc0ea762_10 conda-forge
[conda] libblas 3.9.0 16_win64_mkl conda-forge
[conda] libcblas 3.9.0 16_win64_mkl conda-forge
[conda] liblapack 3.9.0 16_win64_mkl conda-forge
[conda] liblapacke 3.9.0 16_win64_mkl conda-forge
[conda] mkl 2022.1.0 h6a75c08_874 conda-forge
[conda] mkl-devel 2022.1.0 h57928b3_875 conda-forge
[conda] mkl-include 2022.1.0 h6a75c08_874 conda-forge
[conda] numpy 1.23.2 py310h8a5b91a_0 conda-forge
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8_0 pytorch
[conda] pytorch-cuda 11.7 h16d0643_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
cc @svekars @carljparker @VitalyFedyunin @ptrblck
| 2 |
986 | 109,090 |
[WIP] Use user directed names for variables where possible
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109090
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
987 | 109,082 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
988 | 109,079 |
[TEST][pytorch] Use cpuinfo to determine c10::ThreadPool thread number + internal patch
|
fb-exported, ciflow/binaries, ciflow/periodic
|
Summary: Testing https://github.com/pytorch/pytorch/pull/107339 combined with internal patches.
Differential Revision: D49109231
| 3 |
989 | 109,074 |
torch.compile/triton holding GIL during compilation and CompiledKernel call results in deadlocks during distributed training
|
high priority, triaged, oncall: pt2, module: inductor, upstream triton
|
### π Describe the bug
We are seeing our training processes get deadlocked as follows. There is an NCCL error causing the allreduce in the backward pass to get stuck. When this occurs, the GPU typically locks up until [ncclCommAbort](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/comms.html#ncclcommabort) is called, and even simple CUDA calls like cudaLaunchKernel, cudaEventQuery, etc. get stuck on the host side. As a result, the main thread is stuck as follows:
```
Thread 432919 (idle): "MainThread"
0x155555277197 (libc.so.6)
pthread_cond_wait (libc.so.6)
__gnu_cxx::__nothrow_wait_cv::wait (compatibility-condvar.cc:100)
torch::autograd::ReadyQueue::pop (libtorch_cpu.so)
torch::autograd::Engine::thread_main (libtorch_cpu.so)
torch::autograd::Engine::execute_with_graph_task (libtorch_cpu.so)
torch::autograd::python::PythonEngine::execute_with_graph_task (libtorch_python.so)
torch::autograd::Engine::execute (libtorch_cpu.so)
torch::autograd::python::PythonEngine::execute (libtorch_python.so)
THPEngine_run_backward (libtorch_python.so)
backward (torch/autograd/__init__.py:200)
backward (torch/_tensor.py:503)
```
Simultaneously, triton is stuck due to the above NCCL issue while trying to call a compiled kernel. The bad part here, though, is that triton is holding the GIL (see active+gil in the trace below) while stuck. As a result, the entire Python process freezes up and no other Python threads are able to execute:
```
Thread 493983 (active+gil): "Dummy-20"
sched_yield (libc.so.6)
0x15550de3cbdb (libcuda.so.535.54.03)
0x15550e10adb5 (libcuda.so.535.54.03)
0x15550e10b32a (libcuda.so.535.54.03)
0x15550df18454 (libcuda.so.535.54.03)
0x15550ddf7836 (libcuda.so.535.54.03)
0x15550ddf7f3d (libcuda.so.535.54.03)
0x15550ddfaa27 (libcuda.so.535.54.03)
0x15550dfe9b3e (libcuda.so.535.54.03)
launch (triton_.so)
launcher (<string>:6)
run (torch/_inductor/triton_ops/autotune.py:190)
call (coksmpcybzxz4bxfvl56kcd6k5czlqpovjyjwg4yry6do7p7hcim.py:871)
run (torch/_inductor/compile_fx.py:248)
_fn (torch/_dynamo/eval_frame.py:209)
call_func_with_args (torch/_functorch/aot_autograd.py:1249)
call_compiled_backward (torch/_functorch/aot_autograd.py:2341)
backward (torch/_functorch/aot_autograd.py:2365)
```
Another example: triton also gets stuck during compilation, due to a `torch.cuda.synchronize` call hanging while NCCL kernels are stuck. In this case too, triton is stuck while holding the GIL, which causes the entire Python process to lock up:
```
Thread 1957540 (active+gil): "Dummy-20"
synchronize (torch/cuda/__init__.py:688)
_precompile_config (torch/_inductor/triton_ops/autotune.py:94)
<listcomp> (torch/_inductor/triton_ops/autotune.py:70)
precompile (torch/_inductor/triton_ops/autotune.py:69)
_load_kernel (torch/_inductor/codecache.py:554)
result (torch/_inductor/codecache.py:574)
wait (torch/_inductor/codecache.py:715)
<module> (cpb2lazn3hvywa5maow5ngritvyt6pwlcq6ela3fl5rbblyj636x.py:1578)
load (torch/_inductor/codecache.py:528)
compile_to_module (torch/_inductor/graph.py:575)
time_wrapper (torch/_dynamo/utils.py:163)
compile_to_fn (torch/_inductor/graph.py:586)
compile_fx_inner (torch/_inductor/compile_fx.py:177)
inner (contextlib.py:75)
inner (torch/_inductor/debug.py:239)
debug_wrapper (torch/_dynamo/debug_utils.py:595)
bw_compiler (torch/_inductor/compile_fx.py:441)
time_wrapper (torch/_dynamo/utils.py:163)
_fn (torch/_dynamo/eval_frame.py:209)
_wrapped_bw_compiler (torch/_dynamo/backends/common.py:38)
call_compiled_backward (torch/_functorch/aot_autograd.py:2336)
backward (torch/_functorch/aot_autograd.py:2365)
apply (torch/autograd/function.py:274)
```
We have some custom logic in our application to recover from NCCL errors by calling `_abort` on the ProcessGroup here: https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/init.cpp#L2226 (we set `NCCL_ASYNC_ERROR_HANDLING=0` and manage errors ourselves). This logic is usually able to recover from the NCCL errors. However, now that triton is holding the GIL and is stuck, the entire Python process is frozen and none of our custom logic in Python can run to recover from the situation.
I'm wondering if it is possible to release the GIL before entering C/C++ land (similar to other PyTorch operations). This would resolve the issue and allow other Python threads in the application to recover from the situation.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
PyTorch 2.0.1
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @muchulee8 @aakhundov
| 11 |
990 | 109,070 |
DISABLED test_out_randn_cuda_float32 (__main__.TestCommonCUDA)
|
high priority, triaged, skipped, oncall: pt2
|
Platforms: inductor
See https://github.com/pytorch/pytorch/issues/109061
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_ops.py%3A%3ATestCommonCUDA%3A%3Atest_out_randn_cuda_float32)).
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
991 | 109,067 |
torch.argmax fails for device='mps:0'
|
triaged, module: mps
|
### π Describe the bug
torch.argmax gives wrong results for mps device.
```
>>> import torch
>>> torch.backends.mps.is_available()
True
>>> device='mps:0'
>>> y = torch.tensor([14, 13, 11], dtype=torch.int, device=device)
>>> y
tensor([14, 13, 11], device='mps:0', dtype=torch.int32)
>>> i = torch.argmax(y, dim=0)
>>> i
tensor(-9223372036854775808, device='mps:0')
>>>
```
Should give `i=0`
It works when `device='cpu'`:
```
>>> y = torch.tensor([14, 13, 11], dtype=torch.int, device='cpu')
>>> y
tensor([14, 13, 11], dtype=torch.int32)
>>> i = torch.argmax(y, dim=0)
>>> i
tensor(0)
```
### Versions
>: python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.1 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (main, Jul 15 2023, 12:21:29) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.5.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 3 |
992 | 109,065 |
Faster gc_count update for CUDACachingAllocator (and avoid nullptr dereference)
|
fb-exported
|
Summary:
Modify the way we update gc_count in CUDACachingAllocator to make it faster.
Originally D48481557, but reverted due to a nullptr dereference in some cases (D49003756).
This diff changes the code to use the correct constructor for the search key (avoiding the nullptr dereference). It also adds a nullptr check to the gc_count functions (returning 0 when the pointer is null).
Differential Revision: D49068760
| 6 |
993 | 109,062 |
CollectiveFunctionRewriteVariable for all_to_all_single
|
triaged, oncall: pt2, module: dynamo
|
### π Describe the bug
https://github.com/pytorch/pytorch/pull/106655 doesn't currently have the rewriting logic that will make it used even if you use the non-functional collective. Add this logic. To do this we may need to list-ify the input/output sizes first.
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @yf225
### Versions
main
| 0 |
994 | 109,061 |
DISABLED test_out__refs_randn_cuda_float32 (__main__.TestCommonCUDA)
|
triaged, skipped, oncall: pt2
|
Platforms: inductor
Started after a revert:
https://hud.pytorch.org/pytorch/pytorch/commit/8caaa4f4cdac6657f72c48607b6a732a975c5d20
https://github.com/pytorch/pytorch/actions/runs/6126532843/job/16631305053
```
2023-09-08T22:02:25.0900749Z ==================================== RERUNS ====================================
2023-09-08T22:02:25.0901050Z _______________ TestCommonCUDA.test_out__refs_randn_cuda_float32 _______________
2023-09-08T22:02:25.0901338Z Traceback (most recent call last):
2023-09-08T22:02:25.0901642Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 781, in test_out
2023-09-08T22:02:25.0901949Z samples = op.sample_inputs(device, dtype)
2023-09-08T22:02:25.0902276Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 873, in <resume in test_out>
2023-09-08T22:02:25.0902567Z _compare_out(_case_one_transform)
2023-09-08T22:02:25.0902870Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 841, in _compare_out
2023-09-08T22:02:25.0903169Z self.assertEqual(expected, out)
2023-09-08T22:02:25.0903623Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3309, in assertEqual
2023-09-08T22:02:25.0903967Z raise error_metas.pop()[0].to_error(
2023-09-08T22:02:25.0904275Z AssertionError: Tensor-likes are not close!
2023-09-08T22:02:25.0904444Z
2023-09-08T22:02:25.0904550Z Mismatched elements: 10 / 10 (100.0%)
2023-09-08T22:02:25.0904923Z Greatest absolute difference: 1.1030118465423584 at index (4,) (up to 1e-05 allowed)
2023-09-08T22:02:25.0905336Z Greatest relative difference: 21.35527992248535 at index (2,) (up to 1.3e-06 allowed)
2023-09-08T22:02:25.0905532Z
2023-09-08T22:02:25.0905684Z To execute this test, run the following from the base repo dir:
2023-09-08T22:02:25.0906112Z PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_ops.py -k test_out__refs_randn_cuda_float32
2023-09-08T22:02:25.0906360Z
2023-09-08T22:02:25.0906537Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2023-09-08T22:02:25.0906888Z _______________ TestCommonCUDA.test_out__refs_randn_cuda_float32 _______________
2023-09-08T22:02:25.0907163Z Traceback (most recent call last):
2023-09-08T22:02:25.0907505Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 781, in test_out
2023-09-08T22:02:25.0907808Z samples = op.sample_inputs(device, dtype)
2023-09-08T22:02:25.0908139Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 873, in <resume in test_out>
2023-09-08T22:02:25.0908428Z _compare_out(_case_one_transform)
2023-09-08T22:02:25.0908727Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 841, in _compare_out
2023-09-08T22:02:25.0909021Z self.assertEqual(expected, out)
2023-09-08T22:02:25.0909472Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3309, in assertEqual
2023-09-08T22:02:25.0909819Z raise error_metas.pop()[0].to_error(
2023-09-08T22:02:25.0910118Z AssertionError: Tensor-likes are not close!
2023-09-08T22:02:25.0910278Z
2023-09-08T22:02:25.0910381Z Mismatched elements: 10 / 10 (100.0%)
2023-09-08T22:02:25.0910753Z Greatest absolute difference: 1.9592864513397217 at index (4,) (up to 1e-05 allowed)
2023-09-08T22:02:25.0911178Z Greatest relative difference: 11.990405082702637 at index (6,) (up to 1.3e-06 allowed)
2023-09-08T22:02:25.0911398Z
2023-09-08T22:02:25.0911539Z To execute this test, run the following from the base repo dir:
2023-09-08T22:02:25.0911960Z PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_ops.py -k test_out__refs_randn_cuda_float32
2023-09-08T22:02:25.0912170Z
2023-09-08T22:02:25.0912338Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2023-09-08T22:02:25.0912657Z =================================== FAILURES ===================================
2023-09-08T22:02:25.0912960Z _______________ TestCommonCUDA.test_out__refs_randn_cuda_float32 _______________
2023-09-08T22:02:25.0913237Z Traceback (most recent call last):
2023-09-08T22:02:25.0913531Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 781, in test_out
2023-09-08T22:02:25.0913838Z samples = op.sample_inputs(device, dtype)
2023-09-08T22:02:25.0914255Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 873, in <resume in test_out>
2023-09-08T22:02:25.0914554Z _compare_out(_case_one_transform)
2023-09-08T22:02:25.0914858Z File "/var/lib/jenkins/workspace/test/test_ops.py", line 841, in _compare_out
2023-09-08T22:02:25.0915151Z self.assertEqual(expected, out)
2023-09-08T22:02:25.0915600Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3309, in assertEqual
2023-09-08T22:02:25.0915934Z raise error_metas.pop()[0].to_error(
2023-09-08T22:02:25.0916265Z AssertionError: Tensor-likes are not close!
2023-09-08T22:02:25.0916453Z
2023-09-08T22:02:25.0916563Z Mismatched elements: 10 / 10 (100.0%)
2023-09-08T22:02:25.0916930Z Greatest absolute difference: 3.546703338623047 at index (8,) (up to 1e-05 allowed)
2023-09-08T22:02:25.0917340Z Greatest relative difference: 6.6497802734375 at index (2,) (up to 1.3e-06 allowed)
2023-09-08T22:02:25.0917523Z
2023-09-08T22:02:25.0917672Z To execute this test, run the following from the base repo dir:
2023-09-08T22:02:25.0918103Z PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_ops.py -k test_out__refs_randn_cuda_float32
2023-09-08T22:02:25.0918314Z
2023-09-08T22:02:25.0918482Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2023-09-08T22:02:25.0918986Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_ops/test_ops-8ac249cce76236d9.xml -
2023-09-08T22:02:25.0919360Z =========================== short test summary info ============================
2023-09-08T22:02:25.0919816Z FAILED [0.0022s] test_ops.py::TestCommonCUDA::test_out__refs_randn_cuda_float32 - AssertionError: Tensor-likes are not close!
```
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_ops.py%3A%3ATestCommonCUDA%3A%3Atest_out__refs_randn_cuda_float32)).
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @ysiraichi
| 2 |
995 | 109,056 |
UserError: Can't call type() on generated custom object. Please use __class__ instead
|
good first issue, triaged, oncall: pt2, module: dynamo
|
### π Describe the bug
```
File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 1119, in CALL_FUNCTION self.call_function(fn, args, {})
File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 565, in call_function self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/builtin.py", line 618, in call_function result = handler(tx, *args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/builtin.py", line 1220, in call_type
raise UserError(
torch._dynamo.exc.UserError: Can't call type() on generated custom object. Please use __class__ instead
```
this error seems nonsensical to me, we should just support it
showed up in torchrec `dist_data.py`
```
return type(self._input).dist_init(
keys=self._keys,
tensors=self._output_tensors,
batch_size_per_rank=self._batch_size_per_rank,
recat=self._recat,
)
```
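A hedged, self-contained sketch of the `__class__` spelling the error message asks for (hypothetical class and method names, not the torchrec code):
```python
import torch

class Batch:
    @classmethod
    def make_twice(cls, x):
        return x * 2

def fn(batch, x):
    # batch.__class__ is accepted where type(batch) raises the UserError
    # above for dynamo-generated objects.
    return batch.__class__.make_twice(x)

print(torch.compile(fn, backend="eager")(Batch(), torch.ones(2)))
```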
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
996 | 109,053 |
AOTAutograd view-replay logic does not handle dtype views well
|
high priority, triaged, oncall: pt2, module: aotdispatch
|
Example repro from @bertmaher:
```
print((lambda x: x.view(torch.int32))(torch.ones(8)))
print(torch.compile(lambda x: x.view(torch.int32), backend="aot_eager")(torch.ones(8)))
# prints:
tensor([1065353216, 1065353216, 1065353216, 1065353216, 1065353216, 1065353216,
1065353216, 1065353216], dtype=torch.int32)
tensor([1., 1., 1., 1., 1., 1., 1., 1.])
```
The root cause is that:
(1) When we get a graph where an output directly aliases an input (in this case), AOTAutograd needs to manually regenerate the output view at runtime in order to appease the autograd engine
(2) AOTAutograd does this by making an attempt to re-run the same set of views through the autograd engine's view-replay tracking, but when that fails it falls back to running `as_strided()` (code: https://github.com/pytorch/pytorch/blob/main/torch/_functorch/aot_autograd.py#L669)
`as_strided()` isn't actually sufficient in this case: our inputs and outputs have different dtypes, but `as_strided()` does not perform cross-dtype views. The same is actually true for other metadata changes like e.g.`view_as_real`/`view_as_complex`, which AOTAutograd manually handles.
This is a case that would be fixed automatically if AOTAutograd carefully tracked all view-chains that it saw (which would also improve perf, as per this internal issue: https://fb.workplace.com/groups/147416452503510/posts/1211736186131257/).
That will take a decent amount of work, though. A simpler fix would be to manually check whether the dtypes of the input and output differ before running the `.as_strided()`, and do the bitcast view manually when they do.
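A hedged sketch of that simpler fix (illustrative helper, not the actual AOTAutograd patch):
```python
# Regenerate an output that aliases `base`, doing the dtype bitcast view
# first because as_strided() cannot change dtype.
import torch

def regenerate_aliased_output(base, target_meta):
    src = base
    if src.dtype != target_meta.dtype:
        src = src.view(target_meta.dtype)  # bitcast view
    return src.as_strided(
        target_meta.size(), target_meta.stride(), target_meta.storage_offset()
    )

x = torch.ones(8)
out = regenerate_aliased_output(x, x.view(torch.int32))
print(out)  # int32 bit pattern of 1.0, sharing storage with x
```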
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @anijain2305
| 1 |
997 | 109,052 |
Allow reductions to write into pinned memory
|
module: cuda, module: memory usage, triaged, needs research, module: reductions
|
### π The feature, motivation and pitch
Reduction operations that have a small output, like `tensor.max` and `tensor.min`, ought to allow the output to be in pinned host memory when the input is on the GPU.
This improves performance of `x.max().item()` by 5-7x.
### Alternatives
One alternative is to just live with it. The current way is to just let the output be on the GPU and then copy it from GPU to CPU. That uses a memcpy, even for a tiny tensor of 4 bytes, and it's wildly slow.
Another alternative is to write your own pybind function that is basically just like the pytorch one except without the error message.
Another option is to, instead of using `copy_` to bring the data from the GPU to the CPU, provide a replacement for `copy_` that notices when the data to copy is so tiny that it would be faster to launch a kernel and write directly into pinned memory than to do a memcpyAsync.
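For concreteness, a sketch of the status quo described in the first alternative, plus the manual pinned-memory staging one can do today (assumption: a tiny 0-dim reduction result on a CUDA machine):
```python
import torch

x = torch.randn(1 << 20, device="cuda")
out_gpu = x.max()                           # 0-dim result stays on the GPU
pinned = torch.empty((), dtype=x.dtype, pin_memory=True)
pinned.copy_(out_gpu, non_blocking=True)    # async D2H copy into pinned memory
torch.cuda.synchronize()                    # wait for the copy to land
value = pinned.item()
```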
### Additional context
```python
a = torch.randn(1,3, device="cuda")
output = torch.empty([1], dtype=a.dtype, pin_memory=True)
torch.max(a, out=output)
```
The error is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expected self.device() == out.device() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
Here is the profiler output:

We see that it's spending *most* of the time just copying 4 bytes from GPU to CPU. In my testing using cub directly, you don't have to wait for the copy, you just need the synchronization and it's super fast, like 5-7x.
You might even consider making pinned memory the default for reductions! It's really fast.
cc @ptrblck
| 7 |
998 | 109,050 |
Test triton conda build
|
topic: not user facing
|
Fixes #ISSUE_NUMBER
| 1 |
999 | 109,046 |
[FX] Fix an issue in constructing GraphModule
|
fb-exported, release notes: fx
|
Summary:
During the enablement of IG HSTU models, we noticed an issue around model split, where the same submodule is present in the graph modules of two split modules. (P812849675) We examined the graph of the problematic graph module and found that the graph is correct as expected (P815770164): the nodes representing the misplaced submodule are not present in the graph, but somehow, when constructing the graph module from the graph, those submodules are inserted into the module. This causes many issues when we transform the model downstream.
Further debugging shows there's an edge-case issue when constructing the graph module. We created a minimal repro of the issue in N4232266.
Specifically, this happens when we create the submodules while constructing the graph module. The construction traverses the nodes in the graph; if a node's op is `call_module` or `get_attr`, the corresponding submodule from the root module is copied to the graph module.
For example, if we have simple modules like this
```
class M(torch.nn.Module):
    def __init__(self, is_train: bool = False) -> None:
        super().__init__()
        self._is_train = is_train
        self.linear1 = torch.nn.Linear(2, 2)
        self.relu1 = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(2, 2)
        self.relu2 = torch.nn.ReLU()
        self.linear3 = torch.nn.Linear(2, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self._is_train:
            x = self.relu2(self.linear2(self.relu1(self.linear1(x))))
        return self.linear3(x)


class Model(torch.nn.Module):
    def __init__(self, is_train: bool = False) -> None:
        super().__init__()
        self._m = M(is_train)
        self.linear = torch.nn.Linear(2, 2)
        self.relu = torch.nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.linear(x))
        x = self._m(x)
        return x


m = Model()
```
FX tracing the model `m` would get the graph
```
opcode name target args kwargs
----------- ---------- ---------- ------------- --------
placeholder x x () {}
call_module linear linear (x,) {}
call_module relu relu (linear,) {}
call_module _m_linear3 _m.linear3 (relu,) {}
output output output (_m_linear3,) {}
```
and graph module constructed as
```
Model(
  (linear): Linear(in_features=2, out_features=2, bias=True)
  (relu): ReLU()
  (_m): Module(
    (linear3): Linear(in_features=2, out_features=2, bias=True)
  )
)
```
The `linear1`/`linear2` and `relu1`/`relu2` submodules of `M` are not in the graph, and are thus omitted from the graph module.
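(For reference, the graph table and module structure shown above can be reproduced with something along these lines — a minimal sketch assuming the classes defined earlier:)

```python
import torch.fx

m = Model()
gm = torch.fx.symbolic_trace(m)
gm.graph.print_tabular()  # prints the opcode/name/target/args/kwargs table (needs `tabulate`)
print(gm)                 # prints the nested module structure shown above
```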
Now suppose we have a use case where `Model`'s `forward` also calls into some other function, like:
```
@torch.fx.wrap
def some_func(module: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # do something...
    return x


class Model(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.linear(x))
        x = self._m(x)
        x = some_func(self._m, x)
        return x
```
The call to `some_func` adds a `get_attr` node to the graph, and the graph becomes:
```
opcode name target args kwargs
------------- ---------- -------------------------------------- ---------------- --------
placeholder x x () {}
call_module linear linear (x,) {}
call_module relu relu (linear,) {}
call_module _m_linear3 _m.linear3 (relu,) {}
get_attr _m _m () {}
call_function some_func <function some_func at 0x7f7acf35e280> (_m, _m_linear3) {}
output output output (some_func,) {}
```
So when constructing the graph module, the construction loops through all the `get_attr` and `call_module` nodes in the graph and inserts the submodules into the graph module:
- when processing node `_m.linear3`, `_copy_attr` first checks whether `_m` is present, inserts an empty `_m` module, then copies `linear3` under `_m`
- then it sees the `get_attr` node `_m` and copies the original `_m` module over to the graph module, including the submodules that are not in the graph
Thus we get the following incorrect graph module:
```
Model(
  (linear): Linear(in_features=2, out_features=2, bias=True)
  (relu): ReLU()
  (_m): M(
    (linear1): Linear(in_features=2, out_features=2, bias=True)
    (relu1): ReLU()
    (linear2): Linear(in_features=2, out_features=2, bias=True)
    (relu2): ReLU()
    (linear3): Linear(in_features=2, out_features=2, bias=True)
  )
)
```
This is just a demonstration of the issue; in practice, we ended up with modules that are not in the merge graph being copied into the merge net during model split, causing various downstream issues.
To fix this, we need an extra check when copying submodules: if the `field` is already present in `to_module` and the node's op is not `call_module`, then we should skip copying that field.
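A rough, self-contained sketch of that guard (illustrative only — `copy_attr_guarded` is a hypothetical stand-in for FX's internal `_copy_attr`, and the actual patch may differ in details):

```python
import torch

def copy_attr_guarded(
    from_module: torch.nn.Module,
    to_module: torch.nn.Module,
    target: str,
    node_op: str,
) -> None:
    *prefix, field = target.split(".")
    for atom in prefix:
        # Materialize empty placeholder modules along the path, as FX does.
        if not hasattr(to_module, atom):
            setattr(to_module, atom, torch.nn.Module())
        from_module = getattr(from_module, atom)
        to_module = getattr(to_module, atom)
    if hasattr(to_module, field) and node_op != "call_module":
        # An earlier call_module node already created this attribute (e.g. the
        # pruned `_m` placeholder); don't overwrite it with the full original.
        return
    setattr(to_module, field, getattr(from_module, field))
```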
Test Plan:
With the diff, constructing the graph module from the example gives:
```
opcode name target args kwargs
------------- ---------- -------------------------------------- ---------------- --------
placeholder x x () {}
call_module linear linear (x,) {}
call_module relu relu (linear,) {}
call_module _m_linear3 _m.linear3 (relu,) {}
get_attr _m _m () {}
call_function some_func <function some_func at 0x7f0f459f0ca0> (_m, _m_linear3) {}
output output output (some_func,) {}
# graph module
Model(
  (linear): Linear(in_features=2, out_features=2, bias=True)
  (relu): ReLU()
  (_m): Module(
    (linear3): Linear(in_features=2, out_features=2, bias=True)
  )
)
```
Noticed that the correct graph module is constructed.
Tried out on the IG HSTU model; the merge net's graph module is constructed correctly (P818433831), which unblocked the enablement.
Differential Revision: D48983396
| 8 |
1,000 | 109,044 |
Disable extreme test (fix OOMs maybe)
|
with-ssh, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109044
* #108846
| 1 |