Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
301 | 111,210 |
RuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Using Google Colab)
|
triaged
|
### 🐛 Describe the bug
I'm running Stable Diffusion v1.5 to pretrain on my own dataset.
I'm using Google Colab's free GPUs, but when running the command below it throws this error.
Using torch version '2.1.0'
```
!accelerate launch diffusers/examples/text_to_image/train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=100 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--max_train_steps=100 \
--mixed_precision='fp16' \
--learning_rate=1e-5 \
--max_grad_norm=1 \
--num_train_epochs=100 \
--checkpointing_steps=100000 \
--lr_scheduler="constant" \
--lr_warmup_step=0 \
--output_dir=$OUTPUT_DIR
""
Below is the GPU/CUDA version:
CUDA Version: 12.0
Below is the entire error:
2023-10-13 16:26:38.246709: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
10/13/2023 16:26:44 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: fp16
{'prediction_type', 'variance_type', 'sample_max_value', 'timestep_spacing', 'clip_sample_range', 'thresholding', 'dynamic_thresholding_ratio'} was not found in config. Values will be initialized to default values.
{'force_upcast', 'scaling_factor'} was not found in config. Values will be initialized to default values.
{'resnet_skip_time_act', 'time_cond_proj_dim', 'encoder_hid_dim', 'attention_type', 'resnet_out_scale_factor', 'upcast_attention', 'addition_embed_type_num_heads', 'num_class_embeds', 'timestep_post_act', 'mid_block_only_cross_attention', 'cross_attention_norm', 'dropout', 'only_cross_attention', 'resnet_time_scale_shift', 'conv_in_kernel', 'transformer_layers_per_block', 'num_attention_heads', 'conv_out_kernel', 'addition_time_embed_dim', 'dual_cross_attention', 'time_embedding_type', 'class_embed_type', 'use_linear_projection', 'time_embedding_act_fn', 'time_embedding_dim', 'projection_class_embeddings_input_dim', 'encoder_hid_dim_type', 'class_embeddings_concat', 'mid_block_type', 'addition_embed_type'} was not found in config. Values will be initialized to default values.
{'resnet_skip_time_act', 'time_cond_proj_dim', 'encoder_hid_dim', 'attention_type', 'resnet_out_scale_factor', 'upcast_attention', 'addition_embed_type_num_heads', 'num_class_embeds', 'timestep_post_act', 'mid_block_only_cross_attention', 'cross_attention_norm', 'dropout', 'only_cross_attention', 'resnet_time_scale_shift', 'conv_in_kernel', 'transformer_layers_per_block', 'num_attention_heads', 'conv_out_kernel', 'addition_time_embed_dim', 'dual_cross_attention', 'time_embedding_type', 'class_embed_type', 'use_linear_projection', 'time_embedding_act_fn', 'time_embedding_dim', 'projection_class_embeddings_input_dim', 'encoder_hid_dim_type', 'class_embeddings_concat', 'mid_block_type', 'addition_embed_type'} was not found in config. Values will be initialized to default values.
10/13/2023 16:27:24 - INFO - __main__ - ***** Running training *****
10/13/2023 16:27:24 - INFO - __main__ - Num examples = 65418
10/13/2023 16:27:24 - INFO - __main__ - Num Epochs = 1
10/13/2023 16:27:24 - INFO - __main__ - Instantaneous batch size per device = 2
10/13/2023 16:27:24 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
10/13/2023 16:27:24 - INFO - __main__ - Gradient Accumulation steps = 4
10/13/2023 16:27:24 - INFO - __main__ - Total optimization steps = 100
Steps: 0% 0/100 [00:00<?, ?it/s]Traceback (most recent call last):
File "/content/drive/MyDrive/StableDiffusion/diffusers/examples/text_to_image/train_text_to_image.py", line 1066, in <module>
main()
File "/content/drive/MyDrive/StableDiffusion/diffusers/examples/text_to_image/train_text_to_image.py", line 948, in main
accelerator.backward(loss)
File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1983, in backward
self.scaler.scale(loss).backward(**kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Expected is_sm80 || is_sm90 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
Steps: 0% 0/100 [00:10<?, ?it/s]
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 986, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 628, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command
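For illustration, a minimal workaround sketch, assuming the failing kernel is the fused scaled_dot_product_attention backward (which is restricted to sm80/sm90 GPUs, while Colab's free T4 is sm75). Forcing the math backend near the top of the training script avoids those kernels at the cost of speed and memory; this is a hedged sketch, not a confirmed fix:
```python
import torch

# Assumption: the error comes from the fused SDPA (flash / memory-efficient)
# backward; the math backend has no sm80/sm90 requirement but is slower.
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)

# Any later scaled_dot_product_attention call (and its backward) now uses the
# plain math implementation.
q = k = v = torch.randn(1, 4, 16, 64, device="cuda", dtype=torch.float16, requires_grad=True)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
out.sum().backward()
```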
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] Could not collect
| 5 |
302 | 111,209 |
[dynamo] annotate `allow_in_graph` with soft constraints
|
module: optimizer, triaged, oncall: pt2, module: dynamo
|
### 🚀 The feature, motivation and pitch
Originally suggested: https://github.com/pytorch/pytorch/pull/110709/files#r1358391537
```python
@allow_in_graph(
constraints={
"mutates": "only self.state, and where Tensors are concerned, only add new (not remove/mutate)",
"side-effects": "no side-effects like printing or modifying global variables",
"return values": "must be derived from self.params (guarded) tensor metadata",
}
)
def _init_group(self, ..):
..
```
The constraints cannot be enforced, of course - dynamo cannot inspect the interior of the annotated function. But we are explicitly asking the user to guarantee that these properties hold.
It is still quite unsafe, but at least we know why it is unsafe, and the user can understand why something might break if they modify this `@allow_in_graph` function in a way that violates the constraints.
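For reference, a minimal sketch of how `allow_in_graph` is applied today, without the proposed `constraints` argument (the helper function below is hypothetical):
```python
import torch
import torch._dynamo as dynamo

@dynamo.allow_in_graph
def helper(x):
    # Dynamo does not trace into this body; the call is placed into the graph
    # as-is, so the properties above must hold by convention, not by checking.
    return x.sin() + 1

@torch.compile
def f(x):
    return helper(x) * 2

print(f(torch.randn(3)))
```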
---
This is stronger than the implicit contract suggested in: https://github.com/pytorch/pytorch/blob/35750bf9d134726b47451d78b731017ad27fbc8b/docs/source/torch.compiler_faq.rst#L497
> By using allow_in_graph to annotate a function, you must make sure your code meets the following requirements:
>
> - All outputs in your function only depend on the inputs and do not depend on any captured Tensors.
> - Your function is functional. That is, it does not mutate any state. This may be relaxed; we actually support functions that appear to be functional from the outside: they may have in-place PyTorch operations, but may not mutate global state or inputs to the function.
> - Your function does not raise data-dependent errors.
```diff
- All outputs in your function only depend on the inputs and do not depend on any captured Tensors.
+ All outputs in your function only depend on the inputs or Tensor metadata and do not depend on any captured Tensor values.
```
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 0 |
303 | 111,206 |
`torch.utils.checkpoint` drops custom Tensor attributes
|
module: checkpoint, triaged
|
### 🐛 Describe the bug
Hi,
I noticed that if I have a torch.Tensor with a custom attribute, say `my_tensor.foo = "huhu"`, PyTorch raises an error when this attribute is accessed in the backward pass under `torch.utils.checkpoint`.
There are no issues without torch.utils.checkpoint.
Reproduction:
```python
from typing import List, Optional, Tuple, Union
import torch
import torch.utils.checkpoint
from torch import nn
class GPTBigCodeAttention(nn.Module):
def __init__(self, config, layer_idx=None):
super().__init__()
self.c_attn = nn.Linear(config.hidden_size, config.hidden_size + 2 * 15)
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
) -> torch.Tensor:
print("attention_mask.huhu", attention_mask.huhu)
print("attention_mask._padding_mask", attention_mask._padding_mask)
query = self.c_attn(hidden_states)
return query
class GPTBigCodeModel(nn.Module):
def __init__(self, config):
super().__init__()
self.wte = nn.Embedding(config.vocab_size, config.hidden_size)
self.h = nn.ModuleList([GPTBigCodeAttention(config, layer_idx=i) for i in range(config.num_hidden_layers)])
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
) -> Tuple:
attention_mask = torch.Tensor([4, 5, 6])
attention_mask._padding_mask = None
attention_mask.huhu = "is this working?"
hidden_states = self.wte(input_ids)
def create_custom_forward(module):
def custom_forward(*inputs):
# None for past_key_value
return module(*inputs)
return custom_forward
outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(self.h[0]),
hidden_states,
attention_mask,
)
#outputs = self.h[0](hidden_states, attention_mask)
return outputs
from transformers import AutoConfig, AutoTokenizer
cfg = AutoConfig.from_pretrained("hf-internal-testing/tiny-random-GPTBigCodeForCausalLM")
model = GPTBigCodeModel(cfg)
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-GPTBigCodeForCausalLM")
torch_device = "cuda"
inp = tokenizer("this is me", return_tensors="pt").to(torch_device)
model.to(torch_device)
model = model.train()
result = model(inp["input_ids"])
loss = result[0].sum()
print("call backward ------")
loss.backward()
```
raising:
```
attention_mask.huhu is this working?
attention_mask._padding_mask None
call backward ------
Traceback (most recent call last):
File "/fsx/felix/transformers/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py", line 77, in <module>
loss.backward()
File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/autograd/function.py", line 288, in apply
return user_fn(self, *args)
File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 271, in backward
outputs = ctx.run_function(*detached_inputs)
File "/fsx/felix/transformers/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py", line 44, in custom_forward
return module(*inputs)
File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/felix/transformers/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py", line 17, in forward
print("attention_mask.huhu", attention_mask.huhu)
AttributeError: 'Tensor' object has no attribute 'huhu'
```
Note: subclassing `torch.Tensor` does not help. Tried with:
```python
class QTensor(torch.Tensor):
@staticmethod
def __new__(cls, x, _padding_mask, *args, **kwargs):
return super().__new__(cls, x, *args, **kwargs)
def __init__(self, data, _padding_mask=False):
self._padding_mask = _padding_mask
```
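A stripped-down sketch that reproduces the same behavior without transformers, assuming the cause is that the reentrant checkpoint detaches and re-creates its saved inputs before the recomputed forward, so ad-hoc Python attributes on a Tensor are lost:
```python
import torch
import torch.utils.checkpoint

def fn(x, mask):
    # True on the recorded forward, False on the recomputed forward in backward().
    print("has foo:", hasattr(mask, "foo"))
    return (x * 2).sum()

x = torch.randn(4, requires_grad=True)
mask = torch.ones(4)
mask.foo = "huhu"  # custom Python attribute on a plain Tensor

out = torch.utils.checkpoint.checkpoint(fn, x, mask, use_reentrant=True)
out.backward()  # the recompute sees a freshly detached Tensor without .foo
```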
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.16 (main, May 15 2023, 23:46:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.006
BogoMIPS: 6000.01
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.4.0
[pip3] numpy==1.24.3
[pip3] onnx==1.13.1
[pip3] onnxruntime-gpu==1.15.1
[pip3] onnxruntime-training==1.15.1
[pip3] torch==2.1.0+cu118
[pip3] torchaudio==2.1.0+cu118
[pip3] torchvision==0.16.0+cu118
[pip3] triton==2.1.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.1.0+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0+cu118 pypi_0 pypi
[conda] torchvision 0.16.0+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
| 0 |
304 | 111,196 |
[WIP] Support tensors as Dict keys
|
triaged, open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #108420
* #110524
* __->__ #111196
* #110523
* #110522
This prepares the PR where we implement sets in terms of dicts.
While implementing this I had to make hashable keys consistent throughout. I did that with an auxiliary class (similar to the one that used to live inside SetVariable). This class is opaque on purpose, so that it fails hard if it inadvertently leaks back into user code.
We also found and fixed a number of latent bugs, such as dicts with built-in-typed keys falling through to UserDefinedObjectVariable, even though we had a test that claimed this worked (the test was not checking whether we were actually tracing anything).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
305 | 111,312 |
Trace dynamic batch size with make_fx
|
actionable, module: fx, oncall: pt2, oncall: fx, module: functorch, module: dynamic shapes
|
Hello,
is there a way to keep the batch dimension dynamic when tracing with `make_fx`, as described in the README.md?
Example
```
import torch
from functorch import make_fx
from torch.func import vmap, jacrev
model = torch.nn.Sequential(
torch.nn.Linear(2, 512),
torch.nn.Tanh(),
torch.nn.Linear(512, 2),
)
trace_inp = torch.randn(1, 2)
traced_model = make_fx(vmap(jacrev(model)), tracing_mode='real')(trace_inp)
# This will work
print(traced_model(torch.randn(1, 2)))
# This will fail with: RuntimeError: shape '[1, 2]' is invalid for input of size 4
print(traced_model(torch.randn(2, 2)))
```
Switching to `tracing_mode='symbolic'` will result in `RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides`
Appreciate the help!
Best
Tim
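As a hedged aside: when the parameters are passed in explicitly as inputs (no module attributes, no `vmap`/`jacrev`), `tracing_mode='symbolic'` does keep the batch dimension symbolic, which suggests the failure above is specific to the functorch transforms. A minimal sketch:
```python
import torch
from functorch import make_fx

def f(x, w1, b1, w2, b2):
    h = torch.tanh(torch.nn.functional.linear(x, w1, b1))
    return torch.nn.functional.linear(h, w2, b2)

w1, b1 = torch.randn(8, 2), torch.randn(8)
w2, b2 = torch.randn(2, 8), torch.randn(2)

# Trace with a batch of 3 (not 1) so the batch size is not specialized away.
traced = make_fx(f, tracing_mode="symbolic")(torch.randn(3, 2), w1, b1, w2, b2)
print(traced(torch.randn(5, 2), w1, b1, w2, b2).shape)  # torch.Size([5, 2])
```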
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 3 |
306 | 111,195 |
Fix flake8-bugbear B019
|
module: cpu, triaged, open source, release notes: fx, topic: not user facing, ciflow/mps, module: inductor, ciflow/inductor, module: export
|
First time contributor helping out with issue #106571
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 4 |
307 | 111,194 |
Guards elimination for unused variables
|
triaged, oncall: pt2
|
### 🚀 The feature, motivation and pitch
For example, `def f(m: Module, a: Tensor): return m.para + a`.
When this function is compiled by dynamo with config.guard_nn_modules set to true, three guards are generated in total: one each for m, m.para, and a.
In fact, m itself is not used directly, yet it still incurs an NN_MODULE guard. If this function is called with different modules whose para fields have the same shape, the guard on m could perhaps be eliminated, giving fewer checks and a better cache-hit chance.
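As a side note, the guards that dynamo installs for a function like this can be inspected by enabling the guards log, which makes it easy to see which of them a liveness pass could drop. A hedged sketch (run with `TORCH_LOGS="guards"`):
```python
import torch
import torch._dynamo

torch._dynamo.config.guard_nn_modules = True  # as in the scenario above

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.para = torch.nn.Parameter(torch.randn(3))

@torch.compile
def f(m, a):
    return m.para + a

# With TORCH_LOGS="guards", the installed guards are printed, including the
# NN_MODULE guard on m that this proposal would like to eliminate.
f(M(), torch.randn(3))
```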
I am considering two implementation approaches:
1. Modify the bytecode handlers in InstructionTranslatorBase and the call_function/call_method implementations in the various VariableTracker subclasses to perform a liveness analysis for variable guards and eliminate the dead ones.
2. Perform the analysis directly on the compiled FX graphs, collect the live variable guards, and use that collection as the basis for generating GuardedCode.
Any suggestions? Thanks!
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
308 | 111,190 |
[inductor][dynamic] fused_attention pattern could not be matched due to sym_size
|
triaged, oncall: transformer/mha, module: dynamic shapes, module: inductor, module: multi-headed-attention
|
## Issue description
For fused_attention-related models like `ElectraForQuestionAnswering`, the fused_attention pattern cannot be matched in dynamic-shape mode because of `sym_size`. Since no SDPA op is generated, dynamic mode regresses compared with static mode.
In the model graph, the symbolic size is one of the model inputs, i.e. a placeholder: `def forward(self, sym: Sym(s0), ...)`.
In the traced graph of the fused_attention pattern, the symbolic size is instead derived from an input tensor's shape via the `sym_size` op: `sym = torch.ops.aten.sym_size(arg0_1, 0)`.
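For a self-contained check outside the benchmark harness, a hedged sketch that compiles the same attention shape with `dynamic=True`; inspecting the generated code (for example with `TORCH_LOGS="output_code"`) shows whether the SDPA rewrite fired (shapes and names below are illustrative only):
```python
import torch

def attn(query, key, value, attn_mask, inv_scale_factor, dropout_p):
    q = query.permute(0, 2, 1, 3)
    k = key.permute(0, 2, 1, 3)
    v = value.permute(0, 2, 1, 3)
    scores = torch.matmul(q, k.transpose(-2, -1)).div(inv_scale_factor) + attn_mask
    return torch.nn.functional.dropout(scores.softmax(dim=-1), p=dropout_p).matmul(v)

compiled = torch.compile(attn, dynamic=True)
b = 2
query = torch.randn(b, 512, 4, 64)
key = torch.randn(b, 512, 4, 64)
value = torch.randn(b, 512, 4, 64)
mask = torch.zeros(b, 1, 512, 512)
print(compiled(query, key, value, mask, 8.0, 0.0).shape)
```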
## Code example
Part of the `ElectraForQuestionAnswering` graph:
```
def forward(self, arg203_1, arg204_1, arg205_1, arg206_1):
......
permute_3 = torch.ops.aten.permute.default(view_6, [0, 2, 1, 3]); view_6 = None
permute_5 = torch.ops.aten.permute.default(view_9, [0, 2, 1, 3]); view_9 = None
permute_6 = torch.ops.aten.permute.default(view_10, [0, 2, 1, 3]); view_10 = None
permute_7 = torch.ops.aten.permute.default(permute_3, [0, 1, 3, 2]); permute_3 = None
expand_1 = torch.ops.aten.expand.default(permute_6, [arg203_1, 4, 512, 64]); permute_6 = None
clone_1 = torch.ops.aten.clone.default(expand_1, memory_format = torch.contiguous_format); expand_1 = None
mul_7 = arg203_1 * 4
view_11 = torch.ops.aten.reshape.default(clone_1, [mul_7, 512, 64]); clone_1 = None
expand_2 = torch.ops.aten.expand.default(permute_7, [arg203_1, 4, 64, 512]); permute_7 = None
clone_2 = torch.ops.aten.clone.default(expand_2, memory_format = torch.contiguous_format); expand_2 = None
view_12 = torch.ops.aten.reshape.default(clone_2, [mul_7, 64, 512]); clone_2 = None
bmm = torch.ops.aten.bmm.default(view_11, view_12); view_11 = view_12 = None
view_13 = torch.ops.aten.reshape.default(bmm, [arg203_1, 4, 512, 512]); bmm = None
div = torch.ops.aten.div.Tensor(view_13, 8.0); view_13 = None
add_4 = torch.ops.aten.add.Tensor(div, mul); div = None
amax = torch.ops.aten.amax.default(add_4, [-1], True)
sub_2 = torch.ops.aten.sub.Tensor(add_4, amax); add_4 = amax = None
exp = torch.ops.aten.exp.default(sub_2); sub_2 = None
sum_1 = torch.ops.aten.sum.dim_IntList(exp, [-1], True)
div_1 = torch.ops.aten.div.Tensor(exp, sum_1); exp = sum_1 = None
clone_3 = torch.ops.aten.clone.default(div_1); div_1 = None
expand_3 = torch.ops.aten.expand.default(clone_3, [arg203_1, 4, 512, 512]); clone_3 = None
view_14 = torch.ops.aten.reshape.default(expand_3, [mul_7, 512, 512]); expand_3 = None
expand_4 = torch.ops.aten.expand.default(permute_5, [arg203_1, 4, 512, 64]); permute_5 = None
clone_4 = torch.ops.aten.clone.default(expand_4, memory_format = torch.contiguous_format); expand_4 = None
view_15 = torch.ops.aten.reshape.default(clone_4, [mul_7, 512, 64]); clone_4 = None
bmm_1 = torch.ops.aten.bmm.default(view_14, view_15); view_14 = view_15 = None
view_16 = torch.ops.aten.reshape.default(bmm_1, [arg203_1, 4, 512, 64]); bmm_1 = None
......
```
Fused_attention pattern:
```
def _sfdp_pattern(query, key, value, attn_mask, inv_scale_factor, dropout_p):
q = query.permute(0, 2, 1, 3)
k = key.permute(0, 2, 1, 3)
v = value.permute(0, 2, 1, 3)
return torch.nn.functional.dropout(
(torch.matmul(q, k.transpose(-2, -1)).div(inv_scale_factor) + attn_mask).softmax(dim=-1),
p=dropout_p,
).matmul(v)
```
Graph of pattern:
```
def forward(self, arg0_1, arg1_1, arg2_1, arg3_1, arg4_1):
permute = torch.ops.aten.permute.default(arg0_1, [0, 2, 1, 3])
permute_1 = torch.ops.aten.permute.default(arg1_1, [0, 2, 1, 3]); arg1_1 = None
permute_2 = torch.ops.aten.permute.default(arg2_1, [0, 2, 1, 3]); arg2_1 = None
permute_3 = torch.ops.aten.permute.default(permute_1, [0, 1, 3, 2]); permute_1 = None
sym_size = torch.ops.aten.sym_size(arg0_1, 0); arg0_1 = None
expand = torch.ops.aten.expand.default(permute, [sym_size, 4, 512, 64]); permute = None
clone = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
mul = sym_size * 4
view = torch.ops.aten.view.default(clone, [mul, 512, 64]); clone = None
expand_1 = torch.ops.aten.expand.default(permute_3, [sym_size, 4, 64, 512]); permute_3 = None
clone_1 = torch.ops.aten.clone.default(expand_1, memory_format = torch.contiguous_format); expand_1 = None
view_1 = torch.ops.aten.view.default(clone_1, [mul, 64, 512]); clone_1 = mul = None
bmm = torch.ops.aten.bmm.default(view, view_1); view = view_1 = None
view_2 = torch.ops.aten.view.default(bmm, [sym_size, 4, 512, 512]); bmm = None
div = torch.ops.aten.div.Tensor(view_2, 8.0); view_2 = None
add = torch.ops.aten.add.Tensor(div, arg3_1); div = arg3_1 = None
amax = torch.ops.aten.amax.default(add, [-1], True)
sub = torch.ops.aten.sub.Tensor(add, amax); add = amax = None
exp = torch.ops.aten.exp.default(sub); sub = None
sum_1 = torch.ops.aten.sum.dim_IntList(exp, [-1], True)
div_1 = torch.ops.aten.div.Tensor(exp, sum_1); exp = sum_1 = None
clone_2 = torch.ops.aten.clone.default(div_1); div_1 = None
expand_2 = torch.ops.aten.expand.default(clone_2, [sym_size, 4, 512, 512]); clone_2 = None
mul_1 = sym_size * 4
view_3 = torch.ops.aten.view.default(expand_2, [mul_1, 512, 512]); expand_2 = None
expand_3 = torch.ops.aten.expand.default(permute_2, [sym_size, 4, 512, 64]); permute_2 = None
clone_3 = torch.ops.aten.clone.default(expand_3, memory_format = torch.contiguous_format); expand_3 = None
view_4 = torch.ops.aten.view.default(clone_3, [mul_1, 512, 64]); clone_3 = mul_1 = None
bmm_1 = torch.ops.aten.bmm.default(view_3, view_4); view_3 = view_4 = None
view_5 = torch.ops.aten.view.default(bmm_1, [sym_size, 4, 512, 64]); bmm_1 = sym_size = None
return view_5
```
## Reproduce
You can reproduce by running:
### dynamic
bash [inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh) multiple inference performance huggingface ElectraForQuestionAnswering float32 first dynamic default 0
profiling:
```
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::addmm 26.89% 565.370ms 34.82% 731.998ms 5.903ms 124
aten::bmm 19.34% 406.637ms 19.34% 406.640ms 8.472ms 48
Torch-Compiled Region 18.12% 380.959ms 45.22% 950.565ms 950.565ms 1
aten::copy_ 11.15% 234.356ms 11.15% 234.356ms 1.233ms 190
aten::add 7.47% 156.958ms 7.47% 156.958ms 4.130ms 38
aten::div 5.89% 123.763ms 5.90% 124.093ms 9.546ms 13
aten::_softmax 5.60% 117.785ms 5.60% 117.785ms 9.815ms 12
aten::gelu 2.86% 60.104ms 2.86% 60.104ms 5.009ms 12
aten::native_layer_norm 1.35% 28.307ms 1.37% 28.730ms 1.149ms 25
expected 0.73% 15.313ms 54.74% 1.151s 1.151s 1
aten::view 0.07% 1.384ms 0.07% 1.384ms 5.262us 263
aten::linear 0.07% 1.368ms 17.90% 376.198ms 5.084ms 74
aten::index_select 0.04% 944.000us 0.05% 963.000us 321.000us 3
aten::expand 0.04% 915.000us 0.06% 1.191ms 6.884us 173
aten::matmul 0.04% 891.000us 12.21% 256.762ms 10.698ms 24
aten::empty 0.04% 881.000us 0.04% 881.000us 6.777us 130
aten::reshape 0.03% 657.000us 2.79% 58.715ms 469.720us 125
aten::as_strided 0.03% 598.000us 0.03% 598.000us 1.851us 323
inductor::_reinterpret_tensor 0.03% 563.000us 0.03% 563.000us 2.448us 230
actual 0.03% 541.000us 45.26% 951.441ms 951.441ms 1
aten::t 0.02% 453.000us 0.05% 989.000us 13.365us 74
aten::permute 0.02% 410.000us 0.02% 522.000us 10.875us 48
aten::transpose 0.02% 409.000us 0.03% 605.000us 7.035us 86
aten::clone 0.02% 395.000us 3.31% 69.493ms 1.363ms 51
TorchDynamo Cache Lookup 0.02% 335.000us 0.02% 335.000us 335.000us 1
aten::_unsafe_view 0.02% 332.000us 0.02% 332.000us 5.443us 61
aten::layer_norm 0.01% 275.000us 1.38% 29.005ms 1.160ms 25
aten::empty_like 0.01% 202.000us 0.04% 748.000us 14.667us 51
aten::empty_strided 0.01% 201.000us 0.01% 201.000us 5.432us 37
aten::add_ 0.01% 136.000us 0.01% 136.000us 136.000us 1
aten::_to_copy 0.01% 120.000us 0.01% 301.000us 20.067us 15
aten::softmax 0.01% 120.000us 5.61% 117.905ms 9.825ms 12
aten::to 0.00% 49.000us 0.02% 350.000us 17.500us 20
aten::dropout 0.00% 44.000us 0.00% 44.000us 1.189us 37
aten::_log_softmax 0.00% 36.000us 0.00% 36.000us 18.000us 2
aten::embedding 0.00% 35.000us 0.05% 1.062ms 354.000us 3
aten::contiguous 0.00% 34.000us 0.57% 12.043ms 860.214us 14
aten::sub 0.00% 33.000us 0.00% 49.000us 49.000us 1
aten::slice 0.00% 26.000us 0.00% 31.000us 5.167us 6
aten::fill_ 0.00% 23.000us 0.00% 23.000us 23.000us 1
aten::mul 0.00% 22.000us 0.00% 26.000us 26.000us 1
aten::clamp 0.00% 21.000us 0.00% 21.000us 10.500us 2
aten::ones 0.00% 18.000us 0.00% 50.000us 50.000us 1
aten::resolve_conj 0.00% 17.000us 0.00% 17.000us 0.048us 356
aten::select 0.00% 16.000us 0.00% 17.000us 2.833us 6
aten::split 0.00% 14.000us 0.00% 30.000us 30.000us 1
aten::rsub 0.00% 13.000us 0.00% 62.000us 62.000us 1
aten::cross_entropy_loss 0.00% 13.000us 0.00% 81.000us 40.500us 2
aten::nll_loss_forward 0.00% 11.000us 0.00% 11.000us 5.500us 2
aten::squeeze 0.00% 10.000us 0.00% 10.000us 5.000us 2
aten::narrow 0.00% 9.000us 0.00% 16.000us 8.000us 2
aten::log_softmax 0.00% 8.000us 0.00% 44.000us 22.000us 2
aten::nll_loss_nd 0.00% 7.000us 0.00% 20.000us 10.000us 2
aten::nll_loss 0.00% 6.000us 0.00% 17.000us 8.500us 2
aten::unsqueeze 0.00% 5.000us 0.00% 6.000us 3.000us 2
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 2.102s
```
### static
bash [inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh) multiple inference performance huggingface ElectraForQuestionAnswering float32 first static default 0
profiling:
```
--------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
--------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::addmm 17.73% 288.257ms 22.63% 367.885ms 4.971ms 74
mkl::_mkl_linear 15.47% 251.548ms 15.51% 252.202ms 3.408ms 74
aten::bmm 12.15% 197.618ms 12.15% 197.619ms 8.234ms 24
aten::add 9.68% 157.440ms 9.68% 157.440ms 4.143ms 38
aten::copy_ 9.18% 149.283ms 9.18% 149.283ms 1.066ms 140
aten::div 7.65% 124.408ms 7.67% 124.734ms 9.595ms 13
aten::_softmax 7.26% 118.088ms 7.26% 118.088ms 9.841ms 12
Torch-Compiled Region 7.19% 116.861ms 29.33% 476.907ms 476.907ms 1
aten::_scaled_dot_product_flash_attention 6.58% 106.918ms 6.60% 107.366ms 8.947ms 12
aten::gelu 3.70% 60.135ms 3.70% 60.135ms 5.011ms 12
aten::native_layer_norm 1.73% 28.175ms 1.76% 28.605ms 1.144ms 25
expected 0.92% 14.880ms 70.62% 1.148s 1.148s 1
aten::empty 0.11% 1.722ms 0.11% 1.722ms 5.519us 312
aten::view 0.08% 1.297ms 0.08% 1.297ms 4.932us 263
aten::linear 0.08% 1.244ms 22.83% 371.151ms 5.016ms 74
aten::matmul 0.05% 826.000us 15.91% 258.615ms 10.776ms 24
aten::index_select 0.04% 718.000us 0.05% 748.000us 249.333us 3
aten::reshape 0.04% 662.000us 3.70% 60.091ms 480.728us 125
aten::expand 0.04% 597.000us 0.05% 734.000us 5.967us 123
aten::transpose 0.03% 566.000us 0.05% 842.000us 5.767us 146
aten::as_strided 0.03% 528.000us 0.03% 528.000us 1.586us 333
inductor::_reinterpret_tensor 0.03% 461.000us 0.03% 461.000us 3.115us 148
actual 0.03% 442.000us 29.38% 477.692ms 477.692ms 1
aten::t 0.03% 428.000us 0.06% 943.000us 12.743us 74
aten::clone 0.02% 405.000us 4.36% 70.951ms 1.391ms 51
aten::permute 0.02% 352.000us 0.03% 455.000us 9.479us 48
TorchDynamo Cache Lookup 0.02% 343.000us 0.02% 343.000us 343.000us 1
aten::_unsafe_view 0.02% 291.000us 0.02% 291.000us 4.770us 61
aten::layer_norm 0.02% 291.000us 1.78% 28.896ms 1.156ms 25
aten::empty_like 0.01% 198.000us 0.05% 741.000us 14.529us 51
aten::_to_copy 0.01% 129.000us 0.02% 305.000us 20.333us 15
aten::add_ 0.01% 119.000us 0.01% 119.000us 119.000us 1
aten::softmax 0.01% 112.000us 7.27% 118.200ms 9.850ms 12
aten::empty_strided 0.00% 75.000us 0.00% 75.000us 2.885us 26
aten::to 0.00% 50.000us 0.02% 355.000us 17.750us 20
aten::dropout 0.00% 42.000us 0.00% 42.000us 1.135us 37
aten::contiguous 0.00% 40.000us 0.74% 12.104ms 864.571us 14
aten::_log_softmax 0.00% 36.000us 0.00% 36.000us 18.000us 2
aten::sub 0.00% 34.000us 0.00% 58.000us 58.000us 1
aten::embedding 0.00% 29.000us 0.05% 846.000us 282.000us 3
aten::fill_ 0.00% 27.000us 0.00% 27.000us 27.000us 1
aten::slice 0.00% 26.000us 0.00% 32.000us 5.333us 6
aten::mul 0.00% 22.000us 0.00% 26.000us 26.000us 1
aten::clamp 0.00% 19.000us 0.00% 20.000us 10.000us 2
aten::ones 0.00% 18.000us 0.00% 54.000us 54.000us 1
aten::select 0.00% 18.000us 0.00% 18.000us 3.000us 6
aten::cross_entropy_loss 0.00% 18.000us 0.01% 82.000us 41.000us 2
aten::squeeze 0.00% 11.000us 0.00% 11.000us 5.500us 2
aten::split 0.00% 10.000us 0.00% 24.000us 24.000us 1
aten::nll_loss_forward 0.00% 10.000us 0.00% 10.000us 5.000us 2
aten::narrow 0.00% 9.000us 0.00% 14.000us 7.000us 2
aten::rsub 0.00% 8.000us 0.00% 66.000us 66.000us 1
aten::resolve_conj 0.00% 8.000us 0.00% 8.000us 0.041us 196
aten::log_softmax 0.00% 8.000us 0.00% 44.000us 22.000us 2
aten::nll_loss 0.00% 6.000us 0.00% 16.000us 8.000us 2
aten::unsqueeze 0.00% 5.000us 0.00% 5.000us 2.500us 2
aten::nll_loss_nd 0.00% 4.000us 0.00% 20.000us 10.000us 2
--------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 1.626s
```
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki @ezyang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @Xia-Weiwen
| 1 |
309 | 111,187 |
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details)
|
oncall: distributed, triaged, module: wsl
|
### 🐛 Describe the bug
When I try to fine-tune with DDP ([LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)) in WSL2 (Windows 10 host), I get this error:
```
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO Bootstrap : Using eth0:172.23.125.43<0>
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO NET/Plugin : Plugin load (libnccl-net.so) returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO NET/Plugin : No plugin found, using internal implementation
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO cudaDriverVersion 11080
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO cudaDriverVersion 11080
NCCL version 2.18.5+cuda11.8
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO Bootstrap : Using eth0:172.23.125.43<0>
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO NET/Plugin : Plugin load (libnccl-net.so) returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO NET/Plugin : No plugin found, using internal implementation
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO NET/IB : No device found.
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO NET/Socket : Using [0]eth0:172.23.125.43<0>
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO Using network Socket
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO NET/IB : No device found.
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO NET/Socket : Using [0]eth0:172.23.125.43<0>
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO Using network Socket
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO comm 0x7aef8cc0 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 4000 commId 0xfab41a312f23304f - Init START
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO comm 0x7e10b200 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 1000 commId 0xfab41a312f23304f - Init START
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO NCCL_IGNORE_DISABLED_P2P set by environment to 1.
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO NCCL_IGNORE_DISABLED_P2P set by environment to 1.
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO Channel 00/02 : 0 1
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO Channel 01/02 : 0 1
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO P2P Chunksize set to 131072
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO P2P Chunksize set to 131072
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
DESKTOP-VMBL43V:1354:1395 [0] transport.cc:154 NCCL WARN Cuda failure 'invalid argument'
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO init.cc:1079 -> 1
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO init.cc:1358 -> 1
DESKTOP-VMBL43V:1354:1395 [0] NCCL INFO group.cc:65 -> 1 [Async thread]
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO group.cc:406 -> 1
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO group.cc:96 -> 1
DESKTOP-VMBL43V:1355:1396 [1] transport.cc:154 NCCL WARN Cuda failure 'invalid argument'
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO init.cc:1079 -> 1
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO init.cc:1358 -> 1
DESKTOP-VMBL43V:1355:1396 [1] NCCL INFO group.cc:65 -> 1 [Async thread]
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO group.cc:406 -> 1
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO group.cc:96 -> 1
Traceback (most recent call last):
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/train_bash.py", line 14, in <module>
main()
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/train_bash.py", line 5, in main
run_exp()
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/llmtuner/tuner/tune.py", line 26, in run_exp
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/llmtuner/tuner/sft/workflow.py", line 29, in run_sft
dataset = preprocess_dataset(dataset, tokenizer, data_args, training_args, stage="sft")
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/llmtuner/dsets/preprocess.py", line 216, in preprocess_dataset
with training_args.main_process_first(desc="dataset map pre-processing"):
File "/home/marco/miniconda3/envs/lama/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/transformers/training_args.py", line 2084, in main_process_first
dist.barrier()
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3696, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.5
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'invalid argument'
Traceback (most recent call last):
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/train_bash.py", line 14, in <module>
main()
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/train_bash.py", line 5, in main
run_exp()
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/llmtuner/tuner/tune.py", line 26, in run_exp
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/llmtuner/tuner/sft/workflow.py", line 29, in run_sft
dataset = preprocess_dataset(dataset, tokenizer, data_args, training_args, stage="sft")
File "/mnt/c/Users/marco/Desktop/finetunegpl/LLaMA-Efficient-Tuning/src/llmtuner/dsets/preprocess.py", line 216, in preprocess_dataset
with training_args.main_process_first(desc="dataset map pre-processing"):
File "/home/marco/miniconda3/envs/lama/lib/python3.10/contextlib.py", line 142, in __exit__
next(self.gen)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/transformers/training_args.py", line 2093, in main_process_first
dist.barrier()
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3696, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.5
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'invalid argument'
DESKTOP-VMBL43V:1355:1355 [1] NCCL INFO comm 0x7aef8cc0 rank 1 nranks 2 cudaDev 1 busId 4000 - Abort COMPLETE
DESKTOP-VMBL43V:1354:1354 [0] NCCL INFO comm 0x7e10b200 rank 0 nranks 2 cudaDev 0 busId 1000 - Abort COMPLETE
[2023-10-13 07:17:04,991] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 1354) of binary: /home/marco/miniconda3/envs/lama/bin/python
Traceback (most recent call last):
File "/home/marco/miniconda3/envs/lama/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/accelerate/commands/launch.py", line 977, in launch_command
multi_gpu_launcher(args)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/accelerate/commands/launch.py", line 646, in multi_gpu_launcher
distrib_run.run(args)
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/marco/miniconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
src/train_bash.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2023-10-13_07:17:04
host : DESKTOP-VMBL43V.
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 1355)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-10-13_07:17:04
host : DESKTOP-VMBL43V.
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 1354)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
the command I use is `accelerate launch src/train_bash.py --stage sft --model_name_or_path /mnt/c/Users/marco/Desktop/finetunegpl/sheep-duck-llama-2-13b/ --do_train --finetuning_type lora --template default --flash_attn --shift_attn --dataset_dir data --dataset lima --cutoff_len 1024 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 100000 --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 5 --save_steps 100 --warmup_steps 0 --lora_rank 128 --lora_dropout 0.1 --lora_target q_proj,v_proj --resume_lora_training --output_dir saves\Custom\lora\2023-10-13-02-59-59 --fp16 --plot_loss`
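A small diagnostic sketch, assuming the `Cuda failure 'invalid argument'` in `transport.cc` is related to peer-access / shared-memory transport setup under WSL2; it only checks what CUDA reports for the two GPUs and is not a fix:
```python
import torch

print("device count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))

# Whether CUDA reports peer-to-peer access between the two GPUs.
if torch.cuda.device_count() >= 2:
    print("P2P 0->1:", torch.cuda.can_device_access_peer(0, 1))
    print("P2P 1->0:", torch.cuda.can_device_access_peer(1, 0))
```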
### Versions
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA GeForce RTX 3090 Ti
Nvidia driver version: 522.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5500
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
BogoMIPS: 7599.71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip vaes vpclmulqdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.1.0+cu118
[pip3] torchaudio==2.1.0+cu118
[pip3] torchvision==0.16.0+cu118
[pip3] triton==2.1.0
[conda] numpy 1.24.1 pypi_0 pypi
[conda] torch 2.1.0+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0+cu118 pypi_0 pypi
[conda] torchvision 0.16.0+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 2 |
310 | 111,186 |
[ONNX][Exporter] Maintain support for exporter arguments export_params and keep_initializers_as_inputs
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
The [`ExportOptions`](https://pytorch.org/docs/main/onnx_dynamo.html#torch.onnx.ExportOptions) accepted by the dynamo-based exporter does not include the arguments `export_params` and `keep_initializers_as_inputs` that were previously supported by the TorchScript-based exporter.
These arguments are useful in scenarios where the data associated with the parameters of the exported model is actually sourced from elsewhere (for example from the `torch.nn.Module` in the case of [`ORTModule`](https://github.com/microsoft/onnxruntime/blob/dad70ad4e80fb6461925d33f331d26ff9c52bb6c/orttraining/orttraining/python/training/ortmodule/ortmodule.py#L27)).
The lack of support for these arguments means we end up with a duplicated copy of the model parameters and need to manually remove them from the exported model, which creates memory bloat. The problem is further magnified in distributed training, where the model is exported on each of the devices sharing the same host, giving multiple copies of the same exported model. This can be a bottleneck for large models.
If the exporter supports these arguments, the exported model will be considerably smaller in size and will not lead to a bloat in the host memory.
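For context, a minimal sketch of the TorchScript-based exporter call that accepts these arguments today (the module and file name are placeholders):
```python
import torch

model = torch.nn.Linear(4, 2)

torch.onnx.export(
    model,
    torch.randn(1, 4),
    "model_without_params.onnx",
    export_params=False,               # do not embed parameter data in the graph
    keep_initializers_as_inputs=True,  # expose initializers as graph inputs
)
```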
cc @BowenBao
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
311 | 111,185 |
Reshape mask for triu to have the same shape as input.
|
triaged, open source
|
Currently this gives this warning:
[W Resize.cpp:35] Warning: An output with one or more elements was resized since it had shape [10, 10], which does not match the required output shape [1, 12, 10, 10]. This behavior is
deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (function _resize_output_check)
Fixes #ISSUE_NUMBER
| 4 |
312 | 111,183 |
[BE] Enable Ruff's Flake8 PYI056
|
triaged, open source, release notes: onnx
|
Enable [unsupported-method-call-on-all (PYI056)](https://docs.astral.sh/ruff/rules/unsupported-method-call-on-all/#unsupported-method-call-on-all-pyi056)
Link: #110950
| 2 |
313 | 111,182 |
Training iresnet with torch.compile is slower than eager mode for torch 2.1.0
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
When I compiled "iresnet200" using "torch.compile", I noticed that iresnet's speedup was abnormal.
The following table shows the speedup of inductor mode compared to eager mode for the two networks.
| networks | resnet152 | iresnet200 |
| ------------- | --------- | ---------- |
| training mode | 1.38x | **0.7x** |
| eval mode | 2.86x | 1.4x |
* resnet152 is imported from torchvision.models
Can you help me with the following questions?
1. The inductor gives iresnet a nice speedup in eval mode, but why does it actually make iresnet slower than eager in training mode?
2. Is there a way to improve the speedup of iresnet in training mode? (A profiling sketch follows below.)
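One way to narrow down where the compiled training step loses time is to profile a single step of each variant; a hedged sketch reusing `init_model`, `generate_data`, and `train` from the repro below (the batch size and optimizer here are assumptions):
```python
import torch
from torch.profiler import profile, ProfilerActivity

model = init_model()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # assumed optimizer/lr
scaler = torch.cuda.amp.GradScaler()
data = generate_data(16)                           # assumed batch size
compiled = torch.compile(model)

for _ in range(5):  # warm up so compilation cost is excluded
    train(compiled, data, opt, scaler)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    train(compiled, data, opt, scaler)
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))
```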
### Error logs
## logs of resnet152
```
eager train time 0: 1.672048583984375
eager train time 29: 0.05983129501342774
~~~~~~~~~~
compile train time 0: 129.01534375
compile train time 29: 0.04168806457519531
~~~~~~~~~~
(train) eager median: 0.05907711982727051, compile median: 0.04255897521972656, speedup: 1.3881236454182198x
~~~~~~~~~~
eager eval time 0: 1.4442802734375
eager eval time 29: 0.022561792373657227
~~~~~~~~~~
compile eval time 0: 58.9843515625
compile eval time 29: 0.00785203218460083
~~~~~~~~~~
(eval) eager median: 0.022537727355957028, compile median: 0.007875584125518798, speedup: 2.8617213652672873x
~~~~~~~~~~
```
## logs of iresnet200
```
eager train time 0: 1.701213134765625
eager train time 29: 0.09520025634765625
~~~~~~~~~~
compile train time 0: 258.081875
compile train time 29: 0.12313497924804688
~~~~~~~~~~
(train) eager median: 0.0953922576904297, compile median: 0.12323174285888672, speedup: 0.774088359682325x
eager eval time 0: 1.4410618896484375
eager eval time 29: 0.03419750213623047
~~~~~~~~~~
compile eval time 0: 107.7888359375
compile eval time 29: 0.024010751724243166
~~~~~~~~~~
(eval) eager median: 0.03415500831604004, compile median: 0.02414489555358887, speedup: 1.4145850513303746x
```
### Minified repro
```python
import numpy as np
import torch
import torch._dynamo as dynamo
import torch.nn as nn
import torch.optim as optim
from torch import nn
from torch.utils.checkpoint import checkpoint
__all__ = ['iresnet18', 'iresnet34', 'iresnet50', 'iresnet100', 'iresnet200']
using_ckpt = False
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes,
out_planes,
kernel_size=3,
stride=stride,
padding=dilation,
groups=groups,
bias=False,
dilation=dilation)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes,
out_planes,
kernel_size=1,
stride=stride,
bias=False)
class IBasicBlock(nn.Module):
expansion = 1
def __init__(self,
inplanes,
planes,
stride=1,
downsample=None,
groups=1,
base_width=64,
dilation=1):
super(IBasicBlock, self).__init__()
if groups != 1 or base_width != 64:
raise ValueError(
'BasicBlock only supports groups=1 and base_width=64')
if dilation > 1:
raise NotImplementedError(
"Dilation > 1 not supported in BasicBlock")
self.bn1 = nn.BatchNorm2d(
inplanes,
eps=1e-05,
)
self.conv1 = conv3x3(inplanes, planes)
self.bn2 = nn.BatchNorm2d(
planes,
eps=1e-05,
)
self.prelu = nn.PReLU(planes)
self.conv2 = conv3x3(planes, planes, stride)
self.bn3 = nn.BatchNorm2d(
planes,
eps=1e-05,
)
self.downsample = downsample
self.stride = stride
def forward_impl(self, x):
identity = x
out = self.bn1(x)
out = self.conv1(out)
out = self.bn2(out)
out = self.prelu(out)
out = self.conv2(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
return out
def forward(self, x):
if self.training and using_ckpt:
return checkpoint(self.forward_impl, x)
else:
return self.forward_impl(x)
class IResNet(nn.Module):
fc_scale = 7 * 7
def __init__(self,
block,
layers,
dropout=0,
num_features=512,
zero_init_residual=False,
groups=1,
width_per_group=64,
replace_stride_with_dilation=None,
fp16=False):
super(IResNet, self).__init__()
self.extra_gflops = 0.0
self.fp16 = fp16
self.inplanes = 64
self.dilation = 1
if replace_stride_with_dilation is None:
replace_stride_with_dilation = [False, False, False]
if len(replace_stride_with_dilation) != 3:
raise ValueError("replace_stride_with_dilation should be None "
"or a 3-element tuple, got {}".format(
replace_stride_with_dilation))
self.groups = groups
self.base_width = width_per_group
self.conv1 = nn.Conv2d(3,
self.inplanes,
kernel_size=3,
stride=1,
padding=1,
bias=False)
self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05)
self.prelu = nn.PReLU(self.inplanes)
self.layer1 = self._make_layer(block, 64, layers[0], stride=2)
self.layer2 = self._make_layer(block,
128,
layers[1],
stride=2,
dilate=replace_stride_with_dilation[0])
self.layer3 = self._make_layer(block,
256,
layers[2],
stride=2,
dilate=replace_stride_with_dilation[1])
self.layer4 = self._make_layer(block,
512,
layers[3],
stride=2,
dilate=replace_stride_with_dilation[2])
self.bn2 = nn.BatchNorm2d(
512 * block.expansion,
eps=1e-05,
)
self.dropout = nn.Dropout(p=dropout, inplace=True)
self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features)
self.features = nn.BatchNorm1d(num_features, eps=1e-05)
nn.init.constant_(self.features.weight, 1.0)
self.features.weight.requires_grad = False
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.normal_(m.weight, 0, 0.1)
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
if zero_init_residual:
for m in self.modules():
if isinstance(m, IBasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
downsample = None
previous_dilation = self.dilation
if dilate:
self.dilation *= stride
stride = 1
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(
planes * block.expansion,
eps=1e-05,
),
)
layers = []
layers.append(
block(self.inplanes, planes, stride, downsample, self.groups,
self.base_width, previous_dilation))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(
block(self.inplanes,
planes,
groups=self.groups,
base_width=self.base_width,
dilation=self.dilation))
return nn.Sequential(*layers)
def forward(self, x):
# with torch.cuda.amp.autocast(self.fp16):
x = self.conv1(x)
x = self.bn1(x)
x = self.prelu(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2(x)
x = torch.flatten(x, 1)
x = self.dropout(x)
x = self.fc(x)
x = self.features(x)
return x
def _iresnet(arch, block, layers, pretrained, progress, **kwargs):
model = IResNet(block, layers, **kwargs)
if pretrained:
raise ValueError()
return model
def iresnet18(pretrained=False, progress=True, **kwargs):
return _iresnet('iresnet18', IBasicBlock, [2, 2, 2, 2], pretrained,
progress, **kwargs)
def iresnet34(pretrained=False, progress=True, **kwargs):
return _iresnet('iresnet34', IBasicBlock, [3, 4, 6, 3], pretrained,
progress, **kwargs)
def iresnet50(pretrained=False, progress=True, **kwargs):
return _iresnet('iresnet50', IBasicBlock, [3, 4, 14, 3], pretrained,
progress, **kwargs)
def iresnet100(pretrained=False, progress=True, **kwargs):
return _iresnet('iresnet100', IBasicBlock, [3, 13, 30, 3], pretrained,
progress, **kwargs)
def iresnet200(pretrained=False, progress=True, **kwargs):
return _iresnet('iresnet200', IBasicBlock, [6, 26, 60, 6], pretrained,
progress, **kwargs)
def generate_data(b, device_id=0):
channel = 3
heigh = 112
width = 112
return (
torch.randn(b, channel, heigh, width).to(torch.float32).cuda(device_id),
torch.randint(256, (b,)).cuda(device_id),
)
def init_model(device_id=0):
# return resnet152(num_classes=256).cuda(device_id)
return iresnet200(num_features=256).cuda(device_id)
def train(model, data, opt, grad_scaler):
opt.zero_grad(True)
with torch.cuda.amp.autocast(enabled=True):
predict = model(data[0])
loss = torch.nn.CrossEntropyLoss()(predict, data[1])
loss = grad_scaler.scale(loss)
loss.backward()
opt.step()
return loss
def eval(model, data):
with torch.cuda.amp.autocast(enabled=True):
predict = model(data[0])
loss = torch.nn.CrossEntropyLoss()(predict, data[1])
return loss
def timed(fn):
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
result = fn()
end.record()
torch.cuda.synchronize()
return result, start.elapsed_time(end) / 1000
def demo_train():
N_ITERS = 30
device_id = 0
dynamo.reset()
model = init_model(device_id=device_id)
opt = optim.SGD(model.parameters(), lr=0.001)
grad_scaler = torch.cuda.amp.GradScaler(init_scale=2.0)
eager_times = []
for i in range(N_ITERS):
inp = generate_data(16, device_id=device_id)
_, eager_time = timed(lambda: train(model, inp, opt, grad_scaler))
eager_times.append(eager_time)
print(f"eager train time {i}: {eager_time}")
print("~" * 10)
dynamo.reset()
model = init_model(device_id=device_id)
opt = optim.SGD(model.parameters(), lr=0.001)
grad_scaler = torch.cuda.amp.GradScaler(init_scale=2.0)
train_opt = torch.compile(train,
options={
"triton.cudagraphs": True,
},
backend="inductor")
compile_times = []
for i in range(N_ITERS):
inp = generate_data(16, device_id=device_id)
_, compile_time = timed(lambda: train_opt(model, inp, opt, grad_scaler))
compile_times.append(compile_time)
print(f"compile train time {i}: {compile_time}")
print("~" * 10)
eager_med = np.median(eager_times)
compile_med = np.median(compile_times)
speedup = eager_med / compile_med
print(
f"(train) eager median: {eager_med}, compile median: {compile_med}, speedup: {speedup}x"
)
print("~" * 10)
def demo_eval():
N_ITERS = 30
device_id = 0
dynamo.reset()
model = init_model(device_id=device_id)
eager_times = []
for i in range(N_ITERS):
inp = generate_data(16, device_id=device_id)
with torch.no_grad():
_, eager_time = timed(lambda: eval(model, inp))
eager_times.append(eager_time)
print(f"eager eval time {i}: {eager_time}")
print("~" * 10)
dynamo.reset()
model = init_model(device_id=device_id)
eval_opt = torch.compile(eval,
options={
"triton.cudagraphs": True,
},
backend="inductor")
compile_times = []
for i in range(N_ITERS):
inp = generate_data(16, device_id=device_id)
with torch.no_grad():
_, compile_time = timed(lambda: eval_opt(model, inp))
compile_times.append(compile_time)
print(f"compile eval time {i}: {compile_time}")
print("~" * 10)
eager_med = np.median(eager_times)
compile_med = np.median(compile_times)
speedup = eager_med / compile_med
print(
f"(eval) eager median: {eager_med}, compile median: {compile_med}, speedup: {speedup}x"
)
print("~" * 10)
if __name__ == "__main__":
demo_train()
# demo_eval()
```
### Versions
```
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.52-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
Stepping: 7
CPU MHz: 3200.000
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq
rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.1.0+cu118
[pip3] torchaudio==2.1.0+cu118
[pip3] torchdata==0.7.0
[pip3] torchtext==0.16.0+cpu
[pip3] torchvision==0.16.0+cu118
[pip3] triton==2.1.0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 2.1.0+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0+cu118 pypi_0 pypi
[conda] torchdata 0.7.0 pypi_0 pypi
[conda] torchtext 0.16.0+cpu pypi_0 pypi
[conda] torchvision 0.16.0+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
314 | 111,179 |
supported_dtypes(self, device_type) function in torch/testing/_internal/opinfo/core.py cannot enter the cuda branch
|
triaged, actionable, module: testing
|
### π Describe the bug
```
pytorch/test# PYTORCH_TESTING_DEVICE_ONLY_FOR='cuda' pytest -v test_ops.py -k "test_python_ref__refs_addcdiv_cuda_float16"
================================ test session starts ================================
platform linux -- Python 3.10.8, pytest-7.4.2, pluggy-1.0.0 -- /opt/conda/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/workspace/pytorch/test/.hypothesis/examples')
rootdir: /workspace/pytorch
configfile: pytest.ini
plugins: hypothesis-6.61.0
collected 26379 items / 26378 deselected / 1 selected
test_ops.py::TestCommonCUDA::test_python_ref__refs_addcdiv_cuda_float16
>>>>>>>>>>>>>>>>>>>>>> PDB set_trace (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>
> /opt/conda/lib/python3.10/site-packages/torch/testing/_internal/common_methods_invocations.py(1106)reference_inputs_addcmul_addcdiv()
-> supported_dtypes = op_info.supported_dtypes(device)
(Pdb) p device
'cuda:0'
(Pdb) p op_info.supported_dtypes(device)
{torch.complex64, torch.bfloat16, torch.complex128, torch.float32, torch.float64}
(Pdb) p op_info.supported_dtypes('cuda')
{torch.complex64, torch.bfloat16, torch.complex128, torch.float16, torch.float32, torch.float64}
(Pdb)
```
We get different results when passing 'cuda' and 'cuda:0' to the supported_dtypes() function. I guess PyTorch should expect the second behavior ({torch.complex64, torch.bfloat16, torch.complex128, torch.float16, torch.float32, torch.float64}).
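A minimal sketch of one possible fix on the caller's side (this is my assumption, not an actual PyTorch patch): normalize the device string to its bare device type before calling `supported_dtypes`, so that 'cuda:0' takes the same branch as 'cuda':
```python
import torch

def device_type_of(device) -> str:
    # 'cuda:0' -> 'cuda', 'cpu' -> 'cpu'; torch.device also accepts an
    # already-constructed torch.device object, so this is safe either way
    return torch.device(device).type

# hypothetical usage inside reference_inputs_addcmul_addcdiv:
# supported_dtypes = op_info.supported_dtypes(device_type_of(device))
print(device_type_of("cuda:0"))  # cuda
print(device_type_of("cpu"))     # cpu
```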
### Versions
(base) root@ybjjgc002:/# python collect_env.py
Collecting environment information...
//opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py:497: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
PyTorch version: 1.13.0a0+git0274371
Is debug build: True
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.10.8 (main, Nov 4 2022, 13:48:29) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.27.2.el7.x86_64-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2100.000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.13.0a0+git0274371
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0a0+git0274371 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.14.1 py310 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch
-----
cc @mruberry @ngimel
| 0 |
315 | 111,178 |
[symnode] support symbool -> symint casting for symint arithmetic
|
open source, release notes: fx, module: inductor, ciflow/inductor
|
Possible hacky fix to: https://github.com/pytorch/pytorch/issues/110738
I doubt you would approve @lezcano, but sympy does not handle bool -> int casting as python does.
I don't think this is complete, but it is at least a start so I'll leave it up here in case anyone has a better idea for how to continue.
cc @lezcano @peterbell10
| 5 |
316 | 111,173 |
unique(return_counts=True) fails on MPS for unsorted tensors with 1M+ elements
|
triaged, module: mps
|
Trying to count many unique integers on an M2 Max fails. When trying to reproduce, it starts failing for me somewhere between half a million and about a million (unsorted) integers.
Whether it fails is not deterministic. Sometimes large tensors will work (even within the same session), but at least on my machine, for 1M elements it fails much more often than it works.
Reproduce:
```python
import torch
torch.unique(torch.randint(0, 2, (1_000_000,), device="mps"), return_counts=True)
```
Output:
```
Error: command buffer exited with error status.
The Metal Performance Shaders operations encoded on it may not have completed.
Error:
(null)
Internal Error (0000000e:Internal Error)
<AGXG14XFamilyCommandBuffer: 0x366e63060>
label = <none>
device = <AGXG14CDevice: 0x112011200>
name = Apple M2 Max
commandQueue = <AGXG14XFamilyCommandQueue: 0x111037e00>
label = <none>
device = <AGXG14CDevice: 0x112011200>
name = Apple M2 Max
retainedReferences = 1
```
The resulting counting tensor may have a few correct values at the beginning, and then too few for the rest of the elements, or (incorrect) zero values.
Adding `sorted=True` does not fix the issue.
Test to find the failure point:
```python
import torch

for elem_power in range(15, 33):
    num_elems = 2 ** elem_power
    rand_ints = torch.randint(0, 2, (num_elems,), device="mps")
    assert torch.unique(rand_ints, return_counts=True)[1].sum() == num_elems, f"2**{elem_power}={num_elems} failed!"
```
I've seen it failing at 2^19, 2^20 and 2^21 elements.
### Versions
I can reproduce the bug on this machine (Mac Studio / M2 Max / 32 GB) on 2.1.0 and on current nightly.
#### 2.1.0
Collecting environment information...
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.4
Libc version: N/A
Python version: 3.10.13 (main, Sep 11 2023, 08:24:56) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.1.0
[pip3] torchvision==0.15.2a0
[conda] Could not collect
---
#### Nightly
Collecting environment information...
PyTorch version: 2.2.0.dev20231012
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.4
Libc version: N/A
Python version: 3.10.13 (main, Sep 11 2023, 08:24:56) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.2.0.dev20231012
[pip3] torchaudio==2.2.0.dev20231012
[pip3] torchvision==0.17.0.dev20231012
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
317 | 111,170 |
add test for consecutive aot inductor compiles
|
fb-exported, topic: not user facing, module: inductor
|
Differential Revision: D50246956
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 5 |
318 | 111,169 |
Result of adding noise is very different in mps vs cuda or cpu
|
triaged, module: mps
|
### π Describe the bug
Possibly related to #84936. I was trying to follow [this tutorial](https://colab.research.google.com/drive/1NFxjNI-UIR7Ku0KERmv7Yb_586vHQW43?usp=sharing#scrollTo=F724QP16w7-A), but I noticed that the results of adding noise on `cpu`/`cuda` are quite different from those on `mps`. This was tested on both `PyTorch 2.2.0.dev20231008` and `PyTorch 2.1.0`.

I created a gist to easily replicate the problem here:
https://gist.github.com/atabakd/e4fa20c23f7a6059dac794a82dccfb38
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.4
Libc version: N/A
Python version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:53:08) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchvision==0.16.0
[conda] numpy 1.24.4 py39h485cf63_0 conda-forge
[conda] pytorch 2.1.0 py3.9_0 pytorch
[conda] torchaudio 2.1.0 py39_cpu pytorch
[conda] torchvision 0.16.0 py39_cpu pytorch
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
319 | 111,168 |
Regression on CUDA 12.1 for vanilla transformer layer
|
high priority, triage review, module: performance, module: cuda
|
### π Describe the bug
Hi, we are benchmarking [deep ViT](https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/deepvit.py) models and found that the CUDA 12.1 binaries are actually slower than the CUDA 11.8 ones for the PyTorch 2.1.0 release. This is unexpected, as we see better perf on other LLMs like Megatron (GPT) and OPT. One potential reason is that those repos use flash-attention or xformers, which benefit from CUDA 12 and Transformer Engine on H100 GPUs, while deep ViT uses the vanilla implementation: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/deepvit.py. Still, we were not expecting a regression. Any ideas? Thanks!
Here is the output showing a ~10% regression on a single node:
PT 2.1.0 + CUDA 11.8
```
[1,0]<stdout>:time taken for the last 1 steps is 2.420312336000279, the throughput is 13.221434078579314 images/stime taken for forward pass is 0.261821381000118time taken for backward pass is 1.758593597000072
[1,0]<stdout>:Average elapsed time is 2.3975933989263996, throughput is 13.34671675953439peak allocated memory is 25.845419008Average forward time is 0.24408430756846333Average backward time is 1.75420702246313
```
PT 2.1.0 + CUDA 12.1
```
[1,0]<stdout>:time taken for the last 1 steps is 2.7207883900000525, the throughput is 11.76129687910032 images/stime taken for forward pass is 0.26708757999995214time taken for backward pass is 1.9990846610003246
[1,0]<stdout>:Average elapsed time is 2.72146025645262, throughput is 11.75839328321167peak allocated memory is 25.845419008Average forward time is 0.26492700765261123Average backward time is 2.001053710136823
```
Attached are the training script and reproducible steps:
```
pip install vit-pytorch
pip install packaging
torchrun --nnodes=1 --nproc_per_node=8 train.py
```
### Versions
PyTorch 2.1.0
cuda 11.8 vs cuda 12.1
tested on AWS P5.48xlarge
cc @ezyang @gchanan @zou3519 @kadeng @ptrblck
| 6 |
320 | 111,162 |
[dtensor] fix dtype/device conversion on nn.Modules
|
ciflow/trunk, release notes: distributed (dtensor)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111162
This PR fixes the dtype conversion issue when calling nn.Module.to after
swapping parameters with DTensors. The problem was that nn.Module._apply by
default uses `.data` assignment to swap out storages directly instead
of just replacing the nn.Parameters. This does not work for wrapper tensor
subclasses, because a wrapper tensor subclass keeps its real storage
separately from the wrapper, i.e.
* a DTensor wrapper subclass instance's `storage` does not point to the real storage
* the real storage of a DTensor is its local shard, and a `.data =` assignment does not propagate the change into the DTensor's local shard
This probably also fixes https://github.com/pytorch/pytorch/issues/102812
| 10 |
321 | 111,159 |
Wrong onnx model from `torch.onnx.export` when using `index_add_` function with duplicate `index` values.
|
module: onnx, triaged, module: advanced indexing
|
### π Describe the bug
I have noticed that `torch.onnx.export` produces wrong models if a module contains `Tensor.index_add_` function and the `index` parameter contains duplicate values. See the minimal example below:
```python
from typing import Sequence
import torch
from torch import nn
import onnx
import onnxruntime.backend as ort_backend
import numpy as np
class DummyModel(nn.Module):
def forward(self, x: torch.Tensor, y: torch.Tensor, index: torch.Tensor) -> torch.Tensor:
x.index_add_(0, index, y)
return x
def test_model_equivalency(
model: nn.Module,
input_tensors: Sequence[torch.Tensor],
) -> None:
model.eval()
with torch.no_grad():
torch.onnx.export(model, input_tensors, "/tmp/dummy.onnx")
onnx_output = ort_backend.run(
onnx.load("/tmp/dummy.onnx"),
[x.detach().numpy() for x in input_tensors],
device="CPU"
)[0]
output = model(*input_tensors)
np.testing.assert_almost_equal(output.detach().numpy(), onnx_output, decimal=5)
print("PASS")
model = DummyModel()
```
When used with unique `index` values, the following test passes
```python
x = torch.randn(2)
y = torch.randn(2)
index = torch.Tensor([0, 1]).to(dtype=torch.int32)
test_model_equivalency(model, (x, y, index))
```
```bash
$ PASS
```
But when used with duplicate values, I get an error
```python
x = torch.randn(2)
y = torch.randn(2)
index = torch.Tensor([0, 0]).to(dtype=torch.int32)
test_model_equivalency(model, (x, y, index))
```
```bash
AssertionError:
Arrays are not almost equal to 5 decimals
Mismatched elements: 1 / 2 (50%)
Max absolute difference: 0.44980058
Max relative difference: 1.523156
x: array([0.74511, 1.51692], dtype=float32)
y: array([0.29531, 1.51692], dtype=float32)
```
I understand that PyTorch already emits a warning about this (`torch/onnx/symbolic_opset9.py:3096: UserWarning: Warning: ONNX export does not support duplicated values in 'index' field, this will cause the ONNX model to be incorrect.`), but I am wondering what the solution is here. Should I avoid `index_add_` altogether, and if so, is there a generic solution?
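One generic alternative I can think of (my own sketch, not an officially recommended fix; it assumes accumulation along dim 0 and a floating dtype) is to emulate `index_add_` with a one-hot matmul, which sums duplicate indices correctly and exports to plain MatMul/Add ONNX nodes. The helper name below is made up:
```python
import torch

def index_add_dim0_via_onehot(x: torch.Tensor, index: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # one_hot has shape (len(index), x.shape[0]); one_hot.t() @ y accumulates
    # every row of y into its target row of x, summing duplicated indices
    one_hot = torch.nn.functional.one_hot(index.long(), num_classes=x.shape[0]).to(y.dtype)
    return x + one_hot.t() @ y

x = torch.randn(2)
y = torch.randn(2)
index = torch.tensor([0, 0])
expected = x.clone().index_add_(0, index, y)
assert torch.allclose(index_add_dim0_via_onehot(x, index, y), expected)
```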
### Versions
```
PyTorch version: 1.9.0a0+gitd69c22d
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.16 (main, Jan 11 2023, 16:05:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 SUPER
GPU 1: NVIDIA GeForce RTX 2080 SUPER
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper PRO 3955WX 16-Cores
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 4402.7339
CPU min MHz: 2200.0000
BogoMIPS: 7785.58
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 8 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] memory-efficient-attention-pytorch==0.0.17
[pip3] mypy==1.5.0
[pip3] mypy-boto3-s3==1.28.19
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.5.0
[pip3] numpy==1.23.4
[pip3] pytorch-ignite==0.2.0
[pip3] pytorch-lightning==1.3.8
[pip3] torch==1.9.0+cu111.av3
[pip3] torch-scatter==2.0.9+av
[pip3] torch2trt==0.2.0
[pip3] torchmetrics==0.7.2
[pip3] torchview==0.2.6
[pip3] torchvision==0.10.0+av3
[pip3] torchviz==0.0.2
[conda] memory-efficient-attention-pytorch 0.0.17 pypi_0 pypi
[conda] numpy 1.23.4 pypi_0 pypi
[conda] pytorch-ignite 0.2.0 pypi_0 pypi
[conda] pytorch-lightning 1.3.8 pypi_0 pypi
[conda] torch 1.9.0+cu111.av3 pypi_0 pypi
[conda] torch-scatter 2.0.9+av pypi_0 pypi
[conda] torch2trt 0.2.0 pypi_0 pypi
[conda] torchmetrics 0.7.2 pypi_0 pypi
[conda] torchview 0.2.6 pypi_0 pypi
[conda] torchvision 0.10.0+av3 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
```
| 1 |
322 | 111,158 |
RuntimeError in run_streaming_llama.py When Using Accelerate with Streaming LLMa Model on A4500 GPU
|
needs reproduction, module: error checking, triaged
|
### π Describe the bug
Referring to the issue https://github.com/mit-han-lab/streaming-llm/issues/37#issue-1940692615
Description:
When running the run_streaming_llama.py script with the --enable_streaming flag, I encountered a RuntimeError related to CUDA and the PyTorch Accelerate library.
Steps to Reproduce:
Set the environment variable: CUDA_VISIBLE_DEVICES=0
Run the following command:
```
python examples/run_streaming_llama.py --enable_streaming
```
Expected Behavior:
The script should run successfully and provide streaming inference results.
Actual Behavior:
The script crashes with the following error:
```
RuntimeError: handle_0 INTERNAL ASSERT FAILED at "../c10/cuda/driver_api.cpp":15, please report a bug to PyTorch.
```
GPU information
```
CUDA version: 11.7
GPU: NVIDIA RTX A4500
GPU driver information:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 525.78.01 Mon Dec 26 05:58:42 UTC 2022
GCC version: gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
```
OS information:
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
```
Full error log:
```
(streaming) zli@aes:~/streaming-llm$ CUDA_VISIBLE_DEVICES=0 python examples/run_streaming_llama.py --enable_streaming
Loading model from lmsys/vicuna-13b-v1.3 ...
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Loading checkpoint shards: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:14<00:00, 4.74s/it]
Loading data from data/mt_bench.jsonl ...
StartRecentKVCache: 4, 2000
USER: Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions.
ASSISTANT: π΄πΊ Aloha from the beautiful islands of Hawaii! I recently had the opportunity to explore the diverse culture and stunning landscapes of this tropical paradise, and I can't wait to share my adventures with you.
ποΈ First things first, let's talk about the beaches. Hawaii is known for its pristine white sand and crystal clear waters, and I was lucky enough to visit some of the most breathtaking beaches on the islands. My favorite was Waimea Bay on the North Shore of Oahu. The waves were perfect for surfing, and the surrounding cliffs and lush greenery made for a postcard-worthy backdrop.
π But Hawaii isn't just about the beaches. The culture of the islands is rich and vibrant, and I made it a point to immerse myself in it during my trip. I attended a traditional Hawaiian luau on the Big Island, where I learned about the history and customs of the native people. I also tried my hand at hula dancing and even got to sample some delicious Hawaiian cuisine, including poke bowls and shaved ice.
π― Another must-see attraction in Hawaii is the Hawaii Volcanoes National Park on the Big Island. The park is home to two active volcanoes, Kilauea and Mauna Loa, and offers a unique opportunity to see the power of nature up close. I hiked through the park and was amazed by the steam vents, lava tubes, and volcanic craters. It was truly a once-in-a-lifetime experience.
πΊ In addition to the natural wonders of Hawaii, the islands are also home to some of the most beautiful gardens and parks in the world. The Maui Tropical Plantation offered a glimpse into the island's agricultural history, while the Hilo Tropical Botanical Garden showcased the stunning flora and fauna of the area.
π― In conclusion, Hawaii is a destination that offers something for everyone. From the stunning beaches to the rich culture and unique attractions, there's no shortage of things to see and do on these beautiful islands. If you're planning a trip to Hawaii, be sure to add these must-see attractions to your itinerary. Mahalo for reading, and aloha!
USER: Rewrite your previous response. Start every sentence with the letter A.
ASSISTANT: π΄πΊ Aloha from the beautiful islands of Hawaii! I recently had the opportunity to explore the diverse culture and stunning landscapes of this tropical paradise, and I can't wait to share my adventures with you.
ποΈ First things first, let's talk about the beaches. Hawaii is known for its pristine white sand and crystal clear waters, and I was lucky enough to visit some of the most breathtaking beaches on the islands. My favorite was Waimea Bay on the North Shore of Oahu. The waves were perfect for surfing, and the surrounding cliffs and lush greenery made for a postcard-worthy backdrop.
π But Hawaii isn't just about the beaches. The culture of the islands is rich and vibrant, and I made it a point to immerse myself in it during my trip. I attended a traditional Hawaiian luau on the Big Island, where I learned about the history and customs of the native people. I also tried my hand at hula dancing and even got to sample some delicious Hawaiian cuisine, including poke bowls and shaved ice.
πΊ In addition to the natural wonders of Hawaii, the islands are also home to some of the most stunning gardens and parks in Traceback (most recent call last):
File "examples/run_streaming_llama.py", line 122, in <module>
main(args)
File "examples/run_streaming_llama.py", line 103, in main
streaming_inference(
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "examples/run_streaming_llama.py", line 73, in streaming_inference
past_key_values = greedy_generate(
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "examples/run_streaming_llama.py", line 30, in greedy_generate
outputs = model(
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 838, in forward
logits = self.lm_head(hidden_states)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/accelerate/hooks.py", line 160, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/accelerate/hooks.py", line 286, in pre_forward
set_module_tensor_to_device(
File "/home/zli/anaconda3/envs/streaming/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 317, in set_module_tensor_to_device
new_value = value.to(device)
RuntimeError: handle_0 INTERNAL ASSERT FAILED at "../c10/cuda/driver_api.cpp":15, please report a bug to PyTorch.
```
### Versions
torch version: 2.1.0
conda create -yn streaming python=3.8
conda activate streaming
pip install torch torchvision torchaudio
pip install transformers==4.33.0 accelerate datasets evaluate wandb scikit-learn scipy sentencepiece
python setup.py develop
git clone git@github.com:mit-han-lab/streaming-llm.git
cd streaming-llm/
python setup.py develop
cc @malfet
| 1 |
323 | 111,157 |
overloads can perhaps be more performant?
|
module: optimizer, triaged, module: dispatch, module: codegen
|
After I land the 0dim overload in #111079, foreach_add will have the following signatures, generated in the following order:
```
static PythonArgParser parser({
"_foreach_add_(TensorList self, Scalar scalar)",
"_foreach_add_(TensorList self, ScalarList scalars)",
"_foreach_add_(TensorList self, Tensor other, Scalar alpha=1)",
"_foreach_add_(TensorList self, TensorList other, *, Scalar alpha=1)",
}, /*traceable=*/false);
```
The order is probably due to Scalars being more performant on CUDA.
## Problem:
The whole reason I added a tensor overload is so that in the CPU path, we don't continuously rewrap tensors as we dispatch into add(...). However, with the above ordering, for a call like `_foreach_add_(tensors, scalar_tensor)` where `scalar_tensor` is e.g. `torch.tensor(1.0)`, the first overload will "suffice": `scalar_tensor` gets cast to a plain Scalar and we fall back into our slow loop.
Instead, I'd prefer that `scalar_tensor` not get coerced at all and that the overload that ends up getting called be `_foreach_add_(TensorList self, Tensor other, Scalar alpha=1)`.
## Proposal:
I wonder if there's a way to tweak the parser code to penalize for coercion. It should check all the overloads and pick the one with the smallest coercion necessary before dispatching to it. Maybe the key here is to weigh `overloaded_args`. https://github.com/pytorch/pytorch/blob/6748a14a71f235501cd050e15d1810f7a0a75dbd/torch/csrc/utils/python_arg_parser.cpp#L1655-L1660
Con: I was just made aware that this can be detrimental to dispatcher performance due to slow isinstance checks... so maybe this would not be more performant after all.
Workaround:
My workaround today is just explicitly using an alpha to force the overload to be called :)
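For reference, a small sketch of that workaround (assuming the Tensor overload from #111079 has landed; otherwise this call does not exist yet): passing `alpha` explicitly rules out the Scalar overloads, which take no `alpha`, so the parser has to pick the Tensor one.
```python
import torch

tensors = [torch.ones(3) for _ in range(4)]
scalar_tensor = torch.tensor(1.0)

# without alpha, scalar_tensor may be coerced to a plain Scalar and hit the
# "_foreach_add_(TensorList self, Scalar scalar)" overload (the slow rewrapping path)
torch._foreach_add_(tensors, scalar_tensor, alpha=1)
```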
cc @vincentqb @jbschlosser @albanD @crcrpar @ezyang @bhosmer @bdhirsh @kadeng
| 7 |
324 | 111,156 |
[wip] Add wanda pruner to torch.ao.pruning
|
release notes: AO frontend
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111156
Summary:
This PR adds support for Wanda pruning to PyTorch.
Many thanks to @Eric-mingjie who provided the
[initial design and POC](https://github.com/Eric-mingjie/pytorch/tree/patch-1).
[Wanda](https://arxiv.org/abs/2306.11695) is a pruning method that takes into
account both the weight and activation statistics when pruning.
In order to keep track of activation norm statistics, this sparsifier
utilizes the `torch.ao.quantization` API and a custom observer,
`PerChannelNormObserver`.
A user can then calibrate their model by doing
```
for x in dataset:
model(x)
```
We only use the observer to collect input norm statistics, which we
access when calling `update_mask`
Finally, we remove observers and mask parameterization by calling
`squash_mask()`.
Note that at this time we only support using the unpruned activations
for calibration.
WandaSparsifier supports both unstructured and semi-structured (2:4)
pruning modes.
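A rough end-to-end usage sketch, assuming WandaSparsifier follows the existing BaseSparsifier API (prepare/step/squash_mask); the import path, constructor arguments, and config keys below are illustrative only and may not match the final PR:
```python
import torch
from torch import nn
# assumed import path once the PR lands:
# from torch.ao.pruning import WandaSparsifier

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# sparsifier = WandaSparsifier(sparsity_level=0.5)
# sparsifier.prepare(model, config=[{"tensor_fqn": "0.weight"}, {"tensor_fqn": "2.weight"}])

# calibration: plain forward passes populate the PerChannelNormObserver statistics
calibration_data = [torch.randn(32, 128) for _ in range(10)]
for x in calibration_data:
    model(x)

# sparsifier.step()         # compute masks from |weight| * activation norm
# sparsifier.squash_mask()  # remove observers and parametrizations, leaving pruned weights
```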
Test Plan:
```
pytest test/ao/sparsity/test_wanda_sparsifier.py
```
Reviewers:
Subscribers:
Tasks:
Tags:
| 1 |
325 | 111,152 |
[HigherOrderOp] Move _map.py to _higher_order_ops
|
ciflow/trunk, topic: not user facing, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111152
* #111147
* #111092
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
Differential Revision: [D50332159](https://our.internmc.facebook.com/intern/diff/D50332159)
| 2 |
326 | 111,150 |
[dynamo] `ConfigModule` and `config.patch` are not thread safe
|
triaged, oncall: pt2
|
### π Describe the bug
Config is a global object that can be accessed concurrently by multiple threads. Hence
1. patching should be performed on thread-local values
2. (optional) modifying/reading the global copy should be based on RWLocks.
When accessing the config object, if a thread-local value exists (i.e., it has been patched), use that; otherwise, use the global value.
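A minimal sketch of that lookup order (my own illustration, not the actual `ConfigModule` code): patches live in a `threading.local` overlay and reads fall back to the shared dict; the optional RWLock around the global copy is omitted here.
```python
import threading
from contextlib import contextmanager

class ThreadSafeConfig:
    def __init__(self, **defaults):
        self._global = dict(defaults)    # shared across threads
        self._local = threading.local()  # per-thread patch overlay

    def get(self, key):
        overlay = getattr(self._local, "overlay", None)
        if overlay is not None and key in overlay:
            return overlay[key]          # patched value wins in this thread only
        return self._global[key]

    @contextmanager
    def patch(self, **changes):
        prev = getattr(self._local, "overlay", None)
        self._local.overlay = {**(prev or {}), **changes}
        try:
            yield
        finally:
            self._local.overlay = prev

config = ThreadSafeConfig(verbose=False)
with config.patch(verbose=True):
    assert config.get("verbose") is True  # visible only to the patching thread
assert config.get("verbose") is False
```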
### Priority
low now, higher with https://github.com/pytorch/pytorch/pull/111074, when we expect a lot more patching due to `torch.compiled` callables being able to restore a previously saved config whenever you call them.
### Versions
main, https://github.com/pytorch/pytorch/pull/111074
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
327 | 111,147 |
[HigherOrderOp] make MapHigherOrderOp error out when graph break
|
ciflow/trunk, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111152
* __->__ #111147
* #111092
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
Differential Revision: [](https://our.internmc.facebook.com/intern/diff/)
| 2 |
328 | 111,146 |
[pytorch][PR] [Inductor][FX passes] Pre grad batch relu fusion
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: We detect independent relu operators and do the fusion in the pre grad.
Test Plan:
### unit test
```
buck2 test mode/dev-nosan //caffe2/test/inductor:group_batch_fusion
```
Test UI: https://www.internalfb.com/intern/testinfra/testrun/16888498608558485
### Inline cvr
f479655232
```
buck2 run mode/opt //scripts/jackiexu0313/pt2:local_model_with_pt2 -- --test_mode split_batch_group
```
before vs after transformation
https://www.internalfb.com/intern/diffing/?paste_number=851907099
```
buck2 run mode/opt //scripts/jackiexu0313/pt2:local_model_with_pt2 -- --test_mode split_batch_group -c
```
P852036786
Differential Revision: D50207610
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 4 |
329 | 111,142 |
'torch._C.Node' object has no attribute 'cs'
| null |
### π Describe the bug
Trying to convert a Torch model with [coremltools](https://github.com/apple/coremltools) raised some errors for complex inputs and it seems to be from Torch side.
The error is raised in [this line of code](https://github.com/apple/coremltools/blob/main/coremltools/converters/mil/frontend/torch/internal_graph.py#L159); the following is a simplification.
For a node: `complex = prim::Constant[value=0.+1.j]()`
```python
for name in node.attributeNames():
    getattr(node, node.kindOf(name))(name)
```
This code generates `AttributeError: 'torch._C.Node' object has no attribute 'cs'` and [looking into the attributes](https://github.com/pytorch/pytorch/blob/v2.1.0/torch/_C/__init__.pyi.in#L750) it seems it should return `["c", "s"]` or `["c"]`
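A self-contained sketch of the same inspection pattern (my own repro attempt, not the coremltools code); the `try/except` just surfaces whatever `kindOf` returns for the complex constant instead of crashing:
```python
import torch

def f(x):
    return x * (0 + 1j)  # the complex constant ends up as a prim::Constant in the trace

traced = torch.jit.trace(f, (torch.randn(3, dtype=torch.cfloat),))
for node in traced.graph.nodes():
    for name in node.attributeNames():
        kind = node.kindOf(name)  # expected to be an existing accessor such as 'c'
        try:
            print(node.kind(), name, kind, getattr(node, kind)(name))
        except AttributeError as e:
            print(f"{node.kind()}: kindOf({name!r}) -> {kind!r}: {e}")
```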
### Versions
```
β python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.6
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.16 (main, Oct 12 2023, 16:36:03) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti
GPU 8: NVIDIA GeForce RTX 2080 Ti
GPU 9: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 535.113.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==1.8.6
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchcrepe==0.0.20
[pip3] torchmetrics==1.2.0
[conda] Could not collect
```
| 0 |
330 | 111,139 |
[dynamo/higher order op] fix flaky / disabled tests - context fn is not `None` when a `noop_context_fn`
|
triaged, open source, topic: not user facing, module: dynamo, ciflow/inductor
|
test/dynamo/test_activation_checkpointing.py fails without this
Fails on main with
`_experimental_support_context_fn_in_torch_utils_checkpoint = True`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ydwu4
| 5 |
331 | 111,138 |
Module states cannot be fully synchronized due to the DDP broadcast_buffers breaking change
|
oncall: distributed
|
### π Describe the bug
With the recent breaking change in `2.1.0` (https://github.com/pytorch/pytorch/pull/100729), when `broadcast_buffers` is set to `False`, DDP no longer fully synchronizes the module states (i.e., only the parameters are synchronized while the buffers are left out of sync) during DDP initialization and the final step joining. This causes issues in the following scenarios.
1. With random initialization, the master process module's parameters are broadcast but not the buffers. While we can fix this by ensuring that all ranks are initialized with the same random seed, there are scenarios where we still want to use a different random seed for each rank.
2. With checkpoint loading, previously we only needed to load the checkpoint in the master process and the states (parameters and buffers) would be broadcast to all other processes. While we can fix this by loading the exact same checkpoint in each process, there are scenarios in which not all processes have direct access to the checkpoint, for example, a multi-node configuration where only the master node has local access to the checkpoint.
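For now, a possible workaround on our side (my own sketch, not an officially documented path) is to broadcast the buffers manually from the master rank right after constructing/loading the module, mirroring what DDP used to do at initialization:
```python
import torch.distributed as dist

def broadcast_buffers_from_rank0(module, src=0):
    # push rank 0's buffer values to all other ranks; assumes the process
    # group is already initialized and all ranks have identically-shaped buffers
    for buf in module.buffers():
        dist.broadcast(buf, src=src)
```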
Also in the DDP [documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel), it specifically states that the `broadcast_buffers` flag is used to control the buffer broadcasting at the beginning of a module forward function, but not during the initialization and module final joining.
```
broadcast_buffers (bool) – Flag that enables syncing (broadcasting) buffers of the module at the beginning of the forward function. (default: True)
```
The documentation describes a different behavior from the new change, and the new change also causes the module-loading issues in DDP mentioned above. Is this intended behavior or a bug?
### Versions
2.1.0
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
332 | 111,135 |
No op for aten::where with argument types: Tensor, Tensor, bool.
|
oncall: jit
|
### π Describe the bug
I'm getting the following error when I try to trace my module, which has a `z = torch.where(x, y, True)` expression:
```
File "/home/me/myproject/myfile.py", line 120, in __init__
traced_rule = torch.jit.trace(
File "/home/me/my_env/lib/python3.10/site-packages/torch/jit/_trace.py", line 798, in trace
return trace_module(
File "/home/me/my_env/lib/python3.10/site-packages/torch/jit/_trace.py", line 1065, in trace_module
module._c._create_method_from_trace(
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":616, please report a bug to PyTorch. We don't have an op for aten::where but it isn't a special case. Argument types: Tensor, Tensor, bool,
Candidates:
aten::where.self(Tensor condition, Tensor self, Tensor other) -> Tensor
aten::where.ScalarOther(Tensor condition, Tensor self, Scalar other) -> Tensor
aten::where.ScalarSelf(Tensor condition, Scalar self, Tensor other) -> Tensor
aten::where.Scalar(Tensor condition, Scalar self, Scalar other) -> Tensor
aten::where(Tensor condition) -> Tensor[]
aten::where.self_out(Tensor condition, Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)
```
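A possible workaround sketch (my guess, not verified against this exact model): passing a tensor for `other` should route the call to the supported `aten::where.self(Tensor, Tensor, Tensor)` overload during tracing.
```python
import torch

def f(x, y):
    # instead of torch.where(x, y, True): wrap the bool in a 0-dim tensor of y's dtype
    return torch.where(x, y, torch.ones((), dtype=y.dtype, device=y.device))

traced = torch.jit.trace(f, (torch.rand(4) > 0.5, torch.rand(4)))
```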
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1041-gcp-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2200.232
BogoMIPS: 4400.46
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.2.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.1.0
[pip3] torchmetrics==0.11.4
[pip3] triton==2.1.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-lightning 2.0.2 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
333 | 111,133 |
Mismatch results of index_add_ between torch.compile Inductor backend and eager mode
|
triaged, ezyang's list, module: functionalization, bug, oncall: pt2
|
### π Describe the bug
I tried to write a function in which a viewed input is processed by an in-place op.
```python
import torch
def transpose_inplace_index_add(x, a, index):
y = x.permute(1, 0)
y.index_add_(0, index, a)
return y
x = torch.ones(3, 5)
to_be_added = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
index = torch.tensor([0, 4, 2])
ref_res = transpose_inplace_index_add(x, to_be_added, index)
print('eager_x.stride: ', x.stride())
print('eager_x: ', x)
print('eager_res.stride: ', ref_res.stride())
print('eager_res: ', ref_res)
x = torch.ones(3, 5)
compiled_fn = torch.compile(transpose_inplace_index_add)
res = compiled_fn(x, to_be_added, index)
print('inductor_x.strides:', x.stride())
print('inductor_x: ', x)
print('inductor_res.strides:', res.stride())
print('inductor_res: ', res)
```
By running the above test, output will be:
```
eager_x.stride: (5, 1)
eager_x: tensor([[ 2., 1., 8., 1., 5.],
[ 3., 1., 9., 1., 6.],
[ 4., 1., 10., 1., 7.]])
eager_res.stride: (1, 5)
eager_res: tensor([[ 2., 3., 4.],
[ 1., 1., 1.],
[ 8., 9., 10.],
[ 1., 1., 1.],
[ 5., 6., 7.]])
inductor_x.strides: (5, 1)
inductor_x: tensor([[ 2., 1., 8., 1., 5.],
[ 3., 1., 9., 1., 6.],
[ 4., 1., 10., 1., 7.]])
inductor_res.strides: (3, 1)
inductor_res: tensor([[ 2., 1., 8.],
[ 1., 5., 3.],
[ 1., 9., 1.],
[ 6., 4., 1.],
[10., 1., 7.]])
```
We can see that `inductor_res` has different strides from `eager_res` (**[3, 1]** vs. **[1, 5]**), so Inductor generates incorrect results. Looking into the code, the output of the `index_add` op is forced to contiguous strides [here](https://github.com/pytorch/pytorch/blob/main/torch/_refs/__init__.py#L3966).
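Appended to the repro above (reusing `res` and `ref_res` from that script), a direct check makes the divergence explicit:
```python
# Sketch: continues the repro above, so `res` and `ref_res` must already exist.
print(res.stride(), ref_res.stride())     # (3, 1) vs. (1, 5)
torch.testing.assert_close(res, ref_res)  # raises because the values differ
```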
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.2.0.dev20231011+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-156-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 800.127
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] torch==2.2.0.dev20231011+cpu
[pip3] torchaudio==2.2.0.dev20231011+cpu
[pip3] torchvision==0.15.0a0+7d2acaa
[conda] mkl 2022.1.0 hc2b9512_224
[conda] mkl-include 2022.2.0 pypi_0 pypi
[conda] mkl-static 2022.2.0 pypi_0 pypi
[conda] numpy 1.21.2 pypi_0 pypi
[conda] torch 2.2.0.dev20231011+cpu pypi_0 pypi
[conda] torchaudio 2.2.0.dev20231011+cpu pypi_0 pypi
[conda] torchvision 0.15.0a0+7d2acaa pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 6 |
334 | 111,131 |
"Invalid Scalar type" when using bf16 allreduce with Gloo backend
|
oncall: distributed
|
### π Describe the bug
When I do a bf16 allreduce on GPU tensors with the gloo backend, I hit an "Invalid scalar type" issue. It seems that the `GENERATE_ALL_TYPES` logic in gloo does not support bf16.
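A minimal repro sketch (assumptions: a single-process `env://` group with a local address/port; whether a CPU bf16 tensor hits the exact same path I have not verified, so the tensor is moved to GPU as in my setup when one is available):
```python
# Minimal sketch of the failing call; MASTER_ADDR/MASTER_PORT are illustrative.
import os
import torch
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

t = torch.ones(8, dtype=torch.bfloat16)
if torch.cuda.is_available():
    t = t.cuda()      # the report uses GPU tensors with the gloo backend
dist.all_reduce(t)    # expected (per the report): RuntimeError "Invalid scalar type"
dist.destroy_process_group()
```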
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
335 | 111,129 |
Update ROCm triton pin
|
module: rocm, open source, ciflow/trunk, topic: not user facing, ciflow/periodic, ciflow/inductor, keep-going
|
Changes:
- Enables bfloat16 support in MFMA dot on MI200 (https://github.com/ROCmSoftwarePlatform/triton/commit/23979098c881536a52ba3edf8164e12296ecfe7b)
- Add support for int8 to bfloat16 conversion (https://github.com/ROCmSoftwarePlatform/triton/commit/2d3e38e182e7b9b21aadb90bc2ab0d6514b3c760) fixing a bug in bf16 triton gemm workloads.
- Enable scanOp lowering by adding shfl_up support https://github.com/ROCmSoftwarePlatform/triton/pull/324
- MFMA16 support - support for the mfma_16x16xX instructions - these help perf on smaller sized GEMMs - https://github.com/ROCmSoftwarePlatform/triton/commit/7e34c244c284a84191a1a7bb0cd484c6345de650
- configurable wavefront-per-eu - this helps us increase our occupancy in certain use cases such as Flash Attention - https://github.com/ROCmSoftwarePlatform/triton/commit/e801638b40dac4fd511f973d9899f033ae94dbec
- Many bug fixes and optimisations
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang
| 4 |
336 | 111,126 |
type promotion test for torch.div variants is broken
|
triaged, module: testing
|
The OpInfos for the `torch.div` variants `floor_rounding` and `trunc_rounding` currently set `promotes_int_to_float=True`:
https://github.com/pytorch/pytorch/blob/b35279dfac2950c053d005fdbe179312c5f9d921/torch/testing/_internal/common_methods_invocations.py#L10882
This is wrong and will be removed in #110939. In turn, this will trigger regular type promotion checks for binary ops:
https://github.com/pytorch/pytorch/blob/b35279dfac2950c053d005fdbe179312c5f9d921/test/test_binary_ufuncs.py#L522-L531
This test however doesn't use the regular sample inputs defined on the op, but rather hand-crafted ones:
https://github.com/pytorch/pytorch/blob/b35279dfac2950c053d005fdbe179312c5f9d921/test/test_binary_ufuncs.py#L474-L476
For regular sample inputs, we set the `rounding_mode` through a `functools.partial` to the sample inputs function
https://github.com/pytorch/pytorch/blob/b35279dfac2950c053d005fdbe179312c5f9d921/torch/testing/_internal/common_methods_invocations.py#L10878
Meaning, the `rounding_mode="floor"` or `rounding_mode="trunc"` never make it to the type promotion test. The test calls `torch.div`, which by default performs true division and thus `promotes_int_to_float=True`, but the test expects it to perform regular type promotion.
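For concreteness, a small sketch of the dtype difference the test currently misses (printed dtypes reflect default settings):
```python
# Sketch: true division promotes integer inputs to float, while the
# floor/trunc rounding variants keep the integer dtype.
import torch

a, b = torch.tensor(7), torch.tensor(2)               # both int64
print(torch.div(a, b).dtype)                          # torch.float32
print(torch.div(a, b, rounding_mode="floor").dtype)   # torch.int64
print(torch.div(a, b, rounding_mode="trunc").dtype)   # torch.int64
```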
| 0 |
337 | 111,123 |
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)
|
module: optimizer, triaged
|
### π Describe the bug
When using deepspeed to train LoRA, I want to use the resume function of the trainer. The sample code is as follows:
```python
causal_model = AutoModelForCausalLM.from_pretrained(model_pretrained_path_,
config=config,
trust_remote_code=True,
low_cpu_mem_usage=self.params["low_cpu_mem_usage"])
peft = PEFT(config_path_or_data=peft_params)
causal_model = peft.get_peft_model(model=causal_model)
trainer = Seq2SeqTrainer(
params=trainer_params,
model=causal_model,
tokenizer=tokenizer,
train_dataset=train_dataset,
data_collator=data_collator,
eval_dataset=eval_dataset,
compute_metrics=dataset_t.metric,
)
trainer.train(resume_from_checkpoint=True)
```
Encountered the following error:
```bash
File "train.py", line 82, in train
trainer.train(resume_from_checkpoint=model_params["resume_from_checkpoint"])
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/transformers/trainer.py", line 1589, in train
return inner_training_loop(
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/transformers/trainer.py", line 1890, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/transformers/trainer.py", line 2785, in training_step
self.accelerator.backward(loss)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/accelerate/accelerator.py", line 1847, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 176, in backward
self.engine.step()
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2083, in step
self._take_model_step(lr_kwargs)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1989, in _take_model_step
self.optimizer.step()
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1786, in step
self._optimizer_step(i)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1697, in _optimizer_step
self.optimizer.step()
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 69, in wrapper
return wrapped(*args, **kwargs)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/torch/optim/optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
ret = func(self, *args, **kwargs)
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/torch/optim/adamw.py", line 171, in step
adamw(
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/torch/optim/adamw.py", line 321, in adamw
func(
File "/mnt/bn/data-ecom-govern-kk/soft/anaconda/envs/kk/lib/python3.10/site-packages/torch/optim/adamw.py", line 615, in _fused_adamw
torch._fused_adamw_(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)
```
I think it is caused by the `step` state tensors not being placed on the corresponding device when the optimizer state is loaded.

But I don't know how to solve this problem.
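A possible workaround sketch (unverified; `optimizer` here stands for whatever AdamW instance ends up being stepped, and an alternative would be recreating the optimizer with `fused=False`): after restoring the checkpoint, move any CPU `step` tensors in the optimizer state onto the parameter device before the first `step()`.
```python
# Workaround sketch (unverified): the fused AdamW kernel expects the `step`
# state tensors on the same CUDA device as the parameters.
import torch

def move_step_state_to(optimizer, device="cuda:0"):
    for state in optimizer.state.values():
        step = state.get("step")
        if torch.is_tensor(step) and step.device.type == "cpu":
            state["step"] = step.to(device)
```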
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.28
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: A100-SXM-80GB
Nvidia driver version: 450.191.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 3000.014
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 55296K
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.0.1+cu118
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 1 |
338 | 111,121 |
torch::serialize::OutputArchive::save_to crash if save on C:\
|
module: windows, module: serialization, triaged
|
### π Describe the bug
```
string model_path = "C:\\model.pt";
serialize::OutputArchive output_archive;
cp_net.save(output_archive);
output_archive.save_to(model_path);
```
Then it crashes and an empty file is created.
If I add a folder like this:
`string model_path = "C:\\Folder\\model.pt";`
it works fine.
### Versions
libtorch-win-shared-with-deps-2.1.0+cpu
MSVC++Windows 10 VS 2022
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @mruberry @mikaylagawarecki
| 2 |
339 | 111,116 |
[MPS] add Hardshrink to MPS backend
|
triaged, open source, release notes: mps, ciflow/mps
|
Adds support for hardshrink forward and backwared to MPS backend.
This is a resubmission of a pull request that was cancelled because I badly botched a rebase (#110816).
| 1 |
340 | 111,113 |
Running inference with model.compile gives us wrong predictions on a X3D regression model trained using PyTorch 1.0
|
high priority, triage review, triaged, module: correctness (silent), oncall: pt2
|
### π Describe the bug
I am using an X3D model modified for regression outputting values between 10 and 60. This model was trained with a 1.x version of PyTorch. I am now using `PyTorch Version: 2.1.0.dev20230726+cu118`
Here are the expected predictions:
```
[[42.1592]],
[[32.6108]],
[[31.3136]],
[[35.6934]],
[[30.8296]],
[[58.7482]],
[[28.8940]],
[[32.3956]],
[[22.5576]],
[[52.4671]],
```
If I add the `model = torch.compile(model)` line after loading the model, I get erroneous predictions such as
`-1.1, 0.1, 1.2, 1.3`.
If I remove the `torch.compile` line, outputs are predicted normally.
Here is my code for performing inference:
```
import os
from collections import OrderedDict
import numpy as np
import pandas as pd
import torch
import tqdm
from oread import datasets
from oread.models import stam, timesformer, vivit, x3d
def run_inference(
chkpt_path,
data_filename,
root,
num_classes,
mean,
std,
split="inference",
modelname="x3d",
device="default",
datapoint_loc_label="filepath",
batch_size=8,
frames=64,
task="class",
save_to_file="view_pred.csv",
save_freq=20,
append_to_previous=True,
apply_mask=False,
transforms=[],
resize=224,
target_label="Outcome",
):
"""Runs test epoch, computes metrics, and plots test set results.
Args:
data_filename ([str]): [name of csv to load]
num_classes ([str]): [number of classes to predict]
chkpt_path ([str]): [path to folder containing "best.pt"]
modelname (str, optional): [name of model architecture to use]. Defaults to 'x3d'.
datapoint_loc_label (str, optional): [name of column in csv with paths to data]. Defaults to 'filepath'.
batch_size (int, optional): [batch size]. Defaults to 8.
frames (int, optional): [number of frames to use per video]. Defaults to 64.
"""
# This mean and STD is for CathEF ONLY
####mean = [120.953316, 120.953316, 120.953316]
####std = [39.573166, 39.573166, 39.573166]
kwargs = {
"num_classes": num_classes,
"mean": mean,
"std": std,
"length": frames,
"period": 1,
"data_filename": data_filename,
"root": root,
"datapoint_loc_label": datapoint_loc_label,
"apply_mask": apply_mask,
"video_transforms": transforms,
"resize": resize,
"target_label": target_label,
}
if device == "default":
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# One-hot encode target, detect number of classes
dataset = pd.read_csv(os.path.join(data_filename), sep="Β΅")
# build model, load state_dict
# specify model
if modelname == "x3d":
model = x3d(num_classes=num_classes, task=task)
if modelname == "timesformer":
model = timesformer(num_classes=num_classes)
if modelname == "stam":
model = stam(num_classes=num_classes)
elif modelname == "vivit":
model = vivit(num_classes=num_classes)
if modelname in [
"c2d_r50",
"i3d_r50",
"slow_r50",
"slowfast_r50",
"slowfast_r101",
"slowfast_16x8_r101_50_50",
"csn_r101",
"r2plus1d_r50",
"x3d_xs",
"x3d_s",
"x3d_m",
"x3d_l",
"mvit_base_16x4",
"mvit_base_32x3",
"efficient_x3d_xs",
"efficient_x3d_s",
]:
from oread.models import pytorchvideo_model
model = pytorchvideo_model(modelname, num_classes)
if device.type == "cuda":
model = torch.nn.DataParallel(model)
### If PyTorch 2.0 is used, the following line is needed to load the model
# model = torch.compile(model)
model.to(device)
if device == torch.device("cuda"):
print("Loading checkpoint from: ", os.path.join(chkpt_path, "best.pt"))
checkpoint = torch.load(os.path.join(chkpt_path, "best.pt"))
else:
checkpoint = torch.load(
os.path.join(chkpt_path, "best.pt"), map_location=torch.device("cpu")
)
# check if checkpoint was created with torch.nn.parallel
key, value = list(checkpoint["state_dict"].items())[0]
is_parallel = "module." in key
is_parallel
print("Is_parallel: ", is_parallel)
# if created with torch.nn.parallel and we are using cpu, we want to remove the "module." from key names
if device == torch.device("cpu") and is_parallel:
state_dict = torch.load(
os.path.join(chkpt_path, "best.pt"), map_location=torch.device("cpu")
)["state_dict"]
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:] # remove `module.`
new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict)
else:
try:
model.load_state_dict(checkpoint["state_dict"])
except:
state_dict = torch.load(os.path.join(chkpt_path, "best.pt"))["state_dict"]
new_state_dict = OrderedDict()
print("Keys don't match, removing _orig_mod.")
for k, v in state_dict.items():
name = k[10:] # remove `_orig_mod.`
new_state_dict[name] = v
model.eval()
# pdb.set_trace()
dataset = datasets.Echo_Inference(split=split, **kwargs)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=batch_size,
num_workers=15,
shuffle=False,
pin_memory=(device.type == "cuda"),
)
print("Starting inference epoch...")
run_inference_epoch(
model,
modelname,
dataloader,
"inference",
device=device,
save_to_file=save_to_file,
save_freq=save_freq,
root=root,
append_to_previous=append_to_previous,
num_classes=num_classes,
)
def run_inference_epoch(
model,
modelname,
dataloader,
phase,
device,
blocks=None,
save_to_file="view_pred.csv",
save_freq=20,
append_to_previous=False,
root="",
num_classes=21,
):
yhat_for_save = []
fns_for_save = []
save_count = 0
nbatch = len(dataloader)
preds_df = None
with torch.set_grad_enabled(False):
with tqdm.tqdm(total=len(dataloader)) as pbar:
for (i, (X, filenames)) in enumerate(dataloader):
X = X.to(device)
average = len(X.shape) == 6
if average:
batch, n_crops, c, f, h, w = X.shape
X = X.view(-1, c, f, h, w)
if modelname in ["vivit", "timesformer", "stam"]:
X = X.permute(0, 2, 1, 3, 4)
if blocks is None:
outputs = model(X)
else:
outputs = torch.cat(
[model(X[j : (j + blocks), ...]) for j in range(0, X.shape[0], blocks)]
)
if average:
outputs = outputs.view(batch, n_crops, -1).mean(1)
if modelname in [
"c2d_r50",
"i3d_r50",
"slow_r50",
"slowfast_r50",
"slowfast_r101",
"slowfast_16x8_r101_50_50",
"csn_r101",
"r2plus1d_r50",
"x3d_xs",
"x3d_s",
"x3d_m",
"x3d_l",
"mvit_base_16x4",
"mvit_base_32x3",
"efficient_x3d_xs",
"efficient_x3d_s",
]:
outputs = outputs[:, 1].float()
elif outputs.shape[0] == 1:
reshaped = outputs.squeeze(2)
print("outputs", outputs)
print("reshaped", reshaped)
else:
reshaped = outputs.squeeze()
print("outputs", outputs)
print("reshaped squeeze", reshaped)
yhat_for_save.append(outputs.squeeze().to("cpu").detach().numpy())
fns_for_save.append(filenames)
# pdb.set_trace()
if (i + 1) % save_freq == 0 or i == nbatch - 1:
# pdb.set_trace()
flatten_yhat_for_save = [
item for sublist in yhat_for_save for item in sublist
]
if num_classes == 1:
# pdb.set_trace()
this_preds = pd.DataFrame(flatten_yhat_for_save)
elif num_classes == 2:
this_preds = pd.DataFrame(np.hstack(yhat_for_save))
else:
this_preds = pd.DataFrame(np.row_stack(yhat_for_save))
flattened_list = [item for sublist in fns_for_save for item in sublist]
this_preds["filename"] = flattened_list
# Assuming this_preds is your DataFrame
integer_columns = [
col for col in this_preds.columns if isinstance(col, int) or col.isdigit()
]
# Select the columns with integer names
selected_columns = this_preds[integer_columns]
# Find the index of the maximum value in each row using idxmax
y_pred_cat = selected_columns.idxmax(axis=1)
# You can now assign y_pred_cat to a new column in this_preds if needed
this_preds["y_pred_cat"] = y_pred_cat
if save_count == 0:
if append_to_previous:
if os.path.exists(save_to_file):
# File exists, read it
preds_df = pd.read_csv(save_to_file)
else:
# File does not exist, create an empty DataFrame with the same columns as this_preds
preds_df = pd.DataFrame(columns=this_preds.columns)
# Concatenate this_preds to preds_df
preds_df = pd.concat([preds_df, this_preds], axis=0)
else:
preds_df = this_preds
else:
preds_df = pd.concat([preds_df, this_preds], axis=0)
preds_df.to_csv(save_to_file, index=False)
print(f"Saved round {save_count}")
yhat_for_save = []
fns_for_save = []
save_count += 1
## Save file only if preds_df is not None
if preds_df is not None:
preds_df.to_csv(save_to_file, index=False)
# pdb.set_trace()
pbar.update()
```
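One thing worth double-checking in the `except` branch above (independent of `torch.compile`): the rebuilt `new_state_dict` does not appear to be passed to `model.load_state_dict`, so if that path is taken the model would run with its original weights. A sketch of the presumably intended completion, reusing the variables from that branch:
```python
# Sketch of the apparently missing step in the except-branch above.
for k, v in state_dict.items():
    name = k[len("_orig_mod."):] if k.startswith("_orig_mod.") else k
    new_state_dict[name] = v
model.load_state_dict(new_state_dict)
```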
### Versions
PyTorch version: 2.1.0.dev20230726+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:49:35) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.62.1.el7.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.60.13
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz
Stepping: 4
CPU MHz: 2699.945
CPU max MHz: 3200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4600.00
Virtualization: VT-x
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 24 MiB
L3 cache: 33 MiB
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full retpoline, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-triton==2.1.0+9e3e10c5ed
[pip3] pytorchvideo==0.1.0
[pip3] pytorchvideo-nightly==0.0.3
[pip3] stam-pytorch==0.0.4
[pip3] timesformer-pytorch==0.3.2
[pip3] torch==2.1.0.dev20230726+cu118
[pip3] torchaudio==2.1.0.dev20230726+cu118
[pip3] torchsummary==1.5.1
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.16.0.dev20230726+cu118
[pip3] triton==2.0.0
[conda] cudatoolkit 11.8.0 h37601d7_11 conda-forge
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+9e3e10c5ed pypi_0 pypi
[conda] pytorchvideo 0.1.0 pypi_0 pypi
[conda] pytorchvideo-nightly 0.0.3 pypi_0 pypi
[conda] stam-pytorch 0.0.4 pypi_0 pypi
[conda] timesformer-pytorch 0.3.2 pypi_0 pypi
[conda] torch 2.1.0.dev20230726+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230726+cu118 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230726+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305
| 6 |
341 | 111,112 |
Add comment to keep PYI041 disabled
|
triaged, open source, release notes: distributed (fsdp), topic: not user facing, module: inductor, ciflow/inductor
|
Enable [redundant-numeric-union (PYI041)](https://docs.astral.sh/ruff/rules/redundant-numeric-union/)
Link: #110950
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
342 | 111,094 |
Reduce overhead in cudagraph trees
|
ciflow/trunk, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111094
* #111016
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
343 | 111,092 |
[HigherOrderOp] Move map_impl to torch.ops.higher_order
|
ciflow/trunk, module: export, release notes: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111152
* #111147
* __->__ #111092
As titled.
Test Plan:
existing tests.
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
Differential Revision: [D50232731](https://our.internmc.facebook.com/intern/diff/D50232731)
| 3 |
344 | 111,089 |
[AOTInductor] Wrapper codegen fixes
|
fb-exported, topic: not user facing, module: inductor, ciflow/inductor
|
Summary:
The changes:
1. When using fake inputs, make sure they are on the same device as the original inputs.
2. Don't change the value of `self.cpp_wrapper` from `True` to `False` if we can't generate a C++ wrapper; instead, check and fail early to avoid producing Python code for the C++ compiler.
Differential Revision: D50154720
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
345 | 111,086 |
Build failure with Xcode 15 linker
|
module: build, triaged, module: macos
|
### π Describe the bug
When building with Apple Clang 15, I see the following build error:
```
FAILED: lib/libtorch_cpu.dylib
...
ld: warning: duplicate -rpath '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/pthreadpool-2021-04-13-3jrtr6fgfmbe7yl2ele6dtttwljr3iqk/lib' ignored
ld: warning: duplicate -rpath '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/protobuf-3.24.3-uj3r4mhyae45adjcmliahvn3rsjyguxu/lib' ignored
ld: warning: duplicate -rpath '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/abseil-cpp-20230125.3-geqbwotmm5vdh6tdbwwnz6ox66ixihjd/lib' ignored
ld: warning: ignoring duplicate libraries: 'lib/libcpuinfo.a', 'lib/libnnpack.a'
ld: multiple errors: duplicate LC_RPATH '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/gcc-13.2.0-qhn4xdjhn2p7k2fnvllfz2ca6xhwl2yu/lib' in '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/openblas-0.3.24-lsn4jcy34p62ysj7qq2ghlsawu6sco6a/lib/libopenblas-r0.3.24.dylib'; duplicate LC_RPATH '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/gcc-13.2.0-qhn4xdjhn2p7k2fnvllfz2ca6xhwl2yu/lib' in '/Users/Adam/spack/opt/spack/darwin-sonoma-m2/apple-clang-15.0.0/openblas-0.3.24-lsn4jcy34p62ysj7qq2ghlsawu6sco6a/lib/libopenblas-r0.3.24.dylib'
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
This appears to be due to the new linker added in Xcode 15. I'm able to build successfully using the following workaround:
```
export LDFLAGS="-Wl,-ld_classic"
```
See the build logs for details:
* [spack-build-out.txt](https://github.com/pytorch/pytorch/files/12874928/spack-build-out.txt)
* [spack-build-env-mods.txt](https://github.com/pytorch/pytorch/files/12874929/spack-build-env-mods.txt)
### Versions
Script fails because I don't have pip installed.
* PyTorch 2.1.0
* Apple Clang 15.0.0
* macOS Sonoma 14.0
* Apple M2 Pro (arm64)
* No CUDA/CUDNN/NCCL
* MPS
cc @malfet @seemethere @albanD
| 1 |
346 | 111,085 |
Getting "master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified" warning when using rdzv.
|
oncall: distributed
|
### π Describe the bug
Here we print the warning based on whether `master_addr` is defined:
- https://github.com/pytorch/pytorch/blob/4d29b40299e9af1e4287b48024b6ef28b7cbf738/torch/distributed/run.py#L695-L699
But it is always defined:
- https://github.com/pytorch/pytorch/blob/4d29b40299e9af1e4287b48024b6ef28b7cbf738/torch/distributed/run.py#L565
This PR is related @cz-37 @d4l3k :
- https://github.com/pytorch/pytorch/pull/88922
### Versions
latest
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
347 | 111,082 |
There is a performance drop because we have not yet implemented the batching rule for aten::mkldnn_rnn_layer_backward.
|
triaged, module: vmap, module: functorch
|
### π Describe the bug
Hi all,
I just received the following warning, and I am doing what is suggested there. Many thanks!
```
/home/kshamsaei/miniconda3/envs/torch/lib/python3.11/site-packages/torch/autograd/__init__.py:303: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::mkldnn_rnn_layer_backward. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343995622/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
### Versions
N/A
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 1 |
348 | 111,311 |
There is a performance drop because we have not yet implemented the batching rule for aten::mkldnn_rnn_layer_backward.
|
triaged, module: functorch
|
Hi all,
I just received the following warning, and I am doing what is suggested there. Many thanks!
```
/home/kshamsaei/miniconda3/envs/torch/lib/python3.11/site-packages/torch/autograd/__init__.py:303: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::mkldnn_rnn_layer_backward. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343995622/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
349 | 111,081 |
AOTAutograd perf: avoid as_strided() calls when we have intermediate bases
|
triaged, oncall: pt2, module: aotdispatch
|
This is a more targeted version of an existing issue around as_strided calls in AOTAutograd, https://github.com/pytorch/pytorch/issues/109237. It came from an internal issue.
Simple repro:
```
import torch
@torch.compile
def f(x):
out = x.mul(2)
return out.view(out.shape), out.view(out.shape)
inps = (torch.randn(5), torch.tensor(0))
x = torch.randn(4, requires_grad=True)
out1, out2 = f(x)
print(out1.grad_fn)
```
prints:
```
<AsStridedBackward0 object at 0x7f4c7d4d2260>
```
We end up calling as_strided in the compiled forward, so an `AsStridedBackward` node shows up in the backward, which in general is not implemented to be particularly fast.
Why does this happen?
(1) AOTAutograd has logic for "intermediate bases". If we have two outputs of our graph that are aliases of each other (and of the same graph intermediate), today, AOTAutograd will just have the shared intermediate be an output to the graph. AOTAutograd will then replay the views off of the intermediate, so that autograd properly realizes that the outputs alias.
(2) AOTAutograd has a function to try to do the view replay, but it hits a slow path in that function that causes it to go to as_strided. We should figure out why and fix this: https://github.com/pytorch/pytorch/blob/main/torch/_functorch/aot_autograd.py#L807
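For reference, a small sketch of why the backward node differs depending on whether the alias is reconstructed by replaying the view or by `as_strided` (not AOTAutograd internals, just the autograd-visible difference):
```python
# Sketch: replaying the view records ViewBackward, while the as_strided
# fallback records the slower AsStridedBackward node seen above.
import torch

base = torch.randn(4, requires_grad=True).mul(2)           # shared intermediate
via_view = base.view(base.shape)                            # "view replay"
via_strided = base.as_strided(base.size(), base.stride())   # fallback path
print(type(via_view.grad_fn).__name__)      # ViewBackward0
print(type(via_strided.grad_fn).__name__)   # AsStridedBackward0
```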
cc @ezyang @msaroufim @wconstab @anijain2305
| 7 |
350 | 111,077 |
dump_operator_names.cc uses std::cout but does not include iostream
|
triaged, better-engineering, actionable
|
### π Describe the bug
I am unable to build PyTorch 2.1.0 on Arch Linux due to this line: https://github.com/pytorch/pytorch/blob/6c7013a3dce50315bb5d3adf33efce1490266c13/binaries/dump_operator_names.cc#L31
Neither dump_operator_names.cc nor its immediate includes include `<iostream>`.
Presumably this works on some platforms because `<iostream>` is pulled in indirectly somewhere, but that should not be relied on.
I also challenge the appropriateness of printing directly to stdout instead of using PyTorch's logging functions, even in a function intended for debugging.
### Versions
Pytorch 2.1.0
| 1 |
351 | 111,075 |
Export swallows exception
|
triaged, oncall: pt2, module: export
|
### π Describe the bug
torch.export, when it catches an exception, raises another exception without chaining the original one.
That hides the failure information.
code point:
https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/utils.py#L1412
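A sketch of the kind of fix (illustrative names only, not the actual code at that line): re-raise with `from` so the original exception stays in the chain.
```python
# Illustrative only: chaining preserves the root-cause traceback instead of
# swallowing it when a new exception is raised in the except block.
def failing_step():
    raise ValueError("original failure detail")

try:
    failing_step()
except Exception as e:
    raise RuntimeError("export failed during graph transformation") from e
```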
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 2 |
352 | 111,070 |
Option to disable fastpath in MHA
|
needs design, oncall: transformer/mha
|
### π The feature, motivation and pitch
I have trained a transformer model, but I'm not able to run inference with it because of the fastpath errors reported in #88806 and #97128. I would like to run inference with the normal path, but I can't find any option to disable the fastpath.
### Alternatives
One option is to add a flag like
```python
torch._fastpath.config.enable=False
```
or use environment variables.
### Additional context
I could disable it by setting the variable on the following line to a non-empty string: https://github.com/pytorch/pytorch/blob/d3205f83770f83cfa315a38f4448c64670758b03/torch/nn/modules/activation.py#L1111
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki
| 0 |
353 | 111,062 |
DISABLED test_unfuse_bias_addmm (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
354 | 111,060 |
DISABLED test_uint4x2_mixed_mm_gating_works (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
355 | 111,059 |
DISABLED test_uint4x2_mixed_mm_fail_to_match (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
356 | 111,058 |
DISABLED test_uint4x2_mixed_mm_epi (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
357 | 111,057 |
DISABLED test_uint4x2_mixed_mm (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
358 | 111,056 |
DISABLED test_splitwithsizes_cat (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
359 | 111,055 |
[dynamo] Allow autograd.Function tracing to smuggle attributes through lifting
|
ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111055
Today, we say an attribute crosses ctx boundaries when it is written to the ctx in an autograd.Function's forward and accessed in its backward.
This mostly works. However, if that attribute has a proxy (tensor, symint), it will fail during attribute lifting because proxy.tracer comes from the forward speculation. We have prior art for bypassing lifting when the proxy is already known to the tracer, via the save_for_backward calls in autograd.Function.
This PR extends a similar hint through the proxy to let attribute lifting know that the proxy is already known to us and is safely tracked in side_effects.
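For readers unfamiliar with the pattern, a simplified sketch of an attribute crossing the ctx boundary (a plain float here; the PR concerns the harder case where the attribute is a proxied tensor/symint):
```python
# Simplified sketch of "smuggling" an attribute through ctx from forward
# to backward.
import torch

class Scale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, factor):
        ctx.factor = factor                  # attribute written in forward ...
        return x * factor

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.factor, None   # ... and read back in backward

x = torch.randn(3, requires_grad=True)
Scale.apply(x, 2.0).sum().backward()
print(x.grad)   # tensor of 2.0s
```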
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
360 | 111,054 |
DISABLED test_mm_plus_mm (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
361 | 111,053 |
DISABLED test_mixed_mm_gating (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
362 | 111,052 |
DISABLED test_mixed_mm_epi_works (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
363 | 111,051 |
DISABLED test_mixed_mm (__main__.TestPaternMatcher)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
364 | 111,050 |
DISABLED test__int_mm (__main__.TestSelectAlgorithm)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
365 | 111,049 |
[sparse] semi-structured sparse + torch.compile support
|
Merged, Reverted, ciflow/trunk, release notes: sparse, ciflow/slow
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111049
* #110583
Summary:
This PR adds in torch.compile support for semi-structured sparsity,
using the subclass tracing @bdhirsh added.
Based on whether we are using cuSPARSELt or CUTLASS, we return a
different representation of the inner tensors.
Test Plan:
```
python test/test_sparse_semi_structured.py -k compile
```
Reviewers:
Subscribers:
Tasks:
Tags:
| 23 |
366 | 111,048 |
Remove the dependency on kineto when not using kineto
|
triaged, open source
|
The shim still requires the ActivityType.h header to get the activity type enum.
So we cut and paste just enough of the header in to do this.
| 3 |
367 | 111,047 |
[HigherOrderOp] cond should accept pytree inputs
|
triaged, oncall: pt2
|
### π The feature, motivation and pitch
Similar to what we did in https://github.com/pytorch/pytorch/pull/109962. We should support pytree inputs for cond as well.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
368 | 111,046 |
[WIP] Persist copy_ in training graph for inputs that don't require grad
|
module: inductor, module: dynamo, ciflow/inductor, release notes: AO frontend
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111046
In this PR, we try to keep input mutations in the forward graph iff the mutation is a data mutation (not a metadata mutation) and the input doesn't require grad. This is for optimizing Inductor training graphs. (For more details: https://github.com/pytorch/pytorch/issues/109240) Previously, this was only enabled for the forward-only path and unconditionally disabled for the joint graph. Another caveat is that when we are tracing through subclasses, we won't allow any input mutations to be preserved in the graph. The reason is that it makes the code quite ugly for no obvious performance improvement.
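For context, a minimal sketch of the kind of input mutation being discussed (a data mutation of an input that doesn't require grad):
```python
# Sketch: buf.copy_ is a data mutation of a non-grad-requiring input; the PR
# is about keeping such mutations in the traced forward graph.
import torch

@torch.compile
def f(buf, x):
    buf.copy_(x * 2)     # input data mutation
    return buf + 1

buf = torch.zeros(4)     # requires_grad=False
out = f(buf, torch.ones(4))
print(out, buf)          # buf is updated in place
```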
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
369 | 111,044 |
[ROCm] Skip failing tests on ROCm
|
module: rocm, open source, topic: not user facing
|
Cherry picked https://github.com/ROCmSoftwarePlatform/pytorch/commit/2d9f3ec757d0c71d613c59e88c082c74a86f828a
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 2 |
370 | 111,043 |
[ROCm] Centos stream9 pytorch image support
|
module: rocm, open source, topic: not user facing
|
This PR is by cherry picking the below two commits.
Centos stream9 PyTorch image support - https://github.com/ROCmSoftwarePlatform/pytorch/commit/5ab8d09d51acd74ccd483e8d3b128f178dd4bca4
Updated to latest conda for CentOS stream 9 - https://github.com/ROCmSoftwarePlatform/pytorch/commit/2cb18f8d102b12b90a1635e78f8c3eaa7ccc8fcd
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
371 | 111,038 |
DISABLED test_multi_return_some_unused (__main__.TestSerialize)
|
module: rocm, triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_multi_return_some_unused) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17591397405).
Over the past 72 hours, it has flakily failed in 6 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_multi_return_some_unused`
Test file path: `export/test_serialize.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
372 | 111,037 |
[experiment] see the effect of freezing on AOTInductor
| null | null | 2 |
373 | 111,034 |
[ONNX] Export of `torch.distributions.normal.Normal` fails in functionalization
|
module: onnx, triaged, onnx-triaged, module: dynamo
|
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
self.normal = torch.distributions.normal.Normal(0, 1)
super().__init__()
def forward(self, x):
return self.normal.sample(x.shape)
model = Model()
x = torch.randn(2, 3)
print(model(x))
print(torch.onnx.dynamo_export(model, x).model_proto)
```
Fails with
```
tensor([[-0.4411, -1.2260, 0.9509],
[-1.6602, 2.4344, 2.1532]])
/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py:130: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
Traceback (most recent call last):
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1195, in dynamo_export
).export()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 950, in export
graph_module = pre_export_passes(
File "<@beartype(torch.onnx._internal.exporter.pre_export_passes) at 0x7f9de3794f70>", line 97, in pre_export_passes
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1235, in pre_export_passes
module = passes.Functionalize(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 151, in wrapper
ctx.log_and_raise_if_error(diag)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 366, in log_and_raise_if_error
raise diagnostic.source_exception
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 135, in wrapper
return_values = fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py", line 267, in run
module = self._run(*args, **kwargs)
File "<@beartype(torch.onnx._internal.fx.passes.functionalization.Functionalize._run) at 0x7f9de37fcee0>", line 11, in _run
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/functionalization.py", line 123, in _run
graph_module = proxy_tensor.make_fx(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 841, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 406, in _fn
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 461, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 406, in _fn
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 817, in trace
(self.create_arg(fn(*args)),),
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 497, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/functionalization.py", line 95, in wrapped
pytree.tree_map(torch._sync, out)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/utils/_pytree.py", line 291, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/utils/_pytree.py", line 291, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_utils.py", line 881, in _functionalize_sync
torch._functionalize_sync(t)
RuntimeError: at::functionalization::impl::isFunctionalTensor(self_) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/python_torch_functions_manual.cpp":561, please report a bug to PyTorch.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/justinchu/dev/onnx-script/onnxscript/export.py", line 16, in <module>
print(torch.onnx.dynamo_export(model, x).model_proto)
File "<@beartype(torch.onnx._internal.exporter.dynamo_export) at 0x7f9de3794e50>", line 53, in dynamo_export
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1206, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
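A possible workaround sketch (untested against the exporter; it assumes the failure is triggered inside `Normal.sample` during functionalization): reparameterize the sampling manually so the forward only uses plain tensor ops.
```python
# Workaround sketch (untested with dynamo_export): loc + scale * N(0, 1)
# matches Normal(loc, scale).sample(x.shape) without the distributions API.
import torch

class ReparamModel(torch.nn.Module):
    def __init__(self, loc: float = 0.0, scale: float = 1.0):
        super().__init__()
        self.loc = loc
        self.scale = scale

    def forward(self, x):
        return self.loc + self.scale * torch.randn_like(x)

print(ReparamModel()(torch.randn(2, 3)))
```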
### SARIF
```json
{
"runs":[
{
"tool":{
"driver":{
"name":"torch.onnx.dynamo_export",
"contents":[
"localizedData",
"nonLocalizedData"
],
"language":"en-US",
"rules":[
{
"id":"FXE0010",
"fullDescription":{
"text":"FX graph transformation during ONNX export before converting from FX IR to ONNX IR.",
"markdown":"This diagnostic tracks the FX passes executed during the ONNX export process prior\nto converting from FX IR (Intermediate Representation) to ONNX IR.\n\nUnder the scope of ONNX export, an FX pass refers to a specific transformation applied to the FX GraphModule.\nThe primary aim of these passes is to streamline the graph into a format that aligns more with the ONNX IR.\nMoreover, these passes work to substitute unsupported FX IR features with those recognized and endorsed by\nONNX IR. Common transformations include, but aren't limited to, decomposition, functionalization and\ntype promotion.\n\nFor those who are interested in a comprehensive log detailing the modifications made during these passes,\nthere are a couple of options:\n\n- Set DiagnosticOptions.verbosity_level to logging.DEBUG.\n- Activate the environment variable TORCH_LOGS='onnx_diagnostics'.\n\nHowever, it's noteworthy that by default, such detailed logging is turned off. The primary reason being\nits considerable impact on performance.\n\nFor an in-depth understanding of each specific pass, please refer to the directory: torch/onnx/_internal/fx/passes.\n"
},
"name":"fx-pass",
"shortDescription":{
"text":"FX graph transformation during ONNX export before converting from FX IR to ONNX IR."
}
}
],
"version":"2.2.0.dev20231011+cpu"
}
},
"language":"en-US",
"newlineSequences":[
"\r\n",
"\n"
],
"results":[
{
"message":{
"markdown":"Running Decompose pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: <class 'torch.onnx._internal.fx.passes.decomp.Decompose'>\n- args: Tuple[length=1](\nTensor(f32[2, 3]),\n)\nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Return values\ntorch.fx.GraphModule(<lambda>)",
"text":"Running Decompose pass. "
},
"codeFlows":[
{
"threadFlows":[
{
"locations":[]
}
]
}
],
"graphs":[],
"kind":"informational",
"level":"none",
"locations":[
{
"message":{
"text":"Transform.run"
},
"physicalLocation":{
"artifactLocation":{
"uri":"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py"
},
"region":{
"snippet":{
"text":"@diagnostics.diagnose_call("
},
"startLine":232
}
}
}
],
"properties":{
"tags":[]
},
"ruleId":"FXE0010",
"stacks":[]
},
{
"message":{
"markdown":"Running Functionalize pass. \n\n## Additional Message:\n\n## Function Signature\n### Function Signature Transform.run\n- self: <class 'torch.onnx._internal.fx.passes.functionalization.Functionalize'>\n- args: Tuple[length=1](\nTensor(f32[2, 3]),\n)\nFor detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.\n## Exception log\n```\nTraceback (most recent call last):\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py\", line 135, in wrapper\n return_values = fn(*args, **kwargs)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py\", line 267, in run\n module = self._run(*args, **kwargs)\n\n File \"<@beartype(torch.onnx._internal.fx.passes.functionalization.Functionalize._run) at 0x7feb91391b40>\", line 11, in _run\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/functionalization.py\", line 123, in _run\n graph_module = proxy_tensor.make_fx(\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 858, in wrapped\n t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_compile.py\", line 24, in inner\n return torch._dynamo.disable(fn, recursive)(*args, **kwargs)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 401, in _fn\n return fn(*args, **kwargs)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/external_utils.py\", line 17, in inner\n return fn(*args, **kwargs)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 478, in dispatch_trace\n graph = tracer.trace(root, concrete_args)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 401, in _fn\n return fn(*args, **kwargs)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/external_utils.py\", line 17, in inner\n return fn(*args, **kwargs)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py\", line 817, in trace\n (self.create_arg(fn(*args)),),\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py\", line 514, in wrapped\n out = f(*tensors)\n\n File \"<string>\", line 1, in <lambda>\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/functionalization.py\", line 95, in wrapped\n pytree.tree_map(torch._sync, out)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/utils/_pytree.py\", line 314, in tree_map\n return tree_unflatten([fn(i) for i in flat_args], spec)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/utils/_pytree.py\", line 314, in <listcomp>\n return tree_unflatten([fn(i) for i in flat_args], spec)\n\n File \"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_utils.py\", line 881, in _functionalize_sync\n torch._functionalize_sync(t) # type: ignore[attr-defined]\n\nRuntimeError: 
at::functionalization::impl::isFunctionalTensor(self_) INTERNAL ASSERT FAILED at \"../torch/csrc/autograd/python_torch_functions_manual.cpp\":598, please report a bug to PyTorch. \n\n```",
"text":"Running Functionalize pass. "
},
"codeFlows":[
{
"threadFlows":[
{
"locations":[]
}
]
}
],
"graphs":[],
"kind":"fail",
"level":"error",
"locations":[
{
"message":{
"text":"Transform.run"
},
"physicalLocation":{
"artifactLocation":{
"uri":"/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py"
},
"region":{
"snippet":{
"text":"@diagnostics.diagnose_call("
},
"startLine":232
}
}
}
],
"properties":{
"tags":[]
},
"ruleId":"FXE0010",
"stacks":[]
}
]
}
],
"version":"2.1.0",
"schemaUri":"https://docs.oasis-open.org/sarif/sarif/v2.1.0/cs01/schemas/sarif-schema-2.1.0.json"
}
```
## Environments
```
Collecting environment information...
PyTorch version: 2.2.0.dev20231011+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-1011-azure-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 6
BogoMIPS: 5586.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt avx512cd avx512bw avx512vl xsaveopt xsavec xsaves md_clear
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 48 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.1
[pip3] onnx-function-experiment==1.13.0.dev20230109
[pip3] onnx-script==0.1.0
[pip3] onnx-weekly==1.15.0.dev20231002
[pip3] onnxconverter-common==1.13.0
[pip3] onnxruntime==1.16.0
[pip3] onnxscript==0.1.0.dev20230913
[pip3] torch==2.2.0.dev20231011+cpu
[pip3] torchaudio==2.2.0.dev20231011+cpu
[pip3] torchvision==0.17.0.dev20231011+cpu
[pip3] triton==2.1.0
[conda] numpy 1.25.1 pypi_0 pypi
[conda] torch 2.2.0.dev20231011+cpu pypi_0 pypi
[conda] torchaudio 2.2.0.dev20231011+cpu pypi_0 pypi
[conda] torchvision 0.17.0.dev20231011+cpu pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 15 |
374 | 111,033 |
`CapabilityBasedPartitioner` returns invalid partitions.
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Running `Super_SloMo` from torchbench with XLA fails with:
```python
File "torch/_dynamo/backends/torchxla.py", line 49, in fwd
compiled_graph = bridge.extract_compiled_graph(model, args)
File "xla/torch_xla/core/dynamo_bridge.py", line 523, in extract_compiled_graph
partitioned_graph = partitioner.fuse_partitions(partitions)
File "torch/fx/passes/infra/partitioner.py", line 217, in fuse_partitions
return fuse_by_partitions(self.graph_module, [list(partition.nodes) for partition in partitions])
File "torch/fx/passes/utils/fuser_utils.py", line 224, in fuse_by_partitions
sub_gm, orig_inputs, orig_outputs = fuse_as_graphmodule(gm, sorted_nodes, submodule_name)
File "torch/fx/passes/utils/fuser_utils.py", line 123, in fuse_as_graphmodule
assert validate_partition(nodes), "Invalid partition, found dependency cycles"
AssertionError: Invalid partition, found dependency cycles
```
By applying the patch below, we can see that the list of partitions returned by `propose_partitions` is already invalid:
```python
diff --git a/torch/fx/passes/infra/partitioner.py b/torch/fx/passes/infra/partitioner.py
index 7693f528af5..45080905cea 100644
--- a/torch/fx/passes/infra/partitioner.py
+++ b/torch/fx/passes/infra/partitioner.py
@@ -1,6 +1,6 @@
from typing import Dict, List, Set, Iterable, Sequence, Optional, Deque
-from torch.fx.passes.utils.fuser_utils import fuse_by_partitions
+from torch.fx.passes.utils.fuser_utils import fuse_by_partitions, validate_partition
from torch.fx.graph_module import GraphModule
from torch.fx.node import Node, _get_qualified_name
@@ -208,6 +208,7 @@ class CapabilityBasedPartitioner:
logger.debug("Partitions proposed:")
for id, partition in partitions_by_id.items():
logger.debug("partition #%s: %s", id, [node.name for node in partition.nodes])
+ assert validate_partition(list(partition.nodes)), "BAD"
return list(partitions_by_id.values())
```
Command used:
```
PJRT_DEVICE=CPU python benchmarks/dynamo/torchbench.py --randomize-input --performance --trace-on-xla --backend=openxla_eval --inference --only Super_SloMo
```
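For reference, a minimal standalone sketch (not from the report; the toy graph and the `AllCallFunctions` support class are illustrative) of how one can run the same `validate_partition` check over the output of `propose_partitions()` without patching PyTorch:
```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.infra.partitioner import CapabilityBasedPartitioner
from torch.fx.passes.operator_support import OperatorSupport
from torch.fx.passes.utils.fuser_utils import validate_partition

class AllCallFunctions(OperatorSupport):
    # Treat every call_function node as supported, so everything is a fusion candidate.
    def is_node_supported(self, submodules, node):
        return node.op == "call_function"

def toy(x):
    return torch.sin(x) + torch.cos(x)

gm = symbolic_trace(toy)
partitioner = CapabilityBasedPartitioner(gm, AllCallFunctions())
for partition in partitioner.propose_partitions():
    # A valid partition must not create dependency cycles with the rest of the graph.
    assert validate_partition(list(partition.nodes)), "invalid partition"
```
This only demonstrates the check itself; reproducing the failing partitions still requires the full `Super_SloMo` + XLA run above.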
### Versions
- PyTorch version: 2bf3ca1be759460cf9fbf011d96d3246001361e9 (Oct 4)
- PyTorch/XLA version: c9a132484fb89bfdc9c602ada7bd8a3cec0db1aa (Oct 3)
- PyTorch/benchmark version: bb5275f819d479baacd24fe96000474355586096 (Oct 4)
- Needs a patch in order to work with XLA
```python
diff --git a/torchbenchmark/models/Super_SloMo/slomo_model.py b/torchbenchmark/models/Super_SloMo/slomo_model.py
index a06e51eb..60160c18 100644
--- a/torchbenchmark/models/Super_SloMo/slomo_model.py
+++ b/torchbenchmark/models/Super_SloMo/slomo_model.py
@@ -249,8 +249,10 @@ class backWarp(nn.Module):
# Use torch.meshgrid instead of np.meshgrid to imrpove performance
# https://github.com/avinashpaliwal/Super-SloMo/pull/111
- self.gridX, self.gridY = torch.meshgrid(torch.arange(W, requires_grad=False, device=device),
- torch.arange(H, requires_grad=False, device=device), indexing='xy')
+ gridX, gridY = torch.meshgrid(torch.arange(W, requires_grad=False, device=device),
+ torch.arange(H, requires_grad=False, device=device), indexing='xy')
+ self.register_buffer("gridX", gridX)
+ self.register_buffer("gridY", gridY)
def forward(self, img, flow):
"""
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
375 | 111,032 |
[dynamo] allow_in_graph decorator doesn't work on autograd.Function
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
sample:
```
import torch
from torch.autograd import Function
import torch._dynamo
@torch._dynamo.allow_in_graph
class Foo(Function):
@staticmethod
def forward(ctx, x):
ctx.x0 = x.size(0)
return x * 2
@staticmethod
def backward(ctx, grad_out):
return grad_out * ctx.x0
@torch.compile(backend="eager", fullgraph=True, dynamic=True)
def foo(x):
return Foo.apply(x)
foo(torch.randn(2, requires_grad=True))
```
You can see we're still trying to speculate the subgraph (and failing due to https://github.com/pytorch/pytorch/issues/111031).
The workaround is to make a function that calls Foo.apply and then allow that wrapper function in the graph, as in the sketch below.
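A minimal sketch of that workaround, reusing the `Foo` class from the sample above (the wrapper name is illustrative):
```python
import torch
import torch._dynamo

@torch._dynamo.allow_in_graph
def apply_foo(x):
    # Wrapping the call lets dynamo treat it as an allowed call instead of
    # speculating the autograd.Function's subgraph.
    return Foo.apply(x)

@torch.compile(backend="eager", fullgraph=True, dynamic=True)
def foo_workaround(x):
    return apply_foo(x)

foo_workaround(torch.randn(2, requires_grad=True))
```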
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
376 | 111,029 |
[JIT] Error when scripting wrapper of `matrix_norm` using `p: Union[str, int]`
|
oncall: jit
|
### 🐛 Describe the bug
The following fails
```python
from typing import Union
import torch
from torch import Tensor, jit
def wrapped_matrix_norm(x: Tensor, p: Union[str, int] = "fro") -> Tensor:
return torch.linalg.matrix_norm(x, ord=p)
jit.script(wrapped_matrix_norm) # error
```
<details> <summary> with exception RuntimeError </summary>
```
Arguments for call are not valid.
The following variants are available:
aten::linalg_matrix_norm(Tensor self, Scalar ord, int[] dim=[-2, -1], bool keepdim=False, *, ScalarType? dtype=None) -> Tensor:
Expected a value of type 'number' for argument 'ord' but instead found type 'Union[int, str]'.
aten::linalg_matrix_norm.str_ord(Tensor self, str ord="fro", int[] dim=[-2, -1], bool keepdim=False, *, ScalarType? dtype=None) -> Tensor:
Expected a value of type 'str' for argument 'ord' but instead found type 'Union[int, str]'.
aten::linalg_matrix_norm.out(Tensor self, Scalar ord, int[] dim=[-2, -1], bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!):
Expected a value of type 'number' for argument 'ord' but instead found type 'Union[int, str]'.
aten::linalg_matrix_norm.str_ord_out(Tensor self, str ord="fro", int[] dim=[-2, -1], bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!):
Expected a value of type 'str' for argument 'ord' but instead found type 'Union[int, str]'.
```
</details>
Adding a completely redundant if-clause makes the error disappear:
```python
def wrapped_matrix_norm2(x: Tensor, p: Union[str, int] = "fro") -> Tensor:
if isinstance(p, int): # completely redundant
return torch.linalg.matrix_norm(x, ord=p)
return torch.linalg.matrix_norm(x, ord=p)
jit.script(wrapped_matrix_norm2) # no error
```
### Versions
<details>
```
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 535.104.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4800,0000
CPU min MHz: 800,0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-annotations==3.0.1
[pip3] flake8-bugbear==23.9.16
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-docstrings==1.7.0
[pip3] flake8-pyi==23.6.0
[pip3] flake8-rst==0.8.0
[pip3] flake8-rst-docstrings==0.3.0
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] torch==2.1.0
[pip3] torchinfo==1.8.0
[pip3] triton==2.1.0
[conda] Could not collect
```
</details>
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
377 | 111,027 |
[PT2.1] SIGSEGV seen with view + sgn operator inside torch.compile
|
triage review, triaged, ZeroTensor, oncall: pt2
|
### 🐛 Describe the bug
When the view operator is combined with the sgn operator inside torch.compile, a segmentation violation (SIGSEGV) occurs.
Please use the code below to reproduce the issue.
```
import torch
def fn(a):
b = a.view((2, 2))
return b.sgn()
x_cpu =torch.tensor([[2.0, 2], [-2, -2]], requires_grad=True)
compiled_fn = torch.compile(fn)
y_cpu = compiled_fn(x_cpu)
print("y_hpu", y_cpu)
```
### Error logs
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007fffeb281440 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_() ()
from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so
(gdb) bt
#0 0x00007fffeb281440 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_() ()
from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so
#1 0x00007fffeb2a9999 in at::FunctionalTensorWrapper::replace_(at::Tensor const&) () from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so
#2 0x00007fffeb2aa48c in at::FunctionalTensorWrapper::regenerate_from_base() () from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so
#3 0x00007ffff6710e3b in torch::autograd::THPVariable__sync(_object*, _object*, _object*) ()
from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_python.so
#4 0x00000000005f6939 in PyCFunction_Call ()
#5 0x00000000005f7506 in _PyObject_MakeTpCall ()
#6 0x0000000000570b8e in _PyEval_EvalFrameDefault ()
#7 0x00007ffff673afcb in custom_eval_frame_shim () from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_python.so
#8 0x00000000005f6ce6 in _PyFunction_Vectorcall ()
#9 0x000000000056b4ed in _PyEval_EvalFrameDefault ()
#10 0x00007ffff673afcb in custom_eval_frame_shim () from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_python.so
#11 0x00000000005697da in _PyEval_EvalCodeWithName ()
#12 0x00000000005f6ec3 in _PyFunction_Vectorcall ()
#13 0x000000000056b4ed in _PyEval_EvalFrameDefault ()
#14 0x00007ffff673afcb in custom_eval_frame_shim () from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_python.so
#15 0x00000000005697da in _PyEval_EvalCodeWithName ()
#16 0x00000000005f6ec3 in _PyFunction_Vectorcall ()
#17 0x0000000000570556 in _PyEval_EvalFrameDefault ()
#18 0x00007ffff673afcb in custom_eval_frame_shim () from /tmp/lib/python3.8/site-packages/torch/lib/libtorch_python.so
#19 0x00000000005697da in _PyEval_EvalCodeWithName ()
#20 0x00000000005f6ec3 in _PyFunction_Vectorcall ()
### Minified repro
_No response_
### Versions
[pip3] numpy==1.24.4
[pip3] torch==2.1.0
[pip3] torchaudio==2.0.1
[pip3] torchdata==0.6.1
[pip3] torchmetrics==1.2.0
[pip3] torchtext==0.15.2a0
[pip3] torchvision==0.15.1a0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
378 | 111,025 |
Unprompted UserWarning
|
module: optimizer, triaged
|
### 🐛 Describe the bug
When running PyTorch code for machine learning, I receive a warning from the function that trains one epoch. The code is:
```
def train_one_epoch(model, epoch_index, tb_writer, optimizer, scheduler):
running_loss = 0.
last_loss = 0.
for i, data in enumerate(training_loader):
inputs, labels = data['image'].to(device), data['rez'].to(device)
optimizer.zero_grad()
try:
outputs = model(inputs.float())
except: continue
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 10 == 9:
last_loss = running_loss / 10
tb_x = epoch_index * len(training_loader) + i + 1
tb_writer.add_scalar('Loss/train', last_loss, tb_x)
running_loss = 0.
before_lr = optimizer.param_groups[0]["lr"]
scheduler.step()
after_lr = optimizer.param_groups[0]["lr"]
target = [data['rez'] for data in training_loader]
return last_loss, scheduler, model, before_lr, after_lr
```
Although the order of the optimizer and lr_scheduler calls is correct, I receive this warning:
```
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before optimizer.step().
```
I believe this is a bug, since it's the only part of the code that uses the optimizer and lr_scheduler. They are initialized like this:
```
optimizer = torch.optim.SGD(model.parameters(), lr=l, momentum=m)
scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=1.0, end_factor=0.3, total_iters=10)
```
### Versions
wget line returned 404 so I am pasting this:
```
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
abseil-cpp 20211102.0 h27087fc_1 conda-forge
absl-py 1.4.0 pyhd8ed1ab_0 conda-forge
aiohttp 3.8.1 py310h5764c6d_1 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
asttokens 2.0.5 pyhd3eb1b0_0 anaconda
async-timeout 4.0.2 py310h06a4308_0
attrs 23.1.0 pyh71513ae_1 conda-forge
backcall 0.2.0 pyhd3eb1b0_0 anaconda
blas 1.0 mkl
blinker 1.6.2 pyhd8ed1ab_0 conda-forge
blosc 1.21.3 h6a678d5_0
boost-cpp 1.73.0 h7f8727e_12
bottleneck 1.3.5 py310ha9d4c09_0 anaconda
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotlipy 0.7.0 py310h7f8727e_1002
brunsli 0.1 h2531618_0 anaconda
bzip2 1.0.8 h7b6447c_0
c-ares 1.19.0 h5eee18b_0
ca-certificates 2023.01.10 h06a4308_0 anaconda
cachetools 5.3.1 pyhd8ed1ab_0 conda-forge
cairo 1.16.0 hb05425b_5
certifi 2022.12.7 py310h06a4308_0 anaconda
cffi 1.15.1 py310h5eee18b_3
cfitsio 3.470 h5893167_7
charls 2.2.0 h2531618_0 anaconda
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.6 unix_pyh707e725_0 conda-forge
cloudpickle 2.0.0 pyhd3eb1b0_0 anaconda
contourpy 1.0.5 py310hdb19cb5_0
cryptography 41.0.2 py310h774aba0_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
curl 8.1.1 h37d81fd_2
cycler 0.11.0 pyhd3eb1b0_0
cytoolz 0.12.0 py310h5eee18b_0 anaconda
dask-core 2022.7.0 py310h06a4308_0 anaconda
dbus 1.13.18 hb2f20db_0
decorator 5.1.1 pyhd3eb1b0_0 anaconda
executing 0.8.3 pyhd3eb1b0_0 anaconda
expat 2.4.9 h6a678d5_0
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.9.0 py310h06a4308_0
fontconfig 2.14.1 h52c9d5c_1
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 h4a9f257_0
freexl 1.0.6 h27cfd23_0
frozenlist 1.3.3 py310h5eee18b_0
fsspec 2022.11.0 py310h06a4308_0 anaconda
gdal 3.6.2 py310h708d02d_0
geos 3.8.0 he6710b0_0
geotiff 1.7.0 h2a26cda_1
giflib 5.2.1 h5eee18b_3
glib 2.69.1 he621ea3_2
gmp 6.2.1 h295c915_3
gmpy2 2.1.2 py310heeb90bb_0
gnutls 3.6.15 he1e5248_0
google-auth 2.22.0 pyh1a96a4e_0 conda-forge
google-auth-oauthlib 1.0.0 pyhd8ed1ab_1 conda-forge
grpc-cpp 1.48.2 h5bf31a4_0
grpcio 1.48.2 py310h5bf31a4_0
gst-plugins-base 1.14.1 h6a678d5_1
gstreamer 1.14.1 h5eee18b_1
hdf4 4.2.13 h3ca952b_2
hdf5 1.10.6 h3ffc7dd_1
icu 58.2 he6710b0_3
idna 3.4 py310h06a4308_0
imagecodecs 2021.8.26 py310h46e8fbd_2
imageio 2.19.3 py310h06a4308_0 anaconda
importlib-metadata 6.8.0 pyha770c72_0 conda-forge
intel-openmp 2023.1.0 hdb19cb5_46305
ipython 8.8.0 py310h06a4308_0 anaconda
jedi 0.18.1 py310h06a4308_1 anaconda
jinja2 3.1.2 py310h06a4308_0
jpeg 9e h5eee18b_1
json-c 0.16 h5eee18b_0
jxrlib 1.1 h7b6447c_2 anaconda
kealib 1.5.0 hd940352_0
kiwisolver 1.4.4 py310h6a678d5_0
krb5 1.20.1 h568e23c_1
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libaec 1.0.4 he6710b0_1 anaconda
libboost 1.73.0 h28710b8_12
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libclang 10.0.1 default_hb85057a_2
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.7.1.12 0 nvidia
libcurand 10.3.3.129 0 nvidia
libcurl 8.1.1 h91b91d3_2
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_0
libedit 3.1.20221030 h5eee18b_0
libev 4.33 h7f8727e_1
libevent 2.1.12 h8f2d780_0
libffi 3.4.4 h6a678d5_0
libgcc-ng 11.2.0 h1234567_1
libgdal 3.6.2 hc0e11bb_0
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libkml 1.3.0 h096b73e_6
libllvm10 10.0.1 hbcb73fb_5
libnetcdf 4.8.1 h8322cc2_2
libnghttp2 1.52.0 ha637b67_1
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libpq 12.15 h37d81fd_1
libprotobuf 3.20.3 he621ea3_0
libspatialite 4.3.0a h71b31bf_21
libssh2 1.10.0 h37d81fd_2
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.0 h6a678d5_2
libunistring 0.9.10 h27cfd23_0
libuuid 1.41.5 h5eee18b_0
libwebp 1.2.4 h11a3e52_1
libwebp-base 1.2.4 h5eee18b_1
libxcb 1.15 h7f8727e_0
libxkbcommon 1.0.1 hfa300c1_0
libxml2 2.9.14 h74e7548_0
libxslt 1.1.35 h4e12654_0
libzip 1.8.0 h5cef20c_0
libzopfli 1.0.3 he6710b0_0 anaconda
locket 1.0.0 py310h06a4308_0 anaconda
lz4-c 1.9.4 h6a678d5_0
markdown 3.4.4 pyhd8ed1ab_0 conda-forge
markupsafe 2.1.1 py310h7f8727e_0
matplotlib 3.7.1 py310h06a4308_1
matplotlib-base 3.7.1 py310h1128e8f_1
matplotlib-inline 0.1.6 py310h06a4308_0 anaconda
mkl 2023.1.0 h213fc3f_46343
mkl-service 2.4.0 py310h5eee18b_1
mkl_fft 1.3.6 py310h1128e8f_1
mkl_random 1.2.2 py310h1128e8f_1
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.3.0 py310h06a4308_0
multidict 6.0.2 py310h5eee18b_0
munkres 1.1.4 py_0
mypy-extensions 1.0.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
networkx 3.1 py310h06a4308_0
nspr 4.35 h6a678d5_0
nss 3.89.1 h6a678d5_0
numexpr 2.8.4 py310h85018f9_1
numpy 1.25.2 py310h5f9d8c6_0
numpy-base 1.25.2 py310hb5e798b_0
oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge
openh264 2.1.1 h4ff587b_0
openjpeg 2.4.0 h3ad879b_0
openssl 1.1.1v h7f8727e_0
packaging 23.0 py310h06a4308_0
pandas 1.5.2 py310h1128e8f_0 anaconda
parso 0.8.3 pyhd3eb1b0_0 anaconda
partd 1.2.0 pyhd3eb1b0_1 anaconda
pcre 8.45 h295c915_0
pcre2 10.37 he7ceb23_1
pexpect 4.8.0 pyhd3eb1b0_3 anaconda
pickleshare 0.7.5 pyhd3eb1b0_1003 anaconda
pillow 9.4.0 py310h6a678d5_0
pip 23.2.1 py310h06a4308_0
pixman 0.40.0 h7f8727e_1
ply 3.11 py310h06a4308_0
poppler 22.12.0 h381b16e_0
poppler-data 0.4.11 h06a4308_1
postgresql 12.15 h16c4e8d_1
proj 6.2.1 h05a3930_0
prompt-toolkit 3.0.36 py310h06a4308_0 anaconda
protobuf 3.20.3 py310h6a678d5_0
psutil 5.9.5 pypi_0 pypi
ptyprocess 0.7.0 pyhd3eb1b0_2 anaconda
pure_eval 0.2.2 pyhd3eb1b0_0 anaconda
pyasn1 0.4.8 py_0 conda-forge
pyasn1-modules 0.2.7 py_0 conda-forge
pycparser 2.21 pyhd3eb1b0_0
pygments 2.11.2 pyhd3eb1b0_0 anaconda
pyjwt 2.8.0 pyhd8ed1ab_0 conda-forge
pyopenssl 23.2.0 py310h06a4308_0
pyparsing 3.0.9 py310h06a4308_0
pyqt 5.15.7 py310h6a678d5_1
pyqt5-sip 12.11.0 pypi_0 pypi
pyre-extensions 0.0.30 pypi_0 pypi
pysocks 1.7.1 py310h06a4308_0
python 3.10.12 h7a1cb2a_0
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.10 2_cp310 conda-forge
pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2022.7 py310h06a4308_0 anaconda
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pywavelets 1.4.1 py310h5eee18b_0 anaconda
pyyaml 6.0 py310h5eee18b_1 anaconda
qhull 2020.2 hdb19cb5_2
qt-main 5.15.2 h327a75a_7
qt-webengine 5.15.9 hd2b0992_4
qtwebkit 5.212 h4eab89a_4
re2 2022.04.01 h27087fc_0 conda-forge
readline 8.2 h5eee18b_0
requests 2.31.0 py310h06a4308_0
requests-oauthlib 1.3.1 pyhd8ed1ab_0 conda-forge
rsa 4.9 pyhd8ed1ab_0 conda-forge
scikit-image 0.19.3 py310h6a678d5_1 anaconda
scipy 1.11.1 py310h5f9d8c6_0
setuptools 68.0.0 py310h06a4308_0
sip 6.6.2 py310h6a678d5_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.1.9 h295c915_0 anaconda
sqlite 3.41.2 h5eee18b_0
stack_data 0.2.0 pyhd3eb1b0_0 anaconda
sympy 1.11.1 py310h06a4308_0
tbb 2021.8.0 hdb19cb5_0
tensorboard 2.14.0 pyhd8ed1ab_0 conda-forge
tensorboard-data-server 0.7.0 py310h52d8a92_0
tifffile 2021.7.2 pyhd3eb1b0_2 anaconda
tiledb 2.3.3 h1132f93_2
tk 8.6.12 h1ccaba5_0
toml 0.10.2 pyhd3eb1b0_0
toolz 0.12.0 py310h06a4308_0 anaconda
torchaudio 2.0.2 py310_cu117 pytorch
torcheval 0.0.6 pypi_0 pypi
torchtnt 0.2.0 pypi_0 pypi
torchtriton 2.0.0 py310 pytorch
torchvision 0.15.2 py310_cu117 pytorch
tornado 6.3.2 py310h5eee18b_0
tqdm 4.65.0 py310h2f386ee_0
traitlets 5.7.1 py310h06a4308_0 anaconda
typing-inspect 0.9.0 pypi_0 pypi
typing_extensions 4.7.1 py310h06a4308_0
tzdata 2023c h04d1e81_0
urllib3 1.26.16 py310h06a4308_0
wcwidth 0.2.5 pyhd3eb1b0_0 anaconda
werkzeug 2.3.6 pyhd8ed1ab_0 conda-forge
wheel 0.38.4 py310h06a4308_0
xerces-c 3.2.4 h94c2ce2_0
xz 5.4.2 h5eee18b_0
yaml 0.2.5 h7b6447c_0 anaconda
yarl 1.7.2 py310h5764c6d_2 conda-forge
zfp 0.5.5 h295c915_6 anaconda
zipp 3.16.2 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0
```
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 0 |
379 | 111,024 |
ncu python conv2d.py runs indefinitely after activating cudnn.benchmark
|
module: cudnn, module: cuda, triaged
|
### 🐛 Describe the bug
I was trying to see how the cuDNN library selects the most performant algorithm for depthwise convolution, so I used ncu to display the candidates' performance, but unfortunately the kernels kept running indefinitely. Here is the script I used:
```python
import torch
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
[N, C, H, W] = [2, 512, 64, 128]
kernel_size = (3, 3)
stride = 1
perf_repeat = 1
bias = False
dtype = torch.float
memory_format = torch.channels_last
input_1 = torch.ones(N, C, H, W, dtype = dtype).to("cuda")
if memory_format == torch.channels_last:
input_1 = input_1.to(memory_format = memory_format)
dwconv_layer = torch.nn.Conv2d(in_channels = C, out_channels = C, kernel_size = kernel_size,
stride = stride, dilation = 1, padding = 1, groups = C, bias = bias, dtype = dtype).to("cuda")
if memory_format == torch.channels_last:
dwconv_layer = dwconv_layer.to(memory_format = memory_format)
for i in range(perf_repeat):
output = dwconv_layer(input_1)
```
The command I used is `ncu python the_above_script.py`, and the profiling output is:
> ==PROF== Connected to process 2373 (/usr/bin/python3.10)
==PROF== Profiling "elementwise_kernel" - 0: 0%....50%....100% - 10 passes
==PROF== Profiling "conv2d_c1_k1_nhwc_kernel" - 1: 0%....50%....100% - 10 passes
==PROF== Profiling "conv2d_c1_k1_nhwc_kernel" - 2: 0%....50%....100% - 10 passes
==PROF== Profiling "conv2d_grouped_direct_kernel" - 3: 0%....50%....100% - 10 passes
==PROF== Profiling "conv2d_grouped_direct_kernel" - 4: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 5: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 6: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 7: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 8: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 9: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 10: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 11: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 12: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 13: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 14: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 15: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 16: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 17: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 18: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 19: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 20: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 21: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 22: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 23: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 24: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 25: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 26: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 27: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 28: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 29: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 30: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 31: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 32: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 33: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 34: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 35: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 36: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 37: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 38: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 39: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 40: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 41: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 42: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 43: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 44: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 45: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 46: 0%....50%....100% - 10 passes
==PROF== Profiling "nhwcToNchwKernel" - 47: 0%....50%....100% - 10 passes
==PROF== Profiling "kern_precompute_indices" - 48: 0%....50%....100% - 10 passes
==PROF== Profiling "precomputed_convolve_sgemm" - 49: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 50: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 51: 0%....50%....100% - 10 passes
==PROF== Profiling "implicit_convolve_sgemm" - 52: ^C0%....50%...==PROF== Received signal
==PROF== Trying to shutdown target application
It didn't seem like it was going to stop, so I had to kill it in the end.
But without ncu, i.e. just `python the_above_script.py`, it finishes in a flash.
The environment I ran the test in is nvcr.io/nvidia/pytorch:23.08-py3.
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+29c30b1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402P 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5589.71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] onnx==1.14.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+29c30b1
[pip3] torch-tensorrt==2.0.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0+440fd1b
[conda] Could not collect
cc @csarofeen @ptrblck @xwang233
| 0 |
380 | 111,020 |
[Dynamo] Error in speculate_subgraph doesn't report inner user stack trace
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
self explanatory
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 5 |
381 | 111,019 |
[Dynamo] Support more argument types for autograd Function speculate: HigherOrderOperator with body that accepts non-Tensors as input
|
triaged, module: dynamo
|
### 🐛 Describe the bug
If I pass a user-defined object, a list, etc. to an autograd.Function, speculate_subgraph should work as long as all accessed elements can be flattened into the supported types. See the sketch below for the kind of call this covers.
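A hedged sketch of such a call (the names here are hypothetical, not from the report). In eager mode this runs today; capturing the compiled call without a graph break is what the request asks for:
```python
import dataclasses
import torch

@dataclasses.dataclass
class ScaleConfig:
    scale: float

class ScaleFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, cfg):
        ctx.scale = cfg.scale
        return x * cfg.scale

    @staticmethod
    def backward(ctx, grad_out):
        # No gradient for the non-tensor config argument.
        return grad_out * ctx.scale, None

def f(x):
    return ScaleFn.apply(x, ScaleConfig(scale=2.0))

f(torch.randn(2, requires_grad=True))          # works in eager
compiled = torch.compile(f, backend="eager")   # capturing this is what the request covers
```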
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 0 |
382 | 111,008 |
[optim] be explicit about CPU scalar tensor dtypes
|
triaged, open source, release notes: optim
|
Fixes https://github.com/pytorch/pytorch/issues/110940
cc @janeyx99 @crcrpar
| 2 |
383 | 111,007 |
Disable FlashAttention for is_causal=True when seqlen q not equal kv
|
ciflow/trunk, topic: not user facing, module: multi-headed-attention
|
# Summary:
This pull request **removes** support for non-square sequence lengths in causal attention when using FlashAttention V2.
### Why are doing this
// FlashAttention 2 updated the default mask meaning for causal in this PR:
// 9e5e8bc91e it is now aligned to lower_right which would be a BC break
// for non-square masks. We will not support non-square masks for causal w/ FAV2
For more context see:
https://github.com/pytorch/pytorch/issues/108108
### Followup
A large number of people will likely want to use FAV2 with lower_right causal attention for non equal sequence lengths. See this RFC : https://github.com/pytorch/pytorch/issues/110681
| 5 |
384 | 111,003 |
Dynamo inlining should compile partial subgraphs
|
triaged, module: dynamo
|
Consider a simple case of:
```
def h(x):
x = x.cos()
print(x)
x = x.cos()
return x
def g(x):
x = x.cos()
x = h(x)
x = x.cos()
return x
def f(x):
x = x.cos()
x = g(x)
x = x.cos()
return x
```
In an ideal state, dynamo would produce 2 graphs.
```
cos()
cos()
cos()
```
`print` in eager
and
```
cos()
cos()
cos()
```
However, today, we produce 6 due to how inlining runs (nesting inlining down) and how it fails (bubbling up). Each graph has a cos() in it.
This is pretty inefficient, and we lose a lot of compilation value from this - it is also probably representative of a real world problem where graph breaks in leaves have an outsized impact on the # of graphs we produce, and 1 graph break != 1 split in the graph.
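A hedged sketch of how one can count the produced graphs and graph breaks for the example above, using `torch._dynamo.explain` (the field names assume the ExplainOutput API of recent versions; older versions return a tuple instead):
```python
import torch

explanation = torch._dynamo.explain(f)(torch.randn(3))
# With the behavior described above, this reports more graphs (and breaks)
# than the ideal two-graph split around the print().
print(explanation.graph_count, explanation.graph_break_count)
```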
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 5 |
385 | 110,997 |
Increase ROCm test shards to 6
|
module: rocm, open source, ciflow/trunk, topic: not user facing
|
To reduce signal time
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
386 | 110,995 |
DISABLED test_mem_leak (__main__.TestProfilerCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
387 | 110,994 |
DISABLED test_cublas_baddbmm_large_input_1_10000_10000_10000_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
388 | 110,993 |
DISABLED test_cublas_baddbmm_large_input_2_1000_1000_1000_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
389 | 110,992 |
DISABLED test_cublas_baddbmm_large_input_2_100_100_100_cuda_float16 (__main__.TestMatmulCudaCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on MI210 runners as part of https://github.com/pytorch/pytorch/pull/105980:
https://github.com/pytorch/pytorch/actions/runs/5744430164/job/15572649474
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
390 | 110,980 |
DISABLED test_jvp_linalg_det_singular_cpu_float32 (__main__.TestOperatorsCPU)
|
triaged, module: macos, module: regression, skipped, module: functorch
|
Platforms: macos
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/functorch%2Ftest_ops.py%3A%3ATestOperatorsCPU%3A%3Atest_jvp_linalg_det_singular_cpu_float32)).
cc @albanD @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 1 |
391 | 110,971 |
`pip install deepspeed` fails if the number of GPUs is greater than a certain small number?
|
needs reproduction, oncall: binaries, triaged, module: third_party
|
### 🐛 Describe the bug
These attempts at installing fail:
Attempt 1:
```bash
pip install deepspeed --no-cache-dir
```
Attempt 2:
```bash
export CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7'
pip install deepspeed --no-cache-dir
```
These attempts succeed:
Attempt 3:
```bash
export CUDA_VISIBLE_DEVICES='0,1,2,3'
pip install deepspeed --no-cache-dir
```
Attempt 4:
```bash
export CUDA_VISIBLE_DEVICES=0
pip install deepspeed --no-cache-dir
```
When installation fails, the following is printed in the terminal:
```bash
Collecting deepspeed
Downloading deepspeed-0.11.1.tar.gz (1.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 12.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [38 lines of output]
[2023-10-10 18:49:53,030] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-10 18:49:55,635] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 260, in _lazy_init
queued_call()
File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 145, in _check_capability
capability = get_device_capability(d)
File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 381, in get_device_capability
prop = get_device_properties(device)
File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 399, in get_device_properties
return _get_device_properties(device) # type: ignore[name-defined]
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-29c3v_m5/deepspeed_7b78606fcc504b8795ce3e3f5bbf6990/setup.py", line 173, in <module>
op_compatible = builder.is_compatible()
File "/tmp/pip-install-29c3v_m5/deepspeed_7b78606fcc504b8795ce3e3f5bbf6990/op_builder/spatial_inference.py", line 31, in is_compatible
cuda_capability = torch.cuda.get_device_properties(0).major
File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 395, in get_device_properties
_lazy_init() # will define _get_device_properties
File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 264, in _lazy_init
raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch.
CUDA call was originally invoked at:
[' File "<string>", line 2, in <module>\n', ' File "<pip-setuptools-caller>", line 34, in <module>\n', ' File "/tmp/pip-install-29c3v_m5/deepspeed_7b78606fcc504b8795ce3e3f5bbf6990/setup.py", line 31, in <module>\n import torch\n', ' File "<frozen importlib._bootstrap>", line 1027, in _find_and_load\n', ' File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 688, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 883, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed\n', ' File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/__init__.py", line 1146, in <module>\n _C._initExtension(manager_path())\n', ' File "<frozen importlib._bootstrap>", line 1027, in _find_and_load\n', ' File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 688, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 883, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed\n', ' File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 197, in <module>\n _lazy_call(_check_capability)\n', ' File "/home/davidlee/anaconda3/envs/lumpyllama/lib/python3.10/site-packages/torch/cuda/__init__.py", line 195, in _lazy_call\n _queued_calls.append((callable, traceback.format_stack()))\n']
DS_BUILD_OPS=0
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
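A hedged, torch-only sketch (not from the report) that isolates the failing call, which can help confirm what the runtime sees under each `CUDA_VISIBLE_DEVICES` setting:
```python
import os
import torch

print(os.environ.get("CUDA_VISIBLE_DEVICES"))
print(torch.cuda.device_count())
# This mirrors the call made in deepspeed's setup.py that trips the internal assert.
print(torch.cuda.get_device_properties(0))
```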
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.31
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.73.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 3391.453
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.60
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @seemethere @malfet @osalpekar @atalman
| 3 |
392 | 110,970 |
[Nested tensor] Support nested tensor in hook
|
fb-exported, topic: new features, release notes: nested tensor
|
Summary:
## Motivation:
In a hook, check_variable_result checks the grad size to make sure that the hook function doesn't change it.
Since the C++ nested tensor doesn't support size, this check gets blocked when a hook is used with nested tensors.
## This diff:
Replace the sym_sizes call with is_same_size, which supports nested tensors.
To use is_same_size, also change the signatures from TensorBase to Tensor.
Test Plan: add a test case in which the hook changes the tensor size and verify that it throws (see the sketch below).
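A hedged sketch (plain dense tensors, illustrative only) of the size check this refers to — a hook that returns a grad of a different size is expected to raise rather than silently change the gradient:
```python
import torch

x = torch.randn(3, requires_grad=True)

def bad_hook(grad):
    # Deliberately return a grad of the wrong size.
    return torch.ones(5)

x.register_hook(bad_hook)
try:
    (x * 2).sum().backward()
except RuntimeError as err:
    print("hook size change rejected:", err)
```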
Differential Revision: D50106928
| 7 |
393 | 110,967 |
[dynamo] add infinite generators `itertools.{count, repeat, cycle}`
|
triaged, open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
Fixes https://github.com/pytorch/pytorch/pull/110953/files#r1352868935
Depends on: https://github.com/pytorch/pytorch/pull/110953
Why not use these for `repeat(item, count)`:
> These are not preferred as they return an opaque VariableTracker. In particular, one cannot do `enumerate(repeat(1))`. `repeat(1, 10)` benefits from the integration enjoyed by `ListVariableIterator`
Follow ups:
- [ ] make listiterator an IteratorVariable, define iterator integrations on base IteratorVariable where unspecialized https://github.com/pytorch/pytorch/pull/110967#discussion_r1356656469
- Please make a new issue for this
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ezyang
| 3 |
394 | 110,966 |
`torch.is_autocast_enabled()` always False on CPU
|
high priority, triage review, module: amp (automated mixed precision)
|
### π Describe the bug
`torch.is_autocast_enabled()` seems to always return `False` on CPU. For example:
```python
import torch
device = torch.device("cpu")
with torch.autocast(device.type, torch.bfloat16, enabled=True):
    assert torch.is_autocast_enabled()
```
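For comparison, the CPU-specific query does appear to reflect the autocast state (assuming `torch.is_autocast_cpu_enabled()` is available in this version), which suggests the generic query only reports the CUDA flag:
```python
import torch

with torch.autocast("cpu", torch.bfloat16, enabled=True):
    print(torch.is_autocast_enabled())      # False, as reported above
    print(torch.is_autocast_cpu_enabled())  # expected True for CPU autocast
```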
### Versions
Collecting environment information...
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: version 3.26.2
Libc version: N/A
Python version: 3.10.10 (main, Feb 16 2023, 02:46:59) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)
Python platform: macOS-13.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.2.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.1.0
[pip3] torch-optimizer==0.3.0
[pip3] torchaudio==2.1.0
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.16.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 5 |
395 | 110,965 |
[ONNX] Update ACPT to support Python 3.11
|
module: onnx, triaged, onnx-triaged, release notes: onnx
|
### π The feature, motivation and pitch
[Azure Container for PyTorch (aka ACPT)](https://learn.microsoft.com/en-us/azure/machine-learning/resource-azure-container-for-pytorch?view=azureml-api-2) is currently distributed with Python 3.8/3.9. Although this is fine for the PyTorch 1.x series, the latest PyTorch 2.x leverages Python 3.11 features for better graph lowering from `torch.nn.Module` to `torch.fx.GraphModule`. It also has model transformations/optimizations that are better maintained on newer Python versions. Below is a list of several places where Dynamo forks behavior between older (mostly 3.8 and 3.9) and newer Python versions (3.10 and 3.11):
```bash
(ptca) root@88c3e49b4bcf:/opt/pytorch# grep sys\.version_info torch/_dynamo -Rn
torch/_dynamo/bytecode_transformation.py:114: inst = "JUMP_FORWARD" if sys.version_info >= (3, 11) else "JUMP_ABSOLUTE"
torch/_dynamo/bytecode_transformation.py:144: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:162: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:168: if sys.version_info < (3, 8) and n >= 4:
torch/_dynamo/bytecode_transformation.py:170: if sys.version_info < (3, 10) and n >= 5:
torch/_dynamo/bytecode_transformation.py:196: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:208: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:222: assert sys.version_info < (3, 10)
torch/_dynamo/bytecode_transformation.py:244: assert sys.version_info >= (3, 10) and sys.version_info < (3, 11)
torch/_dynamo/bytecode_transformation.py:294: assert sys.version_info >= (3, 11)
torch/_dynamo/bytecode_transformation.py:441: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:463: if sys.version_info < (3, 10):
torch/_dynamo/bytecode_transformation.py:474: if sys.version_info >= (3, 10):
torch/_dynamo/bytecode_transformation.py:503: if sys.version_info < (3, 11):
torch/_dynamo/bytecode_transformation.py:538: if sys.version_info < (3, 10):
torch/_dynamo/bytecode_transformation.py:540: elif sys.version_info < (3, 11):
torch/_dynamo/bytecode_transformation.py:550: if sys.version_info < (3, 11):
torch/_dynamo/bytecode_transformation.py:558: if sys.version_info >= (3, 11) and "BACKWARD" in inst.opname:
torch/_dynamo/bytecode_transformation.py:560: if sys.version_info >= (3, 10):
torch/_dynamo/bytecode_transformation.py:740: assert sys.version_info >= (3, 11)
torch/_dynamo/bytecode_transformation.py:864: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:922: if sys.version_info < (3, 11):
torch/_dynamo/bytecode_transformation.py:955: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:1001: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:1004: if sys.version_info >= (3, 10):
torch/_dynamo/bytecode_transformation.py:1008: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:1040: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:1055: if sys.version_info < (3, 10):
torch/_dynamo/bytecode_transformation.py:1065: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:1081: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_transformation.py:1087: if sys.version_info < (3, 11):
torch/_dynamo/resume_execution.py:64: "NOP" if sys.version_info < (3, 11) else "PUSH_EXC_INFO"
torch/_dynamo/resume_execution.py:78: if sys.version_info < (3, 11):
torch/_dynamo/resume_execution.py:105: if sys.version_info < (3, 9):
torch/_dynamo/resume_execution.py:113: elif sys.version_info < (3, 11):
torch/_dynamo/resume_execution.py:161: if sys.version_info < (3, 9):
torch/_dynamo/resume_execution.py:178: elif sys.version_info < (3, 11):
torch/_dynamo/resume_execution.py:355: is_py311_plus = sys.version_info >= (3, 11)
torch/_dynamo/resume_execution.py:440: if sys.version_info >= (3, 11):
torch/_dynamo/resume_execution.py:507: if sys.version_info >= (3, 11):
torch/_dynamo/output_graph.py:774: if sys.version_info >= (3, 11):
torch/_dynamo/output_graph.py:1288: if sys.version_info >= (3, 11) and kind in (
torch/_dynamo/symbolic_convert.py:214: if sys.version_info < (3, 9):
torch/_dynamo/symbolic_convert.py:467: if sys.version_info >= (3, 11) and inst.opname == "CALL":
torch/_dynamo/symbolic_convert.py:489: if sys.version_info >= (3, 11) and inst.opname == "CALL":
torch/_dynamo/symbolic_convert.py:668: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:846: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:1156: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:1200: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:1437: if sys.version_info < (3, 11):
torch/_dynamo/symbolic_convert.py:1440: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:1664: if sys.version_info < (3, 11):
torch/_dynamo/symbolic_convert.py:1668: if sys.version_info < (3, 11):
torch/_dynamo/symbolic_convert.py:1719: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:1802: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:1970: if sys.version_info >= (3, 10):
torch/_dynamo/symbolic_convert.py:2141: if sys.version_info >= (3, 11):
torch/_dynamo/symbolic_convert.py:2320: if sys.version_info >= (3, 11):
torch/_dynamo/variables/misc.py:1053: if sys.version_info < (3, 11):
torch/_dynamo/codegen.py:251: if push_null and sys.version_info >= (3, 11):
torch/_dynamo/codegen.py:279: assert sys.version_info >= (3, 11)
torch/_dynamo/codegen.py:292: if sys.version_info >= (3, 11) and push_null:
torch/_dynamo/codegen.py:300: if sys.version_info < (3, 11):
torch/_dynamo/codegen.py:347: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_analysis.py:13:if sys.version_info >= (3, 9):
torch/_dynamo/bytecode_analysis.py:15:if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_analysis.py:64: if sys.version_info >= (3, 11):
torch/_dynamo/bytecode_analysis.py:222: sys.version_info < (3, 9) and inst.opcode == dis.opmap["CALL_FINALLY"]
torch/_dynamo/backends/registry.py:102: if sys.version_info < (3, 10):
torch/_dynamo/types.py:17:if sys.version_info >= (3, 11):
torch/_dynamo/testing.py:344: if sys.version_info >= (3, 11):
torch/_dynamo/eval_frame.py:506: if sys.version_info < (3, 11):
torch/_dynamo/eval_frame.py:595: if sys.version_info >= (3, 12):
torch/_dynamo/test_case.py:24: or sys.version_info >= (3, 12)
torch/_dynamo/guards.py:123:if sys.version_info[:2] <= (3, 8):
torch/_dynamo/utils.py:1925: assert sys.version_info >= (3, 11)
```
Running dynamo on Python 3.8/3.9 can therefore produce suboptimal graphs that miss the latest optimizations in dynamo. Another side effect is that important third-party dependencies, such as `transformers`, may also not be optimal on older Python versions.
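A minimal sketch of the gating pattern shown above (taken from the first `bytecode_transformation.py` hit), just to make the fork in behavior concrete:
```python
import sys

# Dynamo picks different jump opcodes depending on the interpreter version;
# on 3.8/3.9 the 3.11-only paths are simply never exercised.
jump_opname = "JUMP_FORWARD" if sys.version_info >= (3, 11) else "JUMP_ABSOLUTE"
print(sys.version_info[:2], "->", jump_opname)
```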
### Alternatives
_No response_
### Additional context
_No response_
| 3 |
396 | 110,963 |
increase CPU memory requirement for test_nll_loss_large
|
triaged, open source, ciflow/trunk, topic: not user facing
|
Running `python test_nn.py -v -k test_nll_loss_large_tensor` on a machine with little host RAM available (e.g. ~50GB) fails with a `SIGKILL`, even though the currently specified memory requirements for CPU (and GPU) are set to 48GB and are thus met.
Profiling the peak memory usage via:
```
\time -v python test_nn.py -v -k test_nll_loss_large_tensor
```
and adding `print(torch.cuda.memory_summary())` at the end of the test shows a higher host RAM usage of >100GB and a device memory usage of ~32GB.
```
Command being timed: "python test_nn.py -v -k test_nll_loss_large_tensor"
User time (seconds): 81.66
System time (seconds): 229.02
Percent of CPU this job got: 671%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:46.30
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 118150096
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 90280839
Voluntary context switches: 1669
Involuntary context switches: 1214548
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
```
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 32769 MiB | 32769 MiB | 81923 MiB | 49154 MiB |
| from large pool | 32768 MiB | 32768 MiB | 81921 MiB | 49152 MiB |
| from small pool | 0 MiB | 0 MiB | 1 MiB | 1 MiB |
|---------------------------------------------------------------------------|
| Active memory | 32769 MiB | 32769 MiB | 81923 MiB | 49154 MiB |
| from large pool | 32768 MiB | 32768 MiB | 81921 MiB | 49152 MiB |
| from small pool | 0 MiB | 0 MiB | 1 MiB | 1 MiB |
|---------------------------------------------------------------------------|
| Requested memory | 32769 MiB | 32769 MiB | 81923 MiB | 49154 MiB |
| from large pool | 32768 MiB | 32768 MiB | 81921 MiB | 49152 MiB |
| from small pool | 0 MiB | 0 MiB | 1 MiB | 1 MiB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 32774 MiB | 32774 MiB | 81938 MiB | 49164 MiB |
| from large pool | 32772 MiB | 32772 MiB | 81930 MiB | 49158 MiB |
| from small pool | 2 MiB | 2 MiB | 8 MiB | 6 MiB |
|---------------------------------------------------------------------------|
...
```
We haven't seen this issue before as the majority of our runners have sufficient host RAM and I just ran into it by chance.
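A small self-contained sketch of the profiling approach used above (hypothetical sizes, not the actual test), in case it is useful for reproducing the measurement:
```python
import resource
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
logits = torch.randn(1024, 1024, device=device)
target = torch.randint(0, 1024, (1024,), device=device)
loss = F.nll_loss(F.log_softmax(logits, dim=1), target)

# ru_maxrss is reported in kilobytes on Linux
print("peak host RSS (GB):", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024**2)
if device == "cuda":
    print(torch.cuda.memory_summary())
```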
CC @atalman @malfet @crcrpar
| 1 |
397 | 110,961 |
[v2.1.1] Release Tracker
|
triaged
|
### π Describe the bug
This issue is for tracking cherry-picks to the release branch. Following is [release branch](https://github.com/pytorch/pytorch/tree/release/2.1) for the 2.1.1 release.
Our plan from this point is roughly:
* Phase 1 (until 11/3): Cherry-pick post deadline (End of day 5PM PST)
* Phase 2 (after 11/3): Perform extended integration/stability/performance testing based on Release Candidate builds.
**Only issues that have βcherry-picksβ in this tracker will be considered for the release.**
## Cherry-Pick Criteria
**Phase 1 (until 11/3):**
The Releng team relies on the cherry-pick process to manage risk to release quality, i.e. by porting a small set of "must-have" commits from trunk into the release branch, we keep the changes to the minimum needed to address pressing issues. Thus, not everything a developer lands in trunk will make it into the release. So, please consider the criteria below and follow the cherry-picking process. Only low-risk changes may be cherry-picked from master:
1. Fixes to regressions against the most recent release (e.g. 2.1.0 for 2.1.1 release; see [module: regression issue list](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22module%3A+regression%22+))
2. Low risk critical fixes for: silent correctness, backwards compatibility, crashes, deadlocks, (large) memory leaks
3. Fixes to new features being introduced in 2.1.0 release
4. Documentation improvements
5. Release branch specific changes (e.g. blocking ci fixes, change version identifiers)
Any other change requires special dispensation from the release managers (currently @atalman, @huydhn, @osalpekar, @malfet). If this applies to your change please write "Special Dispensation" in the "Criteria Category:" template below and explain.
**Phase 2 (after 11/3):**
Note that changes here require us to rebuild a Release Candidate and restart extended testing (likely delaying the release). Therefore, the only accepted changes are **Release-blocking** critical fixes for: [silent correctness](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+correctness+%28silent%29%22), [backwards compatibility](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+bc-breaking%22+), [crashes](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+crash%22+), [deadlocks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+deadlock%22+), (large) [memory leaks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+memory+usage%22+)
Changes will likely require a discussion with the larger release team over VC or Slack.
## Cherry-Pick Process
1. Ensure your PR has landed in master. This does not apply for release-branch specific changes (see Phase 1 criteria).
2. Create (but do not land) a PR against the [release branch](https://github.com/pytorch/pytorch/tree/release/2.0).
<details>
```bash
# Find the hash of the commit you want to cherry pick
# (for example, abcdef12345)
git log
git fetch origin release/2.1
git checkout release/2.1
git cherry-pick abcdef12345
# Submit a PR based against 'release/2.1' either:
# via the GitHub UI
git push my-fork
# via the GitHub CLI
gh pr create --base release/2.1
```
</details>
3. Make a request below with the following format:
```
Link to landed master PR (if applicable):
*
Link to release branch PR:
*
Criteria Category:
*
```
1. Someone from the release team will reply with approved / denied or ask for more information.
2. If approved, someone from the release team will merge your PR once the tests pass. **Do not land the release branch PR yourself.**
**NOTE: Our normal tools (ghstack / ghimport, etc.) do not work on the release branch.**
See [HUD 2.1](https://hud.pytorch.org/hud/pytorch/pytorch/release%2F2.1/1?per_page=50)
### Versions
2.1.1
| 12 |
398 | 110,960 |
nccl flight recorder
|
release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110960
Keep a buffer of the last 16384 nccl work actions, including the stack
trace that launched the event.
When `torch._C._distributed_c10d._dump_nccl_trace()` is called, it can dump these to a pickled archive.
For each action we get:
process_group_id, seq_id, collective_name, size_of_first_tensor, stack trace
state - issued, started, completed (based on cuda events and queried if
necessary when the dump is requested)
I tested that it is possible to query event state when the streams are
otherwise stuck.
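A hedged sketch of consuming the dump (this assumes the call returns the pickled bytes directly and that an NCCL process group is already initialized; both are assumptions based on the description above):
```python
import pickle
import torch

# Internal API added by this PR; only meaningful inside a live NCCL process group.
dump = torch._C._distributed_c10d._dump_nccl_trace()
entries = pickle.loads(dump)  # assumption: the pickled archive is returned as bytes
for entry in entries[:5]:
    # per the description: process_group_id, seq_id, collective_name, size, stack trace, state
    print(entry)
```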
Differential Revision: [D50138956](https://our.internmc.facebook.com/intern/diff/D50138956)
| 4 |
399 | 110,959 |
`model.named_buffers()` fails if module not hashable.
|
module: nn, triaged
|
### π Describe the bug
Today, I learned that subclassing `Mapping` removes the `object.__hash__` function (https://github.com/python/cpython/issues/110645). Thereafter, `model.named_buffers()` fails because it tries to use a hash-based `__contains__`.
```python
from collections.abc import Mapping
from torch import Tensor, nn
class Foo(nn.Module, Mapping):
    tensors: dict[str, Tensor]

    def __len__(self):
        return len(self.tensors)

    def __iter__(self):
        return iter(self.tensors)

    def __getitem__(self, key):
        return self.tensors[key]

    def __init__(self, tensors: dict[str, Tensor]) -> None:
        super().__init__()
        self.tensors = tensors
m = Foo({})
print(dict(m.named_buffers()))
```
fails with `TypeError: unhashable type: 'Foo'`, because of:
https://github.com/pytorch/pytorch/blob/57f6368b8ea3b1d16dabd236edf716bf87a95ca1/torch/nn/modules/module.py#L2368-L2370
There is a second place in the code that might be affected:
https://github.com/pytorch/pytorch/blob/57f6368b8ea3b1d16dabd236edf716bf87a95ca1/torch/nn/modules/module.py#L1495-L1503
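For anyone hitting this, one possible workaround (an assumption on my side, not an endorsed fix) is to restore the identity-based `__hash__` that subclassing `Mapping` removes:
```python
from collections.abc import Mapping
from torch import Tensor, nn

class HashableFoo(nn.Module, Mapping):
    # Mapping sets __hash__ to None; point it back at the default identity hash
    # so Module's memo-set bookkeeping (e.g. in named_buffers) works again.
    __hash__ = nn.Module.__hash__

    def __init__(self, tensors: dict[str, Tensor]) -> None:
        super().__init__()
        self.tensors = tensors

    def __len__(self):
        return len(self.tensors)

    def __iter__(self):
        return iter(self.tensors)

    def __getitem__(self, key):
        return self.tensors[key]

m = HashableFoo({})
print(dict(m.named_buffers()))  # {} instead of TypeError
```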
### Versions
<details>
```
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 535.104.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4800,0000
CPU min MHz: 800,0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-annotations==3.0.1
[pip3] flake8-bugbear==23.9.16
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-docstrings==1.7.0
[pip3] flake8-pyi==23.6.0
[pip3] flake8-rst==0.8.0
[pip3] flake8-rst-docstrings==0.3.0
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] torch==2.1.0
[pip3] torchinfo==1.8.0
[pip3] triton==2.1.0
```
</details>
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 5 |
400 | 110,958 |
[sparse] Add wanda pruner to torch.ao.pruning
|
release notes: AO frontend
|
This PR adds an implementation of Wanda pruning: https://arxiv.org/abs/2306.11695
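For context, a rough sketch of the importance score from the paper (my reading of the linked arXiv work, not the `torch.ao.pruning` implementation added here): weights are ranked by their magnitude times the L2 norm of the corresponding input feature.
```python
import torch

def wanda_scores(weight: torch.Tensor, activations: torch.Tensor) -> torch.Tensor:
    # weight: (out_features, in_features); activations: (num_samples, in_features)
    feature_norms = activations.norm(p=2, dim=0)        # per-input-feature L2 norm
    return weight.abs() * feature_norms.unsqueeze(0)    # elementwise |W| * ||X||

W = torch.randn(8, 16)
X = torch.randn(32, 16)
scores = wanda_scores(W, X)
mask = scores >= scores.median()  # keep roughly the top half (illustrative threshold)
print(mask.float().mean().item())
```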
| 2 |