Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
3,901 | 91,210 |
Odd/hand-wavy mathematical notation for Conv2D
|
module: docs, triaged
|
### 📚 The doc issue
Beginning with https://github.com/pytorch/pytorch/blob/cdbca3563e12097a8bf359e937d6d6196fa6dc89/torch/nn/modules/conv.py#L188
there is a description of the convolution result per output channel $C_{out_j}$ and batch sample $N_i$. While this is meant to indicate that $i$ and $j$ iterate over $N$ and $C_{out}$, respectively, it is unusual and misleading notation (correct me if I am wrong). Despite having a strong mathematical background, my peers and I found it confusing. E.g., using $N$ and $N_i$ in the same context usually means that $N_i$ is something like $N$ (i.e. a cardinality), but generalized to different instances indexed by $i$, where $N$ would otherwise occur as a constant. That is not the case here.
### Suggest a potential alternative/fix
Use the more common and less confusing notation $(i\leq N,\ j\leq C_{out})$ or $(i\in [N],\ j \in [C_{out}])$:
```
\text{out}(i,j) = \text{bias}(j) +
\sum_{k = 0}^{C_{in} - 1} \text{weight}(j, k)
\star \text{input}(i, k) \quad i \leq N,\ j \leq C_{out}
```
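For illustration (not part of the original issue), a minimal numerical check of this formula: it computes $\text{out}(i,j)$ for one $(i, j)$ pair by summing per-channel cross-correlations and compares against `nn.Conv2d`:
```python
# Illustration only (not from the original issue): verify
# out(i, j) = bias(j) + sum_k weight(j, k) ⋆ input(i, k)
# for a single (i, j), where ⋆ is 2D cross-correlation.
import torch
import torch.nn.functional as F

N, C_in, C_out, K = 2, 3, 4, 3
x = torch.randn(N, C_in, 8, 8)
conv = torch.nn.Conv2d(C_in, C_out, K)
ref = conv(x)

i, j = 1, 2
manual = conv.bias[j] + sum(
    F.conv2d(x[i, k][None, None], conv.weight[j, k][None, None]).squeeze()
    for k in range(C_in)
)
print(torch.allclose(manual, ref[i, j], atol=1e-5))  # True
```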
cc @svekars @carljparker
| 0 |
3,902 | 91,205 |
dcp resharding does not work for optimizer state_dict
|
triaged, module: distributed_checkpoint
|
### 🚀 The feature, motivation and pitch
See branch for more details: https://github.com/pytorch/pytorch/pull/91204
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
3,903 | 91,199 |
functorch.functionalize doesn't work with torch.autograd.grad
|
triaged, module: functorch
|
### 🐛 Describe the bug
functorch.functionalize doesn't work with torch.autograd.grad
```python
import torch
import torch.nn as nn
import functorch


def poc5():
    device = 'cpu'
    m = nn.Conv1d(4, 6, kernel_size=3, groups=2)
    a = torch.rand(2, 4, 6, requires_grad=True)
    output = m(a)
    grad_input = torch.autograd.grad(
        output, (a,) + tuple(m.parameters()), output, create_graph=True)
    grad_grad_input = torch.autograd.grad(
        output.sum() + sum(map(lambda x: x.sum(), grad_input)),
        (a, output) + tuple(m.parameters()),
        retain_graph=True)
    print(output, grad_input, grad_grad_input)


print("CPU")
poc5()
print()
print("CPU Functionalize")
functorch.functionalize(poc5)()
```
Output:
```
CPU
tensor([[[-0.0416, 0.0633, 0.1877, 0.1563],
[ 0.5807, 0.8847, 0.8411, 0.7011],
[-0.2374, 0.4458, -0.1048, 0.4874],
[-0.1426, 0.0703, 0.0341, -0.2153],
[-0.3945, -0.5313, -0.2777, -0.3541],
[ 0.4357, 0.0254, 0.4761, 0.2220]],
[[ 0.0705, -0.0600, 0.0560, -0.1300],
[ 0.5297, 0.6658, 0.4965, 0.3662],
[ 0.4612, 0.0911, 0.1681, 0.1920],
[-0.0510, -0.2265, -0.0296, -0.0428],
[-0.5988, -0.4101, -0.6353, -0.4182],
[ 0.0980, 0.4569, 0.1632, 0.3944]]],
grad_fn=<ConvolutionBackward0>) (tensor([[[ 0.1978, 0.0159, 0.5327, -0.2142, 0.2105, -0.2667],
[-0.0689, -0.1114, 0.5579, 0.3765, 0.7924, 0.2061],
[ 0.0759, 0.0995, 0.4027, 0.0226, 0.1847, 0.1128],
[ 0.0998, 0.0047, 0.4515, 0.2834, 0.1925, 0.1348]],
[[-0.0979, 0.3522, -0.0266, 0.0480, 0.0330, -0.0885],
[-0.1017, 0.2365, 0.2303, 0.3968, 0.2941, 0.0952],
[ 0.0590, 0.2283, 0.1790, 0.4196, 0.1189, 0.1172],
[-0.0765, 0.3494, 0.2414, 0.4521, 0.3344, 0.1329]]],
grad_fn=<ConvolutionBackwardBackward0>), tensor([[[ 0.1523, 0.1114, 0.1164],
[ 0.1564, 0.3719, 0.2319]],
[[ 2.3688, 2.3834, 1.9017],
[ 2.8668, 2.8862, 2.7958]],
[[ 0.0638, 1.0662, 0.2218],
[ 1.0382, 0.8604, 0.7477]],
[[-0.2262, -0.3722, -0.5417],
[-0.4131, -0.3037, -0.3188]],
[[-1.9698, -2.2514, -2.0218],
[-1.8486, -2.1851, -1.9120]],
[[ 1.5098, 1.0615, 1.6217],
[ 1.4141, 1.1910, 1.2088]]], grad_fn=<ConvolutionBackwardBackward0>), tensor([ 0.3023, 5.0660, 1.5035, -0.6036, -3.6199, 2.2716],
grad_fn=<ConvolutionBackwardBackward0>)) (tensor([[[-0.9171, 3.2532, 0.6846, 2.3250, 1.4339, -1.6889],
[-1.1733, 5.6259, 8.2622, 9.9275, 9.5273, 3.3577],
[ 1.4693, -2.3686, -1.5956, -1.9603, -2.6840, -0.3330],
[ 1.8973, -0.8998, -2.4363, -2.4392, -3.6129, -2.0951]],
[[-0.0349, 3.0964, 1.2196, 0.1238, 0.5640, -1.8088],
[-0.2971, 5.2504, 8.3502, 6.9556, 6.5995, 2.0023],
[ 0.7931, -1.7679, -2.9909, -2.3164, -3.6156, -0.0857],
[ 1.1485, -0.3616, -3.2510, -3.0696, -4.6184, -2.1542]]]), tensor([[[5.4868, 5.2254, 5.7954, 5.1858],
[6.1318, 5.8703, 6.4403, 5.8307],
[5.1361, 4.8746, 5.4446, 4.8350],
[5.4957, 4.5772, 4.7318, 4.4624],
[4.9110, 3.9925, 4.1471, 3.8777],
[6.2895, 5.3710, 5.5256, 5.2562]],
[[5.0298, 5.2085, 4.2526, 3.8369],
[5.6747, 5.8534, 4.8975, 4.4818],
[4.6790, 4.8577, 3.9018, 3.4861],
[4.6068, 5.6864, 5.2705, 5.2502],
[4.0221, 5.1018, 4.6858, 4.6655],
[5.4006, 6.4802, 6.0643, 6.0440]]]), tensor([[[19.0322, 18.2713, 16.3716],
[22.8101, 22.7071, 20.6910]],
[[26.1237, 25.3532, 23.1478],
[30.4675, 30.2632, 27.9566]],
[[18.9673, 18.2118, 16.4783],
[22.4376, 22.3898, 20.5315]],
[[22.6013, 22.5619, 23.8033],
[21.3588, 22.2065, 20.4382]],
[[16.9326, 16.8373, 17.9731],
[15.8470, 16.5156, 15.0074]],
[[29.0779, 29.1143, 30.4990],
[27.6223, 28.7131, 26.5916]]]), tensor([40.0211, 45.1806, 37.2150, 40.0808, 35.4036, 46.4314]))
CPU Functionalize
Traceback (most recent call last):
File "poc5.py", line 24, in <module>
functorch.functionalize(poc5)()
File "/workspaces/work/pytorch/torch/_functorch/vmap.py", line 35, in fn
return f(*args, **kwargs)
File "/workspaces/work/pytorch/torch/_functorch/eager_transforms.py", line 1445, in wrapped
func_outputs = func(*func_args, **func_kwargs)
File "poc5.py", line 11, in poc5
grad_input = torch.autograd.grad(
File "/workspaces/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
Under `functionalize`, the error occurs at the first `grad_input = torch.autograd.grad(...)` call.
### Versions
Nightly
cc @zou3519 @Chillee @samdow @soumith @bdhirsh @wonjoolee95 @JackCaoG
| 2 |
3,904 | 91,184 |
DISABLED test_index_add_correctness (__main__.TestTorch)
|
triaged, module: flaky-tests, skipped, module: scatter & gather ops
|
Platforms: linux, rocm, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_index_add_correctness&suite=TestTorch) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10207821466).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 6 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED IF THE CI IS GREEN. Flaky tests are now shielded from developers, so CI will still be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_index_add_correctness`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @mikaylagawarecki
| 10 |
3,905 | 91,182 |
onednn(mkldnn) backend support for quantized operators
|
oncall: quantization, triaged, module: arm
|
### 🚀 The feature, motivation and pitch
Feature request: oneDNN backend support for quantized operators, at least for INT8.
Motivation: Given the acceptable accuracies from INT8 quantized inference and the significant performance gains, the feature is getting attention for server-class use cases as well, not just mobile CPUs.
PyTorch currently supports only the QNNPACK backend for quantization on Arm CPUs, under the assumption that the feature is of interest only for mobile devices. As you know, QNNPACK doesn't support any of the Neoverse optimizations. However, optimized GEMM libraries such as Arm Compute Library are already supported via the oneDNN backend. So, enabling oneDNN for quantized operators, similar to how it has been done for full-precision operators (fp32 and bf16), would let applications make use of the optimized GEMM kernels on the latest Arm SoCs.
### Alternatives
I haven't considered fbgemm as an alternative here because currently fbgemm doesn't have optimized kernels for aarch64.
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @malfet
| 15 |
3,906 | 91,173 |
not able to import pipelines as torch.distributed is missing
|
oncall: distributed, pipeline parallelism
|
### 🐛 Describe the bug
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): No module named 'torch.distributed'
This is the error I get while trying to import `pipeline` from transformers. My TensorFlow is upgraded to the latest version 2.11.0, and I am running this on Windows 11 in VS Code using Python 3.10.
Detailed Error if this helps:
File c:\Users\varun\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\import_utils.py:1093, in _LazyModule._get_module(self, module_name)
1092 try:
-> 1093 return importlib.import_module("." + module_name, self.__name__)
1094 except Exception as e:
File c:\Users\varun\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File c:\Users\varun\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\__init__.py:48
40 from ..utils import (
...
1097 f" traceback):\n{e}"
1098 ) from e
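A quick diagnostic sketch (not part of the original report): `torch.distributed.is_available()` is the standard way to check whether a given PyTorch build ships distributed support.
```python
# Diagnostic sketch (assumption: a standard pip/conda build of torch is installed).
import torch

print(torch.__version__)
# False on builds compiled without distributed support; if the submodule is
# missing entirely, this line itself raises, which points at a broken install.
print(torch.distributed.is_available())
```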
### Versions
4.4.0
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
3,907 | 91,165 |
[FSDP] FSDP with CPU offload consumes `1.65X` more GPU memory when training models with most of the params frozen
|
high priority, triage review, oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
Context: We have more and more situations where a large part of the model that's being trained is frozen. As these are very large LLMs, we want to leverage FSDP with CPU offloading to fit such large model training with only a tiny fraction of training params on consumer GPUs. To this end, below is an example of finetuning `bigscience/mt0-small` using LoRA parameter efficient fine-tuning method with/without FSDP.
To fine-tune with FSDP:
1. Following https://github.com/huggingface/accelerate/issues/807, to avoid `AssertionError: expects all parameters to have same requires_grad`, we created a custom `auto_wrap_policy` such that the layers with trainable params end up in separate FSDP units from the frozen ones (a sketch of such a policy follows the module printout below). The resulting model along with the FSDP options is given below. We are using Accelerate's FSDP integration; the trainable params have the `lora_` prefix:
```
FullyShardedDataParallelPlugin(sharding_strategy=<ShardingStrategy.FULL_SHARD: 1>, backward_prefetch=<BackwardPrefetch.BACKWARD_PRE: 1>, mixed_precision_policy=None, auto_wrap_policy=functools.partial(<function _or_policy at 0x7f022c1a8430>, policies=[functools.partial(<function lambda_auto_wrap_policy at 0x7f022c1a8160>, lambda_fn=<function fsdp_auto_wrap_policy.<locals>.lambda_policy_fn at 0x7f01ec31e3b0>), functools.partial(<function transformer_auto_wrap_policy at 0x7f022c1a8310>, transformer_layer_cls=(<class 'pet.tuners.prefix_tuning.PrefixEncoder'>, <class 'pet.tuners.p_tuning.PromptEncoder'>, <class 'pet.tuners.prompt_tuning.PromptEmbedding'>, <class 'transformers.models.t5.modeling_t5.T5Block'>))]), cpu_offload=CPUOffload(offload_params=True), ignored_modules=None, state_dict_type=<StateDictType.FULL_STATE_DICT: 1>, state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=True), limit_all_gathers=False)
FullyShardedDataParallel(
(_fsdp_wrapped_module): PETModelForSeq2SeqLM(
(base_model): LoRAModel(
(model): MT5ForConditionalGeneration(
(shared): Embedding(250112, 512)
(encoder): T5Stack(
(embed_tokens): Embedding(250112, 512)
(block): ModuleList(
(0): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
(relative_attention_bias): Embedding(32, 6)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(1): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(2): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(3): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(4): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(5): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(6): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(7): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(final_layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(decoder): T5Stack(
(embed_tokens): Embedding(250112, 512)
(block): ModuleList(
(0): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
(relative_attention_bias): Embedding(32, 6)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(1): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(2): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(3): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(4): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(5): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(6): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(7): FullyShardedDataParallel(
(_fsdp_wrapped_module): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(
in_features=512, out_features=384, bias=False
(lora_dropout): Dropout(p=0.1, inplace=False)
(lora_A): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=512, out_features=8, bias=False)
)
(lora_B): FullyShardedDataParallel(
(_fsdp_wrapped_module): Linear(in_features=8, out_features=384, bias=False)
)
)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(final_layer_norm): FusedRMSNorm(torch.Size([512]), eps=1e-06, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(lm_head): Linear(in_features=512, out_features=250112, bias=False)
)
)
)
)
```
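For reference, a minimal sketch of the kind of `auto_wrap_policy` described in step 1 above (an illustration built from the helpers visible in the plugin repr — `_or_policy`, `lambda_auto_wrap_policy`, `transformer_auto_wrap_policy` — and the `lora_` naming; it is not the exact code used in this report, and only `T5Block` is listed as a frozen transformer class):
```python
# Sketch only: wrap small all-trainable leaf modules (e.g. lora_A / lora_B) in
# their own FSDP units, and frozen transformer blocks (T5Block) in separate units.
import functools
from torch.distributed.fsdp.wrap import (
    _or_policy,
    lambda_auto_wrap_policy,
    transformer_auto_wrap_policy,
)
from transformers.models.t5.modeling_t5 import T5Block


def lambda_policy_fn(module):
    # True for leaf modules whose own parameters are all trainable.
    params = list(module.parameters(recurse=False))
    return (
        len(list(module.children())) == 0
        and len(params) > 0
        and all(p.requires_grad for p in params)
    )


auto_wrap_policy = functools.partial(
    _or_policy,
    policies=[
        functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn),
        functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={T5Block}),
    ],
)
```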
2. The number of trainable params is given below:
```
trainable params: 344064 || all params: 300520832 || trainable%: 0.11448923447676333
```
3. Now, the issue is that in comparison to **plain PyTorch**, FSDP consumes **1.65X more GPU memory** instead of reducing it substantially as expected, while also greatly increasing the memory consumed on CPU. Below are the screenshots for the same. The hardware used is 1 A100 80GB GPU.
`Plain PyTorch`: *(memory usage screenshot not reproduced here)*
`FSDP Full Shard with CPU offloading`: *(memory usage screenshot not reproduced here)*
4. **When trying to use FSDP with CPU offloading for the `bigscience/mt0-xxl` model (13B params) on an A100 80GB GPU, it results in a GPU OOM error, whereas plain PyTorch consumes 56GB of GPU memory.**
5. **Expected behaviour**: Efficiently handle frozen weights during training so that large models can be offloaded to CPU / sharded across GPUs properly, storing optimizer state only for the trainable parameters. For example, with plain PyTorch the mt0-xxl (13B params) model takes up 56GB on GPU; it would be really helpful if FSDP with CPU offloading made that training work on a 16GB or 24GB GPU.
### Versions
Collecting environment information...
PyTorch version: 1.14.0.dev20221117+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA DGX Display
GPU 4: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.7.1
[pip3] numpy==1.23.4
[pip3] torch==1.14.0.dev20221117+cu117
[pip3] torchaudio==0.14.0.dev20221117
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221117
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] msgpack-numpy 0.4.7.1 pypi_0 pypi
[conda] numpy 1.23.4 pypi_0 pypi
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.14.0.dev20221117+cu117 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20221117 py310_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221117 py310_cu117 pytorch-nightly
cc @ezyang @gchanan @zou3519 @kadeng @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 17 |
3,908 | 91,156 |
`quantile` fails for `float16`/`half` inputs
|
module: cpu, triaged, enhancement, module: half
|
### 🐛 Describe the bug
```python
import torch
a = torch.randn(10, 10, dtype=torch.float16)
a.quantile(0.25)
```
results in
```python
RuntimeError: quantile() input tensor must be either float or double dtype
```
Looking at the [relevant piece of code](https://github.com/pytorch/pytorch/blob/511fbad830543c703c0b78a18ed015a4333cdbc2/aten/src/ATen/native/Sorting.cpp#L215), the intention was probably to disallow integer dtypes.
*Edit: the following code produces the same error, so it's not a CPU-specific issue*
```python
a = torch.randn(10, 10, dtype=torch.float16).cuda()
a.quantile(0.25)
```
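Until this is fixed, a possible workaround sketch (not from the original report) is to upcast to `float32` for the computation and cast the result back:
```python
import torch

a = torch.randn(10, 10, dtype=torch.float16)
# Workaround sketch: compute the quantile in float32, then cast back to half.
q = a.float().quantile(0.25).to(a.dtype)
print(q)
```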
### Versions
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 14.0.6
CMake version: version 3.24.3
Libc version: glibc-2.36
Python version: 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.78-1-MANJARO-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050
Nvidia driver version: 520.56.06
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.6.0
/usr/lib/libcudnn_adv_infer.so.8.6.0
/usr/lib/libcudnn_adv_train.so.8.6.0
/usr/lib/libcudnn_cnn_infer.so.8.6.0
/usr/lib/libcudnn_cnn_train.so.8.6.0
/usr/lib/libcudnn_ops_infer.so.8.6.0
/usr/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.1
[pip3] mypy==0.982
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.13.0
[pip3] torchmetrics==0.10.1
[pip3] torchvision==0.14.0
[conda] Could not collect
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 5 |
3,909 | 91,136 |
[Composable] Enable summon_full_params for fully_shard
|
oncall: distributed, triaged, module: fsdp
|
### 🚀 The feature, motivation and pitch
FSDP.summon_full_params does not work if you pass in a composable module, as the API uses some class-based-only FSDP paths.
There is also a larger discussion around whether it is correct/right to use FSDP class methods in the composable API, or whether we should have a different interface.
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,910 | 91,135 |
[BE] Investigate FSDP test _zero_model
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
We shouldn't technically need summon_full_params in `_zero_model` in the FSDP tests, but removing it makes some DDP parity tests in `test_fsdp_core` fail. We should investigate and fix this.
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,911 | 93,491 |
Is there a way to write passes?
|
triaged, enhancement, oncall: pt2
|
Is there a way the torch.compile stack allows writing passes? For example, can I register a pass (FX IR => FX IR) that runs after Dynamo generates FX graphs and before the FX IR gets passed to Inductor?
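One way this can be approached, sketched below (an illustration/assumption, not an authoritative answer from the maintainers): register a custom Dynamo backend, which receives each `GraphModule` Dynamo produces and can run an FX -> FX pass over it before deciding how to compile it further.
```python
# Sketch: a custom backend receives the FX GraphModule that Dynamo produced,
# runs a pass over gm.graph, and returns a callable. Handing the rewritten
# graph on to Inductor afterwards is left out of this sketch.
import torch


def my_pass(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    for node in gm.graph.nodes:
        print(node.op, node.target)  # a real pass would rewrite gm.graph here
    gm.recompile()
    return gm


def my_backend(gm: torch.fx.GraphModule, example_inputs):
    return my_pass(gm).forward


@torch._dynamo.optimize(my_backend)
def fn(x):
    return torch.relu(x) + 1


fn(torch.randn(4))
```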
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,912 | 91,099 |
[Dynamo] Graph Re-compilation Invoked by Changes of Unused Dict Values
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
We find that unused dict values are treated as constants in TorchDynamo, so once an unused value changes across iterations the graph will recompile. One example is that a user may attach the raw text prompt to the input dict, while the values actually used are the tokens of that prompt. I did a brief search of the GitHub issues but didn't find a related one, so I think this might be a bug.
Here's one minimal example to reproduce:
```python
#!/usr/bin/env python
import torch
from torch._dynamo import optimize, config

config.cache_size_limit = 4


@optimize("inductor")
def toy_example(adict):
    # adict["text"] not used here
    return adict["data"].norm()


def test():
    adict = {"data": torch.ones(10), "text": ""}
    for i in range(5):
        adict["text"] = f"text-{i}"
        toy_example(adict)


if __name__ == "__main__":
    test()
```
The result I got with PyTorch `1.14.0a0+410ce96`:
```
$ python bug-1.py
[2022-12-19 14:40:43,753] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (4)
function: 'toy_example' (bug-1.py:7)
reasons: ["adict['text'] == 'text-0'"]
to diagnose recompilation issues, see https://github.com/pytorch/torchdynamo/blob/main/TROUBLESHOOTING.md.
```
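A possible workaround sketch (an assumption, not from the original report): keep the non-tensor metadata out of the compiled function's arguments, so Dynamo never installs a guard on the string field.
```python
import torch
from torch._dynamo import optimize


@optimize("inductor")
def compute(data):
    return data.norm()


def toy_example(adict):
    # Only the tensor crosses into the compiled region; adict["text"] is never seen.
    return compute(adict["data"])


adict = {"data": torch.ones(10), "text": ""}
for i in range(5):
    adict["text"] = f"text-{i}"
    toy_example(adict)
```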
### Versions
```
Collecting environment information...
PyTorch version: 1.14.0a0+410ce96
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.14.0a0+410ce96
[pip3] torch-tensorrt==1.3.0a0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[conda] Could not collect
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @jjsjann123 @kevinstephano
| 2 |
3,913 | 91,098 |
[TorchScript] Failed to Forward Correct Number of Arguments to Different Functions
|
oncall: jit
|
### 🐛 Describe the bug
We are trying to apply `jit.script` to one of our models inside NVIDIA, but find it hard to support the following pattern, where the model needs to dispatch the correct number of arguments to different sub-modules depending on their types, and this type can be decided statically. A minimal example is as follows:
```python
#!/usr/bin/env python
import torch
from enum import Enum


class ForwardType(Enum):
    One = 1
    Two = 2
    Three = 3


class NetA(torch.nn.Module):
    forward_type: ForwardType = ForwardType.Two

    def __init__(self):
        super(NetA, self).__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x, y):
        return self.linear(x) * y


class NetB(torch.nn.Module):
    forward_type: ForwardType = ForwardType.Three

    def __init__(self):
        super(NetB, self).__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x, y, z):
        return self.linear(x) * y + z


class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        layers = [NetA(), torch.nn.BatchNorm1d(8), NetB()]
        self.layers = torch.nn.ModuleList(layers)

    def forward(self, x):
        # Assume layer can be any unknown Module with 1 to 3 forward args.
        # Say there's user defined NetC, NetD in self.layers.
        for layer in self.layers:
            forward_type = getattr(layer, "forward_type", ForwardType.One)
            if forward_type == ForwardType.Two:
                x = layer(x, x)
            elif forward_type == ForwardType.Three:
                x = layer(x, x, x)
            else:
                x = layer(x)
        return x


model = Model()
x = torch.randn(16, 8)
print(model(x).norm())
scripted = torch.jit.script(model)
print(scripted(x).norm())
print(scripted.graph)
```
When we run above snippet with PyTorch `1.14.0a0+410ce96`, we get following error message:
```
$ python tsv2.py
tensor(13.4283, grad_fn=<NormBackward1>)
Traceback (most recent call last):
File "tsv2.py", line 52, in <module>
scripted = torch.jit.script(model)
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_script.py", line 1286, in script
return torch.jit._recursive.create_script_module(
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_recursive.py", line 477, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_recursive.py", line 543, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_recursive.py", line 394, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
forward(__torch__.NetA self, Tensor x, Tensor y) -> Tensor:
Expected at most 3 arguments but found 4 positional arguments.
:
File "tsv2.py", line 43
x = layer(x, x)
elif forward_type == ForwardType.Three:
x = layer(x, x, x)
~~~~~ <--- HERE
else:
x = layer(x)
```
How can we resolve this issue?
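One direction that may help (a sketch only, not verified against the real model; built-ins such as `BatchNorm1d` would also need a thin wrapper) is to give every dispatched submodule the same maximal signature with `Optional` arguments, so that each branch type-checks against every module type:
```python
from typing import Optional
import torch

class NetAUniform(torch.nn.Module):
    def __init__(self):
        super(NetAUniform, self).__init__()
        self.linear = torch.nn.Linear(8, 8)
    def forward(self, x, y: Optional[torch.Tensor] = None, z: Optional[torch.Tensor] = None):
        # TorchScript refines Optional[Tensor] -> Tensor after the assert
        assert y is not None
        return self.linear(x) * y

class NetBUniform(torch.nn.Module):
    def __init__(self):
        super(NetBUniform, self).__init__()
        self.linear = torch.nn.Linear(8, 8)
    def forward(self, x, y: Optional[torch.Tensor] = None, z: Optional[torch.Tensor] = None):
        assert y is not None
        assert z is not None
        return self.linear(x) * y + z
```
With a uniform signature, `Model.forward` could call `layer(x, x, x)` unconditionally, at the cost of modifying the submodules.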
### Versions
```
Collecting environment information...
PyTorch version: 1.14.0a0+410ce96
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.14.0a0+410ce96
[pip3] torch-tensorrt==1.3.0a0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 4 |
3,914 | 93,489 |
Remove redundant memory copy for HF multi-attention submodule for cpu path using MKL prepack
|
triaged, bug, oncall: pt2, module: cpu inductor
|
### ๐ Describe the bug
There is a redundant memory copy in the HF multi-head attention submodule on the CPU path.
### Error logs
output code:
```
from ctypes import c_void_p, c_long
import torch
import random
from torch import empty_strided, as_strided, device
from torch._inductor.codecache import AsyncCompile
aten = torch.ops.aten
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
async_compile = AsyncCompile()
kernel_cpp_0 = async_compile.cpp('''
#include "/tmp/torchinductor_xiaobing/77/c7773nj5pwikpmm2pwa62rcudlf7p3if7eyqb5k4sjsvewwje4le.h"
extern "C" void kernel(float* __restrict__ in_out_ptr0,
float* __restrict__ in_out_ptr1)
{
for(long i0=0; i0<384; i0+=1)
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_out_ptr0 + 16*i0);
tmp0.store(in_out_ptr0 + 16*i0);
}
#pragma omp simd simdlen(8)
for(long i0=6144; i0<6144; i0+=1)
{
auto tmp0 = in_out_ptr0[i0];
in_out_ptr0[i0] = tmp0;
}
for(long i0=0; i0<384; i0+=1)
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_out_ptr1 + 16*i0);
tmp0.store(in_out_ptr1 + 16*i0);
}
#pragma omp simd simdlen(8)
for(long i0=6144; i0<6144; i0+=1)
{
auto tmp0 = in_out_ptr1[i0];
in_out_ptr1[i0] = tmp0;
}
}
''')
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1, arg8_1, arg9_1 = args
args.clear()
buf0 = torch.ops.mkl._mkl_linear(arg9_1, arg2_1, arg0_1, arg1_1, 8)
del arg0_1
del arg1_1
del arg2_1
buf1 = torch.ops.mkl._mkl_linear(arg9_1, arg5_1, arg3_1, arg4_1, 8)
del arg3_1
del arg4_1
del arg5_1
del arg9_1
buf3 = as_strided(buf0, (12, 8, 64), (64, 768, 1)); del buf0 # reuse
buf4 = as_strided(buf1, (12, 64, 8), (64, 1, 768)); del buf1 # reuse
kernel_cpp_0(c_void_p(buf3.data_ptr()), c_void_p(buf4.data_ptr()))
buf5 = empty_strided((12, 8, 8), (64, 8, 1), device='cpu', dtype=torch.float32)
aten.bmm.out(buf3, buf4, out=buf5)
return (as_strided(buf5, (1, 12, 8, 8), (768, 64, 8, 1)), )
if __name__ == "__main__":
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((768, 128), (128, 1), device='cpu', dtype=torch.float32)
arg1_1 = rand_strided((768, ), (1, ), device='cpu', dtype=torch.float32)
arg2_1 = rand_strided((2310369, 1), (1, 0), device='cpu', dtype=torch.float32)
arg3_1 = rand_strided((768, 128), (128, 1), device='cpu', dtype=torch.float32)
arg4_1 = rand_strided((768, ), (1, ), device='cpu', dtype=torch.float32)
arg5_1 = rand_strided((2310369, 1), (1, 0), device='cpu', dtype=torch.float32)
arg6_1 = rand_strided((768, 128), (128, 1), device='cpu', dtype=torch.float32)
arg7_1 = rand_strided((768, ), (1, ), device='cpu', dtype=torch.float32)
arg8_1 = rand_strided((2310369, 1), (1, 0), device='cpu', dtype=torch.float32)
arg9_1 = rand_strided((1, 8, 128), (1024, 128, 1), device='cpu', dtype=torch.float32)
print_performance(lambda: call([arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1, arg8_1, arg9_1]))
```
### Minified repro
```
import torch._dynamo
import torch.fx.experimental.optimization as optimization
import copy
from typing import Dict, List, Optional
import time
import torch.profiler as profiler
from torch.fx import symbolic_trace
from torch._inductor import config
config.debug = True
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.num_attention_heads = 12
self.attention_head_size = 64
self.query = torch.nn.Linear(128, 768)
self.key = torch.nn.Linear(128, 768)
self.value = torch.nn.Linear(128, 768)
def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor:
new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
x = x.view(new_x_shape)
return x.permute(0, 2, 1, 3)
def forward(self, hidden_states):
mixed_query_layer = self.query(hidden_states)
mixed_key_layer = self.key(hidden_states)
mixed_value_layer = self.value(hidden_states)
query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(mixed_key_layer)
#value_layer = self.transpose_for_scores(mixed_value_layer)
# Take the dot product between "query" and "key" to get the raw attention scores.
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
return attention_scores
x = torch.randn(1, 8, 128)
model = Model().eval()
y = model(x)
with torch.no_grad():
opt_model = torch._dynamo.optimize('inductor')(model)
with torch.no_grad():
for i in range(2):
y1 = opt_model(x)
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 3 |
3,915 | 91,088 |
[Inductor] `test_tmp_not_defined_issue1_cuda` raises `RuntimeError` but passes
|
triaged, oncall: pt2, module: inductor
|
### ๐ Describe the bug
Steps to reproduce:
```python
python inductor/test_torchinductor.py -v -k test_tmp_not_defined_issue1_cuda
...
File "/usr/local/lib/python3.8/dist-packages/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: !schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED at "../aten/src/ATen/FunctionalizeFallbackKernel.cpp":32, please report a bug to PyTorch. mutating and aliasing ops should all have codegen'd kernels
While executing %broadcast_in_dim_default : [#users=1] = call_function[target=torch.ops.prims.broadcast_in_dim.default](args = (%var_default_1, [1, 512, 1], [0, 1]), kwargs = {})
Original traceback:
Module stack: {}
File "inductor/test_torchinductor.py", line 4685, in forward
broadcast_in_dim_default_2 = torch.ops.prims.broadcast_in_dim.default(
| File "inductor/test_torchinductor.py", line 316, in run
return model(*ex, **kwargs)
...
[2022-12-19 06:53:09,515] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
inductor/test_torchinductor.py:244: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor._storage() instead of tensor.storage()
buffer = torch.as_strided(x, (x.storage().size(),), (1,), 0).clone()
Failed to collect metadata on function, produced code may be suboptimal. Known situations this can occur are inference mode only compilation involving resize_ or prims (!schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED); if your situation looks different please file a bug to PyTorch.
Traceback (most recent call last):
...
RuntimeError: !schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED at "../aten/src/ATen/FunctionalizeFallbackKernel.cpp":32, please report a bug to PyTorch. mutating and aliasing ops should all have codegen'd kernels
...
----------------------------------------------------------------------
Ran 1 test in 2.063s
OK
```
Based on this output the test passes with `OK` but a `RuntimeError` is raised (it could thus be missed in CI and I found it by manually checking some tests).
I don't think this test is expected to check for this `RuntimeError` (unless I'm missing something from its definition [here](https://github.com/pytorch/pytorch/blob/ce4900f3bb64da2b5bced9106d26c44363189155/test/inductor/test_torchinductor.py#L4656-L4703)).
### Versions
```
python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 2.0.0.dev20221218+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.0.76
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.57.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.0.dev20221218+cu117
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221218+cu117
[conda] Could not collect
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 4 |
3,916 | 91,073 |
Implement L1 and L2 gradient as hooks with the option of changing the weight decay value.
|
module: nn, module: optimizer, triaged, needs design
|
### ๐ The feature, motivation and pitch
The L1 and L2 weight decay are important regularizations in deep learning, but their current implementation in PyTorch is not as flexible as one may desire. The L2 weight decay is available in PyTorch optimizers, but the weight decay value cannot be modified during training. L1 is not implemented in an efficient manner in the PyTorch library.
I propose implementing the L1 and L2 weight decay as gradient hooks and placing them in the `torch.nn.utils` namespace. Other regularizations, such as weight and spectral normalization, are already implemented there, as are pruning utilities, which often go with L1 regularization.
The implementation should allow easy attachment and detachment of the hook on any `nn.Module` with options to separate parameter groups. The weight decay value should be modifiable to allow weight decay scheduling.
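A minimal sketch of what such a hook could look like (the class name and interface here are purely illustrative, not an existing API):
```python
import torch
import torch.nn as nn

class WeightDecayHook:
    """Per-parameter gradient hook implementing L1 (p=1) or L2 (p=2) weight decay."""

    def __init__(self, param: nn.Parameter, weight_decay: float, p: int = 2):
        self.param = param
        self.weight_decay = weight_decay  # can be changed between steps for scheduling
        self.p = p
        # register_hook returns a RemovableHandle, so detaching the hook is easy
        self.handle = param.register_hook(self._hook)

    def _hook(self, grad: torch.Tensor) -> torch.Tensor:
        w = self.param.detach()
        penalty = w.sign() if self.p == 1 else w
        return grad + self.weight_decay * penalty

    def remove(self) -> None:
        self.handle.remove()

# Usage: attach to a chosen parameter group, then schedule the decay value.
model = nn.Linear(8, 4)
hooks = [WeightDecayHook(p, weight_decay=1e-4) for p in model.parameters() if p.dim() > 1]
model(torch.randn(2, 8)).sum().backward()  # gradients now include the decay term
for h in hooks:
    h.weight_decay = 5e-5  # weight-decay scheduling between steps
```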
### Alternatives
Implementing in the Optimizer base class is an option but has failed to gain traction.
Currently, the TorchLayers library offers a Keras-like interface with the L1 and L2 regularizations, but this requires a wrapper class and cannot separate parameter groups very efficiently or change the weight decay value.
### Additional context
I have already implemented the hooks and am willing to contribute. Separation of parameter groups has already been implemented, but I am considering a better interface for exposing the weight decay value for scheduling.
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @vincentqb @janeyx99
| 0 |
3,917 | 91,072 |
Unexpected behavior when running torch.max in cuda
|
needs reproduction, module: cuda, triaged
|
### ๐ Describe the bug
I used PyTorch to train a simple CNN model on Google Colab. In the validation step, I try to select the index with the highest value from a CUDA tensor using torch.max. However, torch.max returns random, very large indices, far larger than the input length. This error seems to occur alongside other CUDA errors, such as RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR or RuntimeError: CUDA error: device-side assert triggered.
```
t = torch.tensor([[-2.9808e-23, 0.0000e+00, -2.6129e+00, -3.4929e-01, -4.0383e-01,
-2.0949e-01, -1.5482e-01, -1.6440e-01, -4.0626e-01, -2.5096e-01,
-2.2770e-01, -1.0349e-01, -9.4043e-02, -4.8122e-01, -3.2553e-01,
-2.3519e-01, -4.4436e-01, -3.5913e-01, -3.1900e-01, -6.3017e-02,
-3.4288e-01, -1.3599e-01, -3.1509e-01, -2.6539e-01, -9.8018e-02,
-3.9029e-01, -2.4317e-01, -1.4794e-01, -3.9406e-01, -5.0739e-01,
-2.5979e+00, -3.1726e-01, -4.5700e-01, -7.1152e-01, -2.9527e-01,
-3.8173e-01, -2.6285e+00, -1.7692e-01, -2.2875e-01, -5.5956e-01,
-3.1436e-01, -3.8268e-01, -1.2306e-01, -5.5486e-01, -1.7683e-01,
-2.6236e+00, -3.0367e-01, -5.5862e-01, -1.6077e-01, -5.3180e-01,
-5.6198e-01, -4.0909e-01, -2.0121e-01, -1.9322e-01, -1.3304e-01,
2.1249e-02, -3.6515e-01, -2.0839e-01, -2.3793e-01, -3.6420e-01,
-1.1649e-01, -2.5894e+00, -2.6888e+00, -3.5755e-01, -6.7334e-01,
-3.7351e-01, -3.1000e-01, -3.6834e-01, -3.5927e-01, -7.7074e-02,
-2.6725e+00, -4.5349e-01, -3.0461e-01, -2.7283e+00, -2.5328e-01,
-2.2305e-01, -3.9784e-01, -5.0807e-01, -4.4298e-01, -2.2444e-01,
-1.7673e-01, -2.6070e-01, -3.3998e-01, -5.1303e-01, -1.6730e-01,
-4.3395e-01, -2.6815e+00, -3.2732e-01, -2.1247e-01],
[-3.9961e-01, 1.6578e-01, -2.9537e+00, -4.5602e-01, -9.4190e-01,
-2.9841e-01, -6.3686e-01, -7.5595e-01, -4.0157e-01, -9.3485e-01,
-2.4404e-01, -5.4300e-01, -4.6023e-01, -6.5109e-01, -6.4473e-01,
-5.9362e-01, -5.1901e-01, -8.8782e-01, -5.1866e-01, -3.1990e-01,
-6.1264e-01, -6.0653e-01, -7.9555e-01, -5.7329e-01, -3.0271e-01,
-4.0621e-01, -1.0171e+00, -2.4200e-01, -1.0000e-01, -5.2754e-01,
-2.8763e+00, -9.5346e-01, -2.5961e-01, -2.7388e-01, 7.5644e-02,
-3.5344e-01, -2.9006e+00, -7.6167e-01, -9.0409e-01, -2.7503e-01,
-2.6359e-01, -4.0277e-01, -8.5128e-01, -3.9613e-01, -3.2920e-01,
-2.8844e+00, -3.0337e-01, -4.4127e-01, 9.0336e-02, -4.5992e-01,
-2.7454e-02, -4.9103e-01, -6.3099e-01, -2.0726e-01, -3.0036e-01,
-5.9835e-01, -3.8050e-01, -3.4423e-01, -2.9218e-01, -2.0366e-01,
-3.0304e-01, -2.9197e+00, -2.9926e+00, -6.3344e-01, -2.8802e-01,
-1.1261e-01, -6.4091e-01, -1.0381e-01, -3.5489e-01, -3.7880e-02,
-2.8335e+00, -7.8121e-01, -2.7977e-01, -2.9593e+00, -2.5279e-01,
-4.3704e-01, -8.2254e-01, -3.0320e-01, -8.3122e-01, -7.8798e-01,
-6.7372e-01, -4.5139e-01, -4.4267e-01, -5.1234e-01, -6.0082e-01,
-5.2498e-01, -3.0431e+00, -6.7507e-01, -6.8243e-01],
[-4.5808e-01, -4.8066e-01, -2.5531e+00, -2.4340e-01, -3.4281e-01,
-1.6778e-01, -1.5655e-01, -5.2654e-02, -4.2752e-01, -8.0808e-02,
-2.4970e-01, 1.7490e-03, -6.2921e-02, -4.3482e-01, -2.2288e-01,
-1.6406e-01, -4.6503e-01, -3.2364e-01, -2.9755e-01, -4.4653e-02,
-2.7008e-01, 2.1750e-02, -2.6459e-01, -2.9233e-01, -4.1502e-02,
-2.9121e-01, -6.0573e-02, -1.6509e-01, -4.6651e-01, -4.6953e-01,
-2.5964e+00, -2.0548e-01, -4.9913e-01, -8.0150e-01, -4.3226e-01,
-4.3090e-01, -2.6524e+00, 6.7035e-03, -1.7863e-01, -5.5503e-01,
-2.8126e-01, -3.6196e-01, 4.1430e-02, -6.3228e-01, -1.6864e-01,
-2.5754e+00, -3.2991e-01, -5.9287e-01, -2.9245e-01, -5.1240e-01,
-7.4233e-01, -3.8063e-01, -9.0290e-02, -1.9527e-01, -1.3816e-01,
1.4352e-01, -3.4068e-01, -1.9564e-01, -2.2657e-01, -4.1834e-01,
-2.5276e-02, -2.4929e+00, -2.6141e+00, -3.7231e-01, -7.9536e-01,
-4.4755e-01, -1.7181e-01, -4.1642e-01, -3.5402e-01, -7.9624e-02,
-2.6531e+00, -4.4046e-01, -3.0131e-01, -2.7066e+00, -1.9543e-01,
-1.8573e-01, -2.9396e-01, -5.1982e-01, -2.6817e-01, -1.0732e-01,
-8.4123e-02, -1.2346e-01, -3.0960e-01, -5.2758e-01, -2.1551e-02,
-4.4173e-01, -2.5881e+00, -2.6153e-01, -1.3117e-01],
[-4.4190e-01, -3.9657e-01, -2.7732e+00, -2.2546e-01, -4.1165e-01,
-1.9700e-01, -1.9752e-01, -1.4089e-01, -4.5835e-01, -1.8212e-01,
-2.2118e-01, -2.9388e-02, -6.6650e-02, -4.2861e-01, -2.7263e-01,
-1.9268e-01, -4.5309e-01, -3.7358e-01, -3.3004e-01, -6.2383e-02,
-2.9332e-01, -1.0190e-01, -2.9808e-01, -2.7539e-01, -7.2598e-02,
-2.9892e-01, -1.9889e-01, -1.8971e-01, -4.4991e-01, -5.0726e-01,
-2.7958e+00, -3.4468e-01, -4.8980e-01, -7.9468e-01, -3.2875e-01,
-3.9282e-01, -2.8805e+00, -1.1058e-01, -2.3956e-01, -5.3175e-01,
-3.1619e-01, -4.3418e-01, -6.6878e-02, -6.1019e-01, -1.4052e-01,
-2.8403e+00, -3.2096e-01, -5.8502e-01, -2.0753e-01, -5.1693e-01,
-6.7902e-01, -4.1668e-01, -1.3372e-01, -2.0669e-01, -1.1902e-01,
7.6200e-02, -3.3212e-01, -2.2819e-01, -2.3068e-01, -4.3119e-01,
-4.4670e-02, -2.7582e+00, -2.8768e+00, -4.2379e-01, -7.5642e-01,
-4.3265e-01, -2.5192e-01, -4.1074e-01, -3.3417e-01, -5.7136e-02,
-2.8823e+00, -4.6207e-01, -2.9271e-01, -2.9707e+00, -1.6782e-01,
-2.3061e-01, -4.0023e-01, -4.9194e-01, -3.8972e-01, -1.9206e-01,
-1.4955e-01, -1.6620e-01, -3.2029e-01, -5.0159e-01, -1.1917e-01,
-4.8244e-01, -2.8490e+00, -3.5706e-01, -1.6855e-01],
[-5.5025e-01, -5.8855e-01, -3.0669e+00, -3.4897e-01, -4.1335e-01,
-2.4297e-01, -1.4819e-01, -1.1491e-01, -5.4043e-01, -8.9940e-02,
-3.1333e-01, -4.1062e-02, -7.0230e-02, -5.4323e-01, -2.6586e-01,
-2.1344e-01, -5.7078e-01, -3.7120e-01, -3.8843e-01, -4.0332e-02,
-3.3520e-01, -3.2996e-02, -3.2606e-01, -3.5599e-01, -7.2711e-02,
-3.9851e-01, -1.2143e-01, -2.1913e-01, -5.7805e-01, -5.9290e-01,
-3.1540e+00, -2.5828e-01, -6.1004e-01, -9.8273e-01, -5.1350e-01,
-5.2332e-01, -3.1887e+00, -8.0390e-03, -1.6865e-01, -7.1400e-01,
-3.6896e-01, -4.5574e-01, -1.2979e-02, -7.5833e-01, -1.9203e-01,
-3.1354e+00, -3.9395e-01, -7.5431e-01, -3.8152e-01, -6.6217e-01,
-8.6812e-01, -5.0743e-01, -1.7299e-01, -2.5107e-01, -1.5972e-01,
2.1470e-01, -4.0968e-01, -2.6435e-01, -3.1674e-01, -4.7360e-01,
-2.1579e-02, -3.0315e+00, -3.1690e+00, -4.2908e-01, -9.5660e-01,
-5.4581e-01, -2.5748e-01, -5.3975e-01, -4.8235e-01, -1.1911e-01,
-3.1924e+00, -5.4878e-01, -3.6280e-01, -3.2480e+00, -2.3659e-01,
-2.4418e-01, -3.8497e-01, -6.5964e-01, -3.7702e-01, -1.9714e-01,
-9.5154e-02, -2.2553e-01, -3.9716e-01, -6.2804e-01, -5.6505e-02,
-5.8904e-01, -3.1137e+00, -3.1355e-01, -1.5024e-01],
[-5.2698e-01, -6.2453e-01, -2.8343e+00, -3.2546e-01, -3.5720e-01,
-2.0996e-01, -1.6489e-01, -2.6181e-02, -5.2760e-01, -5.4436e-02,
-3.0116e-01, 1.7668e-02, -8.1519e-02, -4.8728e-01, -2.2360e-01,
-1.7458e-01, -5.4216e-01, -3.4917e-01, -3.4495e-01, -2.8586e-02,
-2.8118e-01, 3.0609e-02, -2.9666e-01, -3.1377e-01, -7.2063e-02,
-3.5397e-01, -8.7759e-03, -2.1653e-01, -5.6348e-01, -5.3917e-01,
-2.8651e+00, -2.2681e-01, -5.9335e-01, -9.4614e-01, -5.3888e-01,
-5.1958e-01, -2.9527e+00, 5.6199e-02, -1.3891e-01, -6.9060e-01,
-3.4208e-01, -4.3024e-01, 8.3096e-02, -7.3435e-01, -1.7645e-01,
-2.8619e+00, -3.7958e-01, -7.1092e-01, -3.4937e-01, -6.3227e-01,
-9.4289e-01, -4.3585e-01, -9.0388e-02, -2.5380e-01, -1.5628e-01,
1.6850e-01, -4.2113e-01, -2.2250e-01, -2.7563e-01, -4.6665e-01,
-2.7575e-02, -2.7827e+00, -2.9138e+00, -4.0327e-01, -9.4931e-01,
-5.6447e-01, -1.9289e-01, -5.0960e-01, -4.2893e-01, -1.2452e-01,
-2.9714e+00, -4.7776e-01, -3.7410e-01, -3.0154e+00, -2.1841e-01,
-2.1061e-01, -2.9357e-01, -6.1583e-01, -2.8657e-01, -8.5437e-02,
-7.1150e-02, -1.3502e-01, -3.5952e-01, -6.1769e-01, 7.5009e-04,
-5.0966e-01, -2.8673e+00, -2.8539e-01, -1.1753e-01],
[-4.3252e-01, 1.9695e-01, -3.3032e+00, -4.5870e-01, -1.0697e+00,
-3.4811e-01, -7.1210e-01, -8.7620e-01, -4.7859e-01, -1.0205e+00,
-2.5673e-01, -5.8002e-01, -5.2838e-01, -7.1609e-01, -7.0423e-01,
-6.8199e-01, -5.5739e-01, -9.8411e-01, -6.1129e-01, -3.7185e-01,
-6.6617e-01, -6.9559e-01, -8.7136e-01, -6.5952e-01, -3.4492e-01,
-4.1483e-01, -1.1715e+00, -3.1782e-01, -1.2169e-01, -5.8277e-01,
-3.2285e+00, -1.0931e+00, -2.6633e-01, -3.4119e-01, 9.4899e-02,
-3.7998e-01, -3.2759e+00, -8.2581e-01, -1.0247e+00, -2.8633e-01,
-2.9780e-01, -4.5818e-01, -9.5456e-01, -4.2535e-01, -3.5335e-01,
-3.2619e+00, -3.3853e-01, -5.0987e-01, 7.7503e-02, -4.8933e-01,
-2.3903e-02, -5.6703e-01, -6.9318e-01, -2.2282e-01, -3.0569e-01,
-6.4329e-01, -3.9338e-01, -4.1244e-01, -3.2905e-01, -2.3803e-01,
-3.1020e-01, -3.2768e+00, -3.3607e+00, -7.2475e-01, -3.1787e-01,
-1.2520e-01, -7.0924e-01, -1.2875e-01, -3.6864e-01, -4.1061e-02,
-3.1852e+00, -8.7667e-01, -3.0897e-01, -3.3494e+00, -2.1646e-01,
-5.0964e-01, -9.3419e-01, -3.3475e-01, -9.2844e-01, -8.8214e-01,
-7.4403e-01, -5.0813e-01, -5.0173e-01, -5.4668e-01, -6.8613e-01,
-6.1847e-01, -3.4036e+00, -7.8160e-01, -7.6348e-01],
[-5.3152e-01, -6.5004e-01, -2.8640e+00, -3.1019e-01, -3.6612e-01,
-2.1596e-01, -1.4831e-01, -2.4307e-02, -5.2879e-01, -3.3119e-02,
-3.1236e-01, 9.8837e-03, -7.5107e-02, -5.0004e-01, -2.3522e-01,
-1.8431e-01, -5.4472e-01, -3.3397e-01, -3.4872e-01, -3.2245e-02,
-2.9193e-01, 4.3241e-02, -2.9800e-01, -3.3682e-01, -6.6857e-02,
-3.5815e-01, -7.8080e-03, -2.1986e-01, -5.6088e-01, -5.4229e-01,
-2.9115e+00, -2.0855e-01, -6.1273e-01, -9.5707e-01, -5.5450e-01,
-5.3075e-01, -2.9841e+00, 7.1242e-02, -1.4242e-01, -6.9260e-01,
-3.4608e-01, -4.3146e-01, 9.9981e-02, -7.5333e-01, -2.0506e-01,
-2.8932e+00, -3.8000e-01, -7.2683e-01, -3.7222e-01, -6.3134e-01,
-9.3961e-01, -4.3834e-01, -1.0027e-01, -2.4466e-01, -1.6033e-01,
1.7717e-01, -4.1549e-01, -2.2137e-01, -2.7459e-01, -4.8566e-01,
-2.9419e-02, -2.8001e+00, -2.9335e+00, -4.0650e-01, -9.6968e-01,
-5.6127e-01, -1.8503e-01, -5.1403e-01, -4.3293e-01, -1.2798e-01,
-2.9952e+00, -5.0376e-01, -3.7856e-01, -3.0451e+00, -2.2716e-01,
-2.1112e-01, -2.9760e-01, -6.2482e-01, -2.7578e-01, -7.7261e-02,
-5.8737e-02, -1.3791e-01, -3.6632e-01, -6.2360e-01, -5.2439e-03,
-5.1944e-01, -2.8939e+00, -2.9270e-01, -1.1549e-01]]).cuda()
prob, idx = torch.max(t, 1)
print(idx)
```
Code above outputs:
tensor([ 32509184, 27, 4294967295, 7146984122335833646, 4702977657927722101, 7957695015192261958,
6827635, 2608622272], device='cuda:0')
### Versions
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.6
Libc version: glibc-2.27
Python version: 3.8.16 (default, Dec 7 2022, 01:12:13) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.133+-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 460.32.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] pytorch-resample==0.1.0
[pip3] torch==1.13.0+cu116
[pip3] torchaudio==0.13.0+cu116
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.14.0
[pip3] torchvision==0.14.0+cu116
[conda] Could not collect
cc @ngimel
| 1 |
3,918 | 93,488 |
If minifier test fails, stderr/stdout of subprocess calls is not printed
|
triaged, bug
|
### ๐ Describe the bug
```
# Runs the minifier launcher script in `repro_dir`, patched with `patch_code`.
def _run_minifier_launcher(self, patch_code, repro_dir):
self.assertIsNotNone(repro_dir)
launch_file = os.path.join(repro_dir, "minifier_launcher.py")
self.assertTrue(os.path.exists(launch_file))
launch_code = self._inject_code(patch_code, launch_file)
launch_proc = subprocess.run(
["python3", launch_file],
capture_output=True,
cwd=repro_dir,
)
return launch_proc, launch_code
```
This captures the subprocess output, but nothing ensures that the captured output is printed if, e.g., a later assertion fails.
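One possible shape of a fix (illustrative only, not the actual test-suite code) is a small helper that dumps the captured streams whenever the subprocess result is treated as a failure:
```python
import subprocess

def run_and_report(cmd, cwd):
    """Run cmd with captured output; print stdout/stderr before raising on failure."""
    proc = subprocess.run(cmd, capture_output=True, cwd=cwd)
    if proc.returncode != 0:
        print("subprocess stdout:\n", proc.stdout.decode())
        print("subprocess stderr:\n", proc.stderr.decode())
        proc.check_returncode()  # raises CalledProcessError
    return proc
```
The same dump could also be done from an `except`/`finally` block around any later assertions on the launcher's result.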
### Error logs
_No response_
### Minified repro
_No response_
| 0 |
3,919 | 91,039 |
Simple deleting from the sys cache fails on reimport
|
triaged, enhancement, module: python frontend
|
### ๐ Describe the bug
### Bug description
I am working on a project in which I need to evict the `torch` package from the `sys.modules` cache in order to reload a new version of the torch package. However, this minimal example fails:
```
import torch
import sys
if __name__ == "__main__":
sys.modules.pop("torch", None)
import torch
```
With the following error:
```
Traceback (most recent call last):
File "/home/andi/Memgraph/code/mage/python/test_mage/test_submodules.py", line 10, in <module>
import torch
File "/home/andi/.local/lib/python3.10/site-packages/torch/__init__.py", line 233, in <module>
for name in dir(_C):
NameError: name '_C' is not defined
```
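For reference, popping only the top-level `"torch"` entry leaves submodules such as `torch._C` cached. A more thorough eviction looks roughly like the sketch below, although even this may not be enough in general, since C-extension modules cannot really be re-initialized within one process:
```python
import sys
import torch  # first import

# Evict torch and all of its cached submodules before re-importing.
for name in [m for m in sys.modules if m == "torch" or m.startswith("torch.")]:
    sys.modules.pop(name)

import torch  # noqa: E402  (fresh import attempt)
```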
### Versions
```
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.2
[pip3] numpydoc==1.5.0
[pip3] torch==1.12.1+cpu
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.12.1+cpu
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.4 py39h14f4228_0
[conda] numpy-base 1.23.4 py39h31eccc5_0
[conda] pytorch 1.13.1 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 0.13.1 py39_cpu pytorch
[conda] torchvision 0.14.1 py39_cpu pytorch
```
cc @albanD
| 3 |
3,920 | 93,486 |
A Simple Function Causing Graph Break
|
triaged, bug
|
### ๐ Describe the bug
An error is raised when calling `torch._dynamo.optimize` while optimizing Hugging Face GPT2 and enforcing a single optimized graph (`nopython=True`) for the whole model.
```
File "conda/dexport/lib/python3.9/site-packages/torch/_dynamo/variables/user_defined.py", line 243, in call_function
return super().call_function(tx, args, kwargs)
File "conda/dexport/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 230, in call_function
unimplemented(f"call_function {self} {args} {kwargs}")
File "conda/dexport/lib/python3.9/site-packages/torch/_dynamo/exc.py", line 67, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function UserDefinedObjectVariable(_lru_cache_wrapper) [] {}
from user code:
File "dexport/tests/loop.py", line 12, in forward
if transformers.is_torch_tpu_available():
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
Below is a minimal repro. If `transformers.is_torch_tpu_available()` is removed, then the code works fine.
```python
import torch
import torch._dynamo
from torch import nn
import transformers
class ToyModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
def forward(self, x):
if transformers.is_torch_tpu_available():
x = x + 1.0
else:
x = x + 2.0
return self.linear(x)
model = ToyModel()
x = torch.rand(10, 10, dtype=torch.float32)
y = model(x)
class GraphCaptureCompiler:
def __init__(self):
self.captured_graph = None
self.captured_graph_count = 0
def compile(self, gm, _):
assert self.captured_graph_count == 0
self.captured_graph = gm
self.captured_graph_count += 1
return gm
compiler = GraphCaptureCompiler()
y_optimized = torch._dynamo.optimize(compiler.compile, nopython=True)(model)(x)
```
I'd expect Dynamo to be able to trace straight through this simple function.
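As a workaround (a sketch that sidesteps rather than fixes the graph break), the `lru_cache`-wrapped call can be hoisted out of `forward` so Dynamo only ever sees a plain Python constant:
```python
import torch
from torch import nn
import transformers

# Evaluated once at import time, outside of any traced frame.
_TPU_AVAILABLE = transformers.is_torch_tpu_available()

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 10)

    def forward(self, x):
        x = x + 1.0 if _TPU_AVAILABLE else x + 2.0
        return self.linear(x)
```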
### Error logs
_No response_
### Minified repro
_No response_
| 1 |
3,921 | 91,000 |
node.stack_trace does not handle escaping correctly
|
oncall: fx
|
### ๐ Describe the bug
While working on our importer in Torch-MLIR, I noticed that `node.stack_trace` is just a string rather than a more structured piece of data. This repro is extracted from that importer code. I need to regex-match `node.stack_trace` to extract the source location, which is of course perilous. The example below, if put in a file named `repro".py`, fails to extract a correct stack trace. I could make my regex smarter, but I think the right solution is to expose the source location info in a more structured manner.
```
from typing import List
import operator
import re
import torch
import torch._dynamo as dynamo
def my_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
for node in gm.graph.nodes:
if node.op == "call_function" and node.target == operator.mul:
print(f"node.stack_trace is of type {type(node.stack_trace)}")
print(node.stack_trace)
m = re.search(
r"""File "([^"]+)", line ([0-9]+),""", node.stack_trace)
if m is None:
print("FAILED TO MATCH")
else:
file, line = m.group(1), int(m.group(2))
print(f"MATCHED {file}:{line}")
return gm
@dynamo.optimize(my_backend)
def f(x):
return x * x
example_inputs = (torch.randn(3, 4),)
f(*example_inputs)
```
Output if the file is called `repro".py`
```
$ python /tmp/repro'"'.py
node.stack_trace is of type <class 'str'>
Module stack: {}
File "/tmp/repro".py", line 27, in f
return x * x
FAILED TO MATCH
```
Example output if the file is called `repro2.py`
```
$ python /tmp/repro2.py
node.stack_trace is of type <class 'str'>
Module stack: {}
File "/tmp/repro2.py", line 27, in f
return x * x
MATCHED /tmp/repro2.py:27
```
### Versions
PyTorch version: 2.0.0.dev20221213+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 12.2.0-3) 12.2.0
Clang version: 14.0.6-2
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.8 (main, Nov 3 2022, 15:17:13) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.19.11-1rodete1-amd64-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0rc2
[pip3] torch==2.0.0.dev20221213+cpu
[pip3] torchvision==0.15.0.dev20221213+cpu
[conda] Could not collect
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 1 |
3,922 | 90,999 |
overflow (?) on cuda tensor after matrix multiplication
|
module: numerical-stability, module: cuda, triaged
|
### ๐ Describe the bug
Two ways of performing the same matrix division give different results in the CUDA context.
With the CPU context it works as expected: the difference matrix is filled with zeros.
With CUDA, the difference matrix has a non-zero entry at position 255.
```
import torch
def test_func(GT):
GT_1 = GT.clone()
GT_2 = GT.clone()
GT_1 /= GT_1[..., 2].unsqueeze(-1)
GT_2 = GT_2[...] / GT_2[..., 2].unsqueeze(-1)
dif = GT_1 - GT_2
print(f"argmin: {dif.argmin().item()}, min: {dif.min().item()}")
print(f"argmax: {dif.argmax().item()}, max: {dif.max().item()}")
if __name__ == "__main__":
canonical = torch.rand([100,3]).type(torch.float32)
Homo = torch.rand([3,3]).type(torch.float32)
print("_________CPU EXEC_________")
GT = canonical @ Homo.transpose(-1,-2)
test_func(GT)
print("_________GPU EXEC_________")
canonical = canonical.cuda()
Homo = Homo.cuda()
GT = canonical @ Homo.transpose(-1,-2)
test_func(GT)
```
OUTPUT:
```
_________CPU EXEC_________
argmin: 0, min: 0.0
argmax: 0, max: 0.0
_________GPU EXEC_________
argmin: 255, min: -0.9257388710975647
argmax: 0, max: 0.0
```
### Versions
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.8.4.post0
[pip3] pytorchvideo==0.1.5
[pip3] torch==1.12.1+cu116
[pip3] torchmetrics==0.11.0
[pip3] torchvision==0.13.1+cu116
cc @ngimel
| 1 |
3,923 | 90,998 |
Crash in `index_select` with singleton `self`, non-singleton `index`
|
triaged
|
### ๐ Describe the bug
Running the following can cause pytorch 1.13 to segfault (or give some other memory-related abort):
```python
torch.index_select(torch.tensor(10), 0, torch.tensor([0, 0, 0, 0, 0, 0, 0, 0]))
```
Example error messages:
```
munmap_chunk(): invalid pointer
Aborted (core dumped)
```
```
corrupted size vs. prev_size
Aborted (core dumped)
```
It doesn't happen every time, but I'm pretty much guaranteed to hit it within ~3 runs.
I've also only found it to happen if `index` has at least four elements (though it probably just crashes less often with fewer).
### Versions
PyTorch version: 1.13.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: Quadro T2000 with Max-Q Design
Nvidia driver version: 510.108.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.0+cpu
[pip3] torchaudio==0.13.0+cpu
[pip3] torchvision==0.14.0+cpu
[conda] Could not collect
| 4 |
3,924 | 90,992 |
as_strided_scatter: INTERNAL_ASSERT_FAILED for requires_grad=True and non-contiguous input
|
triaged, module: functionalization
|
### ๐ Describe the bug
```python
import torch
from torch.testing._internal.common_methods_invocations import op_db
as_strided_scatter_opinfo = list(filter(lambda op_info : op_info.name == 'as_strided_scatter', op_db))[0]
device = 'cpu'
dtype = torch.float
requires_grad = True
for sample in as_strided_scatter_opinfo.sample_inputs(device, dtype, requires_grad=True):
noncontig_sample = sample.noncontiguous()
args = [sample.input] + list(sample.args)
kwargs = sample.kwargs
out = as_strided_scatter_opinfo.op(*args, **kwargs)
out.sum().backward()
```
Fails with
`RuntimeError: !self.requires_grad() || self.is_contiguous() INTERNAL ASSERT FAILED at "ATen/native/TensorShape.cpp":3827, please report a bug to PyTorch. as_strided_scatter is currently only supported for contiguous inputs
`
This should probably be a `TORCH_CHECK` instead of `TORCH_INTERNAL_ASSERT`.
cc: @bdhirsh
### Versions
master
cc @bdhirsh @ezyang @soumith
| 1 |
3,925 | 93,484 |
TorchDynamo doesn't inline modified nn.Modules forward - Fails with Huggingface Accelerate
|
high priority, triaged, bug, oncall: pt2
|
### ๐ Describe the bug
An error is raised when running the Hugging Face BLOOM model with Accelerate's large-model support after applying `dynamo.optimize`.
The example model and code were taken from the [PyTorch conference demo](https://www.youtube.com/watch?v=vbtGZL7IrAw&t=25048s).
Note that the standard version, without arguments `device_map` and `offload_state_dict`, works fine.
#### Requirements
```
torch 2.0.0a0+gitfa946ae
transformers 4.26.0.dev0
accelerate 0.15.0
```
### Error logs
```
Traceback (most recent call last):
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 1092, in run_node
return nnmodule(*args, **kwargs)
File "/home/bowbao/pytorch/torch/nn/modules/module.py", line 1482, in _call_impl
return forward_call(*args, **kwargs)
File "/home/bowbao/stable_diffusion/lib/python3.8/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/bowbao/pytorch/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/home/bowbao/pytorch/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
File "/home/bowbao/pytorch/torch/_subclasses/fake_tensor.py", line 812, in __torch_dispatch__
raise Exception(
Exception: Invoking operators with non-Fake Tensor inputs in FakeTensorMode is not yet supported. Please convert all Tensors to FakeTensors first. Found in aten.embedding.default(*(Parameter containing:
tensor([[-0.0099, -0.0048, -0.0111, ..., -0.0426, 0.0099, 0.0212],
[ 0.0048, -0.0127, 0.0138, ..., -0.0448, 0.0003, -0.0120],
[ 0.0065, 0.0239, 0.0050, ..., -0.0431, -0.0067, 0.0137],
...,
[-0.0028, -0.0038, -0.0012, ..., -0.0252, 0.0013, 0.0012],
[-0.0028, -0.0038, -0.0012, ..., -0.0252, 0.0013, 0.0012],
[-0.0028, -0.0038, -0.0012, ..., -0.0252, 0.0013, 0.0012]],
device='cuda:3', dtype=torch.float16, requires_grad=True), FakeTensor(FakeTensor(..., device='meta', size=(1, 14), dtype=torch.int64), cuda:3)), **{})
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 1046, in get_fake_value
return wrap_fake_exception(
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 712, in wrap_fake_exception
return fn()
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 1047, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 1096, in run_node
raise RuntimeError(
RuntimeError: Failed running call_module self_transformer_word_embeddings(*(FakeTensor(FakeTensor(..., device='meta', size=(1, 14), dtype=torch.int64), cuda:3),), **{}):
Invoking operators with non-Fake Tensor inputs in FakeTensorMode is not yet supported. Please convert all Tensors to FakeTensors first. Found in aten.embedding.default(*(Parameter containing:
tensor([[-0.0099, -0.0048, -0.0111, ..., -0.0426, 0.0099, 0.0212],
[ 0.0048, -0.0127, 0.0138, ..., -0.0448, 0.0003, -0.0120],
[ 0.0065, 0.0239, 0.0050, ..., -0.0431, -0.0067, 0.0137],
...,
[-0.0028, -0.0038, -0.0012, ..., -0.0252, 0.0013, 0.0012],
[-0.0028, -0.0038, -0.0012, ..., -0.0252, 0.0013, 0.0012],
[-0.0028, -0.0038, -0.0012, ..., -0.0252, 0.0013, 0.0012]],
device='cuda:3', dtype=torch.float16, requires_grad=True), FakeTensor(FakeTensor(..., device='meta', size=(1, 14), dtype=torch.int64), cuda:3)), **{})
(scroll up for backtrace)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "repro_dynamo.py", line 56, in <module>
run_dynamo(model, inputs, tokenizer)
File "repro_dynamo.py", line 41, in run_dynamo
run_model(opt_model, inputs, tokenizer)
File "repro_dynamo.py", line 30, in run_model
outputs = model(**inputs, return_dict=False)
File "/home/bowbao/pytorch/torch/nn/modules/module.py", line 1482, in _call_impl
return forward_call(*args, **kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/home/bowbao/stable_diffusion/lib/python3.8/site-packages/accelerate/hooks.py", line 151, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "/home/bowbao/stable_diffusion/lib/python3.8/site-packages/accelerate/hooks.py", line 156, in <graph break in new_forward>
output = old_forward(*args, **kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/eval_frame.py", line 332, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/bowbao/pytorch/torch/_dynamo/convert_frame.py", line 479, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/bowbao/pytorch/torch/_dynamo/convert_frame.py", line 103, in _fn
return fn(*args, **kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 90, in time_wrapper
r = func(*args, **kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/home/bowbao/pytorch/torch/_dynamo/convert_frame.py", line 398, in _compile
out_code = transform_code_object(code, transform)
File "/home/bowbao/pytorch/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/home/bowbao/pytorch/torch/_dynamo/convert_frame.py", line 385, in transform
tracer.run()
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1686, in run
super().run()
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 537, in run
and self.step()
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 500, in step
getattr(self, inst.opname)(inst)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 306, in wrapper
return inner_fn(self, inst)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1014, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 434, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/bowbao/pytorch/torch/_dynamo/variables/nn_module.py", line 220, in call_function
return tx.inline_user_function_return(
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 470, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1764, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1819, in inline_call_
tracer.run()
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 537, in run
and self.step()
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 500, in step
getattr(self, inst.opname)(inst)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 306, in wrapper
return inner_fn(self, inst)
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/bowbao/pytorch/torch/_dynamo/symbolic_convert.py", line 434, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/bowbao/pytorch/torch/_dynamo/variables/nn_module.py", line 201, in call_function
return wrap_fx_proxy(
File "/home/bowbao/pytorch/torch/_dynamo/variables/builder.py", line 731, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/home/bowbao/pytorch/torch/_dynamo/variables/builder.py", line 768, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/home/bowbao/pytorch/torch/_dynamo/utils.py", line 1066, in get_fake_value
raise TorchRuntimeError() from e
torch._dynamo.exc.TorchRuntimeError:
from user code:
File "/home/bowbao/transformers/src/transformers/models/bloom/modeling_bloom.py", line 903, in forward
transformer_outputs = self.transformer(
File "/home/bowbao/transformers/src/transformers/models/bloom/modeling_bloom.py", line 729, in forward
inputs_embeds = self.word_embeddings(input_ids)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Minified repro
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
sentence = "Question: Can I run BLOOM on a single GPU? Answer:"
# Load model
def load_model(model_name: str = "bigscience/bloom-560m"):
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
offload_state_dict=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer(sentence, return_tensors="pt").to(0)
print(inputs.keys())
return model, inputs, tokenizer
# Inference in PyTorch
def run_model(model, inputs, tokenizer):
with torch.no_grad():
outputs = model(**inputs, return_dict=False)
token_id = outputs[0][0][-1].argmax()
answer = tokenizer.decode([token_id])
print(f"{sentence}\n{answer}")
# Inference in dynamo
def run_dynamo(model, inputs, tokenizer):
from torch import _dynamo as torchdynamo
opt_model = torchdynamo.optimize("eager")(model)
run_model(opt_model, inputs, tokenizer)
model, inputs, tokenizer = load_model()
run_model(model, inputs, tokenizer) # this works
run_dynamo(model, inputs, tokenizer) # this fails
```
cc @ezyang @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 7 |
3,926 | 90,954 |
[Composable] Enable setting state_dict_type
|
high priority, triage review, oncall: distributed, triaged, module: fsdp
|
### ๐ The feature, motivation and pitch
There are a couple of issues we need to fix to enable `set_state_dict_type` for the composable path, such as `fsdp_modules` (an FSDP class method) being used inside `set_state_dict_type`.
One suggestion is to use the common composable State APIs landed in https://github.com/pytorch/pytorch/pull/89147/files.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
3,927 | 90,952 |
Add support for torch.zero_grad in dynamo w/ dynamic shapes
|
triaged, oncall: pt2
|
### ๐ Describe the bug
Currently this graph breaks on `call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {}`.
### Versions
.
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
3,928 | 90,923 |
dynamo.optimizations.training.aot_autograd does not trace correct overload
|
triaged, oncall: pt2
|
### ๐ Describe the bug
In the following Python program, `x + 2` gets traced as `torch.ops.aten.add.Tensor` instead of `torch.ops.aten.add.Scalar` which would be more technically correct (and is the op that torch.jit.script gives). I noticed this because in Torch-MLIR we verify that the operands match the schema exactly. Is it possible to have aot_autograd trace the more technically correct overload? Or can you provide guidance about how to correctly emulate the promotion semantics in appropriate generality?
Also, to add onto that, `torch.ops.aten.add.Tensor` is missing an argument. I need special-case code in my importer to handle this (by querying the schema for the default value). One possibility could be that PyTorch provides a utility that normalizes the graph of aten ops to "typecheck" correctly against the schemas of the constituent ops including reifying all default values.
```python
from typing import List
import torch
import torch._dynamo as dynamo
from torch._dynamo.optimizations.training import aot_autograd
def my_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
gm.print_readable()
return gm
@dynamo.optimize(aot_autograd(fw_compiler=my_backend))
def f(x):
return x + 2
example_inputs = (torch.randn(3, 4),)
f(*example_inputs)
```
Output:
```
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: f32[3, 4]):
# File: /tmp/repro2.py:15, code: return x + 2
add: f32[3, 4] = torch.ops.aten.add.Tensor(arg0_1, 2); arg0_1 = None
return (add,)
```
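Regarding the elided default mentioned above: the schema itself does expose it, so a consumer can recover missing defaults with a direct query (a sketch; the attributes are those of `torch._C.FunctionSchema`):
```python
import torch

op = torch.ops.aten.add.Tensor
schema = op._schema  # torch._C.FunctionSchema

for arg in schema.arguments:
    default = arg.default_value if arg.has_default_value() else None
    print(arg.name, arg.type, default)
# The trailing keyword-only argument `alpha` reports a default of 1, which is
# the value omitted from the traced call above.
```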
### Versions
PyTorch version: 2.0.0.dev20221213+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 12.2.0-3) 12.2.0
Clang version: 14.0.6-2
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.8 (main, Nov 3 2022, 15:17:13) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.19.11-1rodete1-amd64-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0rc2
[pip3] torch==2.0.0.dev20221213+cpu
[pip3] torchvision==0.15.0.dev20221213+cpu
[conda] Could not collect
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 10 |
3,929 | 90,920 |
Support for Transformer Models on Android with Vulkan Backend
|
triaged, module: vulkan
|
### ๐ The feature, motivation and pitch
Hello
We are currently using a number of different transformer models (plain BERT encoders with attached classification heads) on Android. To improve inference performance on mobile devices, we are evaluating the PyTorch Vulkan backend. We therefore compiled PyTorch from source for desktop to export our model for mobile, specifically for the Vulkan backend. Afterwards, we compiled PyTorch with the Vulkan backend for Android, included the resulting AARs in our test app, and tried running the model. The following error occurred:
```
com.facebook.jni.CppException: Vulkan vk_format(): no corresponding format for dtype
```
After some digging around in the source code, we found that the dtype for which there is no corresponding format is `Long`. We assume that the model requires a `Long` as the index into the embedding bag. To make sure that nothing was wrong with our custom build, we also ran a basic MobileNetV2 model on Android with the Vulkan backend, and it worked as expected. We therefore assume that this is an unsupported feature.
Just out of curiosity, we also tried passing `Int` and `Float` values to the model. With `Int` we received the same error, whereas with `Float` the following error occurred:
```
com.facebook.jni.CppException: Could not run 'aten::as_strided' with arguments from the 'Vulkan' backend. This could be because the operator doesn't exist for this backend
```
Can you please confirm that this is in fact an unsupported feature? Is there currently some kind of workaround for this? Is there a plan to support this in the future?
Thank you in advance for your reply and have a great day!
### Alternatives
_No response_
### Additional context
_No response_
| 4 |
3,930 | 90,916 |
Functorch does not work with CrossEntropyLoss and label=-100
|
triaged, actionable, module: functorch
|
### ๐ Describe the bug
The following code triggers an index-out-of-bounds exception (for '-100'), even though '-100' is the value of `ignore_index`.
Below is a minimal reproducing example
```python
import torch
from functorch import grad_and_value, vmap, make_functional_with_buffers
import torch.nn as nn
model = nn.Linear(10, 10)
fmodel, _fparams, buffers = make_functional_with_buffers(model)
criterion = nn.CrossEntropyLoss(reduction="mean")
def compute_loss_stateless_model(params, buffers, sample, target):
batch = sample.unsqueeze(0)
targets = target.unsqueeze(0)
predictions = fmodel(params, buffers, batch)
loss = criterion(predictions, targets)
return loss
ft_compute_grad = grad_and_value(compute_loss_stateless_model)
ft_compute_sample_grad = vmap(ft_compute_grad, in_dims=(None, None, 0, 0))
params = list(model.parameters())
B = 256
T = 64
D = 10
inputs = torch.randn(B, D)
targets = torch.randint(0, D, (B,))
targets[1] = -100
grads, losses = ft_compute_sample_grad(params, buffers, inputs, targets)
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.0.221
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100
Nvidia driver version: 470.141.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==1.13.0
[pip3] numpy==1.23.3
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] functorch 1.13.0 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.3 py39h14f4228_1
[conda] numpy-base 1.23.3 py39h31eccc5_1
[conda] pytorch 1.13.0 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.13.0 py39_cu116 pytorch
[conda] torchvision 0.14.0 py39_cu116 pytorch
cc @zou3519 @Chillee @samdow @soumith
| 1 |
3,931 | 90,915 |
Torch SummaryWriter import fails with torch 2.0 with an error on numpy.object
|
module: tensorboard, oncall: visualization
|
### ๐ Describe the bug
On PyTorch 2.0, the line `from torch.utils.tensorboard import SummaryWriter` produces the following error about `numpy.object` (an alias removed in numpy 1.24).

### Versions
pytorch 2.0
numpy 1.24.0rc2
| 1 |
3,932 | 93,483 |
Error in guard code crashes process NULL ERROR: /Users/ezyang/Dev/pytorch-metal/torch/csrc/dynamo/eval_frame.c:251
|
triaged, bug, oncall: pt2
|
### ๐ Describe the bug
Crashing processes makes your SREs unhappy. Don't crash; propagate the exception back to Python.
This is annoying to do, as the lookup() function in eval_frame.c already uses a NULL return to mean something different.
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,933 | 90,905 |
Retrieve Tensor from Tensor.data_ptr()
|
triaged, core issue
|
### ๐ The feature, motivation and pitch
We are working on a DSL implementation that would use PyTorch to manage tensor allocation; the kernels would accept `Tensor.data_ptr()` and other attributes (e.g. stride, dtype, shape) as arguments. However, we now need to assemble this information automatically into a tensor-like object, and it is difficult to keep track of the information when it is scattered across separate arguments.
### Alternatives
There are 4 choices:
1. accept a torch tensor as input
2. encapsulate the torch tensor to hide the implementation, or bundle the information extracted from the tensor into one new type
3. assemble the information automatically, as stated before (difficult to match up the arguments and seems error-prone)
4. recover the tensor from data_ptr(), in which case the other arguments become redundant apart from keeping an explicit reference
We will try options 1 and 2 first and also keep option 4 open; a minimal sketch of option 2 is shown below.
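A minimal sketch of option 2 (bundling the scattered attributes into one object); the class and field names are purely illustrative:
```python
from dataclasses import dataclass
from typing import Tuple
import torch

@dataclass
class TensorView:
    """The attributes a kernel needs, derived from a single torch.Tensor."""
    data_ptr: int
    shape: Tuple[int, ...]
    stride: Tuple[int, ...]
    dtype: torch.dtype

    @classmethod
    def from_tensor(cls, t: torch.Tensor) -> "TensorView":
        return cls(t.data_ptr(), tuple(t.shape), tuple(t.stride()), t.dtype)

t = torch.empty(4, 8)
view = TensorView.from_tensor(t)  # single argument to pass to the kernel
```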
### Additional context
_No response_
| 5 |
3,934 | 90,900 |
Check that SymPy semantics match Python semantics
|
triaged
|
### ๐ Describe the bug
Currently, we _assume_ that Python and SymPy behave identically (e.g., type promotion), but that might not be the case. Someone needs to verify this.
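One concrete divergence that is easy to demonstrate (offered only as an example of what such verification should cover): dividing two Python ints produces a float, while dividing two SymPy integers produces an exact Rational, so downstream rounding can differ.
```python
import sympy

a, b = 7, 2
print(a / b, type(a / b))        # 3.5 <class 'float'>

s = sympy.Integer(a) / sympy.Integer(b)
print(s, type(s))                # 7/2 <class 'sympy.core.numbers.Rational'>
```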
cc @ezyang
### Versions
master
| 3 |
3,935 | 90,895 |
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
|
module: build, triaged
|
### ๐ Describe the bug
When I build PyTorch on OS X, and then attempt to import torch.distributed, I get this error:
```
$ python -c "import torch.distributed.distributed_c10d"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/ezyang/Dev/pytorch-cpu/torch/distributed/distributed_c10d.py", line 16, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
OS X build command:
```
MACOSX_DEPLOYMENT_TARGET=10.11 CC=clang CXX=clang++ python setup.py develop
```
Cmake says
```
-- Building with distributed package:
-- USE_TENSORPIPE=False
-- USE_GLOO=False
-- USE_MPI=False
-- USE_DISTRIBUTED : OFF
```
but I would still expect the import to fail gracefully and lazily.
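For context, the documented way to guard against a non-distributed build is `torch.distributed.is_available()`; the expectation here is essentially that importing the submodule directly degrades as gracefully as that check does. A sketch of the guard:
```python
import torch.distributed as dist

print(dist.is_available())  # False on a build with USE_DISTRIBUTED=OFF

if dist.is_available():
    import torch.distributed.distributed_c10d as c10d  # only safe when available
```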
### Versions
master
cc @malfet @seemethere
| 0 |
3,936 | 90,894 |
DISABLED test_numpy_ref_mps_nn_functional_group_norm_mps_float32 (__main__.TestCommonMPS)
|
triaged, module: flaky-tests, skipped, module: mps
|
Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_numpy_ref_mps_nn_functional_group_norm_mps_float32) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10102049593).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_numpy_ref_mps_nn_functional_group_norm_mps_float32`
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
3,937 | 93,482 |
[inductor] Add more matmul configurations to `TORCHINDUCTOR_MAX_AUTOTUNE=1` mode
|
triaged, enhancement, oncall: pt2
|
https://github.com/pytorch/pytorch/pull/90738 contains 12 configurations for mm/addmm/bmm/etc:
https://github.com/pytorch/pytorch/blob/2b04caf71f09728cb655ccd9c7569d04ecb8b883/torch/_inductor/ops/mm_common.py#L22-L57
While writing this PR I observed speedups by adding/tweaking configurations, and I still think there is room for even more improvements.
When run with `TORCHINDUCTOR_MAX_AUTOTUNE=1`,
those 12 configurations are benchmarked, and the fastest configuration is chosen (at compile time) here:
https://github.com/pytorch/pytorch/blob/2b04caf71f09728cb655ccd9c7569d04ecb8b883/torch/_inductor/select_algorithm.py#L576
Upstream Triton has some similar configurations:
https://github.com/openai/triton/blob/8650b4d1cbc750d659156e2c17a058736614827b/python/triton/ops/matmul.py#L30-L51
plus 100+ more that are generated in `get_configs_io_bound()`.
It then prunes those configurations via a performance model:
https://github.com/openai/triton/blob/8650b4d1cbc750d659156e2c17a058736614827b/python/triton/ops/matmul_perf_model.py#L33
We might be able to use a similar perf model + pruning approach, though we should spend time benchmarking -- especially for the mm variants like bmm that upstream Triton doesn't have ops for.
Note we will need to filter out `SPLIT_K>1` configurations, since our template doesn't allow that in order to do epilogue fusions.
We could also try making other tweaks to the templates, such as different load orders (see the GROUP_M reordering). For small sizes we could also try alternate approaches such as removing the loop and loading things into a single Triton block.
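For illustration, a hedged sketch of what a larger config pool plus a pruning step could look like (the helper and the block sizes below are examples and assumptions, not the real inductor or Triton tuning code):

```python
import triton

# Example candidate configs, in the style of upstream Triton's matmul tuning pool.
candidate_configs = [
    triton.Config({"BLOCK_M": 64, "BLOCK_N": 64, "BLOCK_K": 32}, num_stages=4, num_warps=4),
    triton.Config({"BLOCK_M": 128, "BLOCK_N": 64, "BLOCK_K": 32}, num_stages=4, num_warps=8),
    triton.Config({"BLOCK_M": 64, "BLOCK_N": 128, "BLOCK_K": 64}, num_stages=3, num_warps=8),
]

def prune_configs(configs, estimate_ms, keep=8):
    """Keep only the `keep` configs with the best estimated runtime, so that
    max-autotune benchmarks a small, promising subset instead of everything."""
    return sorted(configs, key=estimate_ms)[:keep]
```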
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,938 | 90,885 |
TorchScript with complex abs doesn't work in backward
|
oncall: jit
|
### ๐ Describe the bug
Reproduce:
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.bias = torch.nn.Parameter(torch.zeros(1, dtype=torch.float))
self.act = torch.nn.LeakyReLU()
def forward(self, z):
out = self.act(torch.abs(z) + self.bias) * torch.exp(1.j * torch.angle(z))
return out
x = torch.ones(10, dtype=torch.complex64, requires_grad=True)
model = Model()
model = torch.jit.script(model)
# model = torch.compile(model)
for i in range(10):
out = model(x).sum()
out.backward()
print('i = ', i)
```
Stacktrace:
```python
i = 0
Traceback (most recent call last):
File "/home/xwang/Developer/pytorch-test/complex-abs/repro.py", line 21, in <module>
out.backward()
File "/home/xwang/Developer/pytorch/torch/_tensor.py", line 484, in backward
torch.autograd.backward(
File "/home/xwang/Developer/pytorch/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<string>", line 183, in <backward op>
def abs(self):
def backward(grad_output):
return grad_output * self.sign()
~~~~~~~~~ <--- HERE
return torch.abs(self), backward
RuntimeError: Unlike NumPy, torch.sign is not intended to support complex numbers. Please use torch.sgn instead.
Unlike NumPy, torch.sign is not intended to support complex numbers. Please use torch.sgn instead.
Exception raised from meta at /home/xwang/Developer/pytorch/aten/src/ATen/native/UnaryOps.cpp:276 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xaa (0x7fb1caf2b39a in /home/xwang/Developer/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x79 (0x7fb1caef4c3d in /home/xwang/Developer/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x1700b87 (0x7fb191291b87 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x21caab5 (0x7fb191d5bab5 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x21cab41 (0x7fb191d5bb41 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #5: at::_ops::sign::redispatch(c10::DispatchKeySet, at::Tensor const&) + 0x84 (0x7fb191ab7f64 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x36c075e (0x7fb19325175e in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x36c0c94 (0x7fb193251c94 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x463e417 (0x7fb1941cf417 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x420e8c7 (0x7fb193d9f8c7 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x420fc7d (0x7fb193da0c7d in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #11: <unknown function> + 0x41f1344 (0x7fb193d82344 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x41f2094 (0x7fb193d83094 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x3e55e1b (0x7fb1939e6e1b in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #14: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0xaeb (0x7fb1939e05db in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #15: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x63e (0x7fb1939e17be in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #16: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x434 (0x7fb1939dcbf4 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #17: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x4b (0x7fb19a2f721b in /home/xwang/Developer/pytorch/torch/lib/libtorch_python.so)
frame #18: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x99a (0x7fb1939df42a in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #19: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x6a (0x7fb19a2f717a in /home/xwang/Developer/pytorch/torch/lib/libtorch_python.so)
frame #20: THPEngine_run_backward(_object*, _object*, _object*) + 0x55d (0x7fb19a2f5c6d in /home/xwang/Developer/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
frame #37: <unknown function> + 0x23290 (0x7fb1ea3d1290 in /usr/lib/libc.so.6)
frame #38: __libc_start_main + 0x8a (0x7fb1ea3d134a in /usr/lib/libc.so.6)
```
However, if I change `self.sign()` to `self.sgn()` here
https://github.com/pytorch/pytorch/blob/86269852de0841d5a876fef1570ef0a88797b3a9/torch/csrc/jit/runtime/symbolic_script.cpp#L996-L1001
I got
```python
i = 0
Traceback (most recent call last):
File "/home/xwang/Developer/pytorch-test/complex-abs/repro.py", line 21, in <module>
out.backward()
File "/home/xwang/Developer/pytorch/torch/_tensor.py", line 484, in backward
torch.autograd.backward(
File "/home/xwang/Developer/pytorch/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
Exception raised from validate_outputs at /home/xwang/Developer/pytorch/torch/csrc/autograd/engine.cpp:820 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xaa (0x7f4bd4f5839a in /home/xwang/Developer/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x79 (0x7f4bd4f21c3d in /home/xwang/Developer/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x3e48941 (0x7f4ba37d9941 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #3: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x6d0 (0x7f4ba37e01c0 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #4: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x63e (0x7f4ba37e17be in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #5: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x434 (0x7f4ba37dcbf4 in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #6: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) + 0x4b (0x7f4baa0f721b in /home/xwang/Developer/pytorch/torch/lib/libtorch_python.so)
frame #7: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x99a (0x7f4ba37df42a in /home/xwang/Developer/pytorch/torch/lib/libtorch_cpu.so)
frame #8: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x6a (0x7f4baa0f717a in /home/xwang/Developer/pytorch/torch/lib/libtorch_python.so)
frame #9: THPEngine_run_backward(_object*, _object*, _object*) + 0x55d (0x7f4baa0f5c6d in /home/xwang/Developer/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x23290 (0x7f4bfa0ec290 in /usr/lib/libc.so.6)
frame #27: __libc_start_main + 0x8a (0x7f4bfa0ec34a in /usr/lib/libc.so.6)
```
It doesn't work with `torch.compile` either:
```python
Traceback (most recent call last):
File "/home/xwang/Developer/pytorch/torch/_dynamo/output_graph.py", line 659, in call_user_compiler
compiled_fn = compiler_fn(gm, self.fake_example_inputs())
File "/home/xwang/Developer/pytorch/torch/_dynamo/debug_utils.py", line 913, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs, **kwargs)
File "/home/xwang/Developer/pytorch/torch/__init__.py", line 1153, in __call__
return self.compile_fn(model_, inputs_)
File "/home/xwang/Developer/pytorch/torch/_inductor/compile_fx.py", line 398, in compile_fx
return aot_autograd(
File "/home/xwang/Developer/pytorch/torch/_dynamo/optimizations/training.py", line 78, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_functorch/aot_autograd.py", line 2333, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/home/xwang/Developer/pytorch/torch/_dynamo/utils.py", line 90, in time_wrapper
r = func(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_functorch/aot_autograd.py", line 2030, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_tensor_args, aot_config)
File "/home/xwang/Developer/pytorch/torch/_functorch/aot_autograd.py", line 1297, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config)
File "/home/xwang/Developer/pytorch/torch/_functorch/aot_autograd.py", line 1498, in aot_dispatch_autograd
fx_g = make_fx(joint_forward_backward, aot_config.decompositions)(
File "/home/xwang/Developer/pytorch/torch/fx/experimental/proxy_tensor.py", line 704, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/home/xwang/Developer/pytorch/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/fx/experimental/proxy_tensor.py", line 448, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/home/xwang/Developer/pytorch/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/home/xwang/Developer/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/home/xwang/Developer/pytorch/torch/fx/experimental/proxy_tensor.py", line 464, in wrapped
out = f(*tensors)
File "/home/xwang/Developer/pytorch/torch/_functorch/aot_autograd.py", line 824, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/home/xwang/Developer/pytorch/torch/_functorch/aot_autograd.py", line 791, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/home/xwang/Developer/pytorch/torch/autograd/__init__.py", line 266, in grad
return handle_torch_function(
File "/home/xwang/Developer/pytorch/torch/overrides.py", line 1520, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/home/xwang/Developer/pytorch/torch/_inductor/overrides.py", line 37, in __torch_function__
return func(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/xwang/Developer/pytorch/torch/fx/experimental/proxy_tensor.py", line 491, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/home/xwang/Developer/pytorch/torch/fx/experimental/proxy_tensor.py", line 516, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/home/xwang/Developer/pytorch/torch/fx/experimental/proxy_tensor.py", line 352, in proxy_call
out = func(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: aten::_conj() Expected a value of type 'Tensor' for argument 'self' but instead found type 'complex'.
Position: 0
Value: 1j
Declaration: aten::_conj(Tensor(a) self) -> Tensor(a)
Cast error details: Unable to cast 1j to Tensor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/xwang/Developer/pytorch-test/complex-abs/repro.py", line 20, in <module>
out = model(x).sum()
File "/home/xwang/Developer/pytorch/torch/nn/modules/module.py", line 1482, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_dynamo/eval_frame.py", line 332, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/xwang/Developer/pytorch/torch/_dynamo/convert_frame.py", line 480, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/xwang/Developer/pytorch/torch/_dynamo/convert_frame.py", line 103, in _fn
return fn(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_dynamo/utils.py", line 90, in time_wrapper
r = func(*args, **kwargs)
File "/home/xwang/Developer/pytorch/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/home/xwang/Developer/pytorch/torch/_dynamo/convert_frame.py", line 400, in _compile
out_code = transform_code_object(code, transform)
File "/home/xwang/Developer/pytorch/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/home/xwang/Developer/pytorch/torch/_dynamo/convert_frame.py", line 387, in transform
tracer.run()
File "/home/xwang/Developer/pytorch/torch/_dynamo/symbolic_convert.py", line 1687, in run
super().run()
File "/home/xwang/Developer/pytorch/torch/_dynamo/symbolic_convert.py", line 538, in run
and self.step()
File "/home/xwang/Developer/pytorch/torch/_dynamo/symbolic_convert.py", line 501, in step
getattr(self, inst.opname)(inst)
File "/home/xwang/Developer/pytorch/torch/_dynamo/symbolic_convert.py", line 1753, in RETURN_VALUE
self.output.compile_subgraph(self)
File "/home/xwang/Developer/pytorch/torch/_dynamo/output_graph.py", line 511, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/home/xwang/Developer/pytorch/torch/_dynamo/output_graph.py", line 583, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/xwang/Developer/pytorch/torch/_dynamo/output_graph.py", line 664, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised RuntimeError: aten::_conj() Expected a value of type 'Tensor' for argument 'self' but instead found type 'complex'.
Position: 0
Value: 1j
Declaration: aten::_conj(Tensor(a) self) -> Tensor(a)
Cast error details: Unable to cast 1j to Tensor
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
```
### Versions
1e347b737bbb7666dc240bb94b4c60403ad882ae
cuda 11.8
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ptrblck @ngimel
| 0 |
3,939 | 90,878 |
Exporter for ONNX GroupNormalization operator
|
module: onnx, triaged
|
### ๐ The feature, motivation and pitch
GroupNormalization has been introduced into ONNX opset 18. https://github.com/onnx/onnx/blob/main/docs/Operators.md#GroupNormalization
It would be more efficient if the torch-to-ONNX converter could directly export to this new op instead of using a combination of primitive ops.
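As a stopgap, something along these lines might work for registering a custom symbolic that emits the new op directly; the `aten::group_norm` argument order and the opset-18 registration below are assumptions and untested:

```python
import torch
from torch.onnx import register_custom_op_symbolic, symbolic_helper

# Assumed aten::group_norm signature: (input, num_groups, weight, bias, eps, cudnn_enabled).
@symbolic_helper.parse_args("v", "i", "v", "v", "f", "i")
def group_norm_gn18(g, input, num_groups, weight, bias, eps, cudnn_enabled):
    # Map directly onto the ONNX opset-18 GroupNormalization op.
    return g.op("GroupNormalization", input, weight, bias,
                epsilon_f=eps, num_groups_i=num_groups)

register_custom_op_symbolic("aten::group_norm", group_norm_gn18, 18)
```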
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
3,940 | 93,481 |
Umbrella issue for weakref related Dynamo PyTorch test suite failures
|
triaged, bug, oncall: pt2
|
### ๐ Describe the bug
collected from stack at https://github.com/pytorch/pytorch/pull/90825
### Error logs
_No response_
### Minified repro
_No response_
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,941 | 93,480 |
Umbrella issue for only populate real_value_cache in export test suite fails
|
triaged, bug, oncall: pt2
|
### ๐ Describe the bug
See https://github.com/pytorch/pytorch/pull/90468 for the list of introduced skips
### Error logs
_No response_
### Minified repro
_No response_
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,942 | 90,866 |
Support arbitrary masks for _nested_tensor_from_mask in nn.TransformerEncoder
|
triaged, module: nestedtensor, oncall: transformer/mha
|
### ๐ The feature, motivation and pitch
I am working on transformers and am using the Better Transformer. To enjoy the speed improvement, we need to pass a src_key_padding_mask to the nn.TransformerEncoder.
The problem is that the src_key_padding_mask has to be left-aligned, which means all True values must come before all False values. That is the common case in NLP, where sentences simply have different lengths, but in computer vision the True values in a mask are usually scattered arbitrarily. I am wondering if you could support transforming a tensor with an arbitrary mask into a nested tensor, and also support transforming the nested tensor back to a normal tensor with 0 paddings according to the same mask.
This feature would be very useful for vision transformers whenever people want to use masks and still enjoy the inference speed improvement, whatever their task, for example dynamic pruning. An illustration of the two mask shapes is shown below.
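For concreteness, following the left-aligned convention described above (the exact boolean meaning depends on `_nested_tensor_from_mask`'s semantics):

```python
import torch

# Left-aligned mask (NLP-style): within each row, all True entries come first.
left_aligned = torch.tensor([[True, True, True, False],
                             [True, True, False, False]])

# Arbitrary mask (vision-style): True entries are scattered, e.g. after dynamic pruning.
scattered = torch.tensor([[True, False, True, False],
                          [False, True, True, False]])
```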
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @cpuhrsch @bhosmer @drisspg @mikaylagawarecki @erichan1
| 1 |
3,943 | 93,479 |
Umbrella issue for PyTorch test suite failures from torch.* returned non-Tensor output unimplemented
|
triaged, bug, oncall: pt2
|
### ๐ Describe the bug
_No response_
### Error logs
_No response_
### Minified repro
_No response_
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 1 |
3,944 | 90,857 |
[FSDP] Prepare to deprecate `FullyShardedDataParallel.<attrs>`
|
oncall: distributed, triaged, module: fsdp
|
Attributes including `mixed_precision`, `sharding_strategy`, and `cpu_offload` should be applied per `FlatParamHandle` / `FlatParameter`, not at the module level (i.e. `FullyShardedDataParallel` or plain module for composable path).
We should prepare to deprecate using those attributes directly and instead lower the logic to the `FlatParamHandle`. This issue is a unified place to track to-dos for usages that will need to be migrated.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,945 | 90,854 |
[FSDP] Investigate the need for public `check_is_root()` method
|
oncall: distributed, triaged, module: fsdp
|
Should we expose this method publicly for stable release?
It is currently used for `fsdp_modules()` and `register_comm_hook()`.
https://github.com/pytorch/pytorch/blob/0cd69d7cda15634cb9a86aa18ca759a4ff16b4a0/torch/distributed/fsdp/fully_sharded_data_parallel.py#L473
https://github.com/pytorch/pytorch/blob/0cd69d7cda15634cb9a86aa18ca759a4ff16b4a0/torch/distributed/fsdp/fully_sharded_data_parallel.py#L1744
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,946 | 90,848 |
[Distributed] Destruction order fiasco in ProcessGroupNCCL workCleanupLoop()
|
oncall: distributed, module: c10d
|
### ๐ Describe the bug
# The Problem
We are starting to see this race condition pop up more (thanks @rohan-varma, @kwen2501, @awgu for providing assistance in root-causing). As part of some debug flags (`NCCL_DESYNC_DEBUG`, `NCCL_ASYNC_ERROR_HANDLING`, `TORCH_DISTRIBUTED_DEBUG`) we are creating a `workCleanUpThread_` to analyze the status of work objects and throw exceptions
https://github.com/pytorch/pytorch/blob/f21cb7d77e562eb8e721c7889a041819c9938a2d/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L661
In this thread we are calling CUDA APIs to retrieve the exceptions or error out
https://github.com/pytorch/pytorch/blob/f21cb7d77e562eb8e721c7889a041819c9938a2d/aten/src/ATen/cuda/CUDAEvent.h#L87-L103
This causes issues during the destruction of this thread, as there is sometimes a case where the thread is being joined but the driver has already been unloaded, leading to a `cudaErrorCudartUnloading` error.
# Solution? (Still unknown, open to suggestions)
I am still not sure why the CUDA driver has already been shut down by the time we reach the destructor of `ProcessGroupNCCL`. Perhaps some interaction with python/pybind is causing the process group to be cleaned up after the process exits? [It is not recommended to use CUDA Runtime APIs outside of main()](https://forums.developer.nvidia.com/t/cuda-run-time-library-unload/51956/2), which is what I think is happening.
Perhaps we should consider removing these calls, but this would require changing the existing functionality of these flags, some of which our users have already turned on by default. Still, we should not have debug flags that cause errors that crash the program.
# Reproduction
```
CUDA_LAUNCH_BLOCKING=1 NCCL_DESYNC_DEBUG=1 pytest test/distributed/fsdp/test_fsdp_mixed_precision.py -vsk test_float16_on_one_submodule_skip_inputs_error
```
output (truncated):
```
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-7.1.2, pluggy-0.13.1 -- /private/home/howardhuang/.conda/envs/pytorch/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/private/home/howardhuang/pytorch-projects/pytorch/.hypothesis/examples')
rootdir: /private/home/howardhuang/pytorch-projects/pytorch, configfile: pytest.ini
plugins: hypothesis-5.43.3
collecting ... collected 56 items / 55 deselected / 1 selected
test/distributed/fsdp/test_fsdp_mixed_precision.py::TestFSDPDifferentSubmodulePrecision::test_float16_on_one_submodule_skip_inputs_error /private/home/howardhuang/pytorch-projects/pytorch/torch/backends/cudnn/__init__.py:93: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
warnings.warn(
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1213 17:18:52.884027 1074672 ProcessGroupNCCL.cpp:638] [Rank 1] NCCL_DESYNC_DEBUG and NCCL_ASYNC_ERROR_HANDLING must both be enabled. Enabling NCCL_ASYNC_ERROR_HANDLING.
I1213 17:18:52.884213 1074672 ProcessGroupNCCL.cpp:665] [Rank 1] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 1
NCCL_DESYNC_DEBUG: 1
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
I1213 17:18:52.884229 1074732 ProcessGroupNCCL.cpp:842] [Rank 1] NCCL watchdog thread started!
/private/home/howardhuang/pytorch-projects/pytorch/torch/backends/cudnn/__init__.py:93: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
warnings.warn(
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 1
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1213 17:18:53.125203 1074671 ProcessGroupNCCL.cpp:665] [Rank 0] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 1
NCCL_DESYNC_DEBUG: 1
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
I1213 17:18:53.125247 1074734 ProcessGroupNCCL.cpp:842] [Rank 0] NCCL watchdog thread started!
/private/home/howardhuang/pytorch-projects/pytorch/torch/backends/cudnn/__init__.py:93: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
warnings.warn(
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
I1213 17:18:53.126336 1074671 ProcessGroupNCCL.cpp:2389] Rank 0 using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
I1213 17:18:59.732851 1074672 ProcessGroupNCCL.cpp:2389] Rank 1 using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
I1213 17:18:59.801664 1074671 ProcessGroupNCCL.cpp:1276] NCCL_DEBUG: N/A
INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 1
INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 0
dist init r=1, world=2
dist init r=0, world=2
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: driver shutting down
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:40 (most recent call first):
frame #0: <unknown function> + 0xdf0a5 (0x7f8a7c24c0a5 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #1: std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>::operator()() const + 0x50 (0x7f8ababb1b00 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #2: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x44 (0x7f8a7c24b394 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #3: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x7f (0x7f8a7c249359 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #4: c10::cuda::c10_cuda_check_implementation(char const*, char const*, int, bool) + 0x17f (0x7f8a7c3cf96a in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x2e8e63a (0x7f8a7f2bb63a in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #6: c10d::ProcessGroupNCCL::WorkNCCL::startedGPUExecutionInternal() const + 0xbc (0x7f8a7f27f0e2 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #7: c10d::ProcessGroupNCCL::WorkNCCL::isStarted() + 0x72 (0x7f8a7f27ebf6 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #8: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x1d4 (0x7f8a7f283f8c in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #9: void std::__invoke_impl<void, void (c10d::ProcessGroupNCCL::*)(), c10d::ProcessGroupNCCL*>(std::__invoke_memfun_deref, void (c10d::ProcessGroupNCCL::*&&)(), c10d::ProcessGroupNCCL*&&) + 0x6a (0x7f8a7f2fe945 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #10: std::__invoke_result<void (c10d::ProcessGroupNCCL::*)(), c10d::ProcessGroupNCCL*>::type std::__invoke<void (c10d::ProcessGroupNCCL::*)(), c10d::ProcessGroupNCCL*>(void (c10d::ProcessGroupNCCL::*&&)(), c10d::ProcessGroupNCCL*&&) + 0x3b (0x7f8a7f2fe7fd in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #11: void std::thread::_Invoker<std::tuple<void (c10d::ProcessGroupNCCL::*)(), c10d::ProcessGroupNCCL*> >::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) + 0x47 (0x7f8a7f2fe6bd in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #12: std::thread::_Invoker<std::tuple<void (c10d::ProcessGroupNCCL::*)(), c10d::ProcessGroupNCCL*> >::operator()() + 0x1c (0x7f8a7f2fe4c0 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #13: std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (c10d::ProcessGroupNCCL::*)(), c10d::ProcessGroupNCCL*> > >::_M_run() + 0x20 (0x7f8a7f2fe1c4 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cuda.so)
frame #14: <unknown function> + 0xc9039 (0x7f8ae16d7039 in /public/apps/anaconda3/2022.05/lib/./libstdc++.so.6)
frame #15: <unknown function> + 0x8609 (0x7f8b02451609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #16: clone + 0x43 (0x7f8b02376133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 1074672, Thread 1074672:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x7f (0x7f8a7c254ad1 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::stacktraceSignalHandler(int, siginfo_t*, void*) + 0x42 (0x7f8a7c254f4e in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #2: c10::FatalSignalHandler::stacktraceSignalHandlerStatic(int, siginfo_t*, void*) + 0x31 (0x7f8a7c254f09 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #3: <unknown function> + 0x14420 (0x7f8b0245d420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: <unknown function> + 0x2ccd667 (0x7f8ab5a10667 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #5: c10::impl::OperatorEntry::updateDispatchTable_(c10::Dispatcher const&, c10::DispatchKey) + 0x6f (0x7f8ab5a0dc8b in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #6: c10::impl::OperatorEntry::deregisterKernel_(c10::Dispatcher const&, c10::optional<c10::DispatchKey>, std::_List_iterator<c10::impl::AnnotatedKernel>) + 0x1f5 (0x7f8ab5a0ce87 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #7: c10::Dispatcher::deregisterImpl_(c10::OperatorHandle const&, c10::OperatorName const&, c10::optional<c10::DispatchKey>, std::_List_iterator<c10::impl::AnnotatedKernel>) + 0x66 (0x7f8ab59fabc4 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x2cb7803 (0x7f8ab59fa803 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x2cba627 (0x7f8ab59fd627 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #10: std::function<void ()>::operator()() const + 0x36 (0x7f8ac954639e in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x2930c1c (0x7f8ab5673c1c in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x293baa2 (0x7f8ab567eaa2 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x293a0c3 (0x7f8ab567d0c3 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #14: <unknown function> + 0x2938424 (0x7f8ab567b424 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x2936865 (0x7f8ab5679865 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #16: <unknown function> + 0x293415b (0x7f8ab567715b in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0x29312ee (0x7f8ab56742ee in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #18: <unknown function> + 0x293d366 (0x7f8ab5680366 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libtorch_cpu.so)
frame #19: <unknown function> + 0x468a7 (0x7f8b0229d8a7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #20: on_exit + 0 (0x7f8b0229da60 in /lib/x86_64-linux-gnu/libc.so.6)
frame #21: <unknown function> + 0x11018e (0x558e113d218e in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
frame #22: <unknown function> + 0x1101bb (0x558e113d21bb in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
frame #23: <unknown function> + 0x11020a (0x558e113d220a in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
frame #24: PyRun_SimpleStringFlags + 0x4a (0x558e113d785e in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
frame #25: <unknown function> + 0x115cf6 (0x558e113d7cf6 in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
frame #26: Py_BytesMain + 0x39 (0x558e11510d19 in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
frame #27: __libc_start_main + 0xf3 (0x7f8b0227b083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #28: <unknown function> + 0x1dee93 (0x558e114a0e93 in /private/home/howardhuang/.conda/envs/pytorch/bin/python)
SIGABRT(6), PID: 1074672, Thread 1074677:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x7f (0x7f8a7c254ad1 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::stacktraceSignalHandler(int, siginfo_t*, void*) + 0x42 (0x7f8a7c254f4e in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #2: c10::FatalSignalHandler::stacktraceSignalHandlerStatic(int, siginfo_t*, void*) + 0x31 (0x7f8a7c254f09 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #3: <unknown function> + 0x14420 (0x7f8b0245d420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: pthread_cond_wait + 0x216 (0x7f8b02458376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: <unknown function> + 0x32604b (0x7f8a69e4b04b in /private/home/howardhuang/.conda/envs/pytorch/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libopenblasp-r0-2d23e62b.3.17.so)
frame #6: <unknown function> + 0x8609 (0x7f8b02451609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f8b02376133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 1074672, Thread 1074678:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x7f (0x7f8a7c254ad1 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::stacktraceSignalHandler(int, siginfo_t*, void*) + 0x42 (0x7f8a7c254f4e in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #2: c10::FatalSignalHandler::stacktraceSignalHandlerStatic(int, siginfo_t*, void*) + 0x31 (0x7f8a7c254f09 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #3: <unknown function> + 0x14420 (0x7f8b0245d420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: pthread_cond_wait + 0x216 (0x7f8b02458376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: <unknown function> + 0x32604b (0x7f8a69e4b04b in /private/home/howardhuang/.conda/envs/pytorch/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libopenblasp-r0-2d23e62b.3.17.so)
frame #6: <unknown function> + 0x8609 (0x7f8b02451609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f8b02376133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 1074672, Thread 1074679:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x7f (0x7f8a7c254ad1 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::stacktraceSignalHandler(int, siginfo_t*, void*) + 0x42 (0x7f8a7c254f4e in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #2: c10::FatalSignalHandler::stacktraceSignalHandlerStatic(int, siginfo_t*, void*) + 0x31 (0x7f8a7c254f09 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #3: <unknown function> + 0x14420 (0x7f8b0245d420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: pthread_cond_wait + 0x216 (0x7f8b02458376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: <unknown function> + 0x32604b (0x7f8a69e4b04b in /private/home/howardhuang/.conda/envs/pytorch/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libopenblasp-r0-2d23e62b.3.17.so)
frame #6: <unknown function> + 0x8609 (0x7f8b02451609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f8b02376133 in /lib/x86_64-linux-gnu/libc.so.6)
SIGABRT(6), PID: 1074672, Thread 1074680:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x7f (0x7f8a7c254ad1 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::stacktraceSignalHandler(int, siginfo_t*, void*) + 0x42 (0x7f8a7c254f4e in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #2: c10::FatalSignalHandler::stacktraceSignalHandlerStatic(int, siginfo_t*, void*) + 0x31 (0x7f8a7c254f09 in /private/home/howardhuang/pytorch-projects/pytorch/torch/lib/libc10.so)
frame #3: <unknown function> + 0x14420 (0x7f8b0245d420 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: pthread_cond_wait + 0x216 (0x7f8b02458376 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #5: <unknown function> + 0x32604b (0x7f8a69e4b04b in /private/home/howardhuang/.conda/envs/pytorch/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libopenblasp-r0-2d23e62b.3.17.so)
frame #6: <unknown function> + 0x8609 (0x7f8b02451609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x43 (0x7f8b02376133 in /lib/x86_64-linux-gnu/libc.so.6)
...
```
# Context
There are issues (https://github.com/pytorch/pytorch/issues/77374, https://github.com/pytorch/pytorch/issues/56390) with this signature going back over 1.5 years ago. So this has been a longstanding issue.
### Versions
trunk
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu @penguinwu
| 5 |
3,947 | 90,844 |
AOT Autograd doesn't respect no_grad() during input mutations
|
triaged, module: aotdispatch
|
min repro:
```
@torch.compile(backend='aot_eager')
def f(x):
with torch.no_grad():
x.add_(1)
x = torch.zeros(2, requires_grad=True)
# Fails with "RuntimeError: a leaf Variable that requires grad is being used in an in-place operation"
out = f(x)
```
You can see the problem when I print out:
(1) the graph that dynamo traced
(2) the functionalized, lowered-to-aten, forward graph that AOT Autograd created
(1):
```
class GraphModule(torch.nn.Module):
def forward(self, x : torch.Tensor):
# No stacktrace found for following nodes
_set_grad_enabled = torch._C._set_grad_enabled(False)
# File: tmp3.py:7, code: x.add_(1)
add_ = x.add_(1); x = None
# No stacktrace found for following nodes
_set_grad_enabled_1 = torch._C._set_grad_enabled(True)
return ()
```
(2):
```
class GraphModule(torch.nn.Module):
def forward(self, primals_1: f32[2]):
# No stacktrace found for following nodes
clone: f32[2] = torch.ops.aten.clone.default(primals_1); primals_1 = None
# File: tmp3.py:7, code: x.add_(1)
add: f32[2] = torch.ops.aten.add.Tensor(clone, 1); clone = None
return [add]
```
Dynamo adds `set_grad_enabled()` calls into the torch graph, but they disappear (and get executed) when aot autograd retraces the graph and lowers to ATen.
In this case, the last API to be called was `_set_grad_enabled_1 = torch._C._set_grad_enabled(True)` when aot autograd traced the dynamo graph, and so we're stuck with grad being enabled: at runtime when we execute the compiled graph + input mutations, `torch.is_grad_enabled() == True`, and doesn't reflect the fact that the input mutation happened inside of a no_grad() block.
I think the (easy) fix is that when we have input mutations, we should always execute them in `torch.no_grad()` mode (see the sketch below). One thing we should be careful to double-check is that errors that occurred in eager mode still happen in torch.compile() mode, i.e. if a user tries to mutate a leaf that requires grad, we should confirm that this still fails at compilation time.
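A minimal sketch of what that could look like in the runtime wrapper (the names here are illustrative, not the actual AOT Autograd internals):

```python
import torch

def apply_input_mutations(original_inputs, updated_values):
    # Replay the traced input mutations onto the real inputs under no_grad, so a
    # mutation that happened inside a no_grad() block in eager does not error here.
    with torch.no_grad():
        for inp, val in zip(original_inputs, updated_values):
            inp.copy_(val)
```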
| 1 |
3,948 | 90,842 |
nn.MultiheadAttention softmax inconsistent in training mode
|
oncall: transformer/mha
|
### ๐ Describe the bug
I was playing around with multi-head attention, and stumbled upon a weird interaction when the key has only one token:
```
import torch
from torch import nn
d=5
n=3
b=3
x=torch.normal(0,1,(b,n,d))
x_cls=torch.normal(0,1,(b,1,d))
attn = nn.MultiheadAttention(
d,
1,
dropout=0.2,
batch_first=True
)
print(attn( x,x_cls, x_cls,)[1])
print(attn.eval()( x,x_cls, x_cls,)[1])
```
which outputs:
```
tensor([[[1.2500],
[0.0000],
[1.2500]],
[[1.2500],
[1.2500],
[1.2500]],
[[1.2500],
[1.2500],
[1.2500]]], grad_fn=<DivBackward0>)
tensor([[[1.],
[1.],
[1.]],
[[1.],
[1.],
[1.]],
[[1.],
[1.],
[1.]]], grad_fn=<DivBackward0>)
```
From the theory I would expect both to give 1, since I am computing the softmax row-wise over an n x 1 matrix. However, dropout makes the output deviate from that: I get values away from 1 (1.25 is exactly the inverted-dropout scaling factor 1/(1 - 0.2), and the dropped entries become 0). I know this is not very relevant in practice, but maybe it should still be covered by a sanity test that must pass.
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 9.3.0
Clang version: 3.4.2 (tags/RELEASE_34/dot2-final)
CMake version: version 3.20.3
Libc version: glibc-2.17
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.0
[pip3] performer-pytorch==1.1.4
[pip3] pytorch-lightning==1.6.1
[pip3] torch==1.11.0
[pip3] torch-cluster==1.6.0
[pip3] torch-geometric==2.0.4
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.13
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.11.0
[pip3] torchinfo==1.7.0
[pip3] torchmetrics==0.7.3
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] libblas 3.9.0 14_linux64_mkl conda-forge
[conda] libcblas 3.9.0 14_linux64_mkl conda-forge
[conda] liblapack 3.9.0 14_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h06a4308_117
[conda] numpy 1.21.0 py38h9894fe3_0 conda-forge
[conda] performer-pytorch 1.1.4 pypi_0 pypi
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-lightning 1.6.1 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-cluster 1.6.0 pypi_0 pypi
[conda] torch-geometric 2.0.4 pypi_0 pypi
[conda] torch-scatter 2.0.9 pypi_0 pypi
[conda] torch-sparse 0.6.13 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.11.0 py38_cu113 pytorch
[conda] torchinfo 1.7.0 pypi_0 pypi
[conda] torchmetrics 0.7.3 pyhd8ed1ab_0 conda-forge
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.12.0 py38_cu113 pytorch
```
cc @jbschlosser @bhosmer @cpuhrsch @erichan1
| 4 |
3,949 | 90,837 |
[FSDP] `fully_shard` Follow-Ups & Known Issues
|
oncall: distributed, triaged, module: fsdp
|
This issue is tracking work items for `fully_shard`. The list will be dynamically evolving.
I am assigning this to myself for organizational purposes, but others may feel free to also assign themselves.
**Core Features**
- [x] Support manual "wrapping": For `fully_shard`, this means having multiple applications of `fully_shard` share data structures and only have a single local FSDP root module instead of having each application be its own distinct FSDP.
- [x] Support model checkpointing (i.e. `module.state_dict` / `module.load_state_dict`).
- [x] Support optimizer state checkpointing (i.e. `optim.state_dict` / `optim.load_state_dict`).
**Composability**
- [x] Ignore modules annotated `replicate` and test `fully_shard` above `replicate`.
- [x] Change `_get_fsdp_states()` to not traverse down through any module annotated with a different composable API. This requires the global registry to check a module's annotation.
- [ ] Investigate `fully_shard` and `checkpoint(reentrant=True)` compatibility.
- [ ] Investigate `fully_shard` and `checkpoint(reentrant=False)` compatibility.
**Better Engineering**
- [ ] Add comprehensive docstring to `fully_shard` explaining the usage and composability.
- [x] Refactor `_lazy_init` to take a sequence of constructor arguments that need to be uniform across all FSDP instances / `fully_shard` applications and verify that. Currently, we have the check only for `limit_all_gathers`. However, `use_orig_params` needs to be uniform. In this way, we will know explicitly which arguments we expect to be uniform and which we do not.
- [x] Remove `HandleConfig` and store its attributes directly on the `FlatParamHandle`.
- [x] https://github.com/pytorch/pytorch/issues/90788
- [ ] Move all FSDP attributes to the `_FSDPState` class to enable direct type checking.
- [x] Split `test_fully_shard.py` by sub-functionality. Eventually, `test_fully_shard.py` will become too long, which will make it hard to read through. I have been keeping tests grouped by sub-functionality (e.g. initialization, runtime, checkpointing). We can separate these into their own files.
- [x] After introducing "fully sharded module", I missed updating the variable names in `torch/distributed/fsdp/_wrap_utils.py` and `test/distributed/fsdp/test_utils.py` -- fix these variable names to use "fully sharded module" instead of "submodule".
https://github.com/pytorch/pytorch/blob/212873c615dd3455a24d390605335aeeebd76236/torch/distributed/fsdp/_wrap_utils.py#L119
https://github.com/pytorch/pytorch/blob/212873c615dd3455a24d390605335aeeebd76236/test/distributed/fsdp/test_utils.py#L163
**For Discussion**
- FSDP currently raises a `ValueError` if a user passes a `FullyShardedDataParallel` instance into the `ignored_modules` argument. We can relax this by skipping the `FullyShardedDataParallel` instance and instead taking its wrapped module.
https://github.com/pytorch/pytorch/blob/212873c615dd3455a24d390605335aeeebd76236/torch/distributed/fsdp/_init_utils.py#L541-L542
- `fully_shard` handles shared parameters slightly differently than `FullyShardedDataParallel` in its pseudo-auto-wrapping. We should decide on how we want to proceed with this in the ideal state and add unit tests to capture the differences.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,950 | 90,832 |
Inference Mode docs
|
module: docs, triaged, inference mode
|
### ๐ Describe the bug
A part of the explanation on inference mode [docs](https://pytorch.org/cppdocs/notes/inference_mode.html) says:
> A non-view tensor is an inference tensor if and only if it was allocated inside InferenceMode. A view tensor is an inference tensor if and only if the tensor it is a view of is an inference tensor.
where `A view tensor is an inference tensor if and only if the tensor it is a view of is an inference tensor.` is a bit ambiguous to me, does it mean: `A view tensor is an inference tensor if and only if the tensor is a view of an inference tensor.`?
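My reading of the two sentences, expressed as a small example (based only on the quoted rule, so please correct me if this is wrong):

```python
import torch

with torch.inference_mode():
    a = torch.ones(3)  # non-view tensor allocated inside InferenceMode -> inference tensor
    b = a.view(3)      # view of an inference tensor -> inference tensor

x = torch.ones(3)      # normal tensor allocated outside
with torch.inference_mode():
    y = x.view(3)      # view of a normal tensor -> NOT an inference tensor,
                       # even though the view was created inside InferenceMode

print(a.is_inference(), b.is_inference(), y.is_inference())  # per the doc: True True False
```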
### Versions
master
cc @svekars @carljparker
| 2 |
3,951 | 90,831 |
Error when using torch.compile with Pytorch2.0
|
needs reproduction, triaged, oncall: pt2
|
### ๐ Describe the bug
I tried `torch.compile` on the stable diffusion model, but I got the following error. Could you give me a hand? Thanks!
#### Codebase
https://github.com/huggingface/diffusers/blob/main/examples/text_to_image
#### Code modification
https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py#L541
```
text_encoder.to(accelerator.device, dtype=weight_dtype)
vae.to(accelerator.device, dtype=weight_dtype)
# add this
text_encoder = torch.compile(text_encoder)
```
#### my command
```
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"
accelerate launch train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=1 \
--mixed_precision="fp16" \
--max_train_steps=100 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-pokemon-model"
```
#### error log
```
Steps: 0%| | 0/100 [00:00<?, ?it/s][2022-12-14 09:00:20,033] torch._inductor.graph: [WARNING] Creating implicit fallback for:
target: aten.triu_.default
args[0]: TensorBox(StorageBox(
ComputedBuffer(name='buf0', layout=FlexibleLayout('cpu', torch.float16, size=[2, 12, 12], stride=[144, 12, 1]), data=Pointwise(
'cpu',
torch.float16,
to_dtype(constant(-65504.0, torch.float32), torch.float16),
ranges=[2, 12, 12],
origins={empty, _tensor_constant0, fill_, lift_fresh_copy}
))
))
args[1]: 1
[2022-12-14 09:00:20,035] torch._inductor.lowering: [WARNING] make_fallback(aten.triu_.default): a decomposition exists, we should switch to it
[2022-12-14 09:00:20,039] torch._inductor.ir: [WARNING] Using FallbackKernel: torch.ops.aten.triu_.default
[2022-12-14 09:00:20,040] torch._inductor.ir: [WARNING] DeviceCopy
Traceback (most recent call last):
File "/opt/conda/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
args.func(args)
File "/opt/conda/lib/python3.8/site-packages/accelerate/commands/launch.py", line 1104, in launch_command
simple_launcher(args)
File "/opt/conda/lib/python3.8/site-packages/accelerate/commands/launch.py", line 567, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python3.8', 'train_text_to_image.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--dataset_name=lambdalabs/pokemon-blip-captions', '--use_ema', '--resolution=512', '--center_crop', '--random_flip', '--train_batch_size=2', '--gradient_accumulation_steps=1', '--mixed_precision=fp16', '--max_train_steps=100', '--learning_rate=1e-05', '--max_grad_norm=1', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--output_dir=sd-pokemon-model']' died with <Signals.SIGSEGV: 11>.
```
### Versions
PyTorch version: 2.0.0.dev20221213+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0rc2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.0.dev20221213+cu117
[pip3] torch-tensorrt==1.2.0a0
[pip3] torchtext==0.11.0a0
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.14.0a0
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.24.0rc2 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 2.0.0.dev20221213+cu117 pypi_0 pypi
[conda] torch-tensorrt 1.2.0a0 pypi_0 pypi
[conda] torchtext 0.11.0a0 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.14.0a0 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 2 |
3,952 | 90,828 |
Compare oneDNN and OpenBLAS backend of PyTorch on arm64 architecture
|
module: cpu, triaged, module: bfloat16, module: arm
|
### ๐ The feature, motivation and pitch
I am optimizing PyTorch performance on ARM servers because a customer requires it.
Currently I am focusing on the inference task, and I have found that the current version of PyTorch can use two backends: one is oneDNN, which hooks into the Arm Compute Library (ACL), and the other uses a BLAS library, which on ARM server devices is typically OpenBLAS.
I am using the ARM Neoverse-N2 architecture, which supports the BF16 instruction extension. Both ACL and OpenBLAS have implemented optimized BF16 GEMM kernels, and if PyTorch users can take advantage of them it will significantly improve performance.
In fact, users can already use BF16 in the following ways:
1. If build with BLAS, the way of use `.to(torch.bfloat16)`
```python
# Assuming we are using the ResNet-50 model.
model = model.to(torch.bfloat16)
x = x.to(torch.bfloat16)
pred = model(x)
```
This approach will cause all operations to be performed in bf16 type.
This speeds up GEMM, but other ops such as activation layers and additions will perform worse than in fp32, and some ops do not support bf16 at all, such as the ROI head commonly used in object detection networks.
2. If PyTorch is built with oneDNN + ACL: set `ONEDNN_DEFAULT_FPMATH_MODE`
oneDNN + ACL has a better solution, where they set an environment variable that only converts ops with improvements to bf16, while others continue to use fp32.
```bash
ONEDNN_DEFAULT_FPMATH_MODE=BF16 python resnet.py
```
Users do not need to change their code; they just set the environment variable to BF16 when launching the command.
I profiled resnet50 in both configurations with the PyTorch profiler:
1. Built with BLAS, using `.to(torch.bfloat16)`
```bash
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::conv2d 0.14% 247.000us 57.84% 102.213ms 1.929ms 53
aten::convolution 0.21% 371.000us 57.70% 101.966ms 1.924ms 53
aten::_convolution 0.30% 525.000us 57.49% 101.595ms 1.917ms 53
aten::thnn_conv2d 0.11% 189.000us 57.19% 101.056ms 1.907ms 53
aten::_slow_conv2d_forward 56.86% 100.473ms 57.08% 100.867ms 1.903ms 53
aten::batch_norm 0.25% 436.000us 19.34% 34.178ms 644.868us 53
aten::_batch_norm_impl_index 0.28% 493.000us 19.23% 33.980ms 641.132us 53
aten::native_batch_norm 18.72% 33.078ms 18.92% 33.425ms 630.660us 53
aten::add_ 9.58% 16.927ms 9.58% 16.927ms 1.058ms 16
aten::relu_ 0.30% 529.000us 9.38% 16.571ms 338.184us 49
aten::clamp_min_ 9.25% 16.348ms 9.25% 16.348ms 333.633us 49
aten::max_pool2d 0.01% 23.000us 2.95% 5.219ms 5.219ms 1
aten::max_pool2d_with_indices 2.94% 5.196ms 2.94% 5.196ms 5.196ms 1
aten::linear 0.02% 28.000us 0.36% 642.000us 642.000us 1
aten::addmm 0.33% 576.000us 0.34% 593.000us 593.000us 1
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 176.709ms
```
2. Built with oneDNN + ACL, using `ONEDNN_DEFAULT_FPMATH_MODE`
```bash
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::conv2d 0.08% 296.000us 95.89% 343.604ms 6.483ms 53
aten::convolution 0.11% 392.000us 95.81% 343.308ms 6.478ms 53
aten::_convolution 0.16% 586.000us 95.70% 342.916ms 6.470ms 53
aten::mkldnn_convolution 68.22% 244.440ms 69.49% 249.009ms 12.450ms 20
aten::thnn_conv2d 0.04% 129.000us 26.03% 93.264ms 2.826ms 33
aten::_slow_conv2d_forward 25.92% 92.882ms 25.99% 93.135ms 2.822ms 33
aten::batch_norm 0.05% 187.000us 1.39% 4.996ms 94.264us 53
aten::_batch_norm_impl_index 0.15% 549.000us 1.33% 4.773ms 90.057us 53
aten::max_pool2d 0.01% 22.000us 1.27% 4.553ms 4.553ms 1
aten::max_pool2d_with_indices 1.26% 4.531ms 1.26% 4.531ms 4.531ms 1
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 358.335ms
```
So far, the OpenBLAS backend is significantly faster than the ACL.
So now there is a problem: the BLAS path gives better performance but is not as easy to use. Why not provide convenient bf16 support for the PyTorch+BLAS path as well? PyTorch does not read such an environment variable, so we could provide a similar interface:
```python
# Add this line before user's code
torch.set_float32_fast_math_mode("BF16")
```
The user adds this line at the top of their script; PyTorch then uses the BF16 optimization when calling GEMM, while the rest of the operations still use fp32 by default, giving the best overall performance.
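For comparison, PyTorch already exposes a global knob for the internal precision of float32 matmuls, and the proposed call could follow the same pattern. A minimal sketch (note that `set_float32_fast_math_mode` is only the proposal from this issue, while `set_float32_matmul_precision` exists today):
```python
import torch

# Existing knob: "medium" allows float32 matmuls to use bf16 internally
# where the backend supports it.
torch.set_float32_matmul_precision("medium")

# Proposed knob from this issue (hypothetical, not in PyTorch today):
# torch.set_float32_fast_math_mode("BF16")

out = torch.mm(torch.randn(256, 256), torch.randn(256, 256))
print(out.dtype)  # still float32 at the interface; only the internal compute changes
```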
### Alternatives
_No response_
### Additional context
_No response_
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet
| 5 |
3,953 | 90,827 |
Support for Pylint
|
module: lint, triaged
|
### ๐ The feature, motivation and pitch
Using linters is crucial for code safety and clarity. One of the commonly used linters Pylint (e.g. default for Visual Studio Code), fails to find members in the `torch` module as these are supposedly generated (see https://github.com/PyCQA/pylint/issues/7937).
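A minimal file that reproduces the complaint (the exact message ids can vary with the Pylint version, but `no-member`/E1101 on ordinary `torch` attributes is the typical false positive):
```python
# lint_me.py -- runs fine, yet a default Pylint setup typically reports
# E1101 "Module 'torch' has no 'tensor' member" and similar errors here.
import torch

x = torch.tensor([1.0, 2.0])
y = torch.zeros(2, dtype=torch.float32)
print((x + y).sum())
```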
### Alternatives
Pylint supports external C libraries through the use of the `--extension-pkg-allow-list` option, and this works well e.g. for NumPy, but not for PyTorch.
The common solution people use is to completely ignore `torch` members by using `--generated-members=numpy.*,torch.*`, which is like using no linting for PyTorch at all.
### Additional context
This issue has been repeatedly brought up with no solution, see https://github.com/pytorch/pytorch/issues/24807, https://github.com/pytorch/pytorch/issues/1942, https://github.com/pytorch/pytorch/issues/701. PyLint devs claim they are unable to receive member information from PyTorch, see https://github.com/PyCQA/pylint/issues/7937.
| 0 |
3,954 | 90,820 |
Support `divmod` for tensors
|
triaged, module: numpy
|
### ๐ The feature, motivation and pitch
Python has a nice builtin function called `divmod` defined as `divmod(x, y) = (x // y, x % y)`. There are some previous issues about supporting it in pytorch https://github.com/pytorch/pytorch/pull/20979 and https://github.com/pytorch/pytorch/issues/18627 but they appear to only work on integers and not tensors.
For example, in pytorch (1.11) I get the following error:
```
>>> ti = torch.randint(10, size=(3,3))
>>> divmod(ti, 3)
TypeError: unsupported operand type(s) for divmod(): 'Tensor' and 'int'
```
whereas in numpy I can do this without issue:
```
>>> ni = np.random.randint(10, size=(3,3))
>>> divmod(ni,3)
(array([[2, 3, 0],
[2, 3, 1],
[2, 1, 0]]),
array([[2, 0, 1],
[2, 0, 1],
[0, 1, 1]]))
```
### Alternatives
Alternatively pytorch could support a `tensor.divmod` method, but I think it makes more sense to just follow the standard Python API.
### Additional context
One may have to think about how broadcasting would work if you called `divmod(x, y)` where both `x` and `y` were tensors. However, in general it should probably just follow the equation above `divmod(x, y) = (x // y, x % y)`.
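For illustration, a minimal monkey-patch sketch of the proposed tensor semantics built from existing ops (only to show the intended behaviour, not a suggested implementation):
```python
import torch

def tensor_divmod(self, other):
    # divmod(x, y) = (x // y, x % y); broadcasting follows the underlying ops
    return (torch.div(self, other, rounding_mode="floor"),
            torch.remainder(self, other))

torch.Tensor.__divmod__ = tensor_divmod  # sketch only

ti = torch.randint(10, size=(3, 3))
print(divmod(ti, 3))  # now returns a (quotient, remainder) pair of tensors
```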
cc @mruberry @rgommers
| 2 |
3,955 | 90,793 |
nn.CrossEntropyLoss error out when the sample size is large
|
module: nn, module: cuda, module: memory usage, triaged, module: edge cases
|
### ๐ Describe the bug
I'm getting the error `RuntimeError: CUDA error: an illegal memory access was encountered` when running `nn.CrossEntropyLoss`. I was able to reproduce the error with the following sample code:
```
import torch
from torch.nn import CrossEntropyLoss
logits_high = 8.8125
logits_low = -6.3125
number_samples = 18423
number_class = 131072
output_class = 50263
# Scale a standard-normal draw by (logits_high - logits_low) and shift by logits_low
logits = ((logits_high - logits_low) * torch.randn(number_samples, number_class, dtype=torch.bfloat16) + logits_low).to(torch.bfloat16).to(0)
labels = torch.empty(number_samples, dtype=torch.int64)
# Create labels in range [0, output_class)
labels = labels.random_(to=output_class)
labels = labels.to(0)
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits, labels)
print(loss)
```
Running the above script triggers the illegal memory access error. The tensor sizes and distribution ranges are taken from real data. The error is sample-size related; for example, if I change `number_samples` to 15000 the error goes away. I'm running on an A100 GPU with 40 GB of memory, so this should not be an OOM.
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.18.2
Libc version: glibc-2.31
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-1080-aws-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] sagemaker-pytorch-training==2.7.0
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchdata==0.4.1
[pip3] torchnet==0.0.4
[pip3] torchvision==0.13.1+cu113
[conda] magma-cuda113 2.5.2 1 pytorch
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.4 py38h7042d01_1 conda-forge
[conda] sagemaker-pytorch-training 2.7.0 pypi_0 pypi
[conda] torch 1.12.1+cu113 pypi_0 pypi
[conda] torchaudio 0.12.1+cu113 pypi_0 pypi
[conda] torchdata 0.4.1 pypi_0 pypi
[conda] torchnet 0.0.4 pypi_0 pypi
[conda] torchvision 0.13.1+cu113 pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @saketh-are @ngimel
| 1 |
3,956 | 90,786 |
[Composable API] Add `fully_shard` state dict unit test after manual "wrapping" is supported
|
oncall: distributed, triaged, module: fsdp
|
We should remember to add `fully_shard` state dict tests after manual "wrapping" is supported.
https://github.com/pytorch/pytorch/pull/90767#discussion_r1047482214
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,957 | 90,784 |
[FSDP] Investigate `test_fsdp_pure_fp16.py` inaccuracy
|
oncall: distributed, triaged, module: fsdp
|
`test_fsdp_pure_fp16.py` has been mostly flaky or broken. For now, we are disabling the parameter check after the training step. However, this means that the backward pass could be incorrect for pure FP16 models using FSDP.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,958 | 90,775 |
Extend "torch.utils.cpp_extension.load" for both lib64 and **lib**
|
module: cpp-extensions, triaged
|
### ๐ The feature, motivation and pitch
Currently, `torch.utils.cpp_extension.load()` assumes that the CUDA library directory is named "lib64" under `${CUDA_HOME}`. (Doc: https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.load)
However, when CUDA is installed with conda, the libraries live under the conda env's "lib" directory, not "lib64".
Can we extend `torch.utils.cpp_extension.load()` to also handle this case?
Otherwise users can only work around it by symlinking "lib" to "lib64".
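A minimal sketch of the fallback the loader could apply (the helper name `cuda_lib_dir` is made up for illustration, and it assumes `CUDA_HOME` resolves to a real path):
```python
import os
from torch.utils.cpp_extension import CUDA_HOME

def cuda_lib_dir(cuda_home=CUDA_HOME):
    # Prefer the classic lib64 layout, fall back to conda's lib layout.
    for name in ("lib64", "lib"):
        candidate = os.path.join(cuda_home, name)
        if os.path.isdir(candidate):
            return candidate
    raise FileNotFoundError(f"no lib64/ or lib/ directory under {cuda_home}")

print(cuda_lib_dir())
```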
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @zou3519
| 0 |
3,959 | 90,774 |
Cannot compile torchtext model
|
triaged, oncall: pt2, module: dynamo
|
### ๐ Describe the bug
It would be nice to be able to compile torchtext models, e.g. RobertaModel:
```py
import torch
from torchtext.models import RobertaEncoderConf, RobertaModel, RobertaClassificationHead
roberta_encoder_conf = RobertaEncoderConf(vocab_size=250002)
classifier_head = RobertaClassificationHead(num_classes=2, input_dim=768)
classifier = RobertaModel(encoder_conf=roberta_encoder_conf, head=classifier_head)
input = torch.randint(250002, (7,13))
classifier = torch.compile(classifier)
print(classifier(input))
```
Produces
```
Traceback (most recent call last):
File "/home/awf/dev/pytorch/torch/_dynamo/convert_frame.py", line 398, in _compile
out_code = transform_code_object(code, transform)
File "/home/awf/dev/pytorch/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
transformations(instructions, code_options)
File "/home/awf/dev/pytorch/torch/_dynamo/convert_frame.py", line 385, in transform
tracer.run()
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 1686, in run
super().run()
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 537, in run
and self.step()
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 500, in step
getattr(self, inst.opname)(inst)
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 306, in wrapper
return inner_fn(self, inst)
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 434, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/awf/dev/pytorch/torch/_dynamo/variables/nn_module.py", line 220, in call_function
return tx.inline_user_function_return(
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 470, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 1764, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 1819, in inline_call_
tracer.run()
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 537, in run
and self.step()
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 500, in step
getattr(self, inst.opname)(inst)
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 306, in wrapper
return inner_fn(self, inst)
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/awf/dev/pytorch/torch/_dynamo/symbolic_convert.py", line 434, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/awf/dev/pytorch/torch/_dynamo/variables/torch.py", line 446, in call_function
proxy=tx.output.create_proxy(
File "/home/awf/dev/pytorch/torch/_dynamo/output_graph.py", line 734, in create_proxy
rv = super().create_proxy(
File "/home/awf/dev/pytorch/torch/fx/proxy.py", line 150, in create_proxy
args_ = self.create_arg(args)
File "/home/awf/dev/pytorch/torch/fx/_symbolic_trace.py", line 351, in create_arg
return super().create_arg(a)
File "/home/awf/dev/pytorch/torch/fx/proxy.py", line 224, in create_arg
return type(a)(self.create_arg(elem) for elem in a)
File "/home/awf/dev/pytorch/torch/fx/proxy.py", line 224, in <genexpr>
return type(a)(self.create_arg(elem) for elem in a)
File "/home/awf/dev/pytorch/torch/fx/_symbolic_trace.py", line 351, in create_arg
return super().create_arg(a)
File "/home/awf/dev/pytorch/torch/fx/proxy.py", line 252, in create_arg
raise NotImplementedError(f"argument of type: {type(a)}")
NotImplementedError: argument of type: <class 'typing._GenericAlias'>
from user code:
File "/home/awf/dev/torchtext/torchtext/models/roberta/model.py", line 122, in forward
features = self.encoder(tokens, masked_tokens)
File "/home/awf/dev/torchtext/torchtext/models/roberta/model.py", line 70, in forward
if torch.jit.isinstance(output, List[Tensor]):
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/awf/dev/gh-awf/fx-tools/dynamo_transformer.py", line 13, in <module>
print(classifier(input))
File "/home/awf/dev/pytorch/torch/nn/modules/module.py", line 1482, in _call_impl
return forward_call(*args, **kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/eval_frame.py", line 332, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/awf/dev/pytorch/torch/_dynamo/convert_frame.py", line 479, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/awf/dev/pytorch/torch/_dynamo/convert_frame.py", line 103, in _fn
return fn(*args, **kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/utils.py", line 90, in time_wrapper
r = func(*args, **kwargs)
File "/home/awf/dev/pytorch/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
return _compile(
File "/home/awf/dev/pytorch/torch/_dynamo/convert_frame.py", line 469, in _compile
raise InternalTorchDynamoError() from e
torch._dynamo.exc.InternalTorchDynamoError
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0a0+git5adc18d
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA RTX A2000 Laptop GPU
Nvidia driver version: 517.00
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0a0+git5adc18d
[pip3] torchdata==0.6.0a0+ab04b09
[pip3] torchtext==0.15.0a0+c4f1e84
[conda] blas 1.0 mkl
[conda] mkl 2022.2.1 pypi_0 pypi
[conda] mkl-include 2022.2.1 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] numpy-base 1.23.4 py310h8e6c178_0
[conda] torch 2.0.0a0+git5adc18d dev_0 <develop>
[conda] torchdata 0.6.0a0+ab04b09 dev_0 <develop>
[conda] torchtext 0.15.0a0+c4f1e84 dev_0 <develop>
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 3 |
3,960 | 90,768 |
PyTorch 2.0 not working on Windows
|
module: windows, triaged, oncall: pt2
|
### ๐ Describe the bug
When I try installing the nightly build with the following command:
`pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117`
It gives the following warning
`WARNING: torch 2.0.0.dev20221213+cu117 does not provide the extra 'dynamo'`
And when I try to check the installation with
```
git clone https://github.com/pytorch/pytorch
cd pytorch/tools/dynamo
python verify_dynamo.py
```
This results in the following error
```
Traceback (most recent call last):
File ".\verify_dynamo.py", line 167, in <module>
main()
File ".\verify_dynamo.py", line 155, in main
cuda_ver = check_cuda()
File ".\verify_dynamo.py", line 64, in check_cuda
cuda_ver = get_cuda_version()
File ".\verify_dynamo.py", line 39, in get_cuda_version
raise VerifyDynamoError(cpp_extension.CUDA_NOT_FOUND_MESSAGE)
__main__.VerifyDynamoError:
CUDA was not found on the system, please set the CUDA_HOME or the CUDA_PATH
environment variable or add NVCC to your system PATH. The extension compilation will fail.
```
The torch installation is working but dynamo seems not to work. When I run the benchmarks from [https://gist.github.com/Chillee/f86675147366a7a0c6e244eaa78660f7](https://gist.github.com/Chillee/f86675147366a7a0c6e244eaa78660f7) I get the following
```
C:\Users\user\anaconda3\envs\auxiliar\lib\site-packages\torch\_dynamo\eval_frame.py:428: UserWarning: Windows is not currently supported, torch._dynamo.optimize() will do nothing
warnings.warn(
eager: 1386.2895965576172us
PT 2.0: 1406.266689300537us
```
Which again shows that dynamo is not working. If I try installing torchdynamo separately, it gives this error:
```
Installing collected packages: torchdynamo
Running setup.py install for torchdynamo ... error
error: subprocess-exited-with-error
× Running setup.py install for torchdynamo did not run successfully.
│ exit code: 1
╰─> [108 lines of output]
running install
C:\Users\user\anaconda3\envs\auxiliar\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-38
creating build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\allowed_functions.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\bytecode_analysis.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\bytecode_transformation.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\codegen.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\config.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\convert_frame.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\debug_utils.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\eval_frame.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\exc.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\guards.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\logging.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\mutation_guard.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\output_graph.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\profiler.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\replay_record.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\resume_execution.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\side_effects.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\skipfiles.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\source.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\symbolic_convert.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\testing.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\utils.py -> build\lib.win-amd64-cpython-38\torchdynamo
copying torchdynamo\__init__.py -> build\lib.win-amd64-cpython-38\torchdynamo
creating build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\codecache.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\compile_fx.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\config.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\debug.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\decomposition.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\dependencies.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\exc.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\graph.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\ir.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\lowering.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\metrics.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\overrides.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\scheduler.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\sizevars.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\utils.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\virtualized.py -> build\lib.win-amd64-cpython-38\torchinductor
copying torchinductor\__init__.py -> build\lib.win-amd64-cpython-38\torchinductor
creating build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\analysis.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\backends.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\distributed.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\inference.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\log_args.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\normalize.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\subgraph.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\training.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
copying torchdynamo\optimizations\__init__.py -> build\lib.win-amd64-cpython-38\torchdynamo\optimizations
creating build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\base.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\builder.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\builtin.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\constant.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\dicts.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\functions.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\lists.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\misc.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\nn_module.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\tensor.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\torch.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\user_defined.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
copying torchdynamo\variables\__init__.py -> build\lib.win-amd64-cpython-38\torchdynamo\variables
creating build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\autotuner.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\common.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\cpp.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\triton.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\triton_template.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\wrapper.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\__init__.py -> build\lib.win-amd64-cpython-38\torchinductor\codegen
creating build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\autotune.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\batched_matmul.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\conv.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\conv1x1.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\conv_perf_model.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\matmul.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\mm_perf_model.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\utils.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\triton_ops\__init__.py -> build\lib.win-amd64-cpython-38\torchinductor\triton_ops
copying torchinductor\codegen\cpp_prefix.h -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\triton_conv_delta_x.j2 -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\triton_conv_delta_x_hwc.j2 -> build\lib.win-amd64-cpython-38\torchinductor\codegen
copying torchinductor\codegen\triton_mm.j2 -> build\lib.win-amd64-cpython-38\torchinductor\codegen
running build_ext
building 'torchdynamo._eval_frame' extension
creating build\temp.win-amd64-cpython-38
creating build\temp.win-amd64-cpython-38\Release
creating build\temp.win-amd64-cpython-38\Release\torchdynamo
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\user\anaconda3\envs\auxiliar\include -IC:\Users\user\anaconda3\envs\auxiliar\Include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" /Tctorchdynamo/_eval_frame.c /Fobuild\temp.win-amd64-cpython-38\Release\torchdynamo/_eval_frame.obj -Wall
_eval_frame.c
C:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt\corecrt_io.h(49): warning C4820: '_finddata32i64_t': '4' bytes padding added after data member 'name'
C:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt\corecrt_io.h(54): warning C4820: '_finddata64i32_t': '4' bytes padding added after data member 'attrib'
C:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt\corecrt_io.h(64): warning C4820: '__finddata64_t': '4' bytes padding added after data member 'attrib'
C:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt\corecrt_io.h(69): warning C4820: '__finddata64_t': '4' bytes padding added after data member 'name'
c:\users\user\anaconda3\envs\auxiliar\include\pyconfig.h(205): fatal error C1083: Cannot open include file: 'basetsd.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
ร Encountered error while trying to install package.
โฐโ> torchdynamo
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
I am using Windows 10, with AMD64 processors (the wheels for python show win_amd64 at the end). I also use 2 NVIDIA RTX 3090 with CUDA 12.0.
### Versions
Collecting environment information...
PyTorch version: 2.0.0.dev20221213+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.15 (default, Nov 24 2022, 14:38:14) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 526.86
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0rc2
[pip3] torch==2.0.0.dev20221213+cu117
[conda] numpy 1.24.0rc2 pypi_0 pypi
[conda] torch 2.0.0.dev20221213+cu117 pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 18 |
3,961 | 90,760 |
Large slow down by not calling `torch.set_num_threads`
|
module: performance, module: cpu, triaged, module: linear algebra
|
### ๐ Describe the bug
By default PyTorch sets the number of threads to the number of cores. Strangely, I found that if I don't call `torch.set_num_threads` on a computer with 20 cores (so the default value of 20 is used) the job runs very slowly, whereas if I explicitly call `torch.set_num_threads(20)` the job runs much faster. Very strange ...
The minimal working example below illustrates the problem. If I run the script without arguments it does not call `torch.set_num_threads`, and on a machine with 20 cores it takes 7 seconds; with the `--set_num_threads` argument it takes 4 seconds.
On another computer with 6 cores, the script takes 5.3 seconds without calling `set_num_threads` and 4.98 seconds with it.
The script below is a minimal working example; on a more complex piece of code I am seeing much larger slowdowns, on the order of 10x.
This [discussion](https://discuss.pytorch.org/t/how-to-check-if-pytorch-is-using-blas/50068/3) pointed me in the right direction to understand this problem.
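For reference, the threading configuration actually in effect can be inspected like this (diagnostics only; it does not change behaviour):
```python
import torch

# Shows the OpenMP/MKL settings and thread counts PyTorch is using.
print(torch.__config__.parallel_info())
print(torch.get_num_threads(), torch.get_num_interop_threads())
```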
``` python
import sys
import time
import argparse
import torch
def main(argv):
parser = argparse.ArgumentParser()
parser.add_argument("--set_num_threads", action="store_true",
help="set the number of threads to the number of cores")
parser.add_argument("--n_iter", type=int, help="number of iterations", default=100)
parser.add_argument("--n_rows", type=int, help="number of rows",
default=1000)
parser.add_argument("--n_cols", type=int, help="number of columns",
default=1000)
args = parser.parse_args()
set_num_threads = args.set_num_threads
n_iter = args.n_iter
n_rows = args.n_rows
n_cols = args.n_cols
if set_num_threads > 0:
print("called torch.set_num_threads")
torch.set_num_threads(torch.get_num_threads())
else:
print("NOT called torch.set_num_threads")
print(f"Using {torch.get_num_threads()} threads")
start_time = time.time()
for i in range(n_iter):
m = torch.randn((n_rows, n_cols), dtype=torch.double)
b = torch.randn((n_rows, 1), dtype=torch.double)
m.requires_grad = True
L = torch.linalg.cholesky(torch.matmul(m, m.T))
x = torch.cholesky_solve(b, L)
m.requires_grad = False
end_time = time.time()
print(f"Processing time {end_time-start_time}")
breakpoint()
if __name__ == "__main__":
main(sys.argv)
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.17
Python version: 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-132-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] gpytorch==1.2.1
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.4.0
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1a0
[conda] _pytorch_select 0.2 gpu_0
[conda] blas 1.0 mkl
[conda] cpuonly 1.0 0 pytorch
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] gpytorch 1.2.1 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.21.5 py37h6c91a56_3
[conda] numpy-base 1.21.5 py37ha15fc14_3
[conda] numpydoc 1.4.0 py37h06a4308_0
[conda] pytorch 1.12.1 cpu_py37he8d8e81_0
[conda] torchvision 0.13.1 cpu_py37h164cc8f_0
cc @ngimel @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 6 |
3,962 | 90,753 |
Pytorch 2.0 document issue
|
triaged
|
### ๐ The doc issue
Hello, according to the brand new PyTorch 2.0 introduction document, it seems that DDP mode is only compatible with
static_graph=True and find_unused_parameters=True for now.

However, in the detailed post, DDP needs to be run with static_graph=False.

So I suppose the correct statement is that DDP needs to be run with static_graph=False, as said in the detailed post? Could you please check this? Thanks.
Doc address:
https://pytorch.org/get-started/pytorch-2.0/#torchinductor-fast-codegen-using-a-define-by-run-ir
### Suggest a potential alternative/fix
_No response_
| 4 |
3,963 | 90,752 |
Adam (fused=True) issues
|
module: performance, module: optimizer, module: cuda, triaged
|
### ๐ Describe the bug
I have serious doubts about the 1.13 Adam(fused=True) implementation.
First, it should be faster than fused=False, but in our case it is not; the numerics also seem off.
Adam(fused=False)
[2022-12-09 12:21:37,894 INFO] Start training loop and validate every 10000 steps...
[2022-12-09 12:23:49,851 INFO] Step 100/100000; acc: 16.1; ppl: 6103.2; xent: 8.7; lr: 0.00002; sents: 169920; bsz: 8605/10659/340; 32607/40388 tok/s; 132 sec;
[2022-12-09 12:25:22,693 INFO] Step 200/100000; acc: 21.4; ppl: 957.8; xent: 6.9; lr: 0.00005; sents: 153056; bsz: 8588/10584/306; 46250/57000 tok/s; 225 sec;
[2022-12-09 12:26:55,347 INFO] Step 300/100000; acc: 25.8; ppl: 414.5; xent: 6.0; lr: 0.00007; sents: 139424; bsz: 8796/10695/279; 47467/57717 tok/s; 317 sec;
Adam(fused=True)
[2022-12-09 12:29:46,843 INFO] Step 100/100000; acc: 15.9; ppl: 7363.9; xent: 8.9; lr: 0.00002; sents: 169920; bsz: 8605/10659/340; 32465/40212 tok/s; 133 sec;
[2022-12-09 12:31:19,458 INFO] Step 200/100000; acc: 18.1; ppl: 1606.6; xent: 7.4; lr: 0.00005; sents: 153056; bsz: 8588/10584/306; 46363/57140 tok/s; 225 sec;
[2022-12-09 12:32:53,634 INFO] Step 300/100000; acc: 22.1; ppl: 702.1; xent: 6.6; lr: 0.00007; sents: 139424; bsz: 8796/10695/279; 46700/56784 tok/s; 319 sec;
Fusedadam apex O2
[2022-12-09 12:35:51,087 INFO] Step 100/100000; acc: 16.1; ppl: 6136.2; xent: 8.7; lr: 0.00002; sents: 169920; bsz: 8605/10659/340; 36301/44964 tok/s; 119 sec;
[2022-12-09 12:37:11,869 INFO] Step 200/100000; acc: 21.4; ppl: 959.9; xent: 6.9; lr: 0.00005; sents: 153056; bsz: 8588/10584/306; 53155/65510 tok/s; 199 sec;
[2022-12-09 12:38:33,601 INFO] Step 300/100000; acc: 25.8; ppl: 415.1; xent: 6.0; lr: 0.00007; sents: 139424; bsz: 8796/10695/279; 53810/65430 tok/s; 281 sec;
Fusedadam apex old legacy code
[2022-12-09 12:42:04,245 INFO] Step 100/100000; acc: 16.1; ppl: 6136.2; xent: 8.7; lr: 0.00002; sents: 169920; bsz: 8605/10659/340; 37141/46004 tok/s; 116 sec;
[2022-12-09 12:43:21,080 INFO] Step 200/100000; acc: 21.4; ppl: 959.9; xent: 6.9; lr: 0.00005; sents: 153056; bsz: 8588/10584/306; 55886/68875 tok/s; 193 sec;
[2022-12-09 12:44:41,898 INFO] Step 300/100000; acc: 25.8; ppl: 415.1; xent: 6.0; lr: 0.00007; sents: 139424; bsz: 8796/10695/279; 54419/66169 tok/s; 274 sec;
As you can see, the second run (Adam(fused=True)) is not faster; it is clearly much slower than the Apex implementations, and its accuracy, ppl and loss are off compared to the three others.
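To help isolate the optimizer cost from the rest of the OpenNMT-py training loop, here is a hypothetical micro-benchmark sketch of just the `step()` call (requires a CUDA device; this is not the setup used for the numbers above):
```python
import time
import torch

params = [torch.randn(1024, 1024, device="cuda", requires_grad=True) for _ in range(64)]
for p in params:
    p.grad = torch.randn_like(p)

for fused in (False, True):
    opt = torch.optim.Adam(params, lr=1e-3, fused=fused)
    for _ in range(5):          # warm-up
        opt.step()
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(100):
        opt.step()
    torch.cuda.synchronize()
    print(f"fused={fused}: {(time.time() - t0) * 10:.3f} ms/step")
```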
### Versions
pytorch 1.13
Ubuntu 20.04
OpentNMT-py 3.0.1
cc @ngimel @vincentqb @jbschlosser @albanD @janeyx99
| 12 |
3,964 | 90,751 |
pytorch prune in libtorch
|
module: cpp, triaged
|
### ๐ The doc issue
torch.nn.utils.prune provides PyTorch's model pruning utilities.
How can this functionality be called from libtorch (the C++ API)?
### Suggest a potential alternative/fix
_No response_
cc @jbschlosser
| 1 |
3,965 | 90,749 |
Importing torch causes segfault when using python installed with conda
|
high priority, triage review, needs reproduction, oncall: binaries, module: crash
|
### ๐ Describe the bug
Cross-posted to: https://discuss.pytorch.org/t/importing-torch-causes-segfault-when-using-python-installed-with-conda/168212
I create a conda environment with: `conda create -y -n dev python=3.7`.
I install torch with:
```
conda run -n dev pip install torch==1.14.0.dev20221027+cu116 --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu116
```
Running `python3 -c 'import torch'` gives a segfault.
Here's the gdb backtrace:
```
-c (gdb) r -c 'import torch'
Starting program: /opt/conda/envs/dev/bin/python3 -c 'import torch'
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[Detaching after fork from child process 31265]
Program received signal SIGSEGV, Segmentation fault.
0x000055555564d27c in type_name (context=<optimized out>, type=0x555558aa7630)
at /home/conda/feedstock_root/build_artifacts/python_1635226063427/work/Objects/typeobject.c:433
433 /home/conda/feedstock_root/build_artifacts/python_1635226063427/work/Objects/typeobject.c: No such file or directory.
```
### Versions
The script segfaults.
cc @ezyang @gchanan @zou3519 @seemethere @malfet
| 2 |
3,966 | 93,477 |
[Inductor] [CPU] optimize thread parallel and loop collapse
|
triaged, oncall: pt2, module: cpu inductor
|
### Description
The current thread-parallel and loop-collapse heuristics in `TorchInductor` are not optimal: they only consider the loop trip count at each level, not the loop body workload. The config parameter `min_chunk_size` determines the minimum number of loop iterations each thread should take, regardless of the loop body workload, which varies a lot.
### Real Case
The model `speech_transformer` suffers from the issue. We verified this by simply tuning `min_chunk_size` to see if the performance improves.
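For reference, this is how the knob was flipped for the comparison in the table below (a one-line sketch, assuming the parameter is exposed as the `cpp.min_chunk_size` entry in `torch._inductor.config`; the default is 4096):
```python
import torch._inductor.config as inductor_config

# 0 effectively removes the minimum-iterations-per-thread constraint.
inductor_config.cpp.min_chunk_size = 0
```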
min_chunk_size | inductor time | RSD | #collapse | #parallel | increased performance
-- | -- | -- | -- | -- | --
4096 | 0.077303238 | 0.57% | 30 | 55 | /
0 | 0.063956184 | 0.48% | 31 | 98 | **17.27%**
The default `min_chunk_size` is 4096. When it is set to 0, we do not limit the number of loop iterations one thread takes. The above result shows that performance could improve by **17.27%** if the thread-parallel and loop-collapse heuristics are optimized.
We also looked at how the generated kernels change when `min_chunk_size` changes. `Graph8, kernel0` is taken as an example.
**min_chunk_size=4096**
```
kernel_cpp_0 = async_compile.cpp('''
#include <ATen/record_function.h>
#include "/tmp/torchinductor_root/rp/crpdeql3xwpfmcyakwtqpzihz525if6mt25mozau77xvmnh7vqyu.h"
extern "C" void kernel(const long* __restrict__ in_ptr0,
const float* __restrict__ in_ptr1,
const float* __restrict__ in_ptr2,
float* __restrict__ out_ptr0,
float* __restrict__ out_ptr1,
float* __restrict__ out_ptr2)
{
RECORD_FUNCTION("kernel_cpp_0", c10::ArrayRef<c10::IValue>({}));
#pragma GCC ivdep
for(long i0=0; i0<10; i0+=1)
{
#pragma GCC ivdep
for(long i1=0; i1<22; i1+=1)
{
#pragma GCC ivdep
for(long i2=0; i2<512; i2+=1)
{
{
{
auto tmp0 = in_ptr0[i1 + (22*i0)];
auto tmp4 = in_ptr2[i2 + (512*i1)];
auto tmp1 = in_ptr1[i2 + (512*tmp0)];
auto tmp2 = static_cast<float>(22.627416997969522);
auto tmp3 = tmp1 * tmp2;
auto tmp5 = tmp3 + tmp4;
out_ptr0[i2 + (512*i1) + (11264*i0)] = tmp5;
out_ptr1[i2 + (512*i1) + (11264*i0)] = tmp5;
out_ptr2[i2 + (512*i1) + (11264*i0)] = tmp5;
}
}
}
}
}
}
''')
```
**min_chunk_size=0**
```
kernel_cpp_0 = async_compile.cpp('''
#include <ATen/record_function.h>
#include "/tmp/torchinductor_root/rp/crpdeql3xwpfmcyakwtqpzihz525if6mt25mozau77xvmnh7vqyu.h"
extern "C" void kernel(const long* __restrict__ in_ptr0,
const float* __restrict__ in_ptr1,
const float* __restrict__ in_ptr2,
float* __restrict__ out_ptr0,
float* __restrict__ out_ptr1,
float* __restrict__ out_ptr2)
{
RECORD_FUNCTION("kernel_cpp_0", c10::ArrayRef<c10::IValue>({}));
#pragma omp parallel num_threads(28)
{
#pragma omp for collapse(2)
for(long i0=0; i0<10; i0+=1)
{
for(long i1=0; i1<22; i1+=1)
{
#pragma GCC ivdep
for(long i2=0; i2<512; i2+=1)
{
{
{
auto tmp0 = in_ptr0[i1 + (22*i0)];
auto tmp4 = in_ptr2[i2 + (512*i1)];
auto tmp1 = in_ptr1[i2 + (512*tmp0)];
auto tmp2 = static_cast<float>(22.627416997969522);
auto tmp3 = tmp1 * tmp2;
auto tmp5 = tmp3 + tmp4;
out_ptr0[i2 + (512*i1) + (11264*i0)] = tmp5;
out_ptr1[i2 + (512*i1) + (11264*i0)] = tmp5;
out_ptr2[i2 + (512*i1) + (11264*i0)] = tmp5;
}
}
}
}
}
}
}
''')
```
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 0 |
3,967 | 90,742 |
Adopt full_backward_pre_hook in DDP
|
oncall: distributed, triaged, module: ddp
|
### ๐ The feature, motivation and pitch
Since https://github.com/pytorch/pytorch/pull/86700 has landed supporting the full backward pre hook, we should enable this with DDP for a true module level pre-backward hook and eliminate things such as _DDPSink.
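For context, a minimal sketch of the module-level hook that the PR adds, which is the mechanism DDP could build on (the hook body here is hypothetical; it just shows where DDP-style pre-backward work would run):
```python
import torch
import torch.nn as nn

def pre_backward(module, grad_output):
    # e.g. prepare gradient buckets / reset reduction state before grads flow
    print("backward about to run for", type(module).__name__)

model = nn.Linear(8, 8)
handle = model.register_full_backward_pre_hook(pre_backward)
model(torch.randn(4, 8)).sum().backward()
handle.remove()
```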
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,968 | 93,476 |
test_copy_broadcast
|
triaged
|
Repro:
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_torch.py -k test_copy_broadcast
```
```
FAIL: test_copy_broadcast (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/scratch/binbao/work/pytorch/test/test_torch.py", line 7650, in test_copy_broadcast
self.assertRaises(RuntimeError, lambda: torch.zeros(5, 6).copy_(torch.zeros(30)))
File "/scratch/binbao/work/pytorch/torch/testing/_internal/common_utils.py", line 2927, in assertRaises
return super().assertRaises(expected_exception, *args, **kwargs)
AssertionError: RuntimeError not raised by <lambda>
```
Note: there are two tests with this name; we should rename one of them.
| 0 |
3,969 | 90,708 |
PixelShuffle/Unshuffle Channels Last Support on GPU
|
triaged, module: memory format
|
### ๐ The feature, motivation and pitch
According to https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support pixel_shuffle & pixel_unshuffle have channels last support on CPU, but not GPU.
This appears to be the case based on anecdotal training performance for networks using PixelShuffle/Unshuffle & channels last (sorry no hard numbers on hand right now).
It is a very useful op that can really make a difference in high-fidelity pixel-level reconstruction / segmentation / salient object detection type tasks, and would be greatly appreciated!
### Alternatives
One could probably write a hand-crafted Python nn.Module that detects whether the incoming tensor is channels last and then handles the reshuffling "manually", as sketched below.
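A minimal sketch of that workaround (it only preserves the channels_last layout for downstream ops; it does not make the shuffle itself any faster):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelShuffleKeepChannelsLast(nn.Module):
    def __init__(self, upscale_factor: int):
        super().__init__()
        self.upscale_factor = upscale_factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.pixel_shuffle(x, self.upscale_factor)
        if x.is_contiguous(memory_format=torch.channels_last):
            # keep the rest of the network on the channels_last path
            out = out.contiguous(memory_format=torch.channels_last)
        return out

m = PixelShuffleKeepChannelsLast(2)
x = torch.randn(1, 16, 8, 8).to(memory_format=torch.channels_last)
print(m(x).shape)  # torch.Size([1, 4, 16, 16])
```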
### Additional context
@bdhirsh in https://github.com/pytorch/pytorch/pull/86608 "Is pixel_shuffle/unshuffle commonly used with cuda? Maybe we should just add a fast cuda kernel for it if that's the case." - yes please!
cc @jamesr66a
| 0 |
3,970 | 90,703 |
RuntimeError: kind_.is_prim() INTERNAL ASSERT FAILED. Only prim ops are allowed to not have a registered operator but aten::mul doesn't have one either. We don't know if this op has side effects.
|
module: onnx, triaged, module: primTorch
|
### ๐ Describe the bug
I got an error while converting a TorchScript module to ONNX.
I'm looking for some help on what the error message means and how to resolve it.
My goal is to implement a custom image preprocessing function/module on top of torchvision's Resize and convert it to ONNX operators.
**Steps to reproduce the issue**
1. install opencv-python = "^4.6.0.66", torch = "<1.13", and pytest with Python 3.8
2. save the code below as `test_transforms.py`
3. download the image to a folder under `tests/data`
4. run `python -m pytest test_transforms.py`
Below is the code to reproduce it.
```python
import cv2
import torch
import torchvision.transforms.functional as F
def test_rescale_with_padding():
img = torch.tensor(
cv2.imread("tests/data/2022-11-10T02-18-31-036Z-1066.jpg"), dtype=torch.float32
).permute(2, 0, 1)[None, :]
result_img = RescaleWithPadding(512, 512)(img)
module = torch.jit.script(RescaleWithPadding(512, 512))
torch.onnx.export(
model=module,
args=(img,),
f="transforms.onnx",
export_params=True,
verbose=True,
input_names=["input"],
output_names=["output"],
)
print(f"finished: {result_img.shape}")
class RescaleWithPadding(torch.nn.Module):
def __init__(self, height: int, width: int, padding_value: int = 0):
super(RescaleWithPadding, self).__init__()
self.height = height
self.width = width
self.padding_value = padding_value
self.max_size = max(height, width)
self.interpolation = F.InterpolationMode.BILINEAR
def forward(self, img: torch.Tensor):
b, c, image_height, image_width = img.shape
smaller_edge_size = min(image_height, image_width)
img = F.resize(
img=img,
size=[smaller_edge_size],
interpolation=self.interpolation,
max_size=self.max_size,
)
return img
```
Here is the detailed stacktrace:
```
__________________________ test_rescale_with_padding ___________________________
def test_rescale_with_padding():
img = torch.tensor(
cv2.imread("tests/data/2022-11-10T02-18-31-036Z-1066.jpg"), dtype=torch.float32
).permute(2, 0, 1)[None, :]
result_img = RescaleWithPadding(512, 512)(img)
expected_aspect_ratio = img.shape[2] / img.shape[3]
actual_aspect_ratio = result_img.shape[2] / result_img.shape[3]
assert actual_aspect_ratio == pytest.approx(expected_aspect_ratio, 0.01)
module = torch.jit.script(RescaleWithPadding(512, 512))
# module = RescaleWithPadding(512, 512)
> torch.onnx.export(
model=module,
args=(img,),
f="transforms.onnx",
export_params=True,
verbose=True,
input_names=["input"],
output_names=["output"],
)
tests/test_transforms.py:17:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torch/onnx/__init__.py:350: in export
return utils.export(
../../Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torch/onnx/utils.py:163: in export
_export(
../../Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torch/onnx/utils.py:1074: in _export
graph, params_dict, torch_out = _model_to_graph(
../../Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torch/onnx/utils.py:731: in _model_to_graph
graph = _optimize_graph(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
graph = graph(%img.1 : Float(1, 3, 723, 352, strides=[3, 1, 1056, 3], requires_grad=0, device=cpu)):
%1 : __torch__.tests.te...tensor.py:569:14
-> (%img.33)
block1():
-> (%img.47)
-> (%img.46)
return (%img.10)
operator_export_type = <OperatorExportTypes.ONNX: 0>
_disable_torch_constant_prop = False, fixed_batch_size = False, params_dict = {}
dynamic_axes = {}, input_names = ['input']
module = <torch.ScriptModule object at 0x17480a9f0>
def _optimize_graph(
graph: _C.Graph,
operator_export_type: _C_onnx.OperatorExportTypes,
_disable_torch_constant_prop: bool = False,
fixed_batch_size: bool = False,
params_dict=None,
dynamic_axes=None,
input_names=None,
module=None,
):
# Inline everything
_C._jit_pass_inline(graph)
# Remove fork/wait nodes
_C._jit_pass_inline_fork_wait(graph)
_C._jit_pass_lint(graph)
> _C._jit_pass_lower_all_tuples(graph)
E RuntimeError: kind_.is_prim() INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/jit/ir/ir.cpp":1219, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::mul doesn't have one either. We don't know if this op has side effects.
../../Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torch/onnx/utils.py:234: RuntimeError
----------------------------- Captured stdout call -----------------------------
Torch IR graph at exception: graph(%img.1 : Float(1, 3, 723, 352, strides=[3, 1, 1056, 3], requires_grad=0, device=cpu)):
%1 : __torch__.tests.test_transforms.___torch_mangle_4.RescaleWithPadding = prim::CreateObject()
%2 : int[] = prim::Constant[value=[0, 1, 2, 3, 4]]()
%3 : int = prim::Constant[value=4]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:548:18
%4 : int[] = prim::Constant[value=[6, 7]]()
%5 : int = prim::Constant[value=-2]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:27:30
%6 : int[] = prim::Constant[value=[1, 2]]()
%7 : str = prim::Constant[value="Tensor is not a torch image."]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:15:24
%8 : str = prim::Constant[value="builtins.TypeError"]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:15:14
%9 : bool = prim::Constant[value=0]()
%10 : int = prim::Constant[value=0]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:470:70
%11 : int = prim::Constant[value=2]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:450:32
%12 : int = prim::Constant[value=1]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:450:29
%13 : str = prim::Constant[value="builtins.ValueError"]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:444:14
%14 : str = prim::Constant[value="Size must be an int or a 1 or 2 element tuple/list, not a {} element tuple/list"]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:452:16
%15 : str = prim::Constant[value="max_size should only be passed if size specifies the length of the smaller edge, i.e. size should be an int or a sequence of length 1 in torchscript mode."]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:456:16
%16 : str = prim::Constant[value="max_size = {} must be strictly greater than the requested size for the smaller edge size = {}"]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:477:20
%17 : int = prim::Constant[value=6]() # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:491:69
%18 : str = prim::Constant[value="bilinear"]()
%19 : NoneType = prim::Constant()
%self.max_size : int = prim::Constant[value=512]()
%21 : int[] = aten::size(%img.1) # <string>:13:9
%b : int, %c : int, %image_height.1 : int, %image_width.1 : int = prim::ListUnpack(%21)
%smaller_edge_size.1 : int = prim::min(%image_height.1, %image_width.1) # /Users/asia/work/edge-client/tests/test_transforms.py:41:28
%27 : int[] = prim::ListConstruct(%smaller_edge_size.1)
%28 : Tensor = prim::Uninitialized()
%29 : int? = prim::Uninitialized()
%30 : int = aten::dim(%img.1) # <string>:3:9
%31 : bool = aten::ge(%30, %11) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:10:11
%32 : bool = aten::__not__(%31) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:14:7
= prim::If(%32) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:14:4
block0():
= prim::RaiseException(%7, %8) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:15:8
-> ()
block1():
-> ()
%33 : int = aten::len(%27) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:450:11
%34 : bool = aten::__contains__(%6, %33) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:450:11
%35 : bool = aten::__not__(%34) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:450:11
= prim::If(%35) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:450:8
block0():
%36 : str = aten::format(%14, %33) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:452:16
= prim::RaiseException(%36, %13) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:451:12
-> ()
block1():
-> ()
%37 : bool = aten::ne(%33, %12) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:454:36
%max_size.59 : int? = prim::If(%37) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:454:8
block0():
= prim::RaiseException(%15, %13) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:455:12
-> (%29)
block1():
-> (%self.max_size)
= prim::If(%32) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:14:4
block0():
= prim::RaiseException(%7, %8) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:15:8
-> ()
block1():
-> ()
%39 : int[] = aten::slice(%21, %5, %19, %12) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:27:20
%height.1 : int, %width.1 : int = prim::ListUnpack(%39)
%42 : bool = aten::eq(%33, %12) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:468:32
%43 : bool, %44 : Tensor, %new_h : int, %new_w : int = prim::If(%42) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:468:4
block0():
%47 : bool = aten::le(%width.1, %height.1) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:469:32
%99 : Tensor, %100 : Tensor = prim::If(%47) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:469:22
block0():
%49 : (int, int) = prim::TupleConstruct(%width.1, %height.1)
-> (%width.1, %height.1)
block1():
%50 : (int, int) = prim::TupleConstruct(%height.1, %width.1)
-> (%height.1, %width.1)
%98 : (int, int) = prim::TupleConstruct(%99, %100) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:469:22
%short.1 : int, %long.1 : int = prim::TupleUnpack(%98)
%requested_new_short.1 : int = aten::__getitem__(%27, %10) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:470:65
%54 : int = aten::mul(%requested_new_short.1, %100) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:472:55
%55 : float = aten::div(%54, %99) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:472:55
%new_long.1 : int = aten::Int(%55) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:472:51
%57 : bool = aten::__isnot__(%max_size.59, %19) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:474:11
%new_short : int, %new_long : int = prim::If(%57) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:474:8
block0():
%max_size.27 : int = prim::unchecked_cast(%max_size.59)
%61 : bool = aten::le(%max_size.27, %requested_new_short.1) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:475:15
= prim::If(%61) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:475:12
block0():
%62 : str = aten::format(%16, %max_size.27, %27) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:477:20
= prim::RaiseException(%62, %13) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:476:16
-> ()
block1():
-> ()
%63 : bool = aten::gt(%new_long.1, %max_size.27) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:480:15
%new_short.27 : int, %new_long.29 : int = prim::If(%63) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:480:12
block0():
%66 : int = aten::mul(%max_size.27, %requested_new_short.1) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:481:42
%67 : float = aten::div(%66, %new_long.1) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:481:42
%new_short.3 : int = aten::Int(%67) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:481:38
-> (%new_short.3, %max_size.27)
block1():
-> (%requested_new_short.1, %new_long.1)
-> (%new_short.27, %new_long.29)
block1():
-> (%requested_new_short.1, %new_long.1)
%102 : Tensor, %103 : Tensor = prim::If(%47) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:483:23
block0():
%70 : (int, int) = prim::TupleConstruct(%new_short, %new_long)
-> (%new_short, %new_long)
block1():
%71 : (int, int) = prim::TupleConstruct(%new_long, %new_short)
-> (%new_long, %new_short)
%101 : (int, int) = prim::TupleConstruct(%102, %103) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:483:23
%new_w.1 : int, %new_h.1 : int = prim::TupleUnpack(%101)
%74 : int[] = prim::ListConstruct(%width.1, %height.1)
%75 : int[] = prim::ListConstruct(%102, %103)
%76 : bool = aten::eq(%74, %75) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:485:11
%77 : Tensor = prim::If(%76) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:485:8
block0():
-> (%img.1)
block1():
-> (%28)
-> (%76, %77, %103, %102)
block1():
%new_w.5 : int = aten::__getitem__(%27, %12) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:489:23
%new_h.5 : int = aten::__getitem__(%27, %10) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:489:32
-> (%9, %28, %new_h.5, %new_w.5)
%img.10 : Tensor = prim::If(%43)
block0():
-> (%44)
block1():
%81 : bool = aten::lt(%30, %3) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:548:7
%img.37 : Tensor = prim::If(%81) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:548:4
block0():
%img.7 : Tensor = aten::unsqueeze(%img.1, %10) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:549:14
-> (%img.7)
block1():
-> (%img.1)
%out_dtype.1 : int = prim::dtype(%img.37)
%85 : bool = aten::__contains__(%4, %out_dtype.1) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:554:7
%86 : bool = aten::__not__(%85) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:554:7
%img.42 : Tensor = prim::If(%86) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:554:4
block0():
%img.23 : Tensor = aten::to(%img.37, %17, %9, %9, %19) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:557:14
-> (%img.23)
block1():
-> (%img.37)
%89 : int[] = prim::ListConstruct(%new_h, %new_w)
%img.15 : Tensor = aten::__interpolate(%img.42, %89, %19, %18, %9, %19, %9) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:496:10
%img.47 : Tensor = prim::If(%81) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:562:4
block0():
%img.5 : Tensor = aten::squeeze(%img.15, %10) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:563:14
-> (%img.5)
block1():
-> (%img.15)
%img.46 : Tensor = prim::If(%86) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:565:4
block0():
%94 : bool = aten::__contains__(%2, %out_dtype.1) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:566:11
%img.49 : Tensor = prim::If(%94) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:566:8
block0():
%img.19 : Tensor = aten::round(%img.47) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:568:18
-> (%img.19)
block1():
-> (%img.47)
%img.33 : Tensor = aten::to(%img.49, %out_dtype.1, %9, %9, %19) # /Users/asia/Library/Caches/pypoetry/virtualenvs/edge-client-NU7TENB2-py3.10/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:569:14
-> (%img.33)
block1():
-> (%img.47)
-> (%img.46)
return (%img.10)
=========================== short test summary info ============================
FAILED tests/test_transforms.py::test_rescale_with_padding - RuntimeError: ki...
```
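For reference, this is roughly what the failing test does: a simplified sketch of my `RescaleWithPadding` transform and the export call. The class body and the export arguments here are placeholders reconstructed from the IR above, not my exact test code:
```python
import torch
import torchvision.transforms.functional as F

class RescaleWithPadding(torch.nn.Module):
    # simplified stand-in for the transform in tests/test_transforms.py
    def __init__(self, max_size: int = 512):
        super().__init__()
        self.max_size = max_size

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        b, c, image_height, image_width = img.shape
        smaller_edge_size = min(image_height, image_width)
        # resize so the smaller edge matches, capped at max_size (bilinear by default)
        return F.resize(img, [smaller_edge_size], max_size=self.max_size)

model = torch.jit.script(RescaleWithPadding())
dummy = torch.rand(1, 3, 723, 352)
torch.onnx.export(model, dummy, "rescale_with_padding.onnx")
```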
## Testing data

Any pointers / documentation to help me better understand the context would be appreciated (I'm fairly new to PyTorch).
I've been reading these docs, but I still don't understand what the error message means:
https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch.jit.script
https://pytorch.org/docs/master/onnx.html
https://pytorch.org/vision/stable/transforms.html#scriptable-transforms
https://pytorch.org/vision/stable/generated/torchvision.transforms.Resize.html#torchvision.transforms.Resize
https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html#running-the-model-on-an-image-using-onnx-runtime
Thank you!!
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.0.1 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.3)
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-12.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] numpy 1.23.4 pypi_0 pypi
```
cc @ezyang @mruberry @ngimel @Lezcano @fdrocha @peterbell10
| 0 |
3,971 | 90,695 |
`torch.empty` produces incorrect tensors with `layout=sparse_csr|sparse_csc` on the CPU
|
module: sparse, module: docs, module: cpu, triaged
|
### ๐ Describe the bug
I would argue that this issue is potentially rather critical given how many sparse compressed algorithms rely on it.
A potential cause of https://github.com/pytorch/pytorch/issues/90693.
This also raises a question about the quality of the corresponding tests, which probably do not cover such extreme cases.
```python
In [1]: import torch
In [2]: torch.empty(2, 2, layout=torch.sparse_csr)
<ipython-input-2-76b85c15c1f2>:1: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /home/nik/git/Quansight/pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:54.)
torch.empty(2, 2, layout=torch.sparse_csr)
Out[2]:
tensor(crow_indices=tensor([139874393660768, 139874393660768, 139870036047680]),
col_indices=tensor([], size=(0,)),
values=tensor([], size=(0,)), size=(2, 2), nnz=0,
layout=torch.sparse_csr)
In [3]: torch.empty(2, 2, layout=torch.sparse_csc)
Out[3]:
tensor(ccol_indices=tensor([2314885530818453536, 8319099727289262112,
7591715963103638117]),
row_indices=tensor([], size=(0,)),
values=tensor([], size=(0,)), size=(2, 2), nnz=0,
layout=torch.sparse_csc)
In [4]: torch.empty(2, 2, layout=torch.sparse_csr, device='cuda')
Out[4]:
tensor(crow_indices=tensor([0, 0, 0]),
col_indices=tensor([], size=(0,)),
values=tensor([], size=(0,)), device='cuda:0', size=(2, 2), nnz=0,
layout=torch.sparse_csr)
In [5]: torch.empty(2, 2, layout=torch.sparse_csc, device='cuda')
Out[5]:
tensor(ccol_indices=tensor([0, 0, 0]),
row_indices=tensor([], size=(0,)),
values=tensor([], size=(0,)), device='cuda:0', size=(2, 2), nnz=0,
layout=torch.sparse_csc)
```
### Versions
Current master.
cc @pearu @cpuhrsch @amjames @bhosmer @svekars @carljparker @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 1 |
3,972 | 90,691 |
[ONNX] Exporting the operator ::concat to ONNX opset version 13 is not supported.
|
module: onnx, triaged
|
### ๐ Describe the bug
I want to export a UNet-like model to ONNX, but I get: `UnsupportedOperatorError: Exporting the operator ::concat to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.`
```python
# traced_model was generated by torch.jit.trace
image = torch.rand((1,3,1024,1024))
torch.onnx.export(traced_model, image, "model.onnx", input_names=input_names, output_names=output_names)
```
```
.../python3.9/site-packages/torch/onnx/utils.py:461: UserWarning: no signature found for <torch.ScriptMethod object at 0x7fad714a5090>, skipping _decide_input_format
warnings.warn("%s, skipping _decide_input_format" % e)
Traceback (most recent call last):
File ".../python3.9/site-packages/IPython/core/interactiveshell.py", line 3433, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/tmp/ipykernel_17474/1981464861.py", line 1, in <module>
torch.onnx.export(traced_model, image, "model.onnx", input_names=input_names, output_names=output_names)
File ".../python3.9/site-packages/torch/onnx/__init__.py", line 350, in export
return utils.export(
File ".../python3.9/site-packages/torch/onnx/utils.py", line 163, in export
_export(
File ".../python3.9/site-packages/torch/onnx/utils.py", line 1074, in _export
graph, params_dict, torch_out = _model_to_graph(
File ".../python3.9/site-packages/torch/onnx/utils.py", line 731, in _model_to_graph
graph = _optimize_graph(
File ".../python3.9/site-packages/torch/onnx/utils.py", line 308, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File ".../python3.9/site-packages/torch/onnx/__init__.py", line 416, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File ".../python3.9/site-packages/torch/onnx/utils.py", line 1421, in _run_symbolic_function
raise symbolic_registry.UnsupportedOperatorError(
torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::concat to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
```
The model is constructed dynamically, so it is hard to post it here. I tried to create an MWE, but it does not display this behavior:
```python
class Concatenate(nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear = nn.Linear(10,10)
def forward(self, x,y):
x = self.linear(x)
y = self.linear(y)
return torch.concat([x,y], 1)
concatenate_model = Concatenate()
example_input = (torch.rand(1,1,10), torch.rand(1,2,10))
traced_model = torch.jit.trace(concatenate_model, example_input)
torch.onnx.export(traced_model, example_input, "concatenate.onnx", input_names=["x", "y"], output_names=["result"])
```
```
.../python3.9/site-packages/torch/onnx/utils.py:461: UserWarning: no signature found for <torch.ScriptMethod object at 0x7f34a56d0950>, skipping _decide_input_format
warnings.warn("%s, skipping _decide_input_format" % e)
(no error)
```
It seems that, due to its simpler structure, a different approach is taken when creating the ONNX graph?
### Versions
```
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Linux Mint 18.1 Serena (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Clang version: Could not collect
CMake version: version 3.5.1
Libc version: glibc-2.23
Python version: 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:45:29) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-4.4.0-109-generic-x86_64-with-glibc2.23
Is CUDA available: True
CUDA runtime version: 8.0.61
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce GTX TITAN X
Nvidia driver version: 515.76
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.8.3.post1
[pip3] torch==1.12.1
[pip3] torchmetrics==0.10.3
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 10.2.89 h8f6ccaa_8 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.23.5 py39h3d75532_0 conda-forge
[conda] pytorch 1.12.1 py3.9_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-lightning 1.8.3.post1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.10.3 pypi_0 pypi
[conda] torchvision 0.13.1 py39_cu102 pytorch
```
| 4 |
3,973 | 90,688 |
In distributed get SIGTERM and run crash
|
oncall: distributed
|
### ๐ Describe the bug
Hi everyone,
I'm trying to fine-tune [GPT2-small](https://huggingface.co/gpt2) on the [OpenWebText dataset](https://huggingface.co/datasets/openwebtext) (a large dataset of 40GB+) by running a slightly modified HuggingFace [run_clm.py script](https://github.com/huggingface/transformers/blob/v4.24.0/examples/pytorch/language-modeling/run_clm.py) in distributed mode.
However, I get SIGTERM at different stages of training/inference without further explanation.
I didn't find a similar reported issue with a working solution yet, so I decided to post and ask for your help.
### Scheme of how I run my code:
    torchrun \
      --standalone \
      --nnodes=1 \
      --nproc_per_node=${NUM_GPU} \
      run_clm.py \
      --ddp_timeout 3240000 \
      --ddp_find_unused_parameters False \
      --arg_1 --arg_2 ... arg_n
## My errors:
### Error on fine-tune:
When I tried to fine-tune my model, I hit an error mid-way through tokenizing the loaded dataset: one chunk of data was tokenized successfully, but the next chunk errored:
> Running tokenizer on dataset: 1%| | 1/81 [00:04<06:15, 4.70s/ba]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 464 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 465 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 466 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 467 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 468 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 469 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 463) of binary: /venv/bin/python3
Traceback (most recent call last):
File "/venv/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/venv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 719, in main
run(args)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
[0]:
time : 2022-12-12_08:13:08
host : openwebtext-pcfz2-2zz8p
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 463)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 463
============================================================
### Error on inference:
I was running my model in inference mode only. It took 3.5 hours to load and process the validation dataset (about 310MB of data) and run inference on those examples, and after it finished all steps it errored again with the same non-informative SIGTERM error.
### Notes:
- I'm using a high value for `ddp_timeout` in order to avoid timeouts when loading and processing my huge dataset.
- I track my memory usage and it doesn't look like an OOM error.
- It doesn't seem to be a timeout error either, since I addressed that by setting a high `ddp_timeout`.
- The error doesn't appear with a single GPU.
- The error appears much sooner when I use all 8 GPUs (still 1 node) than when I use only a few of them.
Would really appreciate any help on the issue!
### Versions
Ubuntu version = 20.04.4 LTS (Focal Fossa)
Python = 3.8.10
transformers = 4.24.0
torch = 1.10.2+cu113
datasets = 2.7.0
tokenizers = 0.13.2
accelerate = 0.14.0
evaluate = 0.3.0
numpy = 1.22.3
| 0 |
3,974 | 90,676 |
RuntimeError: Error in dlopen: libnvJitLink.so.12: cannot open shared object file: No such file or directory
|
module: crash, module: distributions, triaged
|
### ๐ Describe the bug
After running the code below:
```python
import torch
from torch.distributions import MultivariateNormal
mean = torch.ones(256, 9)
covariance = torch.eye(9)
mean = mean.to("cuda:0")
covariance = covariance.to("cuda:0")
distribution = MultivariateNormal(mean, scale_tril=covariance)
actions = distribution.sample()
actions_log_prob = distribution.log_prob(actions)
```
I get the error
```console
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/bolun/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/distributions/multivariate_normal.py", line 216, in log_prob
M = _batch_mahalanobis(self._unbroadcasted_scale_tril, diff)
File "/home/bolun/miniconda3/envs/mvp/lib/python3.7/site-packages/torch/distributions/multivariate_normal.py", line 59, in _batch_mahalanobis
M_swap = torch.linalg.solve_triangular(flat_L, flat_x_swap, upper=False).pow(2).sum(-2) # shape = b x c
RuntimeError: Error in dlopen: libnvJitLink.so.12: cannot open shared object file: No such file or directory
```
I searched the `lib` directory of my `conda` environment and the `lib` directory of my `torch` package, but I did not find `libnvJitLink.so.12`.
### Versions
```console
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.10
Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:21) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-53-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A4000
GPU 1: NVIDIA RTX A4000
GPU 2: NVIDIA RTX A4000
GPU 3: NVIDIA RTX A4000
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.21.6 py37h976b520_0 conda-forge
[conda] pytorch 1.13.0 py3.7_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.13.0 py37_cu116 pytorch
[conda] torchvision 0.14.0 py37_cu116 pytorch
```
cc @fritzo @neerajprad @alicanb @nikitaved
| 0 |
3,975 | 90,663 |
[Feature Request] An alternative sampling routine for Dirichlet to fix Dirichlet and Beta sampling bugs
|
module: distributions, feature, triaged
|
### ๐ The feature, motivation and pitch
I'm a long-time user but a first-time contributor, and this is my first issue. I propose implementing the log-gamma distribution in PyTorch and using it to increase precision when sampling the Dirichlet/Beta distributions.
This feature proposal is in part related to [issue #84625](https://github.com/pytorch/pytorch/issues/84625). This issue reveals that for small parameter values, the beta distribution behaves incorrectly. More generally, the Dirichlet distribution behaves incorrectly for small parameter values.
With small parameter values (below 1e-3), the Dirichlet distribution gives a constant vector. This is not the expected behaviour. We would expect to see a vector with one entry equal to 1 and the others equal to 0 (that is, a coordinate on the N-simplex).
This occurs because of the way we generate samples of the Dirichlet distribution. PyTorch, like NumPy, samples a Dirichlet distribution using gamma variables (see [Distributions.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Distributions.cpp)): each gamma variable is divided by the sum of the gamma variables. Working through a manual example shows how the error arises:
```
import torch
import numpy as np
concentration = [1e-5, 1e-5, 1e-5, 1e-5]
gamma_vars = [torch.distributions.Gamma(p, 1) for p in concentration]
print("Gamma variable output is incredibly small: ", gamma_vars[1].sample())
samples = [gamma_var.sample() for gamma_var in gamma_vars]
vector = [sample / np.sum(samples) for sample in samples]
print("Now calculating the Dirichlet distributed vector gives us", vector)
print("Since the Gamma variables are so small, we end up getting a constant output here")
print("This is what causes us to observe the behaviour of very large values of the parameter when we input very small values")
```
This error doesn't occur in Numpy, because Numpy truncates very small gamma values to 0, meaning that instead of calculating something like `1e-38/(4 * 1e-38) = 1/4`, it just outputs `NaN` due to division by 0.
A possible fix would be to:
- Use logarithms to avoid excessively small output values from gamma. Sample in log-space to achieve better precision for small parameter values. This would involve writing a routine to sample from the log-gamma distribution. This is exactly [what Tensorflow does ](https://github.com/tensorflow/probability/blob/dc5b6b094cfc1f18f949e120c3491acb2e2af6eb/tensorflow_probability/python/distributions/gamma.py#L453-L459)
- Add a "hack" for small values to force them to be 0/output NaN like Numpy does. This means that gamma won't output such exceptionally small values anymore.
**My feature request** is to implement my first suggestion, to implement the log-gamma distribution into Pytorch, and use it to fix this bug.
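To make this concrete, here is a rough sketch of one way the log-space sampling could work, using the standard boosting identity Gamma(a) = Gamma(a+1) * U^(1/a). This is an illustration of the idea, not a proposed drop-in implementation:
```python
import torch

def log_gamma_sample(concentration: torch.Tensor) -> torch.Tensor:
    # If G ~ Gamma(a + 1, 1) and U ~ Uniform(0, 1), then G * U**(1/a) ~ Gamma(a, 1).
    # Taking logs keeps everything finite even when a is tiny.
    boosted = torch.distributions.Gamma(concentration + 1, 1.0).sample()
    u = torch.rand_like(concentration)
    return torch.log(boosted) + torch.log(u) / concentration

def sample_dirichlet_via_log_gamma(concentration: torch.Tensor) -> torch.Tensor:
    log_g = log_gamma_sample(concentration)
    # softmax(log_g)_i == g_i / sum_j g_j, i.e. the usual gamma construction of the
    # Dirichlet, but normalized stably in log-space.
    return torch.softmax(log_g, dim=-1)

print(sample_dirichlet_via_log_gamma(torch.full((4,), 1e-5)))
# expected: (approximately) a one-hot vector rather than a constant 0.25 everywhere
```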
### Alternatives
Numpy's solution is to allow gamma to generate infinitely small values, in which case it outputs NaN. See the following example.
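Roughly like this (an illustration of the behaviour, not code taken from the Numpy sources; exact outputs depend on the Numpy version):
```python
import numpy as np

g = np.random.gamma(1e-5, size=4)  # with such a tiny shape parameter the samples underflow to exactly 0.0
print(g)                           # e.g. array([0., 0., 0., 0.])
print(g / g.sum())                 # 0/0 -> array([nan, nan, nan, nan]) instead of a bogus constant vector
```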
Alternatively (or additionally), could add a warning to the documentation.
### Additional context
I would be happy to work on solving this when the issue has been discussed further.
cc @fritzo @neerajprad @alicanb @nikitaved
| 2 |
3,976 | 90,657 |
[FSDP][BE] Post-backward hook and `FlatParamHandle` dtype cleanup
|
oncall: distributed, triaged, module: fsdp
|
**Context**
I have been working on moving all gradient tensor allocations in the post-backward hook out of the post-backward stream and into the default computation stream, to see if that improves memory fragmentation. In the process, I have found that our post-backward hook code and the related mixed precision code are not in the healthiest state: there is non-intuitive branching, non-obvious variable names, and incorrect comments.
Because we may not land the change to move the gradient tensor allocations unless it shows meaningful improvement, this issue is meant to track cleaning up the code regardless.
**`FlatParamHandle` Work Items**
- [x] Rework the storage of parameter and gradient reduction dtypes on the `FlatParamHandle`. Specifically, save the original parameter dtype (`_orig_param_dtype`), the parameter dtype for forward/backward (`_fwd_bwd_param_dtype`), and the gradient reduction dtype (`_reduce_dtype`). If parameter mixed precision is not enabled, then `_fwd_bwd_param_dtype == _orig_param_dtype`, and otherwise, it is the low precision dtype. This separation makes the post-backward reduction logic simpler.
- The gradient computed by autograd must be of `_fwd_bwd_param_dtype`.
- The input tensors to the gradient reduction must be of `_reduce_dtype`.
- The gradient must be cast to `_orig_param_dtype` after the reduction and before the optimizer step.
- We must perform a tensor allocation for the gradient reduction destination tensor (1) if the gradient needs padding or (2) if the gradient computed by autograd does not have dtype equal to `_reduce_dtype`.
- (1) can only be true for sharded strategies. `NO_SHARD` never needs padding.
- (2) is equivalent to checking `_fwd_bwd_param_dtype != _reduce_dtype`.
- We must perform a tensor allocation for the gradient reduction source tensor if using a sharded strategy. This tensor is padded.
- We must perform a tensor allocation for the gradient tensor used by the optimizer if `_orig_param_dtype != _reduce_dtype`.
- For sharded strategies, if `keep_low_precision_grads`, we currently cast the gradient back to `_reduce_dtype` in the post-backward final callback via `prepare_gradient_for_optim()`. IIUC, this is done because we need an existing gradient in the `.grad` field to be able to use the `grad.data = grad.to(dtype)` hack that bypasses the check for matching parameter/gradient dtype. If we want to support applying the optimizer in the backward pass with `keep_low_precision_grads == True`, then we need to move this to the post-backward hook, in which case, this contributes one more tensor allocation.
- [ ] Rename `keep_low_precision_grads` to `keep_grads_in_reduce_dtype` in the `MixedPrecision` dataclass. This is BC breaking. However, this most directly reflects the semantics (since `reduce_dtype` is allowed to be any dtype, not necessary a low precision one). As a niche example, if a user passes `MixedPrecision(param_dtype=torch.float16, reduce_dtype=torch.float64, keep_low_precision_grads=True)`, in the current implementation, the post-backward hook will cast the gradients to FP32 (to match `_orig_param_dtype`), and the post-backward final callback will cast the gradients to FP64 (to match `_reduce_dtype`).
https://github.com/pytorch/pytorch/blob/e33f1eeeb73a8d680b8aae7944011389f76faaff/torch/distributed/fsdp/flat_param.py#L1107-L1113
**Post-Backward Hook Work Items**
- [x] Rename `sharded_grad` to `grad_to_offload` since the variable's only purpose is to be offloaded to CPU.
https://github.com/pytorch/pytorch/blob/e33f1eeeb73a8d680b8aae7944011389f76faaff/torch/distributed/fsdp/_runtime_utils.py#L647
https://github.com/pytorch/pytorch/blob/e33f1eeeb73a8d680b8aae7944011389f76faaff/torch/distributed/fsdp/_runtime_utils.py#L654
https://github.com/pytorch/pytorch/blob/e33f1eeeb73a8d680b8aae7944011389f76faaff/torch/distributed/fsdp/_runtime_utils.py#L662-L664
- [x] Fix the comment about why we need to call `record_stream(grad_to_offload, post_backward_stream)` that I added previously. Assume that we do not move any tensor allocations. For sharded strategies, the relevant gradient tensor is allocated in the post-backward stream and consumed in the post-backward stream (offloading to CPU). For `NO_SHARD`, the relevant gradient tensor is allocated in the post-backward stream if it requires a dtype change and is allocated in the default computation stream (by autograd) otherwise. The comment has the directionality wrong and does not mention that the `record_stream` may be a no-op depending on the case.
https://github.com/pytorch/pytorch/blob/e33f1eeeb73a8d680b8aae7944011389f76faaff/torch/distributed/fsdp/_runtime_utils.py#L665-L667
**Communication Hook Work Items**
- I do not have explicit work items here. I only want to mention that the effort to move tensor allocations to the default stream does not apply for communication hook allocations. Given our current design, any allocations in communication hooks will happen in the post-backward stream without choice.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,977 | 90,643 |
[FSDP][BE] `test_fsdp_comm_hooks.py` cleanup
|
oncall: distributed, triaged, module: fsdp
|
`test_fsdp_comm_hooks.py` takes too long to run (374 sec / 6:14 min on AWS cluster -- for perspective, `test_fsdp_mixed_precision.py` takes 524 sec / 8:44 min). This issue outlines some ways to make it run faster.
1. We should only test the FSDP model with nested wrapping since nested wrapping is orthogonal to the communication hook. This involves removing the `has_wrapping` argument from `Net` and each unit test in favor of only the `has_wrapping=True` path.
2. We should subtest some of the configurations. For example, `FULL_SHARD` and `SHARD_GRAD_OP` take exact same code path (both now and in the future) since they both require a reduce-scatter. These do not need to run in new processes.
3. We should consider if we want to run separate tests for both the FP16 and BF16 compression hooks. Their logic is identical except the dtype.
---
Some other miscellaneous items:
- This comment does not seem to align with the `assertEqual` since it checks the parameter, not its gradient:
https://github.com/pytorch/pytorch/blob/c7d2fb7f8649b798d805c18f3c1d21abd0bdb2dd/test/distributed/fsdp/test_fsdp_comm_hooks.py#L399-L401
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501
| 0 |
3,978 | 90,633 |
torch.min document not up to date
|
module: docs, triaged
|
### ๐ The doc issue
torch.min document not up to date
https://pytorch.org/docs/stable/generated/torch.min.html#torch.min
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker
| 1 |
3,979 | 90,613 |
`torch.inverse` multi-threading RuntimeError: lazy wrapper should be called at most once
|
triaged, module: multithreading, module: linear algebra
|
### ๐ Describe the bug
`torch.inverse` and the related modules won't work with multi-threading:
```py
import concurrent.futures
import torch
def run_task(req):
out = torch.inverse(torch.eye(4, device="cuda:0"))
return req, out
infer_tasks = [1, 2, 3, 4, 5, 6, 7, 8]
with concurrent.futures.ThreadPoolExecutor(4, "BatchTask") as executor:
futures = {executor.submit(run_task, t) for t in infer_tasks}
for fut in concurrent.futures.as_completed(futures):
print("outcome", fut.result())
```
Running this script raises a runtime error:
```
Traceback (most recent call last):
File "my_test.py", line 15, in <module>
print("outcome", fut.result())
File "/home/anaconda3/envs/py38/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
File "/home/anaconda3/envs/py38/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/anaconda3/envs/py38/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "my_test.py", line 5, in run_task
out = torch.inverse(torch.eye(4, device="cuda:0"))
RuntimeError: lazy wrapper should be called at most once
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.4.48
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA TITAN Xp COLLECTORS EDITION
GPU 1: NVIDIA TITAN Xp COLLECTORS EDITION
(monai ref: https://github.com/Project-MONAI/MONAI/issues/5696)
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 12 |
3,980 | 90,608 |
Operator overload priority should not rely on static initialization order
|
oncall: jit
|
### ๐ Describe the bug
In our internal build, calling `torch.ops.aten.add()` on two scalar float tensors will incorrectly return a Python native integer (instead of a float tensor).
### Root Cause:
I found this is because the operator overloads for `aten::add` looks like this:
```
print(torch._C._jit_get_operation('aten::add'))
(<built-in method add of PyCapsule object at 0x7fb54234aae0>, ['t', 'str', 'int', 'complex', 'float', 'int_complex', 'complex_int', 'float_complex', 'complex_float', 'int_float', 'float_int', '', 'Tensor', 'out', 'Scalar', 'Scalar_out'])
```
while in a correct build, the output should look like this:
```
(<built-in method add of PyCapsule object at 0x7fb231070930>, ['Tensor', 'Scalar', 'out', 'Scalar_out', 't', 'str', 'int', 'complex', 'float', 'int_complex', 'complex_int', 'float_complex', 'complex_float', 'int_float', 'float_int', ''])
```
i.e., the `Tensor` overload must come before the "int" overload.
If I'm not mistaken, the operator overload priority is defined by the order they are registered - which is defined by the ambiguous static initialization order. In particular, the `Tensor` overload for `aten::add` is registered in https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/templates/RegisterSchema.cpp while the `int` overload is registered in https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/runtime/register_prim_ops.cpp.
### Versions
I'm building pytorch 1.13.1-rc1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 5 |
3,981 | 90,607 |
Export to ONNX of `as_strided()` hard codes stride in the graph, although it should be dynamic
|
module: onnx, triaged, release notes: onnx
|
### ๐ Describe the bug
When exporting to ONNX a very simple PyTorch model with a `tensor.as_strided()` operation, no warning or error is raised during the export, but the exported ONNX model is wrong.
I suspect the issue is related to limited dynamic shape/stride support, either in `as_strided` or in `tensor.stride()`.
### To reproduce
Define the model:
```python
import torch
import torch.nn as nn
class MyModel3(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x: torch.Tensor):
a, b, c = x.size()
strides_original = x.stride()
shape = (a, b // 2, 4)
stride = (strides_original[0] // 2, strides_original[1] - 3, 3)
# stride = (x[0][0][0], 4, 3) <-- if used instead, this will raise an error
x_strided = x.as_strided(size=shape, stride=stride)
return x_strided
```
Export to ONNX:
```python
model = MyModel3()
x = torch.randint(8, (50, 30, 15)) + 1
res = model(x)
print(res)
torch.onnx.export(
model,
(x,),
"/home/fxmarty/asstrided_model.onnx",
input_names=["x"],
output_names=["x_out"],
dynamic_axes={"x": {0: "axis0", 1: "axis1", 2: "axis2"}},
opset_version=14
)
```
The issue remains with opset 15, 16, 17.
**No warning or error is shown during the export**.
The exported model can be found [here](https://huggingface.co/fxmarty/broken-onnx-as-strided/blob/main/asstrided_model.onnx) and a preview in netron [here](https://netron.app/?url=https://huggingface.co/fxmarty/broken-onnx-as-strided/blob/main/asstrided_model.onnx).
If I instead try to make the stride dynamic by using the shape values (`a, b, c`), or e.g. `x[0][0][0]` (see the comment in the model above for an example), then `torch.onnx.export` rightfully raises:
```
torch.onnx.errors.SymbolicValueError: Failed to export a node '%17 : Long(requires_grad=0, device=cpu) = onnx::Gather[axis=0](%16, %1), scope: __main__.MyModel3:: # /home/fxmarty/test_torchsript.py:48:0
' (in list node %21 : int[] = prim::ListConstruct(%17, %18, %20), scope: __main__.MyModel3::
) because it is not constant. Please try to make things (e.g. kernel sizes) static if possible. [Caused by the value '21 defined in (%21 : int[] = prim::ListConstruct(%17, %18, %20), scope: __main__.MyModel3::
)' (type 'List[int]') in the TorchScript graph. The containing node has kind 'prim::ListConstruct'.]
```
You can also compare the results of ORT and PyTorch by following the code in https://github.com/microsoft/onnxruntime/issues/13920 .
Thank you, pinging @justinchuby as requested!
### Versions
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] torch==1.13.0
[pip3] torch-model-archiver==0.6.1
[pip3] torch-workflow-archiver==0.2.5
[pip3] torchaudio==0.13.0
[pip3] torchinfo==1.7.0
[pip3] torchserve==0.6.1
[pip3] torchvision==0.14.0
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 anaconda
[conda] numpy 1.23.1 pypi_0 pypi
[conda] torch 1.13.0 pypi_0 pypi
[conda] torch-model-archiver 0.6.1 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.5 pypi_0 pypi
[conda] torchaudio 0.13.0 pypi_0 pypi
[conda] torchinfo 1.7.0 pypi_0 pypi
[conda] torchserve 0.6.1 pypi_0 pypi
[conda] torchvision 0.14.0 pypi_0 pypi
| 2 |
3,982 | 90,585 |
[threaded pg] All threads share one Random Number Generator
|
triaged, module: random, module: dtensor
|
### ๐ Describe the bug
In `MultiThreadedTestCase`'s worker threads, `torch.rand()` uses one shared random number generator, which breaks the assumption in some tests that `torch.rand()` after `torch.manual_seed(CONST)` will give the same result within each worker.
To verify:
```
# add this test
class ReproduceRNG(MultiThreadedTestCase):
def test_random_seed_consistency(self):
self_tensor = torch.rand(3, 3)
print(f"from rank {dist.get_rank()}")
print(self_tensor)
# modify MultiThreadedTestCase
class MultiThreadedTestCase(TestCase):
...
def perThreadSetUp(self):
super().setUp()
torch.manual_seed(5)
print("from master thread:")
for i in range(4):
print(torch.rand(3, 3))
torch.manual_seed(5)
```
Output:
```
from master thread:
tensor([[0.8303, 0.1261, 0.9075],
[0.8199, 0.9201, 0.1166],
[0.1644, 0.7379, 0.0333]])
tensor([[0.9942, 0.6064, 0.5646],
[0.0724, 0.6593, 0.7150],
[0.5793, 0.9809, 0.6502]])
tensor([[0.0566, 0.9201, 0.6698],
[0.2615, 0.0407, 0.7850],
[0.9752, 0.0903, 0.5273]])
tensor([[0.6794, 0.2639, 0.3906],
[0.1661, 0.2636, 0.0442],
[0.4884, 0.7965, 0.7432]])
from rank 3
from rank 0
from rank 1
from rank 2
tensor([[0.8303, 0.1261, 0.9075],
[0.8199, 0.9201, 0.1166],
[0.1644, 0.7379, 0.0333]])
tensor([[0.9942, 0.6064, 0.5646],
[0.0724, 0.6593, 0.7150],
[0.5793, 0.9809, 0.6502]])
tensor([[0.6794, 0.2639, 0.3906],
[0.1661, 0.2636, 0.0442],
[0.4884, 0.7965, 0.7432]])
tensor([[0.0566, 0.9201, 0.6698],
[0.2615, 0.0407, 0.7850],
[0.9752, 0.0903, 0.5273]])
```
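For comparison, giving each worker thread its own generator does give consistent per-thread results; something along these lines could be a possible workaround in tests (a sketch, not what `MultiThreadedTestCase` currently does):
```python
import torch

def per_thread_rand(seed: int = 5) -> torch.Tensor:
    gen = torch.Generator()   # thread-local generator, not shared with other workers
    gen.manual_seed(seed)
    return torch.rand(3, 3, generator=gen)
```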
### Versions
Collecting environment information...
PyTorch version: 1.14.0a0+git3dda91e
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] torch==1.14.0a0+git3dda91e
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.21.6 pypi_0 pypi
[conda] numpy-base 1.23.4 py310h8e6c178_0
[conda] torch 1.14.0a0+git3dda91e dev_0 <develop>
cc @pbelevich
| 0 |
3,983 | 93,474 |
AttributeError: 'tuple' object has no attribute 'grad'
|
triaged
|
```
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_nn.py -k test_AdaptiveAvgPool1d_cuda --verbose
```
passes with graph break;
```
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_nn.py -k test_Adaptive --verbose
```
fails with
```
test_AdaptiveAvgPool1d_cuda (__main__.TestNN) ... Traceback (most recent call last):
File "<string>", line 2, in <lambda>
AttributeError: 'tuple' object has no attribute 'grad'
ERROR RUNNING GUARDS <graph break in _backward> /scratch/binbao/work/pytorch/test/test_nn.py:90
___guarded_code.valid and
___check_type_id(input.grad, 109098144) and
___check_tensors(input, input.grad)
NULL ERROR: /scratch/binbao/work/pytorch/torch/csrc/dynamo/eval_frame.c:251
```
| 9 |
3,984 | 90,578 |
Multiprocessing "Error Propagation" doesn't work for FullyShardedDataParallelism.
|
oncall: distributed, module: fsdp
|
https://github.com/pytorch/pytorch/blob/b7dfbf876f640591c399de2c92f523432b3455a1/torch/multiprocessing/spawn.py#L131
The multiprocessing spawn error propagation relies on sending a "process.terminate" call to request that each process should stop executing. Terminate sends a _request_ that should be honored by the receiving process - but in FSDP, that request is not honored. If one process OOMs, but the others do not, that process will die and the terminate call will be sent out while the others simply end up waiting indefinitely. Due to how FSDP works, I'm assuming they're hanging waiting for the weight all-gather phase, and aren't willing to accept the terminate signal. This is a serious blocking point for FSDP users, since the typical setting (large-model training) will often lead to CUDA OOMs that cannot be handled or propagated correctly.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
3,985 | 90,574 |
Bfloat16 tensor .numpy() support
|
triaged, module: numpy, module: bfloat16
|
### ๐ The feature, motivation and pitch
Numpy doesn't support bfloat16, and doesn't plan to do so. The effect of this is that any code that calls `tensor.numpy()` breaks when you make it use bfloat16. I was thinking that converting bfloat16 to `np.float32` on output would make sense, as it just keeps the exponent and adds a few mantissa bits, so the conversion should be very fast. This would make all code that works with float32 or float16 compatible with bfloat16 out of the box, which feels like reasonable behavior to me.
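Today the workaround is an explicit upcast before the conversion, which is essentially what I'm suggesting could happen implicitly (sketch):
```python
import torch

t = torch.ones(3, dtype=torch.bfloat16)
# t.numpy()              # currently raises, since bfloat16 has no NumPy equivalent
a = t.float().numpy()    # explicit upcast to float32 works today and is cheap
```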
### Additional context
The `to_numpy` function seems to be here https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/tensor_numpy.cpp#L159
and the function that decides the output `np.dtype` seems to be here:
https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/tensor_numpy.cpp#L267
cc @mruberry @rgommers
| 10 |
3,986 | 90,560 |
[discussion, idea] Batched, vectorized base64 decoding / encoding + maybe RLE decoding / encoding
|
feature, triaged, module: vision, module: nestedtensor
|
### ๐ The feature, motivation and pitch
Discussed in context of scriptable base64 decoding here: https://github.com/pytorch/vision/issues/6878#issuecomment-1343439120, http://www.alfredklomp.com/programming/sse-base64/, https://r-libre.teluq.ca/1362/1/base64.pdf
But then I thought that base64 can be implemented in a vectorized way, and some design could probably also support batching/parallelization (especially if the inputs/outputs are of the same/padded size, or are a nestedtensor/tensorlist).
The use cases are probably limited, but could include visualization utils that need to encode many images / audio files to base64 for embedding into self-contained HTML as data URIs.
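As a rough proof of concept of the decode side, this is the kind of tensor-level implementation I have in mind (a sketch that ignores '=' padding and input validation):
```python
import torch

_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
_LUT = torch.zeros(256, dtype=torch.long)
for i, ch in enumerate(_ALPHABET):
    _LUT[ord(ch)] = i

def base64_decode(ascii_codes: torch.Tensor) -> torch.Tensor:
    # ascii_codes: uint8 tensor of base64 characters, length divisible by 4, '=' padding stripped
    v = _LUT[ascii_codes.long()].reshape(-1, 4)      # 6-bit values, one row per 4-character group
    b0 = (v[:, 0] << 2) | (v[:, 1] >> 4)
    b1 = ((v[:, 1] & 0xF) << 4) | (v[:, 2] >> 2)
    b2 = ((v[:, 2] & 0x3) << 6) | v[:, 3]
    return torch.stack([b0, b1, b2], dim=1).reshape(-1).to(torch.uint8)

print(bytes(base64_decode(torch.tensor(list(b"TWFu"), dtype=torch.uint8)).tolist()))  # b'Man'
```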
### Alternatives
_No response_
### Additional context
_No response_
cc @datumbox @vfdev-5 @pmeier @cpuhrsch @jbschlosser @bhosmer @drisspg @mikaylagawarecki
| 7 |
3,987 | 90,555 |
[RFC] Add torch.backends.tbb.is_available()
|
triaged, enhancement, module: tbb
|
### ๐ The feature, motivation and pitch
Add a similar API to `torch.backends.openmp.is_available()` for TBB, `torch.backends.tbb.is_available()`.
### Usage
The expected usage of `torch.backends.tbb.is_available()` is similar to that of `torch.backends.openmp.is_available()`. I assume users call `torch.backends.openmp.is_available()` to verify their PyTorch config, and `torch.backends.tbb.is_available()` would serve the same purpose: users could verify whether TBB is enabled in their PyTorch build with this API rather than relying on `TBB_VERSION=1`, `MKL_VERBOSE=1`, etc.
```python
import torch
torch.backends.tbb.is_available() # True if TBB enabled, False otherwise
```
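For context, the Python side could mirror `torch/backends/openmp/__init__.py`. A minimal sketch, assuming a (currently hypothetical) `torch._C.has_tbb` flag exported from the C++ build config:
```python
# sketch of torch/backends/tbb/__init__.py; torch._C.has_tbb does not exist yet
import torch

def is_available() -> bool:
    r"""Return whether PyTorch was built with TBB parallelization support."""
    return torch._C.has_tbb
```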
### Alternatives
_No response_
### Additional context
_No response_
| 4 |
3,988 | 90,553 |
Embedding dynamic quantization is not documented and hard to use
|
oncall: quantization, triaged
|
### ๐ Describe the bug
PyTorch quantization supports dynamic quantization of embeddings. For example, here is a working code snippet from `test/quantization/eager/test_quantize_eager_ptq.py`:
```
# imports added here so the snippet is self-contained
import torch
from torch.quantization import (
    default_dynamic_qconfig,
    float_qparams_weight_only_qconfig,
    quantize_dynamic,
)

class EmbeddingBagWithLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.EmbeddingBag(num_embeddings=10, embedding_dim=12,
                                         include_last_offset=True, scale_grad_by_freq=False, mode='sum')
        self.fc = torch.nn.Linear(5, 5)

    def forward(self, indices, offsets, linear_in):
        return self.emb(indices, offsets), self.fc(linear_in)

model = EmbeddingBagWithLinear().eval()
qconfig_dict = {
    torch.nn.EmbeddingBag: float_qparams_weight_only_qconfig,
    torch.nn.Linear: default_dynamic_qconfig,
}
indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8, 6, 6, 9, 1, 6, 8, 8, 3, 2, 3, 6, 3, 6, 5, 7, 0, 8, 4, 6, 5, 8, 2, 3])
offsets = torch.tensor([0, 19, 20, 28, 28, 32])
q_model = quantize_dynamic(model, qconfig_dict)
```
It's currently impossible for the user to figure this out without reading the source code. We should make it easy for the user to read https://pytorch.org/docs/master/quantization.html and https://pytorch.org/docs/master/generated/torch.quantization.quantize_dynamic.html#torch.quantization.quantize_dynamic and know how to call the `quantize_dynamic` API with the correct syntax for dynamically quantizing embeddings.
### Versions
master
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @jgong5 @Xia-Weiwen @leslie-fang-intel
| 8 |
3,989 | 90,552 |
AOT Autograd should differentiate intermediate leaves.
|
triaged, oncall: pt2, module: aotdispatch, module: dynamo
|
This failure was first noticed by @desertfire in https://github.com/pytorch/pytorch/issues/93589.
A minimum repro looks like this:
```
import torch

@torch.compile(...)
def f(x):
    leaf = torch.ones(2, requires_grad=True)
    return leaf, leaf * 2

leaf, out = f(torch.ones(2, requires_grad=True))
out.sum().backward()
# This incorrectly prints None. autograd-ing through "out" should have populated `leaf.grad`.
print(leaf.grad)
```
This case (hopefully) is pretty uncommon: Most autograd leaves in user code come from model parameters, which are always guaranteed to be function inputs. The bad situation happens when you create a differentiable leaf tensor through a factory function, inside of a graph (so it's not a graph input), and return both (leaf, tensor-computation-on-leaf) as outputs.
The problem is that when AOT Autograd traces a fwd + bwd graph, it only differentiates w.r.t graph inputs. One solution (from talking with @ezyang) would be to detect this situation, and "lift" leaves into graph inputs inside of AOT Autograd.
Ideally, we should do this in a way that doesn't add too much overhead to compilation (i.e. it would be nice if handling this situation didn't require running the forward() an extra time during compilation).
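To make the proposed fix concrete, here is a hand-written sketch of what "lifting" the leaf into a graph input means (pseudocode for the idea, not actual AOT Autograd internals). Once the leaf is an explicit input, the existing machinery differentiates w.r.t. it for free:
```python
import torch

# What the traced graph effectively looks like after lifting: the factory-created
# leaf is passed in as an extra input instead of being created inside the graph.
def f_lifted(x, lifted_leaf):
    return lifted_leaf, lifted_leaf * 2

leaf = torch.ones(2, requires_grad=True)   # hoisted out of the graph by the runtime wrapper
_, out = f_lifted(torch.ones(2, requires_grad=True), leaf)
out.sum().backward()
print(leaf.grad)                           # now populated: tensor([2., 2.])
```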
cc @ezyang @soumith @msaroufim @wconstab @ngimel @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
| 1 |
3,990 | 90,551 |
Could not run 'aten::as_strided' with arguments from the 'Metal' backend.
|
triaged, module: ios
|
### ๐ The feature, motivation and pitch
I would like to run Yolov7 (which includes keypoint detection) in iOS using GPU acceleration (with the metal framework)
[see github repo here](https://github.com/WongKinYiu/yolov7)
However, simply substituting the [model](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6-pose.pt) into the Hello-world Metal example
https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld-Metal
I get the following error
`HelloWorld-Metal[67812:4283712] Could not run 'aten::as_strided' with arguments from the 'Metal' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process `
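For context, the HelloWorld-Metal demo prepares its model roughly like the sketch below; this is my adaptation for the YOLOv7 checkpoint, and the checkpoint key and input size are assumptions on my part:
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

ckpt = torch.load("yolov7-w6-pose.pt", map_location="cpu")
model = ckpt["model"].float().eval()                           # assumed checkpoint layout
scripted = torch.jit.trace(model, torch.rand(1, 3, 960, 960))  # input size is a guess
metal_model = optimize_for_mobile(scripted, backend="metal")
metal_model._save_for_lite_interpreter("yolov7_metal.ptl")
```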
### Alternatives
I have tried converting to coreml, which also fails.
### Additional context
_No response_
| 2 |
3,991 | 90,549 |
Abort called in FSDP tests
|
oncall: distributed, module: fsdp
|
### ๐ Describe the bug
When running the test suite of PyTorch 1.12.1 I get (e.g.)
```
distributed/fsdp/test_fsdp_input failed!
distributed/fsdp/test_fsdp_mixed_precision failed!
```
Tracing this down to the origin I see:
```
terminate called without an active exception
SIGABRT(6), PID: 2450054, Thread 2450054:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
<omitting python frames>
frame #14: __libc_start_main + 0xf3 (0x14f861828cf3 in /lib64/libc.so.6)
frame #15: _start + 0x2e (0x40106e in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/Python/3.10.4-GCCcore-11.3.0/bin/python)
SIGABRT(6), PID: 2450054, Thread 2450055:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #2: __poll + 0x51 (0x14f861912ac1 in /lib64/libc.so.6)
frame #3: <unknown function> + 0x292929 (0x14f831ff4929 in /lib64/libcuda.so.1)
frame #4: <unknown function> + 0x3481f4 (0x14f8320aa1f4 in /lib64/libcuda.so.1)
frame #5: <unknown function> + 0x28e3a8 (0x14f831ff03a8 in /lib64/libcuda.so.1)
frame #6: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #7: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
SIGABRT(6), PID: 2450054, Thread 2450056:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #2: pthread_cond_timedwait + 0x25a (0x14f861dc579a in /lib64/libpthread.so.0)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdogInternal() + 0x4ec (0x14f82098e01c in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x79 (0x14f82098eef9 in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0xdfa44 (0x14f8337c1a44 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libstdc++.so.6)
frame #6: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #7: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
SIGABRT(6), PID: 2450054, Thread 2450057:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #2: __poll + 0x51 (0x14f861912ac1 in /lib64/libc.so.6)
frame #3: <unknown function> + 0x292929 (0x14f831ff4929 in /lib64/libcuda.so.1)
frame #4: <unknown function> + 0x3481f4 (0x14f8320aa1f4 in /lib64/libcuda.so.1)
frame #5: <unknown function> + 0x28e3a8 (0x14f831ff03a8 in /lib64/libcuda.so.1)
frame #6: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #7: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
SIGABRT(6), PID: 2450054, Thread 2450058:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #2: __read + 0x44 (0x14f861dc8aa4 in /lib64/libpthread.so.0)
frame #3: ibv_get_async_event + 0x33 (0x14f81dbf5f73 in /lib64/libibverbs.so.1)
frame #4: <unknown function> + 0x640f2 (0x14f80d1530f2 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0/lib/libnccl.so.2)
frame #5: <unknown function> + 0x74c4b (0x14f80d163c4b in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0/lib/libnccl.so.2)
frame #6: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #7: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
SIGABRT(6), PID: 2450054, Thread 2450061:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #2: __poll + 0x51 (0x14f861912ac1 in /lib64/libc.so.6)
frame #3: <unknown function> + 0x5ccd2 (0x14f80d14bcd2 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0/lib/libnccl.so.2)
frame #4: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #5: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
SIGABRT(6), PID: 2450054, Thread 2450062:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #2: pthread_cond_wait + 0x1fc (0x14f861dc544c in /lib64/libpthread.so.0)
frame #3: <unknown function> + 0x5c410 (0x14f80d14b410 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0/lib/libnccl.so.2)
frame #4: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #5: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
SIGABRT(6), PID: 2450054, Thread 2450064:
frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x5f (0x14f81facb4df in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::FatalSignalHandler::fatalSignalHandler(int) + 0x15a (0x14f81facb8fa in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x12ce0 (0x14f861dc9ce0 in /lib64/libpthread.so.0)
frame #3: gsignal + 0x10f (0x14f86183ca9f in /lib64/libc.so.6)
frame #4: abort + 0x127 (0x14f86180fe05 in /lib64/libc.so.6)
frame #5: <unknown function> + 0xa996a (0x14f83378b96a in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libstdc++.so.6)
frame #6: <unknown function> + 0xb51ca (0x14f8337971ca in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libstdc++.so.6)
frame #7: <unknown function> + 0xb5235 (0x14f833797235 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libstdc++.so.6)
frame #8: __gxx_personality_v0 + 0x2bc (0x14f833796b8c in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libstdc++.so.6)
frame #9: <unknown function> + 0x115c4 (0x14f8487595c4 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libgcc_s.so.1)
frame #10: _Unwind_ForcedUnwind + 0x132 (0x14f848759cb2 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libgcc_s.so.1)
frame #11: __pthread_unwind + 0x46 (0x14f861dc8616 in /lib64/libpthread.so.0)
frame #12: <unknown function> + 0x945b (0x14f861dc045b in /lib64/libpthread.so.0)
<omitting python frames>
frame #16: <unknown function> + 0x33e265 (0x14f83130b265 in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #17: <unknown function> + 0x602666 (0x14f8315cf666 in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #18: <unknown function> + 0x339f1a (0x14f831306f1a in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #19: <unknown function> + 0x602362 (0x14f8315cf362 in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #20: <unknown function> + 0x34a28e6 (0x14f82ace38e6 in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #21: <unknown function> + 0xf602fa (0x14f8287a12fa in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #22: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x16d (0x14f82acde5ad in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #23: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x80 (0x14f82acd6360 in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #24: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x5e (0x14f8315d0cae in /tmp/eb-6mbkb9je/tmpaivgfiz_/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #25: <unknown function> + 0xdfa44 (0x14f8337c1a44 in /user/gent/400/vsc40023/eb_scratch/RHEL8/zen3-ampere-ib/software/GCCcore/11.3.0/lib64/libstdc++.so.6)
frame #26: <unknown function> + 0x81cf (0x14f861dbf1cf in /lib64/libpthread.so.0)
frame #27: clone + 0x43 (0x14f861827dd3 in /lib64/libc.so.6)
```
There are some missing symbols but the main point is "terminate called without an active exception".
I've seen similar `abort` calls in e.g. TensorFlow when `pybind11::gil_scoped_release` is used in destructors, because it is not allowed to call the GIL release function while Python is finalizing. With NCCL in the traceback, I found [`destroy_nccl_comm`](https://github.com/pytorch/pytorch/blob/master/torch/csrc/cuda/python_nccl.cpp#L46) as a potential candidate for such an issue: `nccl_init_rank` creates a capsule with that function set as the "destructor", and it is possible that the capsule is destroyed while Python is being finalized.
So I'd suggest reviewing such uses and protecting them, see https://github.com/pybind/pybind11/commit/79cb013f1f67d5565700e528cb5d8ef079fe5e8a and the PR https://github.com/pybind/pybind11/pull/2657 which explicitly discusses PyTorch.
### Versions
1.12.1
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
3,992 | 90,547 |
Unable to link LibTorch against CUDA and CUDNN statically
|
triaged, module: static linking
|
### ๐ Describe the bug
Hey,
So far I have managed to link against Intel MKL statically; now I am fighting with CUDA and cuDNN. While with cuDNN the issue is obvious (libcudnn_static.a got split into several other archives, https://github.com/pytorch/pytorch/issues/81692), I still struggle with CUDA itself.
Reading the CMake logic in the repository, one can see that PyTorch should prefer static libs over dynamic ones: https://github.com/pytorch/pytorch/blob/c371542efc31b1abfe6f388042aa3ab0cef935f2/cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake#L882 .
However, in my case the dynamic ones are always picked.
Using PyTorch 1.13 sources, I call:
```
export BUILD_DOCS=OFF
export BUILD_TEST=OFF
export BLAS=MKL
export USE_CUDA=ON
export USE_DISTRIBUTED=OFF
export USE_FBGEMM=OFF
export USE_MKLDNN=OFF
export USE_NINJA=ON
export USE_PYTORCH_QNNPACK=OFF
export USE_QNNPACK=OFF
export USE_SYSTEM_NCCL=ON
export USE_XNNPACK=OFF
export TORCH_CUDA_ARCH_LIST='8.6'
# Static linkage
export USE_STATIC_MKL=1
export CAFFE2_STATIC_LINK_CUDA=ON
export CUDA_USE_STATIC_CUDA_RUNTIME=ON
#export USE_STATIC_CUDNN=1 # TODO: doesn't work, drops CUDNN completely
# https://github.com/pytorch/pytorch/issues/81692
python3 tools/build_libtorch.py
```
### Versions
1.13
| 1 |
3,993 | 90,544 |
[Dispatchable Collectives] Follow up tasks
|
oncall: distributed, triaged
|
Creating this issue to aggregate a list of follow up tasks for https://github.com/pytorch/pytorch/pull/88330 based on BE changes identified while developing the feature as well as follow up work items for the feature itself based on review feedback (thanks @kwen2501!). The list of follow up tasks can be farmed out as separate GH issues.
BE changes:
- [ ] Bug with the `@require_backends_available` test decorator. It requires that ALL backends (GLOO, NCCL, UCC) in the list be available, otherwise the test is skipped. This causes issues since some jobs do not build UCC by default (UCC is also off by default for local development), so the test is skipped for those jobs.
- [ ] [linux-bionic-cuda11.6-py3.10-gcc7-bazel-test / build-and-test](https://github.com/pytorch/pytorch/actions/runs/3635142627/jobs/6133965664#logs) does not build with GLOO enabled. It should be enabled by default.
- [x] Remove `Backend.TCP`, there is no longer a TCP backend.
- [ ] Investigate removing `create_device` from ProcessGroupGloo as there is no documented use for it and we don't want users to create python ProcessGroupGloo instances. https://github.com/pytorch/pytorch/pull/88330#discussion_r1044744269
- [x] Figure out why NCCL_DESYNC_DEBUG environment variable is getting set in our tests
- [ ] Fix destruction fiasco in `ProcessGroupNCCL` https://github.com/pytorch/pytorch/issues/90848
Feature related follow up tasks:
- [x] Update ProcessGroup.hpp to call Ops.cpp directly, remove the pybinded function definitions for ProcessGroup https://github.com/pytorch/pytorch/issues/90932
- [ ] Lazy initialization of backend
- [x] Remove Backend from the pybind definition (only expose ProcessGroup)
- [x] Handle case of PythonProcessGroup, alert extension writers to migration for C++ PG extensions in order to use the dispatcher
- [ ] Update C++ classes from ProcessGroupGloo, ProcesGroupNCCL, etc into the respective BackendGloo, BackendNCCL...
- [x] https://github.com/pytorch/pytorch/issues/90659
- [ ] Update tensor to be a static member of ProcessGroup for `barrier` https://github.com/pytorch/pytorch/pull/88330#discussion_r1043939495
- [ ] Update https://pytorch.org/tutorials/intermediate/process_group_cpp_extension_tutorial.html
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu
| 0 |
3,994 | 90,540 |
torch.compile() BackendCompilerFailed: _compile_fn raised RuntimeError
|
triaged, bug, oncall: pt2, module: inductor
|
### ๐ Describe the bug
Could not run the code with torch.compile().
BackendCompilerFailed: _compile_fn raised RuntimeError: It appears that you're trying to get value out of a tracing tensor with aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent control flow or similar.
I am not sure what this error means.
Note: my code uses a custom LSTM and consists of nested models with multiple input parameters (>2).
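From the error message, my current understanding (which may be wrong) is that some op tries to read a concrete Python value out of a tensor while the graph is being traced, e.g. via `.item()`, `int(tensor)` or `bool(tensor)`. A minimal sketch of the kind of pattern I suspect, not taken from my actual model:
```python
import torch

@torch.compile()
def f(x):
    limit = x.abs().max().item()   # pulls a Python scalar out of a traced tensor
    return torch.clamp(x, -limit, limit)

f(torch.randn(8))  # presumably hits the same aten._local_scalar_dense complaint
```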
**Full trace**
> RuntimeError Traceback (most recent call last)
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/output_graph.py in call_user_compiler(self, gm)
> 585 else:
> --> 586 compiled_fn = compiler_fn(gm, self.fake_example_inputs())
> 587 _step_logger()(logging.INFO, f"done compiler function {name}")
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/debug_utils.py in debug_wrapper(gm,
> example_inputs, **kwargs)
> 914 else:
> --> 915 compiled_gm = compiler_fn(gm, example_inputs, **kwargs)
> 916
>
> ~/anaconda3/lib/python3.7/site-packages/torch/__init__.py in _compile_fn(model_, inputs_)
> 1202 with cm:
> -> 1203 return compile_fn(model_, inputs_)
> 1204
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_inductor/compile_fx.py in compile_fx(model_,
> example_inputs_, inner_compile)
> 407 force_compile_tiny_graphs=True,
> --> 408 )(model_, example_inputs_)
> 409
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/optimizations/training.py in
> compiler_fn(gm, example_inputs)
> 79 # NB: NOT cloned!
> ---> 80 cg = aot_module_simplified(gm, example_inputs, **kwargs)
> 81 counters["aot_autograd"]["ok"] += 1
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_functorch/aot_autograd.py in
> aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions,
> hasher_type, static_argnums)
> 2095 full_args,
> -> 2096 aot_config,
> 2097 )
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/utils.py in time_wrapper(*args, **kwargs)
> 89 t0 = time.time()
> ---> 90 r = func(*args, **kwargs)
> 91 latency = time.time() - t0
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_functorch/aot_autograd.py in
> create_aot_dispatcher_function(flat_fn, flat_args, aot_config)
> 1791
> -> 1792 compiled_fn = compiler_fn(flat_fn, fake_flat_tensor_args, aot_config)
> 1793
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_functorch/aot_autograd.py in
> aot_wrapper_dedupe(flat_fn, flat_args, aot_config, compiler_fn)
> 1196 if ok:
> -> 1197 return compiler_fn(flat_fn, leaf_flat_args, aot_config)
> 1198
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_functorch/aot_autograd.py in
> aot_dispatch_autograd(flat_fn, flat_args, aot_config)
> 1378 joint_forward_backward, aot_config.decompositions
> -> 1379 )(*joint_inputs)
> 1380
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py in wrapped(*args)
> 682 sym_mode, proxy_mode, disable_autocast_cache(): # type: ignore[attr-defined]
> --> 683 t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer,
> concrete_args=tuple(phs))
> 684
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py in _fn(*args, **kwargs)
> 210 try:
> --> 211 return fn(*args, **kwargs)
> 212 finally:
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py in
> dispatch_trace(root, tracer, concrete_args)
> 440 ) -> GraphModule:
> --> 441 graph = tracer.trace(root, concrete_args)
> 442 name = root.__class__.__name__ if isinstance(root, torch.nn.Module) else root.__name__
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py in _fn(*args, **kwargs)
> 210 try:
> --> 211 return fn(*args, **kwargs)
> 212 finally:
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/_symbolic_trace.py in trace(self, root,
> concrete_args)
> 738 "output",
> --> 739 (self.create_arg(fn(*args)),),
> 740 {},
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/_symbolic_trace.py in flatten_fn(*args)
> 613 tree_args = pytree.tree_unflatten(list(args), in_spec)
> --> 614 tree_out = root_fn(*tree_args)
> 615 out_args, out_spec = pytree.tree_flatten(tree_out)
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py in
> wrapped(*proxies)
> 456
> --> 457 out = f(*tensors)
> 458 out = pytree.tree_map_only(
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_functorch/aot_autograd.py in
> functionalized_joint(primals, tangents)
> 745 # Run the joint
> --> 746 outs = joint_forward_backward(f_primals, f_tangents)
> 747 finally:
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_functorch/aot_autograd.py in
> joint_forward_backward(primals, tangents)
> 716 grad_outputs=needed_tangents,
> --> 717 allow_unused=True,
> 718 )
>
> ~/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in grad(outputs, inputs,
> grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, is_grads_batched)
> 275 allow_unused=allow_unused,
> --> 276 is_grads_batched=is_grads_batched,
> 277 )
>
> ~/anaconda3/lib/python3.7/site-packages/torch/overrides.py in handle_torch_function(public_api,
> relevant_args, *args, **kwargs)
> 1519 with _pop_mode_temporarily() as mode:
> -> 1520 result = mode.__torch_function__(public_api, types, args, kwargs)
> 1521 if result is not NotImplemented:
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_inductor/overrides.py in __torch_function__(self,
> func, types, args, kwargs)
> 36 return replacements[func](*args, **kwargs)
> ---> 37 return func(*args, **kwargs)
> 38
>
> ~/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in grad(outputs, inputs,
> grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, is_grads_batched)
> 301 t_outputs, grad_outputs_, retain_graph, create_graph, t_inputs,
> --> 302 allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the
> backward pass
> 303
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py in
> __torch_dispatch__(self, func, types, args, kwargs)
> 482 with self.sym_mode.enable(False):
> --> 483 return self.inner_torch_dispatch(func, types, args, kwargs)
> 484
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py in
> inner_torch_dispatch(self, func, types, args, kwargs)
> 507
> --> 508 out = proxy_call(self, func, args, kwargs)
> 509 return out
>
> ~/anaconda3/lib/python3.7/site-packages/torch/fx/experimental/proxy_tensor.py in
> proxy_call(proxy_mode, func, args, kwargs)
> 284 raise RuntimeError(
> --> 285 f"It appears that you're trying to get value out of a tracing tensor with {func} -
> erroring out! "
> 286 "It's likely that this is caused by data-dependent control flow or similar."
>
>
>
> RuntimeError:` It appears that you're trying to get value out of a tracing tensor with
>
> aten._local_scalar_dense.default - erroring out! It's likely that this is caused by data-dependent
> control flow or similar.
>
> The above exception was the direct cause of the following exception:
>
> BackendCompilerFailed Traceback (most recent call last)
> <ipython-input-16-d2e9339dd69c> in <module>
> 76
> 77 t1 = time.time()
> ---> 78 output_N = model_N(inputs)
> 79 t2 = time.time()
> 80 print ("time noncustom", t2-t1)
>
> ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *args,
> **kwargs)
> 1478 or _global_backward_pre_hooks or _global_backward_hooks
> 1479 or _global_forward_hooks or _global_forward_pre_hooks):
> -> 1480 return forward_call(*args, **kwargs)
> 1481 # Do not call functions when jit is used
> 1482 full_backward_hooks, non_full_backward_hooks = [], []
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py in forward(self, *args, **kwargs)
> 80
> 81 def forward(self, *args, **kwargs):
> ---> 82 return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
> 83
> 84
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py in _fn(*args, **kwargs)
> 209 dynamic_ctx.__enter__()
> 210 try:
> --> 211 return fn(*args, **kwargs)
> 212 finally:
> 213 set_eval_frame(prior)
>
> <ipython-input-10-c07d89dd9fcd> in forward(self, x)
> 437 out = self.relu(out)
> 438 out = self.fcLayer(out)
> --> 439 out = softmax(out)
> 440 return out
> 441
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/eval_frame.py in catch_errors(frame, cache_size)
> 330
> 331 with compile_lock:
> --> 332 return callback(frame, cache_size, hooks)
> 333
> 334 catch_errors._torchdynamo_orig_callable = callback # type: ignore[attr-defined]
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py in _convert_frame(frame, cache_size, hooks)
> 474 counters["frames"]["total"] += 1
> 475 try:
> --> 476 result = inner_convert(frame, cache_size, hooks)
> 477 counters["frames"]["ok"] += 1
> 478 return result
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py in _fn(*args, **kwargs)
> 101 torch.fx.graph_module._forward_from_src = fx_forward_from_src_skip_result
> 102 try:
> --> 103 return fn(*args, **kwargs)
> 104 finally:
> 105 torch._C._set_grad_enabled(prior_grad_mode)
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/utils.py in time_wrapper(*args, **kwargs)
> 88 compilation_metrics[key] = []
> 89 t0 = time.time()
> ---> 90 r = func(*args, **kwargs)
> 91 latency = time.time() - t0
> 92 # print(f"Dynamo timer: key={key}, latency={latency:.2f} sec")
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py in _convert_frame_assert(frame, cache_size, hooks)
> 346 export,
> 347 hooks,
> --> 348 frame,
> 349 )
> 350
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, hooks, frame)
> 393 for attempt in itertools.count():
> 394 try:
> --> 395 out_code = transform_code_object(code, transform)
> 396 orig_code_map[out_code] = code
> 397 break
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/bytecode_transformation.py in transform_code_object(code, transformations, safe)
> 339 propagate_line_nums(instructions)
> 340
> --> 341 transformations(instructions, code_options)
> 342
> 343 fix_vars(instructions, code_options)
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/convert_frame.py in transform(instructions, code_options)
> 380 export,
> 381 )
> --> 382 tracer.run()
> 383 output = tracer.output
> 384 assert output is not None
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py in run(self)
> 1619 def run(self):
> 1620 _step_logger()(logging.INFO, f"torchdynamo start tracing {self.f_code.co_name}")
> -> 1621 super().run()
> 1622
> 1623 def match_nested_cell(self, name, cell):
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py in run(self)
> 483 self.instruction_pointer is not None
> 484 and not self.output.should_exit
> --> 485 and self.step()
> 486 ):
> 487 pass
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py in step(self)
> 452 if not hasattr(self, inst.opname):
> 453 unimplemented(f"missing: {inst.opname}")
> --> 454 getattr(self, inst.opname)(inst)
> 455
> 456 return inst.opname != "RETURN_VALUE"
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/symbolic_convert.py in inner(self, inst)
> 233 self,
> 234 reason=GraphCompileReason(
> --> 235 f"generic_jump {typestr(value)}", [self.frame_summary()]
> 236 ),
> 237 )
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/output_graph.py in compile_subgraph(self, tx, partial_convert, reason)
> 437 # optimization to generate better code in a common case
> 438 self.add_output_instructions(
> --> 439 self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
> 440 + [create_instruction("UNPACK_SEQUENCE", len(stack_values))]
> 441 )
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/output_graph.py in compile_and_call_fx_graph(self, tx, rv, root)
> 508
> 509 assert_no_fake_params_or_buffers(gm)
> --> 510 compiled_fn = self.call_user_compiler(gm)
> 511 compiled_fn = disable(compiled_fn)
> 512
>
> ~/anaconda3/lib/python3.7/site-packages/torch/_dynamo/output_graph.py in call_user_compiler(self, gm)
> 589 except Exception as e:
> 590 compiled_fn = gm.forward
> --> 591 raise BackendCompilerFailed(self.compiler_fn, e) from e
> 592 return compiled_fn
> 593
> `
Thank you.
### Versions
Collecting environment information...
PyTorch version: 1.14.0.dev20221208
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.1 (x86_64)
GCC version: Could not collect
Clang version: 3.4 (tags/RELEASE_34/final)
CMake version: version 3.18.0
Libc version: N/A
Python version: 3.7.3 (default, Mar 27 2019, 16:54:48) [Clang 4.0.1 (tags/RELEASE_401/final)] (64-bit runtime)
Python platform: Darwin-21.6.0-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] numpydoc==0.8.0
[pip3] torch==1.14.0.dev20221208
[pip3] torch-ac==1.1.0
[pip3] torchaudio==0.14.0.dev20221208
[pip3] torchmetrics==0.11.0
[pip3] torchrec-nightly-cpu==2022.5.12
[pip3] torchvision==0.15.0.dev20221208
[pip3] torchx-nightly==2022.11.30
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl-service 1.1.2 py37hfbe908c_5
[conda] mkl_fft 1.0.10 py37h5e564d8_0
[conda] mkl_random 1.0.2 py37h27c97d8_0
[conda] numpy 1.21.6 pypi_0 pypi
[conda] numpydoc 0.8.0 py37_0
[conda] torch 1.14.0.dev20221208 pypi_0 pypi
[conda] torch-ac 1.1.0 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20221208 pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchrec-nightly-cpu 2022.5.12 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221208 pypi_0 pypi
[conda] torchx-nightly 2022.11.30 pypi_0 pypi
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 2 |
3,995 | 90,537 |
Bugs about BART of Hugging Face using Pytorch 2.0
|
high priority, module: crash, triaged, oncall: pt2, module: inductor
|
### ๐ Describe the bug
This is the code of using BART of Hugging Face with Pytorch 2.0:
```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
device = torch.device('cuda')
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model = model.to(device)
model = torch.compile(model)
inputs = tokenizer(
"Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
return_tensors="pt",
)
labels = tokenizer("Bad Reasons To Quit Your Job", return_tensors="pt")["input_ids"]
input_ids = inputs["input_ids"].to(device)
attention_mask = inputs["attention_mask"].to(device)
labels = labels.to(device)
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
```
When using Pytorch 2.0 without `torch.compile` with `device=torch.device('cpu')` or `device=torch.device('cuda')`, it works well without any warning.
When using Pytorch 2.0 with `torch.compile` and `device=torch.device('cpu')`, it works well but with the following warning:
```
/home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/utils/stateless.py:44: UserWarning: functional_call was passed multiple values for tied weights. This behavior is deprecated and will be an error in future versions
warnings.warn("functional_call was passed multiple values for tied weights. "
/home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/utils/stateless.py:44: UserWarning: functional_call was passed multiple values for tied weights. This behavior is deprecated and will be an error in future versions
warnings.warn("functional_call was passed multiple values for tied weights. "
/home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/utils/stateless.py:44: UserWarning: functional_call was passed multiple values for tied weights. This behavior is deprecated and will be an error in future versions
warnings.warn("functional_call was passed multiple values for tied weights. "
```
When using Pytorch 2.0 with `torch.compile` and `device=torch.device('cuda')`, it doesn't work well with the following errors:
```
/home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/utils/stateless.py:44: UserWarning: functional_call was passed multiple values for tied weights. This behavior is deprecated and will be an error in future versions
warnings.warn("functional_call was passed multiple values for tied weights. "
/home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/utils/stateless.py:44: UserWarning: functional_call was passed multiple values for tied weights. This behavior is deprecated and will be an error in future versions
warnings.warn("functional_call was passed multiple values for tied weights. "
/home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/utils/stateless.py:44: UserWarning: functional_call was passed multiple values for tied weights. This behavior is deprecated and will be an error in future versions
warnings.warn("functional_call was passed multiple values for tied weights. "
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ <stdin>:1 in <module> โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1480 in โ
โ _call_impl โ
โ โ
โ 1477 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1478 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1479 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1480 โ โ โ return forward_call(*args, **kwargs) โ
โ 1481 โ โ # Do not call functions when jit is used โ
โ 1482 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1483 โ โ backward_pre_hooks = [] โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:82 in โ
โ forward โ
โ โ
โ 79 โ โ return getattr(self._orig_mod, name) โ
โ 80 โ โ
โ 81 โ def forward(self, *args, **kwargs): โ
โ โฑ 82 โ โ return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs) โ
โ 83 โ
โ 84 โ
โ 85 def remove_from_cache(f): โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:211 in _fn โ
โ โ
โ 208 โ โ โ dynamic_ctx = enable_dynamic(self.dynamic) โ
โ 209 โ โ โ dynamic_ctx.__enter__() โ
โ 210 โ โ โ try: โ
โ โฑ 211 โ โ โ โ return fn(*args, **kwargs) โ
โ 212 โ โ โ finally: โ
โ 213 โ โ โ โ set_eval_frame(prior) โ
โ 214 โ โ โ โ dynamic_ctx.__exit__(None, None, None) โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.p โ
โ y:1328 in forward โ
โ โ
โ 1325 โ def set_output_embeddings(self, new_embeddings): โ
โ 1326 โ โ self.lm_head = new_embeddings โ
โ 1327 โ โ
โ โฑ 1328 โ @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING) โ
โ 1329 โ @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC โ
โ 1330 โ @add_end_docstrings(BART_GENERATION_EXAMPLE) โ
โ 1331 โ def forward( โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:211 in _fn โ
โ โ
โ 208 โ โ โ dynamic_ctx = enable_dynamic(self.dynamic) โ
โ 209 โ โ โ dynamic_ctx.__enter__() โ
โ 210 โ โ โ try: โ
โ โฑ 211 โ โ โ โ return fn(*args, **kwargs) โ
โ 212 โ โ โ finally: โ
โ 213 โ โ โ โ set_eval_frame(prior) โ
โ 214 โ โ โ โ dynamic_ctx.__exit__(None, None, None) โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:2107 in โ
โ forward โ
โ โ
โ 2104 โ โ full_args = [] โ
โ 2105 โ โ full_args.extend(params_flat) โ
โ 2106 โ โ full_args.extend(runtime_args) โ
โ โฑ 2107 โ โ return compiled_fn(full_args) โ
โ 2108 โ โ
โ 2109 โ # Just for convenience โ
โ 2110 โ forward.zero_grad = mod.zero_grad โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:811 in โ
โ g โ
โ โ
โ 808 โ
โ 809 def make_boxed_func(f): โ
โ 810 โ def g(args): โ
โ โฑ 811 โ โ return f(*args) โ
โ 812 โ โ
โ 813 โ g._boxed_call = True โ
โ 814 โ return g โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1687 in โ
โ debug_compiled_function โ
โ โ
โ 1684 โ โ โ โ โ f"{describe_input(i, aot_config)} would not require grad" โ
โ 1685 โ โ โ โ ) โ
โ 1686 โ โ โ
โ โฑ 1687 โ โ return compiled_function(*args) โ
โ 1688 โ โ
โ 1689 โ return debug_compiled_function โ
โ 1690 โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1551 in โ
โ compiled_function โ
โ โ
โ 1548 โ โ else: โ
โ 1549 โ โ โ args_with_synthetic_bases = args โ
โ 1550 โ โ โ
โ โฑ 1551 โ โ all_outs = CompiledFunction.apply(*args_with_synthetic_bases) โ
โ 1552 โ โ if CompiledFunction.num_aliasing_metadata_outs > 0: โ
โ 1553 โ โ โ outs = all_outs[:-CompiledFunction.num_aliasing_metadata_outs] โ
โ 1554 โ โ โ aliasing_metadata_outs = all_outs[-CompiledFunction.num_aliasing_metadata_ou โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:1455 in โ
โ forward โ
โ โ
โ 1452 โ โ โ # (*mutated_data_inputs, *non_aliased_fw_outs, *saved_tensors, *saved_symint โ
โ 1453 โ โ โ # - Note that in the synthetic bases case, mutated_inputs will correspond to โ
โ 1454 โ โ โ # of the original view, and not the synthetic base โ
โ โฑ 1455 โ โ โ fw_outs = call_func_with_args( โ
โ 1456 โ โ โ โ CompiledFunction.compiled_fw, deduped_flat_tensor_args, disable_amp=disa โ
โ 1457 โ โ โ ) โ
โ 1458 โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py:836 in โ
โ call_func_with_args โ
โ โ
โ 833 โ โ guard = torch._C._DisableAutocast() โ
โ 834 โ try: โ
โ 835 โ โ if hasattr(f, "_boxed_call"): โ
โ โฑ 836 โ โ โ out = normalize_as_list(f(args)) โ
โ 837 โ โ else: โ
โ 838 โ โ โ # TODO: Please remove soon โ
โ 839 โ โ โ # https://github.com/pytorch/pytorch/pull/83137#issuecomment-1211320670 โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_inductor/compile_fx.py:199 in run โ
โ โ
โ 196 โ โ for i in check_inputs: โ
โ 197 โ โ โ if new_inputs[i].data_ptr() % ALIGNMENT: โ
โ 198 โ โ โ โ new_inputs[i] = clone_preserve_strides(new_inputs[i]) โ
โ โฑ 199 โ โ return model(new_inputs) โ
โ 200 โ โ
โ 201 โ return run โ
โ 202 โ
โ โ
โ /tmp/torchinductor_tangtianyi/mt/cmtrlxnsp7o7om6qxzj7wbwah2qxc6ker2iphq6omeo3332peb3m.py:1184 in โ
โ call โ
โ โ
โ 1181 โ buf4 = empty_strided((1, 31, 768), (23808, 768, 1), device='cuda', dtype=torch.float โ
โ 1182 โ buf471 = empty_strided((1, 31, 1), (31, 1, 31), device='cuda', dtype=torch.float32) โ
โ 1183 โ stream3 = get_cuda_stream(3) โ
โ โฑ 1184 โ triton_fused_add_add_1_add_2_add_3_arange_div_50_embedding_embedding_1_expand_mul_0. โ
โ 1185 โ del primals_1 โ
โ 1186 โ del primals_3 โ
โ 1187 โ buf5 = empty_strided((31, 768), (768, 1), device='cuda', dtype=torch.float32) โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py:1 โ
โ 69 in run โ
โ โ
โ 166 โ โ โ if len(self.launchers) == 0: โ
โ 167 โ โ โ โ self.precompile() โ
โ 168 โ โ โ if len(self.launchers) > 1: โ
โ โฑ 169 โ โ โ โ self.autotune_to_one_config(*args, grid=grid) โ
โ 170 โ โ โ
โ 171 โ โ (launcher,) = self.launchers โ
โ 172 โ โ if launcher.config.pre_hook is not None: โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_dynamo/utils.py:90 in โ
โ time_wrapper โ
โ โ
โ 87 โ โ if key not in compilation_metrics: โ
โ 88 โ โ โ compilation_metrics[key] = [] โ
โ 89 โ โ t0 = time.time() โ
โ โฑ 90 โ โ r = func(*args, **kwargs) โ
โ 91 โ โ latency = time.time() - t0 โ
โ 92 โ โ # print(f"Dynamo timer: key={key}, latency={latency:.2f} sec") โ
โ 93 โ โ compilation_metrics[key].append(latency) โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py:1 โ
โ 56 in autotune_to_one_config โ
โ โ
โ 153 โ โ โ else: โ
โ 154 โ โ โ โ cloned_args.append(arg) โ
โ 155 โ โ โ
โ โฑ 156 โ โ timings = { โ
โ 157 โ โ โ launcher: self.bench(launcher, *cloned_args, **kwargs) โ
โ 158 โ โ โ for launcher in self.launchers โ
โ 159 โ โ } โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py:1 โ
โ 57 in <dictcomp> โ
โ โ
โ 154 โ โ โ โ cloned_args.append(arg) โ
โ 155 โ โ โ
โ 156 โ โ timings = { โ
โ โฑ 157 โ โ โ launcher: self.bench(launcher, *cloned_args, **kwargs) โ
โ 158 โ โ โ for launcher in self.launchers โ
โ 159 โ โ } โ
โ 160 โ โ self.launchers = [builtins.min(timings, key=timings.get)] โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py:1 โ
โ 38 in bench โ
โ โ
โ 135 โ โ โ
โ 136 โ โ from triton.testing import do_bench โ
โ 137 โ โ โ
โ โฑ 138 โ โ return do_bench(kernel_call, rep=40, fast_flush=True) โ
โ 139 โ โ
โ 140 โ @dynamo_utils.dynamo_timed โ
โ 141 โ def autotune_to_one_config(self, *args, **kwargs): โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/triton/testing.py:140 in do_bench โ
โ โ
โ 137 โ """ โ
โ 138 โ โ
โ 139 โ # Estimate the runtime of the function โ
โ โฑ 140 โ fn() โ
โ 141 โ torch.cuda.synchronize() โ
โ 142 โ start_event = torch.cuda.Event(enable_timing=True) โ
โ 143 โ end_event = torch.cuda.Event(enable_timing=True) โ
โ โ
โ /home/tangtianyi/miniconda3/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py:1 โ
โ 30 in kernel_call โ
โ โ
โ 127 โ โ โ โ launcher.config.pre_hook( โ
โ 128 โ โ โ โ โ {**zip(self.arg_names, args), **launcher.config.kwargs} โ
โ 129 โ โ โ โ ) โ
โ โฑ 130 โ โ โ launcher( โ
โ 131 โ โ โ โ *args, โ
โ 132 โ โ โ โ grid=grid, โ
โ 133 โ โ โ โ stream=stream, โ
โ <string>:4 in launcher โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
```
### Versions
PyTorch version: 1.14.0.dev20221208+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.54
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.0rc2
[pip3] pytorch-transformers==1.0.0
[pip3] torch==1.14.0.dev20221208+cu116
[pip3] torchaudio==0.14.0.dev20221208+cu116
[pip3] torchtriton==2.0.0+0d7e753227
[pip3] torchvision==0.15.0.dev20221208+cu116
[pip3] torchviz==0.0.2
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] faiss-gpu 1.7.2 py3.8_h28a55e0_0_cuda11.3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] ffmpeg 4.3 hf484d3e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] libfaiss 1.7.2 hfc2d529_0_cuda11.3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py38h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py38hd3c417c_0 defaults
[conda] mkl_random 1.2.2 py38h51133e4_0 defaults
[conda] numpy 1.24.0rc2 pypi_0 pypi
[conda] numpy-base 1.21.5 py38hf524024_2 defaults
[conda] pytorch 1.13.0 py3.8_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_0 pytorch
[conda] pytorch-mutex 1.0 cuda https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] pytorch-transformers 1.0.0 pypi_0 pypi
[conda] torch 1.14.0.dev20221208+cu116 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20221208+cu116 pypi_0 pypi
[conda] torchtriton 2.0.0+0d7e753227 pypi_0 pypi
[conda] torchvision 0.15.0.dev20221208+cu116 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 2 |
3,996 | 90,536 |
Illegal hardware instruction following Real Time Inference on Raspberry Pi 4 tutorial
|
oncall: quantization, triaged, module: arm
|
### ๐ Describe the bug
Following the [tutorial](https://pytorch.org/tutorials/intermediate/realtime_rpi.html) the program stops with
`[1] 182488 illegal hardware instruction (core dumped)`
See minimal reproduction code below and output:
```
import torch
from torchvision import models
torch.backends.quantized.engine = "qnnpack"
print("This prints")
net = models.quantization.mobilenet_v2(pretrained=True, quantize=True)  # Illegal instruction occurs here
print("This doesn't print")
net = torch.jit.script(net)
```
The code runs fine with torch 1.10.2
Output:
```
$ python torchy.py
/home/max/.pyenv/versions/pytorch/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
warn(f"Failed to load image Python extension: {e}")
A
/home/max/.pyenv/versions/pytorch/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/max/.pyenv/versions/pytorch/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_QuantizedWeights.IMAGENET1K_QNNPACK_V1`. You can also use `weights=MobileNet_V2_QuantizedWeights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
[1] 182488 illegal hardware instruction (core dumped) python torchy.py
```
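A related sanity check that may help narrow this down (not yet run as part of this report; it only needs the same Python environment on the Pi):
```python
import torch

print(torch.backends.quantized.supported_engines)  # should include 'qnnpack'
print(torch.__config__.show())                     # build flags of this wheel
```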
### Versions
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (aarch64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Dec 7 2022, 13:19:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1012-raspi-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @malfet
| 1 |
3,997 | 90,535 |
Illegal hardware instruction using torch.nn.Conv2d on aarch64 (Raspberry Pi 4)
|
module: crash, triaged
|
### ๐ Describe the bug
The following code returns an
`[1] 179220 illegal hardware instruction (core dumped)` error when executed on a Rapsberry Pi (Python 3.9, torch 1.13.0).
It runs successfully on torch 1.10.2
This is a minimal reproduction setup.
```
#! /usr/bin/python
import torch
m = torch.nn.Conv2d(16, 33, 3, stride=2)
input = torch.randn(20, 16, 50, 100)
output = m(input) # Illegal instruction happens here
```
$ python repro.py
[1] 181810 illegal hardware instruction (core dumped) python repro.py
### Versions
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (aarch64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Dec 7 2022, 13:19:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1012-raspi-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] Could not collect
| 2 |
3,998 | 90,526 |
valgrind failure `Conditional jump or move depends on uninitialised value(s)`
|
triaged, module: sanitizers
|
### ๐ Describe the bug
Hi friends! Our internal valgrind build found this report
```
==4329== Warning: noted but unhandled ioctl 0x30000001 with no size/direction hints.
==4329== This could cause spurious value errors to appear.
==4329== See README_MISSING_SYSCALL_OR_IOCTL for guidance on writing a proper wrapper.
==4329== Warning: noted but unhandled ioctl 0x27 with no size/direction hints.
==4329== This could cause spurious value errors to appear.
==4329== See README_MISSING_SYSCALL_OR_IOCTL for guidance on writing a proper wrapper.
==4329== Warning: noted but unhandled ioctl 0x25 with no size/direction hints.
==4329== This could cause spurious value errors to appear.
==4329== See README_MISSING_SYSCALL_OR_IOCTL for guidance on writing a proper wrapper.
==4329== Warning: noted but unhandled ioctl 0x17 with no size/direction hints.
==4329== This could cause spurious value errors to appear.
==4329== See README_MISSING_SYSCALL_OR_IOCTL for guidance on writing a proper wrapper.
==4329== Warning: set address range perms: large range [0x200000000, 0x400200000) (noaccess)
==4329== Warning: set address range perms: large range [0x122027000, 0x142026000) (noaccess)
==4329== Conditional jump or move depends on uninitialised value(s)
==4329== at 0x8AE240D: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt), &torch::ADInplaceOrView::(anonymous namespace)::slice_Tensor>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt) (external/pytorch/c10/util/Optional.h:281)
==4329== by 0x5167FA88: at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long&&, c10::optional<c10::SymInt>&&, c10::optional<c10::SymInt>&&, c10::SymInt&&) (external/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50)
==4329== by 0x516804B2: at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt)> const&, c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt) const (external/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90)
==4329== by 0x515C6657: at::_ops::slice_Tensor::redispatch(c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt) (external/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:453)
==4329== by 0x833CFBB: at::redispatch::slice_symint(c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt) (bazel-out/k8-fastbuild/bin/external/pytorch/aten/src/ATen/RedispatchFunctions.h:5887)
==4329== by 0x82DE8F7: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt), &torch::autograd::VariableType::(anonymous namespace)::slice_Tensor>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt) (bazel-out/k8-fastbuild/bin/external/pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:11754)
==4329== by 0x82E06B2: c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt), &torch::autograd::VariableType::(anonymous namespace)::slice_Tensor>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, long, c10::optional<c10::SymInt>, c10::optional<c10::SymInt>, c10::SymInt> >, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) (external/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:496)
==4329== by 0x75EFBB8: c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const (external/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41)
==4329== by 0x6C5708F: torch::jit::InterpreterStateImpl::runImpl(std::vector<c10::IValue, std::allocator<c10::IValue> >&) (external/gcc/x86_64-linux-gnu/include/c++/10.3.0/bits/std_function.h:622)
==4329== by 0x6C496E9: torch::jit::InterpreterStateImpl::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) (external/pytorch/torch/csrc/jit/runtime/interpreter.cpp:1009)
==4329== by 0x6C35273: torch::jit::GraphExecutorImplBase::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) (external/pytorch/torch/csrc/jit/runtime/graph_executor.cpp:583)
==4329== by 0x68158D9: torch::jit::Function::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) (external/pytorch/aten/src/ATen/core/function.h:62)
==4329== by 0x6804380: torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) const (external/pytorch/torch/csrc/jit/api/module.cpp:209)
==4329== by 0x40C22E3: torch::jit::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) (external/pytorch/torch/csrc/jit/api/module.h:114)
... <some our code>
```
I bisected the failure to commit cf3ce329b548e1e8da65542af53b16f27d5dc72a
(PR https://github.com/pytorch/pytorch/pull/78947).
Unfortunately, I cannot provide a good repro since it is all tied up in our monorepo, but I wanted to start a discussion in case this is a real `Conditional jump or move depends on uninitialised value(s)` failure rather than a false positive.
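To make that discussion concrete, here is a minimal sketch of the kind of standalone repro that could be run under valgrind — the module, shapes, and slice arguments are hypothetical stand-ins (not taken from our code); the only goal is to push `aten::slice` through the TorchScript interpreter the same way the stack above does:
```python
# Hypothetical repro sketch (not our workload): exercise slice_Tensor with
# optional start/end arguments through a scripted module's forward, since
# those arguments became c10::optional<SymInt> in the suspect commit.
import torch

class SliceModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Both slices go through aten::slice in the scripted graph.
        return x[:, 1:3] + x[:, :-1].sum(dim=1, keepdim=True)

scripted = torch.jit.script(SliceModule())
print(scripted(torch.randn(4, 8)).shape)
```
Running something like this under `valgrind --tool=memcheck python repro.py` on builds before and after the bisected commit should show whether the uninitialised-value report reproduces outside our monorepo.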
### Versions
PyTorch 1.13
| 0 |
3,999 | 90,521 |
[An ERROR in Docker] RuntimeError: CUDA error: no kernel image is available for execution on the device
|
needs reproduction, module: cuda, triaged, module: docker
|
### 🐛 Describe the bug
```
Traceback (most recent call last):
  File "main.py", line 146, in <module>
    trainer.train(train_loader, valid_loader, max_epoch)
  File "/app/stable-dreamfusion/nerf/utils.py", line 456, in train
    self.train_one_epoch(train_loader)
  File "/app/stable-dreamfusion/nerf/utils.py", line 668, in train_one_epoch
    self.model.update_extra_state()
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/app/stable-dreamfusion/nerf/renderer.py", line 586, in update_extra_state
    sigmas = self.density(cas_xyzs)['sigma'].reshape(-1).detach()
  File "/app/stable-dreamfusion/nerf/network_grid.py", line 150, in density
    sigma, albedo = self.common_forward(x)
  File "/app/stable-dreamfusion/nerf/network_grid.py", line 80, in common_forward
    h = self.encoder(x, bound=self.bound)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/app/stable-dreamfusion/gridencoder/grid.py", line 149, in forward
    outputs = grid_encode(inputs, self.embeddings, self.offsets, self.per_level_scale, self.base_resolution, inputs.requires_grad, self.gridtype_id, self.align_corners)
  File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/autocast_mode.py", line 110, in decorate_fwd
    return fwd(*args, **kwargs)
  File "/app/stable-dreamfusion/gridencoder/grid.py", line 52, in forward
    outputs = outputs.permute(1, 0, 2).reshape(B, L * C)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
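The failing call sits in the locally built `gridencoder` extension rather than in PyTorch itself (though, as the message notes, CUDA errors are reported asynchronously, so the exact line is not certain). One check that might narrow this down — a minimal sketch, where the expected values in the comments are assumptions for an A100 (compute capability 8.0):
```python
# Hypothetical diagnostic: compare the GPU's compute capability with the
# architectures this torch build ships kernels for.
import torch

print(torch.__version__, torch.version.cuda)
print("device capability:", torch.cuda.get_device_capability(0))  # A100 should report (8, 0)
print("compiled arch list:", torch.cuda.get_arch_list())          # 'sm_80' should appear here
```
If torch itself lists `sm_80` but the error persists, the extension compiled inside the container is the likely culprit; rebuilding it with `TORCH_CUDA_ARCH_LIST="8.0"` exported in the Docker build (so the extension targets the A100) is one thing worth trying.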
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torch-ema==0.3
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] Could not collect
cc @ngimel
| 1 |
4,000 | 90,509 |
Can torchrun have shell completion?
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
Like these projects, which already ship shell completions:
- <https://github.com/andreafrancia/trash-cli/blob/master/setup.cfg#L31-L33>
- <https://github.com/inducer/pudb/blob/main/setup.py#L24>
- <https://github.com/piccolomo/plotext/blob/master/setup.py#L25>
- <https://github.com/wookayin/gpustat/blob/master/setup.py#L116>
```
โฏ torchrun -<TAB>
option
--help show this help message and exit
-h show this help message and exit
--log_dir Base directory to use for log files (e.g. /var/log/torch/elastic). The same directory is re-used for multiple runs (a unique job-level sub-directory is created with rdzv_id as the prefix).
--master_addr Address of the master node (rank 0). It should be either the IP address or the hostname of rank 0. For single node multi-proc training the --master_addr can simply be 127.0.0.1; IPv6 should have the pattern `[0:0:0:0:0:0:0:1]`.
--master_port Port on the master node (rank 0) to be used for communication during distributed training.
--max_restarts Maximum number of worker group restarts before failing.
-m Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
--module Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
--monitor_interval Interval, in seconds, to monitor the state of workers.
--nnodes Number of nodes, or the range of nodes in form <minimum_nodes>:<maximum_nodes>.
--node_rank Rank of the node for multi-node distributed training.
...
```
### Alternatives
```
shtab -s zsh torch.distributed.run.get_args_parser --prog torchrun | sudo tee /usr/share/zsh/site-functions/_torchrun
```
See <https://docs.iterative.ai/shtab>
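For completeness, the same alternative expressed in Python — this assumes shtab's documented `complete()` helper and the `get_args_parser()` entry point referenced in the command above; nothing here is an existing torchrun feature:
```python
# Hypothetical sketch: generate a zsh completion script for torchrun's
# argparse parser with shtab, then install the printed output manually.
import shtab  # third-party: pip install shtab
from torch.distributed.run import get_args_parser

parser = get_args_parser()
parser.prog = "torchrun"  # match the installed console script name
print(shtab.complete(parser, shell="zsh"))
```
Shipping it properly would mean either running something like this at build time or adding a `--print-completion` option via `shtab.add_argument_to(parser, ["-s", "--print-completion"])`; either way, the generated script could then be packaged alongside torchrun, as the projects linked above do with their completions.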
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |