Serial Number (int64, 1–6k) | Issue Number (int64, 75.6k–112k) | Title (string, 3–357 chars) | Labels (string, 3–241 chars) | Body (string, 9–74.5k chars) | Comments (int64, 0–867) |
---|---|---|---|---|---|
1,501 | 107,211 |
[ONNX] ONNX doesn't support exporting non-persistent buffer included models in FakeMode
|
module: onnx, triaged, onnx-triaged
|
To avoid out-of-memory issues when exporting models to ONNX, we detach the parameters and persistent buffers via state_dict().
```python
# Create the toy model with real weight.
real_model = create_model()
with tempfile.NamedTemporaryFile(
    prefix=model_name, suffix=".pt"
) as tmp_checkpoint_file:
    # Dump state_dict to a file to simulate how HuggingFace model is initialized.
    # The file will be loaded via .load_state_dict(...)
    state_dict = real_model.state_dict()
    torch.save(state_dict, tmp_checkpoint_file.name)

    with torch.onnx.enable_fake_mode() as fake_context:
        fake_args = create_args()
        fake_kwargs = create_kwargs()
        fake_model = create_model()
        if load_checkpoint_during_init:
            fake_model.load_state_dict(torch.load(tmp_checkpoint_file.name))

        # Export the model with fake inputs and parameters
        export_options = torch.onnx.ExportOptions(
            dynamic_shapes=self.dynamic_shapes,
            op_level_debug=self.op_level_debug,
            fake_context=fake_context,
        )
        export_output = torch.onnx.dynamo_export(
            fake_model,
            *fake_args,
            **fake_kwargs,
            export_options=export_options,
        )
```
However, for some models, for example GPT-2, there are non-persistent buffers that do not end up in state_dict(). The exported ONNX graph then complains about the missing buffers, because they are absent from the external data of the model initializers. This case can be reproduced when we use a Config to `create_model()`.
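For context, a minimal sketch (not from the issue; the module and buffer names are made up) of how a non-persistent buffer is declared and why it never reaches `state_dict()`:
```python
import torch

class ToyBlock(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=False keeps the buffer out of state_dict(), so a fake-mode
        # export that only materializes state_dict() entries never sees it.
        self.register_buffer("bias_mask", torch.tril(torch.ones(4, 4)), persistent=False)

    def forward(self, x):
        return x * self.bias_mask

print("bias_mask" in ToyBlock().state_dict())  # False
```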
cc @BowenBao @thiagocrepaldi @wschin
| 1 |
1,502 | 107,201 |
The difference between channels last backward and channels first backward of AvgPool2d on CUDA is too large
|
module: nn, module: cuda, triaged, module: memory format
|
### 🐛 Describe the bug
The difference between channels last backward and channels first backward of AvgPool2d on CUDA is too large.
```python
### reproducer
import torch
import copy
import torch.utils._pytree as pytree

x = torch.randn(2, 3, 6, 6).cuda().requires_grad_()
x2 = x.contiguous(memory_format=torch.channels_last).cuda().detach().requires_grad_()
model = torch.nn.AvgPool2d((2, 2), (2, 2), (1, 1)).cuda()
model2 = copy.deepcopy(model).cuda()

output = model(x)
grad_out = torch.randn(output.shape).cuda()
grad_out2 = grad_out.clone().contiguous(memory_format=torch.channels_last).cuda()
output2 = model2(x2)
if isinstance(output2, torch.Tensor):
    outputs2 = (output2,)
    outputs = (output,)

# === Do backward pass. ===
ref_diff_outputs = tuple(t for t in outputs)
ref_params = tuple(p for p in model.parameters())
ref_inputs = (x,)
ref_diff_inputs = tuple(
    t for t in pytree.tree_flatten((ref_inputs, ref_params))[0]
)
ref_grad_outputs = tuple(torch.rand_like(t) for t in ref_diff_outputs)
ref_grad_inputs = torch.autograd.grad(
    ref_diff_outputs,
    ref_diff_inputs,
    grad_outputs=ref_grad_outputs,
    allow_unused=True,
)

diff_outputs = tuple(t for t in outputs2)
params = tuple(p for p in model2.parameters())
inputs = (x2,)
diff_inputs = tuple(
    t for t in pytree.tree_flatten((inputs, params))[0]
)
grad_outputs = tuple(torch.rand_like(t) for t in diff_outputs)
grad_outputs = tuple(
    t1.copy_(t2) for (t1, t2) in zip(grad_outputs, ref_grad_outputs)
)
grad_inputs = torch.autograd.grad(
    diff_outputs,
    diff_inputs,
    grad_outputs=grad_outputs,
    allow_unused=True,
)

torch.testing.assert_close(grad_inputs, ref_grad_inputs, atol=None, rtol=None)
```
The difference is too large:
```
AssertionError: Tensor-likes are not close!
Mismatched elements: 162 / 216 (75.0%)
Greatest absolute difference: 0.2320583015680313 at index (1, 2, 1, 4) (up to 1e-05 allowed)
Greatest relative difference: 245.96385192871094 at index (1, 2, 1, 4) (up to 1.3e-06 allowed)
```
### Versions
PyTorch master.
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @jamesr66a
| 0 |
1,503 | 107,200 |
[inductor] [dynamic shape] 5 HF models fails with `Constraints violated` using transformers v4.31.0
|
triaged, oncall: pt2, module: dynamic shapes, module: dynamo
|
### 🐛 Describe the bug
`BertForMaskedLM`, `BertForQuestionAnswering`, `CamemBert`, `RobertaForCausalLM`, `RobertaForQuestionAnswering` in HF fail with `Constraints violated` using transformers `v4.31.0` but work well with transformers `4.30.2`.
The failure is caused by this change in transformers: https://github.com/huggingface/transformers/pull/24510, and more precisely by this function:
[warn_if_padding_and_no_attention_mask](https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/modeling_utils.py#L3480C9-L3480C46).
When running the benchmark, the batch dim is marked as dynamic [here](https://github.com/pytorch/pytorch/blob/ddf36c82b83b2db3be7ce7a85d4aea3507c9d7ef/benchmarks/dynamo/common.py#L3454). But because the HF change adds a call to [warn_if_padding_and_no_attention_mask](https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/modeling_utils.py#L3480C9-L3480C46), the batch dim in the generated code becomes a constant instead of a dynamic dim, which ends up with this failure:
```
[2023-08-14 13:53:15,380] torch._dynamo.output_graph: [INFO] Step 2: done compiler function inductor
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/home/user/pytorch/benchmarks/dynamo/common.py", line 2226, in warmup
fn(model, example_inputs)
File "/home/user/pytorch/torch/_dynamo/eval_frame.py", line 307, in _fn
return fn(*args, **kwargs)
File "benchmarks/dynamo/huggingface.py", line 544, in forward_pass
return mod(**inputs)
File "/home/user/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/transformers/src/transformers/models/bert/modeling_bert.py", line 1358, in forward
outputs = self.bert(
File "/home/user/pytorch/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/pytorch/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/transformers/src/transformers/models/bert/modeling_bert.py", line 970, in forward
self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
File "/home/user/pytorch/torch/_dynamo/eval_frame.py", line 467, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/user/pytorch/torch/_dynamo/convert_frame.py", line 559, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/user/pytorch/torch/_dynamo/convert_frame.py", line 130, in _fn
return fn(*args, **kwargs)
File "/home/user/pytorch/torch/_dynamo/convert_frame.py", line 378, in _convert_frame_assert
return _compile(
File "/home/user/pytorch/torch/_dynamo/utils.py", line 194, in time_wrapper
r = func(*args, **kwargs)
File "/home/user/pytorch/torch/_dynamo/convert_frame.py", line 506, in _compile
check_fn = CheckFunctionManager(
File "/home/user/pytorch/torch/_dynamo/guards.py", line 887, in __init__
guard.create(local_builder, global_builder)
File "/home/user/pytorch/torch/_guards.py", line 215, in create
return self.create_fn(self.source.select(local_builder, global_builder), self)
File "/home/user/pytorch/torch/_dynamo/guards.py", line 564, in SHAPE_ENV
guards = output_graph.shape_env.produce_guards(
File "/home/user/pytorch/torch/fx/experimental/symbolic_shapes.py", line 2809, in produce_guards
raise ConstraintViolationError(f"Constraints violated!\n{err}")
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated!
1. Could not validate constraint RelaxedUnspecConstraint(L['input_ids'].size()[0]) as L['input_ids'].size()[0] was inferred to be constant (16). For more information about why it is constant, run with TORCH_LOGS=dynamic
```
### Minified repro
You can change `BertForMaskedLM` in the command below to the other model names to reproduce the corresponding failures.
You'll need to install transformers `v4.31.0` to reproduce the failure.
```bash
python -u benchmarks/dynamo/huggingface.py --dynamic-shapes --dynamic-batch-only --performance --float32 -dcpu --inference -n5 --inductor --no-skip --dashboard --cold-start-latency --freezing --timeout 9000 --only=BertForMaskedLM
```
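For readers without the benchmark harness, a hedged, minimal Python-level sketch of the same failure mode (assumptions on my part: transformers v4.31.0 is installed, `bert-base-uncased` stands in for the benchmark model, and all-zero `input_ids` with no attention mask trip the padding check):
```python
import torch
from transformers import AutoModelForMaskedLM  # assumption: transformers v4.31.0

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
input_ids = torch.zeros(16, 128, dtype=torch.long)  # 0 is BERT's pad token id
torch._dynamo.mark_dynamic(input_ids, 0)            # mark the batch dim dynamic, as the benchmark does
compiled = torch.compile(model)
# warn_if_padding_and_no_attention_mask() inspects input_ids values, which
# specializes dim 0 to 16 and (reportedly) raises ConstraintViolationError.
compiled(input_ids=input_ids)
```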
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git236eda4
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.31
Python version: 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 1195.781
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.14.2
[pip3] ema-pytorch==0.2.3
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] intel-extension-for-pytorch==1.13.0+cpu
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.7
[pip3] torch==2.1.0a0+git0341408
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchmetrics==1.0.3
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.16.0a0+b5e2a1b
[pip3] torchvision==0.16.0a0+bb3aae7
[pip3] triton==2.0.0
[pip3] vector-quantize-pytorch==1.6.30
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.14.2 pypi_0 pypi
[conda] ema-pytorch 0.2.3 pypi_0 pypi
[conda] intel-extension-for-pytorch 1.13.0+cpu pypi_0 pypi
[conda] mkl 2022.1.0 hc2b9512_224 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-include 2022.2.1 pypi_0 pypi
[conda] mkl-static 2022.2.1 pypi_0 pypi
[conda] numpy 1.21.2 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.7 pypi_0 pypi
[conda] torch 2.1.0a0+git0341408 dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.16.0a0+b5e2a1b dev_0 <develop>
[conda] torchvision 0.16.0a0+bb3aae7 dev_0 <develop>
[conda] triton 2.0.0 pypi_0 pypi
[conda] vector-quantize-pytorch 1.6.30 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 10 |
1,504 | 107,197 |
DISABLED test_make_fx_symbolic_exhaustive_special_entr_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: ProxyTensor
|
Platforms: asan, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_fx_symbolic_exhaustive_special_entr_cpu_float32&suite=TestProxyTensorOpInfoCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15889444552).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_fx_symbolic_exhaustive_special_entr_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_proxy_tensor.py`
| 11 |
1,505 | 107,194 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,506 | 107,190 |
[xla hash update] update the pinned xla hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor, merging
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 5 |
1,507 | 107,188 |
Can't construct a tensor from List[SymFloat]
|
triaged, oncall: pt2, module: dynamic shapes, module: dynamo, mlperf
|
### 🐛 Describe the bug
Continuing from https://github.com/pytorch/pytorch/issues/106101 after applying the fix there:
```python
import torch
import torchaudio
from torchaudio.io import StreamReader
bundle = torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH
feature_extractor = bundle.get_streaming_feature_extractor()
decoder = bundle.get_decoder()
token_processor = bundle.get_token_processor()
# random values
# This works
decoder(torch.randn(80, 80), length=torch.tensor([1.0]), beam_width=10)
decoder = torch.compile(decoder, fullgraph=True)
# This does not work
decoder(torch.randn(80, 80), length=torch.tensor([1.0]), beam_width=10)
```
### Error logs
```
Unsupported: torch.tensor call with list of unspec
from user code:
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torchaudio/models/rnnt_decoder.py", line 293, in forward
return self._search(enc_out, None, beam_width)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torchaudio/models/rnnt_decoder.py", line 246, in _search
b_hypos = self._gen_b_hypos(b_hypos, a_hypos, next_token_probs, key_to_b_hypo)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torchaudio/models/rnnt_decoder.py", line 164, in _gen_b_hypos
_, sorted_idx = torch.tensor([_get_hypo_score(hypo) for hypo in b_hypos]).sort()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
File "/usr/local/fbcode/platform010/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/fbcode/platform010/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/usr/local/fbcode/platform010/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/executorch/examples/export/test/test_export.py", line 90, in test_emformer_export_to_executorch
self._assert_eager_lowered_same_result(
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/executorch/examples/export/test/test_export.py", line 37, in _assert_eager_lowered_same_result
edge_model = exir.capture(eager_model, example_inputs, _CAPTURE_CONFIG).to_edge(
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/executorch/exir/capture/_capture.py", line 84, in capture
ep = export(f, args, constraints=constraints)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_export/__init__.py", line 222, in export
gm_torch_level, _ = torch._dynamo.export(
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/eval_frame.py", line 1197, in inner
result_traced = opt_f(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/convert_frame.py", line 132, in _fn
return fn(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/convert_frame.py", line 370, in _convert_frame_assert
return _compile(
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/convert_frame.py", line 536, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/convert_frame.py", line 447, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/convert_frame.py", line 425, in transform
tracer.run()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 2071, in run
super().run()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 2176, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 2283, in inline_call_
tracer.run()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 2176, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 2283, in inline_call_
tracer.run()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/variables/torch.py", line 715, in call_function
unimplemented("torch.tensor call with list of unspec")
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/a95c0f0718520e1f/executorch/examples/export/test/__test_export__/test_export#link-tree/torch/_dynamo/exc.py", line 143, in unimplemented
raise Unsupported(msg)
```
### Minified repro
_No response_
### Versions
Can't run the script.
On top of FB Internal repo commit: 13420fcffc88ddc1d369ea28e6637116638fe16d
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ipiszy
| 6 |
1,508 | 107,183 |
DISABLED test_RNN_dropout_state (__main__.TestNN)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://hud.pytorch.org/failure/test_nn.py%3A%3ATestNN%3A%3Atest_RNN_dropout_state))
Specific example: https://github.com/pytorch/pytorch/actions/runs/5859121053/job/15885247751
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
1,509 | 107,177 |
Timeout during NCCL initialization due to store
|
oncall: distributed, better-engineering
|
## Context
PyTorch's barrier for the NCCL backend is implemented by calling [allReduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/colls.html#ncclallreduce) from NCCL. Because we initialize NCCL lazily, we need to set up the NCCL communicators before the allreduce. To initialize a communicator, a unique NCCL ID is broadcast to all ranks in the group. To do this, we use a key-value store where rank 0 sets the ID and all other ranks read it.
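A hedged illustration of that rendezvous pattern (an assumption-laden sketch, not the c10d source; the host, port, and key are made up):
```python
import os
from datetime import timedelta
import torch.distributed as dist

rank = int(os.environ.get("RANK", "0"))
# Rank 0 hosts the store; every other rank connects to it.
store = dist.TCPStore(
    "127.0.0.1", 29500, world_size=2, is_master=(rank == 0),
    timeout=timedelta(minutes=5),  # the store-level timeout discussed below
)
if rank == 0:
    store.set("nccl_unique_id/0", "opaque-id-bytes")  # "broadcast" the unique id
else:
    nccl_id = store.get("nccl_unique_id/0")  # blocks; fails with Socket Timeout if rank 0 straggles
```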
## Issue
There is a possibility that some rank is trying to read from the store but the key is never set (rank 0 has not set it yet). This can happen when rank 0 is a straggler. The default timeout for the store is 5 minutes, which is different from the default NCCL watchdog timeout of 30 minutes.
The error looks something like:
```
"RuntimeError: [6] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '0', but store->get('0') got error: Socket Timeout",
```
We should rethink our NCCL initialization design and how we want to handle timeouts during initialization.
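As a stopgap, a hedged mitigation sketch (an assumption on my part, not something proposed in the issue): construct the rendezvous store explicitly so its timeout matches the NCCL watchdog timeout instead of the 5-minute store default.
```python
import os
from datetime import timedelta
import torch.distributed as dist

rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
# An explicitly constructed store carries a 30-minute timeout, so store->get()
# during lazy NCCL init waits roughly as long as the watchdog does.
store = dist.TCPStore(
    os.environ["MASTER_ADDR"], int(os.environ["MASTER_PORT"]),
    world_size, rank == 0, timeout=timedelta(minutes=30),
)
dist.init_process_group(
    "nccl", store=store, rank=rank, world_size=world_size,
    timeout=timedelta(minutes=30),
)
```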
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu @penguinwu
| 0 |
1,510 | 107,175 |
sdp_kernel causes dynamo error on torch.compile(model, fullgraph=True)
|
good first issue, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Applying torch.compile(model, fullgraph=True) to a model that uses sdp_kernel results in a dynamo error.
### Error logs
from user code:
File ... , line 14, in forward
with torch.backends.cuda.sdp_kernel(
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
### Minified repro
```python
import torch
import torch._dynamo
import torch._inductor


class Repro(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.enable_flash = True
        self.enable_math = True
        self.enable_mem_efficient = True

    def forward(self, q, k, v):
        with torch.backends.cuda.sdp_kernel(
            enable_flash=self.enable_flash,
            enable_math=self.enable_math,
            enable_mem_efficient=self.enable_mem_efficient,
        ):
            out = torch.nn.functional.scaled_dot_product_attention(
                q,
                k,
                v,
                attn_mask=None,
                dropout_p=0.0,
                is_causal=False,
            )
        return out


rep = Repro()
rep_opt = torch.compile(rep, fullgraph=True)
q = torch.empty_strided(
    (1, 48, 64, 64), (0, 4096, 64, 1), device="cuda", dtype=torch.half
)
k = torch.empty_strided(
    (1, 48, 64, 64), (0, 4096, 64, 1), device="cuda", dtype=torch.half
)
v = torch.empty_strided(
    (1, 48, 64, 64), (0, 4096, 64, 1), device="cuda", dtype=torch.half
)
with torch.no_grad():
    out = rep_opt(q, k, v)
```
### Versions
NA
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
1,511 | 107,174 |
Surface NCCL and CUDA version incompatibility
|
oncall: distributed, better-engineering
|
When there is an incompatibility between the NCCL version and the CUDA version installed on the system, the user sees an error:
```
torch.distributed.DistBackendError: NCCL error in: fbcode/caffe2/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1190, invalid usage, NCCL version 2.10.3
ncclInvalidUsage: This usually reflects invalid usage of NCCL library.
```
Ideally we would surface this with an earlier check or provide a better error message.
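A hedged sketch (an assumption, not an existing check) of the kind of early version probe that could surface the mismatch before the first collective:
```python
import torch

# Both values are available without running any collective, so a mismatch can be
# reported long before NCCL is lazily initialized.
print("CUDA used to build PyTorch:", torch.version.cuda)
print("NCCL version:", torch.cuda.nccl.version())  # e.g. (2, 10, 3)
```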
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @kwen2501 @awgu @penguinwu
| 5 |
1,512 | 107,173 |
Dynamo test_vmap failures on Python-3.8
|
triaged, oncall: pt2, module: functorch, module: dynamo
|
### 🐛 Describe the bug
While migrating test jobs away from bionic (in https://github.com/pytorch/pytorch/pull/105260) I noticed that the following tests began to fail (though maybe they were already disabled):
- `test_arithmetic_add_dunder` segfaults (a minimal sketch of what this test exercises follows after this list):
```
Process 5225 stopped
* thread #1, name = 'python', stop reason = signal SIGABRT
frame #0: 0x00007ffff7c8b00b libc.so.6`__GI_raise(sig=2) at raise.c:51:1
(lldb) bt
* thread #1, name = 'python', stop reason = signal SIGABRT
* frame #0: 0x00007ffff7c8b00b libc.so.6`__GI_raise(sig=2) at raise.c:51:1
frame #1: 0x00007ffff7c6a859 libc.so.6`__GI_abort at abort.c:79:7
frame #2: 0x00007ffff6b7a35a libstdc++.so.6`__cxxabiv1::__terminate(void (*)()) + 10
frame #3: 0x00007ffff6b7a3c5 libstdc++.so.6`std::terminate() + 21
frame #4: 0x00007fffdf67eeab libtorch_cpu.so`__clang_call_terminate + 11
frame #5: 0x00007fffdf92094c libtorch_cpu.so`at::functorch::WithoutTop::~WithoutTop() + 252
frame #6: 0x00007fffdf92108c libtorch_cpu.so`at::functorch::dynamicLayerBack(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*, bool) + 316
frame #7: 0x00007fffe0565aff libtorch_cpu.so`c10::impl::BoxedKernelWrapper<at::Tensor (at::Tensor const&, at::Tensor const&, c10::Scalar const&), void>::call(c10::BoxedKernel const&, c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 63
frame #8: 0x00007fffe07cb883 libtorch_cpu.so`at::_ops::add_Tensor::call(at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 387
frame #9: 0x00007fffdf7e543f libtorch_cpu.so`std::tuple<at::Tensor, c10::optional<long> > at::functorch::_binary_pointwise_batch_rule<at::Tensor (*)(at::Tensor const&, at::Tensor const&, c10::Scalar const&), &(at::_ops::add_Tensor::call(at::Tensor const&, at::Tensor const&, c10::Scalar const&)), c10::Scalar const&>(at::Tensor const&, c10::optional<long>, at::Tensor const&, c10::optional<long>, c10::Scalar const&) + 111
frame #10: 0x00007fffdf7c5db7 libtorch_cpu.so`at::Tensor at::functorch::add_Tensor_generated_plumbing<std::tuple<at::Tensor, c10::optional<long> > (*)(at::Tensor const&, c10::optional<long>, at::Tensor const&, c10::optional<long>, c10::Scalar const&), &(at::functorch::BinaryPointwiseBatchRuleHelper<at::Tensor (*)(at::Tensor const&, at::Tensor const&, c10::Scalar const&), &(at::_ops::add_Tensor::call(at::Tensor const&, at::Tensor const&, c10::Scalar const&)), c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::Scalar const&> >::apply(at::Tensor const&, c10::optional<long>, at::Tensor const&, c10::optional<long>, c10::Scalar const&))>(at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 695
frame #11: 0x00007fffdf6e4be2 libtorch_cpu.so`std::decay<c10::guts::infer_function_traits<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, at::Tensor const&, c10::Scalar const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::Scalar const&> > >::type::return_type>::type c10::impl::call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, at::Tensor const&, c10::Scalar const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::Scalar const&> >, false, 0ul, 1ul, 2ul, at::Tensor const&, at::Tensor const&, c10::Scalar const&>(c10::OperatorKernel*, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<std::vector> >*, std::integer_sequence<unsigned long, 0ul, 1ul, 2ul>, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::Scalar const&>*) + 82
frame #12: 0x00007fffdf6e4ab8 libtorch_cpu.so`c10::impl::make_boxed_from_unboxed_functor<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, at::Tensor const&, c10::Scalar const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, c10::Scalar const&> >, false>::call(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) + 24
frame #13: 0x00007fffdf6a962a libtorch_cpu.so`c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const + 106
frame #14: 0x00007fffdf924dcd libtorch_cpu.so`at::functorch::Interpreter::process(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) + 45
frame #15: 0x00007fffdf920c69 libtorch_cpu.so`void c10::BoxedKernel::make_boxed_function<&(at::functorch::dynamicLayerFrontFallback(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*))>(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) + 345
frame #16: 0x00007fffe0565aff libtorch_cpu.so`c10::impl::BoxedKernelWrapper<at::Tensor (at::Tensor const&, at::Tensor const&, c10::Scalar const&), void>::call(c10::BoxedKernel const&, c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 63
frame #17: 0x00007fffe07cb883 libtorch_cpu.so`at::_ops::add_Tensor::call(at::Tensor const&, at::Tensor const&, c10::Scalar const&) + 387
frame #18: 0x00007fffed0dbbe6 libtorch_python.so`torch::autograd::THPVariable_add(_object*, _object*, _object*) + 630
frame #19: 0x00007fffed0be616 libtorch_python.so`_object* torch::autograd::TypeError_to_NotImplemented_<&(torch::autograd::THPVariable_add(_object*, _object*, _object*))>(_object*, _o
```
- `test_fails_with_autograd_function` passes for `grad` and `vmap`
I skipped those tests in CI, but am filing this issue to debug further.
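A minimal sketch of what `test_arithmetic_add_dunder` exercises (an assumption based on the test name, not the actual test code); the real test additionally runs this under dynamo on Python 3.8:
```python
import torch
from torch.func import vmap

x = torch.randn(3, 4)
y = torch.randn(3, 4)
# vmap over the __add__ dunder, the pattern the failing test name suggests.
out = vmap(lambda a, b: a + b)(x, y)
print(torch.allclose(out, x + y))  # True
```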
### Versions
CI
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,513 | 107,170 |
Torch randn cannot take symbol shapes as shape argument.
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
torch.randn fails if the input shape contains non-concrete (symbolic) sizes. This smells like a bug to me, since randn_like would work in this example because it can read the symbolic shape from the tensor. I can post
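A hedged sketch of the workaround hinted at above (an assumption, not a fix in diffusers or PyTorch): reading the shape from the tensor itself keeps it symbolic under dynamo.
```python
import torch

def sample_noise(latent_mean: torch.Tensor) -> torch.Tensor:
    # torch.randn(latent_mean.shape, ...) materializes the sizes and trips
    # "SymIntArrayRef expected to contain only concrete integers" when traced
    # with a dynamic batch dim; randn_like carries the symbolic shape through.
    return torch.randn_like(latent_mean)
```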
### Error logs
```
Error executing job with overrides: []
Traceback (most recent call last):
File "/diffusion/run.py", line 26, in <module>
main()
File "/usr/lib/python3/dist-packages/hydra/main.py", line 94, in decorated_main
_run_hydra(
File "/usr/lib/python3/dist-packages/hydra/_internal/utils.py", line 394, in _run_hydra
_run_app(
File "/usr/lib/python3/dist-packages/hydra/_internal/utils.py", line 457, in _run_app
run_and_report(
File "/usr/lib/python3/dist-packages/hydra/_internal/utils.py", line 223, in run_and_report
raise ex
File "/usr/lib/python3/dist-packages/hydra/_internal/utils.py", line 220, in run_and_report
return func()
File "/usr/lib/python3/dist-packages/hydra/_internal/utils.py", line 458, in <lambda>
lambda: hydra.run(
File "/usr/lib/python3/dist-packages/hydra/_internal/hydra.py", line 132, in run
_ = ret.return_value
File "/usr/lib/python3/dist-packages/hydra/core/utils.py", line 260, in return_value
raise self._return_value
File "/usr/lib/python3/dist-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "/diffusion/run.py", line 22, in main
return train(config)
File "/diffusion/diffusion/train.py", line 139, in train
return eval_and_then_train()
File "/diffusion/diffusion/train.py", line 136, in eval_and_then_train
trainer.eval()
File "/usr/lib/python3/dist-packages/composer/trainer/trainer.py", line 2656, in eval
self._eval_loop(
File "/usr/lib/python3/dist-packages/composer/trainer/trainer.py", line 2780, in _eval_loop
self.state.outputs = self._original_model.eval_forward(self.state.batch)
File "/diffusion/diffusion/models/stable_diffusion.py", line 206, in eval_forward
unet_out, targets, timesteps = self.forward(batch)
File "/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
return fn(*args, **kwargs)
File "/diffusion/diffusion/models/stable_diffusion.py", line 173, in forward
latents = self.vae.encode(inputs)['latent_dist'].sample().data
File "/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py", line 493, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/usr/lib/python3/dist-packages/torch/_dynamo/convert_frame.py", line 605, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/usr/lib/python3/dist-packages/torch/_dynamo/convert_frame.py", line 132, in _fn
return fn(*args, **kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/convert_frame.py", line 370, in _convert_frame_assert
return _compile(
File "/usr/lib/python3/dist-packages/torch/_dynamo/convert_frame.py", line 536, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/convert_frame.py", line 447, in compile_inner
out_code = transform_code_object(code, transform)
File "/usr/lib/python3/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/usr/lib/python3/dist-packages/torch/_dynamo/convert_frame.py", line 425, in transform
tracer.run()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 2071, in run
super().run()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 2176, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 2283, in inline_call_
tracer.run()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 1167, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 2176, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 2283, in inline_call_
tracer.run()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 1167, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/torch.py", line 716, in call_function
tensor_variable = wrap_fx_proxy(
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/builder.py", line 1163, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/usr/lib/python3/dist-packages/torch/_dynamo/variables/builder.py", line 1237, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 1351, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 1319, in get_fake_value
return wrap_fake_exception(
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 898, in wrap_fake_exception
return fn()
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 1320, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 1385, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/usr/lib/python3/dist-packages/torch/_dynamo/utils.py", line 1372, in run_node
return node.target(*args, **kwargs)
File "/usr/lib/python3/dist-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
run_name: sd2-base-256
File "/usr/lib/python3/dist-packages/torch/_subclasses/fake_tensor.py", line 1233, in __torch_dispatch__
run_name: sd2-base-256
return self.dispatch(func, types, args, kwargs)
File "/usr/lib/python3/dist-packages/torch/_subclasses/fake_tensor.py", line 1470, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/usr/lib/python3/dist-packages/torch/_subclasses/fake_tensor.py", line 410, in constructors
r = func(*args, **new_kwargs)
File "/usr/lib/python3/dist-packages/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method randn of type object at 0x7f3d98806900>(*((s1, 4, 32, 32),), **{'generator': None, 'device': device(type='cuda', index=0), 'dtype': torch.float16, 'layout': torch.strided}):
aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:3364: SymIntArrayRef expected to contain only concrete integers
from user code:
File "/diffusion/diffusion/models/stable_diffusion.py", line 173, in <resume in forward>
latents = self.vae.encode(inputs)['latent_dist'].sample().data
File "/usr/lib/python3/dist-packages/diffusers/models/vae.py", line 659, in sample
sample = randn_tensor(
File "/usr/lib/python3/dist-packages/diffusers/utils/torch_utils.py", line 79, in randn_tensor
latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Minified repro
_No response_
### Versions
Looking in indexes: https://download.pytorch.org/whl/nightly/cu118
Collecting torch
Downloading https://download.pytorch.org/whl/nightly/cu118/torch-2.1.0.dev20230814%2Bcu118-cp310-cp310-linux_x86_64.whl (2321.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 GB 3.3 MB/s eta 0:00:00
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/cu118/torchvision-0.16.0.dev20230814%2Bcu118-cp310-cp310-linux_x86_64.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 23.0 MB/s eta 0:00:00
Collecting torchaudio
Downloading https://download.pytorch.org/whl/nightly/cu118/torchaudio-2.1.0.dev20230814%2Bcu118-cp310-cp310-linux_x86_64.whl (3.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 18.9 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/lib/python3/dist-packages (from torch) (3.12.2)
Requirement already satisfied: typing-extensions in /usr/lib/python3/dist-packages (from torch) (4.5.0)
Requirement already satisfied: sympy in /usr/lib/python3/dist-packages (from torch) (1.11.1)
Requirement already satisfied: networkx in /usr/lib/python3/dist-packages (from torch) (3.1)
Requirement already satisfied: jinja2 in /usr/lib/python3/dist-packages (from torch) (3.1.2)
Requirement already satisfied: fsspec in /usr/lib/python3/dist-packages (from torch) (2023.6.0)
Collecting pytorch-triton==2.1.0+e6216047b8 (from torch)
Downloading https://download.pytorch.org/whl/nightly/pytorch_triton-2.1.0%2Be6216047b8-cp310-cp310-linux_x86_64.whl (96.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.4/96.4 MB 28.0 MB/s eta 0:00:00
Requirement already satisfied: numpy in /usr/lib/python3/dist-packages (from torchvision) (1.24.2)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from torchvision) (2.31.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/lib/python3/dist-packages (from torchvision) (9.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2->torch) (2.1.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/lib/python3/dist-packages (from requests->torchvision) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests->torchvision) (2.8)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->torchvision) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->torchvision) (2023.5.7)
Requirement already satisfied: mpmath>=0.19 in /usr/lib/python3/dist-packages (from sympy->torch) (1.3.0)
DEPRECATION: mlnx-tools -5.2.0- has a non-standard version number. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of mlnx-tools or contact the author to suggest that they release a version with a conforming version number. Discussion can be found at https://github.com/pypa/pip/issues/12063
Installing collected packages: pytorch-triton, torch, torchvision, torchaudio
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
diffusion 0.0.1 requires xformers==0.0.20, which is not installed.
diffusion 0.0.1 requires mosaicml==0.15.1, but you have mosaicml 0.14.1 which is incompatible.
mosaicml 0.14.1 requires torch<2.1,>=1.13.1, but you have torch 2.1.0.dev20230814+cu118 which is incompatible.
mosaicml 0.14.1 requires torchvision<0.16,>=0.13.1, but you have torchvision 0.16.0.dev20230814+cu118 which is incompatible.
torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.1.0.dev20230814+cu118 which is incompatible.
torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.1.0.dev20230814+cu118 which is incompatible.
Successfully installed pytorch-triton-2.1.0+e6216047b8 torch-2.1.0.dev20230814+cu118 torchaudio-2.1.0.dev20230814+cu118 torchvision-0.16.0.dev20230814+cu118
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
1,514 | 107,166 |
jit compilation returns an int rather than a bool when using math.isnan()
|
oncall: jit
|
### 🐛 Describe the bug
When using PyTorch version 2.0.1+cu117, I'm finding that JIT-compiling a function which uses `math.isnan()` causes `math.isnan()` to return an int rather than a bool. This seems to cause cascading issues because the compiler assumes it is dealing with a boolean rather than an integer. This is fairly straightforward to reproduce:
# Example 1
```python
import torch
import math

@torch.jit.script
def test_nan(num: float) -> bool:
    return math.isnan(num)

print(test_nan(13.))
print(type(test_nan(13.)))
```
## Expected Result
```
> False
> <class 'bool'>
```
## Actual Result
```
> 0
> <class 'int'>
```
# Example 2
If I try using this in an if statement, I then get an additional error telling me to make a bug report:
```python
import torch
import math

@torch.jit.script
def test_nan(num: float) -> bool:
    if math.isnan(num):
        print('foo')
    return math.isnan(num)

print(test_nan(13.))
print(type(test_nan(13.)))
```
## Error message
```
RuntimeError Traceback (most recent call last)
<command-3648418541638725> in <module>
8 return math.isnan(num)
9
---> 10 print(test_nan(13.))
11 print(type(test_nan(13.)))
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<command-3648418541638725>", line 6, in test_nan
@torch.jit.script
def test_nan(num: float) -> bool:
if math.isnan(num):
~~~~~~~~~~~~~~~~~~~
print('foo')
~~~~~~~~~~ <--- HERE
return math.isnan(num)
RuntimeError: isBool() INTERNAL ASSERT FAILED at "../aten/src/ATen/core/ivalue.h":645, please report a bug to PyTorch.
```
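A hedged workaround sketch (an assumption on my part, not something from the issue): computing the NaN check with a float comparison keeps the result a genuine bool under TorchScript.
```python
import torch

@torch.jit.script
def test_nan(num: float) -> bool:
    # NaN is the only value that is not equal to itself, and a float comparison
    # stays a bool in TorchScript, sidestepping the int returned by math.isnan().
    return num != num

print(test_nan(float("nan")))  # True
print(type(test_nan(13.)))     # <class 'bool'>
```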
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1039-aws-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 3000.000
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.03
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
Versions of relevant libraries:
[pip3] numpy==1.20.1
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 2 |
1,515 | 107,163 |
Create fastpath backend context manager, similar to SDPA kernel backend manager
|
fb-exported, release notes: nn, release notes: distributed (fsdp)
|
Summary:
1. create context manager infrastructure with aten-managed context with room for 64 settings
- as Python builtin for Python eager and torch.compile
- as TS builtins guarded by torch.jit.is_scripting() for compatibility with legacy TS code (since "normal" Python builtins that are part of the PyTorch distro are not accessible in TS.)
2. Ability to start with True or False initialization (we always init the state to 0, but will invert some bits based on
desired startup/default polarity of a flag)
3. Create fastpath backend context manager using 4 context bits from 1
(similar to the SDPA kernel backend manager, to give users instant familiarity with the mechanism; a generic sketch of the pattern follows this list)
4. Use the context manager in FSDP test code (this is prep for a future update that broadens use of the fastpath, and would
otherwise break this FSDP test when the executions are performed using different kernels, showing divergence in
the test not through an error, but through the use of different kernels with different FP rounding characteristics, etc.)
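For readers unfamiliar with the SDPA-style pattern being mirrored here, a purely generic illustration of such a context manager (the name and the flag state are hypothetical, not the actual API added by this diff):
```python
import contextlib

_FASTPATH_FLAGS = {"native_fastpath": True}  # hypothetical global backend state

@contextlib.contextmanager
def fastpath_kernel(native_fastpath: bool = True):  # hypothetical name, mirroring sdp_kernel
    """Temporarily flip a backend flag, restoring the previous value on exit."""
    prev = _FASTPATH_FLAGS["native_fastpath"]
    _FASTPATH_FLAGS["native_fastpath"] = native_fastpath
    try:
        yield
    finally:
        _FASTPATH_FLAGS["native_fastpath"] = prev

# usage: force the non-fastpath kernels inside the block
with fastpath_kernel(native_fastpath=False):
    pass  # run the module under test here
```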
Test Plan: sandcastle, github
Differential Revision: D48325593
| 45 |
1,516 | 107,155 |
DISABLED test_learnable_forward_per_channel_cpu (quantization.core.test_workflow_ops.TestFakeQuantizeOps)
|
triaged, module: macos, skipped
|
Platforms: macos, mac
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_quantization.py%3A%3ATestFakeQuantizeOps%3A%3Atest_learnable_forward_per_channel_cpu)).
cc @malfet @albanD
| 2 |
1,517 | 107,147 |
Drop c10/util/string_view.hpp
|
fb-exported, Stale, release notes: mobile
|
Test Plan: Sandcastle
Differential Revision: D46862773
| 4 |
1,518 | 107,143 |
[dynamo] calling __torch_function__ with dynamically created subclass of torch.Tensor fails compilation
|
triaged, module: __torch_function__, module: dynamo
|
### π Describe the bug
I apologize in advance to whoever will take care of this bug, as it must be the most obscure corner case of all time.
But I swear, this is breaking our production code right now.
So here we go, the following code snippet breaks:
```python
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = MyClass.__subclasses__()
types_ = tuple(
torch.Tensor if t in subclasses else t for t in [type(self)]
)
return torch.Tensor.__torch_function__(torch.abs, types_, torch.rand(1), {})
def create_subclass(parents):
class MySubClass(*parents):
...
return MySubClass
def call_foo(x):
return x.foo()
cls = create_subclass((MyClass,))
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # fails
```
```terminal
Traceback (most recent call last):
File "/home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_189.py", line 27, in <module>
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # fails
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_189.py", line 21, in call_foo
return x.foo() # works
File "/home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_189.py", line 5, in foo
def foo(self):
AttributeError: module '__main__' has no attribute 'MySubClass'
```
This does **not** happen if *any one* of the following criteria is met:
- the subclass of `MyClass` is not created dynamically
- `MyClass.__subclasses__()` is not used for `subclasses`
- `MyClass.__subclasses__()` _is_ used for `subclasses`, but `subclasses` is not used
- `torch.Tensor.__torch_function__()` is not called
<details>
<summary>Code snippets for each of the above</summary>
All of the below produce **no error**:
**1. The subclass of `MyClass` is not created dynamically**
```python
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = MyClass.__subclasses__()
types_ = tuple(
torch.Tensor if t in subclasses else t for t in [type(self)]
)
return torch.Tensor.__torch_function__(torch.abs, types_, torch.rand(1), {})
def call_foo(x):
return x.foo()
class MySubClass(MyClass):
...
cls = MySubClass
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # works
```
**2. `MyClass.__subclasses__()` is not used for `subclasses`**
```python
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = [MyClass]
types_ = tuple(
torch.Tensor if t in subclasses else t for t in [type(self)]
)
return torch.Tensor.__torch_function__(torch.abs, types_, torch.rand(1), {})
def create_subclass(parents):
class MySubClass(*parents):
...
return MySubClass
def call_foo(x):
return x.foo()
cls = create_subclass((MyClass,))
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # works
```
**3. `MyClass.__subclasses__()` _is_ used for `subclasses`, but `subclasses` is not used**
```python
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = MyClass.__subclasses__()
types_ = tuple(
torch.Tensor for t in [type(self)]
)
return torch.Tensor.__torch_function__(torch.abs, types_, torch.rand(1), {})
def create_subclass(parents):
class MySubClass(*parents):
...
return MySubClass
def call_foo(x):
return x.foo()
cls = create_subclass((MyClass,))
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # works
```
**4. `torch.Tensor.__torch_function__()` is not called**
```python
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = MyClass.__subclasses__()
types_ = tuple(
torch.Tensor if t in subclasses else t for t in [type(self)]
)
return types_
def create_subclass(parents):
class MySubClass(*parents):
...
return MySubClass
def call_foo(x):
return x.foo()
cls = create_subclass((MyClass,))
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # works
```
------------------------
</details>
So from the above __one could conclude that the problem is passing `MyClass.__subclasses__()` to `torch.Tensor.__torch_function__()`, right?__
Well, here is where it gets weird (at least to me). __The error happens even if `types_` is never passed to `torch.Tensor.__torch_function__()`, as long as it is assigned somewhere!__
```python
# FAILS THE SAME WAY AS THE CODE ABOVE
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = MyClass.__subclasses__()
types_ = tuple(
torch.Tensor if t in subclasses else t for t in [type(self)]
) # this is never used anywhere!
types_ = (torch.Tensor,) # because it is overwritten here
return torch.Tensor.__torch_function__(torch.abs, types_, torch.rand(1), {})
def create_subclass(parents):
class MySubClass(*parents):
...
return MySubClass
def call_foo(x):
return x.foo()
cls = create_subclass((MyClass,))
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000))) # fails
```
Commenting out the **unused (!)** line fixes it:
```python
# THIS WORKS!
import torch
class MyClass(torch.Tensor):
def foo(self):
subclasses = MyClass.__subclasses__()
# types_ = tuple(
# torch.Tensor if t in subclasses else t for t in [type(self)]
# ) # commenting out the unused line
types_ = (torch.Tensor,)
return torch.Tensor.__torch_function__(torch.abs, types_, torch.rand(1), {})
def create_subclass(parents):
class MySubClass(*parents):
...
return MySubClass
def call_foo(x):
return x.foo()
cls = create_subclass((MyClass,))
call_foo(cls(torch.rand(1000, 1000))) # works
torch.compile(call_foo, backend='eager')(cls(torch.rand(1000, 1000)))  # works now
```
```terminal
Process finished with exit code 0
```
So either I am missing something very obvious, or I just brought you a monstrosity of a corner case bug.
In the latter case, sorry, and salute! π«‘
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~20.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 140
Model name: 11th Gen Intel(R) Core(TM) i7-1195G7 @ 2.90GHz
Stepping: 2
CPU MHz: 2469.934
CPU max MHz: 5000,0000
CPU min MHz: 400,0000
BogoMIPS: 5836.80
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 128 KiB
L2 cache: 5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.3.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.4
[pip3] torch==2.0.1+cpu
[conda] Could not collect
cc @hameerabbasi @rgommers @peterbell10 @ezyang @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
1,519 | 107,139 |
[xla hash update] update the pinned xla hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 4 |
1,520 | 107,136 |
Updating cpuinfo to the latest
|
open source, Stale, topic: not user facing
|
Updating cpuinfo submodule to the latest to include the latest commits on Windows on Arm enablement.
| 5 |
1,521 | 107,134 |
[opinfo] Add cases to slice_scatter to improve coverage
|
triaged, open source, Stale, topic: not user facing
|
`slice_scatter` accepts `end > L` and negative dim. Previously we did not have test cases for them but they come up in real uses.
Based on https://github.com/microsoft/onnxscript/pull/995
| 2 |
1,522 | 107,133 |
torch.inverse throws error when DP but not in DDP or single GPU
|
needs reproduction, triaged, module: data parallel, module: linear algebra, module: edge cases
|
### π Describe the bug
## Description
When I use the `torch.inverse` function, it throws an error when I train the model in DataParallel mode, but the error does not appear if I use only a single GPU or DDP mode. I use two RTX 4090s on one machine and the OS is Ubuntu 22.04.1 LTS.
## Code
```
proj = torch.matmul(src_proj, torch.inverse(ref_proj))
```
## Error
```
Traceback (most recent call last):
File "train.py", line 345, in <module>
train(model, model_loss, optimizer, TrainImgLoader, TestImgLoader, start_epoch, args)
File "train.py", line 49, in train
loss, scalar_outputs, image_outputs = train_sample(model, model_loss, optimizer, sample, args)
File "train.py", line 119, in train_sample
outputs = model(
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/_utils.py", line 457, in reraise
raise exception
torch._C._LinAlgError: Caught _LinAlgError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/models/mymvsnet.py", line 101, in forward
outputs_stage = self.StageNet(
File "/Anaconda3/envs/mymvsnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/models/mymvsnet.py", line 151, in forward
warped_src = homo_warping(src_fea, src_proj_new, ref_proj_new, depth_hypo) # B C D H W
File "/models/submodules.py", line 133, in homo_warping
proj = torch.matmul(src_proj, torch.inverse(ref_proj))
torch._C._LinAlgError: torch.linalg.inv: (Batch element 0): The diagonal element 1 is zero, the inversion could not be completed because the input matrix is singular.
```
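A small check that may help narrow this down (my own suggestion, not part of the original report): DataParallel splits the batch across replicas, so one replica may receive a degenerate projection matrix even when the full batch looks fine. Inspecting each `ref_proj` slice right before the inverse identifies which replica gets the singular input:
```python
# hypothetical debug snippet inside homo_warping, just before the failing call
dets = torch.linalg.det(ref_proj.float())
if (dets.abs() < 1e-8).any():
    print("near-singular ref_proj on", ref_proj.device, dets)
proj = torch.matmul(src_proj, torch.inverse(ref_proj))
```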
### Versions
## Pytorch Version
```
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 2 |
1,523 | 107,130 |
[docs] Document dtype conversions dtype.to_complex() dtype.to_real()
|
module: docs, triaged
|
Edit by @colesbury
We have helpful conversions between real and complex dtypes:
```
torch.float32.to_complex() -> torch.complex64
torch.complex64.to_real() -> torch.float32
```
But these conversions are not documented. We should document them!
### Original request below:
### π The feature, motivation and pitch
Builtin helpers would be nice for dtype-polymorphic code of the type:
```python
def f(x : torch.Tensor):
assert x.is_floating_point()
return torch.zeros_like(x, dtype = x.dtype.complex()) # or some other bikeshedding
```
Same for conversion of complex dtype to real dtype of the corresponding width
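For reference, the polymorphism asked for in the original request is already expressible with the existing (currently undocumented) conversions; a minimal sketch:
```python
import torch

def as_complex_zeros(x: torch.Tensor) -> torch.Tensor:
    assert x.is_floating_point()
    # torch.float32 -> torch.complex64, torch.float64 -> torch.complex128
    return torch.zeros_like(x, dtype=x.dtype.to_complex())

def as_real_zeros(x: torch.Tensor) -> torch.Tensor:
    assert x.is_complex()
    # torch.complex64 -> torch.float32, torch.complex128 -> torch.float64
    return torch.zeros_like(x, dtype=x.dtype.to_real())
```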
cc @svekars @carljparker
| 1 |
1,524 | 107,125 |
combining `vmap` with NN containing `MaxPool2d' leads to discrepancies in output
|
triaged, module: functorch
|
### π Describe the bug
I would like to compute the Jacobian of the output of a CNN with respect to the inputs, using `torch.func.jacrev`. For a batch of (e.g. MNIST) images, you can do this with a regular for loop or (preferably) with `torch.vmap` instead. Both methods should lead to the same outcome. However, it turns out this is not always the case when the CNN contains a `MaxPool2d` layer.
The code snippet below gives a concrete example of the discrepancy. When you replace `MaxPool2d` with `AvgPool2d`, the issue disappears. Also, the discrepancy grows when the input data has a smaller range (controlled by `scale` in the code below): with `scale=1.0` there is no issue, whereas with `scale=0.000001` the differences between the two methods are larger. I suspect that `torch.vmap` acts incorrectly on `MaxPool2d` when two (or more) equal values are being pooled.
It would be nice if this issue gets resolved.
```python
import torch
import torch.nn as nn
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(1, 16, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2)) # try nn.AvgPool2d instead
self.FC = nn.Linear(16 * 14 * 14, 10)
def forward(self, x):
x = self.conv1(x)
x = x.flatten(start_dim=1)
output = self.FC(x)
return output
# initialize model
model = net()
def model_single_input(x): # forward pass on single input (image)
return model(x.unsqueeze(dim=0))
# generate batch of input data
batch_size = 100
scale = 0.001 # smaller scale results in larger discrepancy between two methods
X = scale * torch.rand(size=(batch_size, 1, 28, 28,)) # data in MNIST format
# compute jacrev with vmap
jacrev_vmap = torch.vmap(torch.func.jacrev(model_single_input))(X)
# compute jacrev without vmap
jacrev = torch.zeros(jacrev_vmap.shape)
for i, x in enumerate(X):
jacrev[i] = torch.func.jacrev(model_single_input)(x)
# comparison of methods
print("Equivalence of two methods: ", torch.all(jacrev == jacrev_vmap))
print("Maximum absolute difference between the methods, per input (image):")
torch.max(torch.abs(jacrev - jacrev_vmap).view(batch_size, -1), dim=(1)).values
```
Example of output:
```
Equivalence of two methods: tensor(False)
Maximum absolute difference between the methods, per input (image):
tensor([0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0046, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0061, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0048, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0034, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0061, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000], grad_fn=<MaxBackward0>)
```
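A smaller sketch that tries to isolate the suspected tie case (my own construction, not part of the original report): a constant input makes every pooling window contain equal values, so the max is not unique.
```python
import torch

pool = torch.nn.MaxPool2d(kernel_size=2)

def f(x):  # x: (1, 2, 2) -> scalar
    return pool(x.unsqueeze(0)).sum()

x = torch.zeros(4, 1, 2, 2)  # all entries equal -> ties in every pooling window

jac_loop = torch.stack([torch.func.jacrev(f)(xi) for xi in x])
jac_vmap = torch.vmap(torch.func.jacrev(f))(x)
print(torch.equal(jac_loop, jac_vmap))
```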
### Versions
PyTorch version: 2.0.1
Python version: 3.10.9 (main, Mar 1 2023, 12:20:14) [Clang 14.0.6 ] (64-bit runtime)
[conda] pytorch 2.0.1 py3.10_0 pytorch
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 3 |
1,525 | 107,115 |
H100 works differently than rtx4090 on same model
|
needs reproduction, triaged, module: numerical-reproducibility
|
### π Describe the bug
Honestly, I don't know if it's right to file a bug report here.
However, it seems to be a problem that only occurs with PyTorch on the H100, so I am writing up this issue.
I've been training [VITS](https://github.com/jaywalnut310/vits) on an H100 GPU, and the Mel loss doesn't decrease; only the KL loss goes down, to negative values. Sometimes the loss even becomes `nan`.
But when I tested with an RTX 4090 in the same environment, all losses decrease as expected.
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Gold 6448Y
Stepping: 8
Frequency boost: enabled
CPU MHz: 800.000
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 64 MiB
L3 cache: 60 MiB
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.3
[pip3] numpydoc==1.5.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.24.3 py39h14f4228_0
[conda] numpy-base 1.24.3 py39h31eccc5_0
[conda] numpydoc 1.5.0 py39h06a4308_0
[conda] pytorch 2.0.1 py3.9_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py39_cu118 pytorch
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision 0.15.2 py39_cu118 pytorch
[conda] triton 2.0.0 pypi_0 pypi
| 4 |
1,526 | 107,114 |
DISABLED test_make_fx_symbolic_exhaustive_special_bessel_y1_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: fx, module: unknown
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_fx_symbolic_exhaustive_special_bessel_y1_cpu_float32&suite=TestProxyTensorOpInfoCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15858837799).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_fx_symbolic_exhaustive_special_bessel_y1_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_proxy_tensor.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_proxy_tensor.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 9 |
1,527 | 107,112 |
from_blob python api
|
triaged, enhancement, module: python frontend
|
Hi, does the Python API have a `from_blob` like the one in C++?
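As far as I know, the Python API does not expose `from_blob` directly; the closest equivalents for wrapping existing memory without a copy are `torch.frombuffer` (for any object implementing the buffer protocol) and `torch.from_numpy`. A minimal sketch:
```python
import torch

buf = bytearray(4 * 4)                          # any writable object exposing the buffer protocol
t = torch.frombuffer(buf, dtype=torch.float32)  # shares memory with buf, no copy
t.fill_(1.0)
print(list(buf[:4]))                            # the underlying bytes changed as well
```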
cc @albanD
| 2 |
1,528 | 107,109 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,529 | 107,108 |
WIP: [pt2] add metas for `foreach` ops
|
open source, Stale
|
Stack from [ghstack](https://github.com/ezyang/ghstack):
* __->__ #107108
| 6 |
1,530 | 107,102 |
Error when using sparse_coo tensor with optimizer
|
module: sparse, triaged
|
### π Describe the bug
The problem when I using "sparse_coo" tensor
```
import torch
print(torch.__version__)
cuda = torch.device('cuda:0')
a_o = torch.tensor([-3, -4]
, dtype=torch.float32, device=cuda)
a_p = torch.tensor([[-3, 1],
[ 1,-1],
[ 1, 2],]
, dtype=torch.float32, device=cuda)
# /->
""" without this line everything works,
but I need to compress terribly large matrices
so that the data fits in the memory of the GPU
"""
a_p = a_p.to_sparse_coo()
# <-/
b_p = torch.tensor([ 0, -2, -14,]
, dtype=torch.float32, device=cuda)
x = torch.zeros([len(a_o)], dtype=torch.float32, device=cuda)
x.requires_grad = True
def model_lp(x):
return (torch.mul(a_o, x).sum()) + (torch.add(b_p, torch.sum(torch.mul(a_p, x), dim=1)).relu().square().sum()*10_000)
optimizer = torch.optim.Adam([x], lr=3e-3)
for step in range(10000):
pred = model_lp(x)
optimizer.zero_grad()
pred.backward()
optimizer.step()
if step % 1000 == 0:
print(f'step {step}: f(x)={pred.item()}, x={x.tolist()},' )
```
Error message
```
2.0.1+cu118
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-54-8f4f7b22619a>](https://localhost:8080/#) in <cell line: 31>()
32 pred = model_lp(x)
33 optimizer.zero_grad()
---> 34 pred.backward()
35 optimizer.step()
36 if step % 1000 == 0:
1 frames
[/usr/local/lib/python3.10/dist-packages/torch/_tensor.py](https://localhost:8080/#) in backward(self, gradient, retain_graph, create_graph, inputs)
485 inputs=inputs,
486 )
--> 487 torch.autograd.backward(
488 self, gradient, retain_graph, create_graph, inputs=inputs
489 )
[/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py](https://localhost:8080/#) in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
198 # some Python versions print out the first line of a multi-line function
199 # calls in the traceback and some print out the last line
--> 200 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
201 tensors, grad_tensors_, retain_graph, create_graph, inputs,
202 allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
NotImplementedError: Could not run 'aten::view' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::view' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, MkldnnCPU, NestedTensorCPU, NestedTensorCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:43986 [kernel]
Meta: registered at aten/src/ATen/RegisterMeta.cpp:26824 [kernel]
QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:929 [kernel]
QuantizedCUDA: registered at aten/src/ATen/RegisterQuantizedCUDA.cpp:459 [kernel]
MkldnnCPU: registered at aten/src/ATen/RegisterMkldnnCPU.cpp:507 [kernel]
NestedTensorCPU: registered at aten/src/ATen/RegisterNestedTensorCPU.cpp:603 [kernel]
NestedTensorCUDA: registered at aten/src/ATen/RegisterNestedTensorCUDA.cpp:744 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at aten/src/ATen/RegisterFunctionalization_3.cpp:22788 [kernel]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:21 [kernel]
Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:23 [kernel]
ZeroTensor: registered at aten/src/ATen/RegisterZeroTensor.cpp:161 [kernel]
ADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:5017 [kernel]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16043 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:17036 [kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:14198 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/BatchRulesViews.cpp:559 [kernel]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1077 [kernel]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
```
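A possible rewrite of the sparse term (a hypothetical sketch, not verified on this exact setup, reusing the tensors defined above): `torch.sum(torch.mul(a_p, x), dim=1)` is just the matrix-vector product `a_p @ x`, and routing it through `torch.sparse.mm` keeps `a_p` sparse while still letting autograd flow through the dense operand `x`:
```python
def model_lp_sparse(x):
    # a_p stays in sparse COO format; gradients flow to the dense vector x
    ax = torch.sparse.mm(a_p, x.unsqueeze(1)).squeeze(1)
    return torch.mul(a_o, x).sum() + torch.add(b_p, ax).relu().square().sum() * 10_000
```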
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 536.67
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 80
Model name: AMD Ryzen 7 5800H with Radeon Graphics
Stepping: 0
CPU MHz: 3193.912
BogoMIPS: 6387.82
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 4 MiB
L3 cache: 16 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 0 |
1,531 | 107,099 |
memoryview support for `torch._C.import_ir_module_from_buffer`
|
oncall: jit
|
### π The feature, motivation and pitch
Currently `torch._C.import_ir_module_from_buffer` requires the buffer to be a `std::string` (corresponding to `str` and `bytes` via pybind11), and immediately converts it to a `std::istringstream`.
https://github.com/pytorch/pytorch/blob/b897c57d47f99807528feb022a06e7f5a9d88d00/torch/csrc/jit/python/script_init.cpp#L1887-L1911
However, a copy is inevitable to obtain a `bytes`: `memoryview.tobytes()` will copy the data, `io.BytesIO`'s initialization will copy the data (there has been a [copy-on-write optimization](https://github.com/python/cpython/issues/66202) since Python 3.5, but only for `bytes` initialization), and `.read()` will copy the data again!
If `torch._C.import_ir_module_from_buffer` supported memoryview, I could do a zero-copy `torch.jit.load` after receiving the ScriptModule binary over the network:
```python
class BufferWrapper:
def __init__(self, buffer):
self.buffer = buffer
def read(self):
return self.buffer
mem: memoryview = network_recv(...)
model = torch.jit.load(BufferWrapper(mem))
```
BTW, the parameter description of `torch.jit.load` seems too strict: `f` only needs to support `read`, rather than "has to implement read, readline, tell, and seek".
https://github.com/pytorch/pytorch/blob/b897c57d47f99807528feb022a06e7f5a9d88d00/torch/jit/_serialization.py#L86-L169
### Alternatives
_No response_
### Additional context
There might be a concern that a memoryview can be mutable; however, regular files are mutable too.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,532 | 107,096 |
Revert "Revert "Add `_foreach_clamp` (#106574)""
|
triaged, open source, release notes: foreach_frontend, module: inductor, ciflow/inductor
|
This reverts commit 354484ea6d85ce2da90fd3e1e5fa9524987bcfe6.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 22 |
1,533 | 107,091 |
[vision hash update] update the pinned vision hash
|
triaged, open source, Stale, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 2 |
1,534 | 107,087 |
RuntimeError with operations on torch.float8_e5m2 and torch.float8_e4m3fn data types
|
triaged, module: float8
|
### π Describe the bug
I've been experimenting with modifications in PyTorch ([this PR](https://github.com/pytorch/pytorch/pull/104242)) to support the torch.float8_e5m2 and torch.float8_e4m3fn data types for CPU operations. While multiplication and division operations work as expected, I've encountered specific errors with addition and subtraction.
Steps to reproduce the behavior:
Create two random tensors using the following:
```python
a = torch.randn((2,3), dtype=torch.float8_e5m2)
b = torch.randn((2,3), dtype=torch.float8_e5m2)
```
The operations c = a * b and c = a / b execute without errors. However, when I attempt to use c = a + b or c = a - b, I encounter the error:
```
RuntimeError: "add_stub" not implemented for 'Float8_e5m2'
```
Additionally, if I use only a tensor with a single value for any operation, I receive the error:
```python
a = torch.tensor(1.2, dtype=torch.float8_e5m2)
b = torch.tensor(2.3, dtype=torch.float8_e5m2)
```
```
RuntimeError: Unsupported TypeMeta in ATen: nullptr (uninitialized)
```
Would appreciate guidance or a potential fix for these issues.
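In the meantime, a possible workaround (my own sketch, not verified against the patched build): perform the arithmetic in a wider dtype and cast back, since only the add/sub stubs appear to be missing:
```python
import torch

a = torch.randn((2, 3)).to(torch.float8_e5m2)
b = torch.randn((2, 3)).to(torch.float8_e5m2)

# upcast, add, then downcast back to float8
c = (a.to(torch.float32) + b.to(torch.float32)).to(torch.float8_e5m2)
```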
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git89fd1b8
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
CPU family: 6
Model: 63
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 2
CPU max MHz: 3100.0000
CPU min MHz: 1200.0000
BogoMIPS: 4589.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 6 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.1.0a0+git4bd5356
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-include 2023.1.0 h06a4308_46343
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.1.0a0+git4bd5356 dev_0 <develop>
cc @yanbing-j
| 6 |
1,535 | 107,081 |
[FSDP] summon_full_params won't change parameters
|
triaged, module: fsdp
|
### π Describe the bug
I am hitting a bug where the parameters of FSDP_model do not change after summon_full_params ends, even though writeback is set to True.
For example, I expected the following code to zero out some of the parameters in FSDP_model, but after summon_full_params the parameters revert to their previous non-zero values.
```
for n,p in FSDP_model.model.named_parameters():
if p is not None:
total_param = p.data.numel()
non_zero_param = torch.count_nonzero(p.data)
env.print(n,'main train() percentage of zero weight:', (total_param-non_zero_param)/total_param, "total_param",total_param)
with FSDP.summon_full_params(
FSDP_model,
writeback=True,
offload_to_cpu=True,
rank0_only=False,
with_grads=False
):
for n,p in FSDP_model.model.named_parameters():
p.data.masked_fill_(mask, 0.0)
for n,p in FSDP_model.model.named_parameters():
if p is not None:
total_param = p.data.numel()
non_zero_param = torch.count_nonzero(p.data)
env.print(n,'within summon_full_params main train() percentage of zero weight:', (total_param-non_zero_param)/total_param, "total_param",total_param)
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 510.73.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 3269.385
BogoMIPS: 4491.62
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-254
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] numpy 1.24.2 pypi_0 pypi
[conda] pytorch-lightning 1.9.4 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchaudio 2.0.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
| 5 |
1,536 | 107,076 |
[Dynamo] Unable to Trace AdamW Optimizer when there is LR Scheduler
|
high priority, module: optimizer, triaged, oncall: pt2, module: dynamic shapes
|
### π Describe the bug
## What is Broken
- the lr in the AdamW optimizer is a float, so it causes a recompile every time the LRScheduler changes the lr
## Suggested fix
- fix the graph break in `_multi_tensor AdamW Optimizer` and/or normal `AdamW Optimizer`
- change the LRScheduler base class to output a scalar Tensor and the AdamW optimizer to accept a scalar Tensor for lr (sketched below)
- this is similar to what @janeyx99 is doing in Fused AdamW Optimizer https://github.com/pytorch/pytorch/pull/106916
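A hypothetical sketch of what the scalar-Tensor lr option could look like for users (not something AdamW accepts in 2.0.1; it mirrors the fused-Adam work linked above):
```python
import torch

# hypothetical: lr held as a 0-dim tensor that the scheduler mutates in place,
# so the compiled optimizer.step() sees the same tensor and never needs to recompile
lr = torch.tensor(1e-3)
params = [torch.nn.Parameter(torch.randn(2, 2))]
optimizer = torch.optim.AdamW(params, lr=lr)  # hypothetical: tensor lr not accepted in 2.0.1

for step in range(3):
    params[0].grad = torch.randn(2, 2)
    optimizer.step()
    lr.mul_(0.1)  # stands in for scheduler.step()
```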
## Why this should be fixed
- a lot of the community uses the AdamW optimizer in combination with an LR scheduler and wants to torch.compile it.
## Reprod Script
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch._dynamo.utils
import torch._dynamo
import logging
torch._dynamo.config.log_level = logging.INFO
class SimpleModel(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super(SimpleModel, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, num_classes)
def forward(self, x):
x = torch.relu(self.fc1(x))
x = self.fc2(x)
return x
input_size = 784
hidden_size = 500
num_classes = 10
model = SimpleModel(input_size, hidden_size, num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim._multi_tensor.AdamW(model.parameters(), lr=0.001)
scheduler = optim.lr_scheduler.StepLR(optimizer,step_size=1, gamma=0.1)
inputs = torch.randn(64, 784)
labels = torch.randint(0,10,(64,))
num_epochs = 5
def opt_scheduler_step(optimizer):
optimizer.step()
return optimizer
opt_scheduler_step = torch.compile(opt_scheduler_step, backend="eager", fullgraph=False)
for epoch in range(num_epochs):
outputs = model(inputs)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer = opt_scheduler_step(optimizer)
scheduler.step()
print(loss.item())
```
## Logs
```
[2023-08-11 22:52:19,789] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing opt_scheduler_step
[2023-08-11 22:52:19,857] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing wrapper
[2023-08-11 22:52:19,858] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored
[2023-08-11 22:52:19,869] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing _use_grad
[2023-08-11 22:52:19,874] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing step
[2023-08-11 22:52:19,878] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing <graph break in step>
[2023-08-11 22:52:19,898] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing adamw
[2023-08-11 22:52:19,910] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing <graph break in adamw>
[2023-08-11 22:52:19,912] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing <graph break in wrapper>
2.3640196323394775
[2023-08-11 22:52:19,927] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing adamw
1.709822654724121
[2023-08-11 22:52:19,949] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing adamw
1.6508420705795288
[2023-08-11 22:52:19,972] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing adamw
1.6449681520462036
[2023-08-11 22:52:19,993] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing adamw
1.6443781852722168
```
cc: @janeyx99 @cchan @ezhang887
### Versions
python==3.8.13
torch==2.0.1
cc @ezyang @gchanan @zou3519 @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
1,537 | 107,075 |
[xla hash update] update the pinned xla hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 5 |
1,538 | 107,073 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,539 | 107,061 |
[xla hash update] update the pinned xla hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 4 |
1,540 | 107,055 |
Build a check we can defer to runtime, potentially add to the graph
|
triaged, oncall: pt2, module: dynamo
|
See discussion here https://github.com/pytorch/pytorch/pull/106886/files#r1291433908
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,541 | 107,054 |
Extend dict and by extension __dict__ modeling in dynamo to support `setdefault`, `get`
|
triaged, oncall: pt2, module: dynamo
|
ex:
`return module.__dict__.setdefault(REGISTRY_KEY, default_registry) # type: ignore[call-overload]`
`_module_state_mapping.get(module, None)`
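A minimal repro sketch (the dict and function names below are hypothetical, chosen only to exercise the two methods under `torch.compile`):
```python
import torch

_registry = {}  # hypothetical stand-in for module.__dict__ / _module_state_mapping

def fn(x, key):
    _registry.setdefault(key, [])            # needs dict.setdefault support
    scale = _registry.get("missing", 1.0)    # needs dict.get-with-default support
    return x * scale

compiled = torch.compile(fn, backend="eager", fullgraph=True)
print(compiled(torch.ones(2), "a"))
```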
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,542 | 107,053 |
Dynamo x FSDP - Issue Tracking Master Task
|
triaged, oncall: pt2, module: dynamo
|
- [ ] https://github.com/pytorch/pytorch/issues/107054
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,543 | 107,047 |
UNSTABLE pull / linux-bionic-py3.8-clang9 / test (dynamo)
|
module: ci, triaged, oncall: pt2, module: functorch, unstable
|
Multiple similar failures on trunk recently:
https://hud.pytorch.org/failure/RuntimeError%3A%20functorch%2Ftest_vmap%202%2F2%20failed
```
functorch/test_vmap 2/2 failed! Received signal: SIGIOT
2023-08-11T05:39:49.5188676Z Traceback (most recent call last):
2023-08-11T05:39:49.5189105Z File "test/run_test.py", line 1709, in <module>
2023-08-11T05:39:49.5191726Z main()
2023-08-11T05:39:49.5192146Z File "test/run_test.py", line 1678, in main
2023-08-11T05:39:49.5195732Z run_tests(general_tests, test_directory, options, general_failures)
2023-08-11T05:39:49.5196271Z File "test/run_test.py", line 1585, in run_tests
2023-08-11T05:39:49.5198237Z raise RuntimeError(failure.message)
2023-08-11T05:39:49.5198751Z RuntimeError: functorch/test_vmap 2/2 failed! Received signal: SIGIOT
2023-08-11T05:39:49.9126288Z
2023-08-11T05:39:49.9126735Z real 72m46.379s
2023-08-11T05:39:49.9127130Z user 233m22.949s
2023-08-11T05:39:49.9127446Z sys 8m15.861s
2023-08-11T05:39:49.9168650Z ##[error]Process completed with exit code 1.
```
The failure started to appear after #106852, but it's not consistent and looks unrelated to this PR.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 1 |
1,544 | 107,044 |
[RFC] Option to check eager if compile fails
|
Stale, module: dynamo, ciflow/inductor
|
Suppress errors is problematic because it swallows exceptions and keeps going. The question for this RFC is: do we want a regime wherein we can catch failures in the compiler, and then quickly check eager to see if it passes or fails?
There are two ways we can take this:
1) Replace suppress errors with this, and make the code here actually return the eager result - this acts as a fallback mechanism and a disable in one, without routing out to eval_frame.c OR reusing the buggy/failed compilation
OR
2) Agree that an exception is an exception, and just run eager to give the user signal on where the failure is (in eager AND compiled, or just in eager), and then always fail at this point.
I prefer the latter, but wanted to discuss.
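A rough sketch of option (2), just to make the shape of the proposal concrete (this is an assumption about how it could look, not the actual implementation):
```python
import torch

def compile_with_eager_check(fn, *args, **kwargs):
    """On a compiler exception, run eager once purely for signal, then fail."""
    compiled = torch.compile(fn)
    try:
        return compiled(*args, **kwargs)
    except Exception as compile_err:
        try:
            fn(*args, **kwargs)
        except Exception as eager_err:
            # Eager fails too -> the problem is (also) in user code.
            raise eager_err from compile_err
        # Eager passes -> the failure is specific to the compiler stack.
        raise compile_err
```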
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107044
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 3 |
1,545 | 107,041 |
I want to calculate the matrix multiplication of two Boolean matrices, but torch.mm will report an error. Is there any more efficient alternative?
|
triaged, module: boolean tensor
|
### π The feature, motivation and pitch
I want to compute the product of two Boolean matrices, but torch.mm reports an error for bool tensors. Is there an efficient alternative?
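For what it's worth, a possible workaround sketch (my assumption, not an official recommendation): cast to a numeric dtype, multiply, and threshold back to bool.
```python
import torch

a = torch.rand(128, 256) > 0.5   # bool matrix
b = torch.rand(256, 64) > 0.5    # bool matrix

# Boolean matmul as OR-of-ANDs: cast, multiply, then compare against zero.
c = (a.to(torch.float32) @ b.to(torch.float32)) > 0
print(c.dtype, c.shape)  # torch.bool, torch.Size([128, 64])
```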
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
1,546 | 107,040 |
Dynamo not handling a NamedTuple
|
high priority, triaged, oncall: pt2, module: dynamo, mlperf
|
### π Describe the bug
Context: https://discuss.pytorch.org/t/struggling-to-get-pytorch-fast-enough-to-use-in-public-competition/186015
Tried to poke around the NamedTupleVariable https://gist.github.com/msaroufim/24c0da855b28c8acbea4ddf2d628e977 to get this fixed in dynamo, but haven't managed to solve it yet. Opening this issue while I work on librispeech instead.
## Repro
Setup instructions here https://github.com/mlcommons/algorithmic-efficiency
```
python3 datasets/dataset_setup.py \
--data_dir=~/data \
--ogbg
```
```
python submission_runner.py \
--framework=pytorch \
--workload=ogbg \
--experiment_dir=/home/ubuntu/algorithmic-efficiency \
--experiment_name=ogbg_pytorch \
--submission_path=reference_algorithms/development_algorithms/ogbg/ogbg_pytorch/submission.py \
--tuning_search_space=reference_algorithms/development_algorithms/ogbg/tuning_search_space.json
```
### Error logs
Full logs: https://gist.github.com/msaroufim/36e68408c6cb156cf8cb38baabd17bd3
Error message: `torch._dynamo.exc.InternalTorchDynamoError: GraphsTuple.__new__() missing 6 required positional arguments: 'edges', 'receivers', 'senders', 'globals', 'n_node', and 'n_edge'`
Here's what a `GraphsTuple` is: https://github.com/deepmind/jraph/blob/master/jraph/_src/graph.py#L26
### Minified repro
_No response_
### Versions
n/a
cc @ezyang @gchanan @zou3519 @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,547 | 107,035 |
[TEST] Revert "NumPy support in torch.compile (#106211)"
|
Stale, ciflow/binaries_conda, module: inductor, module: dynamo, ciflow/inductor
|
This reverts commit a9dca53438ca5a3c71d3cbdd4d701b91a61fb7d1.
Windows nightlies are broken, testing to see if revert can fix this: https://github.com/pytorch/pytorch/actions/runs/5830091528/job/15810775269
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @anijain2305
| 3 |
1,548 | 107,030 |
Vectorized operation on quantized tensors returns wrong values (different rounding)
|
oncall: quantization, triaged
|
### π Describe the bug
The following code fails:
```
import numpy as np
import torch
X = torch.from_numpy(np.full(64+1, 514., dtype=np.float32))
(scale, zero_point, torch_type) = (1028.02, 255, torch.quint8)
assert X.is_contiguous(memory_format=torch.contiguous_format)
qX = torch.quantize_per_tensor(X, scale=scale, zero_point=zero_point,
dtype=torch_type)
f_min, f_max = 0.0, 1.0
q_min, q_max = torch.iinfo(torch_type).min, torch.iinfo(torch_type).max
output_scale = (f_max - f_min) / (q_max - q_min + 1.0)
qY = torch.ops.quantized.sigmoid(qX, output_scale=output_scale, output_zero_point=0)
print(qY)
assert qY[0] == qY[-1]
```
In particular the first 64 values are "0.5039" while the remainder are "0.5000". This happens for any remainder not fitting into chunks of 64 values.
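For comparison, a simple eager reference (a sketch of the kind of reference the failing test compares against: dequantize, apply sigmoid in float, requantize):
```python
# Continues the snippet above: qX, output_scale and torch_type are defined there.
ref = torch.quantize_per_tensor(
    torch.sigmoid(qX.dequantize()),
    scale=output_scale,
    zero_point=0,
    dtype=torch_type,
)
print(ref)
```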
Found by reducing an example of a failing test in `test_quantization`:
```
======================================================================
FAIL: test_sigmoid (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
<snip>
AssertionError: Quantized tensor-likes are not close!
Mismatched elements: 63 / 75 (84.0%)
Greatest absolute difference: 0.00390625 at index (0, 0, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.0078125 at index (0, 0, 1) (up to 1.3e-06 allowed) : sigmoid - quantized.sigmoid failed: (tensor([[[0.0000, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039]],
[[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039]],
[[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5039],
[0.5039, 0.5039, 0.5039, 0.5039, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000]]], size=(3, 5, 5),
dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine,
scale=0.00390625, zero_point=0) vs. tensor([[[0.0000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000]],
[[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000]],
[[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000]]], size=(3, 5, 5),
dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine,
scale=0.00390625, zero_point=0))
Falsifying example: test_sigmoid(
X=(array([[[-261630., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.]],
[[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.]],
[[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.],
[ 514., 514., 514., 514., 514.]]],
dtype=float32), (1028.0156862745098, 255, torch.quint8)),
self=<quantization.core.test_quantized_op.TestQuantizedOps testMethod=test_sigmoid>,
)
----------------------------------------------------------------------
Ran 942 tests in 656.469s
FAILED (failures=2, errors=1, skipped=72)
```
This seems to happen for all PyTorch versions so far and does not depend on the host CPU. I reproduced this even on ppc64le.
### Versions
> PyTorch version: 2.0.1+cu117
> Is debug build: False
>
> OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
> GCC version: (GCC) 11.3.0
> Clang version: Could not collect
> CMake version: version 3.27.1
> Libc version: glibc-2.17
>
> Python version: 3.10.4 (main, Oct 6 2022, 14:14:40) [GCC 11.3.0] (64-bit runtime)
> Python platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-glibc2.17
>
> Is XNNPACK available: True
>
> CPU:
> Architecture: x86_64
> Model name: AMD EPYC 7352 24-Core Processor
> Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
>
> Versions of relevant libraries:
> [pip3] numpy==1.25.2
> [pip3] torch==2.0.1
> [pip3] triton==2.0.0
> [conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 3 |
1,549 | 107,029 |
Doc for dtensor?
|
oncall: distributed, module: dtensor
|
### π The feature, motivation and pitch
I noticed that torch DTensor has been released. Is there official, standard documentation for the DTensor-related APIs?
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 6 |
1,550 | 107,026 |
a bug about tensor stride
|
triaged, module: viewing and reshaping
|
### π Describe the bug
```
>>> a=torch.rand(4,4)
>>> a.T.unsqueeze(0).transpose(-2,-1).contiguous().stride()
(4, 4, 1)
>>> a.T.unsqueeze(0).transpose(-2,-1).contiguous().is_contiguous()
True
>>> a.T.unsqueeze(0).transpose(-2,-1).stride()
(4, 4, 1)
>>> a.T.unsqueeze(0).contiguous().transpose(-2,-1).contiguous().stride()
(16, 4, 1)
>>> torch.__version__
'2.0.0a0+gitc263bd4'
```
The stride of `a.T.unsqueeze(0).transpose(-2,-1).contiguous()` should be (16, 4, 1) instead of (4, 4, 1).
OS: Ubuntu 18.04
### Versions
'2.0.0a0+gitc263bd4'
| 0 |
1,551 | 107,023 |
[feature request] [onnx] Support QuantLinear/DequantLinear float16 inputs (opset19 and maybe "backport"-support them for opset17)
|
module: onnx, triaged, enhancement
|
### π The feature, motivation and pitch
ONNX opset 19 added standardized support for float16 inputs to QuantizeLinear/DequantizeLinear: https://github.com/onnx/onnx/blob/main/docs/Changelog.md#QuantizeLinear-19 and https://github.com/onnx/onnx/blob/main/docs/Changelog.md#dequantizelinear-19
When hacking the export to force float16, I got `UserWarning: The exported ONNX model failed ONNX shape inference.The model will not be executable by the ONNX Runtime.If this is unintended and you believe there is a bug,please report an issue at https://github.com/pytorch/pytorch/issues.Error reported by strict ONNX shape inference: [ShapeInferenceError] (op_type:QuantizeLinear, node name: /qkv/_input_quantizer/QuantizeLinear): x typestr: T1, has unsupported type: tensor(float16) (Triggered internally at /opt/pytorch/pytorch/torch/csrc/jit/serialization/export.cpp:1410.)`
Related: https://github.com/pytorch/pytorch/issues/102430
### Alternatives
_No response_
### Additional context
_No response_
| 3 |
1,552 | 107,021 |
torchrun: RendezvousConnectionError when using C10d on multiple nodes
|
oncall: distributed, oncall: r2p
|
### π Describe the bug
```
torchrun \
    --nnodes=1:3 \
    --nproc_per_node=4 \
    --max_restarts=3 \
    --rdzv_id=1 \
    --rdzv_backend=c10d \
    --rdzv_endpoint="192.0.0.1:1234" \
    train.py
```
Output:
```
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[E socket.cpp:860] [c10d] The client socket has timed out after 60s while trying to connect to 192.0.0.1:1234
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
```
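One thing worth checking (a debugging sketch, assuming 192.0.0.1:1234 is the address the rank-0 node actually listens on): from a worker node, verify that the C10d store endpoint is reachable at all.
```python
import datetime
from torch.distributed import TCPStore

# Raises if the host/port cannot be reached within the timeout.
store = TCPStore("192.0.0.1", 1234, is_master=False,
                 timeout=datetime.timedelta(seconds=10))
print("connected to the C10d store")
```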
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @dzhulgakov
| 1 |
1,553 | 107,018 |
Memory tracker does not report the module name correctly.
|
Stale, topic: not user facing
|
If a module contains a submodule followed by some functional method, the name of the functional method is mistakenly labeled as being within the submodule; it should be attributed to the parent module. This can be shown in the following example:
```
import torch
from torch.distributed._tools import MemoryTracker


class MyModel(torch.nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.linear = torch.nn.Linear(10, 10)

    def forward(self, input):
        res = self.linear(input)
        res = torch.relu(res)
        res = res.softmax(-1)
        return res


class Parent(torch.nn.Module):
    def __init__(self):
        super(Parent, self).__init__()
        self.child = MyModel()

    def forward(self, input):
        return self.child(input)


def run():
    model = Parent()
    memory_tracker = MemoryTracker()
    memory_tracker.start_monitor(model)
    input = torch.randn(5, 10)
    res = model(input)
    memory_tracker.stop()
    allocated = memory_tracker.memories_allocated
    for i in range(0, len(allocated)):
        print(f"{allocated[i][0]}\t{allocated[i][1]}")


if __name__ == "__main__":
    run()
```
In current master, the output is:
```
.randn.default_0 0.0
child.linear.forward.t.default_0 0.0
child.linear.forward.addmm.default_0 0.0
child.linear.forward.relu.default_0 0.0
child.linear.forward._softmax.default_0 0.0
```
It shows that relu and _softmax are part of the linear. However, they should be part of child.
With the PR, the output is:
```
.randn.default_0 0.0
child.linear.forward.t.default_0 0.0
child.linear.forward.addmm.default_0 0.0
child.forward.relu.default_0 0.0
child.forward._softmax.default_0 0.0
```
The names are shown correctly.
| 4 |
1,554 | 107,015 |
cov to onnx error
|
module: onnx, module: windows, triaged
|
### π Describe the bug
code:
```python
import torch
import torchaudio
from torch import nn
from torch import onnx
class DataCov(nn.Module):
    def __init__(self):
        super(DataCov, self).__init__()
        self.transform = nn.Sequential(
            torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
        )

    def forward(self, x1):
        return self.transform(x1)


def export_datacov_onnx(path):
    model = DataCov()
    model.eval()
    model = torch.jit.script(model)
    x = torch.randn((1, 48000 * 12), requires_grad=True)
    args = (x,)
    torch.onnx.dynamo_export(model, args, path, export_params=True, opset_version=17)


if __name__ == '__main__':
    export_datacov_onnx('DataCov.onnx')
```
error:
```
C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\onnx\_internal\exporter.py:112: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
Traceback (most recent call last):
File "C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\onnx\_internal\exporter.py", line 1006, in dynamo_export
).export()
^^^^^^^^
File "C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\onnx\_internal\exporter.py", line 810, in export
graph_module = self.options.fx_tracer.generate_fx(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\onnx\_internal\fx\dynamo_graph_extractor.py", line 198, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\_dynamo\eval_frame.py", line 1116, in inner
check_if_dynamo_supported()
File "C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\_dynamo\eval_frame.py", line 538, in check_if_dynamo_supported
raise RuntimeError("Windows not yet supported for torch.compile")
RuntimeError: Windows not yet supported for torch.compile
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\dell\miniconda3\envs\mos_cov_to_android\Lib\site-packages\torch\onnx\_internal\exporter.py", line 1016, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at report_dynamo_export.sarif. Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
python-BaseException
[report_dynamo_export.zip](https://github.com/pytorch/pytorch/files/12318966/report_dynamo_export.zip)
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230810+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (专业版)
GCC version: (Rev5, Built by MSYS2 project) 13.1.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:47:18) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.1.0.dev20230810+cpu
[pip3] torchaudio==2.1.0.dev20230810+cpu
[pip3] torchvision==0.16.0.dev20230810+cpu
[conda] blas 1.0 mkl defaults
[conda] mkl 2023.1.0 h6b88ed4_46357 defaults
[conda] mkl-service 2.4.0 py311h2bbff1b_1 defaults
[conda] mkl_fft 1.3.6 py311hf62ec03_1 defaults
[conda] mkl_random 1.2.2 py311hf62ec03_1 defaults
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpy-base 1.25.0 py311hd01c5d8_0 defaults
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torch 2.1.0.dev20230810+cpu pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230810+cpu pypi_0 pypi
[conda] torchvision 0.16.0.dev20230810+cpu pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
1,555 | 107,014 |
Create fastpath backend context manager, similar to SDPA kernel backend manager
|
fb-exported, Stale, release notes: nn, release notes: distributed (fsdp)
|
Summary: Create a fastpath backend context manager, similar to the SDPA kernel backend manager. This gives us control over reproducibility by letting us choose which backend/kernel will be selected.
PRs like #106824 demonstrate the need for controlling which kernels are used in various situations; this gives users a general solution to the problem.
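For reference, the existing SDPA pattern this mirrors (usage as of torch 2.0/2.1; the exact API of the new fastpath manager is not shown here):
```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# Pin SDPA to a single backend so results are reproducible run to run.
with torch.backends.cuda.sdp_kernel(enable_flash=False,
                                    enable_math=True,
                                    enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)
```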
cc: @drisspg
Test Plan: sandcastle, github
Differential Revision: D48256968
| 20 |
1,556 | 107,013 |
Add flake8-bugbear code B007
|
module: mkldnn, open source, release notes: quantization, release notes: releng, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor
|
Fixes flake8-bugbear code B007, from #106571
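For context, B007 flags loop control variables that are never used inside the loop body; a small illustration:
```python
# Flagged by B007: `i` is never used in the loop body.
for i in range(3):
    print("tick")

# Accepted: the underscore prefix marks the variable as intentionally unused.
for _i in range(3):
    print("tick")
```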
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 5 |
1,557 | 107,011 |
max_pool3d_with_indices_backward_cuda does not have a deterministic implementation
|
triaged, module: determinism, module: pooling
|
### π The feature, motivation and pitch
The operation "max_pool3d_with_indices_backward_cuda" does not have a deterministic implementation in the current context. I would like to have a deterministic implementation so that I can conduct fair algorithm comparison experiments.
### Alternatives
_No response_
### Additional context
The experiment cannot be reproduced.
cc @mruberry @kurtamohler
| 0 |
1,558 | 107,009 |
[WIP][CI Test] Inline through skipfiles
|
Stale, ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107009
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 3 |
1,559 | 107,006 |
Apply fusion more aggressively in NAdam and Adagrad compilation
|
good first issue, triaged, oncall: pt2, module: inductor
|
### π The feature, motivation and pitch
Currently NAdam compiles to > 5 kernels and Adagrad does as well. See https://github.com/pytorch/pytorch/blob/4df84c3b4d93ea53c9ca169f32a0dc4619bee797/test/inductor/test_compiled_optimizers.py#L4
for current kernel counts. Ideally these should fully fuse, but likely there are issues due to the presence of mutation. A good place to start is https://github.com/pytorch/pytorch/blob/4df84c3b4d93ea53c9ca169f32a0dc4619bee797/torch/_inductor/scheduler.py#L4
Specifically https://github.com/pytorch/pytorch/blob/4df84c3b4d93ea53c9ca169f32a0dc4619bee797/torch/_inductor/scheduler.py#L644
Ideally the rules should be able to be modified to soundly allow these to fuse fully.
Contact @mlazos for more details
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 7 |
1,560 | 107,005 |
Dynamic shapes support for inductor foreach codegen
|
good first issue, triaged, oncall: pt2, module: dynamic shapes, module: inductor
|
### π The feature, motivation and pitch
Currently foreach op compilation doesn't support dynamic shapes.
See the restriction here: https://github.com/pytorch/pytorch/blob/4df84c3b4d93ea53c9ca169f32a0dc4619bee797/torch/_inductor/lowering.py#L419
Since this feature was originally intended for optimizers, dynamic shapes aren't strictly necessary, but in order for this codegen to be used elsewhere in the stack (think input concatenation, more generalized horizontal fusion) dynamic shapes support is necessary.
### Alternatives
_No response_
### Additional context
A good place to get started is here: https://github.com/pytorch/pytorch/blob/4df84c3b4d93ea53c9ca169f32a0dc4619bee797/torch/_inductor/codegen/triton_foreach.py#L142
This file is what handles codegen for the foreach operators. At a high level additional runtime args and a more sophisticated kernel launch grid calculation will be needed to take into account runtime sizes vs static sizes. Contact @mlazos for more details
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 4 |
1,561 | 107,003 |
tmp test
|
Stale, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #107003
* #107004
* #106912
* #106911
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 2 |
1,562 | 107,002 |
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::full but it isn't a special case. Argument types: int[], bool, NoneType, NoneType, Device, bool,
|
module: onnx, triaged
|
### π Describe the bug
Trying to export a model with ONNX.
torch.onnx.export fails with
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::full but it isn't a special case. Argument types: int[], bool, NoneType, NoneType, Device, bool,
I can post a reproduction if desired.
Full trace:
```
File <@beartype(torch.onnx.utils.export) at 0x7fcaa745cdc0>:348, in export(__beartype_func, __beartype_conf, __beartype_get_violation, __beartype_object_140508366436928, __beartype_object_94890050286208, __beartype_object_140508366424384, __beartype_object_94890048963904, __beartype_object_94889960377152, __beartype_object_94889991698880, __beartype_getrandbits, __beartype_object_94890048956656, __beartype_object_140510018072896, __beartype_object_94889991692304, __beartype_object_140510276498368, __beartype_object_94889991918032, *args, **kwargs)
File ~/venv/gpu/lib/python3.10/site-packages/torch/onnx/utils.py:506, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions)
188 @_beartype.beartype
189 def export(
190 model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],
(...)
206 export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,
207 ) -> None:
208 r"""Exports a model into ONNX format.
209
210 If ``model`` is not a :class:`torch.jit.ScriptModule` nor a
(...)
503 All errors are subclasses of :class:`errors.OnnxExporterError`.
504 """
--> 506 _export(
507 model,
508 args,
509 f,
510 export_params,
511 verbose,
512 training,
513 input_names,
514 output_names,
515 operator_export_type=operator_export_type,
516 opset_version=opset_version,
517 do_constant_folding=do_constant_folding,
518 dynamic_axes=dynamic_axes,
519 keep_initializers_as_inputs=keep_initializers_as_inputs,
520 custom_opsets=custom_opsets,
521 export_modules_as_functions=export_modules_as_functions,
522 )
File ~/venv/gpu/lib/python3.10/site-packages/torch/onnx/utils.py:1548, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, onnx_shape_inference, export_modules_as_functions)
1545 dynamic_axes = {}
1546 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
-> 1548 graph, params_dict, torch_out = _model_to_graph(
1549 model,
1550 args,
1551 verbose,
1552 input_names,
1553 output_names,
1554 operator_export_type,
1555 val_do_constant_folding,
1556 fixed_batch_size=fixed_batch_size,
1557 training=training,
1558 dynamic_axes=dynamic_axes,
1559 )
1561 # TODO: Don't allocate a in-memory string for the protobuf
1562 defer_weight_export = (
1563 export_type is not _exporter_states.ExportTypes.PROTOBUF_FILE
1564 )
File <@beartype(torch.onnx.utils._model_to_graph) at 0x7fcaa745dbd0>:11, in _model_to_graph(__beartype_func, __beartype_conf, __beartype_get_violation, __beartype_object_94890049168080, *args, **kwargs)
File ~/venv/gpu/lib/python3.10/site-packages/torch/onnx/utils.py:1113, in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes)
1110 args = (args,)
1112 model = _pre_trace_quant_model(model, args)
-> 1113 graph, params, torch_out, module = _create_jit_graph(model, args)
1114 params_dict = _get_named_param_dict(graph, params)
1116 try:
File ~/venv/gpu/lib/python3.10/site-packages/torch/onnx/utils.py:989, in _create_jit_graph(model, args)
984 graph = _C._propagate_and_assign_input_shapes(
985 graph, flattened_args, param_count_list, False, False
986 )
987 return graph, params, torch_out, None
--> 989 graph, torch_out = _trace_and_get_graph_from_model(model, args)
990 _C._jit_pass_onnx_lint(graph)
991 state_dict = torch.jit._unique_state_dict(model)
File ~/venv/gpu/lib/python3.10/site-packages/torch/onnx/utils.py:893, in _trace_and_get_graph_from_model(model, args)
891 prev_autocast_cache_enabled = torch.is_autocast_cache_enabled()
892 torch.set_autocast_cache_enabled(False)
--> 893 trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
894 model,
895 args,
896 strict=False,
897 _force_outplace=False,
898 _return_inputs_states=True,
899 )
900 torch.set_autocast_cache_enabled(prev_autocast_cache_enabled)
902 warn_on_static_input_change(inputs_states)
File ~/venv/gpu/lib/python3.10/site-packages/torch/jit/_trace.py:1268, in _get_trace_graph(f, args, kwargs, strict, _force_outplace, return_inputs, _return_inputs_states)
1266 if not isinstance(args, tuple):
1267 args = (args,)
-> 1268 outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
1269 return outs
File ~/venv/gpu/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/venv/gpu/lib/python3.10/site-packages/torch/jit/_trace.py:127, in ONNXTracedModule.forward(self, *args)
124 else:
125 return tuple(out_vars)
--> 127 graph, out = torch._C._create_graph_by_tracing(
128 wrapper,
129 in_vars + module_state,
130 _create_interpreter_name_lookup_fn(),
131 self.strict,
132 self._force_outplace,
133 )
135 if self._return_inputs:
136 return graph, outs[0], ret_inputs[0]
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::full but it isn't a special case. Argument types: int[], bool, NoneType, NoneType, Device, bool,
Candidates:
aten::full.names(int[] size, Scalar fill_value, *, str[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
aten::full(SymInt[] size, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
aten::full.names_out(int[] size, Scalar fill_value, *, str[]? names, Tensor(a!) out) -> Tensor(a!)
aten::full.out(SymInt[] size, Scalar fill_value, *, Tensor(a!) out) -> Tensor(a!)
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 5083.3979
CPU min MHz: 2200.0000
BogoMIPS: 6800.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] audiolm-pytorch==1.2.27
[pip3] classifier-free-guidance-pytorch==0.2.2
[pip3] clip-anytorch==2.5.2
[pip3] ema-pytorch==0.2.3
[pip3] flake8==6.1.0
[pip3] functorch==2.0.0
[pip3] lion-pytorch==0.1.2
[pip3] msgpack-numpy==0.4.8
[pip3] mypy==0.981
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.0
[pip3] open-clip-torch==2.9.1
[pip3] pytorch-lightning==1.8.6
[pip3] pytorch-metric-learning==2.3.0
[pip3] rotary-embedding-torch==0.2.6
[pip3] soundstorm-pytorch==0.0.22
[pip3] spear-tts-pytorch==0.0.14
[pip3] torch==2.0.1
[pip3] torch-audiomentations==0.11.0
[pip3] torch-pitch-shift==1.2.4
[pip3] torchaudio==2.0.2
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.0
[pip3] torchcrepe==0.0.21
[pip3] torchdata==0.6.1
[pip3] torchdiffeq==0.2.3
[pip3] torchdyn==1.0.4
[pip3] torchmetrics==0.11.4
[pip3] torchode==0.1.8
[pip3] torchsde==0.2.5
[pip3] torchtext==0.15.2
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[pip3] vector-quantize-pytorch==1.6.30
[pip3] voicebox-pytorch==0.0.12
[conda] Could not collect
| 0 |
1,563 | 106,999 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,564 | 106,996 |
Only check for fusions within a node distance of 64
|
Stale, ciflow/trunk, module: inductor, ciflow/inductor, release notes: inductor
|
After Animesh's change to help with peak memory, nodes farther than 64 edges apart in the graph won't be fused anyway; this ensures the scheduler doesn't add those nodes to the list of possible fusions. Makes a fusion pass O(N) instead of O(N^2).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 2 |
1,565 | 106,995 |
Use aot_compile instead of compile_fx_aot in _export.aot_compile
|
triaged, open source, Stale, topic: not user facing, module: export
|
The `torch._inductor` module provides the function [aot_compile()](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/__init__.py#L30), which internally uses `_inductor.compile_fx.compile_fx_aot()`.
Since `torch._inductor` exposes the AOT compilation function `_inductor.aot_compile()`, we are supposed to use it instead of the module-internal function `_inductor.compile_fx.compile_fx_aot()`.
Also, `_inductor.aot_compile()` has logic to process the `compile_fx_aot()` output to get the actual lib path.
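A sketch of the intended call-site change (the wrapper below is illustrative, not the real `_export.aot_compile` signature):
```python
import torch

def _aot_compile_sketch(gm: torch.fx.GraphModule, example_inputs) -> str:
    # before: so_path = torch._inductor.compile_fx.compile_fx_aot(gm, example_inputs)
    # after: go through the public AOT entry point, which also resolves the
    # actual path of the produced shared library.
    return torch._inductor.aot_compile(gm, example_inputs)
```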
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 5 |
1,566 | 106,991 |
Add cutlass as an alternative backend of PT2 Inductor
|
triaged, oncall: pt2, module: inductor
|
### π The feature, motivation and pitch
### Motivation
[Cutlass](https://github.com/NVIDIA/cutlass) is an efficient template library for compute-heavy GPU operations like Gemm, Conv and others. It has decent support for H100. Some important kernels (e.g. Flash attention v2, XFormer attention) are implemented based on Cutlass, too. It would be a good complement to Triton to generate Inductor fused kernels for some compute heavy operations.
### Proposal
The proposal is to add Cutlass as an alternative backend of Inductor, as demonstrated in the prototype PR (https://github.com/pytorch/pytorch/pull/106607). As shown in the PR:
1) In torch/_inductor/config.py, "Cutlass" could be configured as one of the Inductor max-autotune backend through `max_autotune_gemm_backends`. (In the future, we may want to extend it to cover other cases beyond gemm.) This option can be set via options in `torch.compile()` (https://pytorch.org/docs/stable/generated/torch.compile.html#torch-compile).
2) Once "Cutlass" is added as a candidate, max-autotune will also tune and select cutlass kernels, together with Aten and Triton kernels.
3) We'll also utilize Cutlass epilogue visitor to support flexible gemm and epilogue fusions in later PRs. More features will come in the future.
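To make proposal item (1) concrete, a sketch of how a user could opt in (the "CUTLASS" value is the assumption here; `max_autotune` and `max_autotune_gemm_backends` are existing Inductor config knobs):
```python
import torch

model = torch.nn.Linear(1024, 1024).cuda().half()
compiled = torch.compile(
    model,
    options={
        "max_autotune": True,
        "max_autotune_gemm_backends": "ATEN,TRITON,CUTLASS",
    },
)
out = compiled(torch.randn(64, 1024, device="cuda", dtype=torch.float16))
```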
### Release / Dependency
1) Pytorch release: to properly release cutlass together with Pytorch, we need to find a way to pack cutlass into Pytorch package distribution. This includes both C++ header files (third_party/cutlass/include) as well as Python scripts and modules (third_party/cutlass/tools/library/scripts).
2) NVCC dependency: The prototype implementation relies on NVCC. We need to figure out ways to switch to NVRTC. Cutlass team has a proposal to use NVCC to do pre-processing at Pytorch packaging stage, and make end-users only rely on NVRTC. This needs to be discussed in details.
3) Meta internal deployment: We need to figure out a way to properly deploy cutlass dependency in Meta internal env.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @atalman @malfet @alband @ptrblck @seemethere @jansel @gottbrath
cc @jackkosaian @mnicely @hwu36
### Alternatives
_No response_
### Additional context
_No response_
| 5 |
1,567 | 106,989 |
`ray` multiprocessing interference by torch import
|
module: multiprocessing, triaged
|
### π Describe the bug
torch import order can disable `ray` multiprocessing. Ray will use all `num_cpus` cores if `torch` is imported before `wandb` (expected behavior), but only 1 core if `wandb` is imported before `torch` (unexpected behavior).
I confirmed this by looking at the system monitor.
For example, the following code will only utilize 1 core even though 12 have been specified:
```python
import ray
import wandb
import torch
@ray.remote
class TestWorker:
    def __init__(self):
        pass

    def f(self):
        x = 0
        for i in range(100000000):
            for j in range(100000000):
                x += i * j
num_cpus = 12
ray.init(num_cpus=num_cpus)
workers = [TestWorker.remote() for _ in range(num_cpus)]
print(f"n workers: {len(workers)}")
object_ids = [w.f.remote() for w in workers]
ray.get(object_ids)
```
However when I move `import torch` up by 1 line, it will use all 12 cores:
```python
import ray
import torch # this line is moved up 1
import wandb
```
This happens only when torch is installed through Conda:
`- pytorch=2.0.1=py3.10_cuda11.8_cudnn8.7.0_0`
When just installing torch==2.0.1 through pip the issue is resolved.
environment .yaml reference:
```yaml
name: env1
channels:
- pytorch
- nvidia
- conda-forge
- defaults
dependencies:
- pytorch=2.0.1=py3.10_cuda11.8_cudnn8.7.0_0 # including this causes described issues
- pip:
# - torch==2.0.1 # including this will fix it
- ray==2.6.2
- wandb==0.15.8
```
### Versions
ollecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 470.199.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 33
Model name: AMD Ryzen 9 5900X 12-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7385.64
Virtualization: AMD-V
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 6 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.3 py310h5f9d8c6_1
[conda] numpy-base 1.24.3 py310hb5e798b_1
[conda] pytorch 2.0.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu118 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu118 pytorch
cc @VitalyFedyunin
| 2 |
1,568 | 106,981 |
Reland "Make adding buffers more like adding parameters (#104069)" (take #2)
|
ciflow/trunk, release notes: quantization, release notes: nn, topic: new features, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor, module: export
|
Merged in forward fix from https://github.com/pytorch/pytorch/pull/106783, not sure whether we are relanding in this state but opening for CI
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106981
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
Differential Revision: [D49443325](https://our.internmc.facebook.com/intern/diff/D49443325)
| 5 |
1,569 | 106,979 |
Sdpa higher order op
|
Stale, release notes: fx, module: inductor, module: dynamo, ciflow/inductor
|
@Chillee I am going to start to address some of the comments in the original PR: #105596
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @anijain2305
| 3 |
1,570 | 106,973 |
Facing error while using onnx from scatterelements
|
module: onnx, triaged
|
### π Describe the bug
I am trying to convert a PyTorch model (Conv-TasNet) to ONNX and then to TFLite. Link: https://github.com/kaituoxu/Conv-TasNet
Script to convert the torch model to ONNX:
```
import onnx
import torch
from conv_tasnet import ConvTasNet
def convertoOnnx():
    device = torch.device('cpu')
    # Create model.
    model = ConvTasNet(256, 20, 256, 512, 3, 8, 4,
                       2, norm_type="gLN", causal=0,
                       mask_nonlinear="relu")
    model.to(device)
    dummy_input = {'mixtures': torch.ones(256, 20).to(torch.device('cpu'))}
    onnx_model_path = 'conv_tasnet.onnx'
    torch.onnx.export(model, dummy_input["mixtures"], onnx_model_path, verbose=True, opset_version=12)


def main():
    convertoOnnx()


if __name__ == "__main__":
    main()
```
ONNX model link: https://drive.google.com/file/d/189UHTs9OvDiNBc6BiZDG5zde2zSyTe6E/view?usp=sharing
When I tried to convert the ONNX model to TFLite, I faced the issue below:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__ConcatV2_N_4_device_/job:localhost/replica:0/task:0/device:CPU:0}} ConcatOp : Ranks of all input tensors should match: shape[0] = [256,2,2,10,1] vs. shape[2] = [2,1,256,2,2,10,1] [Op:ConcatV2] name: concat
ERROR: input_onnx_file_path: conv_tasnet.onnx
ERROR: onnx_op_name: /decoder/ScatterElements
```
The author of the onnx2tf repo says that it is a bug in PyTorch. Could you kindly resolve it?
Additionally, even if we run inference with the standard onnxruntime, we face the error below:
```
sit4onnx -if conv_tasnet.onnx -oep cpu
INFO: file: conv_tasnet.onnx
INFO: providers: ['CPUExecutionProvider']
INFO: input_name.1: onnx::Unsqueeze_0 shape: [256, 20] dtype: float32
Traceback (most recent call last):
File "/home/b920405/.local/bin/sit4onnx", line 8, in <module>
sys.exit(main())
File "/home/b920405/.local/lib/python3.10/site-packages/sit4onnx/onnx_inference_test.py", line 506, in main
final_results = inference(
File "/home/b920405/.local/lib/python3.10/site-packages/sit4onnx/onnx_inference_test.py", line 357, in inference
results = onnx_session.run(
File "/home/b920405/.local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 217, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 :
INVALID_ARGUMENT : Non-zero status code returned while running ScatterElementsnode.
Name: '/decoder/ScatterElements'
Status Message: Indices and updates must have the same rank
```
This is the issue where the author of onnx2tf said that it is a bug in PyTorch: https://github.com/PINTO0309/onnx2tf/issues/447
### Versions
I tested it in both environments env1 and env2
pytorch env 1
[pip3] numpy==1.21.6
[pip3] torch==1.13.1
[pip3] torchprofile==0.0.2
[pip3] torchscan==0.1.1
[pip3] torchstat==0.0.7
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.6.1
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.13.1 pypi_0 pypi
pytorch env 2
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.6
[pip3] torch==2.0.1
[pip3] torchlibrosa==0.0.9
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-lightning 2.0.6 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchlibrosa 0.0.9 pypi_0 pypi
[conda] torchmetrics 1.0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 0 |
1,571 | 106,972 |
RuntimeError: _Map_base::at when exporting squeeze
|
module: onnx, module: autograd, triaged
|
### π Describe the bug
`RuntimeError: _Map_base::at` when exporting a simple squeeze function.
```python
import torch
class MySqueeze(torch.autograd.Function):
    @staticmethod
    def symbolic(g, input, dim):
        return g.op('Squeeze', input, axes_i=[dim])

    @staticmethod
    def forward(ctx, input, dim):
        return input.squeeze(dim)


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, dim):
        return MySqueeze.apply(x, dim)


input = torch.randn(1, 3)
dim = torch.tensor(0)
model = Model()
print(model(input, dim))

with torch.no_grad():
    input = torch.randn(1, 3)
    dim = torch.tensor(0)
    torch.onnx.export(
        model,
        (input, dim),
        'model.onnx',
    )
```
And got the following error:
```
File "/home/eugene/training_extensions/demo/model.py", line 31, in <module>
torch.onnx.export(
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 1111, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 987, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/onnx/utils.py", line 891, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/jit/_trace.py", line 1184, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/eugene/training_extensions/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/eugene/training_extensions/demo/model.py", line 20, in forward
return MySqueeze.apply(x, dim)
RuntimeError: _Map_base::at
```
### Versions
```
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 3000 with Max-Q Design
Nvidia driver version: 528.49
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 126
Model name: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
Stepping: 5
CPU MHz: 1497.598
BogoMIPS: 2995.19
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 8 MiB
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] pytorch-lightning==1.9.2
[pip3] pytorchcv==0.0.67
[pip3] torch==1.13.1+cu117
[pip3] torchaudio==0.13.1+cu117
[pip3] torchmetrics==0.10.3
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1+cu117
[conda] Could not collect
```
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 3 |
1,572 | 106,971 |
Found two conflicting CUDA installs
|
module: build, triaged
|
### π Describe the bug
When I build FBGEMM in our CUDA 12.1 Docker image, it fails with this error message:
```
CMake Error at /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:65 (mess
age):
Found two conflicting CUDA installs:
V12.1.66 in '/usr/local/cuda/include' and
V12.1.66 in '/usr/local/cuda-12.1/include'
Call Stack (most recent call first):
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:87 (include)
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:113 (find_package)
```
Workaround is to run with env var CUDA_PATH=/usr/local/cuda-12.1
### Versions
main
cc @malfet @seemethere
| 5 |
1,573 | 106,967 |
ONNX Model Producing Different Results Compared to Original PyTorch and JIT Traced Model
|
module: onnx, triaged
|
### π Describe the bug
I have a PyTorch model that I want to convert to ONNX format. I have managed to produce a traced version of the model with `torch.jit.trace` which works correctly, but exporting this to ONNX produces a model that gives back different (wrong) outputs.
I have attached the serialised traced model in this [zip folder](https://github.com/pytorch/pytorch/files/12313364/postcvpr.zip). The original PyTorch model is a slightly modified version of [HOOD](https://github.com/Dolorousrtur/HOOD) to get tracing working (inputs/outputs turned into flat `tuple`s of `Tensor`s instead of `HeteroData` batches) etc...
See https://gist.github.com/NathanielB123/75fa9229f49ccdf4af3628abf73c4d12 for the script I used to export to `onnx` and print a sample of the outputs.
The full terminal output after running this code is:
```
/home/nathaniel/miniconda3/envs/hood/lib/python3.10/site-packages/torch/onnx/utils.py:825: UserWarning: no signature found for <torch.ScriptMethod object at 0x7fb25f51bc40>, skipping _decide_input_format
warnings.warn(f"{e}, skipping _decide_input_format")
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
0:
[0.11268847 0.11745856 1.171965 ]
[-0.09882738 -0.75298095 0.57629055]
500:
[ 0.36263114 -0.09879217 0.8864989 ]
[-0.21008204 -0.9361933 0.81566393]
1000:
[-0.6373062 0.28653488 1.4825789 ]
[-1.2088397 -0.39165828 1.3485816 ]
1500:
[-0.17704955 0.5167098 1.6783938 ]
[ 0.0593356 -0.71061885 1.5147011 ]
2000:
[-0.41957054 0.12701258 1.0791388 ]
[-0.35199475 -0.79554534 0.8017714 ]
2500:
[-0.5762653 -0.23102543 0.46685016]
[-1.1951463 -0.57939434 0.12812436]
3000:
[-0.01403345 -0.07792548 0.9337326 ]
[-0.79627764 -0.7208227 0.6344711 ]
3500:
[-0.6358214 0.34578276 1.105367 ]
[-0.8967222 -1.1001766 0.4269399]
4000:
[-0.20787726 0.01798273 1.1856971 ]
[-0.39938125 -0.7097303 0.6083367 ]
```
As you can see, the vectors outputted by each model are (very) different.
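For anyone trying to reproduce this, a hedged comparison sketch (the file names and input shapes below are placeholders, not the actual repro inputs): run the traced model and the exported ONNX model on the same inputs and compare them with `torch.testing.assert_close` rather than eyeballing rows.
```python
import onnxruntime as ort
import torch

traced = torch.jit.load("postcvpr_traced.pt")            # placeholder path to the traced model
session = ort.InferenceSession("postcvpr_traced.onnx")   # placeholder path to the exported model

# Placeholder inputs; the real model takes a flat tuple of tensors with model-specific shapes.
inputs = tuple(torch.randn(4000, 3) for _ in session.get_inputs())

torch_out = traced(*inputs)
if isinstance(torch_out, torch.Tensor):
    torch_out = (torch_out,)

feed = {i.name: t.numpy() for i, t in zip(session.get_inputs(), inputs)}
ort_out = session.run(None, feed)

for t, o in zip(torch_out, ort_out):
    torch.testing.assert_close(t, torch.from_numpy(o), rtol=1e-3, atol=1e-4)
```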
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.1.0-8ubuntu1~22.04) 13.1.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35
Python version: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro T2000 with Max-Q Design
Nvidia driver version: 531.41
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-10855M CPU @ 2.80GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 2
BogoMIPS: 5616.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch3d==0.7.4
[pip3] torch==2.0.1
[pip3] torch-cluster==1.6.1
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1
[pip3] torch-sparse==0.6.17
[pip3] torchaudio==2.0.2
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] mxnet-mkl 1.6.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch3d 0.7.4 py310_cu117_pyt201 pytorch3d
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-cluster 1.6.1 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu117 pytorch
[conda] triton 2.0.0 pypi_0 pypi
```
| 2 |
1,574 | 106,965 |
Enable Mypy Checking in torch/_inductor/kernel/conv.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/kernel/conv.py
After Fix:
mypy --follow-imports=skip torch/_inductor/kernel/conv.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,575 | 106,959 |
`tensor.repeat` quirks: has no `torch.` variant, no `out=` variant, no inplace variant | `torch.tile` also does not have `out=` variant and uses `dims=` instead of `dim=`
|
triaged, module: viewing and reshaping, module: python frontend
|
### π Describe the bug
See the title :)
Given that https://github.com/pytorch/pytorch/issues/31980 is moving slowly, we are sometimes obliged to use `tensor.repeat` in loops, which revealed these `tensor.repeat` quirks.
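To make the quirks concrete, a small illustration (my own snippet; the commented-out lines are the variants that do not exist as of this report):
```python
import torch

x = torch.arange(3)

y = x.repeat(2)               # the method exists
# torch.repeat(x, 2)          # no torch.-level variant
# x.repeat_(2)                # no in-place variant
# x.repeat(2, out=y)          # no out= variant

z = torch.tile(x, dims=(2,))  # torch.tile exists, but takes dims= (not dim=) ...
# torch.tile(x, (2,), out=z)  # ... and also has no out= variant
```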
### Versions
nightly
cc @albanD
| 6 |
1,576 | 106,957 |
Enable Mypy Checking in torch/_inductor/fx_passes/mkldnn_fusion.py
|
open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/fx_passes/mkldnn_fusion.py
After Fix:
mypy --follow-imports=skip torch/_inductor/fx_passes/mkldnn_fusion.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
1,577 | 106,956 |
Readily available python wheels for windows ARM
|
oncall: binaries, module: windows, feature
|
### π The feature, motivation and pitch
Currently, [PyPI](https://pypi.org/project/torch/#files) does not offer arm64 wheels for Windows
### Alternatives
Building from source
### Additional context
_No response_
cc @seemethere @malfet @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 2 |
1,578 | 106,951 |
stride of gradient is not the same as that of the corresponding tensor
|
module: autograd, module: optimizer, triaged, actionable
|
### π Describe the bug
When I tried to use torch.optim.Adam with fused=True, I got the following error:
```text
File "/home/weixu/venvs/working/lib/python3.10/site-packages/torch/optim/adam.py", line 591, in _fused_adam
torch._fused_adam_(
RuntimeError: params, grads, exp_avgs, and exp_avg_sqs must have same dtype, device, and layout
```
With some debugging, I found this is caused by a mismatch between the stride of a parameter and the stride of its gradient. Indeed, as the following code shows, the strides do mismatch in some corner cases:
```python
import torch
d = 1
device = 'cuda'
w = torch.nn.Parameter(torch.zeros((2, d, 100), device=device))
x = torch.rand((10, 2, 100), device=device)
y = torch.bmm(x.transpose(0, 1), w.transpose(1,2))
y.sum().backward()
print(w.stride(), w.grad.stride())
assert w.stride() == w.grad.stride()
```
It still fails if bmm is replaced by `torch.baddbmm` or `torch.einsum()`. Fails with device='cpu' too.
`w.grad.stride()` is `(100, 1, 1)`, while `w.stride()` is `(100, 100, 1)`
The assert can pass if d > 1.
Tested on both torch 1.13.0+cu116 and torch 2.0.1+cu118.
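As a stopgap until this is fixed, a minimal workaround sketch (mine, not from the thread): before the fused `optimizer.step()`, re-stride any gradient whose stride disagrees with its parameter. This is only safe when both tensors are contiguous, so the differing strides describe the same memory layout (as in the size-1-dim case above).
```python
import torch

def match_grad_strides(params):
    # Hypothetical helper: only touches grads whose strides differ from the
    # parameter's while both tensors are contiguous (same underlying layout).
    for p in params:
        if p.grad is None or p.grad.stride() == p.stride():
            continue
        if p.is_contiguous() and p.grad.is_contiguous():
            p.grad = p.grad.as_strided(p.size(), p.stride())

# usage sketch:
#   loss.backward()
#   match_grad_strides(model.parameters())
#   optimizer.step()
```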
### Versions
Collecting environment information...
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.199.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU max MHz: 4600.0000
CPU min MHz: 1200.0000
BogoMIPS: 5599.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 22 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] numpy-quaternion==2022.4.3
[pip3] pytorch3d==0.7.3
[pip3] torch==1.13.0+cu116
[pip3] torchaudio==0.13.0+cu116
[pip3] torchvision==0.14.0+cu116
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @vincentqb @jbschlosser @janeyx99 @crcrpar
| 2 |
1,579 | 106,950 |
Enable Mypy Checking in torch/_inductor/freezing.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/freezing.py
After Fix:
mypy --follow-imports=skip torch/_inductor/freezing.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 4 |
1,580 | 106,947 |
Enable Mypy Checking in torch/_inductor/exc.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/exc.py
After Fix:
mypy --follow-imports=skip torch/_inductor/exc.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 6 |
1,581 | 106,946 |
Enable Mypy Checking in torch/_inductor/codegen/triton_foreach.py
|
triaged, open source, topic: not user facing, module: inductor, ciflow/inductor
|
See #105230
Summary:
As suggested in https://github.com/pytorch/pytorch/issues/105230 mypy checking is enabled in torch/_inductor/codegen/triton_foreach.py
After Fix:
mypy --follow-imports=skip torch/_inductor/codegen/triton_foreach.py Success: no issues found in 1 source file
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
1,582 | 106,942 |
[Minor Bug] Should consume_prefix_in_state_dict_if_present change ordering of keys?
|
module: docs, module: nn, triaged, actionable
|
### π Describe the bug
Hi, I was looking into the source code for `torch.nn.modules.utils.consume_prefix_in_state_dict_if_present` and saw that the very first thing it did was to sort the keys, which would change the ordering of the keys downstream during inplace assignment of new prefix-removed keys.
Perhaps I overlooked something, but I was wondering if this is intended behaviour, because it is not mentioned in the docstring.
The model would still load fine with `load_state_dict`, but since `nn.Module` state dicts are OrderedDicts, the reordering just seems a little unnecessary and confusing.
Example code:
```
# python 3.8.10, torch 2.0.1+cu117
from collections import OrderedDict
from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
state_dict = OrderedDict()
state_dict['module.body.stage1.0.0.weight'] = 0
state_dict['module.body.stage1.0.1.weight'] = 1
state_dict['module.body.stage1.0.1.bias'] = 2
state_dict['module.bboxes.0.conv1x1.weight'] = 3
state_dict['module.bboxes.0.conv1x1.bias'] = 4
for k in state_dict:
print(k)
print("-------------------------")
# now the ordering is changed
consume_prefix_in_state_dict_if_present(state_dict, 'module.')
for k in state_dict:
print(k)
```
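If the ordering matters for your use case, a hedged sketch of an order-preserving alternative (my own helper, not part of torch; it ignores the `_metadata` handling that the real utility also performs): walk the keys in their original order and re-insert each entry with the prefix stripped.
```python
def strip_prefix_keep_order(state_dict, prefix):
    # Unlike consume_prefix_in_state_dict_if_present, this does not sort the keys,
    # so the OrderedDict keeps its original ordering.
    for key in list(state_dict.keys()):
        new_key = key[len(prefix):] if key.startswith(prefix) else key
        state_dict[new_key] = state_dict.pop(key)
```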
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7513 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1492.760
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5190.45
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect
cc @svekars @carljparker @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
1,583 | 106,939 |
Cannot export MiVOLO model into `onnx` format using `torch.onnx.export`
|
module: onnx, triaged
|
### π Describe the bug
I am trying to convert a [PyTorch-based model](https://github.com/WildChlamydia/MiVOLO) into ONNX format. The PyTorch model works fine, but I can't convert it to ONNX. I am adding the following code snippet [here](https://github.com/WildChlamydia/MiVOLO/blob/024d3ae71d42c4147bf21a46bbc2f7521cd94d53/mivolo/model/mi_volo.py#L123) to convert `self.model` into `onnx` format. Here is my code snippet:
```python
random_input = torch.randn(1, 6, 224, 224, device=self.device)
onnx_model_name = "mi_volo.onnx"
# pytorch to onnx
torch.onnx.export(self.model, random_input, onnx_model_name, verbose=True, opset_version=18)
```
but I am getting the following error:
```
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/home/master/.local/lib/python3.10/site-packages/torch/onnx/symbolic_opset18.py", line 52, in col2im
num_dimensional_axis = symbolic_helper._get_tensor_sizes(output_size)[0]
TypeError: 'NoneType' object is not subscriptable
```
I tried debugging the error, but couldn't understand it due to my limited familiarity with the conversion process. The actual line causing the error inside `torch/onnx/symbolic_opset18.py` is
```python
num_dimensional_axis = symbolic_helper._get_tensor_sizes(output_size)[0]
```
At the time of the error, I tried inspecting the `output_size` variable and it is
```
1072 defined in (%1072 : int[] = prim::ListConstruct(%958, %963), scope: mivolo.model.mivolo_model.MiVOLOModel::/torch.nn.modules.container.Sequential::network.0/timm.models.volo.Outlooker::network.0.0/timm.models.volo.OutlookAttention::attn)
```
Debugging deeper, inside the `_get_tensor_sizes` method, the [following branch](https://github.com/pytorch/pytorch/blob/dfd441a12cede6702d1fd160eb808778b62adae0/torch/onnx/symbolic_helper.py#L571C1-L572C20) causes the `None` return.
```python
if not _is_tensor(x) or x.type() is None:
return None
```
**MiVOLO** - Latest Pull
**Pytorch version** - 2.0.1
**onnx version** - 1.14.1
**OS** - Ubuntu 22.04.3LTS
Any help/direction/discussion will be highly appreciated, thank you.
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 9
CPU max MHz: 3100.0000
CPU min MHz: 400.0000
BogoMIPS: 5399.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 512 KiB (2 instances)
L3 cache: 3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==0.13.1+cpu
[pip3] torchdata==0.6.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect
| 0 |
1,584 | 106,931 |
Other overloads of `_foreach_clamp`
|
module: optimizer, triaged, enhancement, actionable, module: mta
|
### π The feature, motivation and pitch
Implement the overloads such as `Tensor`, `TensorList`, and `ScalarList` for `_foreach_clamp`. Currently only `_foreach_clamp(Tensor[], Scalar, Scalar)` is being implemented -- https://github.com/pytorch/pytorch/pull/106574/files#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991R10130
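To make the request concrete, a sketch of the intended call shapes (both calls mirror the `_foreach_clamp(Tensor[], Scalar, Scalar)` signature quoted above and its requested TensorList counterpart; treat the exact spelling and availability as assumptions tied to the linked PR, not a settled API):
```python
import torch

xs  = [torch.randn(3) for _ in range(4)]
los = [torch.full((3,), -0.5) for _ in range(4)]
his = [torch.full((3,), 0.5) for _ in range(4)]

# Scalar overload, as added by the linked PR (assumed spelling).
torch._foreach_clamp(xs, -0.5, 0.5)

# Requested TensorList overload -- not implemented at the time of this issue.
# torch._foreach_clamp(xs, los, his)
```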
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @mcarilli
| 2 |
1,585 | 106,907 |
fix B017 lints
|
open source, Stale, release notes: quantization, release notes: distributed (c10d), ciflow/mps
|
working on B017 as part of #106571
| 3 |
1,586 | 106,906 |
[Dynamo x FSDP][5/x] Fix bug in __class__ sources
|
Stale, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106738
* __->__ #106906
* #106890
* #106888
* #106886
* #106884
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 4 |
1,587 | 106,899 |
Port "Fix kDefaultTimeout multiple definition build failure (#97270)" to release/2.0 branch
|
open source, Stale, release notes: distributed (c10d)
|
#97270 fixed #90448 on the master branch by making the namespace explicit to avoid the constexpr conflict on GCC 11. Unfortunately, the issue persists on release/2.0, costing productivity across the PyTorch developer community, which has to apply this patch manually. Let's please include it in the release branch.
@ezyang
| 2 |
1,588 | 106,896 |
[torch.optim/C++] Add Adagrad state initialization
|
triaged, open source, release notes: optimizer
|
Fixes #87706 by adding state initialization to the Adagrad optimizer.
cc @albanD @jbschlosser @soumith @iramazanli @vincentqb
| 7 |
1,589 | 106,894 |
[autograd.Function] freevar lifting is too aggressive?
|
triaged, oncall: pt2, module: dynamo
|
### π Describe the bug
Seems like the freevar lifting thinks it needs to lift constants and then graph breaks when they are used in an autograd.Function backward:
```
import torch
from typing import *
def halve(x):
return x * 0.5
class ScaleGradient(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
return x
@staticmethod
def backward(ctx, grad):
return halve(grad)
x = torch.randn(3, requires_grad=True)
def f(x):
return ScaleGradient.apply(x)
output = torch.compile(f, backend='eager', fullgraph=True)(x)
```
gives:
```
f.call_function(tx, sub_args, {})
File "/raid/rzou/pt/debug-cpu2/torch/_dynamo/variables/higher_order_ops.py", line 950, in call_funct
ion
unimplemented("NYI - freevars in autograd function.")
File "/raid/rzou/pt/debug-cpu2/torch/_dynamo/exc.py", line 143, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: NYI - freevars in autograd function.
from user code:
File "/raid/rzou/pt/debug-cpu2/foo.py", line 20, in f
return ScaleGradient.apply(x)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
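For this particular repro, a workaround sketch (mine, not a general fix, and assuming the plain `grad * 0.5` backward is otherwise traceable): inline the constant into `backward` instead of calling the module-level `halve` helper, which avoids the free-variable path.
```python
class ScaleGradientInlined(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):
        # Inline the math rather than calling the module-level halve() helper.
        return grad * 0.5

def g(x):
    return ScaleGradientInlined.apply(x)

output = torch.compile(g, backend="eager", fullgraph=True)(x)
```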
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
| 0 |
1,590 | 106,893 |
[autograd.Function] torch.compile w/ once_differentiable leads to opaque graph break
|
high priority, triaged, oncall: pt2, module: dynamo
|
### π Describe the bug
Some internal models use once_differentiable. The error message is very opaque:
```py
import torch
from typing import *
from torch.autograd.function import once_differentiable
class ScaleGradient(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
return x
@staticmethod
@once_differentiable
def backward(ctx, grad):
return grad * 0.5
x = torch.randn(3, requires_grad=True)
def f(x):
return ScaleGradient.apply(x)
output = torch.compile(f, backend='eager', fullgraph=True)(x)
```
returns
```
Unsupported: inline in skipfiles: ScaleGradient.backward | wrapper /raid/rzou/pt/debug-cpu2/torch/aut
ograd/function.py
from user code:
File "<ipython-input-20-149cb1010897>", line 18, in f
return ScaleGradient.apply(x)
File "/raid/rzou/pt/debug-cpu2/torch/_dynamo/variables/misc.py", line 241, in trampoline_autograd_bw
d
return fn_cls.backward(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
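For comparison, a sketch (mine) of the same Function without the `once_differentiable` wrapper; dropping the decorator changes double-backward semantics, so it is only a workaround where double backward is not needed.
```python
class ScaleGradientPlain(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):  # no @once_differentiable wrapper
        return grad * 0.5

def g(x):
    return ScaleGradientPlain.apply(x)

output = torch.compile(g, backend="eager", fullgraph=True)(x)
```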
cc @ezyang @gchanan @kadeng @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
| 8 |
1,591 | 106,888 |
[Dynamo x FSDP][3/x] TypedStorage and storage_offset
|
Stale, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106738
* #106906
* #106890
* __->__ #106888
* #106886
* #106884
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 3 |
1,592 | 106,885 |
Dynamo graph break when using the Python module `heapq` (which manipulates `list`s)
|
low priority, triaged, module: dynamo
|
### π Describe the bug
OP: https://discuss.pytorch.org/t/is-heapq-module-supported-for-compilation-by-dynamo/185863:
`heapq` is a standard Python module useful for priority-queue loops in Dijkstra-like algorithms
@msaroufim: @voznesenskym on github might say this is a good dynamo starter task XD
```python
import heapq
import torch
@torch.compile(fullgraph=True)
def program():
h = []
heapq.heappush(h, 3)
heapq.heappush(h, 1)
heapq.heappush(h, 4)
val = heapq.heappop(h)
return val * torch.randn(10)
program()
```
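One workaround sketch (mine, not from the thread): keep the `heapq` bookkeeping in plain Python outside the compiled region and pass only the resulting scalar into the compiled function.
```python
import heapq

import torch

def smallest(values):
    # Plain-Python priority-queue work, executed eagerly outside torch.compile.
    h = []
    for v in values:
        heapq.heappush(h, v)
    return heapq.heappop(h)

@torch.compile(fullgraph=True)
def scale(val):
    return val * torch.randn(10)

scale(smallest([3, 1, 4]))
```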
### Versions
N/A
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 2 |
1,593 | 106,877 |
[ONNX] Float8 support
|
module: onnx, low priority, triaged
|
### π The feature, motivation and pitch
Support exporting float8 models to ONNX.
### Alternatives
_No response_
### Additional context
Exporting the model
```python
import torch
class Float8Module(torch.nn.Module):
def forward(self, input: torch.Tensor):
input = input.to(torch.float8_e5m2)
return input + torch.tensor(1.0, dtype=torch.float8_e5m2)
torch.onnx.dynamo_export(Float8Module(), torch.randn(1, 3, 224, 224))
```
Raises
```
/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py:112: UserWarning: Torchlib only supports opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
Traceback (most recent call last):
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1006, in dynamo_export
).export()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 810, in export
graph_module = self.options.fx_tracer.generate_fx(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 198, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1014, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 321, in _fn
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 153, in wrapped
return output_adapter.apply(model_func(*args, **kwargs))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 481, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 129, in _fn
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 361, in _convert_frame_assert
return _compile(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 194, in time_wrapper
r = func(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 431, in _compile
out_code = transform_code_object(code, transform)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 416, in transform
tracer.run()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2071, in run
super().run()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 168, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 561, in call_function
return wrap_fx_proxy(tx, proxy, **options)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1156, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1230, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1360, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1328, in get_fake_value
return wrap_fake_exception(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 907, in wrap_fake_exception
return fn()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1329, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1394, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1381, in run_node
return node.target(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1235, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1525, in dispatch
r = func(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 229, in _fn
result = fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 120, in _fn
compute_dtype, result_dtype = utils.elementwise_dtypes(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/__init__.py", line 1373, in elementwise_dtypes
highest_type = get_higher_type(highest_type, dtype_to_type(x.dtype))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/__init__.py", line 915, in dtype_to_type
raise ValueError("Invalid dtype!")
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in function add>(*(FakeTensor(..., size=(1, 3, 224, 224), dtype=torch.float8_e5m2), FakeTensor(..., size=(), dtype=torch.float8_e5m2)), **{}):
Invalid dtype!
from user code:
File "/home/justinchu/dev/onnx-script/float8export.py", line 7, in forward
return input + torch.tensor(1.0, dtype=torch.float8_e5m2)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/justinchu/dev/onnx-script/float8export.py", line 9, in <module>
torch.onnx.dynamo_export(Float8Module(), torch.randn(1, 3, 224, 224))
File "<@beartype(torch.onnx._internal.exporter.dynamo_export) at 0x7f5cd348da20>", line 53, in dynamo_export
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1016, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at report_dynamo_export.sarif. Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
After adding `torch.float8_e5m2` to `_float_dtypes` in `torch/_prims_common/__init__.py`, the new error is
```
[2023-08-09 17:40:21,961] torch._dynamo.eval_frame: [DEBUG] skipping __init__ /home/justinchu/anaconda3/envs/onnx/lib/python3.10/contextlib.py
[2023-08-09 17:40:21,961] torch._dynamo.eval_frame: [DEBUG] skipping __enter__ /home/justinchu/anaconda3/envs/onnx/lib/python3.10/contextlib.py
[2023-08-09 17:40:21,962] torch._dynamo.eval_frame: [DEBUG] skipping helper /home/justinchu/anaconda3/envs/onnx/lib/python3.10/contextlib.py
[2023-08-09 17:40:21,962] torch._dynamo.eval_frame: [DEBUG] skipping __init__ /home/justinchu/anaconda3/envs/onnx/lib/python3.10/contextlib.py
[2023-08-09 17:40:21,962] torch._dynamo.eval_frame: [DEBUG] skipping __enter__ /home/justinchu/anaconda3/envs/onnx/lib/python3.10/contextlib.py
[2023-08-09 17:40:21,962] torch._dynamo.eval_frame: [DEBUG] skipping enable_dynamic /home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py
[2023-08-09 17:40:21,962] torch._dynamo.eval_frame: [DEBUG] skipping wrapped /home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py
[2023-08-09 17:40:21,962] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward /home/justinchu/dev/onnx-script/float8export.py:5
[2023-08-09 17:40:21,965] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line forward /home/justinchu/dev/onnx-script/float8export.py:5
[2023-08-09 17:40:21,965] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] def forward(self, input: torch.Tensor):
[2023-08-09 17:40:22,012] torch._dynamo.variables.builder: [DEBUG] wrap_to_fake L['input'] (1, 3, 224, 224) [<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>] [None, None, None, None]
[2023-08-09 17:40:22,013] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line forward /home/justinchu/dev/onnx-script/float8export.py:6
[2023-08-09 17:40:22,013] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] input = input.to(torch.float8_e5m2)
[2023-08-09 17:40:22,013] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST input []
[2023-08-09 17:40:22,013] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR to [TensorVariable()]
[2023-08-09 17:40:22,014] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_GLOBAL torch [GetAttrVariable(TensorVariable(), to)]
[2023-08-09 17:40:22,014] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR float8_e5m2 [GetAttrVariable(TensorVariable(), to), TorchVariable(<module 'torch' from '/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/__init__.py'>)]
[2023-08-09 17:40:22,015] torch._dynamo.symbolic_convert: [DEBUG] TRACE CALL_FUNCTION 1 [GetAttrVariable(TensorVariable(), to), ConstantVariable(dtype)]
[2023-08-09 17:40:22,017] torch._dynamo.symbolic_convert: [DEBUG] TRACE STORE_FAST input [TensorVariable()]
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line forward /home/justinchu/dev/onnx-script/float8export.py:7
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] return input + torch.tensor(1.0, dtype=torch.float8_e5m2)
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST input []
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_GLOBAL torch [TensorVariable()]
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR tensor [TensorVariable(), TorchVariable(<module 'torch' from '/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/__init__.py'>)]
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 1.0 [TensorVariable(), TorchVariable(<built-in method tensor of type object at 0x7f9d7359c000>)]
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_GLOBAL torch [TensorVariable(), TorchVariable(<built-in method tensor of type object at 0x7f9d7359c000>), ConstantVariable(float)]
[2023-08-09 17:40:22,018] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR float8_e5m2 [TensorVariable(), TorchVariable(<built-in method tensor of type object at 0x7f9d7359c000>), ConstantVariable(float), TorchVariable(<module 'torch' from '/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/__init__.py'>)]
[2023-08-09 17:40:22,019] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST ('dtype',) [TensorVariable(), TorchVariable(<built-in method tensor of type object at 0x7f9d7359c000>), ConstantVariable(float), ConstantVariable(dtype)]
[2023-08-09 17:40:22,019] torch._dynamo.symbolic_convert: [DEBUG] TRACE CALL_FUNCTION_KW 2 [TensorVariable(), TorchVariable(<built-in method tensor of type object at 0x7f9d7359c000>), ConstantVariable(float), ConstantVariable(dtype), ConstantVariable(tuple)]
[2023-08-09 17:40:22,033] torch._dynamo.symbolic_convert: [DEBUG] TRACE BINARY_ADD None [TensorVariable(), TensorVariable()]
Traceback (most recent call last):
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1006, in dynamo_export
).export()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 810, in export
graph_module = self.options.fx_tracer.generate_fx(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 198, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1014, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 321, in _fn
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 153, in wrapped
return output_adapter.apply(model_func(*args, **kwargs))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 481, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 129, in _fn
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 361, in _convert_frame_assert
return _compile(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 194, in time_wrapper
r = func(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 431, in _compile
out_code = transform_code_object(code, transform)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 416, in transform
tracer.run()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2071, in run
super().run()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 168, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 561, in call_function
return wrap_fx_proxy(tx, proxy, **options)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1156, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1230, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1360, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1328, in get_fake_value
return wrap_fake_exception(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 907, in wrap_fake_exception
return fn()
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1329, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1394, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1381, in run_node
return node.target(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1235, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1525, in dispatch
r = func(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 229, in _fn
result = fn(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 120, in _fn
compute_dtype, result_dtype = utils.elementwise_dtypes(
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/__init__.py", line 1373, in elementwise_dtypes
highest_type = get_higher_type(highest_type, dtype_to_type(x.dtype))
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/_prims_common/__init__.py", line 915, in dtype_to_type
raise ValueError("Invalid dtype!")
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in function add>(*(FakeTensor(..., size=(1, 3, 224, 224), dtype=torch.float8_e5m2), FakeTensor(..., size=(), dtype=torch.float8_e5m2)), **{}):
Invalid dtype!
from user code:
File "/home/justinchu/dev/onnx-script/float8export.py", line 7, in forward
return input + torch.tensor(1.0, dtype=torch.float8_e5m2)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/justinchu/dev/onnx-script/float8export.py", line 9, in <module>
torch.onnx.dynamo_export(Float8Module(), torch.randn(1, 3, 224, 224))
File "<@beartype(torch.onnx._internal.exporter.dynamo_export) at 0x7f9cb6e0dbd0>", line 53, in dynamo_export
File "/home/justinchu/anaconda3/envs/onnx/lib/python3.10/site-packages/torch/onnx/_internal/exporter.py", line 1016, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at report_dynamo_export.sarif. Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
[2023-08-09 17:40:22,046] torch._dynamo.utils: [INFO] TorchDynamo compilation metrics:
[2023-08-09 17:40:22,046] torch._dynamo.utils: [INFO] Function Runtimes (s)
[2023-08-09 17:40:22,046] torch._dynamo.utils: [INFO] ---------- --------------
[2023-08-09 17:40:22,046] torch._dynamo.utils: [INFO] _compile 0
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 3 |
1,594 | 106,873 |
[Dynamo] Integrate exporter's diagnostic system into ONNXRuntime backend
|
module: onnx, triaged
|
### π The feature, motivation and pitch
As the title says. DORT doesn't have any diagnostic features, so reusing the system from the ONNX exporter seems like a good idea.
DORT PR: https://github.com/pytorch/pytorch/pull/106589
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,595 | 106,872 |
[Dynamo] revise ONNXRuntime backend's use of CapabilityBasedPartitioner
|
module: onnx, triaged
|
### π The feature, motivation and pitch
I believe the following code can be improved. Using string matching to identify fused sub-graphs looks unsafe.
```
# Overriding fused_module's __call__() function with ort_acclerated_call()
# This loop goes through all graph partitions (each of them is an ONNX-representable graph)
# and override their _wrappped_call function with _ort_accelerated_call.
# Inside _ort_accelerated_call, the partition's graph is exported into ONNX and executed by ORT.
for node in partitioned_prim_graph_module.graph.nodes:
# TODO(wschin): use a better way to identify fused submodule
if node.op == "call_module" and "fused_" in node.name:
fused_module = getattr(partitioned_prim_graph_module, node.name)
# self.ort_acclerated_call is responsible for exporting graph to ONNX,
# creating ORT session, and running ORT session.
fused_module._wrapped_call = self._ort_acclerated_call
```
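A sketch of one possible tightening (hypothetical, not the actual fix): key off the submodule type that the partitioner produces instead of the "fused_" name prefix. `partitioned_prim_graph_module` and `self._ort_acclerated_call` are the objects from the snippet above, and the sketch assumes every fused partition is materialized as a `torch.fx.GraphModule`.
```python
import torch.fx

for node in partitioned_prim_graph_module.graph.nodes:
    if node.op != "call_module":
        continue
    submodule = partitioned_prim_graph_module.get_submodule(node.target)
    # Replace the "fused_" string match with a type check on the partitioned submodule.
    if isinstance(submodule, torch.fx.GraphModule):
        submodule._wrapped_call = self._ort_acclerated_call
```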
DORT PR: https://github.com/pytorch/pytorch/pull/106589
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,596 | 106,871 |
[Dynamo] A graph pass in the Dynamo-ONNXRuntime backend needs revision
|
module: onnx, triaged
|
### π The feature, motivation and pitch
A pass, `_replace_to_copy_with_to`, was added in the early days of dynamo development. As dynamo matures, we should try to remove it. If it is really needed, it should be implemented as a pass in the ONNX exporter.
```
# TODO(wschin): this is required for removing aten::_to_copy in _replace_to_copy_with_to.
_replace_to_copy_with_to(prim_graph_module)
```
DORT PR: https://github.com/pytorch/pytorch/pull/106589
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,597 | 106,869 |
[Dynamo] Pre-allocate flag should be an ONNXRuntime inference-session-level attribute
|
module: onnx, triaged
|
### π The feature, motivation and pitch
Right now DORT stores the pre-allocate flag in `OrtBackend.preallocate_output`, which is more global than it needs to be, because pre-allocated memory is not needed by all execution providers (e.g., the CUDA execution provider doesn't need it).
DORT PR: https://github.com/pytorch/pytorch/pull/106589
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,598 | 106,868 |
[Dynamo] ONNXRuntime backend (DORT) requires guards to re-partition graphs extracted by Dynamo
|
module: onnx, triaged
|
### π The feature, motivation and pitch
Given a GraphModule, DORT partitions it into sub-GraphModules using
```
partitioner = CapabilityBasedPartitioner(
prim_graph_module,
self._supported_ops,
allows_single_node_partition=True,
)
partitioned_prim_graph_module = partitioner.partition_and_fuse()
self._partitioner_cache[graph_module] = partitioned_prim_graph_module
```
Each time the same GraphModule `g` is encountered, we use the cached partition result in `self._partitioner_cache[g]`. However, this assumes that the partition result is independent of the graph inputs, which could be wrong since some computation is shape/type-dependent. As a solution, we should guard at least the input ranks: if the input ranks change, we re-partition.
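A minimal sketch of such a guard (a hypothetical helper, not the actual implementation; `graph_module` and the example inputs are assumed to be what DORT already receives): include the tensor input ranks in the partition-cache key so that a rank change forces re-partitioning.
```python
import torch

def partition_cache_key(graph_module, example_inputs):
    # Rank-only guard: the cached partition is reused only while every tensor
    # input keeps the same number of dimensions.
    ranks = tuple(t.dim() for t in example_inputs if isinstance(t, torch.Tensor))
    return (graph_module, ranks)

# usage sketch:
#   key = partition_cache_key(graph_module, args)
#   if key not in self._partitioner_cache:
#       ... partition and cache under `key` ...
```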
DORT PR: https://github.com/pytorch/pytorch/pull/106589
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,599 | 106,867 |
[Dynamo] ONNXRuntime Backend Should Allow External Allocator
|
module: onnx, triaged
|
### π The feature, motivation and pitch
DORT (Dynamo-ONNXRuntime) creates its inference session using the following code.
```
def _create_onnx_session(onnx_proto, eps: Tuple[str, ...], session_options):
# TODO(wschin): enable external allocators.
return onnxruntime.InferenceSession(
onnx_proto, providers=eps, sess_options=session_options
)
```
Since no external allocator is provided, ORT and PyTorch manage memory using different allocators. This can result in a higher memory peak, so we should pass PyTorch's allocator to ORT.
DORT PR: https://github.com/pytorch/pytorch/pull/106589
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,600 | 106,853 |
Add Half support for aminmax on CPU
|
module: cpu, triaged, open source, module: half, ciflow/trunk, topic: not user facing, intel, ciflow/mps
|
Add Half support for aminmax on CPU.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 4 |