Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
101 | 111,620 |
Add meta support for embedding bag backward
|
fb-exported
|
Test Plan: CI
Differential Revision: D50477243
| 3 |
102 | 111,619 |
DISABLED test_cat_nhwc (__main__.TestQuantizedOps)
|
triaged, module: macos, skipped
|
Platforms: macos
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_quantization.py%3A%3ATestQuantizedOps%3A%3Atest_cat_nhwc)).
This periodic test looks flaky over the past few weeks.
cc @malfet @albanD
| 1 |
103 | 111,617 |
Debug trymerge internal
|
topic: not user facing, test-config/default
|
Nothing to review here.
| 2 |
104 | 111,614 |
[dynamo] fix None routing bug during var_getattr on UDO
|
ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111726
* #111725
* #111415
* __->__ #111614
* #111717
* #111306
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 3 |
105 | 111,613 |
[AOTInductor] Enforce no_grad for Run entries
|
topic: not user facing, module: inductor, ciflow/inductor
|
Summary:
Always enter no_grad mode in AOTInductor run entries.
```
// AOTInductor uses at::addmm_out, which doesn't support
// arguments that require gradient. For this reason, we
// enforce no_grad context for run APIs.
```
Test Plan:
buck2 test mode/dev-nosan caffe2/test/inductor:test_aot_inductor
and OSS CI
Differential Revision: D50432042
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
106 | 111,611 |
[HigherOrderOp] don't manually set input for cond
|
ciflow/trunk, module: dynamo, ciflow/inductor, module: higher order operators
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111611
* #111610
We set mannualy_set_graph_inputs to False for CondHigherOrder; as a result, we need to deduplicate the inputs. We'll add pytree tests in a follow-up PR.
Test Plan:
existing tests.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @zou3519
| 1 |
107 | 111,609 |
[Pytorch][Vulkan] mean.dim
|
fb-exported, module: vulkan, release notes: vulkan, ciflow/periodic
|
Summary:
We implement [`torch.mean(input, dim, keepdim)`](https://pytorch.org/docs/stable/generated/torch.mean.html) for 2d to 4d tensors.
Since 0-dim tensors aren't supported yet, we only support `dim.size() < input.dim()` for now. We will support the following cases in future work:
- `dim.size() == input.dim()`
- `input.dim() == 1`
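For reference, a minimal eager-mode example of the semantics being implemented (plain CPU tensors here; the Vulkan path itself is exercised by the test plan below):
```python
import torch

x = torch.rand(2, 3, 4)                    # a 3d input
y = torch.mean(x, dim=[1], keepdim=True)   # reduce over dim 1, keep it as size 1
print(y.shape)                             # torch.Size([2, 1, 4])
```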
Test Plan:
```
[luwei@devbig984.prn1 /data/users/luwei/fbsource (970fcd90c)]$ LD_LIBRARY_PATH=third-party/swiftshader/lib/linux-x64/ buck run fbcode/mode/dev-nosan //xplat/caffe2:pt_vulkan_api_test_bin -- --gtest_filter="*mean*"
Building: finished in 0.1 sec (100%) 339/339 jobs, 0/339 updated
Total time: 0.1 sec
BUILD SUCCEEDED
Running main() from third-party/googletest/1.11.0/googletest/googletest/src/gtest_main.cc
Note: Google Test filter = *mean*
[==========] Running 7 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 7 tests from VulkanAPITest
[ RUN ] VulkanAPITest.mean_invalid_inputs
[ OK ] VulkanAPITest.mean_invalid_inputs (46 ms)
[ RUN ] VulkanAPITest.mean_dim_2d
[ OK ] VulkanAPITest.mean_dim_2d (127 ms)
[ RUN ] VulkanAPITest.mean_dim_3d
[ OK ] VulkanAPITest.mean_dim_3d (103 ms)
[ RUN ] VulkanAPITest.mean_dim_4d
[ OK ] VulkanAPITest.mean_dim_4d (89 ms)
[ RUN ] VulkanAPITest.mean_dim_keepdim_2d
[ OK ] VulkanAPITest.mean_dim_keepdim_2d (66 ms)
[ RUN ] VulkanAPITest.mean_dim_keepdim_3d
[ OK ] VulkanAPITest.mean_dim_keepdim_3d (127 ms)
[ RUN ] VulkanAPITest.mean_dim_keepdim_4d
[ OK ] VulkanAPITest.mean_dim_keepdim_4d (4 ms)
[----------] 7 tests from VulkanAPITest (564 ms total)
[----------] Global test environment tear-down
[==========] 7 tests from 1 test suite ran. (564 ms total)
[ PASSED ] 7 tests.
```
Reviewed By: yipjustin
Differential Revision: D50312990
| 5 |
108 | 111,608 |
[UCC][CUDA] Overlap p2p
|
triaged, open source, release notes: distributed (c10d)
|
The process group needs to set different streams for send and recv ops to make them asynchronous.
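A rough sketch of the general pattern (not the UCC-internal change itself; assumes an already-initialized process group and a CUDA device):
```python
import torch
import torch.distributed as dist

def overlapped_send_recv(send_buf, recv_buf, peer):
    # Issue send and recv on separate CUDA streams so the two transfers
    # can overlap instead of serializing on the current stream.
    send_stream = torch.cuda.Stream()
    recv_stream = torch.cuda.Stream()
    with torch.cuda.stream(send_stream):
        send_req = dist.isend(send_buf, dst=peer)
    with torch.cuda.stream(recv_stream):
        recv_req = dist.irecv(recv_buf, src=peer)
    send_req.wait()
    recv_req.wait()
    # Re-synchronize the default stream before the buffers are reused.
    torch.cuda.current_stream().wait_stream(send_stream)
    torch.cuda.current_stream().wait_stream(recv_stream)
```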
| 1 |
109 | 111,607 |
DISABLED test_meta_outplace_fft_hfft_cpu_float64 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_hfft_cpu_float64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17872251167).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 30 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_fft_hfft_cpu_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
110 | 111,606 |
DISABLED test_narrow_cpu_float32 (__main__.TestNestedTensorDeviceTypeCPU)
|
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_narrow_cpu_float32&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17867918159).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 15 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_narrow_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
111 | 111,605 |
Add aot inductor test for dynamic batch size
|
ciflow/trunk, topic: not user facing, module: inductor
|
Summary:
add aot inductor test for dynamic batch size
Test Plan:
```
python test/inductor/test_aot_inductor.py -k test_dynamic_batch_sizes
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 6 |
112 | 111,604 |
Revert "Revert "Nvfuser code removal (#111093)""
|
triaged, open source, module: amp (automated mixed precision), ciflow/trunk, release notes: jit
|
This reverts commit 715dfced72657e5adacd5bef16e3d458cd94851b.
The original PR #111093 is reverted due to broken internal build.
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| 6 |
113 | 111,603 |
`Enum` used as a key of the input raises guards error
|
oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Using `Enum` as a key of the input raises a guards-related error:
```python
import torch
from enum import Enum
class MyEnum(Enum):
    A = "a"

class SomeModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x) -> torch.Tensor:
        return self.linear(x[MyEnum.A])
x = {MyEnum.A: torch.rand(100, 1)}
model = torch.compile(SomeModel())
model(x)
model(x)
```
Possibly related to #99605 and its fix PR #99680
### Error logs
```
ERROR RUNNING GUARDS forward /home/akihiro/work/github.com/pyg-team/pytorch_geometric/test_compile.py:12
lambda L, **___kwargs_ignored:
___guarded_code.valid and
___check_type_id(L['x'], 94746595577440) and
set(L['x'].keys()) == {L["MyEnum"].A} and
___check_obj_id(L['self'], 139872498725504) and
L['self'].training == True and
hasattr(L['x'][L["MyEnum"].A], '_dynamo_dynamic_indices') == False and
___is_grad_enabled() and
not ___are_deterministic_algorithms_enabled() and
___is_torch_function_enabled() and
utils_device.CURRENT_DEVICE == None and
___check_obj_id(G['MyEnum'].A, 139869662261408) and
___check_tensors(L['x'][L["MyEnum"].A], tensor_check_names=tensor_check_names)
Traceback (most recent call last):
File "/home/akihiro/work/github.com/pyg-team/pytorch_geometric/test_compile.py", line 18, in <module>
model(x)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "<string>", line 8, in guard
KeyError: 'MyEnum'
```
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.5
Libc version: glibc-2.31
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.16.0
[pip3] pytorch-lightning==2.0.9.post0
[pip3] pytorch-memlab==0.3.0
[pip3] torch==2.1.0+cu118
[pip3] torch_frame==0.1.0
[pip3] torch_geometric==2.4.0
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-lightning 2.0.9.post0 pypi_0 pypi
[conda] pytorch-memlab 0.3.0 pypi_0 pypi
[conda] torch 2.1.0+cu118 pypi_0 pypi
[conda] torch-frame 0.1.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchvision 0.16.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
114 | 111,600 |
Add testing for foreach scalar Tensor overloads in inductor
|
ciflow/trunk, topic: not user facing, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111600
* #111084
* #111079
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 4 |
115 | 111,595 |
Pass `BUILD_ENVIRONMENT` to MPS tests
|
topic: not user facing, ciflow/mps
|
As well as the default branch.
Should fix the following warning:
```
Warning: Gathered no stats from artifacts for build env None build env and None test config. Using default build env and default test config instead.
```
| 1 |
116 | 111,594 |
[functorch] support lstm on cuda
|
open source
|
Fixes https://github.com/pytorch/pytorch/issues/110422
| 2 |
117 | 111,593 |
Apply same 'pick_grad' on generating fp64 reference outputs
|
open source, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #108294
* __->__ #111593
To lower memory consumption for inference mode.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
118 | 111,591 |
[inductor][easy] skip test_extension_backend.py in fbcode
|
fb-exported, topic: not user facing, module: inductor
|
Summary: It's currently failing. We should skip it in fbcode because cpp extensions don't work right now.
Differential Revision: D48852412
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 10 |
119 | 111,590 |
Add decomp for `replication_pad2d` and use for CUDA deterministic
|
module: determinism, open source, release notes: nn, ciflow/inductor
|
Fixes #95578
cc @mruberry
| 1 |
120 | 111,589 |
Updated new README styling
|
open source, topic: not user facing
|
Incorporates the new (IMPORTANT, NOTE) tags for new styling
| 2 |
121 | 111,587 |
wip
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111459
* #111402
* __->__ #111587
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
122 | 111,584 |
Use 'device' argument in test_sparse.py::TestSparseAnyCUDA::test_as_sparse_gradcheck_*
|
open source, ciflow/trunk, topic: not user facing, ciflow/periodic, ciflow/rocm
|
Argument "device" was missed.
So, "test_sparse.py::TestSparseAnyCUDA::test_as_sparse_gradcheck_*_cuda" was always run on the default device ("cpu") if another default torch device was not configured before.
This fix will probably detect a number of issues on various devices which were previously missed.
Should fix failed rocm CI jobs with "##[error]The action has timed out." and speedup test execution
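A minimal sketch of the pattern being fixed (hypothetical test body, not the real test in test_sparse.py): a device-generic test has to create its tensors on the `device` it is parametrized with, otherwise the `*_cuda` instantiation silently runs on the CPU default.
```python
import torch

def test_as_sparse_gradcheck(device):
    # Creating the inputs on `device` (rather than on the default device) is
    # what makes the `*_cuda` instantiation actually exercise CUDA.
    dense = torch.randn(3, 3, dtype=torch.float64, device=device, requires_grad=True)
    assert dense.device.type == torch.device(device).type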
| 3 |
123 | 111,583 |
DISABLED test_vmapjvpall_linalg_det_singular_cpu_float32 (__main__.TestOperatorsCPU)
|
triaged, module: macos, skipped
|
Platforms: macos
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/functorch%2Ftest_ops.py%3A%3ATestOperatorsCPU%3A%3Atest_vmapjvpall_linalg_det_singular_cpu_float32)).
cc @malfet @albanD
| 1 |
124 | 111,582 |
[C10D] C++ Callbacks part 1
|
fb-exported, release notes: distributed (c10d)
|
Summary:
Breaking down https://github.com/pytorch/pytorch/pull/110307 into smaller
pieces to try to land without revert.
This PR adds some hook functions but does not call them.
Test Plan: OSS CI and internal tests
Differential Revision: D50460640
| 3 |
125 | 111,581 |
Place local_used_map_dev_ on CPU for MTIA
|
fb-exported, release notes: distributed (c10d)
|
Summary:
The dist backend used on MTIA doesn't support int32 allreduce for now. The local_used_map_dev_ has to be placed on CPU.
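A hedged sketch of the pattern (hypothetical helper; the actual change lives in DDP's C++ reducer):
```python
import torch

def make_local_used_map(num_params, backend_supports_device_int32_allreduce):
    # Keep the bookkeeping tensor on CPU when the device backend cannot
    # allreduce int32 tensors (the MTIA case described above).
    device = "cuda" if backend_supports_device_int32_allreduce and torch.cuda.is_available() else "cpu"
    return torch.zeros(num_params, dtype=torch.int32, device=device)
```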
Test Plan: See diff D50387636
Differential Revision: D50460304
| 6 |
126 | 111,580 |
Dynamic shapes doesn't work for torch.diff / resize__symint in some cases
|
oncall: pt2, module: dynamic shapes
|
```python
import torch

a = torch.tensor([0, 2])
b = torch.tensor([1])

def fn(a, b):
    a = a.clone()
    b = b.clone()
    torch.diff(a, n=0, out=b)
    return b
compiled_f = torch.compile(fn, fullgraph=True, backend="eager", dynamic=True)
out = compiled_f(a, b)
# Works
a = torch.tensor([0, 3, 4]) # size changed
b = torch.tensor([1]) # size is the same
out = compiled_f(a, b)
# Doesn't work
a = torch.tensor([0, 3, 4]) # size changed
b = torch.tensor([1, 2]) # size changed
out = compiled_f(a, b)
```
Versions: main after https://github.com/pytorch/pytorch/pull/111530
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 3 |
127 | 111,577 |
Prolonged network hiccup preventing retrieval of workflow job id
|
triaged, module: devx
|
Please see https://github.com/pytorch/pytorch/pull/111483 for context
First known bad is https://hud.pytorch.org/pytorch/pytorch/commit/e0b035c220a4db2a15b53848cd16ac6416fcf323
Got fixed in https://hud.pytorch.org/pytorch/pytorch/commit/543dc757463aa0ad559c49337c98eece6a25150f?
~How is it that the second I notice it happening it gets fixed... Some quantum observation thing going on here...~
cc @ZainRizvi @kit1980 @huydhn
| 0 |
128 | 111,574 |
`illegal memory access` for `torch.sparse.mm(src, other) / deg.view(-1, 1).clamp_(min=1)`
|
high priority, triage review, module: sparse, module: crash, module: cuda, triaged
|
### 🐛 Describe the bug
Original Issue from PyG: https://github.com/pyg-team/pytorch_geometric/issues/8213
Failing example: https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rev_gnn.py
```
CUDA_LAUNCH_BLOCKING=1 python3 /workspace/examples/rev_gnn.py
Traceback (most recent call last):
File "/workspace/examples/rev_gnn.py", line 187, in <module>
loss = train(epoch)
File "/workspace/examples/rev_gnn.py", line 125, in train
out = model(data.x, data.adj_t)[data.train_mask]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/examples/rev_gnn.py", line 76, in forward
x = conv(x, edge_index, mask)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/models/rev_gnn.py", line 166, in forward
return self._fn_apply(args, self._forward, self._inverse)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/models/rev_gnn.py", line 181, in _fn_apply
out = InvertibleFunction.apply(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/models/rev_gnn.py", line 52, in forward
outputs = ctx.fn(*x)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/models/rev_gnn.py", line 283, in _forward
y_in = xs[i] + self.convs[i](y_in, edge_index, *args[i])
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/examples/rev_gnn.py", line 35, in forward
return self.conv(x, edge_index)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/sage_conv.py", line 130, in forward
out = self.propagate(edge_index, x=x, size=size)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/message_passing.py", line 431, in propagate
out = self.message_and_aggregate(edge_index, **msg_aggr_kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/sage_conv.py", line 149, in message_and_aggregate
return spmm(adj_t, x[0], reduce=self.aggr)
File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/spmm.py", line 99, in spmm
return torch.sparse.mm(src, other) / deg.view(-1, 1).clamp_(min=1)
RuntimeError: CUDA error: an illegal memory access was encountered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+32f93b1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 530.41.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 7599.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] onnx==1.14.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+32f93b1
[pip3] torch_geometric==2.4.0
[pip3] torch-tensorrt==0.0.0
[pip3] torchdata==0.6.0+5bbcd77
[pip3] torchmetrics==1.2.0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0+e621604
[pip3] tritonclient==2.38.0.69485441
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ptrblck
| 5 |
129 | 111,571 |
DISABLED test_meta_outplace_fft_hfft_cpu_complex64 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_hfft_cpu_complex64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17857582924).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 18 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_fft_hfft_cpu_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
130 | 111,570 |
Tensor `.cuda()` very slow with specific array sizes
|
module: performance, module: cuda, triaged
|
### 🐛 Describe the bug
I've been profiling some streaming code and found a strange case where copying a tensor to the GPU is much slower for specific array sizes.
This is specific to arrays that are fortran-ordered to begin with and the issue is mitigated by calling `.clone()` on the tensor before `.cuda()`.
```
import numpy as np
import torch

rows = 327680  # 2 ** 16 * 5
np_arr = np.asfortranarray(np.random.randn(rows, 2000).astype(np.float32))
arr = torch.from_numpy(np_arr)
x = arr[10_000: 50_000] # grab a subset of 40k rows
%%time
_ = x.cuda()
```
This takes around 800ms on my machine.
With `rows = 327680 - 1`, it takes 280ms on my machine.
With `rows = 327680 + 1`, it takes 170ms on my machine.
With `rows = 100000`, it takes 80ms on my machine.
In all cases, adding a `.clone()` prior to `.cuda()` seems to reduce the time to around 80ms. In general it seems that multiples of 2^16 perhaps perform slower (I've tried with 2^16 and 2^16 - 1 and there seems to be a significant difference).
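A self-contained timing sketch of the workaround described above (requires a CUDA device; absolute numbers will vary by machine):
```python
import time

import numpy as np
import torch

rows = 327680  # 2 ** 16 * 5
np_arr = np.asfortranarray(np.random.randn(rows, 2000).astype(np.float32))
x = torch.from_numpy(np_arr)[10_000:50_000]

for label, t in [("direct .cuda()", x), (".clone() then .cuda()", x.clone())]:
    torch.cuda.synchronize()
    start = time.time()
    _ = t.cuda()
    torch.cuda.synchronize()
    print(label, f"{time.time() - start:.3f}s")
```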
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.86-flatcar-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.103.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7542 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 0
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.1
[pip3] pytorch-lightning==2.1.0
[pip3] torch==2.1.0+cu118
[pip3] torchmetrics==1.2.0
[pip3] triton==2.1.0
[conda] Could not collect
cc @ptrblck
| 3 |
131 | 111,569 |
[dynamo] so-called global state guard is installed on global, when in fact values are thread-local
|
oncall: pt2
|
### 🐛 Describe the bug
https://github.com/pytorch/pytorch/blob/971f67c9880b4037064b4bd24b858c80e6c69174/torch/_dynamo/convert_frame.py#L113
https://github.com/pytorch/pytorch/blob/971f67c9880b4037064b4bd24b858c80e6c69174/torch/_dynamo/convert_frame.py#L377
This means that we are not thread-safe with respect to compiling multiple functions at once across different threads.
### Specific Scenarios
For instance, one call to compile may set a new global state based on its thread-local values, and then another thread reads off those values when it is instantiating or checking a guard.
### Details
See that many of the values it captures:
https://github.com/pytorch/pytorch/blob/971f67c9880b4037064b4bd24b858c80e6c69174/torch/csrc/dynamo/guards.cpp#L438
Are in fact thread-local values:
https://github.com/pytorch/pytorch/blob/fa995626a8e181e3666b27fdb4edbe6116b22ee3/c10/core/AutogradState.h#L13
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym
| 0 |
132 | 111,568 |
Enable cupti
|
open source, ciflow/binaries, ciflow/periodic
|
Fixes #ISSUE_NUMBER
| 2 |
133 | 111,567 |
DISABLED test_narrow_cpu_float16 (__main__.TestNestedTensorDeviceTypeCPU)
|
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_narrow_cpu_float16&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17849375503).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_narrow_cpu_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
134 | 111,566 |
build: failure when building pytorch with TBB
|
module: build, triaged, module: tbb
|
## Issue description
I want to build PyTorch from source using TBB instead of OpenMP. I tried v1.10.2 and v1.13.1; both failed. Please help.
## Code example
Error messages:
```
[ 98%] Linking CXX executable ../../../../bin/torch_shm_manager
/data/wangjie/dependtool/pytorch/build/lib/libtorch_cpu.so: undefined reference to ‘std::allocator<std::pair<long, std::tuple<torch::jit::SourceRange, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::intrusive_ptr<torch::jit::InlinedCallStack, c10::detail::intrusive_target_default_null_type<torch::jit::InlinedCallStack> > > > >::allocator()’
collect2: error: ld returned 1 exit status
```
- PyTorch or Caffe2:
- How you installed PyTorch (conda, pip, source): source
- Build command you used (if compiling from source): USE_CUDA=0 BUILD_TEST=0 USE_TBB=1 USE_OPENMP=0 MKLDNN_CPU_RUNTIME=TBB MKL_THREADING=TBB ATEN_THREADING=TBB python setup.py install
- OS: centos7
- PyTorch version: v1.13.1 & v1.10.2
- Python version: 3.10.13
- CUDA/cuDNN version:
- GPU models and configuration:
- GCC version (if compiling from source):3.7.0
- CMake version:3.21.0
- Libc version: glibc-2.17
- Versions of any other relevant libraries:
cc @malfet @seemethere
| 2 |
135 | 111,564 |
misusing precision value in test_cuda function in torch/testing/_internal/common_nn.py
|
triaged, module: testing
|
### 🐛 Describe the bug
I found that the `tf32_precision` value of the `NewModuleTest` class is not actually used in tf32 tests, e.g. the Conv2d_groups test case.
In test_nn.py, `add_test` collects the test cases and creates tf32 variants that run under `with tf32_on(self, test.tf32_precision)`, which sets the tf32 config and also overrides `self.precision` on the test case. But that precision is not the one consulted in `test_cuda`.
The `self.precision` read inside `test_cuda` is not the `tf32_precision` passed in `add_test`: when `test.test_cuda(self, **kwargs)` is called, the `self` from `add_test` becomes `test_case` inside `test_cuda`, while the `test` object from `add_test` becomes `self` inside `test_cuda`.
def add_test(test, decorator=None) path: https://github.com/pytorch/pytorch/blob/main/test/test_nn.py#L7484
def tf32_on(self, tf32_precision=1e-5): https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_cuda.py#L94
def test_cuda(self, test_case): https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_nn.py#L4450
```
# before enter to test_cuda function.
-> test.test_cuda(self, **kwargs)
(Pdb) p self.percision
(Pdb) p test.__dict__["precision"]
0.0002
(Pdb) p self.__dict__["_precision"]
0.005
(Pdb) p test.tf32_precision
0.005
# enter test_cuda function
(Pdb) n
> /projs/platform/shangang/anaconda3/envs/shangang_conda_torch19/lib/python3.7/site-packages/torch/testing/_internal/common_nn.py(6029)test_cuda()
-> if not TEST_CUDA or not self.should_test_cuda:
(Pdb) p test_case.precision
0.005
(Pdb) p self.precision
0.0002 // still using 0.0002 to compare the CUDA tf32 result to the CPU result, without using tf32_precision.
```
Is this a bug in the test cases?
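For what it's worth, a standalone toy illustration of the mixup described above (all class names here are stand-ins, not the real test classes):
```python
class ModuleTestDef:                      # plays the role of NewModuleTest
    precision = 2e-4                      # default fp32 tolerance
    tf32_precision = 5e-3                 # the tolerance tf32 runs should use

    def test_cuda(self, test_case):
        # Bug pattern: reads self.precision (2e-4) instead of
        # test_case.precision (5e-3, set by the tf32_on context manager).
        return self.precision

class FakeTestCase:                       # plays the role of the unittest.TestCase
    precision = 5e-3                      # what tf32_on(self, test.tf32_precision) sets

test, case = ModuleTestDef(), FakeTestCase()
print(test.test_cuda(case))               # 0.0002 -- tf32_precision never takes effect
```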
### Versions
Collecting environment information...
PyTorch version: 1.13.0a0+gitd922c29
Is debug build: True
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.17
Python version: 3.7.16 (default, Jan 17 2023, 22:20:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.15.0-161-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-PCIE-32GB
GPU 1: Tesla V100-PCIE-32GB
GPU 2: Tesla V100-PCIE-32GB
GPU 3: Tesla V100-PCIE-32GB
GPU 4: Tesla V100-PCIE-32GB
GPU 5: Tesla V100-PCIE-32GB
GPU 6: Tesla V100-PCIE-32GB
GPU 7: Tesla V100-PCIE-32GB
Nvidia driver version: 510.85.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] pytorch-lightning==1.6.5
[pip3] torch==1.13.0a0+gitd922c29
[pip3] torchaudio==0.9.0
[pip3] torchmetrics==0.11.0
[pip3] torchvision==0.10.0+cu111
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 defaults
[conda] cudatoolkit-dev 11.6.0 h72bdee0_5 conda-forge
[conda] numpy 1.20.3 pypi_0 pypi
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] torch 1.13.0a0+gitd922c29 pypi_0 pypi
[conda] torchaudio 0.9.0 pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchvision 0.10.0+cu111 pypi_0 pypi
| 0 |
136 | 111,563 |
Higher-order derivatives extremely slow, increasing exponentially
|
module: autograd, triaged, needs research
|
### 🐛 Describe the bug
In my application, I need to take the nth order mixed derivative of a function. However, I found that the torch.autograd.grad computation time increases exponentially as n increases. Is this expected, and is there any way around it?
This is my code for differentiating a function self.F (from R^n -> R^1):
```
def differentiate(self, x):
    x.requires_grad_(True)
    xi = [x[..., i] for i in range(x.shape[-1])]
    dyi = self.F(torch.stack(xi, dim=-1))
    for i in range(self.dim):
        start_time = time.time()
        dyi = torch.autograd.grad(dyi.sum(), xi[i], retain_graph=True, create_graph=True)[0]
        grad_time = time.time() - start_time
        print(grad_time)
    return dyi
```
And these are the times printed for each iteration of the above loop:
```
0.0037012100219726562
0.005133152008056641
0.008165121078491211
0.019922733306884766
0.059255123138427734
0.1910409927368164
0.6340939998626709
2.1612229347229004
11.042078971862793
```
I assume this is because the size of the computation graph is increasing? Is there any way around this? I thought I might be able to circumvent this issue by taking a functional approach (presumably obviating the need for a computation graph), using torch.func.grad. However, this actually increased the runtime of the same code! Am I not understanding torch.func.grad properly?
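For reference, a minimal sketch of a mixed second derivative written with `torch.func.grad` (illustrative only; it composes the same autograd machinery, so it isn't expected to avoid the growth described above):
```python
import torch
from torch.func import grad

def f(x, y):
    return (x ** 3 * y ** 2).sum()

def df_dx(x, y):
    # grad requires a scalar output, so sum the first-order gradient.
    return grad(f, argnums=0)(x, y).sum()

d2f_dxdy = grad(df_dx, argnums=1)        # d/dy of sum(df/dx)
x, y = torch.randn(4), torch.randn(4)
print(d2f_dxdy(x, y))
```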
### Versions
torch 2.1.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 31 |
137 | 111,562 |
[dynamo] `not aliased -> aliased` Guard only implemented for Tensors
|
oncall: pt2
|
### 🐛 Describe the bug
For tensors, both `aliased -> not aliased` and `not aliased -> aliased` can lead to guard failure.
- `aliased -> not aliased`: `L['x'] is L['y']`
- `not aliased -> aliased`: `Duplicate tensor found where not expected! L['y'] should not alias to anything, but is aliased`
For other objects, only `aliased -> not aliased` leads to a guard failure:
- `aliased -> not aliased`: `L['x'] is L['y']`
### Diagnosis
This is due to the guards deduping tensors:
https://github.com/pytorch/pytorch/blob/aa3243bceb8c84f56f326b9e8c60ecc9794bbce4/torch/csrc/dynamo/guards.cpp#L404
But not objects
### Question:
1. If there is 1 alias, which then increases to 2, will it still trigger recompile? Yes:
```python
def fn(z, x, y):
    if x is y:
        return z + x * 2
    else:
        return z + x + y
fn_opt = torch.compile(backend='eager', fullgraph=True, dynamic=True)(fn)
x = torch.zeros(2)
y = torch.ones(2)
self.assertEqual(fn(x, x, y), fn_opt(x, x, y))
self.assertEqual(fn(x, x, x), fn_opt(x, x, x))
```
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
138 | 111,561 |
DISABLED test_meta_outplace_addmm_decomposed_cpu_complex64 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_addmm_decomposed_cpu_complex64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17846849108).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_addmm_decomposed_cpu_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 3 |
139 | 111,559 |
[RFC] Add GradScaler on CPU
|
triaged, module: half
|
### 🚀 The feature, motivation and pitch
To enable FP16 support on CPU https://github.com/pytorch/pytorch/issues/97068, GradScaler is necessary for FP16 training to prevent grad overflows.
### Alternatives
* Step 1: Add the corresponding CPU kernels for `_amp_foreach_non_finite_check_and_unscale_` and `_amp_update_scale_kernels`, following the CUDA implementations.
* Step 2: Frontend API for GradScaler on CPU:
  1. Move the common GradScaler logic to `torch/amp/grad_scaler.py`. Most of GradScaler can be abstracted into a base class, since the algorithm is the same on CPU and CUDA.
  2. Add two derived classes, for CPU and CUDA, to set the device-related config.
  The design of the frontend API for GradScaler is the same as for autocast.
* API usage:
```Python
# Creates a GradScaler once at the beginning of training.
# Before: only available on CUDA
# scaler = torch.cuda.amp.GradScaler()
# After: available on CUDA and CPU
# scaler = torch.cuda.amp.GradScaler()
scaler = torch.cpu.amp.GradScaler()
for epoch in epochs:
    for input, target in data:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        # Scales loss. Calls backward() on scaled loss to create scaled gradients.
        scaler.scale(loss).backward()
        # scaler.step() first unscales gradients of the optimizer's params.
        # If gradients don't contain infs/NaNs, optimizer.step() is then called,
        # otherwise, optimizer.step() is skipped.
        scaler.step(optimizer)
        # Updates the scale for next iteration.
        scaler.update()
```
### Additional context
This RFC depends on FP16 supports for operators https://github.com/pytorch/pytorch/issues/97068 and FP16 support of autocast on CPU https://github.com/pytorch/pytorch/issues/96093
| 0 |
140 | 111,556 |
[dynamo] Implement `set.__contains__` for tensors based on object identity
|
oncall: pt2
|
### 🚀 The feature, motivation and pitch
Workaround to https://github.com/pytorch/pytorch/issues/111544 for `set.__contains__` case
### Alternatives
_No response_
### Additional context
Related https://github.com/pytorch/pytorch/issues/111550
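For context, a tiny illustration of what identity-based membership means for tensors (plain Python, outside of dynamo):
```python
import torch

def contains_by_identity(container, tensor):
    # Membership by object identity, sidestepping Tensor.__eq__
    # (which is elementwise and returns a tensor, not a bool).
    return any(t is tensor for t in container)

a, b = torch.zeros(2), torch.ones(2)
s = {a}
print(contains_by_identity(s, a))   # True
print(contains_by_identity(s, b))   # False
```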
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
141 | 111,555 |
Fix inconsistency of max_split_size between DeviceStats and CUDAAllocatorConfig
|
open source, ciflow/binaries, topic: not user facing
|
CUDAAllocatorConfig uses a `size_t max_split_size` and initializes it to `std::numeric_limits<size_t>::max()`; the value is then assigned to `max_split_size` of DeviceStats, which is of type `int64_t`, so that the command
```
python3 -c "import torch;y=torch.empty(3,device='cuda');print(torch.cuda.memory_stats(0)['max_split_size'])"
```
returned -1.
After skimming through the code and reading the doc at https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html, it is clear that negative values of max_split_size make no sense and we should use `size_t` instead. With the fix, the command returns `std::numeric_limits<size_t>::max()`.
This issue was found in revert of #111137
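A quick illustration of the reported symptom (reinterpreting `SIZE_MAX` as a signed 64-bit value reads back as -1):
```python
import struct

size_t_max = 2**64 - 1  # std::numeric_limits<size_t>::max() on a 64-bit platform
as_int64 = struct.unpack("<q", struct.pack("<Q", size_t_max))[0]
print(as_int64)  # -1
```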
cc @malfet
| 5 |
142 | 111,554 |
AOTAutograd: handle set_(), detect metadata mutations that cancel out
|
module: dynamo, ciflow/inductor, release notes: AO frontend
|
This should be enough to get @voznesenskym 's FSDP branch to plumb `set_()` through AOTAutograd properly and have everything properly no-op out. Main changes are:
(1) graph break on `aten::set_.source_Tensor_storage_offset` (we could support it but it isn't needed, seems safer to graph break)
(2) Functionalization: add a "proper" functionalization kernel for `aten::set_.source_Tensor`. The previous one we had was codegen'd and it was wrong (it would just clone() and call set_(), which does not do the right thing). I also manually mark on the `FunctionalTensorWrapper` when a given tensor has been mutated by a `set_()` call.
(3) AOTAutograd: I added a new field, `InputAliasInfo.mutates_storage_metadata`, so we can distinguish between "regular" metadata mutations, and metadata mutations due to `set_()` calls. This is mainly because at runtime, one requires calling `as_strided_()` to fix up metadata, while the other requires calling `set_()`.
(4) Made AOTAutograd's detection for metadata mutations / set_() mutations smarter and detect no-ops (if the storage and metadata are all the same).
I also killed `was_updated()` and `was_metadata_updated()`, and replaced them with (existing) `has_data_mutation()` and (new) `has_metadata_mutation()`, which can more accurately distinguish between data mutations vs. `set_()` calls vs. metadata mutations.
**This PR is still silently correct in one case though**, which I'd like to discuss more. In particular, this example:
```
def f(x):
    x_view = x.view(-1)
    x.set_(torch.ones(2))
    x_view.mul_(2)
    return
```
If you have an input that experiences both a data-mutation **and** a `x_old.set_(x_new)` call, there are two cases:
(a) the data mutation happened on the storage of `x_new`. This case should be handled automatically: if x_new is a graph intermediate then we will functionalize the mutation. If x_new is a different graph input, then we will perform the usual `copy_()` on that other graph input
(b) the data mutation happened on the storage of `x_old`. This is more of a pain to handle, and doesn't currently work. At runtime, the right thing to do is probably something like:
```
def functionalized_f(x):
    x_view = x.view(-1)
    # set_() desugars into a no-op; later usages of x will use x_output
    x_output = torch.ones(2)
    # functionalize the mutation on x_view
    x_view_updated = x.mul(2)
    x_updated = x_view_updated.view(x.shape)
    # x experienced TWO TYPES of mutations; a data mutation and a metadata mutation.
    # We need to return both updated tensors in our graph
    return x_updated, x_output

def runtime_wrapper(x):
    x_data_mutation_result, x_set_mutation_result = compiled_graph(x)
    # First, perform the data mutation on x's old storage
    x.copy_(x_data_mutation_result)
    # Then, swap out the storage of x with the new storage
    x.set_(x_set_mutation_result)
```
There are two things that make this difficult to do though:
(1) Functionalization: the functionalization rule for `set_()` will fully throw away the old `FunctionalStorageImpl` on the graph input. So if there are any mutations to that `FunctionalStorageImpl` later on in the graph, the current graph input won't know about it. Maybe we can have a given `FunctionalTensorWrapper` remember all previous storages that it had, and track mutations on all of them - although this feels pretty complicated.
(2) AOTAutograd now needs to know that we might have *two* graph outputs that correspond to a single "mutated input", which is annoying.
It's worth pointing out that this issue is probably extremely unlikely for anyone to run into - can we just detect it and error? This feels slightly easier than solving it, although not significantly easier. We would still need `FunctionalTensorWrapper` to keep track of mutations on any of its "previous" storages, so it can report this info back to AOTAutograd so we can raise an error.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111554
* #111642
* #111553
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
143 | 111,552 |
[Bug]: some parameters' grad is None when using FSDP with torch2.1.0
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
To reproduce the problem: Training [InternLM](https://github.com/InternLM/InternLM/tree/develop) with [config fsdp=True](https://github.com/InternLM/InternLM/blob/develop/configs/7B_sft.py#L157):
```shell
srun -p llm -n8 --ntasks-per-node=8 --cpus-per-task=4 --gpus-per-task=1 python train.py --config ./configs/7B_sft.py
```
```python
zero1 parallel (dict):
1. size: int
* if size <= 0, the size of the zero process group is equal to the size of the dp process group,
so parameters will be divided within the range of dp.
* if size == 1, zero is not used, and all dp groups retain the full amount of model parameters.
* if size > 1 and size <= dp world size, the world size of zero is a subset of dp world size.
For smaller models, it is usually a better choice to split the parameters within nodes with a setting <= 8.
2. fsdp: bool, enable/disable torch's fully sharded data parallel, defaults to False.
"""
parallel = dict(
    zero1=dict(size=-1, fsdp=True),
    tensor=1,
    pipeline=dict(size=1, interleaved_overlap=True),
    sequence_parallel=True,
)
```
The error message is shown as follows:
```shell
2023-10-19 14:58:44,720 ERROR train.py:318 in <module> -- Raise exception from SH-IDC1-10-140-1-139 with rank id: 0
Traceback (most recent call last):
File "/mnt/petrelfs/huangting.p/workspace/InternLM/train.py", line 316, in <module>
main(args)
File "/mnt/petrelfs/huangting.p/workspace/InternLM/train.py", line 240, in main
trainer_result = trainer.step()
File "/mnt/petrelfs/huangting.p/workspace/InternLM/internlm/core/trainer.py", line 195, in step
return self._engine.step()
File "/mnt/petrelfs/huangting.p/workspace/InternLM/internlm/core/engine.py", line 118, in step
success, grad_norm = self.optimizer.step()
File "/mnt/petrelfs/share_data/llm_env/miniconda3-py39_4/envs/llm-torch2.1-flash2/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
return wrapped(*args, **kwargs)
File "/mnt/petrelfs/huangting.p/workspace/InternLM/internlm/solver/optimizer/fsdp_optimizer.py", line 118, in step
norm_group = self._compute_norm_with_fsdp_flatten(group_idx)
File "/mnt/petrelfs/huangting.p/workspace/InternLM/internlm/solver/optimizer/fsdp_optimizer.py", line 92, in _compute_norm_with_fsdp_flatten
norm_group = compute_norm(gradients=gradients, parameters=params, last_stage=True)
File "/mnt/petrelfs/huangting.p/workspace/InternLM/internlm/solver/optimizer/utils.py", line 270, in compute_norm
tensor_parallel_grads.append(g.data.float())
AttributeError: 'NoneType' object has no attribute 'data'
```
### Versions
```shell
# torch2.1.0
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

**Note that with torch 2.0.1 the error does not happen.**
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 2 |
144 | 111,551 |
Custom `ModuleDict.__getitem__(key: tuple)` produces a graph break
|
oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
#97932 enabled TorchDynamo to support modules with custom `__getitem__`. However, if the key is a tuple, it produces a graph break due to this assertion https://github.com/pytorch/pytorch/blob/v2.1.0/torch/_dynamo/variables/nn_module.py#L545.
```python
import torch
from torch_geometric.nn.module_dict import ModuleDict
class SomeModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.module_dict = ModuleDict({
            ("author", "writes", "paper"): torch.nn.Linear(1, 1),
        })

    def forward(self, x):
        x = self.module_dict[("author", "writes", "paper")](x)
        return x
model = torch.compile(SomeModel())
model(torch.randn(100, 1))
```
`torch_geometric.nn.module_dict.ModuleDict` is defined at https://github.com/pyg-team/pytorch_geometric/blob/2.4.0/torch_geometric/nn/module_dict.py
### Error logs
```
Traceback (most recent call last):
File "/home/akihiro/work/github.com/pyg-team/pytorch_geometric/test/nn/test_compile_hetero.py", line 16, in <module>
model(torch.randn(100, 1))
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform
tracer.run()
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 168, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 618, in call_function
result = handler(tx, *args, **kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 950, in call_getitem
return args[0].call_method(tx, "__getitem__", args[1:], kwargs)
File "/home/akihiro/.conda/envs/pyg310/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 545, in call_method
assert isinstance(key, (str, int))
AssertionError:
from user code:
File "/home/akihiro/work/github.com/pyg-team/pytorch_geometric/test/nn/test_compile_hetero.py", line 12, in forward
x = self.module_dict[("author", "writes", "paper")](x)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Minified repro
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.5
Libc version: glibc-2.31
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.16.0
[pip3] pytorch-lightning==2.0.9.post0
[pip3] pytorch-memlab==0.3.0
[pip3] torch==2.1.0+cu118
[pip3] torch_frame==0.1.0
[pip3] torch_geometric==2.4.0
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-lightning 2.0.9.post0 pypi_0 pypi
[conda] pytorch-memlab 0.3.0 pypi_0 pypi
[conda] torch 2.1.0+cu118 pypi_0 pypi
[conda] torch-frame 0.1.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchvision 0.16.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 0 |
145 | 111,550 |
[dynamo] Implement full `is_` checking
|
oncall: pt2, module: dynamo
|
### 🚀 The feature, motivation and pitch
This requires checking equality of objects.
For consts, this just requires checking the backing const.
~~For `VariableTracker`s with sources, this requires checking the sources. I believe we do not need to actually reference the original object. Hope this is right? @voznesenskym~~
source is not always available. `example_value` `FakeTensor` is better.
### Alternatives
Users cannot do things like deduplicate tensors or other objects in traced code
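For example, a sketch (illustrative only, not from the issue tracker) of the identity-based dedup pattern this feature would unblock; the `dedup` helper is hypothetical:
```python
import torch

def dedup(tensors):
    # identity-based dedup relies on `is` between traced tensors,
    # which is what full `is_` checking would let dynamo handle in-graph
    out = []
    for t in tensors:
        if not any(t is s for s in out):
            out.append(t)
    return torch.stack(out)

x = torch.zeros(3)
# eager reference; tracing this with torch.compile needs the feature above
print(dedup([x, x, torch.ones(3)]).shape)  # torch.Size([2, 3])
```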
### Additional context
Required for properly tracing `nn.modules`: https://github.com/pytorch/pytorch/pull/111548
Related: https://github.com/pytorch/pytorch/issues/109504
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 3 |
146 | 111,549 |
DISABLED test_detach_cpu_float64 (__main__.TestNestedTensorDeviceTypeCPU)
|
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_detach_cpu_float64&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17842173875).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_detach_cpu_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
147 | 111,547 |
Bug with as_strided_tensorimpl for MPS devices
|
triaged, module: mps
|
### 🐛 Describe the bug
I am experiencing a problem on my M1 Pro MacBook with training a Neural ODE model.
My use-case is quite extensive and depends on multiple python files and some custom integration routines, so I am unable to provide a trimmed-down version of my code. I have, however, provided a traceback of the error.
```python
Traceback (most recent call last):
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/damage_neural/neural.py", line 370, in <module>
r = ode.odeint_adjoint(model, y[0],t,block_size=time_chunk_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/pyoptmat/ode.py", line 656, in odeint_adjoint
return wrapper.apply(solver, times, *adjoint_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/autograd/function.py", line 551, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/pyoptmat/ode.py", line 583, in forward
y = solver.integrate(times, cache_adjoint=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/pyoptmat/ode.py", line 370, in integrate
result[k : k + self.n] = self.block_update(
^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/pyoptmat/ode.py", line 497, in block_update
dy = chunktime.newton_raphson_chunk(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/pyoptmat/chunktime.py", line 41, in newton_raphson_chunk
R, J = fn(x)
^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/pyoptmat/ode.py", line 493, in RJ
yd, yJ = func(times, y)
^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/damage_neural/neural.py", line 231, in forward
dy_dot_dy = vmap(vmap(jacfwd(self.rate, argnums = 1)))(t, y, erate, T)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 188, in wrapped
return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 266, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 38, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 379, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 188, in wrapped
return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 266, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 38, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 379, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 1132, in wrapper_fn
results = vmap(push_jvp, randomness=randomness)(basis)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 188, in wrapped
return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 266, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 38, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 379, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 1123, in push_jvp
output = _jvp_with_argnums(func, args, basis, argnums=argnums, has_aux=has_aux)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 38, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 969, in _jvp_with_argnums
result_duals = func(*duals)
^^^^^^^^^^^^
File "/Users/ganesh/ArgonneWork/BlackBox_NN/env/damage_neural/neural.py", line 193, in rate
x = torch.cat([y, erate.unsqueeze(-1), T.unsqueeze(-1)], dim = -1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: !self.is_mps() INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp":1166, please report a bug to PyTorch. as_strided_tensorimpl does not work with MPS; call self.as_strided(...) instead
```
Please let me know if there are any further details you might require.
Thanks.
### Versions
ct_env
Collecting environment information...
PyTorch version: 2.2.0.dev20231018
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] torch==2.2.0.dev20231018
[conda] functorch 2.0.0 pypi_0 pypi
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
(env) (base) ganesh@Ganeshs-MacBook-Pro-3 / %
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
148 | 111,544 |
[dynamo] `set.__contains__` is not properly implemented for tensors, by virtue of `eq(Tensor, Tensor)` being inconsistently implemented
|
oncall: pt2
|
### 🐛 Describe the bug
```python
param = torch.zeros(5)
param2 = torch.zeros(5)
tensor_list = set()
tensor_list.add(param2)
print(param2 in tensor_list) # False
def fn(param, param2):
tensor_list = set([param2])
return param in tensor_list
ret = torch.compile(fn, fullgraph=True)(param, param2)
```
```
torch._dynamo.exc.Unsupported: comparison TensorVariable() <built-in function eq> TensorVariable()
```
Inconsistent behaviour
```python
param = torch.zeros(5)
param2 = torch.zeros(5)
tensor_list = set()
tensor_list.add(param2)
print(param2 in tensor_list) # False
def fn(param, param2):
tensor_list = set([param2])
return param in tensor_list
ret = torch.compile(fn, fullgraph=True)(param, param2)
assert ret == fn(param, param2) # RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
Root cause: `__contains__` based on equality of tensors has inconsistent behaviour due to overloading eq https://github.com/pytorch/pytorch/issues/111542
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
149 | 111,541 |
Enhance the unit testing doc: add one more example
|
open source, topic: not user facing
| null | 1 |
150 | 111,538 |
Propose to add constant padding mode to the `torch.nn.functional.grid_sample` function
|
module: nn, triaged
|
### 🚀 The feature, motivation and pitch
I'm working on a registration project. When I use `torch.nn.functional.grid_sample` to apply the displacement field to the moving image, I find that I cannot pad with a constant value such as 255 (at least not in an intuitive way), the way `np.pad` allows. So I propose adding a constant padding mode. If possible I would really like to implement it myself, though I am a completely new contributor to the pytorch project.
### Alternatives
As a workaround, I pre-pad the image border with pixels of value 255 and use the `border` padding mode.
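Here is a rough sketch of that workaround (illustrative shapes; the sampling grid is assumed to already account for the one-pixel border added by `F.pad`):
```python
import torch
import torch.nn.functional as F

img = torch.rand(1, 1, 8, 8)             # N, C, H, W
grid = torch.rand(1, 4, 4, 2) * 2 - 1    # normalized sample locations in [-1, 1]

# pre-pad the image border with the desired constant value...
padded = F.pad(img, (1, 1, 1, 1), mode="constant", value=255.0)
# ...then let out-of-range samples clamp to that border instead of to zeros
out = F.grid_sample(padded, grid, padding_mode="border", align_corners=False)
print(out.shape)  # torch.Size([1, 1, 4, 4])
```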
### Additional context
Here are two samples,


cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
151 | 111,537 |
[Pytorch][CPU] Switch building compiler to Clang
|
fb-exported, module: inductor, ciflow/inductor
|
Summary:
The slimdsnn model is currently built with GCC. After a stack of backporting (D50338220), I see that Clang-15 generates better code than GCC, which is about 10% faster.
There are likely other improvements to internal Clang as the TOT Clang in LLVM upstream generates even better code.
Test Plan:
Before:
buck2 run mode/{opt,inplace} //accelerators/workloads/models/slimdsnn:slimdsnn_dso_benchmark -- --iterations=100000000
Starting benchmark, 100000000 iterations...
Batch=1 latency: 0.643 us
After:
buck2 run mode/{opt,inplace} //accelerators/workloads/models/slimdsnn:slimdsnn_dso_benchmark -- --iterations=100000000
Starting benchmark, 100000000 iterations...
Batch=1 latency: 0.593 us
Reviewed By: bertmaher
Differential Revision: D50399150
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 4 |
152 | 111,536 |
DISABLED test_Conv2d_naive_groups_cuda_float16 (__main__.TestConvolutionNNDeviceTypeCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Failure observed in ROCm5.7 CI upgrade PR, so skipping until resolved: https://github.com/pytorch/pytorch/pull/110465#issuecomment-1758427256
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
153 | 111,535 |
DISABLED test_meta_outplace_addmm_decomposed_cpu_complex128 (__main__.TestMetaCPU)
|
module: flaky-tests, skipped, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_addmm_decomposed_cpu_complex128&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17841787282).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_addmm_decomposed_cpu_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_meta.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
154 | 111,534 |
[dynamo] Fix context wrapping grad mode variable
|
triaged, open source, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/111528
Makes use of `ContextWrappingVariable` so that the function enters the grad mode whenever it is called and exits it when the call returns.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
155 | 111,533 |
DISABLED test_Conv2d_groups_nobias_v2 (__main__.TestConvolutionNN)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Failure observed in ROCm5.7 CI upgrade PR, so skipping until resolved: https://github.com/pytorch/pytorch/pull/110465#issuecomment-1758427256
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
156 | 111,532 |
DISABLED test_Conv2d_groups_nobias (__main__.TestConvolutionNN)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Failure observed in ROCm5.7 CI upgrade PR, so skipping until resolved: https://github.com/pytorch/pytorch/pull/110465#issuecomment-1758427256
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 1 |
157 | 111,531 |
Add compile support for NT unbind
|
module: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111531
* #111530
* #111529
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
158 | 111,528 |
[dynamo] `no_grad`, `enable_grad` - `_NoParamDecoratorContextManager` are not handled correctly
|
oncall: pt2
|
### 🐛 Describe the bug
Root cause of https://github.com/pytorch/pytorch/issues/109138
Reproducer:
```python
import torch
def cool_name(x):
return x.sin()
def fn(x):
return torch.no_grad(cool_name)(x)
x = torch.zeros(10)
result = fn(x)
print(result)
result = torch.compile(fn, backend="eager", fullgraph=True)(x)
print(result)
```
Also fails:
```python
def fn(x):
@torch.no_grad
def cool_name(x):
return x.sin()
return cool_name(x)
x = torch.zeros(10)
result = fn(x)
print(result)
result = torch.compile(fn, backend="eager", fullgraph=True)(x)
print(result)
```
Does not fail: no_grad is instantiated outside of compile region
```python
@torch.no_grad
def cool_name(x):
return x.sin()
def fn(x):
return cool_name(x)
x = torch.zeros(10)
result = fn(x)
print(result)
result = torch.compile(fn, backend="eager", fullgraph=True)(x)
print(result)
```
### Solution
Handle it properly when it is called as a function. It might need to instantiate the `GradModeVariable` whenever the function is called.
To do so, one can put a `context_var_hook` which sets up the context var when the function is called.
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
159 | 111,525 |
Functorch FCD breaks with tensor subclasses
|
triaged, module: functorch, module: first class dims
|
### 🐛 Describe the bug
Classes that inherit from torch.Tensor do not work with first-class dimensions when `__getitem__` is overridden:
```python
import torch
from torch import Tensor
from functorch import dim
class MyTensor(Tensor):
def __getitem__(self, item):
return super().__getitem__(item)
t = Tensor([[1, 2],[3, 4]])
d0 = dim.dims(1)
t[d0] # works
t = MyTensor([[1, 2],[3, 4]])
d0 = dim.dims(1)
t[d0] # breaks
```
which gives
```
ValueError Traceback (most recent call last)
Cell In[3], line 1
----> 1 t[d0]
Cell In[2], line 7, in MyTensor.__getitem__(self, item)
6 def __getitem__(self, item):
----> 7 return super().__getitem__(item)
ValueError: dimension d0 is unbound
```
I guess that since `t[d0]` in the regular `Tensor` case is not a `Tensor` anymore but a `functorch.dim.Tensor`, the expected behaviour is ill-defined. Maybe a better error message in this case would help to let users know what is going on?
cc @ezyang @msaroufim @albanD @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 @zdevito
### Versions
Recent nightly build ('2.2.0.dev20231011')
| 2 |
160 | 111,523 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
161 | 111,522 |
Insufficient hasattr guards on user defined objects
|
oncall: pt2
|
### 🐛 Describe the bug
```
import torch
import torch._dynamo
@torch.compile(backend="eager", fullgraph=True)
def f(x, y):
return x + y.a
class A:
a = 2
print(f(torch.zeros(2), A()))
del A.a
print(f(torch.zeros(2), A()))
```
produces
```
ERROR RUNNING GUARDS f /data/users/ezyang/a/pytorch/a.py:4
lambda L, **___kwargs_ignored:
___guarded_code.valid and
___check_global_state() and
hasattr(L['x'], '_dynamo_dynamic_indices') == False and
___check_type_id(L['y'], 102721216) and
___check_type_id(L['y'].a, 7640416) and
L['y'].a == 2 and
utils_device.CURRENT_DEVICE == None and
___skip_backend_check() or ___current_backend() == ___lookup_backend(140059152699776) and
___check_tensors(L['x'], tensor_check_names=tensor_check_names)
Traceback (most recent call last):
File "/data/users/ezyang/a/pytorch/a.py", line 15, in <module>
print(f(torch.zeros(2), A()))
File "/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py", line 401, in _fn
return fn(*args, **kwargs)
File "<string>", line 13, in guard
AttributeError: 'A' object has no attribute 'a'
```
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
162 | 111,519 |
[pt2+profiler] attach aot_id to CompiledFunction
|
oncall: pt2
|
### 🚀 The feature, motivation and pitch
torch-compiled models will have profiles that contain `CompiledFunction` and `CompiledFunctionBackward` regions.
Meanwhile PT2 logs (e.g. TORCH_COMPILE_DEBUG=1) will also dump the graphs. But when a model produces multiple CompiledFunctions, it's hard to figure out which graph (in the logs) maps to a given CompiledFunction (in the profile).
We should figure out how to attach the aot_id to the CompiledFunction profiler event.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 who had this great idea :)
### Alternatives
_No response_
### Additional context
Idea: autograd.Functions have handling in C++ to add the record_function event. That's where the CompiledFunction event comes from.
It might be possible to attach a static parameter or method on the CompiledFunction (like `_compiled_autograd_key()`) that is used for additional information attached to the C++ RecordFunction.
We should make sure this doesn't slow down CompiledFunction, especially in the no-profiling case. It would be great if we can collect the aot_id during CompiledFunction construction so that we don't have to do the python/C++ conversion on each iteration.
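A rough Python-level sketch of the idea (the real change would go through the C++ RecordFunction path; the `_aot_id` attribute and the labels below are hypothetical):
```python
import torch
from torch.profiler import record_function

class CompiledFunctionLike(torch.autograd.Function):
    # hypothetical: stamped on the generated class once, at compile time
    _aot_id = 3

    @staticmethod
    def forward(ctx, x):
        # label the region with the aot_id so the profile row maps back to the logged graph
        with record_function(f"CompiledFunction (aot_id={CompiledFunctionLike._aot_id})"):
            ctx.save_for_backward(x)
            return x.sin()

    @staticmethod
    def backward(ctx, grad):
        (x,) = ctx.saved_tensors
        with record_function(f"CompiledFunctionBackward (aot_id={CompiledFunctionLike._aot_id})"):
            return grad * x.cos()

y = CompiledFunctionLike.apply(torch.randn(4, requires_grad=True))
y.sum().backward()
```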
| 0 |
163 | 111,517 |
MPS Performance regressions on Sonoma 14.0
|
high priority, triage review, triaged, module: mps
|
### 🐛 Describe the bug
This issue is related to #77799. tldr: speed 50% slower, big memory leaks.
Basically, since upgrading to Sonoma, performance of the MPS device on sentence transformers models has taken a big nosedive. I don't have an apples-to-apples (🥁) comparison exactly, but an M1 ultra on Sonoma is 50% slower than an M1 Pro on Ventura. [Here](https://github.com/pytorch/pytorch/issues/77799#issuecomment-1304882735) are some numbers I collected on Ventura with an M1 ultra, not sure that data can be exactly compared, but it looks like the ratio between inference time on M1 Ultra/Ventura and M1 Ultra / Sonoma is about 1:2.
On Sonoma (M1 Ultra):
```
In [1]: from sentence_transformers import SentenceTransformer
In [2]: import torch
In [5]: %timeit model.encode(["hi"], device=torch.device("cpu"))
85.3 ms ± 12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [6]: %timeit model.encode(["hi"], device=torch.device("mps"))
23 ms ± 616 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
The MPS device is clearly functioning and present, but 23 ms is about 50% slower than an M1 Pro on Ventura. Here is the Ventura data:
```
In [2]: import torch
In [3]: model = SentenceTransformer("all-mpnet-base-v2")
In [4]: %timeit model.encode(["hi"], device=torch.device("mps"))
14.7 ms ± 854 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit model.encode(["hi"], device=torch.device("cpu"))
40.1 ms ± 408 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
Both cases with with today's nightly PyTorch, and sentence-transformers==2.2.2, transformers==4.34.1.
Additionally, let me know if I should file another ticket for this, but I observe huge memory leaks when using the MPS device. I often do a lot of bulk embedding with this model, and after ~32k calls to `model.forward`, I see memory usage exceeding 100GB and still increasing. The memory shows up as compressed, so it looks like a leak. I observe this problem when running torch in a Flask wrapper.
I am wondering if something about Apple's Metal Performance Shaders implementation changed recently.
### Versions
Collecting environment information...
PyTorch version: 2.2.0.dev20231018
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.6
Libc version: N/A
Python version: 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Ultra
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.2.0.dev20231018
[pip3] torchsort==0.1.9
[pip3] torchvision==0.17.0.dev20231018
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
164 | 111,516 |
[ci] Save various json files from test infra into folder
|
ciflow/trunk, release notes: releng, suppress-bc-linter
|
We pull a lot of files from https://github.com/pytorch/test-infra/blob/generated-stats/stats and name them separately when we add them to the artifacts in the build, so stick them in a folder and just add that instead.
Slow test and disabled test jsons remain as they were since they are pulled during the test step and do not need to be included in the artifacts during build since they are not used for sharding.
Sanity checked that test times could be found for linux, mac, windows, and rocm.
| 2 |
165 | 111,514 |
DISABLED test_detach_cpu_float32 (__main__.TestNestedTensorDeviceTypeCPU)
|
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_detach_cpu_float32&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17831419866).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_detach_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
166 | 111,513 |
DISABLED test_meta_outplace_addmm_cpu_complex128 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_addmm_cpu_complex128&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17830649623).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 24 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_addmm_cpu_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
167 | 111,511 |
[re-land][inductor] Refactor and optimize allocation calls (#111117)
|
fb-exported, topic: not user facing, module: inductor, ciflow/inductor
|
Summary:
This is a re-land of https://github.com/pytorch/pytorch/pull/111117 with
updates to our internal tests included.
This splits out changes from
https://github.com/pytorch/pytorch/pull/102625 to make things easier to
review.
This diff creates a `make_allocation()` method that extracts the logic
from `make_buffer_allocation()` while allowing us to allocate non-buffer
objects. In particular, we will use this to allocate memory pools during
memory planning.
This diff also includes a small optimization -- if the desired
allocation is contiguous, then we emit a call to `empty()` instead of
`empty_strided()` with its superfluous stride argument.
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/9ce0ae836d6801a39776897b9e891cd978b28aea
Differential Revision: D50429424
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
168 | 111,510 |
[WIP][TD] Historical edited files and profiling heuristics
|
release notes: releng
|
Fixes #ISSUE_NUMBER
| 1 |
169 | 111,509 |
Sparse Tensor Sum Still Does Not Work for PyTorch Geometric
|
module: sparse, triaged
|
### 🐛 Describe the bug
original issue: https://github.com/pytorch/pytorch/issues/98796
There was a PR to fix it: https://github.com/pytorch/pytorch/commit/a54043516fad1a37134f280f3e75f85a5b2daa13
However the issue still persists for this example: https://github.com/pyg-team/pytorch_geometric/blob/master/examples/correct_and_smooth.py
(new error message but still broken at the same step)
```
python3 example/correct_and_smooth.py
...
Epoch: 300, Loss: 0.9754, Train: 0.7694, Val: 0.7390, Test: 0.5940
Traceback (most recent call last):
File "/workspace/examples/correct_and_smooth.py", line 72, in <module>
deg = adj_t.sum(dim=1).to(torch.float)
RuntimeError: reduction operations on CSR tensors with keepdim=False is unsupported
```
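A standalone sketch that presumably hits the same code path, without going through PyG (the small eye matrix stands in for `adj_t`):
```python
import torch

# row-wise degree of a sparse CSR adjacency matrix, as in correct_and_smooth.py
adj_t = torch.eye(4).to_sparse_csr()
deg = adj_t.sum(dim=1).to(torch.float)  # expected to raise the keepdim=False error above
print(deg)
```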
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+32f93b1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
Nvidia driver version: 530.41.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 7599.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] onnx==1.14.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+32f93b1
[pip3] torch_geometric==2.4.0
[pip3] torch-tensorrt==0.0.0
[pip3] torchdata==0.6.0+5bbcd77
[pip3] torchmetrics==1.2.0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0+e621604
[pip3] tritonclient==2.38.0.69485441
[conda] Could not collect
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 8 |
170 | 111,508 |
LBFGS accuracy difference between CPU and GPU
|
needs reproduction, module: optimizer, triaged
|
### 🐛 Describe the bug
I am running a [complex minimization algorithm](https://github.com/joacorapela/svGPFA) using LBFGS in PyTorch. I have a method ``model.to(device)`` that moves all model variables to ``device``. If I run the optimization with the model on ``cpu`` (i.e., calling ``model.to(torch.device("cpu"))``) the algorithm converges to a relatively large (poor) value. But if I run the same optimization algorithm, with the same data, but with the model on gpu (i.e., calling ``model.to(torch.device("cuda:0"))``) the algorithm converges to a much smaller (better) value. **I was expecting to obtain similar results on cpu and gpu**. I would appreciate any hint on why this difference could be happening.
Sorry I cannot post a minimal working example, but I could not reproduce the problem with simpler optimizations. If it helps, I can create a Google Colab notebook replicating this problem.
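For reference, here is a generic sketch of the kind of comparison I am making (a toy problem, not svGPFA; on something this simple I expect both devices to agree, which is exactly what does not happen in the full model):
```python
import torch

def final_loss(device):
    torch.manual_seed(0)
    x, y = torch.randn(100, 10), torch.randn(100, 1)
    model = torch.nn.Linear(10, 1).to(device)
    opt = torch.optim.LBFGS(model.parameters(), max_iter=100, line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x.to(device)), y.to(device))
        loss.backward()
        return loss

    opt.step(closure)
    return closure().item()

print("cpu:", final_loss("cpu"))
if torch.cuda.is_available():
    print("cuda:", final_loss("cuda:0"))
```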
### Versions
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro P5000
GPU 1: NVIDIA GeForce RTX 2080
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 1199.283
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4594.42
Virtualization: VT-x
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 5 MiB
L3 cache: 50 MiB
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] triton==2.0.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @mruberry @kurtamohler
| 2 |
171 | 111,507 |
[BE] Enable Ruff's Flake8 PYI036
|
module: cpu, triaged, open source, module: amp (automated mixed precision), release notes: distributed (fsdp)
|
Enable [bad-exit-annotation (PYI036)](https://docs.astral.sh/ruff/rules/bad-exit-annotation/#bad-exit-annotation-pyi036)
Link: #110950
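For reference, an illustrative sketch (not taken from the codebase) of the kind of fully annotated `__exit__` signature the rule expects:
```python
from types import TracebackType
from typing import Optional, Type

class Managed:
    def __enter__(self) -> "Managed":
        return self

    # PYI036 checks that __exit__/__aexit__ arguments are annotated correctly;
    # a conventional fully annotated form looks like this
    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> None:
        return None
```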
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel
| 2 |
172 | 111,506 |
XLA Tensor creation fails on functionalization inside dynamo.
|
triaged, module: xla, module: functionalization
|
### 🐛 Describe the bug
Minimal reproducible program:
```python
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
def foo():
return torch.tensor([0.0], device=device)
compiled = torch.compile(backend="openxla")(foo)
tensor = compiled()
```
```python
File "torch/_subclasses/functional_tensor.py", line 304, in __torch_dispatch__
return return_and_correct_aliasing(func, args, kwargs, outs_wrapped)
File "torch/utils/_python_dispatch.py", line 364, in return_and_correct_aliasing
_correct_storage_aliasing(func, schema_info, args, (out,) if not isinstance(out, tuple) else out)
File "torch/utils/_python_dispatch.py", line 251, in _correct_storage_aliasing
alias_non_inplace_storage(args[arg_idx], outs[return_idx])
File "torch/utils/_python_dispatch.py", line 234, in alias_non_inplace_storage
torch.ops.aten.set_.source_Storage_storage_offset(ret, arg.untyped_storage(), ret.storage_offset(), ret.shape)
File "torch/_ops.py", line 499, in __call__
return self._op(*args, **kwargs or {})
torch._dynamo.exc.BackendCompilerFailed: backend='openxla' raised:
RuntimeError: Attempted to set the storage of a tensor on device "xla:0" to a storage on different device "lazy:0". This is no longer allowed; the devices must match.
While executing %tensor : [num_users=1] = call_function[target=torch.tensor](args = ([0.0],), kwargs = {device: xla:0})
Original traceback:
File "examples/bug-device.py", line 7, in foo
return torch.tensor([0.0], device=device)
```
### Versions
PyTorch version: 2bf3ca1be759460cf9fbf011d96d3246001361e9 (Oct 4)
PyTorch/XLA version: c9a132484fb89bfdc9c602ada7bd8a3cec0db1aa (Oct 3)
cc @bdhirsh @ezyang
| 9 |
173 | 111,505 |
Dynamo runner: add FSDP handcrafted module wrapping policy
|
topic: not user facing, module: dynamo, ciflow/inductor
|
The default size-based auto wrap policy may not be representative of how the models are actually used. We add handcrafted wrapping policies for a few handpicked models and fall back to the size-based policy otherwise.
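A minimal sketch of the fallback scheme (assuming the standard `size_based_auto_wrap_policy` signature; the handpicked class tuple below is a placeholder):
```python
import torch.nn as nn
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

# placeholder for a model's handpicked block types (e.g. its transformer block class)
HANDCRAFTED_WRAP_CLASSES = (nn.TransformerEncoderLayer,)

def handcrafted_or_size_policy(module, recurse, nonwrapped_numel):
    # wrap handpicked module classes unconditionally...
    if isinstance(module, HANDCRAFTED_WRAP_CLASSES):
        return True
    # ...otherwise fall back to the size-based policy
    return size_based_auto_wrap_policy(
        module, recurse=recurse, nonwrapped_numel=nonwrapped_numel, min_num_params=int(1e5)
    )

# usage: FSDP(model, auto_wrap_policy=handcrafted_or_size_policy, ...)
```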
sample command:
`PYTHONPATH=~/benchmark/ python benchmarks/dynamo/torchbench.py -dcuda --training --backend=inductor --multiprocess --performance --only nanogpt --fsdp`
1.257x
1.256x
1.257x
1.252x
1.257x
1.262x
1.258x
1.272x
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
174 | 111,502 |
Fix iphoneos compilation
|
fb-exported, topic: not user facing
|
Summary: As title
Test Plan: buck build @//arvr/mode/iphoneos/mac/opt //xplat/third-party/XNNPACK:ukernels_asm_aarch64
Reviewed By: mcr229
Differential Revision: D50423968
| 6 |
175 | 111,498 |
Add unit test for ONNX models with torch.distributions.normal.Normal
|
module: onnx, open source, onnx-triaged, release notes: onnx, topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111498
* #111497
Fixes #111034
| 1 |
176 | 111,497 |
Add support to ExportedProgram as input to torch.onnx.dynamo_export
|
module: onnx, open source, onnx-triaged, release notes: onnx
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111498
* __->__ #111497
Fixes #109889
This PR adds `torch.export.export` as another `FXGraphExtractor` implementation. `torch.onnx.dynamo_export` automatically uses this new FX tracer when a `torch.export.ExportedProgram` is specified as `model`
Implementation is back compatible, thus non `ExportedProgram` models are handled the exact same way as before
| 1 |
177 | 111,495 |
[ONNX][dynamo] Parameter to export flat graphs
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
I'm exporting a model with a single linear layer to ONNX. Each layer of the generated graph is an ONNX function with an underlying function body composed of other functions and operators. The feature I'm requesting is a configurable option to generate a flat graph with no ONNX functions.
The motivation for this request is to enable optimizations like constant folding. With function nodes, important information is not passed down to the function body, resulting in fewer optimizations than are possible with a flat graph. Additionally, the TorchScript-based ONNX export does provide an argument to [export modules as functions](https://github.com/pytorch/pytorch/blob/main/torch/onnx/utils.py#L481). It would be beneficial for users if the TorchDynamo-based ONNX exporter had a similar feature.
Code to reproduce:
```
import torch
class LinearModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc0 = torch.nn.Linear(5, 7)
def forward(self, tensor_x: torch.Tensor):
output = self.fc0(tensor_x)
return output
def linearDataloader():
yield torch.randn(3, 5).cuda()
# Get model and input
model = LinearModel()
data = next(linearDataloader())
# ONNX Export
export_output = torch.onnx.dynamo_export(
model.eval().to('cuda'),
data
)
export_output.save('linear_dynamo.onnx')
```
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
178 | 111,492 |
[dynamo] support comparing LHS constant with tensor
|
open source, ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/108582
Depends on https://github.com/pytorch/pytorch/pull/111557 to fix broken integration tests (this PR unblocks an in-graph set membership).
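A minimal example of the shape of code this enables (illustrative, not taken from the test suite):
```python
import torch

def fn(x):
    # constant on the left-hand side of the comparison
    return 1 == x

x = torch.arange(3)
print(torch.compile(fn, backend="eager", fullgraph=True)(x))  # tensor([False,  True, False])
```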
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 10 |
179 | 111,489 |
Use more performant bsr_scatter_mm within bsr_dense_mm when blocksize is 16.
|
open source, release notes: sparse
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111489
* #111470
* #110396
| 1 |
180 | 111,487 |
BFloat16 datatype support in Quantization
|
oncall: quantization, triaged
|
### 🚀 The feature, motivation and pitch
I am working on quantization using the PyTorch 2 infrastructure and would like BFloat16 to be supported in quantization annotation. If you look in torch/ao/quantization/quantizer/quantizer.py, BFloat16 is currently not a supported dtype.
### Alternatives
_No response_
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 5 |
181 | 111,486 |
Supports ROCm6.0 reorganization and cleanup
|
module: rocm, open source
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 3 |
182 | 111,484 |
Incorrect and inconsistent outputs from CrossEntropyLoss(reduction="none") with torch.float16 dtype
|
high priority, triage review, module: nn, module: cuda, triaged, module: correctness (silent)
|
### 🐛 Describe the bug
I'm trying to evaluate the loss on a fairly large tensor, and I get different results between the first eval and any subsequent evals, and all evals are incorrect for a chunk of the output. @albanD you've been great help before, let me know what you think.
Minimal repro (requires min 40GB GPU RAM):
```
import torch
batch_size = 70
seq_len = 2048
vocab_size = 50000
shift_labels = torch.zeros(batch_size, seq_len-1, dtype=torch.long).to("cuda")
logits = torch.ones(batch_size, seq_len-1, vocab_size, dtype=torch.float16).to("cuda")
loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
# Evaluate loss first time
nll = loss_fct(logits.permute(0, 2, 1), shift_labels).float()
print(nll)
# This gives:
# tensor([[10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# ...,
# [-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
# [-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
# [-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000]])
# Evaluate loss second time
nll = loss_fct(logits.permute(0, 2, 1), shift_labels).float()
print(nll)
# This gives
# tensor([[10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# ...,
# [-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
# [-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
# [-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]])
```
The expected outputs for both calls should be:
```
# tensor([[10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# ...,
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203],
# [10.8203, 10.8203, 10.8203, ..., 10.8203, 10.8203, 10.8203]])
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.7
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1049-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 470.199.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7V13 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
BogoMIPS: 4890.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 96 MiB (3 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] torch==2.1.0
[pip3] triton==2.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck
| 4 |
183 | 111,482 |
When keep_inference_input_mutations=True is set, one dynamic shape test fails
|
triaged, oncall: pt2, module: aotdispatch
|
### 🐛 Describe the bug
In order to get aot_eager and inductor backends to be consistent, I submitted a PR to set `keep_inference_input_mutations=True` but `dynamo/test_dynamic_shapes.py::DynamicShapesAotAutogradFallbackTests::test_call_fn_with_non_const_inputs_aot_unsafe_dynamic_shapes` started failing with
```
"RuntimeError: a leaf Variable that requires grad is being used in an in-place operation."
```
Example failure: https://hud.pytorch.org/pr/111453
@bdhirsh noted that this is probably a bug with autograd
### Versions
github master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
184 | 111,481 |
BUG: fix np.typecodes under Dynamo
|
triaged, module: numpy, open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
`numpy.typecodes` is a module-level Dict[Str, Str] which lists allowed dtypes by category. Since we only support a subset of numpy dtypes, `torch._numpy.typecodes` differs from `numpy.typecodes`.
Since this is a dict of primitive types, we need to trick dynamo to stop it from simply inlining the numpy dict.
This is only a partial fix: it only works if there's something to compile. Otherwise, e.g. bare
```
def fn():
return np.typecodes['AllInteger']
```
still generates a numpy dict.
Partially fixes item 2 of https://github.com/pytorch/pytorch/issues/111370.
cc @mruberry @rgommers @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
185 | 111,480 |
torch.jit.script persistently changes default from utf-8 to ascii
|
oncall: jit
|
### 🐛 Describe the bug
Here is a 'simple' way to reproduce the issue:
```
import torch
@torch.jit.script
def snake(x, alpha):
x = x + alpha + 1e-9
return x
class Snake1d(torch.nn.Module):
def __init__(self, channels):
super().__init__()
self.alpha = torch.nn.Parameter(torch.ones(1, channels, 1))
def forward(self, x):
return snake(x, self.alpha)
class ResidualUnit(torch.nn.Module):
def __init__(self, dim=16, dilation=1):
super().__init__()
pad = ((7 - 1) * dilation) // 2
self.block = torch.nn.Sequential(
Snake1d(dim),
torch.nn.Conv1d(dim, dim, kernel_size=7, dilation=dilation, padding=pad),
Snake1d(dim),
torch.nn.Conv1d(dim, dim, kernel_size=1),
)
def forward(self, x):
return self.block(x)
model = torch.nn.Sequential(
ResidualUnit(32, dilation=1),
ResidualUnit(32, dilation=3),
)
model.to("cuda")
_ = model.forward(torch.zeros((1, 32, 10000)).to("cuda"))
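# Per the report, the scripted forward above has silently switched the process-wide
# default encoding from utf-8 to ascii, so writing the non-ASCII chr(999) below is
# expected to fail with UnicodeEncodeError.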
with open("/tmp/test.txt", "w") as f:
f.write(chr(999))
```
### Versions
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7513 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1499.741
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5200.15
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==2.0.6
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] rotary-embedding-torch==0.2.7
[pip3] torch==2.1.0
[pip3] torch-stoi==0.1.2
[pip3] torchaudio==2.1.0
[pip3] torchcrepe==0.0.21
[pip3] torchlibrosa==0.1.0
[pip3] torchmetrics==1.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.1.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-lightning 2.0.6 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.7 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torch-stoi 0.1.2 pypi_0 pypi
[conda] torchaudio 2.1.0 pypi_0 pypi
[conda] torchcrepe 0.0.21 pypi_0 pypi
[conda] torchlibrosa 0.1.0 pypi_0 pypi
[conda] torchmetrics 1.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
186 | 111,479 |
multi_head_attention_forward generates different values on MPS compared to CPU
|
triaged, module: mps
|
### 🐛 Describe the bug
`multi_head_attention_forward` generates different values on MPS than on CPU with the same inputs.
I don't have an MPS machine to reproduce this issue. You can refer to https://github.com/pytorch/pytorch/actions/runs/6561612634/job/17822025576.
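For anyone with Apple silicon at hand, a minimal hedged sketch for comparing the two devices (using nn.MultiheadAttention, which calls multi_head_attention_forward internally on the standard path; shapes and seed here are made up, not the CI test's inputs):
```python
# Hedged comparison sketch; inputs are assumptions, not the CI test's values.
import torch
import torch.nn as nn

torch.manual_seed(0)
mha = nn.MultiheadAttention(embed_dim=4, num_heads=2)
q = k = v = torch.randn(3, 2, 4)  # (seq_len, batch, embed_dim)

out_cpu, _ = mha(q, k, v)
if torch.backends.mps.is_available():
    out_mps, _ = mha.to("mps")(q.to("mps"), k.to("mps"), v.to("mps"))
    print(torch.allclose(out_cpu, out_mps.cpu(), atol=1e-5))
```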
FP32 output on CPU:
```
(tensor([[[-6.5419e+02, -8.7080e+01],
[ 1.2814e+02, -1.7165e+02]],
[[-1.3241e+03, -1.7267e+02],
[ 1.2814e+02, -1.7165e+02]],
[[-1.4078e+03, -3.3899e+02],
[-2.6367e-02, -3.5078e+00]]], grad_fn=<ViewBackward0>), tensor([[[1.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[3.0850e-09, 1.0000e+00, 0.0000e+00, 1.8921e-10],
[0.0000e+00, 1.0000e+00, 1.0000e+00, 0.0000e+00]],
[[0.0000e+00, 2.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 2.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]]],
grad_fn=<MeanBackward1>))
```
FP32 output on MPS:
```
(tensor([[[-2.9954e+02, -5.9902e+02],
[-2.6367e-02, -3.5078e+00]],
[[-1.3241e+03, -1.7267e+02],
[-9.9200e+01, -2.0069e+02]],
[[-1.3241e+03, -1.7267e+02],
[-9.2043e+02, -3.0561e+02]]], device='mps:0', grad_fn=<ViewBackward0>), tensor([[[0.0000e+00, 1.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 1.0000e+00, 0.0000e+00, 1.8921e-10],
[0.0000e+00, 1.0000e+00, 0.0000e+00, 0.0000e+00]],
[[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 1.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 1.0000e+00, 1.0000e+00, 0.0000e+00]]], device='mps:0',
grad_fn=<MeanBackward1>))
```
### Versions
PyTorch version: 2.2.0a0+git5fa0c13
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.5.2111 (x86_64)
GCC version: (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)
Clang version: 16.0.0 (Red Hat 16.0.0-2.module_el8+405+25122a8c)
CMake version: version 3.21.4
Libc version: glibc-2.28
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.16.0-rc1-intel-next-00543-g5867b0a2a125-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] flake8==3.8.2
[pip3] flake8-bugbear==20.1.4
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.3.0
[pip3] flake8-executable==2.0.4
[pip3] flake8-pyi==20.5.0
[pip3] intel-extension-for-pytorch==2.2.0+gite7090c6
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.15.1
[pip3] onnxscript==0.1.0.dev20230830
[pip3] torch==2.2.0a0+git29048be
[pip3] torchvision==0.16.0a0+fb115c2
[pip3] triton==2.0.0
[conda] intel-extension-for-pytorch 2.2.0+gite7090c6 dev_0 <develop>
[conda] mkl 2022.1.0 hc2b9512_224
[conda] mkl-include 2023.1.0 pypi_0 pypi
[conda] mkl-static 2023.1.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 2.2.0a0+git29048be dev_0 <develop>
[conda] torchvision 0.16.0a0+fb115c2 dev_0 <develop>
[conda] triton 2.0.0 pypi_0 pyp
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
187 | 111,476 |
[caffe2] avoid variable shadowing
|
fb-exported
|
Summary:
Some builds use -Wshadow and currently there is a compiler warning when building that file.
Code inspection shows that `torch::autograd::impl::get_view_autograd_meta` simply extracts information from the passed object, which is `const`. Therefore the returned views should be the same all the time, and we can fetch the view only once.
Test Plan:
CI
NOTE: please advise on a more comprehensive test plan.
Differential Revision: D50407625
| 5 |
188 | 111,475 |
[qnnpack] suppress empty translation unit warning
|
module: cpu, fb-exported, release notes: quantization
|
Summary: Spotted this while compiling on a Mac M1. The code in these files is gated behind #ifdef and requires SSE, so when building for ARM these files become empty.
Test Plan: CI
Differential Revision: D50407334
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 5 |
189 | 111,473 |
Rephrase sentence in "Why and when to use sparsity" for better understanding.
|
module: docs, triaged
|
### 📚 The doc issue
Current Sentence:
By default PyTorch stores [torch.Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor) stores elements contiguously physical memory.
### Suggest a potential alternative/fix
Should be (I feel it reads better this way):
By default PyTorch stores [torch.Tensor](https://pytorch.org/docs/stable/tensors.html#torch.Tensor) elements in physically contiguous memory.
cc @svekars @carljparker
| 0 |
190 | 111,471 |
test_learnable_forward_per_channel fails due to integer overflow
|
oncall: quantization
|
### 🐛 Describe the bug
`test_quantization.py` fails in `test_learnable_forward_per_channel` due to an integer overflow in the kernel code.
```
AssertionError: False is not true : Expected kernel forward function to have results match the reference forward function
Falsifying example: test_learnable_forward_per_channel_cpu(
X=(array([9.223372e+14], dtype=float32),
(array([1.]), array([0]), 0, torch.quint8)),
self=<quantization.core.test_workflow_ops.TestFakeQuantizeOps testMethod=test_learnable_forward_per_channel_cpu>,
)
```
I traced the issue to an integer overflow leading to different results in the kernel and reference function.
https://github.com/pytorch/pytorch/blob/a4391f085bff409dca93a8b3eff8e379f0ef8f68/test/quantization/core/test_workflow_ops.py#L812
returns a scale of `1.e-4`; the zero-point is (in my case) actually zero.
The reason is obvious when comparing the reference function and the kernel. The reference is called at https://github.com/pytorch/pytorch/blob/a4391f085bff409dca93a8b3eff8e379f0ef8f68/test/quantization/core/test_workflow_ops.py#L796
The [calculation](https://github.com/pytorch/pytorch/blob/a4391f085bff409dca93a8b3eff8e379f0ef8f68/torch/testing/_internal/common_quantized.py#L200) is `(torch.clamp(torch.round(X[i] * (1.0 / per_channel_scale[i]) + per_channel_zero_point[i]), quant_min, quant_max) - per_channel_zero_point[i]) * per_channel_scale[i]`
Plugging in the values, the argument to `clamp` becomes `9.223372e+14 * (1.0 / 1e-4)`.
Similarly, the actual kernel also rounds, clamps and converts the zero_point to int: https://github.com/pytorch/pytorch/blob/a4391f085bff409dca93a8b3eff8e379f0ef8f68/aten/src/ATen/native/quantized/FakeQuantPerChannelAffine.cpp#L147
So we get to [this kernel code:](https://github.com/pytorch/pytorch/blob/a4391f085bff409dca93a8b3eff8e379f0ef8f68/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L2702-L2708) `(std::fmin(std::fmax(static_cast<int64_t>(zero_point + std::nearbyint(self * inv_scale)), quant_min), quant_max) - zero_point) * scale;`
Here we see a similar "clamp", but with a cast to `int64_t` of the value (in this case) `9.223372e+14 * 1e4`; sure enough, `INT64_MAX` is ~`9.223372e+18`, hence we get **undefined behavior** here.
For the reference function I get a result of `0.0015` so it (correctly) clamps to `quant_max=15` (and multiplies by `scale=1.e-4`)
Since I get a result of zero from the kernel, I assume the UB manifests as a negative integer, which then gets clamped to `quant_min=0`.
Note that a similar issue exists in the [mask-calculation](https://github.com/pytorch/pytorch/blob/a4391f085bff409dca93a8b3eff8e379f0ef8f68/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L2694) and in the path for floating-point zero-points, where `std::lrint` is used, whose documentation states:
> If the result of the rounding is outside the range of the return type, [FE_INVALID](https://en.cppreference.com/w/cpp/numeric/fenv/FE_exceptions) is raised and an implementation-defined value is returned.
Given that `fmin/fmax` is used, the cast to `int64_t` can likely just be removed.
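A hedged illustration of the magnitude problem from the Python side (this is a sketch only, not the kernel's exact code path): the rounded value sits right at the edge of the int64 range, so the C++ cast is undefined behavior and the result is platform-dependent.
```python
# Sketch only: reproduces the magnitudes involved, not the kernel code itself.
import torch

x = torch.tensor(9.223372e14) * 1e4   # ~9.2e18, at the edge of the int64 range
print(torch.iinfo(torch.int64).max)   # 9223372036854775807
print(x.to(torch.int64))              # platform-dependent, not the clamped value
```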
### Versions
PyTorch 2.0.1 with examples above from current main (2.1.0)
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 0 |
191 | 111,470 |
Use lru_cache to cache indices data for bsr_scatter_mm.
|
open source, release notes: sparse
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111489
* __->__ #111470
* #110396
| 1 |
192 | 111,468 |
Set `CAFFE2_STATIC_LINK_CUDA` in installed cmake files
|
triaged, open source
|
May relate to: https://github.com/pytorch/pytorch/pull/82695
| 2 |
193 | 111,466 |
yolov5_train
|
feature, module: cuda, triaged, module: determinism
|
### 🐛 Describe the bug
File "D:\anaconda\envs\pytorch\lib\site-packages\torch\_tensor.py", line 487, in backward
torch.autograd.backward(
File "D:\anaconda\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: adaptive_avg_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
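As the error message itself suggests, a hedged workaround sketch (trading the hard determinism guarantee for a warning on this op):
```python
# Workaround sketch based on the error message's own suggestion: emit a warning
# instead of raising for ops without a deterministic implementation.
import torch

torch.use_deterministic_algorithms(True, warn_only=True)
```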
### Versions
1
cc @ptrblck @mruberry @kurtamohler
| 1 |
194 | 111,464 |
new_qtensor support privateuseone allocator.
|
triaged, open source, release notes: quantization
|
I want to create a quantized tensor through `PerTensorAffineQuantizer`, but I found that it throws an error because of the missing handling for PrivateUse1.
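For context, a minimal sketch of the kind of call that goes through the per-tensor-affine quantizer and `new_qtensor` on the C++ side (shown on CPU; per the report, the missing PrivateUse1 handling surfaces when the same call targets a PrivateUse1 backend):
```python
# CPU sketch only; on a PrivateUse1 device this path reportedly errors out today.
import torch

q = torch.quantize_per_tensor(torch.randn(4), scale=0.1, zero_point=0, dtype=torch.quint8)
print(q.q_scale(), q.q_zero_point())
```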
| 1 |
195 | 111,463 |
Add tests for strided layout in factory functions
|
triaged, open source, topic: not user facing
|
Fixes #111222
This pull request adds tests for factory functions that create tensors with a strided layout. The tests are added to the `test_ops.py` file and check the behavior of the `empty`, `zeros`, `ones`, and `rand` factory functions when used with the `layout=torch.strided` argument.
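A minimal sketch of the kind of check the added tests exercise (shapes and dtypes here are placeholders, not the PR's exact cases):
```python
# Sketch of the layout check; the actual tests in test_ops.py are more thorough.
import torch

for factory in (torch.empty, torch.zeros, torch.ones, torch.rand):
    t = factory(2, 3, layout=torch.strided)
    assert t.layout == torch.strided
```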
| 3 |
196 | 111,462 |
DISABLED test_meta_inplace_addmm_decomposed_cpu_complex128 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_inplace_addmm_decomposed_cpu_complex128&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17802491680).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_inplace_addmm_decomposed_cpu_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
197 | 111,461 |
[dynamo] allow DeviceMesh variable desugar ProcessGroup
|
ciflow/trunk, module: dynamo, ciflow/inductor, release notes: distributed (dtensor)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111461
As titled: when calling get_dim_groups, a DeviceMeshVariable should be de-sugared to a ProcessGroup variable in dynamo.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
Differential Revision: [D50400451](https://our.internmc.facebook.com/intern/diff/D50400451)
| 2 |
198 | 111,459 |
[not for review] testing memory planning
|
module: inductor, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111459
* #111402
* #111587
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
199 | 111,456 |
torch.autocast() hangs on CPUs
|
triage review, module: cpu, module: amp (automated mixed precision)
|
### 🐛 Describe the bug
Hi PyTorch Team,
First and foremost, thank you for your invaluable contributions to the ML community! I recently encountered a performance issue when using torch.autocast() on CPUs. In FP32 mode, everything is fine. When turning on AMP with BF16, the code seems to get stuck and never finishes. Here is a quick repro:
```python3
import time
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer
SEQUENCE_LENGTH = 512
def generate_random_batch(tokenizer, batch_size, sequence_length=SEQUENCE_LENGTH):
"""Generate a batch of random sequences for testing."""
return tokenizer(
[" ".join(["I am a test string."] * sequence_length) for _ in range(batch_size)],
padding="max_length",
truncation=True,
max_length=SEQUENCE_LENGTH,
return_tensors="pt",
)
def benchmark_throughput(model, tokenizer, mixed_precision=False, batch_size=32, num_iterations=3):
"""Test and print the throughput of the model."""
input_data = generate_random_batch(tokenizer, batch_size).to(device)
# Warm up
for _ in range(3):
with torch.no_grad():
# PyTorch only supports bfloat16 mixed precision on CPUs.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16, enabled=mixed_precision):
_ = model(**input_data)
start_time = time.time()
for _ in range(num_iterations):
with torch.no_grad():
with torch.autocast(device_type="cpu", dtype=torch.bfloat16, enabled=mixed_precision):
_ = model(**input_data)
end_time = time.time()
elapsed_time = end_time - start_time
sequences_per_second = (batch_size * num_iterations) / elapsed_time
latency = 1000 / sequences_per_second * batch_size
return sequences_per_second, latency
if __name__ == "__main__":
device = torch.device("cpu")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base").to(device).eval()
model = torch.compile(model)
throughput, latency = benchmark_throughput(model, tokenizer, mixed_precision=False)
print(f"FP32 Throughput: {throughput:.2f} sequences/second, Latency: {latency:.2f} ms")
throughput, latency = benchmark_throughput(model, tokenizer, mixed_precision=True)
print(f"Mixed Precision Throughput: {throughput:.2f} sequences/second, Latency: {latency:.2f} ms")
```
On my Intel i7-13700K, FP32 throughput is 4.98 sequences/second. Enabling AMP with autocast seems to hang the code: one core spikes to 100% and the script still had not finished after waiting for 30 minutes. Would appreciate a further look into this issue - thanks in advance!
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.113.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 5400.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 24 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] pytorch-lightning==2.1.0
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] Could not collect
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel
| 8 |
200 | 111,454 |
[ONNX][dynamo] Failed to export cumsum with dtype=float16
|
module: onnx, triaged, module: half
|
### 🐛 Describe the bug
This is exposed by the test case `test_output_match` in test/onnx/test_fx_op_consistency.py. When exporting cumsum with inputs of dtype=torch.float16 to an ONNX graph, the following error occurs:
`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/allocation_planner.cc:230 int& onnxruntime::PlannerImpl::UseCount(onnxruntime::OrtValueIndex) n >= 0 && static_cast<size_t>(n) < ort_value_info_.size() was false. invalid value index: -1 against size 5`
A simple reproducer:
```
import torch
import onnxruntime
import io
import numpy as np
# import intel_extension_for_pytorch as ipex
from typing import (
Any,
Callable,
Mapping,
Optional,
Sequence,
Union,
)
from torch.types import Number
_NumericType = Union[Number, torch.Tensor, np.ndarray]
_InputArgsType = Optional[
Union[torch.Tensor, int, float, bool, Sequence[Any], Mapping[str, Any]]
]
_OutputsType = Sequence[_NumericType]
def run_ort(
onnx_model: Union[str, torch.onnx.ExportOutput],
pytorch_inputs: Sequence[_InputArgsType],
) -> _OutputsType:
"""Run ORT on the given ONNX model and inputs
Used in test_fx_to_onnx_with_onnxruntime.py
Args:
onnx_model (Union[str, torch.onnx.ExportOutput]): Converter ONNX model
pytorch_inputs (Sequence[_InputArgsType]): The given torch inputs
Raises:
AssertionError: ONNX and PyTorch should have the same input sizes
Returns:
_OutputsType: ONNX model predictions
"""
if isinstance(onnx_model, torch.onnx.ExportOutput):
buffer = io.BytesIO()
onnx_model.save(buffer)
ort_model = buffer.getvalue()
else:
ort_model = onnx_model
# Suppress floods of warnings from ONNX Runtime
session_options = onnxruntime.SessionOptions()
session_options.log_severity_level = 3 # Error
session = onnxruntime.InferenceSession(
ort_model, providers=["CPUExecutionProvider"], sess_options=session_options
)
input_names = [ort_input.name for ort_input in session.get_inputs()]
if len(input_names) != len(pytorch_inputs):
raise AssertionError(
f"Expected {len(input_names)} inputs, got {len(pytorch_inputs)}"
)
ort_input = {k: v.cpu().numpy() for k, v in zip(input_names, pytorch_inputs)}
return session.run(None, ort_input)
class SingleOpModel(torch.nn.Module):
"""Test model to wrap around a single op for export."""
def __init__(self, op, kwargs):
super().__init__()
self.operator = op
self.kwargs = kwargs
def forward(self, *args):
return self.operator(*args, **self.kwargs)
set_dtype = torch.float16
input = (torch.randn((5, 5, 5), dtype=set_dtype), 1)
kwargs = {"dtype": set_dtype}
model = SingleOpModel(torch.cumsum, kwargs)
ref_input_kwargs = {}
export_output = torch.onnx.dynamo_export(
model,
*input,
**ref_input_kwargs,
export_options=torch.onnx.ExportOptions(
op_level_debug=False,
dynamic_shapes=False,
diagnostic_options=torch.onnx.DiagnosticOptions(
verbosity_level=10
),
),
)
onnx_format_args = export_output.adapt_torch_inputs_to_onnx(
*input, **ref_input_kwargs
)
ort_outputs = run_ort(export_output, onnx_format_args)
```
### Versions
PyTorch version: 2.2.0a0+git5fa0c13
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.5.2111 (x86_64)
GCC version: (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)
Clang version: 16.0.0 (Red Hat 16.0.0-2.module_el8+405+25122a8c)
CMake version: version 3.21.4
Libc version: glibc-2.28
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.16.0-rc1-intel-next-00543-g5867b0a2a125-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] flake8==3.8.2
[pip3] flake8-bugbear==20.1.4
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.3.0
[pip3] flake8-executable==2.0.4
[pip3] flake8-pyi==20.5.0
[pip3] intel-extension-for-pytorch==2.2.0+gite7090c6
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.15.1
[pip3] onnxscript==0.1.0.dev20230830
[pip3] torch==2.2.0a0+git46b6478
[pip3] torchvision==0.16.0a0+fb115c2
[pip3] triton==2.0.0
[conda] intel-extension-for-pytorch 2.2.0+gite7090c6 dev_0 <develop>
[conda] mkl 2022.1.0 hc2b9512_224
[conda] mkl-include 2023.1.0 pypi_0 pypi
[conda] mkl-static 2023.1.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 2.2.0a0+git46b6478 dev_0 <develop>
[conda] torchvision 0.16.0a0+fb115c2 dev_0 <develop>
[conda] triton 2.0.0 pypi_0 pypi
| 0 |