Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
2,301 | 103,581 |
Passing dict in datapipe/dataset will have memory leak problem
|
triaged, module: data
|
### 🐛 Describe the bug
Passing a dict through a datapipe or dataset causes a memory leak.
```python
from copy import deepcopy
import gc
from memory_profiler import profile
import torch
from torch.utils.data import DataLoader
from torchdata.datapipes.iter import IterableWrapper
from torchdata.dataloader2 import DataLoader2
def build_dp1(num_batch):
    item_list = list()
    for idx in range(num_batch):
        item = {
            "id": idx,
            "clean": {
                "path": str(idx),
                "id": idx,
            },
            "noisy": {
                "path": str(idx),
                "id": idx,
            },
        }
        item_list.append(item)
    return IterableWrapper(item_list)

def build_dp2(num_batch):
    item_list = list()
    for idx in range(num_batch):
        item = {
            "id": idx,
            "clean_path": str(idx),
            "clean_id": idx,
            "noisy_path": str(idx),
            "noisy_id": idx,
        }
        item_list.append(item)
    return IterableWrapper(item_list)

def add_audio1(item):
    item["clean"]["audio"] = torch.randn([5000, 10])
    item["noisy"]["audio"] = torch.randn([5000, 10])
    return item

def add_audio2(item):
    new_item = deepcopy(item)
    new_item["clean"]["audio"] = torch.randn([5000, 10])
    new_item["noisy"]["audio"] = torch.randn([5000, 10])
    return new_item

def add_audio3(item):
    item["clean_audio"] = torch.randn([5000, 10])
    item["noisy_audio"] = torch.randn([5000, 10])
    return item

class MyDataset1(torch.utils.data.Dataset):
    def __init__(self, datalen):
        super().__init__()
        self.datalen = datalen

    def __getitem__(self, index):
        item = {
            "id": index,
            "clean_path": str(index),
            "clean_id": index,
            "clean_audio": torch.randn([5000, 10]),
            "noisy_path": str(index),
            "noisy_id": index,
            "noisy_audio": torch.randn([5000, 10]),
        }
        return item

    def __len__(self):
        return self.datalen

class MyDataset2(torch.utils.data.Dataset):
    def __init__(self, datalen):
        super().__init__()
        self.datalen = datalen

    def __getitem__(self, index):
        return torch.randn([5000, 10]), torch.randn([5000, 10])

    def __len__(self):
        return self.datalen

@profile
def datapipe(num_batch):
    dp = build_dp2(num_batch).map(add_audio3)
    dl = DataLoader2(dp)
    for i, batch in enumerate(dl):
        pass
    pass
    del dp, dl

@profile
def dataset1(num_batch):
    ds = MyDataset1(num_batch)
    dl = DataLoader(ds)
    for i, batch in enumerate(dl):
        pass
    pass
    del ds, dl

@profile
def dataset2(num_batch):
    ds = MyDataset2(num_batch)
    dl = DataLoader(ds)
    for i, batch in enumerate(dl):
        pass
    pass
    del ds, dl

num_batch = 1000
gc.collect()
datapipe(num_batch)
gc.collect()
dataset1(num_batch)
gc.collect()
dataset2(num_batch)
gc.collect()

num_batch = 5000
gc.collect()
datapipe(num_batch)
gc.collect()
dataset1(num_batch)
gc.collect()
dataset2(num_batch)
gc.collect()
```
output:
```
Filename: /home/haoyu.tang/uim_se/test_datapipes.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
88 328.1 MiB 328.1 MiB 1 @profile
89 def datapipe(num_batch):
90 328.4 MiB 0.3 MiB 1 dp = build_dp2(num_batch).map(add_audio3)
91 330.6 MiB 2.2 MiB 1 dl = DataLoader2(dp)
92 714.3 MiB 383.6 MiB 1001 for i, batch in enumerate(dl):
93 714.3 MiB 0.0 MiB 1000 pass
94 714.3 MiB 0.0 MiB 1 pass
95 714.3 MiB 0.0 MiB 1 del dp, dl
Filename: /home/haoyu.tang/uim_se/test_datapipes.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
97 714.4 MiB 714.4 MiB 1 @profile
98 def dataset1(num_batch):
99 714.4 MiB 0.0 MiB 1 ds = MyDataset1(num_batch)
100 714.4 MiB 0.0 MiB 1 dl = DataLoader(ds)
101 716.9 MiB 2.5 MiB 1001 for i, batch in enumerate(dl):
102 716.9 MiB 0.0 MiB 1000 pass
103 716.9 MiB 0.0 MiB 1 pass
104 716.9 MiB 0.0 MiB 1 del ds, dl
Filename: /home/haoyu.tang/uim_se/test_datapipes.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
106 716.9 MiB 716.9 MiB 1 @profile
107 def dataset2(num_batch):
108 716.9 MiB 0.0 MiB 1 ds = MyDataset2(num_batch)
109 716.9 MiB 0.0 MiB 1 dl = DataLoader(ds)
110 716.9 MiB 0.0 MiB 1001 for i, batch in enumerate(dl):
111 716.9 MiB 0.0 MiB 1000 pass
112 716.9 MiB 0.0 MiB 1 pass
113 716.9 MiB 0.0 MiB 1 del ds, dl
Filename: /home/haoyu.tang/uim_se/test_datapipes.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
88 716.9 MiB 716.9 MiB 1 @profile
89 def datapipe(num_batch):
90 717.0 MiB 0.0 MiB 1 dp = build_dp2(num_batch).map(add_audio3)
91 721.6 MiB 4.6 MiB 1 dl = DataLoader2(dp)
92 2254.1 MiB 1532.6 MiB 5001 for i, batch in enumerate(dl):
93 2254.1 MiB 0.0 MiB 5000 pass
94 2254.1 MiB 0.0 MiB 1 pass
95 2252.1 MiB -2.0 MiB 1 del dp, dl
Filename: /home/haoyu.tang/uim_se/test_datapipes.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
97 2251.5 MiB 2251.5 MiB 1 @profile
98 def dataset1(num_batch):
99 2251.5 MiB 0.0 MiB 1 ds = MyDataset1(num_batch)
100 2251.5 MiB 0.0 MiB 1 dl = DataLoader(ds)
101 2251.5 MiB -7642068.4 MiB 5001 for i, batch in enumerate(dl):
102 2251.5 MiB -7640538.2 MiB 5000 pass
103 721.3 MiB -1530.2 MiB 1 pass
104 721.3 MiB 0.0 MiB 1 del ds, dl
Filename: /home/haoyu.tang/uim_se/test_datapipes.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
106 721.3 MiB 721.3 MiB 1 @profile
107 def dataset2(num_batch):
108 721.3 MiB 0.0 MiB 1 ds = MyDataset2(num_batch)
109 721.3 MiB 0.0 MiB 1 dl = DataLoader(ds)
110 721.3 MiB 0.0 MiB 5001 for i, batch in enumerate(dl):
111 721.3 MiB 0.0 MiB 5000 pass
112 721.3 MiB 0.0 MiB 1 pass
113 721.3 MiB 0.0 MiB 1 del ds, dl
```
It is clear that passing a dict of tensors leaks memory, while a list/tuple of tensors does not.
I use dicts of tensors in my model training, and training failed multiple times because of this memory leak. I also tried TensorDict (https://pytorch.org/rl/tensordict/), but it cannot contain strings, and I need strings while passing data through my datapipes (a string-to-tensor encoding step happens in one of the datapipes).
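An untested workaround sketch based on the measurements above (assumption: the accumulation is specific to dict-valued samples, so flattening each sample into a tuple before it leaves the map should sidestep it; names follow the repro script):
```python
# Hypothetical workaround: emit a flat tuple instead of a dict from the map function,
# mirroring dataset2, which showed no memory growth in the profiles above.
def add_audio_tuple(item):
    return (
        item["id"],
        item["clean_path"], item["clean_id"], torch.randn([5000, 10]),
        item["noisy_path"], item["noisy_id"], torch.randn([5000, 10]),
    )

dp = build_dp2(num_batch).map(add_audio_tuple)
dl = DataLoader2(dp)
```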
copy from: https://github.com/pytorch/data/issues/1183
### Versions
torch version: 2.0.0
torchdata version: 0.6.0
cc @VitalyFedyunin @ejguan @dzhulgakov
| 3 |
2,302 | 103,580 |
Support ByteTensor and ShortTensor for nn.Embedding and nn.EmbeddingBag
|
module: nn, triaged, enhancement, actionable, topic: improvements
|
### 🚀 The feature, motivation and pitch
Torch's embedding layers only accept int32 and int64 as input. However, for sequences with a small number of distinct possible tokens (e.g., ASCII character embeddings or DNA sequences) int8 or int16 are sufficient to index all of the tokens. Currently, modeling long sequences that consist of only a few possible tokens means wasting a lot of GPU memory and being forced to use smaller batch sizes than might be desirable.
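A minimal sketch of the current workaround, assuming indices are stored compactly and only widened at lookup time (the widened copy lives only for the current batch):
```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=128, embedding_dim=64)      # e.g. an ASCII vocabulary
tokens = torch.randint(0, 128, (4, 4096), dtype=torch.uint8)  # compact int8 storage
out = emb(tokens.long())  # widen to int64 just for the lookup; long-term storage stays int8
```
With native int8/int16 support, this temporary widening (and its memory cost on long sequences) would not be needed.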
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 6 |
2,303 | 103,578 |
ImportError: undefined symbol: cublasSetWorkspace_v2, version libcublas.so.11
|
oncall: binaries
|
### 🐛 Describe the bug
I created a new conda environment with python=3.10 and then installed with the command
"pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118"
However, torch does not seem to be installed correctly, since `import torch` raises an error.
import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/dataln/morong/miniconda3/envs/py310/lib/python3.10/site-packages/torch/__init__.py", line 229, in <module>
from torch._C import * # noqa: F403
ImportError: /root/dataln/morong/miniconda3/envs/py310/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so: undefined symbol: cublasSetWorkspace_v2, version libcublas.so.11
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.5.2111 (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28
Is CUDA available: N/A
CUDA runtime version: 11.0.194
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: A100-SXM-80GB
GPU 1: A100-SXM-80GB
GPU 2: A100-SXM-80GB
GPU 3: A100-SXM-80GB
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.4
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.4
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.4
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.4
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.4
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.4
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2899.998
BogoMIPS: 5799.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torchaudio 2.0.2+cu118 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
cc @seemethere @malfet
| 0 |
2,304 | 103,575 |
add default argument device type api
|
triaged, open source, Stale, topic: not user facing
|
Fixes #103828
1. For many operators (such as pin_memory), the device argument defaults to cuda if not given; for other devices we currently have to pass an extra device_type argument compared to cuda. We therefore add an API to set the default argument device once, at the beginning, to keep usage consistent with cuda.
2. Some APIs defined in Python gain a device_type argument whose default value is cuda, so that more devices (e.g. the privateuse1 device) can be supported. We use this API to fetch the default device when device_type is not given, again keeping usage consistent with cuda.
| 7 |
2,305 | 103,573 |
[ONNX] Support aten::mT
|
module: onnx, low priority, triaged, OSS contribution wanted
|
### 🚀 The feature, motivation and pitch
Add onnx export for aten::mT
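A possible interim workaround sketch (assumption: inputs are at least 2-D, where `x.mT` is equivalent to `x.transpose(-2, -1)`, which already has an ONNX symbolic):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        # return x.mT                # aten::mT -- not exportable yet (this issue)
        return x.transpose(-2, -1)   # equivalent for >=2-D inputs, exports as ONNX Transpose

torch.onnx.export(M(), torch.randn(2, 3, 4), "mt_workaround.onnx", opset_version=13)
```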
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
2,306 | 103,572 |
[ONNX] Support aten::linalg_solve_triangular
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
Add onnx export support for aten::linalg_solve_triangular
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
2,307 | 103,571 |
[ONNX] Support aten::linalg_cholesky_ex
|
module: onnx, triaged
|
### 🚀 The feature, motivation and pitch
Add support for onnx export of aten::linalg_cholesky_ex
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
2,308 | 103,570 |
File Missing When i build with C++
|
module: cpp, triaged
|
I installed the PyTorch C++ distribution (libtorch) and added it in CMakeLists.txt.
1) Initially it gave "torch/torch.h: No such file or directory", so I set CMAKE_PREFIX_PATH to the libtorch directory.
2) When I try to build, it shows "ATen/Tensor.h: No such file or directory". I checked the headers and there is no file named Tensor.h.
3) I tried different NVIDIA CUDA toolkit versions for the CMake build.
4) It still shows "ATen/Tensor.h: No such file or directory"; there is no file named Tensor.h.
Which version is best suited for running the smallest application?
#include <iostream>
#include <torch/torch.h>

int main() {
    torch::Tensor tensor = torch::rand({2, 3}, torch::kCUDA);
    std::cout << tensor << std::endl;
}
cc @jbschlosser
| 6 |
2,309 | 103,553 |
Request: flag to know model is compiled after torch.compile()
|
triaged, enhancement, oncall: pt2
|
### 🚀 The feature, motivation and pitch
As a user, it would be great to have a flag that reports whether the model has been compiled successfully, for example:
model = torch.compile(model)
print(model.is_compile)
True
Even if the flag does not exist before compilation and is only created afterwards, that would be great.
Thanks!
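A minimal sketch of what is possible today, assuming one is willing to rely on the internal wrapper class rather than a stable public flag:
```python
import torch
import torch.nn as nn
from torch._dynamo.eval_frame import OptimizedModule  # internal class, not a stable API

model = nn.Linear(4, 4)
compiled = torch.compile(model)

print(isinstance(model, OptimizedModule))     # False
print(isinstance(compiled, OptimizedModule))  # True
print(hasattr(compiled, "_orig_mod"))         # True -- the wrapped original module
```
A public `is_compile`-style flag would make this check possible without touching internals.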
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
2,310 | 103,552 |
Inject detailed NVTX markers into the Inductor Triton generated kernels
|
triaged, oncall: pt2, module: inductor
|
### 🚀 The feature, motivation and pitch
The inductor/triton generated kernels have limited NVTX marker support, and the kernel names don't provide much information about which operators they implement. Since many of the generated kernels represent fused ops, it would be useful to be able to see from the profiler which high level ops contributed to the fused kernels. There is an existing attribute in the inductor graph IR called origin_node. Origin node indicates which high level op the node in the IR is associated with. For fused kernels there is a list of origin_nodes which describes which high level ops contributed to the fused kernel. This is really useful information for understanding how the fusion algorithm works. I am proposing to capture this information for each triton kernel and inject a marker into the generated code so it appears in the profiler at runtime. The markers will use the record_function python api.
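A minimal sketch of the marker mechanism using the public `record_function` API (the `triton_info:` string content is illustrative of the proposal, not an existing feature):
```python
import torch
from torch.profiler import profile, record_function

def fused_kernel_stub(x):
    # Stand-in for a generated Triton kernel call, annotated with its origin ops.
    with record_function("triton_info: module=layer1, op=aten.addmm, seq_id=3"):
        return x * 2 + 1

with profile() as prof:
    fused_kernel_stub(torch.randn(8))
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```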
The screenshot below shows an example of the markers I implemented. They are prefixed with **triton_info:** and show details about each of the origin ops, including the module name, op type and sequence id.

### Alternatives
Not really, some of this information is available in the debug logs but it is difficult to map the kernel instance directly to the ops in the original fx graphs.
### Additional context
This is related to #102375 which adds fwd and bwd sequence id tracking to aot autograd. The sequence id of each op is included in the triton_info nvtx markers.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @davidberard98
| 0 |
2,311 | 103,539 |
torch.fx.passes.split_module.split_module doesn't support dynamic shapes
|
good first issue, triaged, module: dynamic shapes
|
### 🐛 Describe the bug
Steps to reproduce:
1. Enable dynamic shapes on test_multiple_aot_autograd_calls_dupe_args (deleting the config patch)
2. Test fails with
```
File "/data/users/ezyang/b/pytorch/torch/_dynamo/output_graph.py", line 857, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/users/ezyang/b/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_dynamo/output_graph.py", line 913, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/data/users/ezyang/b/pytorch/torch/_dynamo/output_graph.py", line 909, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/ezyang/b/pytorch/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/ezyang/b/pytorch/test/dynamo/test_aot_autograd.py", line 688, in test_compile
submod_1_inps = split_gm.submod_0(*example_inps)
File "/data/users/ezyang/b/pytorch/torch/fx/graph_module.py", line 662, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/graph_module.py", line 281, in __call__
raise e
File "/data/users/ezyang/b/pytorch/torch/fx/graph_module.py", line 271, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/data/users/ezyang/b/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
torch._dynamo.exc.BackendCompilerFailed: backend='test_compile' raised:
TypeError: forward() takes 2 positional arguments but 3 were given
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
cc @wconstab
### Versions
main
| 1 |
2,312 | 103,530 |
Deduplicate the operands passed into torch.cond after dynamo tracing.
|
triaged
|
### 🚀 The feature, motivation and pitch
Currently, we lift the free variables inside torch.cond branches as extra inputs to the branch graph. As a result, for simplicity, we naively extend the torch.cond operands list with the lifted free variables from each branch. For example, consider `cond(pred, true_fn, false_fn, [x])` where `true_fn` has `a, b, c` as free variables and `false_fn` has `a, b, d` as free variables. Then dynamo will rewrite it as `cond(pred, true_fn, false_fn, [x, a, b, c, a, b, d])`. Ideally, we should detect this and deduplicate the operands list.
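A minimal sketch of the deduplication step (hypothetical helper, not an existing dynamo function): collapse repeated operands by object identity and keep a remap so each branch graph can still locate its original inputs.
```python
def dedupe_operands(operands):
    deduped, remap, seen = [], [], {}
    for t in operands:
        if id(t) not in seen:            # dedupe by object identity
            seen[id(t)] = len(deduped)
            deduped.append(t)
        remap.append(seen[id(t)])
    return deduped, remap

# [x, a, b, c, a, b, d] collapses to [x, a, b, c, d] with remap [0, 1, 2, 3, 1, 2, 4]
x, a, b, c, d = (object() for _ in range(5))
deduped, remap = dedupe_operands([x, a, b, c, a, b, d])
assert deduped == [x, a, b, c, d] and remap == [0, 1, 2, 3, 1, 2, 4]
```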
### Alternatives
_No response_
### Additional context
_No response_
| 3 |
2,313 | 103,518 |
`gradcheck` produces false positives with sparse inputs when `masked=False`.
|
module: sparse, module: autograd, triaged
|
### 🐛 Describe the bug
As per title. As an example, let's consider the `sampled_addmm` method which is semantically equivalent to
`sampled_addmm(s, m1, m2, alpha, beta) := alpha * (m1 @ m2).sparse_mask(s) + beta * s`.
If we inspect the subgradient of `sampled_addmm` wrt `s` in `derivatives.yaml`, we find the following:
```
- name: sparse_sampled_addmm(Tensor self, Tensor mat1, Tensor mat2, *, Scalar beta=1, Scalar alpha=1) -> Tensor
self: maybe_multiply(grad, beta.conj())
```
Note that under the assumption of masked semantics this formula is correct, even though it does not account for the `(mat1 @ mat2).sparse_mask(self)` part. This follows from the sparse semantics, which imply `self.indices == (self + perturbation_of_self).indices`. Hence we can expect `gradcheck` to work with `masked=True`:
```python
In [1]: import torch
In [2]: x = torch.eye(3, dtype=torch.double).to_sparse_csr().requires_grad_(True)
In [3]: y = torch.rand(3, 3, dtype=torch.double)
In [4]: z = torch.rand(3, 3, dtype=torch.double)
In [5]: torch.autograd.gradcheck(lambda x: torch.sparse.sampled_addmm(x, y, z).to_dense(masked_grad=True), (x,), masked=True)
Out[5]: True
```
However, the situation is reversed for `masked=False`. In this case the backward formula for `self` should take `alpha * (m1 @ m2).sparse_mask(self)` into consideration, so it is expected for `gradcheck` with `masked=False` to fail.
This, however, does not happen:
```python
In [6]: torch.autograd.gradcheck(lambda x: torch.sparse.sampled_addmm(x, y, z).to_dense(masked_grad=False), (x,), masked=False)
Out[6]: True
```
As per @pearu's insight, this happens during the densification process in gradcheck. Namely, it sometimes expands `self.indices` to full dimensions while producing a new sparse input `self_densified`. Unfortunately, `sampled_addmm(self)` and `sampled_addmm(self_densified)` are not equivalent in backward, because `sampled_addmm(self_densified)` should pass gradcheck with either `masked=True` or `masked=False` since its mask is the whole space.
### Versions
Current master.
cc @alexsamardzic @pearu @cpuhrsch @amjames @bhosmer @ezyang @albanD @zou3519 @gqchen @soulitzer @Lezcano @Varal7
| 14 |
2,314 | 103,505 |
[functorch] [FakeTensorMode, meta tensor] + aot_autograd Bug.
|
triaged, oncall: pt2, module: fakeTensor, module: aotdispatch
|
### 🐛 Describe the bug
I am trying to use FakeTensor and aot_autograd to capture the computation graph, but I ran into the errors below. Can anyone help me out?
# FakeTensorMode case
In this case, I got errors like `TypeError: Multiple dispatch failed for 'torch._ops.aten.t.default'; all __torch_dispatch__ handlers returned NotImplemented`.
```python
import torch
from torch.nn import Linear
from torchdistx.fake import fake_mode
from torch._subclasses.fake_tensor import FakeTensorMode, FakeTensor
from torch._functorch.aot_autograd import aot_export_joint_simple

class TestModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = Linear(1024, 4096)
        self.linear2 = Linear(4096, 1024)

    def forward(self, x):
        y = self.linear(x)
        z = self.linear2(y)
        # loss = torch.sum(z)
        return tuple([z])

with FakeTensorMode():
    sample_input = torch.randn(4, 512, 1024)
    loss = torch.rand(4, 512, 4096)
    model = TestModel()
    z = model(sample_input)
    graph_module = aot_export_joint_simple(model, tuple([sample_input]), trace_joint=True)
    print(graph_module)
```
```
[2023-06-13 20:10:10,474] torch.fx.experimental.proxy_tensor.__not_implemented: [DEBUG] ProxyTensorMode tensors without proxy had unrecognized subclasses: [<class 'torch._subclasses.fake_tensor.FakeTensor'>]
[2023-06-13 20:10:10,474] torch._subclasses.fake_tensor.__not_implemented: [DEBUG] FakeTensor mode already active: <torch._subclasses.fake_tensor.FakeTensorMode object at 0x7f91b9efbfa0> in [<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7f91b9efbfa0>, <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7f91c24e23a0>]
Traceback (most recent call last):
File "/Users/connolly/Documents/GitHub/Autoplanner/test/fake_tensor_bug_issue.py", line 28, in <module>
graph_module = aot_export_joint_simple(model, tuple([sample_input]), trace_joint=True)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3960, in aot_export_joint_simple
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 4050, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3262, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2083, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2263, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1500, in aot_dispatch_base_graph
fw_module = create_functionalized_graph(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1387, in create_functionalized_graph
fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 769, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 463, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 810, in trace
(self.create_arg(fn(*args)),),
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 480, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1379, in fwd_helper
return functionalized_f_helper(*args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1329, in functionalized_f_helper
f_outs = fn(*f_args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1128, in inner_fn
outs = fn(*args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3374, in flat_fn
tree_out = fn(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 788, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 429, in call_module
return forward(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 781, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/connolly/Documents/GitHub/Autoplanner/test/fake_tensor_bug_issue.py", line 15, in forward
y = self.linear(x)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 788, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 429, in call_module
return forward(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 781, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
TypeError: Multiple dispatch failed for 'torch._ops.aten.t.default'; all __torch_dispatch__ handlers returned NotImplemented:
- mode object <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7f91c24e23a0>
- tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'>
```
# torchdistx fake_mode case
If I replace `with FakeTensorMode():` with `with fake_mode():` from torchdistx, I get the errors below:
```
Traceback (most recent call last):
File "/Users/connolly/Documents/GitHub/Autoplanner/test/fake_tensor_bug_issue.py", line 28, in <module>
graph_module = aot_export_joint_simple(model, tuple([sample_input]), trace_joint=True)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3960, in aot_export_joint_simple
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 4050, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3201, in create_aot_dispatcher_function
fake_flat_args = process_inputs(flat_args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3199, in process_inputs
return [convert(idx, x) for idx, x in enumerate(flat_args)]
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3199, in <listcomp>
return [convert(idx, x) for idx, x in enumerate(flat_args)]
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3197, in convert
return fake_mode.from_tensor(x, static_shapes=False)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1588, in from_tensor
return self.fake_tensor_converter(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 341, in __call__
return self.from_real_tensor(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 294, in from_real_tensor
out = self.meta_converter(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 531, in __call__
r = self.meta_tensor(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_subclasses/meta_utils.py", line 410, in meta_tensor
s = t.untyped_storage()
NotImplementedError: Cannot access storage of FakeTensorImpl
```
# Meta Tensor case
In the meta tensor case, the code below works for small models, but for some complicated models like BERT it raises the error shown below.
```python
from transformers import BertModel, BertConfig

with torch.device("meta"):
    sample_input = torch.randint(0, 30522, [4, 512])
    model = BertModel(BertConfig())
    z = model(sample_input)
    graph_module = aot_export_module(model, tuple([sample_input]), output_loss_index=0, trace_joint=True)
    print(graph_module)
```
```
Traceback (most recent call last):
File "/Users/connolly/Documents/GitHub/Autoplanner/test/fake_tensor_bug_issue.py", line 29, in <module>
graph_module = aot_export_module(model, tuple([sample_input]),output_loss_index=0, trace_joint=True)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3868, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 4050, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3212, in create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 733, in inner
flat_f_outs = f(*flat_f_args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3374, in flat_fn
tree_out = fn(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3822, in fn_to_trace
out = functional_call(*args)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 3347, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1013, in forward
embedding_output = self.embeddings(
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 238, in forward
embeddings = self.dropout(embeddings)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/modules/dropout.py", line 59, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/functional.py", line 1267, in dropout
return handle_torch_function(dropout, (input,), input, p=p, training=training, inplace=inplace)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/overrides.py", line 1541, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/utils/_device.py", line 76, in __torch_function__
return func(*args, **kwargs)
File "/Users/connolly/opt/anaconda3/envs/astropy/lib/python3.9/site-packages/torch/nn/functional.py", line 1270, in dropout
return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
RuntimeError: 0 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/boxing/KernelFunction.cpp":19, please report a bug to PyTorch. fallthrough_kernel was executed but it should have been short-circuited by the dispatcher. This could occur if you registered a fallthrough kernel as a override for a specific operator (as opposed to a backend fallback); this is NOT currently supported, and we do not intend to add support for it in the near future. If you do find yourself in need of this, let us know in the bug tracker.
```
### Versions
PyTorch version: 2.1.0.dev20230612
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.24.0
Libc version: N/A
Python version: 3.9.13 (main, Aug 25 2022, 18:29:29) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.23.2
[pip3] torch==2.1.0.dev20230612
[pip3] torchdistx==0.3.0.dev0+cpu
[conda] functorch 0.2.1 pypi_0 pypi
[conda] numpy 1.23.2 pypi_0 pypi
[conda] torch 2.1.0.dev20230612 pypi_0 pypi
[conda] torchdistx 0.3.0.dev0+cpu pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
2,315 | 103,499 |
CUBLAS_WORKSPACE_CONFIG can not be parsed
|
triaged, module: cublas
|
### 🐛 Describe the bug
The following errors occur:
python3.8/site-packages/torch/nn/modules/linear.py:114: UserWarning: Could not parse CUBLAS_WORKSPACE_CONFIG, using default workspace size of 8519680 bytes. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/cuda/CublasHandlePool.cpp:56.)
lib/python3.8/site-packages/torch/autograd/__init__.py:200: UserWarning: Could not parse CUBLAS_WORKSPACE_CONFIG, using default workspace size of 8519680 bytes. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/cuda/CublasHandlePool.cpp:56.)
The warning is implemented in PyTorch itself, but I find it impossible to get rid of: even when I set the CUBLAS_WORKSPACE_CONFIG variable myself, the warning still occurs.
This does not seem to be a problem with the previous major version of PyTorch.
The warning is annoying because it gets spammed in our logs, which makes it difficult to find other warnings.
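A minimal sketch of how the variable is usually set, assuming one of the two values documented for deterministic cuBLAS workspaces and that it must be in the environment before the first cuBLAS call:
```python
import os
# ":4096:8" and ":16:8" are the documented configurations; set before cuBLAS initializes.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # or ":16:8"

import torch
y = torch.nn.Linear(8, 8).cuda()(torch.randn(2, 8, device="cuda"))  # should not warn if the value parses
```
If the warning still appears with one of these exact values, that suggests the variable is being read differently (or overwritten) inside PyTorch.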
### Versions
**Versions**
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-ibm-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-PCIE-16GB
Nvidia driver version: 520.61.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Cascadelake)
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 6
BogoMIPS: 4988.13
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 32 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] numpy-ext==0.9.8
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] captum 0.6.0 0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.23.1 pypi_0 pypi
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] numpy-ext 0.9.8 pypi_0 pypi
[conda] pytorch 2.0.1 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] tensorflow-base 2.11.0 mkl_py38he5f8e37_0
[conda] torchaudio 2.0.2 py38_cu118 pytorch
[conda] torchtriton 2.0.0 py38 pytorch
[conda] torchvision 0.15.2 py38_cu118 pytorch
```
cc @csarofeen @ptrblck @xwang233
| 2 |
2,316 | 103,498 |
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 6047) of binary: /home/win10-ubuntu/anaconda3/envs/vicuna-7b/bin/python
|
oncall: distributed, triaged
|
### 🐛 Describe the bug

### Versions
Fine-tune vicuna-7b error
Fine-tuning commands:

But I got an error:

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,317 | 103,495 |
DISABLED test_mem_get_info (__main__.TestCuda)
|
module: cuda, triaged, module: flaky-tests, skipped
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mem_get_info&suite=TestCuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14208052987).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mem_get_info`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_cuda.py`
cc @ptrblck
| 3 |
2,318 | 103,484 |
No backward implementation for `torch._native_multi_head_attention`
|
triaged, module: multi-headed-attention
|
### 🚀 The feature, motivation and pitch
There is a forward implementation for `torch._native_multi_head_attention` but no corresponding backward implementation. So to train an MHA with torch, we need to either compose it from small ops or use `torch.nn.MultiHeadAttention`.
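A minimal sketch of the "compose it from small ops" route, assuming packed projection weights laid out like `nn.MultiheadAttention`'s `in_proj_weight`/`out_proj` (every op used here has a backward):
```python
import torch
import torch.nn.functional as F

def mha_from_primitives(x, w_qkv, b_qkv, w_out, b_out, num_heads):
    # x: (batch, seq, embed_dim); w_qkv: (3*E, E); w_out: (E, E)
    B, S, E = x.shape
    H, D = num_heads, E // num_heads
    q, k, v = F.linear(x, w_qkv, b_qkv).chunk(3, dim=-1)

    def split_heads(t):  # (B, S, E) -> (B*H, S, D)
        return t.view(B, S, H, D).transpose(1, 2).reshape(B * H, S, D)

    q, k, v = map(split_heads, (q, k, v))
    attn = torch.bmm(q, k.transpose(1, 2)) / D ** 0.5
    out = torch.bmm(attn.softmax(dim=-1), v)                        # (B*H, S, D)
    out = out.reshape(B, H, S, D).transpose(1, 2).reshape(B, S, E)  # merge heads back
    return F.linear(out, w_out, b_out)
```
Because every op here (linear, bmm, softmax, view/transpose) is differentiable, this decomposition supports training, unlike `torch._native_multi_head_attention`.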
### Alternatives
* Small ops like linear/bmm to compose MHA.
* `torch.nn.MultiHeadAttention`
### Additional context
_No response_
| 2 |
2,319 | 103,483 |
torch._dynamo.exc.Unsupported: Tensor.backward with aten_graph=True
|
triaged, oncall: pt2, module: export
|
### 🐛 Describe the bug
When trying to export the ATen graph of any model containing a `backward()` call using dynamo I'm hitting an "Unsupported" exception. However, exporting the graph of a model without the backward call works completely fine:
```
graph():
%arg0 : [#users=0] = placeholder[target=arg0]
%arg1 : [#users=1] = placeholder[target=arg1]
%arg2 : [#users=1] = placeholder[target=arg2]
%view_default : [#users=1] = call_function[target=torch.ops.aten.view.default](args = (%arg1, [16, 784]), kwargs = {})
%_param_constant0 : [#users=1] = get_attr[target=_param_constant0]
%t_default : [#users=1] = call_function[target=torch.ops.aten.t.default](args = (%_param_constant0,), kwargs = {})
%_param_constant1 : [#users=1] = get_attr[target=_param_constant1]
%addmm_default : [#users=1] = call_function[target=torch.ops.aten.addmm.default](args = (%_param_constant1, %view_default, %t_default), kwargs = {})
%relu_default : [#users=2] = call_function[target=torch.ops.aten.relu.default](args = (%addmm_default,), kwargs = {})
%detach_default : [#users=0] = call_function[target=torch.ops.aten.detach.default](args = (%relu_default,), kwargs = {})
%_log_softmax_default : [#users=2] = call_function[target=torch.ops.aten._log_softmax.default](args = (%relu_default, 1, False), kwargs = {})
%detach_default_1 : [#users=0] = call_function[target=torch.ops.aten.detach.default](args = (%_log_softmax_default,), kwargs = {})
%nll_loss_forward_default : [#users=2] = call_function[target=torch.ops.aten.nll_loss_forward.default](args = (%_log_softmax_default, %arg2, None, 1, -100), kwargs = {})
%getitem : [#users=1] = call_function[target=operator.getitem](args = (%nll_loss_forward_default, 0), kwargs = {})
%getitem_1 : [#users=0] = call_function[target=operator.getitem](args = (%nll_loss_forward_default, 1), kwargs = {})
return [getitem]
```
Since currently there [seems to be no issue tracking ATen export functionality](https://discuss.pytorch.org/t/torch-dynamo-exc-unsupported-tensor-backward/169246/4?u=gengrill), I'm creating it here.
### Error logs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 601, in export
result_traced = opt_f(*args, **kwargs)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 342, in wrapper
return inner_fn(self, inst)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 474, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 744, in call_function
return self.obj.call_method(tx, self.name, args, kwargs).add_options(self)
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/variables/tensor.py", line 341, in call_method
unimplemented(f"Tensor.{name}")
File "/home/user/projects/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 71, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: Tensor.backward
### Minified repro
```
import torch
import torch._dynamo as dynamo
from torch import nn

class Simple(nn.Module):
    def __init__(self, H=28, W=28, C=10):
        super(Simple, self).__init__()
        self.linear = nn.Linear(H*W, C)

    def forward(self, x):
        x = torch.flatten(x, start_dim=1)
        x = self.linear(x)
        return nn.functional.relu(x)

def generate_data(b):
    return (torch.randn(b, 28, 28).to(torch.float32), torch.randint(10, (b,)))

def no_train(model, data):
    pred = model(data[0])
    loss = nn.CrossEntropyLoss()(pred, data[1])
    return loss

def train(model, data):
    pred = model(data[0])
    loss = nn.CrossEntropyLoss()(pred, data[1])
    loss.backward()
    return loss

model = Simple()
model_exp_no_train = dynamo.export(no_train, model, generate_data(16), aten_graph=True)
print(model_exp_no_train[0].graph)
model_exp_train = dynamo.export(train, model, generate_data(16), aten_graph=True)
print(model_exp_train[0].graph)
```
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
2,320 | 103,482 |
Document CI retry rules
|
triaged, module: devx
|
With all the recent changes w.r.t retrying to harden PyTorch CI, we need to create a wiki page to document all these mechanisms. The tentative list includes:
* Individual test case retry (flaky bot)
* Retry test file
* Retry on workflow steps (using GHA)
* Retry the job itself (retry bot)
In addition, we also want to gather data points to answer the following questions
* How much resource do we spend on retrying these cases?
* And a rough estimate of how frequently people manually retry things on their PRs to get green signals or to debug flaky issues
cc @ZainRizvi @kit1980 @clee2000
| 2 |
2,321 | 103,475 |
[Inductor] Optimize More Cases of Int32 -> Int64
|
triaged, enhancement, module: inductor
|
### 🚀 The feature, motivation and pitch
Inductor has [an existing](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/optimize_indexing.py#L313-L317) optimization which will convert indirect indexing that is done in int64 to int32 for index expressions we can prove are expressible in int32. However this optimization is incomplete.
1. We do not propagate the bounds of tensors from one kernel to the other.
2. We do not change the dtype of temporary tensors which could be converted to int32.
3. We use ValueRange Analysis to strength reduce on indices whose bounds we can prove. However, in some cases, as in the kernel below, we are indexing from Tensor values whose bounds we don't know.
For the following repro:
```
import torch
import triton

inp = torch.rand([6, 3, 352, 352], device="cuda", requires_grad=False)
inp2 = torch.rand([6, 352, 352, 2], device="cuda", requires_grad=False)

def grid(inp, inp2):
    return torch.ops.aten.grid_sampler_2d.default(inp, inp2, 0, 0, False)

def invoke_grid():
    return grid(inp, inp2)

median_ms = triton.testing.do_bench(
    lambda: invoke_grid()
)

grid_opt = torch.compile()(invoke_grid)
median_ms2 = triton.testing.do_bench(
    lambda: grid_opt()
)

print(f"Eager Execution time: {median_ms} secs")
print(f"Compiled Execution time: {median_ms2} secs")
```
Inductor is significantly slower than eager (25%) which can be reduced to 10% with int64->int32 conversions.
[Original code](https://gist.github.com/eellison/cb422e219f8be58a3d8787a0dea75401), and then [optimize code](https://gist.github.com/eellison/91b80dac72c088344a92da17411161b9), where the int64 has been replaced with int32.
This could be optimized, but it's a bit tricky. If we guard on the tensor used in the second kernel having numel < 2^32, we can set the bounds of this final expression to also be in the range[0, 2^32).
```
tmp61 = tl.load(in_ptr6 + (x0 + (123904*x2)), None, eviction_policy='evict_last')
tmp64 = tl.load(in_ptr7 + (x0 + (123904*x2)), None, eviction_policy='evict_last')
```
From there, we would need to propagate the bounds backwards. There is initial work to do that by @ysiraichi in https://github.com/pytorch/pytorch/pull/97963, and we could extend it for the set of ops that appears in this kernel to start. Once the bounds are propagated we could reduce the dtype of the int64 to int32, as well as the ops that construct those tensors.
I originally wrote this up as a starter task, but it might be a bit complicated for that lol.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @aakhundov @chenyang78 @ezyang, @lezcano for other thoughts here
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
2,322 | 103,473 |
Error encountered when tracing model with Dynamo/Functorch for export with trilinear interpolation
|
triaged, oncall: pt2, module: dynamic shapes, module: export
|
### 🐛 Describe the bug
When exporting a small network via any of the following functions, one of two errors is encountered. The model runs successfully as-is (via `model(sample_inputs)`) and also works with `torch.compile`, but fails during export. Additionally, the model has no explicit control flow and `torch._dynamo.explain` shows no graph breaks.
**Functions With Errors:**
- `torch.fx.experimental.proxy_tensor.make_fx`
- `torch._dynamo.export`
- `torch._export.export`
- `torch._functorch.aot_autograd`
**Sample Script with Network**
```python
import torch

class MyModule(torch.nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)

    def forward(self, x):
        out = torch.nn.functional.interpolate(x, size=(10, 20, 30), mode="trilinear", align_corners=True)
        return out + 1

# Build model, sample inputs, and validate model succeeds on sample inputs
model = MyModule().eval().cuda()
sample_input = torch.rand((1, 2, 3, 4, 5)).cuda()
model(sample_input)

# Try various export/tracing methods
try:
    torch._dynamo.export(model, sample_input, aten_graph=True, tracing_mode="symbolic")
except Exception as e:
    print("Dynamo export:\n", e)

try:
    torch._export.export(model, sample_input)
except Exception as e:
    print("Torch export:\n", e)

try:
    torch._functorch.aot_autograd.aot_export_module(model, sample_input, trace_joint=False)
except Exception as e:
    print("AOT export:\n", e)

try:
    torch.fx.experimental.proxy_tensor.make_fx(model, tracing_mode="symbolic", _allow_non_fake_inputs=True, pre_autograd=True)(sample_input)
except Exception as e:
    print("Make FX:\n", e)

print(torch._dynamo.explain(model, sample_input))
```
**Errors**
**Error 1 [`torch._export.export` + `aot_export_module`]:**
```python
Failed running call_function <function interpolate at 0x7f1b7d1769d0>(*(FakeTensor(..., device='cuda:0', size=(2, 3, 4, 5)),), **{'size': (10, 20, 30), 'mode': 'trilinear', 'align_corners': True}):
Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [4, 5] and output size of (10, 20, 30). Please provide input tensor in (N, C, d1, d2, ...,dK) format and output size in (o1, o2, ...,oK) format.
```
**Error 2 [`torch._dynamo.export` + `make_fx`]:**
```python
Failed running call_function <function interpolate at 0x7f1b7d1769d0>(*(FakeTensor(..., device='cuda:0', size=(1, s0, s1, s2, s3)),), **{'size': (10, 20, 30), 'mode': 'trilinear', 'align_corners': True}):
Cannot call sizes() on tensor with symbolic sizes/strides
```
### Versions
```python
Versions of relevant libraries:
[pip3] torch==2.1.0.dev20230608+cu118
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
2,323 | 103,469 |
[inductor] multi-kernel support
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106012
* __->__ #103469
For a persistent reduction, we generate 2 flavors of 'equivalent' kernels at the same time
- persistent reduction
- regular reduction
A MultiKernel wraps these 2 kernels and picks the one with better performance at runtime.
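A minimal sketch of the "pick the faster variant at runtime" idea (hypothetical Python wrapper; the real implementation lives in inductor's generated wrapper code):
```python
import time

class MultiKernelSketch:
    def __init__(self, kernels):
        self.kernels = kernels   # e.g. [persistent_reduction_fn, regular_reduction_fn]
        self.choice = None

    def __call__(self, *args):
        if self.choice is None:  # benchmark each candidate once with the real arguments
            timings = []
            for k in self.kernels:
                start = time.perf_counter()
                k(*args)
                timings.append(time.perf_counter() - start)
            self.choice = self.kernels[timings.index(min(timings))]
        return self.choice(*args)
```
This also illustrates why consistent argument lists matter: both candidates must be callable with exactly the same `*args`.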
Here I talk more about implementation details:
- Inductor maintains state for generating kernels, e.g. the wrapper code. After we generate code for one kernel, we need to restore the inductor state before we can generate the counterpart.
***There is one thing I need some comments from others***:
There is one tricky thing about kernel arguments. In general, inductor removes a buffer from the argument list if it's only used inside the kernel. But somehow a buffer removed by the persistent reduction kernel may still be kept by the regular (non-persistent) reduction kernel because of some CSE invalidation rule. My current implementation avoids removing buffers if multi_kernel is enabled. This makes sure both flavors of reduction have a consistent argument list. Another idea I have is to generate the multi-kernel definition with the union of arguments from both sub-kernels and let each sub-kernel pick the subset of arguments it wants. But this would make the code-gen for multi-kernel much more complex.
I'm not sure if there is some easy and clean way to resolve this.
Testing command:
```
TORCHINDUCTOR_MULTI_KERNEL=1 TORCH_LOGS=+torch._inductor.graph TORCHINDUCTOR_UNIQUE_KERNEL_NAMES=1 python benchmarks/dynamo/huggingface.py --backend inductor --amp --performance --only BertForMaskedLM --training
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @aakhundov
| 5 |
2,324 | 103,467 |
[ao] making hist_obs handle torch.inf and closeby values
|
module: cpu, Stale, with-ssh, release notes: quantization, topic: bug fixes
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #103467
* #107623
Summary: This PR does 2 things:
1) Previously this would simply error; now it will ignore any torch.inf values that it receives. Note: the code checks for torch.inf after aminmax, so that if no torch.inf values are found, the perf is relatively unchanged.
2) As mentioned in https://github.com/pytorch/pytorch/issues/100051, values close to (but not quite at) the maximum/minimum float value could overflow to infinity in the course of _adjust_min_max() (when such a large value is multiplied by something in the middle of a calculation that would otherwise result in a non-inf value). This was fixed by rearranging the order of operations for the lines in question without altering the actual equations. Specifically, where the operations in lines 1095, 1098 and 1100 have multiplication and division of large values, it's better to divide the two large values before multiplying, rather than multiplying the two large values together (creating overflow) before dividing as it had been (a rough sketch follows below).
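A rough sketch of both ideas (illustrative only, not the code in this PR; names are made up):
```python
import torch

def finite_min_max(x: torch.Tensor):
    # Idea 1: run aminmax first; only pay for the masking when inf actually shows up.
    min_val, max_val = torch.aminmax(x)
    if torch.isinf(min_val) or torch.isinf(max_val):
        finite = x[torch.isfinite(x)]
        min_val, max_val = torch.aminmax(finite)
    return min_val, max_val

def safe_scale(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # Idea 2: with large a, b and a large divisor c, (a / c) * b avoids the
    # overflowing intermediate product that (a * b) / c would create.
    return (a / c) * b
```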
Test Plan: python test/test_quantization.py
TestObserver.test_histogram_observer_ignore_infinity
python test/test_quantization.py TestObserver.test_histogram_observer_handle_close_to_infinity
Reviewers:
Subscribers:
Tasks:
Tags:
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 7 |
2,325 | 103,462 |
Memory efficient SDP yields wrong gradients
|
triaged, module: multi-headed-attention
|
### π Describe the bug
The gradients of samples for models with and without memory efficient SDP should be nearly identical, but are in practice often very different. I have a code example here: https://gist.github.com/lengstrom/fd091020d1cf8e22b2e0caa01c2e9255. `optimum` transforms the model to use memory efficient SDP -- the gradients go back to matching when I disable memory efficient SDP and enable math SDP.
Considering a GPTNeoX model, we fix a sample and record the gradient of the memory efficient SDP model and the gradient of the original model. We then measure the cosine similarity (a vector similarity metric, 0 = uncorrelated, 1 = perfectly correlated) and find that both:
(a) the cosine similarity of gradients between the ME-SDP and standard models is not 1.0 and furthermore
(b) on some parameter groups, this cosine similarity is very low. For example, see `gpt_neox.layers.0.input_layernorm.weight` - the similarity is `0.45` on this parameter group
You can see the logs that show this here (from the script above):
```
gpt_neox.embed_in.weight grad match: False Maxdiff: 2.2251620292663574, relativediff: nan, cosine=0.6628117561340332
gpt_neox.layers.0.input_layernorm.weight grad match: False Maxdiff: 0.23522385954856873, relativediff: 6.147019863128662, cosine=0.4535333514213562
gpt_neox.layers.0.input_layernorm.bias grad match: False Maxdiff: 0.0925818607211113, relativediff: 4.2182841300964355, cosine=0.6757722496986389
gpt_neox.layers.0.post_attention_layernorm.weight grad match: False Maxdiff: 0.09811977297067642, relativediff: 6.601757526397705, cosine=0.7666352987289429
gpt_neox.layers.0.post_attention_layernorm.bias grad match: False Maxdiff: 0.08014828711748123, relativediff: 3.4428935050964355, cosine=0.7390220165252686
gpt_neox.layers.0.attention.query_key_value.weight grad match: False Maxdiff: 1.9186370372772217, relativediff: 12.226367950439453, cosine=0.7150740623474121
gpt_neox.layers.0.attention.query_key_value.bias grad match: False Maxdiff: 0.20297080278396606, relativediff: inf, cosine=0.7175157070159912
gpt_neox.layers.0.attention.dense.weight grad match: False Maxdiff: 0.4924759864807129, relativediff: 11.39132308959961, cosine=0.6795455813407898
```
This code should perfectly reproduce with just pytorch (nightly), optimum (latest -- this library is unversioned?), datasets (2.11.1), and transformers (4.29.2) installed. The full output is commented in the gist. This issue was explored more in https://github.com/huggingface/optimum/issues/1091: it doesn't arise with the 160m Pythia model (only 70m).
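For reference, a minimal sketch of how such a per-parameter gradient comparison can be done (my own illustration, independent of the gist):
```python
import torch
import torch.nn.functional as F

def compare_grads(model_a, model_b):
    # Assumes both models received the same sample and .backward() has already run.
    for (name, p_a), (_, p_b) in zip(model_a.named_parameters(), model_b.named_parameters()):
        if p_a.grad is None or p_b.grad is None:
            continue
        cos = F.cosine_similarity(p_a.grad.flatten(), p_b.grad.flatten(), dim=0)
        max_diff = (p_a.grad - p_b.grad).abs().max()
        print(f"{name}: cosine={cos.item():.4f} maxdiff={max_diff.item():.4f}")
```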
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230609+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7513 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3681.6399
CPU min MHz: 1500.0000
BogoMIPS: 5189.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+9820899b38
[pip3] torch==2.1.0.dev20230609+cu117
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-triton 2.1.0+9820899b38 pypi_0 pypi
[conda] torch 2.1.0.dev20230609+cu117 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
| 5 |
2,326 | 103,449 |
Asynchronous CUDA AveragedModel
|
module: optimizer, triaged, needs research
|
### π The feature, motivation and pitch
This is a proposal to improve the efficiency of CUDA `AveragedModel` used in EMA and SWA. (Follow-up of https://github.com/pytorch/pytorch/pull/94820). I would provide the implementation but would like feedback/approval before opening a PR.
Currently the EMA/SWA weight update is done on the default stream, same as all the other GPU work.
Because EMA/SWA weights are typically updated at the end of a training iteration (after an optimizer step), and are not needed until the end of the next iteration, we can actually run the EMA/SWA update in parallel with the forward/backward/optimizer step, virtually eliminating the overhead of EMA/SWA in many cases.
This can be done by using a separate dedicated CUDA stream to perform the weight update.
This is how it is done in the NeMo framework:
- Stream creation: https://github.com/NVIDIA/NeMo/blob/a87702a522387da0aac62dc1f90a88a8e0bfc7cc/nemo/collections/common/callbacks/ema.py#L234
- Synchronization between the dedicated stream and the main stream: https://github.com/NVIDIA/NeMo/blob/a87702a522387da0aac62dc1f90a88a8e0bfc7cc/nemo/collections/common/callbacks/ema.py#L259
- Weight update in the dedicated stream: https://github.com/NVIDIA/NeMo/blob/a87702a522387da0aac62dc1f90a88a8e0bfc7cc/nemo/collections/common/callbacks/ema.py#L261
- API to manually synchronize the dedicated stream: https://github.com/NVIDIA/NeMo/blob/a87702a522387da0aac62dc1f90a88a8e0bfc7cc/nemo/collections/common/callbacks/ema.py#L310
If the team is interested in extending the current `AveragedModel` class with an optional asynchronous feature, let me know and I'll work on a PR.
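A minimal sketch of the idea, assuming the update is issued right after the optimizer step (illustrative only, not the proposed `AveragedModel` API):
```python
import torch

ema_stream = torch.cuda.Stream()

def async_ema_update(ema_params, model_params, decay=0.999):
    # Wait until the optimizer step on the current stream has produced the new weights.
    ema_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(ema_stream), torch.no_grad():
        for ema_p, p in zip(ema_params, model_params):
            ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Before the EMA weights are read (e.g. for evaluation), synchronize the other way:
# torch.cuda.current_stream().wait_stream(ema_stream)
```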
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 3 |
2,327 | 103,444 |
Deprecation warning on lr_scheduler.step(num_steps)
|
module: optimizer, triaged, actionable
|
### π Describe the bug
`step(num_steps)` currently produces a deprecation warning. However, there is a legitimate use case for this API in learning rate schedulers: if reloading a trained model and continuing to train, it is necessary to advance the number of steps inside the scheduler to match the current model state.
The scheduler does not provide an alternative way to advance the number of steps. And it is not possible to use `state_dict()` + `load_state_dict()` because they also prevent changing the learning rate and other hyperparameters across the transition, unless the user manually changes the state dict, which is hacky.
```
UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
```
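For concreteness, a minimal sketch of the resume use case (assuming a simple scheduler such as `StepLR`; this is the call that currently triggers the warning):
```python
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

# Pretend we reloaded a checkpoint that was saved after 42 scheduler steps.
resumed_steps = 42
scheduler.step(resumed_steps)  # emits the deprecation warning, but advances the schedule
```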
### Versions
Versions of relevant libraries:
[pip3] torch==2.0.1
cc @vincentqb @jbschlosser @albanD @janeyx99
| 4 |
2,328 | 103,439 |
test_generate_tensor_from_list_of_numpy_primitive_type fails if run under pytest
|
triaged, module: dynamic shapes
|
### π Describe the bug
Sample failure:
```
__________________ StaticDefaultDynamicShapesFunctionTests.test_return_numpy_ndarray_dynamic_shapes_static_default ___________________
Traceback (most recent call last):
File "/data/users/ezyang/d/pytorch/test/dynamo/test_functions.py", line 42, in test_fn
return torch._dynamo.testing.standard_test(self, fn=fn, nargs=nargs)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/testing.py", line 225, in standard_test
self.assertTrue(same(val1a, correct1))
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 944, in same
assert isinstance(res, torch.Tensor), f"type mismatch {type(ref)} {type(res)}"
AssertionError: type mismatch <class 'torch.Tensor'> <class 'numpy.ndarray'>
-------------------------------------------------------- Captured stdout call --------------------------------------------------------
stats [('calls_captured', 2), ('unique_graphs', 1)]
___________________________ DynamicShapesMiscTests.test_generate_tensor_from_list_of_numpy_primitive_type ____________________________
Traceback (most recent call last):
File "/data/users/ezyang/d/pytorch/test/dynamo/test_misc.py", line 3548, in test_generate_tensor_from_list_of_numpy_primitive_type
res = opt_fn()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/eval_frame.py", line 295, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/test/dynamo/test_misc.py", line 3541, in fn
x = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/eval_frame.py", line 448, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 527, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 127, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
return _compile(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 430, in _compile
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 415, in transform
tracer.run()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 2024, in run
super().run()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 707, in run
and self.step()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 667, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 389, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 1099, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 558, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/d/pytorch/torch/_dynamo/variables/torch.py", line 607, in call_function
tensor_variable = wrap_fx_proxy(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/variables/builder.py", line 1063, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/variables/builder.py", line 1098, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1298, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1266, in get_fake_value
return wrap_fake_exception(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 860, in wrap_fake_exception
return fn()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1267, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1332, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1319, in run_node
return node.target(*args, **kwargs)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <class 'torch.LongTensor'>(*([FakeTensor(..., size=(), dtype=torch.int64), FakeTensor(..., size=(), dtype=torch.int64), FakeTensor(..., size=(), dtype=torch.int64)],), **{}):
The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
from user code:
File "/data/users/ezyang/d/pytorch/test/dynamo/test_misc.py", line 3543, in <resume in fn>
z = torch.LongTensor(y)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
-------------------------------------------------------- Captured stdout call --------------------------------------------------------
frames [('total', 2), ('ok', 1)]
unimplemented []
graph_break [('numpy.<built-in function array>()', 1)]
____________________ DynamicShapesMiscTests.test_generate_tensor_from_list_of_numpy_primitive_type_dynamic_shapes ____________________
Traceback (most recent call last):
File "/data/users/ezyang/d/pytorch/test/dynamo/test_misc.py", line 3548, in test_generate_tensor_from_list_of_numpy_primitive_type
res = opt_fn()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/eval_frame.py", line 295, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/test/dynamo/test_misc.py", line 3541, in fn
x = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/eval_frame.py", line 448, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 527, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 127, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert
return _compile(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 430, in _compile
out_code = transform_code_object(code, transform)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object
transformations(instructions, code_options)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/convert_frame.py", line 415, in transform
tracer.run()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 2024, in run
super().run()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 707, in run
and self.step()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 667, in step
getattr(self, inst.opname)(inst)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 389, in wrapper
return inner_fn(self, inst)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 1099, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ezyang/d/pytorch/torch/_dynamo/symbolic_convert.py", line 558, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/ezyang/d/pytorch/torch/_dynamo/variables/torch.py", line 607, in call_function
tensor_variable = wrap_fx_proxy(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/variables/builder.py", line 1063, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/variables/builder.py", line 1098, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1298, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1266, in get_fake_value
return wrap_fake_exception(
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 860, in wrap_fake_exception
return fn()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1267, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1332, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/data/users/ezyang/d/pytorch/torch/_dynamo/utils.py", line 1319, in run_node
return node.target(*args, **kwargs)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <class 'torch.LongTensor'>(*([FakeTensor(..., size=(), dtype=torch.int64), FakeTensor(..., size=(), dtype=torch.int64), FakeTensor(..., size=(), dtype=torch.int64)],), **{}):
The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
from user code:
File "/data/users/ezyang/d/pytorch/test/dynamo/test_misc.py", line 3543, in <resume in fn>
z = torch.LongTensor(y)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
-------------------------------------------------------- Captured stdout call --------------------------------------------------------
frames [('total', 2), ('ok', 1)]
unimplemented []
graph_break [('numpy.<built-in function array>()', 1)]
```
It doesn't fail if I run it under python directly
### Versions
main
| 0 |
2,329 | 103,425 |
The document does not emphasize Illegal value in nn.Bilinear
|
module: nn, triaged, actionable, module: edge cases
|
### π Describe the bug
`Illegal value of the in1_features parameter in nn.Bilinear`
`ZeroDivisionError: float division by zero`
### Code
```py
import torch
from torch import nn
class lenet(nn.Module):
def __init__(self):
super(lenet, self).__init__()
self.conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1)
self.linear = nn.Bilinear(in1_features=0, in2_features=0, out_features=0)
def forward(self, x):
# 1st block
x = self.conv(x)
x = self.linear(x)
return x
if __name__ == '__main__':
net = lenet()
```
### Versions
### Version
```
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 (main, Mar 21 2023, 13:41:05) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.1
[conda] numpy 1.23.5 py310hb93e574_0
[conda] numpy-base 1.23.5 py310haf87e8b_0
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
2,330 | 103,424 |
The document does not emphasize hidden range in nn.Embedding
|
needs reproduction, module: docs, triaged
|
### π Describe the bug
`Hidden range of the padding_idx parameter in nn.Embedding`
### Code
```py
import torch
import torch.nn as nn
class BiLSTM(nn.Module):
def __init__(self, batch_size, hidden_dim, vocab_size, sequence_len):
super().__init__()
self.batch_size = batch_size
self.hidden_dim = hidden_dim
self.input_size = vocab_size
self.num_classes = vocab_size
self.sequence_len = sequence_len
# Dropout
self.dropout = nn.Dropout(0.25)
# Embedding layer
self.embedding_0 = nn.Embedding(num_embeddings=2, embedding_dim=2, padding_idx=3)
def forward(self, x):
# Bi-LSTM
# hs = [batch_size x hidden_size]
# cs = [batch_size x hidden_size]
hs_forward = torch.zeros(x.size(0), self.hidden_dim)
cs_forward = torch.zeros(x.size(0), self.hidden_dim)
hs_backward = torch.zeros(x.size(0), self.hidden_dim)
cs_backward = torch.zeros(x.size(0), self.hidden_dim)
# LSTM
# hs = [batch_size x (hidden_size * 2)]
# cs = [batch_size x (hidden_size * 2)]
hs_lstm = torch.zeros(x.size(0), self.hidden_dim * 2)
cs_lstm = torch.zeros(x.size(0), self.hidden_dim * 2)
# Weights initialization
torch.nn.init.kaiming_normal_(hs_forward)
torch.nn.init.kaiming_normal_(cs_forward)
torch.nn.init.kaiming_normal_(hs_backward)
torch.nn.init.kaiming_normal_(cs_backward)
torch.nn.init.kaiming_normal_(hs_lstm)
torch.nn.init.kaiming_normal_(cs_lstm)
# From idx to embedding
x = self.embedding_0(x.long())
return x
if __name__ == '__main__':
net = BiLSTM(5, 2, 100, 100)
```
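For reference, the constraint being hit above is that `padding_idx` must index into the embedding table, i.e. `-num_embeddings <= padding_idx < num_embeddings`. A minimal valid counterpart to the layer above:
```python
import torch.nn as nn

# num_embeddings=2 means valid indices are 0 and 1, so padding_idx=3 is out of range;
# padding_idx=1 (or 0, or a negative index in range) is accepted.
embedding_ok = nn.Embedding(num_embeddings=2, embedding_dim=2, padding_idx=1)
```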
### Version
```
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 (main, Mar 21 2023, 13:41:05) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.1
[conda] numpy 1.23.5 py310hb93e574_0
[conda] numpy-base 1.23.5 py310haf87e8b_0
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
```
cc @svekars @carljparker
| 2 |
2,331 | 103,423 |
The document does not emphasize hidden range in nn.MaxPool2d
|
needs reproduction, module: docs, triaged
|
### π Describe the bug
`Hidden range of padding parameter in nn.MaxPool2d`
`pad should be at most half of kernel size, but got pad=2 and kernel_size=2`
### Code
```py
import torch
from torch import nn
class lenet(nn.Module):
def __init__(self):
super(lenet, self).__init__()
self.conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=2, padding=2, stride=1)
self.pool = nn.MaxPool2d(padding=2, kernel_size=2)
def forward(self, x):
# 1st block
x = self.conv(x)
x = self.pool(x)
return x
if __name__ == '__main__':
net = lenet()
```
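For reference, the constraint in the error message is `padding <= kernel_size / 2`; a configuration that satisfies it:
```python
import torch.nn as nn

# pad may be at most half of the kernel size, so padding=2 needs kernel_size >= 4
pool_ok = nn.MaxPool2d(kernel_size=4, padding=2)
```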
### Versions
### Version
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 (main, Mar 21 2023, 13:41:05) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-12.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.1
[conda] numpy 1.23.5 py310hb93e574_0
[conda] numpy-base 1.23.5 py310haf87e8b_0
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
cc @svekars @carljparker
| 2 |
2,332 | 103,422 |
Possible memory leak when using Torch and Torchvision in conjunction with XGBoost
|
module: memory usage, triaged, module: vision
|
### π Describe the bug
I had an issue in one of the services I work on, where it would use more and more memory until crashing. After some digging around I was able to reduce it to the following script:
```python
import argparse
import logging
import math
import os
import psutil
import torch
import torchvision
import xgboost
import numpy as np
def main() -> None:
process = psutil.Process(os.getpid())
parser = argparse.ArgumentParser()
parser.add_argument("xgboost_model_path")
args = parser.parse_args()
feature_extractor = torchvision.models.vit_b_16(num_classes=512)
predictor = xgboost.XGBClassifier(
base_score=0.5,
booster=None,
colsample_bylevel=1,
colsample_bynode=1,
colsample_bytree=1,
gamma=0,
gpu_id=-1,
importance_type="gain",
interaction_constraints=None,
learning_rate=0.3,
max_delta_step=0,
max_depth=10,
min_child_weight=1,
missing=math.nan,
monotone_constraints=None,
n_estimators=300,
n_jobs=32,
num_parallel_tree=1,
objective="multi:softprob",
random_state=0,
reg_alpha=0,
reg_lambda=1,
scale_pos_weight=None,
subsample=1,
tree_method=None,
validate_parameters=False,
verbosity=0,
)
predictor.load_model(args.xgboost_model_path)
frames_torch = torch.rand((1, 3, 224, 224), device="cpu")
i = 0
while True:
with torch.no_grad():
embedding = feature_extractor(frames_torch).numpy().mean(axis=0)
if i == 0:
logging.warning(f"Mem usage (embedding) {process.memory_percent()}")
features = np.expand_dims(embedding, 0)
predictor.predict_proba(features)
i = (i + 1) % 10
if __name__ == "__main__":
main()
```
which uses the following xgboost model: [xgboost_classifier.txt](https://github.com/dmlc/xgboost/files/11720066/xgboost_classifier.txt) (txt format because GitHub doesn't allow JSON, apparently).
I left this script running for a day and memory usage grew from 640MB to about 9GB. This issue seems to depend on the import order: if XGBoost is imported before torch and torchvision, the memory usage is more or less constant (I didn't leave it running for the same amount of time, but I didn't see the upwards trend that is clearly visible otherwise).
### Versions
```
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.15 (main, Jun 12 2023, 04:04:50) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1036-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2469.691
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] No relevant packages
```
cc @datumbox @vfdev-5 @pmeier
| 4 |
2,333 | 103,417 |
Torch model compile error "/usr/bin/ld: cannot find -lcuda" though cuda is installed via run file
|
triaged, oncall: pt2, upstream triton
|
### π Describe the bug
I have installed the NVIDIA driver and CUDA separately.
libcuda.so --> is provided by the NVIDIA Driver and is here
```
/usr/lib/x86_64-linux-gnu/libcuda.so.525.105.17
/usr/lib/x86_64-linux-gnu/libcuda.so.1
```
libcudart.so --> is provided by CUDA Runtime and is here
```
ld -L/usr/local/cuda/lib64/ -lcudart --verbose
attempt to open /usr/local/cuda/lib64//libcudart.so succeeded
```
and it is linked to CUDA 12.0
```
ll /usr/local/cuda/lib64//libcudart.so
lrwxrwxrwx 1 root root 15 Jun 6 21:14 /usr/local/cuda/lib64//libcudart.so -> libcudart.so.12*
```
All this is fine and as expected
I have set LD_LIBRARY_PATH:
```
export LD_LIBRARY_PATH=/usr/local/cuda/lib64
sudo ldconfig
```
I am able to run a model on the GPU. However, when I run `torch.compile` it links against `libcuda.so`. From my understanding it should also be able to work with `libcudart.so`, but I am unable to find any environment variable or flag to make torch use this library.
Sample Code
```
import torch
import torchvision
print("torch version is ",torch.__version__)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
x=torch.ones(1,3,224,224).to(device)
model=torchvision.models.resnet50().to(device)
compiled=torch.compile(model)
compiled(x)
```
Output
```
python test_cuda.py
torch version is 2.0.0.dev20230202+cu116
Using device: cuda
/home/alex/.local/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:89: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
/usr/bin/ld: cannot find -lcuda: No such file or directory
collect2: error: ld returned 1 exit status
```
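One workaround I have seen suggested (an unverified assumption on my part, not something confirmed by the PyTorch docs) is to point the host linker at the driver stub library that ships with the CUDA toolkit, e.g. `/usr/local/cuda/lib64/stubs/libcuda.so`, before triggering the compile:
```python
import os

# Assumption: the toolkit's stub libcuda.so can satisfy the -lcuda link step, and
# LIBRARY_PATH (the standard gcc search path for -l libraries at link time) is
# inherited by the compiler subprocesses that torch.compile spawns.
os.environ["LIBRARY_PATH"] = (
    "/usr/local/cuda/lib64/stubs:" + os.environ.get("LIBRARY_PATH", "")
)
```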
### Versions
Collecting environment information...
PyTorch version: 2.0.0.dev20230202+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Pop!_OS 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800H with Radeon Graphics
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4462.5000
CPU min MHz: 1200.0000
BogoMIPS: 6388.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.0.0+0d7e753227
[pip3] torch==2.0.0.dev20230202+cu116
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==2.0.0.dev20230201+cu116
[pip3] torchvision==0.15.0.dev20230201+cu116
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
2,334 | 103,415 |
[inductor][cpp_wrapper] Support rand fallback
|
triaged, oncall: pt2
|
### Edited/minified issue
```python
import torch
import torch._dynamo
import torch._inductor.config
torch._inductor.config.fallback_random = True
torch._inductor.config.cpp_wrapper = True
def fn(x):
y = torch.randint(0, 10, (4, 4), dtype=torch.int32)
return y + x
opt_fn = torch._dynamo.optimize("inductor")(fn)
x = torch.rand((4, 4))
torch.manual_seed(42)
ref = fn(x)
torch.manual_seed(42)
res = opt_fn(x)
print(torch.max(torch.abs(res-ref)))
```
Error:
~~~
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/anijain/work/pytorch/torch/_inductor/scheduler.py", line 1379, in codegen
self.codegen_extern_call(node)
File "/scratch/anijain/work/pytorch/torch/_inductor/scheduler.py", line 1300, in codegen_extern_call
node.codegen(V.graph.wrapper_code)
File "/scratch/anijain/work/pytorch/torch/_inductor/ir.py", line 3314, in codegen
super().codegen(wrapper)
File "/scratch/anijain/work/pytorch/torch/_inductor/ir.py", line 3002, in codegen
args = [*self.codegen_args(), *self.codegen_kwargs()]
File "/scratch/anijain/work/pytorch/torch/_inductor/ir.py", line 2875, in codegen_kwargs
assert (
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: ordered_kwargs_for_cpp_kernel has to be provided
~~~
There are a few issues here:
1. self.ordered_kwargs_for_cpp_kernel doesn't exist when we try to codegen.
2. We can't always get the schema because the `kernel` passed into the FallbackKernel IR node is sometimes an OpOverload and sometimes an OpOverloadPacket
3. if we add it, self.kwargs != self.ordered_kwargs_for_cpp_kernel. AFAIK, this is fine because the missing self.kwargs (specifically, `layout`) are optional.
4. The codegen-ed code doesn't match the CPP args: the dtype, layout, etc. are expected to be provided as a single TensorOptions() object but the codegen provides it as a list of individual options.
### π Original bug below
~~~
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._inductor.config.fallback_random = True
torch._inductor.config.triton.autotune_cublasLt = False
torch._inductor.config.triton.unique_kernel_names = True
torch._inductor.config.triton.store_cubin = True
torch._inductor.config.cpp_wrapper = True
isolate_fails_code_str = None
# torch version: 2.1.0a0+gita5cdb9a
# torch cuda version: 11.8
# torch git version: a5cdb9a9a4de0b8cd92d850588a1d7e40958189b
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2022 NVIDIA Corporation
# Built on Wed_Sep_21_10:33:58_PDT_2022
# Cuda compilation tools, release 11.8, V11.8.89
# Build cuda_11.8.r11.8/compiler.31833905_0
# GPU Hardware Info:
# NVIDIA A100-SXM4-40GB : 1
from torch.nn import *
class Repro(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, arg0_1):
full = torch.ops.aten.full.default([209982], 1, dtype = torch.float32, layout = torch.strided, device = device(type='cuda', index=0), pin_memory = False)
select = torch.ops.aten.select.int(arg0_1, 0, 0)
select_1 = torch.ops.aten.select.int(arg0_1, 0, 1); arg0_1 = None
view = torch.ops.aten.view.default(select_1, [-1])
expand = torch.ops.aten.expand.default(view, [209982]); view = None
full_1 = torch.ops.aten.full.default([10000], 0, dtype = torch.float32, layout = torch.strided, device = device(type='cuda', index=0), pin_memory = False)
scatter_add = torch.ops.aten.scatter_add.default(full_1, 0, expand, full); full_1 = expand = None
pow_1 = torch.ops.aten.pow.Tensor_Scalar(scatter_add, -0.5); scatter_add = None
eq = torch.ops.aten.eq.Scalar(pow_1, inf)
scalar_tensor = torch.ops.aten.scalar_tensor.default(0.0, dtype = torch.float32, layout = torch.strided, device = device(type='cuda', index=0))
where = torch.ops.aten.where.self(eq, scalar_tensor, pow_1); eq = scalar_tensor = pow_1 = None
index = torch.ops.aten.index.Tensor(where, [select]); select = None
mul = torch.ops.aten.mul.Tensor(index, full); index = full = None
index_1 = torch.ops.aten.index.Tensor(where, [select_1]); where = select_1 = None
mul_1 = torch.ops.aten.mul.Tensor(mul, index_1); mul = index_1 = None
return (mul_1,)
def load_args(reader):
buf0 = reader.storage(None, 3359712, device=device(type='cuda', index=0), dtype_hint=torch.int64)
reader.tensor(buf0, (2, 209982), dtype=torch.int64, is_leaf=True) # arg0_1
load_args._version = 0
mod = Repro()
if __name__ == '__main__':
from torch._dynamo.repro.after_aot import run_repro
run_repro(mod, load_args, accuracy=False, command='minify', save_dir='/scratch/anijain/work/pytorch/torch_compile_debug/run_2023_06_12_06_49_04_662047-pid_1028077/minifier/checkpoints', tracing_mode='real')
~~~~
~~~
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/scratch/anijain/work/pytorch/torch/_inductor/scheduler.py", line 1379, in codegen
self.codegen_extern_call(node)
File "/scratch/anijain/work/pytorch/torch/_inductor/scheduler.py", line 1300, in codegen_extern_call
node.codegen(V.graph.wrapper_code)
File "/scratch/anijain/work/pytorch/torch/_inductor/ir.py", line 3314, in codegen
super().codegen(wrapper)
File "/scratch/anijain/work/pytorch/torch/_inductor/ir.py", line 3002, in codegen
args = [*self.codegen_args(), *self.codegen_kwargs()]
File "/scratch/anijain/work/pytorch/torch/_inductor/ir.py", line 2875, in codegen_kwargs
assert (
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: ordered_kwargs_for_cpp_kernel has to be provided
~~~
### Versions
N/A
cc @ezyang @msaroufim @wconstab @bdhirsh
| 1 |
2,335 | 103,412 |
[Distributed] Limit world_size to 8 for FSDP Unit tests
|
module: rocm, triaged, open source, ciflow/trunk, topic: not user facing, ciflow/periodic, rocm, rocm priority, merging
|
There are a few unit tests in FSDP that can support at most 8 GPUs.
For example, test_fsdp_uneven has an input size of [8, 3], and for each process/rank we pass the data as input[self.rank], as in the links below. So when we use 16 GPUs for our tests, these tests throw an index/key error. To avoid such corner cases, I would like to add this change to use 8 GPUs if there are more than 8 GPUs. This is applicable to both ROCm and CUDA builds (a minimal sketch follows the links below).
https://github.com/pytorch/pytorch/blob/main/test/distributed/fsdp/test_fsdp_uneven.py#L44
https://github.com/pytorch/pytorch/blob/main/test/distributed/fsdp/test_fsdp_uneven.py#L55
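A minimal sketch of the proposed clamping (my own illustration, not the actual test-suite change):
```python
import torch

def capped_world_size(max_gpus: int = 8) -> int:
    # Cap at 8 so tests written for fixed-size inputs (e.g. input[self.rank] with 8 rows)
    # do not index out of range on machines with more than 8 GPUs.
    return min(torch.cuda.device_count(), max_gpus)
```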
cc: @jithunnair-amd
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 28 |
2,336 | 103,402 |
[decomp][bad accuracy] AlbertForQuestionAnswering
|
triaged, oncall: pt2, module: inductor, module: pt2 accuracy
|
### π Describe the bug
Repro - `python benchmarks/dynamo/huggingface.py --backend=aot_eager_decomp_partition --amp --training --device cuda --accuracy --only=AlbertForQuestionAnswering`
Setup - Get my branch - `https://github.com/pytorch/pytorch/tree/tb-pin` and run the above cmd.
Note that the accuracy fails with `aot_eager_decomp_partition`. It passes with `aot_eager`
My branch
* removes all the decomps except softmax from the decomp table.
* further fires off decomp only for the first softmax.
This limits the scope to just one decomp, but I am out of ideas and unable to debug this further. The softmax decomp looks really simple. Would love to have someone look into this.
### Versions
N/A
cc @ezyang @msaroufim @wconstab @bdhirsh @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8
| 4 |
2,337 | 103,397 |
LayerNorm freeze processes using torch multiprocessing
|
module: multiprocessing, triaged
|
### π Describe the bug
The LayerNorm operation is freezing my processes when I launch one or more processes using torch multiprocessing. I made a trivial network containing only a LayerNorm, and my processes all freeze in the forward pass. I am on Linux and I did not have this problem on macOS. The code is:
```
import sys, os
import torch
import torch.multiprocessing as mp
def worker(rank, model, input_action):
"""Worker function"""
print(f"Worker {rank} received model")
out = model(input_action)
#Create a model using torch layernorm
class LayerNormModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layernorm = torch.nn.LayerNorm(4)
def forward(self, x):
return self.layernorm(x)
if __name__ == '__main__':
# Set the multiprocessing start method
#mp.set_start_method('spawn')
num_threads = torch.get_num_threads()
print("Number of threads:", num_threads)
model = LayerNormModel()
input_action = torch.randn(1, 2, 4)
#Working properly
test = model(input_action)
torch.set_num_threads(5)
num_threads = torch.get_num_threads()
print("Number of threads:", num_threads)
# Create a list of values
values = [1, 2, 3, 4, 5]
# Create a process for each value
processes = []
for i, value in enumerate(values):
p = mp.Process(target=worker, args=(i, model, input_action))
processes.append(p)
p.start()
# Wait for all processes to finish
for p in processes:
p.join()
print("Done!")
```
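For what it's worth, a guess rather than a confirmed diagnosis: forking workers after the parent process has already run ops (and therefore created intra-op thread pools) is a known source of hangs, and the `spawn` start method that is commented out above avoids fork entirely. A minimal variation:
```python
import torch.multiprocessing as mp

if __name__ == '__main__':
    # Assumption: starting workers in fresh interpreters (spawn) instead of fork()
    # sidesteps the hang caused by inheriting an already-initialized thread pool.
    mp.set_start_method('spawn')
```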
### Versions
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @VitalyFedyunin
| 2 |
2,338 | 103,393 |
Typing missing on arithmetic ops on `Tensor`
|
module: typing, triaged
|
### π Describe the bug
This is related to #103375 and #103376, but I assume it's better to split into smaller fixes.
Some of the dunder ops are not defined in `_C._TensorBase` but directly in `Tensor`:
https://github.com/pytorch/pytorch/blob/03101a227f6639d5a9ad628d1dc300f9f99a8812/torch/_tensor.py#L850-L902
However, as seen, there's no typing for these methods.
### Versions
master
cc @ezyang @malfet @rgommers @xuzhao9 @gramster
| 0 |
2,339 | 103,382 |
NotImplementedError Could not run 'c10d::alltoall_' with arguments from the 'Meta' backend.
|
triaged
|
### π Describe the bug
I use FakeTensor for shape_prop_pass and hit this error:
Exception has occurred: NotImplementedError
Could not run 'c10d::alltoall_' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'c10d::alltoall_' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at ../torch/csrc/distributed/c10d/Ops.cpp:700 [kernel]
CUDA: registered at ../torch/csrc/distributed/c10d/Ops.cpp:704 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at ../torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
While executing %runtime_apply_1 : [#users=1] = call_function[target=colossalai.auto_parallel.passes.runtime_apply_pass.runtime_apply](args = (%transformer_wte, %origin_node_sharding_spec_dict, %sharding_spec_convert_dict, 11, 0), kwargs = {})
Original traceback:
None
File "/usr/local/lib/python3.9/site-packages/torch/_ops.py", line 287, in __call__
return self._op(*args, **kwargs or {})
File "/usr/local/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1175, in dispatch
raise not_implemented_error
File "/usr/local/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1175, in dispatch
raise not_implemented_error
File "/usr/local/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 988, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 3281, in all_to_all
work = group.alltoall(output_tensor_list, input_tensor_list, opts)
File "/usr/local/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/colossalai/tensor/comm_spec.py", line 66, in _all_to_all
dist.all_to_all(output_tensor_list, input_tensor_list, group)
File "/usr/local/lib/python3.9/site-packages/colossalai/tensor/comm_spec.py", line 321, in forward
output = _all_to_all(input_, comm_spec)
File "/usr/local/lib/python3.9/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/usr/local/lib/python3.9/site-packages/colossalai/tensor/comm_spec.py", line 368, in all_to_all
return _AllToAll.apply(input_, comm_spec)
File "/usr/local/lib/python3.9/site-packages/colossalai/tensor/comm_spec.py", line 514, in covert_spec_to_action
tensor = pattern_to_func_dict[self.comm_pattern](tensor, self)
File "/usr/local/lib/python3.9/site-packages/colossalai/tensor/shape_consistency.py", line 742, in apply_for_autoparallel_runtime
tensor = comm_spec.covert_spec_to_action(tensor)
File "/usr/local/lib/python3.9/site-packages/colossalai/auto_parallel/passes/runtime_apply_pass.py", line 30, in runtime_apply
return shape_consistency_manager.apply_for_autoparallel_runtime(node, origin_sharding_spec, target_sharding_spec)
File "/usr/local/lib/python3.9/site-packages/torch/fx/interpreter.py", line 252, in call_function
return target(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/fx/interpreter.py", line 180, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/fx/passes/fake_tensor_prop.py", line 31, in run_node
result = super().run_node(n)
File "/usr/local/lib/python3.9/site-packages/torch/fx/interpreter.py", line 139, in run
self.env[node] = self.run_node(node)
File "/usr/local/lib/python3.9/site-packages/torch/fx/passes/fake_tensor_prop.py", line 38, in propagate
return super().run(*fake_args)
File "/usr/local/lib/python3.9/site-packages/colossalai/_analyzer/fx/passes/shape_prop.py", line 287, in shape_prop_pass
FakeTensorProp(module, mode=fake_mode).propagate(*args)
File "/workspace/workfile/nanoGPT_colossalai/nanoGPT/ColossalAI/colossalai/auto_parallel/tensor_shard/initialize.py", line 150, in transform_to_sharded_model
shape_prop_pass(gm, *meta_args.values(), sharding_spec_dict, origin_spec_dict, comm_actions_dict)
File "/workspace/workfile/nanoGPT_colossalai/nanoGPT/ColossalAI/colossalai/auto_parallel/tensor_shard/initialize.py", line 274, in initialize_model
gm, sharding_spec_dicts = transform_to_sharded_model(gm, meta_args, solution, device_mesh, strategies_constructor,
File "/workspace/workfile/nanoGPT_colossalai/nanoGPT/ColossalAI/colossalai/auto_parallel/tensor_shard/initialize.py", line 342, in autoparallelize
rst_to_unpack = initialize_model(model,
File "/workspace/workfile/nanoGPT_colossalai/nanoGPT/train.py", line 299, in train
gm, solution = autoparallelize(model, meta_input_sample, return_solution=True)
File "/workspace/workfile/nanoGPT_colossalai/nanoGPT/train.py", line 405, in <module>
train()
### Versions
pytorch 2.0
| 1 |
2,340 | 103,375 |
Inplace binary ops on tensor subclasses can cause mypy error
|
module: typing, triaged
|
### π Describe the bug
Using in-place binary ops on a subclass of `Tensor` causes a mypy error (e.g. `*=` shown below; the same holds for other ops such as `+=`).
```python
import torch
a = torch.nn.Parameter(torch.Tensor())
a *= 2
```
run `mypy`:
```
a.py:3: error: Result type of * incompatible in assignment
```
This is because of the `.pyi` definition `def __imul__(self, other: Any) -> Tensor: ...`, which declares that the method always returns `Tensor` rather than the subclass.
---
Although mypy does not report an error for `a.mul_(2)` in place of `a *= 2`, the two should behave the same, and in-place methods like `mul_` should be annotated to return `Self` instead of `Tensor`.
```python
import torch
a = torch.nn.Parameter(torch.Tensor())
print(a.mul_(2).__class__)
```
The result is indeed `torch.nn.parameter.Parameter`.
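A minimal sketch of the kind of stub annotation that would cover both cases, assuming `Self` from `typing_extensions` is acceptable in the generated stubs (this is not the actual generated `.pyi`):
```python
# Hypothetical stub sketch: annotate in-place ops with Self so that subclasses
# such as nn.Parameter keep their type under mypy.
from typing import Any
from typing_extensions import Self

class Tensor:
    def __imul__(self, other: Any) -> Self: ...
    def mul_(self, other: Any) -> Self: ...
```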
### Versions
master
cc @ezyang @malfet @rgommers @xuzhao9 @gramster
| 0 |
2,341 | 103,372 |
ImportError: cannot import name 'Store' from 'torch.distributed'
|
oncall: distributed, triaged
|
### π Describe the bug
Hello,
I am trying to run YoloNAS on the NVIDIA Orin NX. I have YoloV7 working successfully, but YoloNAS is complaining about torch.distributed.
Here is some information about my Orin:
torch 2.0.0+nv23.5
torchmetrics 0.8.0
torchvision 0.15.1
Python 3.8.10
Model: NVIDIA Orin NX Developer Kit - Jetpack 5.1 [L4T 35.2.1]
The error I get is the following:
ImportError: cannot import name 'Store' from 'torch.distributed' (/home/rebotnix/.local/lib/python3.8/site-packages/torch/distributed/__init__.py)
Checking in python:
>>> import torch
>>> torch.distributed.is_available()
False
>>>
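For now, a minimal guard sketch like the one below could be used, assuming the Jetson wheel was simply built without distributed support:
```python
# Minimal guard sketch: Store is only importable when distributed support is
# compiled into the wheel, so check availability before importing it.
import torch

if torch.distributed.is_available():
    from torch.distributed import Store
else:
    Store = None  # fall back to single-process behaviour
```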
Any suggestions would be much appreciated.
All the best,
Simon
### Versions
rebotnix@rebotnix:~/Documents/yolo-nas$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.0+nv23.05
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.104-tegra-aarch64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 1984.0000
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 2 MiB
L3 cache: 4 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] torch==2.0.0+nv23.5
[pip3] torchmetrics==0.8.0
[pip3] torchvision==0.15.1
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,342 | 103,370 |
torchgen/gen_backend_stubs.py compatibility with DispatchStubs
|
triaged, module: dispatch, module: codegen, module: structured kernels
|
### π The feature, motivation and pitch
For our out-of-tree backend, I would like to support many structured kernels the way CUDA does, i.e. by registering a `DispatchStub` per operation, similar to what is done [here](https://github.com/pytorch/pytorch/blob/03101a227f6639d5a9ad628d1dc300f9f99a8812/aten/src/ATen/native/cuda/BinaryMulKernel.cu#L46). For `PrivateUse1`, support for registering stubs was added in [this PR](https://github.com/pytorch/pytorch/pull/99611).
That PR, however, does not tackle how to implement structured kernels in the same manner. It would be nice if [`gen_backend_stubs.py`](https://github.com/pytorch/pytorch/blob/03101a227f6639d5a9ad628d1dc300f9f99a8812/torchgen/native_function_generation.py) could support implementation paths of this form instead.
I'm not certain whether `torchgen` could support reusing these kernels with overridden headers, which is how we have successfully redirected CUDA kernel launches from our backend for various unstructured kernels.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @bhosmer @bdhirsh
| 1 |
2,343 | 103,369 |
test_workspace_allocation_error fails on my local devgpu
|
triaged, module: cuda graphs
|
### π Describe the bug
I get this error:
```
======================================================================
ERROR: test_workspace_allocation_error (__main__.CudaGraphTreeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/ezyang/d/pytorch/test/inductor/test_cudagraph_trees.py", line 778, in test_workspace_allocation_error
foo(*inps)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/eval_frame.py", line 292, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/test/inductor/test_cudagraph_trees.py", line 770, in foo
@torch.compile()
File "/data/users/ezyang/d/pytorch/torch/_dynamo/eval_frame.py", line 292, in _fn
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_functorch/aot_autograd.py", line 3721, in forward
return compiled_fn(full_args)
File "/data/users/ezyang/d/pytorch/torch/_functorch/aot_autograd.py", line 1439, in g
return f(*args)
File "/data/users/ezyang/d/pytorch/torch/_functorch/aot_autograd.py", line 2394, in runtime_wrapper
all_outs = call_func_with_args(
File "/data/users/ezyang/d/pytorch/torch/_functorch/aot_autograd.py", line 1463, in call_func_with_args
out = normalize_as_list(f(args))
File "/data/users/ezyang/d/pytorch/torch/_functorch/aot_autograd.py", line 1548, in rng_functionalization_wrapper
return compiled_fw(args)
File "/data/users/ezyang/d/pytorch/torch/_inductor/compile_fx.py", line 454, in run
return model(new_inputs)
File "/data/users/ezyang/d/pytorch/torch/_inductor/compile_fx.py", line 496, in run
return compiled_fn(new_inputs)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 360, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, static_input_idxs, *args, **kwargs)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 384, in cudagraphify
return manager.add_function(
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 1856, in add_function
return fn, fn(inputs)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 1676, in run
out = self._run(new_inputs, function_id)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 1717, in _run
return self.run_eager(new_inputs, function_id)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 1832, in run_eager
return node.run(new_inputs)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 600, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, new_storages)
File "/data/users/ezyang/d/pytorch/torch/_inductor/cudagraph_trees.py", line 1543, in check_memory_pool
raise RuntimeError(msg)
RuntimeError: These live storage data ptrs are in the cudagraph pool but not accounted for as an output of cudagraph trees:
Data Pointer: 140691082575872, history:
File "??", line 0, in torch::unwind::unwind()
File "??", line 0, in torch::CapturedTraceback::gather(bool, bool, bool)
File "Module.cpp", line 0, in gather_with_cpp()
File "CUDACachingAllocator.cpp", line 0, in c10::cuda::CUDACachingAllocator::Native::DeviceCachingAllocator::malloc(int, unsigned long, CUstream_st*)
File "crtstuff.c", line 0, in c10::cuda::CUDACachingAllocator::Native::NativeCachingAllocator::malloc(void**, int, unsigned long, CUstream_st*)
File "crtstuff.c", line 0, in c10::cuda::CUDACachingAllocator::Native::NativeCachingAllocator::allocate(unsigned long) const
File "??", line 0, in at::cuda::getCurrentCUDABlasHandle()
File "offloadstuff.c", line 0, in void at::cuda::blas::gemm<float>(char, char, long, long, long, at::OpMathType<float>::type, float const*, long, float const*, long, at::OpMathType<float>::type, float*, long)
File "Blas.cpp", line 0, in at::native::(anonymous namespace)::addmm_out_cuda_impl(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::Scalar const&, c10::Scalar const&, at::native::(anonymous namespace)::Activation) [clone .isra.0]
File "??", line 0, in at::native::structured_mm_out_cuda::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&)
File "RegisterCUDA.cpp", line 0, in at::(anonymous namespace)::wrapper_CUDA_mm_out_out(at::Tensor const&, at::Tensor const&, at::Tensor&)
File "ADInplaceOrViewType_0.cpp", line 0, in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor& (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor&), &torch::ADInplaceOrView::(anonymous namespace)::mm_out_out>, at::Tensor&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor&> >, at::Tensor& (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor&)
File "VariableType_1.cpp", line 0, in torch::autograd::VariableType::(anonymous namespace)::mm_out_out(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor&)
File "??", line 0, in at::_ops::mm_out::call(at::Tensor const&, at::Tensor const&, at::Tensor&)
File "python_torch_functions_1.cpp", line 0, in torch::autograd::THPVariable_mm(_object*, _object*, _object*)
File "/usr/local/src/conda/python-3.10.11/Objects/methodobject.c", line 543, in cfunction_call
File "/usr/local/src/conda/python-3.10.11/Objects/call.c", line 305, in _PyObject_Call
File "/usr/local/src/conda/python-3.10.11/Python/ceval.c", line 5917, in do_call_core
File "/data/users/ezyang/d/pytorch/torch/utils/_device.py", line 76, in __torch_function__
return func(*args, **kwargs)
File "/usr/local/src/conda/python-3.10.11/Include/internal/pycore_ceval.h", line 46, in _PyEval_EvalFrame
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ezyang/local/d/pytorch-env/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/ezyang/d/pytorch/test/inductor/test_cudagraph_trees.py", line 783, in test_workspace_allocation_error
).run(str(e))
RuntimeError: Expected to find "at::cuda::getNewWorkspace" but did not find it
Searched string:
These live storage data ptrs are in the cudagraph pool but not accounted for as an output of cudagraph trees:
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Data Pointer: 140691082575872, history:
From CHECK: at::cuda::getNewWorkspace
======================================================================
FAIL: test_workspace_allocation_error (__main__.CudaGraphTreeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/ezyang/d/pytorch/test/inductor/test_cudagraph_trees.py", line 132, in tearDown
self.assertEqual(all_live_block_count(), 0)
File "/data/users/ezyang/d/pytorch/torch/testing/_internal/common_utils.py", line 3096, in assertEqual
raise error_metas[0].to_error(
AssertionError: Scalars are not equal!
Expected 0 but got 1.
Absolute difference: 1
Relative difference: inf
----------------------------------------------------------------------
Ran 39 tests in 77.199s
```
Build config
```
$ python -c "import torch.__config__; print(torch.__config__.show())"
PyTorch built with:
- GCC 11.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 12.0
- NVCC architecture flags: -gencode;arch=compute_80,code=sm_80
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.0, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=ON, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
```
cc @mcarilli @eellison
### Versions
main
| 3 |
2,344 | 103,367 |
RuntimeError: CUDA error: unknown error
|
oncall: distributed, triaged, module: fsdp
|
### π Describe the bug

### Versions
RuntimeError: CUDA error: unknown error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 8193) of binary: /home/win10-ubuntu/anaconda3/envs/vicuna-7b/bin/python3.10
Traceback (most recent call last):
File "/home/win10-ubuntu/anaconda3/envs/vicuna-7b/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/win10-ubuntu/anaconda3/envs/vicuna-7b/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/win10-ubuntu/anaconda3/envs/vicuna-7b/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/win10-ubuntu/anaconda3/envs/vicuna-7b/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/win10-ubuntu/anaconda3/envs/vicuna-7b/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/win10-ubuntu/anaconda3/envs/vicuna-7b/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
fastchat/train/train_mem.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-06-10_20:45:09
host : DESKTOP-LA7GLEG.localdomain
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 8193)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 6 |
2,345 | 103,359 |
Libtorch compile error when defining D_GLIBCXX_DEBUG
|
module: build, module: abi, triaged
|
### π Describe the bug
I'm not sure if this is expected behaviour, but adding the compiler flag `-D_GLIBCXX_DEBUG` results in a `use of deleted function` error.
Minimal reproducible example here:
main.cpp:
```c++
#include <torch/torch.h>
int main() {
return 0;
}
```
CMakeLists.txt:
```
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(main main.cpp)
target_link_libraries(main "${TORCH_LIBRARIES}")
set_property(TARGET main PROPERTY CXX_STANDARD 14)
target_compile_options(main PUBLIC
-Wall -Wextra -D_GLIBCXX_DEBUG
)
```
cmake generated output:
```
$cmake -DCMAKE_PREFIX_PATH=/usr/local/libtorch ..
-- The C compiler identification is GNU 11.3.0
-- The CXX compiler identification is GNU 11.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found CUDA: /usr/local/cuda (found version "11.8")
-- The CUDA compiler identification is NVIDIA 11.8.89
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Caffe2: CUDA detected: 11.8
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.8
-- /usr/local/cuda/lib64/libnvrtc.so shorthash is 672ee683
-- USE_CUDNN is set to 0. Compiling without cuDNN support
-- Autodetected CUDA architecture(s): 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
-- Found Torch: /usr/local/libtorch/lib/libtorch.so
-- Configuring done (3.3s)
-- Generating done (0.0s)
-- Build files have been written to: /home/tuero/Documents/test/test_torch/build
```
Compiler error:
```
%$ make
[ 50%] Building CXX object CMakeFiles/main.dir/main.cpp.o
In file included from /usr/local/libtorch/include/torch/csrc/autograd/variable.h:11,
from /usr/local/libtorch/include/torch/csrc/autograd/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/all.h:7,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/tuero/Documents/test/test_torch/main.cpp:1:
/usr/local/libtorch/include/ATen/NamedTensorUtils.h: In function βbool at::has_names(at::ITensorListRef)β:
/usr/local/libtorch/include/ATen/NamedTensorUtils.h:15:35: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
15 | return std::any_of(tensors.begin(), tensors.end(), [](const Tensor& t) {
| ~~~~~~~~~~~~~^~
In file included from /usr/local/libtorch/include/ATen/WrapDimUtils.h:3,
from /usr/local/libtorch/include/ATen/TensorNames.h:3,
from /usr/local/libtorch/include/ATen/NamedTensorUtils.h:3,
from /usr/local/libtorch/include/torch/csrc/autograd/variable.h:11,
from /usr/local/libtorch/include/torch/csrc/autograd/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/all.h:7,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/tuero/Documents/test/test_torch/main.cpp:1:
/usr/local/libtorch/include/ATen/core/IListRef.h:362:7: note: βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β is implicitly deleted because the default definition would be ill-formed:
362 | class IListRefIterator {
| ^~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:362:7: error: use of deleted function βc10::IListRefIterator<at::Tensor>::Payload::~Payload()β
/usr/local/libtorch/include/ATen/core/IListRef.h:482:9: note: βc10::IListRefIterator<at::Tensor>::Payload::~Payload()β is implicitly deleted because the default definition would be ill-formed:
482 | union Payload {
| ^~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:483:25: error: union member βc10::IListRefIterator<at::Tensor>::Payload::boxed_iteratorβ with non-trivial βc10::impl::ListIterator<T, Iterator>::~ListIterator() [with T = at::Tensor; Iterator = __gnu_debug::_Safe_iterator<__gnu_cxx::__normal_iterator<c10::IValue*, std::__cxx1998::vector<c10::IValue, std::allocator<c10::IValue> > >, std::__debug::vector<c10::IValue>, std::random_access_iterator_tag>]β
483 | boxed_iterator_type boxed_iterator;
| ^~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:485:32: error: union member βc10::IListRefIterator<at::Tensor>::Payload::materialized_iteratorβ with non-trivial β__gnu_debug::_Safe_iterator<__gnu_cxx::__normal_iterator<const std::reference_wrapper<const at::Tensor>*, std::__cxx1998::vector<std::reference_wrapper<const at::Tensor>, std::allocator<std::reference_wrapper<const at::Tensor> > > >, std::__debug::vector<std::reference_wrapper<const at::Tensor>, std::allocator<std::reference_wrapper<const at::Tensor> > >, std::random_access_iterator_tag>::~_Safe_iterator()β
485 | materialized_iterator_type materialized_iterator;
| ^~~~~~~~~~~~~~~~~~~~~
In file included from /usr/local/libtorch/include/torch/csrc/autograd/variable.h:11,
from /usr/local/libtorch/include/torch/csrc/autograd/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/all.h:7,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/tuero/Documents/test/test_torch/main.cpp:1:
/usr/local/libtorch/include/ATen/NamedTensorUtils.h:15:50: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
15 | return std::any_of(tensors.begin(), tensors.end(), [](const Tensor& t) {
| ~~~~~~~~~~~^~
In file included from /usr/local/libtorch/include/ATen/core/dispatch/OperatorEntry.h:12,
from /usr/local/libtorch/include/ATen/core/dispatch/Dispatcher.h:6,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/types.h:12,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:4,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/all.h:9,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/tuero/Documents/test/test_torch/main.cpp:1:
/usr/local/libtorch/include/ATen/core/dispatch/DispatchKeyExtractor.h: In member function βvoid c10::detail::MultiDispatchKeySet::operator()(at::ITensorListRef)β:
/usr/local/libtorch/include/ATen/core/dispatch/DispatchKeyExtractor.h:79:28: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
79 | for (const auto& x : xs) {
| ^~
/usr/local/libtorch/include/ATen/core/dispatch/DispatchKeyExtractor.h:79:28: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
In file included from /usr/local/libtorch/include/ATen/WrapDimUtils.h:3,
from /usr/local/libtorch/include/ATen/TensorNames.h:3,
from /usr/local/libtorch/include/ATen/NamedTensorUtils.h:3,
from /usr/local/libtorch/include/torch/csrc/autograd/variable.h:11,
from /usr/local/libtorch/include/torch/csrc/autograd/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/autograd.h:3,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/all.h:7,
from /usr/local/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/tuero/Documents/test/test_torch/main.cpp:1:
/usr/local/libtorch/include/ATen/core/IListRef.h: In instantiation of βc10::IListRef<T>::iterator c10::IListRef<T>::begin() const [with T = at::Tensor; c10::IListRef<T>::iterator = c10::IListRefIterator<at::Tensor>]β:
/usr/local/libtorch/include/ATen/NamedTensorUtils.h:15:35: required from here
/usr/local/libtorch/include/ATen/core/IListRef.h:561:5: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
561 | TORCH_ILISTREF_UNWRAP(tag_, { return this_.begin(); });
| ^~~~~~~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:561:5: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
561 | TORCH_ILISTREF_UNWRAP(tag_, { return this_.begin(); });
| ^~~~~~~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:561:5: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
561 | TORCH_ILISTREF_UNWRAP(tag_, { return this_.begin(); });
| ^~~~~~~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h: In instantiation of βc10::IListRef<T>::iterator c10::IListRef<T>::end() const [with T = at::Tensor; c10::IListRef<T>::iterator = c10::IListRefIterator<at::Tensor>]β:
/usr/local/libtorch/include/ATen/NamedTensorUtils.h:15:50: required from here
/usr/local/libtorch/include/ATen/core/IListRef.h:565:5: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
565 | TORCH_ILISTREF_UNWRAP(tag_, { return this_.end(); });
| ^~~~~~~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:565:5: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
565 | TORCH_ILISTREF_UNWRAP(tag_, { return this_.end(); });
| ^~~~~~~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h:565:5: error: use of deleted function βc10::IListRefIterator<at::Tensor>::~IListRefIterator()β
565 | TORCH_ILISTREF_UNWRAP(tag_, { return this_.end(); });
| ^~~~~~~~~~~~~~~~~~~~~
/usr/local/libtorch/include/ATen/core/IListRef.h: In instantiation of βc10::IListRefIterator<T>::IListRefIterator(c10::IListRefIterator<T>::unboxed_iterator_type) [with T = at::Tensor; c10::IListRefIterator<T>::unboxed_iterator_type = const at::Tensor*]β:
/usr/local/libtorch/include/ATen/core/IListRef.h:561:5: required from βc10::IListRef<T>::iterator c10::IListRef<T>::begin() const [with T = at::Tensor; c10::IListRef<T>::iterator = c10::IListRefIterator<at::Tensor>]β
/usr/local/libtorch/include/ATen/NamedTensorUtils.h:15:35: required from here
/usr/local/libtorch/include/ATen/core/IListRef.h:433:78: error: use of deleted function βc10::IListRefIterator<at::Tensor>::Payload::~Payload()β
433 | IListRefIterator(unboxed_iterator_type unboxed) : tag_(IListRefTag::Unboxed) {
| ^
make[2]: *** [CMakeFiles/main.dir/build.make:76: CMakeFiles/main.dir/main.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/main.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
```
Removing `-D_GLIBCXX_DEBUG` makes the build work as expected.
### Versions
```
$ cat /usr/local/libtorch/build-version
2.0.1+cu118
$ g++ --version
g++ (Ubuntu 11.3.0-6ubuntu1) 11.3.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
cc @malfet @seemethere
| 0 |
2,346 | 103,354 |
Add a requirements.txt for windows pip packages
|
triaged, module: devx
|
**Goal**: Enable pytorch devs to add/update pip packages for windows builds and tests in a single PR
### Context
Today, if you're introducing a PR that takes a dependency on a new python package, for Windows you need to:
1. Add a manual, temporary pip install somewhere in the workflow so that your PR passes
2. Create a test-infra PR that adds that pip package to the AMI here: https://github.com/pytorch/test-infra/blob/main/aws/ami/windows/scripts/Installers/Install-Pip-Dependencies.ps1
3. Remove the manual pip install line from step 1
That's pretty tedious
### The Ask
Instead we should do the following:
1. Define a dedicated requirements.txt file for Windows in pytorch/pytorch, following the pattern used for https://github.com/pytorch/pytorch/blob/main/.github/requirements/pip-requirements-macOS.txt
2. In that [ps1 file](https://github.com/pytorch/test-infra/blob/main/aws/ami/windows/scripts/Installers/Install-Pip-Dependencies.ps1), download the requirements.txt file from the previous step (you'll need to fetch it from the pytorch/pytorch repo) and install the packages it specifies.
3. In the Windows build & test workflows, also always install packages from the requirements.txt file if they're not already installed.
### Benefits
1. This would make future installations/upgrades easy by putting all the package changes in a single folder
2. We have dependencies defined in one place
3. We have safe guard in case the dependencies are not found in the AMI (which is expected for a small time window when a new package/upgrade is freshly added)
cc @kit1980 @huydhn @clee2000
| 0 |
2,347 | 103,352 |
[feature request] Native method for iterating Python items of tensors: `iteritems()` and a new `tensor.item(i, j, k, ...)` method
|
feature, triaged, module: python frontend
|
### π The feature, motivation and pitch
The idea is to iterate the Python scalar values of a tensor without having to call `.item()` on each element. OP: https://github.com/pytorch/pytorch/pull/103339#discussion_r1224826554
This is mainly needed for 1-D tensors, although it could also be useful in other contexts in the future if string arrays are supported: https://github.com/pytorch/pytorch/issues/101699
It is only relevant for large tensors/loops, where materializing a Python list first creates too many Python objects.
Related, on slow item indexing: https://github.com/pytorch/pytorch/issues/29973, plus a proposal for a fast `tensor.item(i, j, k, ...)` indexing method that returns Python int/float objects directly, without `tensor[i, j, k, ...].item()` first creating an extra tensor object and only then unpacking it.
Related, on supporting `memoryview` on tensors: https://github.com/pytorch/pytorch/issues/43949; this method could then be implemented by simply returning a memoryview, which supports iteration in Python.
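A hypothetical usage sketch of the proposed APIs (neither `iteritems()` nor a multi-index `item(i, j, ...)` exists today), shown next to the current equivalents that allocate an intermediate tensor per element:
```python
# Proposed APIs are shown commented out; only the current equivalents run.
import torch

t = torch.randn(1000, 3)

# proposed: for v in t.reshape(-1).iteritems(): ...
# proposed: v = t.item(5, 2)

# current equivalents, which allocate an intermediate 0-d tensor per element:
v = t[5, 2].item()
vs = [x.item() for x in t.reshape(-1)]
```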
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| 8 |
2,348 | 103,343 |
mps and cpu give far different results when training a transformer.
|
triaged, module: mps
|
### π Describe the bug
When training a transformer model on the Mac CPU, this is the resulting loss log:
994,tensor(13.4385),2.598125166893005,2.676572799682617
1988,tensor(13.2934),2.5872674012184143,2.6137495040893555
2000,tensor(13.3104),2.5885423374176026,2.628082513809204
2982,tensor(12.8851),2.5560683608055115,2.675567865371704
3976,tensor(12.6861),2.54050742149353,2.4734363555908203
4000,tensor(12.8042),2.549770863056183,2.584007501602173
4970,tensor(12.6474),2.537447814941406,2.528282403945923
When using MPS, with the same code and data:
994,tensor(1.4541),0.3743850642442703,0.42320576310157776
1988,tensor(1.4568),0.3762083804607391,0.3572617173194885
2000,tensor(1.4540),0.37433584868907926,0.3778816759586334
2982,tensor(1.4524),0.3732233664393425,0.3656735122203827
3976,tensor(1.4476),0.36990807741880416,0.3881590962409973
4000,tensor(1.4447),0.36788938045501707,0.3148590922355652
4970,tensor(1.4531),0.373729208111763,0.3972059488296509
[test.zip](https://github.com/pytorch/pytorch/files/11710031/test.zip)
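To localize where the divergence starts, a small diagnostic sketch like the one below could be used (assuming an MPS build is available and identical weights are copied to both devices), comparing a single forward pass before looking at the training loop:
```python
# Diagnostic sketch: compare one forward pass of the same layer on CPU and MPS.
import copy
import torch

layer_cpu = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True).eval()
layer_mps = copy.deepcopy(layer_cpu).to("mps")

x = torch.randn(8, 16, 64)
with torch.no_grad():
    out_cpu = layer_cpu(x)
    out_mps = layer_mps(x.to("mps")).cpu()

print((out_cpu - out_mps).abs().max())  # how far do the two backends diverge?
```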
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230608
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.3 (main, Apr 19 2023, 18:49:55) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0.dev20230608
[pip3] torchaudio==2.1.0.dev20230608
[pip3] torchvision==0.16.0.dev20230608
[conda] numpy 1.24.3 py311hb57d4eb_0
[conda] numpy-base 1.24.3 py311h1d85a46_0
[conda] pytorch 2.1.0.dev20230608 py3.11_0 pytorch-nightly
[conda] torchaudio 2.1.0.dev20230608 py311_cpu pytorch-nightly
[conda] torchvision 0.16.0.dev20230608 py311_cpu pytorch-nightly
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 4 |
2,349 | 103,336 |
python test/inductor/test_split_cat_fx_passes.py -k test_consecutive_split_merge fails, but running all tests together succeeds
|
triaged, oncall: pt2
|
### π Describe the bug
Something is wrong with the harness. I plan to disable these tests for now.
cc @msaroufim @wconstab @bdhirsh @anijain2305 @devashishshankar
### Versions
main
| 0 |
2,350 | 103,332 |
Improve `_group_tensors_by_device_and_dtype`
|
module: optimizer, triaged, better-engineering, actionable, module: mta
|
follow-up https://github.com/pytorch/pytorch/pull/100007
https://github.com/pytorch/pytorch/blob/6fa2d41dc7dfcbff37df7fb5517e6644eb3d74ab/torch/csrc/Module.cpp#L1745-L1746 could be made cleaner by implementing an appropriate type caster for `at::ScalarType`.
cc @vincentqb @jbschlosser @albanD @janeyx99 @mcarilli
| 0 |
2,351 | 103,329 |
RuntimeError: vmapping a function that does in-place arithmetic on a zero-initialized tensor raises "vmap: inplace arithmetic(self, *extra_args) is not possible"
|
triaged, module: functorch
|
### π Describe the bug
When using torch.vmap to batch-process a function that includes inplace arithmetic operations on a zero-initialized tensor, an error "vmap: inplace arithmetic(self, *extra_args) is not possible" is raised.
Specifically, an error is raised when initializing the zero tensor using torch.zeros(x.shape), but no error is raised if using torch.zeros_like(x).
```
import torch
def func(x, y):
mat = torch.zeros(x.shape) # Error occurs on this line
# mat = torch.zeros_like(x) # No error occurs on this line
mat[0, 0] = mat[0, 1] + x[0, 0]
return mat
input = torch.ones((10, 5, 6))
inputy = torch.ones((10, 5, 6))
batched_func = torch.vmap(func, in_dims=(0, 0))
batched_func(input, inputy)
```
```
Error message:
RuntimeError: vmap: inplace arithmetic(self, *extra_args) is not possible because there exists a Tensor `other` in extra_args that has more elements than `self`. This happened due to `other` being vmapped over but `self` not being vmapped over in a vmap. Please try to use out-of-place operators instead of inplace arithmetic. If said operator is being called inside the PyTorch framework, please file a bug report instead
```
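A minimal workaround sketch, relying on the observation above: allocate the buffer from the vmapped input so it carries the batch dimension, instead of creating an unbatched tensor with `torch.zeros(x.shape)`:
```python
# Workaround sketch: zeros_like(x) (or x.new_zeros(...)) produces a batched
# buffer under vmap, so the in-place write with a batched right-hand side fits.
import torch

def func(x, y):
    mat = torch.zeros_like(x)  # or x.new_zeros(x.shape)
    mat[0, 0] = mat[0, 1] + x[0, 0]
    return mat

batched_func = torch.vmap(func, in_dims=(0, 0))
out = batched_func(torch.ones(10, 5, 6), torch.ones(10, 5, 6))
```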
### Versions
PyTorch version: 2.0.1
Operating system: Mac Monterey 12.6.2
Python version: 3.10.4
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 4 |
2,352 | 103,322 |
Disabling ALL TestOptim on the dynamo config
|
high priority, module: optimizer, triaged, skipped, module: dynamo
|
### π Describe the bug
https://github.com/pytorch/pytorch/pull/102640 introduced flakiness into the dynamo optim tests.
There was a followup PR https://github.com/pytorch/pytorch/pull/103066 to try to disable the tests in attempt to restore health, but that did not sufficiently cover all forms of flakiness as there were further reports of flaky tests after.
This issue tracks the fact that we currently disable ALL test_optim tests in the dynamo shard, which is probably undesirable, but is a necessary stopgap to restore CI health. The alternative is reverting https://github.com/pytorch/pytorch/pull/102640 and its forward fixes, like https://github.com/pytorch/pytorch/pull/103121.
### Versions
main
cc @ezyang @gchanan @zou3519 @vincentqb @jbschlosser @albanD @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @aakhundov
| 5 |
2,353 | 103,318 |
Custom autograd function causes a graph break
|
triaged, oncall: pt2
|
### π Describe the bug
Under what conditions do custom autograd functions cause graph breaks?
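For reference, a hypothetical minimal repro (not the xformers memory-efficient attention kernel) for checking whether a custom `autograd.Function` graph-breaks under `torch.compile`; `fullgraph=True` turns a silent break into an error:
```python
# Diagnostic sketch: a toy custom autograd Function compiled with fullgraph=True,
# which raises instead of silently graph-breaking.
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

@torch.compile(fullgraph=True)
def f(x):
    return Square.apply(x)

f(torch.randn(4, requires_grad=True))
```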
### Error logs
I've got a graph break when using mem_efficient_attention MHA from xformers:
https://github.com/facebookresearch/xformers/issues/765
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.107+-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2199.998
BogoMIPS: 4399.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB
L1i cache: 32 KiB
L2 cache: 256 KiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 4 |
2,354 | 103,316 |
binary_cross_entropy (loss) seems to be giving incorrect values for very negative logits
|
module: nn, triaged
|
### π Describe the bug
Please see the following code, which computes the loss both manually (using PyTorch ops) and via the `F.binary_cross_entropy` API, and compares the resulting gradients.
```
import torch
from torch import nn
from torch.nn import functional as F
x = nn.Parameter(torch.tensor([-50., -10., -5., -2., 0., 2., 5., 10.]), requires_grad=True)
optimizer = torch.optim.Adam([x])
optimizer.zero_grad()
xs = x.sigmoid()
loss = -xs.log().mean()
loss.backward()
print(x.grad)
optimizer.zero_grad()
loss = F.binary_cross_entropy(x.sigmoid(), torch.ones_like(x))
loss.backward()
print(x.grad)
```
This code prints.
```
tensor([-1.2500e-01, -1.2499e-01, -1.2416e-01, -1.1010e-01, -6.2500e-02,
-1.4900e-02, -8.3660e-04, -5.6773e-06])
tensor([-2.4109e-11, -1.2499e-01, -1.2416e-01, -1.1010e-01, -6.2500e-02,
-1.4900e-02, -8.3660e-04, -5.6773e-06])
```
Note that for the input `-50`, the gradient produced through the `binary_cross_entropy` API is a vanishingly small negative value, `-2.4109e-11`, instead of the expected larger negative value `-1.2500e-01`.
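For comparison, a sketch of the numerically stabler logits path (assuming an all-ones target as above), where the gradient at `-50` stays close to `-1/8` instead of underflowing through `sigmoid().log()`:
```python
# Comparison sketch: compute the loss directly from logits, avoiding the
# sigmoid saturation that flattens the gradient for very negative inputs.
import torch
from torch import nn
from torch.nn import functional as F

x = nn.Parameter(torch.tensor([-50., -10., -5., -2., 0., 2., 5., 10.]))
loss = F.binary_cross_entropy_with_logits(x, torch.ones_like(x))
loss.backward()
print(x.grad)  # gradient at -50 stays near -0.125 instead of ~0
```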
### Versions
1.12.1
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 5 |
2,355 | 103,315 |
Add Half support for softmax and log_softmax on CPU
|
module: cpu, open source, ciflow/trunk, release notes: nn, ciflow/periodic, ciflow/mps, module: inductor
|
Add Half support for softmax and log_softmax on CPU.
Note: This introduces a correctness issue with MPS https://github.com/pytorch/pytorch/issues/111416 and https://github.com/pytorch/pytorch/issues/111479.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 6 |
2,356 | 103,313 |
Fast kernels for low rank matrix multiplication
|
triaged, module: linear algebra
|
### π The feature, motivation and pitch
I work in the robotics perception field, and we have been exploring using PyTorch in production on the control side, where the most commonly used primitives are camera poses, projections, and quaternion arithmetic. In real-time production these boil down to a small set of matmul operations on sizes such as 3x1, 4x1, 3x3, and 4x4.
With some colleagues we put together in kornia a small package for [Lie Algebra](https://github.com/kornia/kornia/tree/master/kornia/geometry/liegroup), Quaternion, etc., based on a collaboration with the [Sophus](https://github.com/strasdat/Sophus) team. We ran some internal benchmarks and ended up turning down the kornia/PyTorch implementation because of its slowness for small matrix multiplications.
Sophus is based on C++ Eigen. In parallel I did a quick public [benchmark](https://discuss.pytorch.org/t/matmul-slow-for-small-matrices/168425) comparing PyTorch and NumPy, and (unless I am missing something; happy to expand/improve it) NumPy is still a clear winner, which prevents us from using PyTorch on real-time production robotics platforms.
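For reference, a minimal micro-benchmark sketch along those lines (timings are not carefully controlled; sizes and iteration counts are placeholders):
```python
# Micro-benchmark sketch: time small fixed-size matmuls in NumPy vs PyTorch.
import timeit
import numpy as np
import torch

a_np, b_np = np.random.rand(4, 4), np.random.rand(4, 4)
a_t, b_t = torch.from_numpy(a_np), torch.from_numpy(b_np)

print("numpy:", timeit.timeit(lambda: a_np @ b_np, number=100_000))
print("torch:", timeit.timeit(lambda: a_t @ b_t, number=100_000))
```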
### Alternatives
Alternatives I'm thinking to solve my problem:
- Numba (which removes pytorch out of the equation)
- Custom c++/cuda kernels to mimic Eigen (which in kornia we don't have bandwidth to support but happy to contribute (with guidance) to pytorch core.
- Custom Python/Triton kernels (which we can experiment/maintain from kornia side)
### Additional context
Just a screenshot of the [benchmark](https://discuss.pytorch.org/t/matmul-slow-for-small-matrices/168425) mentioned above:

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 11 |
2,357 | 103,312 |
setup.py fails to pass USE_ROCM to CAFFE2 build
|
module: rocm, triaged
|
### π Describe the bug
When building pytorch main branch (978a2f2b276b51f615aa860d47fadd16a284b2f6) with:
```
python tools/amd_build/build_amd.py
export USE_ROCM=1
export BUILD_CAFFE2=1
python setup.py develop
```
Following compiler error is produced:
```
[1/10] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o
/usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_C10D_MPI -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -I../caffe2/../third_party -Icaffe2/../aten/src -I../torch/csrc -I../third_party/miniz-2.1.0 -I../third_party/kineto/libkineto/include -I../third_party/kineto/libkineto/src -I../aten/src/ATen/.. -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/ittapi/src/ittnotify -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -Ithird_party/ideep/mkl-dnn/third_party/oneDNN/include -I../third_party/ideep/mkl-dnn/third_party/oneDNN/src/../include -I../third_party/fmt/include -I../third_party/flatbuffers/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party/ittapi/include -isystem ../cmake/../third_party/eigen -isystem /opt/cray/pe/mpich/8.1.21/ofi/cray/10.0/include -isystem ../third_party/ideep/mkl-dnn/third_party/oneDNN/include -isystem ../third_party/ideep/include -isystem ../third_party/ideep/mkl-dnn/include -isystem include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter 
-Wno-unused-function -Wno-unused-result -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -Wno-sign-compare -pthread -DASMJIT_STATIC -fopenmp -std=gnu++17 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o -c ../torch/csrc/jit/ir/ir.cpp
../torch/csrc/jit/ir/ir.cpp: In member function βbool torch::jit::Node::hasSideEffects() constβ:
../torch/csrc/jit/ir/ir.cpp:1191:10: error: βhipβ has not been declared
case hip::set_stream:
^~~
../torch/csrc/jit/ir/ir.cpp:1192:10: error: βhipβ has not been declared
case hip::_set_device:
^~~
../torch/csrc/jit/ir/ir.cpp:1193:10: error: βhipβ has not been declared
case hip::_current_device:
^~~
../torch/csrc/jit/ir/ir.cpp:1194:10: error: βhipβ has not been declared
case hip::synchronize:
^~~
At global scope:
cc1plus: warning: unrecognized command line option β-Wno-aligned-allocation-unavailableβ
cc1plus: warning: unrecognized command line option β-Wno-unused-private-fieldβ
cc1plus: warning: unrecognized command line option β-Wno-invalid-partial-specializationβ
ninja: build stopped: subcommand failed.
```
After changing lines torch/csrc/jit/ir/ir.cpp:1191-1194 from:
```
#if !defined(USE_ROCM)
case hip::set_stream:
case hip::_set_device:
case hip::_current_device:
case hip::synchronize:
#endif
```
to:
```
#if !defined(USE_ROCM)
case c10::cuda::set_stream:
case c10::cuda::_set_device:
case c10::cuda::_current_device:
case c10::cuda::synchronize:
#endif
```
it manages to build.
It seems that when building `caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o`, CMake does not pass `USE_ROCM` to the preprocessor.
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git978a2f2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 0.0.0
OS: Red Hat Enterprise Linux 8.6 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10)
Clang version: 15.0.0 (324a8e7de6a18594c06a0ee5d8c0eda2109c6ac6)
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.9.13 (main, Aug 2 2022, 03:25:18) [GCC 9.3.0 20200312 (Cray Inc.)] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7542 32-Core Processor
Stepping: 0
CPU MHz: 2900.000
CPU max MHz: 2900.0000
CPU min MHz: 1500.0000
BogoMIPS: 5789.48
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.0a0+git978a2f2
[conda] Could not collect
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo
| 1 |
2,358 | 103,306 |
DTensor uneven sharding corner cases.
|
oncall: distributed, triaged
|
### π Describe the bug
While debugging activation checkpointing's unit tests I found an interesting corner case.
For example:
```
x = torch.rand(5, 10)
y = DTensor.from_local(x).redistribute(device_mesh=mesh, placements=[Shard(0)]).redistribute(device_mesh=mesh, placements=[Replicate()]).to_local()
# y.size() is not equal to x.size()
```
### Versions
If world size = 4, y sizes are:
```
y size: torch.Size([8, 10]) (rank 1)
y size: torch.Size([8, 10]) (rank 2)
y size: torch.Size([8, 10]) (rank 0)
y size: torch.Size([8, 10]) (rank 3)
```
So this is a bug in the uneven sharding handling.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,359 | 103,305 |
distributed.gather shape constraints
|
oncall: distributed, triaged, topic: docs
|
### π The doc issue
For distributed.gather(tensor, tensorlist, **kwargs) with backend NCCL
It is not documented what the shape constraints are for the gather tensor_list.
I thought that for the destination call, the tensors in tensor_list simply had to match the size of the input tensor of each corresponding rank, but this is not the case.
I got an invalid-shape error with a tensor of shape (70, 1) on rank 0, a tensor of shape (76, 1) on rank 1,
and a tensor_list on rank 1 (the destination) holding a (70, 1) tensor in position 0 and a (76, 1) tensor in position 1.
After searching for the cause, I found a forum message about a somewhat similar issue with all_gather stating that the shapes of the tensors in tensor_list must all be equal. Is that correct? Or do they only need to be non-increasing (so that the last tensor can be smaller)? I worked around the issue using asynchronous send and receive instead of a gather, but this could do with a better specification; see the sketch below.
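For reference, a hedged workaround sketch, assuming the constraint really is that all gathered tensors must have equal shapes: pad every rank's tensor to the global maximum length, gather, and trim at the destination using the true lengths.
```python
# Workaround sketch: pad to a common length before dist.gather, then trim at dst.
import torch
import torch.distributed as dist

def gather_uneven(tensor, dst=0):
    world_size = dist.get_world_size()
    # Share the true per-rank lengths so the destination can trim the padding.
    length = torch.tensor([tensor.shape[0]], device=tensor.device)
    lengths = [torch.zeros_like(length) for _ in range(world_size)]
    dist.all_gather(lengths, length)
    max_len = int(torch.stack(lengths).max())

    padded = torch.zeros(max_len, *tensor.shape[1:],
                         device=tensor.device, dtype=tensor.dtype)
    padded[: tensor.shape[0]] = tensor

    gather_list = None
    if dist.get_rank() == dst:
        gather_list = [torch.zeros_like(padded) for _ in range(world_size)]
    dist.gather(padded, gather_list, dst=dst)

    if gather_list is not None:
        return [t[: int(n)] for t, n in zip(gather_list, lengths)]
    return None
```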
### Suggest a potential alternative/fix
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,360 | 103,276 |
Dynamo trouble shooting dead link
|
good first issue, triaged, topic: docs, oncall: pt2, module: dynamo
|
### π The doc issue
The dynamo page https://pytorch.org/docs/stable/dynamo/troubleshooting.html#accuracy-debugging link to TROUBLESHOOTING results in a dead link for me.
### Suggest a potential alternative/fix
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @aakhundov
| 5 |
2,361 | 103,272 |
oneDNN kernel fails to compile
|
triaged, module: mkldnn
|
### π Describe the bug
I am trying to compile the latest PyTorch master on Arch Linux with CUDA, MKL and oneDNN. The process is not sufficiently documented.
First I gave up on TensorRT, because PyTorch does not support v8+. After that I still got errors, but I found my error in an open issue which suggested reinstalling gcc-10 because of regressions. I did that and kept going. Now I have hit other errors; I cleaned up all build files and ccache but am stuck at this wall.
I have manually installed CUDA 11.8, the latest cuDNN 8 and NCCL 2.16.5, and also the system packages for MKL and oneDNN 2023.1.
There are more than 2000 lines of error but I'll provide an excerpt:
```cc
/home/psi/git/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/src/../include/oneapi/dnnl/dnnl_types.h:3091:3: error: conflicting declaration 'typedef enum dnnl_stream_flags_t dnnl_stream_flags_t'
3091 | } dnnl_stream_flags_t;
| ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/oneapi/dnnl/dnnl_common.h:23,
from /usr/include/oneapi/dnnl/dnnl_common.hpp:32,
from /usr/include/oneapi/dnnl/dnnl_graph.hpp:20,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/graph_helper.h:3,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:1:
/usr/include/oneapi/dnnl/dnnl_common_types.h:165:3: note: previous declaration as 'typedef enum dnnl_stream_flags_t dnnl_stream_flags_t'
165 | } dnnl_stream_flags_t;
| ^~~~~~~~~~~~~~~~~~~
In file included from /home/psi/git/pytorch/build/third_party/ideep/mkl-dnn/third_party/oneDNN/include/oneapi/dnnl/dnnl_config.h:20,
from /usr/include/oneapi/dnnl/dnnl_common.h:24,
from /usr/include/oneapi/dnnl/dnnl_common.hpp:32,
from /usr/include/oneapi/dnnl/dnnl_graph.hpp:20,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/graph_helper.h:3,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:1:
/home/psi/git/pytorch/third_party/ideep/mkl-dnn/third_party/oneDNN/src/../include/oneapi/dnnl/dnnl_types.h:3139:3: error: conflicting declaration 'typedef struct dnnl_version_t dnnl_version_t'
3139 | } dnnl_version_t;
| ^~~~~~~~~~~~~~
In file included from /usr/include/oneapi/dnnl/dnnl_common.h:23,
from /usr/include/oneapi/dnnl/dnnl_common.hpp:32,
from /usr/include/oneapi/dnnl/dnnl_graph.hpp:20,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/graph_helper.h:3,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:1:
/usr/include/oneapi/dnnl/dnnl_common_types.h:213:3: note: previous declaration as 'typedef struct dnnl_version_t dnnl_version_t'
213 | } dnnl_version_t;
| ^~~~~~~~~~~~~~
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp: In constructor 'torch::jit::fuser::onednn::LlgaKernel::LlgaKernel(const torch::jit::Node*)':
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:32:34: error: 'class dnnl::graph::partition' has no member named 'get_in_ports'; did you mean 'get_input_ports'?
32 | nPartitionInputs_ = partition_.get_in_ports().size();
| ^~~~~~~~~~~~
| get_input_ports
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp: In member function 'void torch::jit::fuser::onednn::LlgaKernel::initializeConstantInputs()':
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:43:30: error: 'class dnnl::graph::partition' has no member named 'get_in_ports'; did you mean 'get_input_ports'?
43 | for (auto& lt : partition_.get_in_ports()) {
| ^~~~~~~~~~~~
| get_input_ports
In file included from /home/psi/git/pytorch/c10/util/Exception.h:4,
from /home/psi/git/pytorch/aten/src/ATen/core/Generator.h:11,
from /home/psi/git/pytorch/aten/src/ATen/CPUGeneratorImpl.h:3,
from /home/psi/git/pytorch/aten/src/ATen/Context.h:3,
from /home/psi/git/pytorch/aten/src/ATen/ATen.h:7,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/LlgaTensorImpl.h:3,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/operator.h:4,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/graph_helper.h:4,
from /home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:1:
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:55:45: error: expected primary-expression before '>' token
55 | value->type()->cast<TensorType>(),
| ^
/home/psi/git/pytorch/c10/macros/Macros.h:200:64: note: in definition of macro 'C10_UNLIKELY'
200 | #define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
| ^~~~
/home/psi/git/pytorch/c10/util/Exception.h:503:7: note: in expansion of macro 'C10_UNLIKELY_OR_CONST'
503 | if (C10_UNLIKELY_OR_CONST(!(cond))) { \
| ^~~~~~~~~~~~~~~~~~~~~
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:53:7: note: in expansion of macro 'TORCH_CHECK'
53 | TORCH_CHECK(
| ^~~~~~~~~~~
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:55:47: error: expected primary-expression before ')' token
55 | value->type()->cast<TensorType>(),
| ^
/home/psi/git/pytorch/c10/macros/Macros.h:200:64: note: in definition of macro 'C10_UNLIKELY'
200 | #define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
| ^~~~
/home/psi/git/pytorch/c10/util/Exception.h:503:7: note: in expansion of macro 'C10_UNLIKELY_OR_CONST'
503 | if (C10_UNLIKELY_OR_CONST(!(cond))) { \
| ^~~~~~~~~~~~~~~~~~~~~
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:53:7: note: in expansion of macro 'TORCH_CHECK'
53 | TORCH_CHECK(
| ^~~~~~~~~~~
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp: In member function 'std::map<long unsigned int, long int> torch::jit::fuser::onednn::LlgaKernel::initializeTensorIdToOccurence() const':
/home/psi/git/pytorch/torch/csrc/jit/codegen/onednn/kernel.cpp:69:30: error: 'const class dnnl::graph::partition' has no member named 'get_in_ports'; did you mean 'get_input_ports'?
69 | for (auto& lt : partition_.get_in_ports()) {
| ^~~~~~~~~~~~
| get_input_ports
At global scope:
cc1plus: note: unrecognized command-line option '-Wno-aligned-allocation-unavailable' may have been intended to silence earlier diagnostics
cc1plus: note: unrecognized command-line option '-Wno-unused-private-field' may have been intended to silence earlier diagnostics
cc1plus: note: unrecognized command-line option '-Wno-invalid-partial-specialization' may have been intended to silence earlier diagnostics
[5461/7068] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp.o^C
ninja: build stopped: interrupted by user.
```
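The conflicting declarations above appear to come from mixing the system oneDNN headers in /usr/include/oneapi/dnnl with the bundled copy under third_party/ideep/mkl-dnn. A possible workaround (untested here, so treat it as an assumption) is to keep the system headers off the include path for this build:
```
# Temporarily move the system oneDNN headers aside (restore them afterwards),
# then rebuild so that only the bundled third_party copy is found.
sudo mv /usr/include/oneapi/dnnl /usr/include/oneapi/dnnl.bak
python setup.py clean
USE_MKLDNN=1 python setup.py develop
```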
<details>
<summary>Cmake config:</summary>
```
/usr/bin/g++-10 /home/psi/git/pytorch/torch/abi-check.cpp -o /home/psi/git/pytorch/build/abi-check
Determined _GLIBCXX_USE_CXX11_ABI=1
Current compiler supports avx2 extension. Will build perfkernels.
Current compiler supports avx512f extension. Will build fbgemm.
Found CUDAToolkit: /opt/cuda/include (found version "11.8.89")
Caffe2: CUDA detected: 11.8
Caffe2: CUDA nvcc is: /opt/cuda/bin/nvcc
Caffe2: CUDA toolkit directory: /opt/cuda
Caffe2: Header version is: 11.8
/opt/cuda/lib64/libnvrtc.so shorthash is 672ee683
Autodetected CUDA architecture(s): 6.1 5.2
Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_52,code=sm_52
Building using own protobuf under third_party per request.
Use custom protobuf build.
3.13.0.0
Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/psi/git/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
Trying to find preferred BLAS backend of choice: MKL
MKL_THREADING = OMP
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
cmake/Modules/FindMKL.cmake:239 (FIND_PACKAGE)
cmake/Modules/FindMKL.cmake:334 (CHECK_ALL_LIBRARIES)
cmake/Dependencies.cmake:196 (find_package)
CMakeLists.txt:705 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
cmake/Modules/FindMKL.cmake:239 (FIND_PACKAGE)
cmake/Modules/FindMKL.cmake:334 (CHECK_ALL_LIBRARIES)
cmake/Dependencies.cmake:196 (find_package)
CMakeLists.txt:705 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
MKL libraries: /opt/intel/oneapi/mkl/latest/lib/intel64/libmkl_intel_lp64.so;/opt/intel/oneapi/mkl/latest/lib/intel64/libmkl_gnu_thread.so;/opt/intel/oneapi/mkl/latest/lib/intel64/libmkl_core.so;-fopenmp;/usr/lib/libpthread.a;/usr/lib/libm.so;/usr/lib/libdl.a
MKL include directory: /opt/intel/oneapi/mkl/latest/include
MKL OpenMP type: GNU
MKL OpenMP library: -fopenmp
Brace yourself, we are building NNPACK
NNPACK backend is x86-64
LLVM FileCheck Found: /usr/bin/FileCheck
git version: v1.6.1 normalized to 1.6.1
Version: 1.6.1
Performing Test HAVE_STD_REGEX -- success
Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
Performing Test HAVE_POSIX_REGEX -- success
Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:129 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
Found OpenMP_C: -fopenmp
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:129 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
Found OpenMP_CXX: -fopenmp
Found OpenMP: TRUE
CMake Warning at third_party/fbgemm/CMakeLists.txt:131 (message):
OpenMP found! OpenMP_C_INCLUDE_DIRS =
CMake Warning at third_party/fbgemm/CMakeLists.txt:224 (message):
==========
CMake Warning at third_party/fbgemm/CMakeLists.txt:225 (message):
CMAKE_BUILD_TYPE = Release
CMake Warning at third_party/fbgemm/CMakeLists.txt:226 (message):
CMAKE_CXX_FLAGS_DEBUG is -g
CMake Warning at third_party/fbgemm/CMakeLists.txt:227 (message):
CMAKE_CXX_FLAGS_RELEASE is -O3 -DNDEBUG
CMake Warning at third_party/fbgemm/CMakeLists.txt:228 (message):
==========
** AsmJit Summary **
ASMJIT_DIR=/home/psi/git/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=OFF
ASMJIT_TARGET_TYPE=STATIC
ASMJIT_DEPS=pthread;rt
ASMJIT_LIBS=asmjit;pthread;rt
ASMJIT_CFLAGS=-DASMJIT_STATIC
ASMJIT_PRIVATE_CFLAGS=-Wall;-Wextra;-Wconversion;-fno-math-errno;-fno-threadsafe-statics;-fno-semantic-interposition;-DASMJIT_STATIC
ASMJIT_PRIVATE_CFLAGS_DBG=
ASMJIT_PRIVATE_CFLAGS_REL=-O2;-fmerge-all-constants;-fno-enforce-eh-specs
INFOUSING OPENCL
Found Numa (include: /usr/include, library: /usr/lib/libnuma.so)
OpenCV found (/usr/lib/cmake/opencv4)
Found FFMPEG or Libav: /usr/lib/libavcodec.so;/usr/lib/libavformat.so;/usr/lib/libavutil.so;/usr/lib/libswscale.so;/usr/lib/libswresample.so, /usr/include
Found FFMPEG/LibAV libraries
Using third party subdirectory Eigen.
Found PythonInterp: /home/psi/.conda/envs/ai/bin/python (found suitable version "3.8.16", minimum required is "3.0")
NumPy ver. 1.23.0 found (include: /home/psi/.conda/envs/ai/lib/python3.8/site-packages/numpy/core/include)
Using third_party/pybind11.
pybind11 include dirs: /home/psi/git/pytorch/cmake/../third_party/pybind11/include
MPI support found
MPI compile flags:
MPI include path: /usr/include
MPI LINK flags path: -Wl,-rpath -Wl,/usr/lib -Wl,--enable-new-dtags
MPI libraries: /usr/lib/libmpi_cxx.so/usr/lib/libmpi.so
Found OpenMPI with CUDA support built.
Adding OpenMP CXX_FLAGS: -fopenmp
Will link against OpenMP libraries: /usr/lib/gcc/x86_64-pc-linux-gnu/10.3.0/libgomp.so;/usr/lib/libpthread.a
Autodetected CUDA architecture(s): 6.1 5.2
CMake Warning at cmake/External/nccl.cmake:69 (message):
Enabling NCCL library slimming
Call Stack (most recent call first):
cmake/Dependencies.cmake:1345 (include)
CMakeLists.txt:705 (include)
Converting CMAKE_CUDA_FLAGS to CUDA_NVCC_FLAGS:
CUDA_NVCC_FLAGS = -Xfatbin;-compress-all;-DONNX_NAMESPACE=onnx_torch;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_52,code=sm_52;-Xcudafe;--diag_suppress=cc_clobber_ignored,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl;--expt-relaxed-constexpr;--expt-extended-lambda
CUDA_NVCC_FLAGS_DEBUG = -g
CUDA_NVCC_FLAGS_RELEASE = -O3;-DNDEBUG
CUDA_NVCC_FLAGS_RELWITHDEBINFO = -O2;-g;-DNDEBUG
CUDA_NVCC_FLAGS_MINSIZEREL = -O1;-DNDEBUG
summary of build options:
Install prefix: /home/psi/git/pytorch/torch
Target system: Linux
Compiler:
C compiler: /usr/bin/gcc-10
CFLAGS:
Gloo build as SHARED library
MPI include path: /usr/include
MPI libraries: /usr/lib/libmpi_cxx.so/usr/lib/libmpi.so
CMake Warning (dev) at third_party/gloo/cmake/Cuda.cmake:109 (find_package):
Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables.
Run "cmake --help-policy CMP0074" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
CMake variable CUDAToolkit_ROOT is set to:
/opt/cuda
For compatibility, CMake is ignoring the variable.
Call Stack (most recent call first):
third_party/gloo/cmake/Dependencies.cmake:115 (include)
third_party/gloo/CMakeLists.txt:111 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
Found CUDAToolkit: /opt/cuda/include (found suitable version "11.8.89", minimum required is "7.0")
CUDA detected: 11.8.89
CMake Deprecation Warning at third_party/zstd/build/cmake/CMakeLists.txt:11 (CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 2.8.12 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
ZSTD_LEGACY_SUPPORT not defined!
ZSTD VERSION 1.3.2
Found PythonInterp: /home/psi/.conda/envs/ai/bin/python (found version "3.8.16")
Generated: /home/psi/git/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: /home/psi/git/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: /home/psi/git/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
******** Summary ********
CMake version : 3.26.4
CMake command : /usr/bin/cmake
System : Linux
C++ compiler : /usr/bin/g++-10
C++ compiler version : 10.3.0
CXX flags : -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Wnon-virtual-dtor
Build type : Release
Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
CMAKE_PREFIX_PATH : /opt/cuda
CMAKE_INSTALL_PREFIX : /home/psi/git/pytorch/torch
CMAKE_MODULE_PATH : /home/psi/git/pytorch/cmake/Modules;/home/psi/git/pytorch/cmake/public/../Modules_CUDA_fix
ONNX version : 1.14.0
ONNX NAMESPACE : onnx_torch
ONNX_USE_LITE_PROTO : OFF
USE_PROTOBUF_SHARED_LIBS : OFF
Protobuf_USE_STATIC_LIBS : ON
ONNX_DISABLE_EXCEPTIONS : OFF
ONNX_WERROR : OFF
ONNX_BUILD_TESTS : OFF
ONNX_BUILD_BENCHMARKS : OFF
Protobuf compiler :
Protobuf includes :
Protobuf libraries :
BUILD_ONNX_PYTHON : OFF
******** Summary ********
CMake version : 3.26.4
CMake command : /usr/bin/cmake
System : Linux
C++ compiler : /usr/bin/g++-10
C++ compiler version : 10.3.0
CXX flags : -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Wnon-virtual-dtor
Build type : Release
Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
CMAKE_PREFIX_PATH : /opt/cuda
CMAKE_INSTALL_PREFIX : /home/psi/git/pytorch/torch
CMAKE_MODULE_PATH : /home/psi/git/pytorch/cmake/Modules;/home/psi/git/pytorch/cmake/public/../Modules_CUDA_fix
ONNX version : 1.4.1
ONNX NAMESPACE : onnx_torch
ONNX_BUILD_TESTS : OFF
ONNX_BUILD_BENCHMARKS : OFF
ONNX_USE_LITE_PROTO : OFF
ONNXIFI_DUMMY_BACKEND :
Protobuf compiler :
Protobuf includes :
Protobuf libraries :
BUILD_ONNX_PYTHON : OFF
Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
Adding -DNDEBUG to compile flags
Compiling with MAGMA support
MAGMA INCLUDE DIRECTORIES: /usr/include
MAGMA LIBRARIES: /usr/lib/libmagma.so
MAGMA V2 check: 0
Could not find hardware support for NEON on this machine.
No OMAP3 processor on this machine.
No OMAP4 processor on this machine.
Found a library with LAPACK API (mkl).
disabling ROCM because NOT USE_ROCM is set
MIOpen not found. Compiling without MIOpen support
-- Will build oneDNN Graph
MKLDNN_CPU_RUNTIME = OMP
cmake version: 3.26.4
CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:36 (cmake_policy):
The OLD behavior for policy CMP0025 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
DNNL_TARGET_ARCH: X64
DNNL_LIBRARY_NAME: dnnl
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
Found OpenMP_C: -fopenmp
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
Found OpenMP_CXX: -fopenmp
Could NOT find Doxyrest (missing: DOXYREST_EXECUTABLE)
Found PythonInterp: /home/psi/.conda/envs/ai/bin/python (found suitable version "3.8.16", minimum required is "2.7")
Could NOT find Sphinx (missing: SPHINX_EXECUTABLE)
Enabled workload: TRAINING
Enabled primitives: ALL
Enabled primitive CPU ISA: ALL
Enabled primitive GPU ISA: ALL
Primitive cache is enabled
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:62 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:179 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:62 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:179 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
DNNL_GRAPH_BUILD_FOR_CI is set to be OFF
Compiling oneDNN Graph with CPU runtime OMP support
Compiling oneDNN Graph with GPU runtime NONE support
Graph compiler backend is disabled.
Set version definitions to /home/psi/git/pytorch/third_party/ideep/mkl-dnn/src/utils/verbose.cpp
Compiled partition cache is enabled
Found MKL-DNN: TRUE
-- <FindZVECTOR>
-- check z16
-- check z15
-- check z14
-- </FindZVECTOR>
Module support is disabled.
Version: 9.1.0
Build type: Release
CXX_STANDARD: 17
Required features: cxx_variadic_templates
Using Kineto with CUPTI support
Configuring Kineto dependency:
KINETO_SOURCE_DIR = /home/psi/git/pytorch/third_party/kineto/libkineto
KINETO_BUILD_TESTS = OFF
KINETO_LIBRARY_TYPE = static
CUDA_SOURCE_DIR = /opt/cuda
CUDA_INCLUDE_DIRS = /opt/cuda/include
CUPTI_INCLUDE_DIR = /opt/cuda/extras/CUPTI/include
CUDA_cupti_LIBRARY = /opt/cuda/extras/CUPTI/lib64/libcupti.so
Found CUPTI
Found PythonInterp: /home/psi/.conda/envs/ai/bin/python (found version "3.8.16")
INFO ROCM_SOURCE_DIR =
Kineto: FMT_SOURCE_DIR = /home/psi/git/pytorch/third_party/fmt
Kineto: FMT_INCLUDE_DIR = /home/psi/git/pytorch/third_party/fmt/include
INFO CUPTI_INCLUDE_DIR = /opt/cuda/extras/CUPTI/include
INFO ROCTRACER_INCLUDE_DIR = /include/roctracer
INFO DYNOLOG_INCLUDE_DIR = /home/psi/git/pytorch/third_party/kineto/libkineto/third_party/dynolog/
INFO IPCFABRIC_INCLUDE_DIR = /home/psi/git/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/
Configured Kineto
GCC 10.3.0: Adding gcc and gcc_s libs to link line
NUMA paths:
/usr/include
/usr/lib/libnuma.so
headers outputs:
sources outputs:
declarations_yaml outputs:
Using ATen parallel backend: OMP
CMake Deprecation Warning at third_party/sleef/CMakeLists.txt:91 (cmake_policy):
The OLD behavior for policy CMP0066 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
Found OpenMP_C: -fopenmp (found version "4.5")
Found OpenMP_CXX: -fopenmp (found version "4.5")
Found OpenMP: TRUE (found version "4.5")
Configuring build for SLEEF-v3.6.0
Target system: Linux-6.3.5-zen1-1-zen
Target processor: x86_64
Host system: Linux-6.3.5-zen1-1-zen
Host processor: x86_64
Detected C compiler: GNU @ /usr/bin/gcc-10
CMake: 3.26.4
Make program: /usr/bin/ninja
Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
Building shared libs : OFF
Building static test bins: OFF
MPFR : /usr/lib/libmpfr.so
MPFR header file in /usr/include
GMP : /usr/lib/libgmp.so
RT : /usr/lib/librt.a
FFTW3 : /usr/lib/libfftw3.so
OPENSSL : 1.1.1t
SDE : SDE_COMMAND-NOTFOUND
RUNNING_ON_TRAVIS :
COMPILER_SUPPORTS_OPENMP : 1
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: /home/psi/git/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: /home/psi/git/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: /home/psi/git/pytorch/build/aten/src/ATen/core/enum_tag.h
Generating sources for unboxing kernels /home/psi/.conda/envs/ai/bin/python;-m;torchgen.gen_executorch;--source-path=/home/psi/git/pytorch/test/edge/../../test/edge;--install-dir=/home/psi/git/pytorch/build/out;--tags-path=/home/psi/git/pytorch/test/edge/../../aten/src/ATen/native/tags.yaml;--aten-yaml-path=/home/psi/git/pytorch/test/edge/../../aten/src/ATen/native/native_functions.yaml;--use-aten-lib;--op-selection-yaml-path=/home/psi/git/pytorch/test/edge/../../test/edge/selected_operators.yaml;--custom-ops-yaml-path=/home/psi/git/pytorch/test/edge/../../test/edge/custom_ops.yaml
_GLIBCXX_USE_CXX11_ABI=1 is already defined as a cmake variable
CMake Warning (dev) at torch/CMakeLists.txt:379:
Syntax Warning in cmake code at column 107
Argument not separated from preceding token by whitespace.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at torch/CMakeLists.txt:379:
Syntax Warning in cmake code at column 115
Argument not separated from preceding token by whitespace.
This warning is for project developers. Use -Wno-dev to suppress it.
Autodetected CUDA architecture(s): 6.1 5.2
Using lib/python3.8/site-packages as python relative installation path
CMake Warning at CMakeLists.txt:1081 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
******** Summary ********
General:
CMake version : 3.26.4
CMake command : /usr/bin/cmake
System : Linux
C++ compiler : /usr/bin/g++-10
C++ compiler id : GNU
C++ compiler version : 10.3.0
Using ccache if found : ON
Found ccache : /usr/bin/ccache
CXX flags : -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow
Build type : Release
Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;IDEEP_USE_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;BUILD_NVFUSER
CMAKE_PREFIX_PATH : /opt/cuda
CMAKE_INSTALL_PREFIX : /home/psi/git/pytorch/torch
USE_GOLD_LINKER : OFF
TORCH_VERSION : 2.1.0
BUILD_CAFFE2 : OFF
BUILD_CAFFE2_OPS : OFF
BUILD_STATIC_RUNTIME_BENCHMARK: OFF
BUILD_TENSOREXPR_BENCHMARK: OFF
BUILD_NVFUSER_BENCHMARK: OFF
BUILD_BINARY : OFF
BUILD_CUSTOM_PROTOBUF : ON
Link local protobuf : ON
BUILD_DOCS : OFF
BUILD_PYTHON : ON
Python version : 3.8.16
Python executable : /home/psi/.conda/envs/ai/bin/python
Pythonlibs version : 3.8.16
Python library : /home/psi/.conda/envs/ai/lib/libpython3.8.so.1.0
Python includes : /home/psi/.conda/envs/ai/include/python3.8
Python site-packages: lib/python3.8/site-packages
BUILD_SHARED_LIBS : ON
CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
BUILD_TEST : ON
BUILD_JNI : OFF
BUILD_MOBILE_AUTOGRAD : OFF
BUILD_LITE_INTERPRETER: OFF
INTERN_BUILD_MOBILE :
TRACING_BASED : OFF
USE_BLAS : 1
BLAS : mkl
BLAS_HAS_SBGEMM :
USE_LAPACK : 1
LAPACK : mkl
USE_ASAN : OFF
USE_TSAN : OFF
USE_CPP_CODE_COVERAGE : OFF
USE_CUDA : ON
Split CUDA :
CUDA static link : OFF
USE_CUDNN : ON
USE_EXPERIMENTAL_CUDNN_V8_API: ON
CUDA version : 11.8
USE_FLASH_ATTENTION : ON
cuDNN version : 8.9.1
CUDA root directory : /opt/cuda
CUDA library : /usr/lib/libcuda.so
cudart library : /opt/cuda/lib64/libcudart.so
cublas library : /opt/cuda/lib64/libcublas.so
cufft library : /opt/cuda/lib64/libcufft.so
curand library : /opt/cuda/lib64/libcurand.so
cusparse library : /opt/cuda/lib64/libcusparse.so
cuDNN library : /opt/cudnn/lib/libcudnn.so
nvrtc : /opt/cuda/lib64/libnvrtc.so
CUDA include path : /opt/cuda/include
NVCC executable : /opt/cuda/bin/nvcc
CUDA compiler : /opt/cuda/bin/nvcc
CUDA flags : -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_61,code=sm_61 -gencode arch=compute_52,code=sm_52 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__
CUDA host compiler :
CUDA --device-c : OFF
USE_TENSORRT : OFF
USE_ROCM : OFF
BUILD_NVFUSER : ON
USE_EIGEN_FOR_BLAS :
USE_FBGEMM : ON
USE_FAKELOWP : OFF
USE_KINETO : ON
USE_FFMPEG : ON
USE_GFLAGS : OFF
USE_GLOG : OFF
USE_LEVELDB : OFF
USE_LITE_PROTO : OFF
USE_LMDB : OFF
USE_METAL : OFF
USE_PYTORCH_METAL : OFF
USE_PYTORCH_METAL_EXPORT : OFF
USE_MPS : OFF
USE_FFTW : OFF
USE_MKL : ON
USE_MKLDNN : ON
USE_MKLDNN_ACL : OFF
USE_MKLDNN_CBLAS : ON
USE_UCC : OFF
USE_ITT : ON
USE_NCCL : ON
USE_SYSTEM_NCCL : OFF
USE_NCCL_WITH_UCC : OFF
USE_NNPACK : ON
USE_NUMPY : ON
USE_OBSERVERS : ON
USE_OPENCL : ON
USE_OPENCV : ON
OpenCV version : 4.7.0
USE_OPENMP : ON
USE_TBB : OFF
USE_VULKAN : OFF
USE_PROF : OFF
USE_QNNPACK : ON
USE_PYTORCH_QNNPACK : ON
USE_XNNPACK : ON
USE_REDIS : OFF
USE_ROCKSDB : OFF
USE_ZMQ : OFF
USE_DISTRIBUTED : ON
USE_MPI : ON
USE_GLOO : ON
USE_GLOO_WITH_OPENSSL : OFF
USE_TENSORPIPE : ON
Public Dependencies : caffe2::mkl
Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;fbgemm;/opt/cuda/lib64/libOpenCL.so;/usr/lib/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_optflow;opencv_videoio;opencv_video;/usr/lib/libavcodec.so;/usr/lib/libavformat.so;/usr/lib/libavutil.so;/usr/lib/libswscale.so;/usr/lib/libswresample.so;ittnotify;fp16;/usr/lib/libmpi_cxx.so;/usr/lib/libmpi.so;caffe2::openmp;tensorpipe;gloo;libzstd_static;foxi_loader;rt;fmt::fmt-header-only;kineto;gcc_s;gcc;dl
Public CUDA Deps. : caffe2::cufft;caffe2::curand;caffe2::cublas
Private CUDA Deps. : torch::cudnn;__caffe2_nccl;tensorpipe_cuda;gloo_cuda;/opt/cuda/lib64/libcudart.so;CUDA::cusparse;CUDA::curand;CUDA::cufft;ATEN_CUDA_FILES_GEN_LIB
USE_COREML_DELEGATE : OFF
BUILD_LAZY_TS_BACKEND : ON
TORCH_DISABLE_GPU_ASSERTS : ON
Configuring done (24.6s)
Generating done (1.9s)
```
</details>
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (Arch Linux 10.3.0-2) 10.3.0
Clang version: 15.0.7
CMake version: version 3.26.4
Libc version: glibc-2.37
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.3.5-zen1-1-zen-x86_64-with-glibc2.17
Is CUDA available: N/A
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1060 6GB
GPU 1: NVIDIA GeForce GTX 970
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU(s) scaling MHz: 98%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 7399.70
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.23.0
[conda] numpy 1.23.0 pypi_0 pypi
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 4 |
2,362 | 103,271 |
Misaligned address error with torch.cat
|
high priority, triaged, oncall: pt2, module: inductor
|
### π Describe the bug
The following repro raises "RuntimeError: Triton Error [CUDA]: misaligned address".
https://gist.github.com/zou3519/0de4a64f5f612531133d04a8f59400eb
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main, A100
cc @ezyang @gchanan @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @davidberard98
| 6 |
2,363 | 103,258 |
Warn / deprecate / remove ProcessGroupNCCL._group_start(), _group_end() APIs
|
oncall: distributed, triaged
|
### π Describe the bug
As reported by @lw, these APIs are error prone and can easily result in correctness issues if the appropriate synchronization is not done by the user. As a result, we should discuss whether to remove these APIs (and just use the `coalescing_manager`), or add a warning saying users have to do explicit sync.
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 3 |
2,364 | 103,254 |
Unexpected High PCIe traffic in Distributed Training since PT 2
|
oncall: distributed, triaged, module: fsdp
|
### π Describe the bug
Since PT 2, we have noticed a significant amount of PCIe traffic between host and device, which we did not expect and did not observe with PT 1.x. This applies to both DDP and FSDP, and we observed as much as 2 GB/s to 4 GB/s of traffic throughout the whole training run (for our jobs with 3B+ models), which eventually harms training due to the continuous high load on the PCIe bus and the host.
To reproduce, I am using the same code from the official DDP tutorial: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#initialize-ddp-with-torch-distributed-run-torchrun
except:
1. modify model size to test different model sizes
2. add a training loop to observe the continuous traffic
```python
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
from tqdm import tqdm

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 100000000)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(100000000, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))

def demo_basic():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    print(f"Start running basic DDP example on rank {rank}.")

    # create model and move it to GPU with id rank
    device_id = rank % torch.cuda.device_count()
    model = ToyModel().to(device_id)
    print(f"\n--> model has {sum(p.numel() for p in model.parameters() if p.requires_grad)/1e6} Million params\n")
    ddp_model = DDP(model, device_ids=[device_id])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    for _ in tqdm(range(1000000000)):
        optimizer.zero_grad()
        outputs = ddp_model(torch.randn(20, 10))
        labels = torch.randn(20, 5).to(device_id)
        loss_fn(outputs, labels).backward()
        optimizer.step()

if __name__ == "__main__":  # launched with torchrun, as in the linked tutorial
    demo_basic()
```
PCIe traffic:
&nbsp; | Prior 2.0 (1.13.1+cu117) | Prior 2.0 (1.13.1+cu117) | 2.x (2.1.0.dev20230603+cu118) | 2.x (2.1.0.dev20230603+cu118)
-- | -- | -- | -- | --
model size | device to host (DtoH) | host to device (HtoD) | device to host (DtoH) | host to device (HtoD)
1.6B | 8.8M/s | 21M/s | 122M/s | 950M/s
160M | 10.8M/s | 24.5M/s | 122M/s | 950M/s
16M | 21M/s | 48M/s | 73M/s | 480M/s
1.6M | 9M/s | 45M/s | 25M/s | 170M/s
0.16M | 8.5M/s | 41.5M/s | 7M/s | 36M/s
Some screenshots (for 1.6B model):
PT 1.x
<img width="1692" alt="image" src="https://github.com/pytorch/pytorch/assets/20955448/681a33fe-eb13-4619-9656-27a93ce08f16">
PT 2.x
<img width="1697" alt="image" src="https://github.com/pytorch/pytorch/assets/20955448/bb73ab7a-4e79-4668-909b-b58124ceb7c2">
Some other notes from our experience:
1. we can confirm the traffic comes only from the backward path (i.e. `loss.backward()`)
2. for DDP, we observed very little to zero PCIe traffic in PT 1.x (as expected), but very high PCIe traffic in PT 2.x
3. for FSDP, we observed high PCIe traffic in PT 1.x as well (which is unexpected, since we did not turn on any offloading in FSDP), and in PT 2.x the traffic is even higher (roughly 2X for the same training)
4. we were able to bring PCIe traffic down to almost zero in all these experiments by turning on CUDA_LAUNCH_BLOCKING. However, that is only useful for debugging, as turning on blocking slows down the training.
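(For anyone trying to reproduce the observation: one way to watch host-device PCIe throughput live, not necessarily the tool behind the numbers above, is `nvidia-smi dmon`.)
```
# Print per-GPU PCIe Rx/Tx throughput (MB/s) once per second while the training script runs.
nvidia-smi dmon -s t -d 1
```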
### Versions
```
PyTorch version: 2.1.0.dev20230603+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.48.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Xeon Processor (Cascadelake)
Stepping: 5
CPU MHz: 2400.000
BogoMIPS: 4800.00
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 160 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.0rc1
[pip3] pytorch-triton==2.1.0+9820899b38
[pip3] torch==2.1.0.dev20230603+cu118
[pip3] torchvision==0.16.0.dev20230603+cu118
[conda] numpy 1.25.0rc1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+9820899b38 pypi_0 pypi
[conda] torch 2.1.0.dev20230603+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230603+cu118 pypi_0 pypi
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @lessw2020 @HamidShojanazeri
| 27 |
2,365 | 103,253 |
Issue-103101: Refactor dimensionality check in tuned_mm_plus_mm to pattern matching phase.
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes #103101
I am a new contributor and this is my first attempt at solving the issue. Looking forward to feedback.
Thanks,
Sid
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @aakhundov
| 8 |
2,366 | 103,250 |
torch.jit.script mean(keepdim=True) segfaults on GPU
|
oncall: jit
|
### π Describe the bug
When wrapping a `torch.nn.Module` with `torch.jit.script`, calling `torch.mean` with `keepdim=True` leads to a crash in backpropagation. Calling `torch.mean` followed by `unsqueeze` does not.
Crash message:
`RuntimeError: invalid vector subscript`
Reproducing code
```
#!/usr/bin/env python
import torch

N_Floats=50
Batch_Size = 8
Linear_Width = 512
N_Actions = 12

class Agent(torch.nn.Module):
    def __init__(self,Linear_Width:int):
        super().__init__()
        self.A_head = torch.nn.Linear(N_Floats,N_Actions)

    def forward(self, inputs):
        A = self.A_head(inputs)
        #Q = V + A - A.mean(dim=-1).unsqueeze(-1) #Works
        Q = A - A.mean(dim=-1,keepdim=True) #Crashes with "invalid vector subscript"
        return Q

def learn_on_batch(model,optimizer):
    Action_Indexes = torch.randint(low=0,high=N_Actions,size=(Batch_Size,1),device='cuda',dtype=torch.int64)
    Inputs = torch.rand(size=(Batch_Size,N_Floats),device='cuda',dtype=torch.float32)
    target = torch.ones(Batch_Size,device='cuda')
    outputs = model(Inputs)
    outputs = torch.gather(outputs, 1, Action_Indexes.type(torch.int64))
    loss = torch.nn.functional.mse_loss(outputs,target)
    total_loss = torch.sum(loss)
    optimizer.zero_grad(set_to_none=True)
    total_loss.backward()
    optimizer.step()

if __name__ == '__main__':
    model = torch.jit.script(Agent(Linear_Width)).to("cuda")
    optimizer = torch.optim.Adam(model.parameters())
    while True:
        learn_on_batch(model,optimizer)
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:51:25) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 535.98
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3401
DeviceID=CPU0
Family=107
L2CacheSize=8192
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3401
Name=AMD Ryzen 9 5950X 16-Core Processor
ProcessorType=3
Revision=8448
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.2
[pip3] torchrl==0.1.1
[conda] blas 2.117 mkl conda-forge
[conda] blas-devel 3.9.0 17_win64_mkl conda-forge
[conda] libblas 3.9.0 17_win64_mkl conda-forge
[conda] libcblas 3.9.0 17_win64_mkl conda-forge
[conda] liblapack 3.9.0 17_win64_mkl conda-forge
[conda] liblapacke 3.9.0 17_win64_mkl conda-forge
[conda] mkl 2022.1.0 h6a75c08_874 conda-forge
[conda] mkl-devel 2022.1.0 h57928b3_875 conda-forge
[conda] mkl-include 2022.1.0 h6a75c08_874 conda-forge
[conda] numpy 1.24.3 py310hd02465a_0 conda-forge
[conda] pytorch 2.0.1 py3.10_cuda11.8_cudnn8_0 pytorch
[conda] pytorch-cuda 11.8 h24eeafa_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchrl 0.1.1 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,367 | 103,243 |
torch.cuda.memory_reserved always returns 0 bytes
|
module: cuda, triaged
|
## Issue description
I've been observing an issue with the `torch.cuda.memory_reserved("cuda:0")` function in PyTorch. Despite having a model training on the GPU, it always returns 0.
## Code example
Train a model on GPU 0, then run:
```
import torch
print(torch.cuda.memory_reserved("cuda:0"))
```

## System Info
python collect_env.py
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 5000
GPU 1: Quadro RTX 5000
GPU 2: Quadro RTX 5000
GPU 3: Quadro RTX 5000
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2654.601
CPU max MHz: 3200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4400.00
Virtualisation: VT-x
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 20 MiB
L3 cache: 27.5 MiB
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.0
[pip3] pytorch-lightning==1.8.5.post0
[pip3] torch==1.13.1+cu117
[pip3] torchmetrics==0.11.0
[pip3] torchsampler==0.1.2
[pip3] torchvision==0.14.1+cu117
cc @ptrblck
| 1 |
2,368 | 103,241 |
Image Processing with Pytorch
|
triaged
|
### π The feature, motivation and pitch
Hello, I would like to use PyTorch in image processing for calculating the similarity between two photos. I think PyTorch is really well suited for this. I couldn't find an existing issue about it, so I am opening this one. I would like to submit code as an example of PyTorch usage in this area, and I would like to know how and where I can do this.
### Alternatives
_No response_
### Additional context
My code calculates the similarity between a reference image and a target image. For the data, I have some example pictures, but the photos may change as needed.
| 1 |
2,369 | 103,231 |
Benchmark --quick with huggingface runs almost indefinitely on CPU
|
triaged, oncall: pt2
|
Minimal repro is the same as triggering a run on `Albert`.
```
python benchmarks/dynamo/huggingface.py --performance --float32 -dcpu --output=tmp.csv --inference -n5 --inductor --no-skip --dashboard -k Albert --batch-size 1
```
Not a bug per se; it might just be that slow on CPU. However, it does hinder the --quick sanity check on CPU.
CPU: Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
2,370 | 103,222 |
compilation fails `error: invalid argument '-std=c++17' not allowed with 'C'`
|
module: build, triaged
|
### π Describe the bug
I can't compile; I get the error `error: invalid argument '-std=c++17' not allowed with 'C'`.
### Error logs
[log.txt](https://github.com/pytorch/pytorch/files/11682929/log.txt)
### Minified repro
_No response_
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Artix Linux (x86_64)
GCC version: (GCC) 13.1.1 20230429
Clang version: 15.0.7
CMake version: version 3.26.4
Libc version: glibc-2.37
Python version: 3.11.3 (main, Apr 7 2023, 00:46:44) [GCC 12.2.1 20230201] (64-bit runtime)
Python platform: Linux-6.3.4-artix1-1-x86_64-with-glibc2.37
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 3600 6-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 70%
CPU max MHz: 4208.2031
CPU min MHz: 2200.0000
BogoMIPS: 7189.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[conda] Could not collect
```
cc @malfet @seemethere @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
2,371 | 103,221 |
[help] did torch.distributed.launch can be applied on k8s cluster with pytorch-operator
|
oncall: distributed, module: elastic
|
### π Describe the bug
Hi, I have been successful in starting distributed training via torch.distributed.launch on bare-metal servers.
But recently I need to use a k8s cluster, and "kubectl apply -f operator.yaml" on Kubernetes is the recommended approach.
I wonder whether pytorch-operator or other training operators support torch.distributed.launch?
pytorch-operator does not pass args like "ADDR_MASTER" or "NODE_RANK" etc. to start distributed training, yet these args have to be manually initialized and passed to torch.distributed.launch, which really confuses me.
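For concreteness, this is roughly what I mean by wiring the args manually (only a sketch; `train.py` is a placeholder, and which environment variables the operator actually injects into each pod is an assumption that needs to be checked against the operator docs):
```
# Assumed per-pod environment: MASTER_ADDR, MASTER_PORT, WORLD_SIZE (number of nodes), RANK (node index)
python -m torch.distributed.launch \
    --nnodes="${WORLD_SIZE}" \
    --node_rank="${RANK}" \
    --master_addr="${MASTER_ADDR}" \
    --master_port="${MASTER_PORT}" \
    --nproc_per_node=8 \
    train.py
```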
Thank you in advance~ any hints would be helpful to me
### Versions
kubernetes cluster
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @dzhulgakov
| 2 |
2,372 | 103,213 |
Undeterministic behavior in testing in dynamo.
|
triaged, oncall: pt2, module: dynamo
|
### π Describe the bug
TorchDynamo uses object IDs to figure out which functions/modules should be traced (https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/allowed_functions.py#L283).
This is problematic when running unit tests (python test/dynamo/test_misc.py) where one test uses the allow_in_graph API, which directly adds an entry to this global allowed_functions tracker (https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/__init__.py#L90). Once the test that uses allow_in_graph finishes, we do not delete the object id from the tracker. As a result, another test that uses a different object with the same object id will incorrectly be assumed to be in the allowlist.
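A minimal sketch of the id-reuse hazard in plain Python (not Dynamo's actual bookkeeping):
```python
allowed_ids = set()

def fake_allow_in_graph(fn):
    allowed_ids.add(id(fn))   # keyed by object id, never removed

def helper_a():
    pass

fake_allow_in_graph(helper_a)
del helper_a                  # "test 1" finishes; the stale id stays behind

def helper_b():               # "test 2" defines an unrelated function
    pass

# CPython may reuse the freed object's address, so this can spuriously print True.
print(id(helper_b) in allowed_ids)
```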
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @aakhundov
| 4 |
2,373 | 103,212 |
PyTorch can not be compiled with MKLDNN if system compiler is clang
|
module: build, triaged, module: mkldnn
|
### π Describe the bug
I'm trying to move the clang configs to use clang for all building and linking, but it breaks `MKLDNN`'s ability to detect the presence of OpenMP on the system; see https://github.com/pytorch/pytorch/actions/runs/5194566413/jobs/9366365622 as an example:
```2023-06-07T00:46:50.4545541Z -- DNNL_LIBRARY_NAME: dnnl
2023-06-07T00:46:52.0985234Z [33mCMake Warning (dev) at /opt/conda/envs/py_3.9/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
2023-06-07T00:46:52.0985917Z The package name passed to `find_package_handle_standard_args` (OpenMP_C)
2023-06-07T00:46:52.0986252Z does not match the name of the calling package (OpenMP). This can lead to
2023-06-07T00:46:52.0986685Z problems in calling code that expects `find_package` result variables
2023-06-07T00:46:52.0987045Z (e.g., `_FOUND`) to follow a certain pattern.
2023-06-07T00:46:52.0987305Z Call Stack (most recent call first):
2023-06-07T00:46:52.0987709Z cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
2023-06-07T00:46:52.0988470Z third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
2023-06-07T00:46:52.0989047Z third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
2023-06-07T00:46:52.0989682Z This warning is for project developers. Use -Wno-dev to suppress it.
2023-06-07T00:46:52.0990173Z [0m
2023-06-07T00:46:52.0990533Z -- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES)
2023-06-07T00:46:52.0991065Z [33mCMake Warning (dev) at /opt/conda/envs/py_3.9/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
2023-06-07T00:46:52.0991481Z The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
2023-06-07T00:46:52.0991815Z does not match the name of the calling package (OpenMP). This can lead to
2023-06-07T00:46:52.0992122Z problems in calling code that expects `find_package` result variables
2023-06-07T00:46:52.0992446Z (e.g., `_FOUND`) to follow a certain pattern.
2023-06-07T00:46:52.0992811Z Call Stack (most recent call first):
2023-06-07T00:46:52.0993327Z cmake/Modules/FindOpenMP.cmake:584 (find_package_handle_standard_args)
2023-06-07T00:46:52.0993838Z third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:69 (find_package)
2023-06-07T00:46:52.0994243Z third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
2023-06-07T00:46:52.0994658Z This warning is for project developers. Use -Wno-dev to suppress it.
2023-06-07T00:46:52.0995043Z [0m
2023-06-07T00:46:52.0995632Z -- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
2023-06-07T00:46:52.0996131Z -- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND)
2023-06-07T00:46:52.0996569Z [31mCMake Error at third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:118 (message):
2023-06-07T00:46:52.0996907Z OpenMP library could not be found. Proceeding might lead to highly
2023-06-07T00:46:52.0997198Z sub-optimal performance.
2023-06-07T00:46:52.0997419Z Call Stack (most recent call first):
2023-06-07T00:46:52.0997751Z third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
2023-06-07T00:46:52.0997941Z
2023-06-07T00:46:52.0998025Z [0m
```
`oneDNN` should either gracefully handle the failure or, better, use the same method for locating the OpenMP runtime that PyTorch uses.
### Versions
CI
cc @seemethere @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 2 |
2,374 | 103,194 |
[inductor] test_fft_real_inputs fails with dynamic shapes
|
good first issue, triaged, oncall: pt2, module: dynamic shapes, module: inductor
|
### 🐛 Describe the bug
```python
python test/inductor/test_torchinductor_dynamic_shapes.py -k fft
python test/inductor/test_torchinductor_codegen_dynamic_shapes.py -k fft
```
(you will need to un-xfail the tests in `test_torchinductor_(codegen)?_dynamic_shapes.py` files)
Fails with this error:
```
File "/data/users/dberard/pytorch/torch/_functorch/aot_autograd.py", line 3233, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/data/users/dberard/pytorch/torch/_functorch/aot_autograd.py", line 2073, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/data/users/dberard/pytorch/torch/_functorch/aot_autograd.py", line 2253, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/data/users/dberard/pytorch/torch/_functorch/aot_autograd.py", line 1527, in aot_dispatch_base
compiled_fw = compiler(fw_module, flat_args)
File "/data/users/dberard/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/dberard/pytorch/torch/_inductor/compile_fx.py", line 763, in fw_compiler_base
return inner_compile(
File "/data/users/dberard/pytorch/torch/_dynamo/repro/after_aot.py", line 80, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/data/users/dberard/pytorch/torch/_inductor/debug.py", line 220, in inner
return fn(*args, **kwargs)
File "/home/dberard/local/miniconda3/envs/pytorch/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/dberard/pytorch/torch/_inductor/compile_fx.py", line 47, in newFunction
return old_func(*args, **kwargs)
File "/data/users/dberard/pytorch/torch/_inductor/compile_fx.py", line 316, in compile_fx_inner
graph.run(*example_inputs)
File "/data/users/dberard/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/dberard/pytorch/torch/_inductor/graph.py", line 403, in run
return super().run(*args)
File "/data/users/dberard/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/data/users/dberard/pytorch/torch/_inductor/graph.py", line 643, in run_node
result = fallback_handler(n.target, add_to_fallback_set=False)(
File "/data/users/dberard/pytorch/torch/_inductor/lowering.py", line 1127, in handler
TensorBox.create, ir.FallbackKernel.create(kernel, *args, **kwargs)
File "/data/users/dberard/pytorch/torch/_inductor/ir.py", line 3321, in create
) = cls.process_kernel(kernel, *args, **kwargs)
File "/data/users/dberard/pytorch/torch/_inductor/ir.py", line 2711, in process_kernel
example_args.append(ir_node_to_tensor(x, guard_shape=True))
File "/data/users/dberard/pytorch/torch/_inductor/ir.py", line 200, in ir_node_to_tensor
t = torch.empty_strided(
torch._dynamo.exc.BackendCompilerFailed: backend='compile_fx_wrapper' raised:
RuntimeError: aten/src/ATen/RegisterCUDA.cpp:7002: SymIntArrayRef expected to contain only concrete integers
```
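A standalone repro sketch outside the test suite (illustrative only, not the exact test body):
```python
import torch


def f(x):
    # real-valued input exercises the FFT fallback kernel
    return torch.fft.fftn(x)


x = torch.randn(16, 16, device="cuda")
compiled = torch.compile(f, backend="inductor", dynamic=True)
compiled(x)  # expected to hit the SymIntArrayRef error above once shapes become symbolic
```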
### Versions
main branch / after https://github.com/pytorch/pytorch/pull/103183 lands
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @aakhundov
| 2 |
2,375 | 103,189 |
(fsdp - maybe a bug) SHARDED_STATE_DICT returns tensor with no data
|
triaged, module: fsdp
|
### 🐛 Describe the bug
The huggingface `T5ForConditionalGeneration` model has the embedding layer named `shared` on the top module, and passes a reference to it to the `T5Stack` ctor. In `T5Stack`, the passed embedding is stored as `embed_tokens`.
So `t5model.shared` and `t5model.encoder.embed_tokens` point to the same `nn.Embedding()` layer. Here are the two code references:
1. [`model.shared`](https://github.com/huggingface/transformers/blob/12298cb65c7e9d615b749dde935a0b4966f4ae49/src/transformers/models/t5/modeling_t5.py#L1541)
1. [`model.encoder.embed_tokens`](https://github.com/huggingface/transformers/blob/12298cb65c7e9d615b749dde935a0b4966f4ae49/src/transformers/models/t5/modeling_t5.py#L879)
and an excerpt of the code:
```
class T5ForConditionalGeneration(...): # nn.Module
def __init__(...):
# ... omitted...
self.shared = nn.Embedding(...)
self.encoder = T5Stack(encoder_config, self.shared)
# ... omitted ...
class T5Stack(...): # nn.Module
def __init__(..., embed_tokens):
# ... omitted ...
self.embed_tokens = embed_tokens
```
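A quick check confirming that the two attributes reference the same `Parameter` (using the same `T5Config` as the repro script below):
```python
from transformers import T5ForConditionalGeneration, T5Config

m = T5ForConditionalGeneration(
    T5Config(d_ff=512, d_kv=32, d_model=128, num_heads=2, num_layers=4, vocab_size=32128)
)
print(m.shared.weight is m.encoder.embed_tokens.weight)  # True: the embedding is shared
```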
Wrapping this T5 model with FSDP (`ShardingStrategy.FULL_SHARD`, `auto_wrap_policy=transformer_auto_wrap_policy`) and inspecting the details of the embedding layers, I'm observing that with `StateDictType.SHARDED_STATE_DICT`:
1. `model.state_dict()["shared.weight"]` is a `ShardedTensor` (as expected)
2. `model.state_dict()["encoder.embed_tokens.weight"]` is a Tensor with no data (see repro script's output below)
I'm guessing this has to do with a warning on the FSDP docs page about wrapping (screenshot omitted):
But I'm wrapping the `T5Block` (see my repro script below), not `T5Stack`, in an FSDP unit, so the outer module (`T5ForConditionalGeneration`) and `T5Block` belong to the same (root) FSDP unit.
Is this expected behavior (am I misunderstanding something), is it a bug, or should the documentation be updated?
Repro Script
--------------
To run:
```
$ pip install transformers tabulate
$ torchrun --rdzv_backend c10d --rdzv_id 1 --nnodes 1 --nproc_per_node 2 repro_script.py sharded
```
Save this as `repro_script.py`
```python
import functools
import os
import sys
import torch
from tabulate import tabulate
import torch.distributed as dist
from torch.distributed._shard.sharded_tensor import ShardedTensor
from torch.distributed.elastic.multiprocessing.errors import record
from torch.distributed.fsdp import (
FullyShardedDataParallel,
ShardingStrategy,
StateDictType,
)
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import T5ForConditionalGeneration, T5Config
from transformers.models.t5.modeling_t5 import T5Block
@record
def main():
sdtype = sys.argv[1]
state_dict_type = {
"full": StateDictType.FULL_STATE_DICT,
"local": StateDictType.LOCAL_STATE_DICT,
"sharded": StateDictType.SHARDED_STATE_DICT,
}[sdtype]
rank = int(os.environ["RANK"])
sharding_strategy = ShardingStrategy.FULL_SHARD
t5_model = T5ForConditionalGeneration(
T5Config(
d_ff=512,
d_kv=32,
d_model=128,
is_encoder_decoder=True,
model_type="t5",
n_positions=512,
num_heads=2,
num_layers=4,
vocab_size=32128,
)
)
fsdp_model = FullyShardedDataParallel(
t5_model,
auto_wrap_policy=functools.partial(
transformer_auto_wrap_policy,
transformer_layer_cls={T5Block},
),
sharding_strategy=sharding_strategy,
device_id=device_id,
)
table = []
layer_names = ["shared.weight", "encoder.embed_tokens.weight"]
with FullyShardedDataParallel.state_dict_type(
fsdp_model, state_dict_type=state_dict_type
):
for layer_name in layer_names:
state_dict = fsdp_model.state_dict()
layer = state_dict.get(layer_name)
tensor_type = type(layer)
row = {
"rank": rank,
"sharding strategy": sharding_strategy.name,
"state_dict_type": state_dict_type.name,
"layer": layer_name,
}
if layer is None:
continue
row.update(
{
"dtype": layer.dtype,
"shape": layer.shape,
"tensor type": tensor_type.__qualname__,
}
)
if tensor_type != ShardedTensor:
row["storage"] = layer.untyped_storage()
table.append(row)
if rank == 0:
print(tabulate(table, headers="keys", stralign="left"))
if __name__ == "__main__":
device_id = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(device_id)
dist.init_process_group("nccl")
try:
main()
finally:
dist.destroy_process_group()
```
Output
----------
```
rank sharding strategy state_dict_type layer dtype shape tensor type storage
------ ------------------- ------------------ --------------------------- ------------- ------------------------ ------------- -------------------------------------------------------
0 FULL_SHARD SHARDED_STATE_DICT shared.weight torch.float32 torch.Size([32128, 128]) ShardedTensor
0 FULL_SHARD SHARDED_STATE_DICT encoder.embed_tokens.weight torch.float32 torch.Size([32128, 128]) Tensor [torch.storage.UntypedStorage(device=cuda:0) of size 0]
```
### Versions
```
kiuk@ip-10-0-61-167% python collect_env.py ~/tmp
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: 11.1.0 (Amazon Linux 2 11.1.0-1.amzn2.0.2)
CMake version: version 3.26.1
Libc version: glibc-2.26
Python version: 3.9.16 (main, Mar 31 2023, 16:44:31) [GCC 7.3.1 20180712 (Red Hat 7.3.1-15)] (64-bit runtime)
Python platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1187.921
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==0.991
[pip3] mypy-boto3-batch==1.26.103
[pip3] mypy-boto3-cloudwatch==1.26.127
[pip3] mypy-boto3-ec2==1.26.103
[pip3] mypy-boto3-iam==1.26.97
[pip3] mypy-boto3-s3==1.26.99
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchsnapshot-nightly==2023.3.15
[pip3] torchx-nightly==2023.5.9
[pip3] triton==2.0.0
[conda] No relevant packages
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 1 |
2,376 | 103,173 |
[RFC] Emit better Telemetry in PyTorch
|
feature, module: logging, triaged, oncall: pt2
|
### 🚀 The feature, motivation and pitch
**Summary**
The existing PyTorch 2.0 logging needs to be enhanced to emit better Telemetry. For this purpose, this document proposes the following changes:
1. Emit function parameters along with the function name using [_log_api_usage_once](https://github.com/pytorch/pytorch/blob/7b47cd0a6c206ef1de1ad74881942352089ddc72/torch/csrc/Module.cpp#L1420)
2. Control dumping dynamo compile metrics using compile_times via an environment variable PYTORCH_DYNAMO_COMPILE_TIMES_PERIOD.
**Motivation**
Enhancing the logging will highlight the features actively used by the Pytorch Users. Controlling the dumping of dynamo compile metrics using an environment variable will enable the feature without modifying the User training/inference script.
**Proposed Implementation**
**PyTorch API level logging**
PyTorch uses Google's logging library ([glog](https://github.com/google/glog#severity-levels)) for logging in C++. The environment variable [PYTORCH_API_USAGE_STDERR](https://github.com/pytorch/pytorch/blob/main/c10/util/Logging.cpp#L93)
can be used to log what APIs are used. This corresponds to the API [_log_api_usage_once](https://github.com/pytorch/pytorch/blob/7b47cd0a6c206ef1de1ad74881942352089ddc72/torch/csrc/Module.cpp#L1420) in python and C10_LOG_API_USAGE_ONCE in C++. The current implementation only provides insight into which APIs were used, leaving out their arguments.
We would like to capture the parameters of the APIs that we want to log without compromising the users' IP and privacy. We will modify the logging of the APIs below to include their parameters. We will only include arguments of type bool, enum and constrained strings.
```
def format_api(api_name, param_dict):
# format_string will be of the format <API><seperator:-><param_seperator:,>
    format_string = os.environ.get('PYTORCH_LOG_API_FORMAT')
    if format_string is None:
        format_string = "<API><seperator: ><param_seperator:,>"
separator_start = "<seperator:"
separator_end = ">"
param_separator_start = "<param_seperator:"
param_separator_end = ">"
#default values
separator = " "
param_separator = ","
separator_index_start = format_string.find(separator_start)
separator_index_end = format_string.find(separator_end, separator_index_start)
param_separator_index_start = format_string.find(param_separator_start)
param_separator_index_end = format_string.find(param_separator_end, param_separator_index_start)
if separator_index_start != -1 and separator_index_end != -1:
separator = format_string[separator_index_start + len(separator_start):separator_index_end]
if param_separator_index_start != -1 and param_separator_index_end != -1:
param_separator = format_string[param_separator_index_start + len(param_separator_start):param_separator_index_end]
formatted_api = format_string.replace("<API>", api_name)
formatted_api = formatted_api.replace(f"{separator_start}{separator}{separator_end}", separator)
formatted_api = formatted_api.replace(f"{param_separator_start}{param_separator}{param_separator_end}", param_separator)
param_string = param_separator.join([f"{key}{param_separator}{value}" for key, value in param_dict.items()])
formatted_api = f"{formatted_api}{separator}{param_string}"
return formatted_api
```
We can use a custom format_api function that takes the API name and a parameter dict and returns the formatted string. The format string can be set using the environment variable **PYTORCH_LOG_API_FORMAT**.
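An illustrative call, assuming `format_api` from the snippet above is in scope (the exact output string depends on the separator parsing above):
```python
import torch

param_dict = {"backend": "inductor", "fullgraph": "False", "mode": "None"}
formatted = format_api("torch.compile", param_dict)
# formatted now contains the API name followed by the flattened key/value pairs
torch._C._log_api_usage_once(formatted)
```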
**[torch.compile](https://github.com/pytorch/pytorch/blob/7b47cd0a6c206ef1de1ad74881942352089ddc72/torch/__init__.py#L1574)**
```
param_dict = {}
if isinstance(backend, str):
    param_dict["backend"] = backend
param_dict["fullgraph"] = str(fullgraph)
param_dict["dynamic"] = str(dynamic)
param_dict["mode"] = str(mode)
param_dict["disable"] = str(disable)
torch._C._log_api_usage_once(format_api("torch.compile", param_dict))

param_dict = {"sync_module_states": str(sync_module_states), "forward_prefetch": str(forward_prefetch)}
torch._C._log_api_usage_once(format_api("torch.distributed.fully_shard", param_dict))
```
**[torch._dynamo.optimize](https://github.com/pytorch/pytorch/blob/7b47cd0a6c206ef1de1ad74881942352089ddc72/torch/_dynamo/eval_frame.py#LL532C35-L532C57)**
```
param_dict = {"nopython": str(nopython), "disable": str(disable), "dynamic": str(dynamic)}
torch._C._log_api_usage_once(format_api("torch._dynamo.optimize", param_dict))
```
**[torch._dynamo.export](https://github.com/pytorch/pytorch/blob/7b47cd0a6c206ef1de1ad74881942352089ddc72/torch/_dynamo/eval_frame.py#L832)**
```
param_dict = {"aten_graph": str(aten_graph), "pre_autograd": str(pre_autograd), "tracing_mode": tracing_mode, "assume_static_by_default": str(assume_static_by_default), "functionalize": str(functionalize)}
torch._C._log_api_usage_once(format_api("torch._dynamo.export", param_dict))
```
**Compilation runtime**
TorchDynamo uses a function that can be used as a decorator to [capture](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/utils.py#L171) the breakdown of compilation time in terms of graph capture and backend compilation times. However, this breakdown is only made available with an explicit call to a [summary API](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/utils.py#L201) at the end of training/serving. As a result, this information is not available without customer intervention.
The API can be automatically invoked after training/inference. This can be achieved by registering these APIs using [atexit](https://docs.python.org/3/library/atexit.html) exit handler in the torch._dynamo.__init__.py.
`atexit.register(lambda: compile_times(str, True))`
We can also dump this compile_times info periodically using the [Timeloop](https://pypi.org/project/timeloop/) library, invoking the compile_times API at an interval (in seconds) set via the environment variable **PYTORCH_DYNAMO_COMPILE_TIMES_PERIOD**. This helps live hosted endpoints where the script never exits. The default value is 0 and is overridden by the user-provided value.
```
#Pseudo code
import os
import time
from timeloop import Timeloop
from datetime import timedelta
import torch._dynamo.utils
tl = Timeloop()
period = os.environ.get('PYTORCH_DYNAMO_COMPILE_TIMES_PERIOD', '0')
period = int(period) if period.isdigit() else 0
@tl.job(interval=timedelta(seconds=period))
def check_and_invoke_compile_times():
torch._dynamo.utils.compile_times()
if __name__ == "__main__":
if period > 0:
tl.start(block=True)
```
To prevent redundant dumping of previously dumped metrics in the previous compile_times invocation, it is necessary to maintain an index indicating the point until which the compile times of specific APIs have already been logged. This can be achieved by enhancing the existing compilation_metrics dictionary to include a sub-dictionary for each API. The sub-dictionary will contain the values under the key 'values' and the new index that is pending logging under the key 'log_index'.
By implementing this modification, we can effectively track the progress of logging the compile times for individual APIs and avoid re-dumping already logged metrics.
We will modify the compile_times API as follows:
```
#Pseudo code
def dynamo_timed(original_function=None, phase_name=None):
def dynamo_timed_inner(func):
@wraps(func)
def time_wrapper(*args, **kwargs):
key = func.__qualname__
if key not in compilation_metrics:
compilation_metrics[key] = {values:[], index:0}
with torch.profiler.record_function(f"{key} (dynamo_timed)"):
t0 = time.time()
r = func(*args, **kwargs)
time_spent = time.time() - t0
# print(f"Dynamo timer: key={key}, latency={latency:.2f} sec")
compilation_metrics[key]["values"].append(time_spent)
if phase_name:
frame_key = str(curr_frame)
if frame_key not in frame_phase_timing:
frame_phase_timing[frame_key] = {}
assert (
phase_name not in frame_phase_timing[frame_key]
), f"Duplicate phase name {phase_name} for frame {frame_key}"
frame_phase_timing[frame_key][phase_name] = time_spent
return r
return time_wrapper
if original_function:
return dynamo_timed_inner(original_function)
return dynamo_timed_inner
def compile_times(repr="str", aggregate=False):
"""
Get metrics about torchdynamo frontend/backend compilation times.
Accumulates information from functions tagged with `@dynamo_timed`.
repr='str' returns a printable string for user interaction, and 'csv'
returns headers, rows which can be logged for output
aggregate causes values from multiple compilations (e.g. split graphs)
to be accumulated into one value. If false, expect more than one value
per metric.
"""
def fmt_fn(values, item_fn=lambda x: x):
if aggregate:
return item_fn(sum(values))
return ", ".join(map(item_fn, values))
if repr == "str":
rows = [
(k, fmt_fn(compilation_metrics[k]["values"] if aggregate else compilation_metrics[k]["values"][compilation_metrics[k]["index"]:], item_fn=lambda x: f"{x:.4f}"))
for k in compilation_metrics
]
out = "TorchDynamo compilation metrics:\n"
out += tabulate(rows, headers=("Function", "Runtimes (s)"))
return out
elif repr == "csv":
values = [
fmt_fn(v, item_fn=lambda x: f"{x:.6f}")
for v in compilation_metrics.values()
]
headers = list(compilation_metrics.keys())
return headers, values
```
**Metrics**
This feature will enable PyTorch developers to understand the % usage of different PyTorch features.
**Drawbacks**
This will not break any existing features.
**Alternatives**
NA
### Alternatives
NA
### Additional context
NA
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 8 |
2,377 | 103,169 |
breakpoint() in torch.compile region behaves oddly
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
I expect breakpoint() to induce a graph break and let me inspect the frame
```
import torch
@torch.compile(backend="eager")
def f(x):
breakpoint()
return x + 1
f(torch.randn(1))
```
However, when I run this, I actually get dropped into some weird, synthetic catch_errors frame
```
(/home/ezyang/local/a/pytorch-env) [ezyang@devgpu019.ftw1 ~/local/a/pytorch (f15e82f7)]$ python z.py
--Call--
> /data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py(407)catch_errors()
-> @functools.wraps(callback)
(Pdb) bt
/data/users/ezyang/a/pytorch/z.py(8)<module>()
-> f(torch.randn(1))
/data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py(286)_fn()
-> return fn(*args, **kwargs)
/data/users/ezyang/a/pytorch/z.py(5)f()
-> breakpoint()
> /data/users/ezyang/a/pytorch/torch/_dynamo/eval_frame.py(407)catch_errors()
-> @functools.wraps(callback)
```
I can get to the frame I care about by typing `up`, but this threw me for a loop the first time it happened to me.
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
2,378 | 103,161 |
Calling jacrev with LSTM and functional_call gives error
|
triaged, module: functorch
|
### 🐛 Describe the bug
I wanted to calculate a Jacobian from the output of an LSTM, so I use functional_call and feed the parameters of the LSTM as input.
```python
import torch
from torch.func import functional_call, jacrev, jacfwd
device = 'cuda'
lstm = torch.nn.LSTM(
input_size=32,
hidden_size=32,
num_layers=1,
batch_first=True,
).to(device)
dict_params = dict(lstm.named_parameters())
input_batch = torch.randn(11, 10, 32).to(device)
def func_call(params, input_batch):
output, _ = functional_call(lstm, params, input_batch)
return output.mean()
jac = jacrev(
lambda params: func_call(params=params, input_batch=input_batch),
argnums=0
)(dict_params)
```
However, it gives me an error like this
```error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[9], line 1
----> 1 jac = jacfwd(
2 lambda params: func_call(params=params, input_batch=input_batch),
3 argnums=0
4 )(dict_params)
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py:1128](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py:1128), in jacfwd..wrapper_fn(*args)
1125 _, jvp_out = output
1126 return jvp_out
-> 1128 results = vmap(push_jvp, randomness=randomness)(basis)
1129 if has_aux:
1130 results, aux = results
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:434](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:434), in vmap..wrapped(*args, **kwargs)
430 return _chunked_vmap(func, flat_in_dims, chunks_flat_args,
431 args_spec, out_dims, randomness, **kwargs)
433 # If chunk_size is not specified.
--> 434 return _flat_vmap(
435 func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs
436 )
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:39](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:39), in doesnt_support_saved_tensors_hooks..fn(*args, **kwargs)
36 @functools.wraps(f)
37 def fn(*args, **kwargs):
38 with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 39 return f(*args, **kwargs)
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:619](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:619), in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs)
617 try:
618 batched_inputs = _create_batched_inputs(flat_in_dims, flat_args, vmap_level, args_spec)
--> 619 batched_outputs = func(*batched_inputs, **kwargs)
620 return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
621 finally:
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py:1119](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py:1119), in jacfwd..wrapper_fn..push_jvp(basis)
1118 def push_jvp(basis):
-> 1119 output = _jvp_with_argnums(func, args, basis, argnums=argnums, has_aux=has_aux)
1120 # output[0] is the output of `func(*args)`
1121 error_if_complex("jacfwd", output[0], is_input=False)
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:39](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/vmap.py:39), in doesnt_support_saved_tensors_hooks..fn(*args, **kwargs)
36 @functools.wraps(f)
37 def fn(*args, **kwargs):
38 with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 39 return f(*args, **kwargs)
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py:965](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py:965), in _jvp_with_argnums(func, primals, tangents, argnums, strict, has_aux)
963 primals = _wrap_all_tensors(primals, level)
964 duals = _replace_args(primals, duals, argnums)
--> 965 result_duals = func(*duals)
966 if has_aux:
967 if not (isinstance(result_duals, tuple) and len(result_duals) == 2):
Cell In[9], line 2, in (params)
1 jac = jacfwd(
----> 2 lambda params: func_call(params=params, input_batch=input_batch),
3 argnums=0
4 )(dict_params)
Cell In[7], line 2, in func_call(params, input_batch)
1 def func_call(params, input_batch):
----> 2 output, _ = functional_call(lstm, params, input_batch)
3 return output.mean()
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/functional_call.py:143](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/_functorch/functional_call.py:143), in functional_call(module, parameter_and_buffer_dicts, args, kwargs, tie_weights, strict)
137 else:
138 raise ValueError(
139 f"Expected parameter_and_buffer_dicts to be a dict, or a list[/tuple](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/tuple) of dicts, "
140 f"but got {type(parameter_and_buffer_dicts)}"
141 )
--> 143 return nn.utils.stateless._functional_call(
144 module,
145 parameters_and_buffers,
146 args,
147 kwargs,
148 tie_weights=tie_weights,
149 strict=strict,
150 )
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/utils/stateless.py:262](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/utils/stateless.py:262), in _functional_call(module, parameters_and_buffers, args, kwargs, tie_weights, strict)
258 args = (args,)
259 with _reparametrize_module(
260 module, parameters_and_buffers, tie_weights=tie_weights, strict=strict
261 ):
--> 262 return module(*args, **kwargs)
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/module.py:1501](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:762](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:762), in LSTM.forward(self, input, hx)
760 if not torch.jit.is_scripting():
761 if self._weights_have_changed():
--> 762 self._init_flat_weights()
764 orig_input = input
765 # xxx: isinstance check needs to be in conditional for TorchScript to compile
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:139](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:139), in RNNBase._init_flat_weights(self)
135 self._flat_weights = [getattr(self, wn) if hasattr(self, wn) else None
136 for wn in self._flat_weights_names]
137 self._flat_weight_refs = [weakref.ref(w) if w is not None else None
138 for w in self._flat_weights]
--> 139 self.flatten_parameters()
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:176](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:176), in RNNBase.flatten_parameters(self)
170 return
172 # If any parameters alias, we fall back to the slower, copying code path. This is
173 # a sufficient check, because overlapping parameter buffers that don't completely
174 # alias would break the assumptions of the uniqueness check in
175 # Module.named_parameters().
--> 176 unique_data_ptrs = {p.data_ptr() for p in self._flat_weights}
177 if len(unique_data_ptrs) != len(self._flat_weights):
178 return
File [~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:176](https://vscode-remote+ssh-002dremote-002bquanta-002etitan.vscode-resource.vscode-cdn.net/home/quanta/Projects/FoRL-project/test_code/~/.conda/envs/forl-proj/lib/python3.10/site-packages/torch/nn/modules/rnn.py:176), in (.0)
170 return
172 # If any parameters alias, we fall back to the slower, copying code path. This is
173 # a sufficient check, because overlapping parameter buffers that don't completely
174 # alias would break the assumptions of the uniqueness check in
175 # Module.named_parameters().
--> 176 unique_data_ptrs = {p.data_ptr() for p in self._flat_weights}
177 if len(unique_data_ptrs) != len(self._flat_weights):
178 return
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
It says that the tensors in `self._flat_weights` do not have storage when the `Tensor.data_ptr()` method is called. I also tried to print `self._flat_weights` when this function is called; it is
```python
[GradTrackingTensor(lvl=1, value=
Parameter containing:
tensor([[ 0.0152, -0.0061, 0.0816, ..., -0.0025, -0.0326, 0.0344],
[-0.0327, -0.0508, -0.1517, ..., -0.0478, 0.1520, 0.0825],
[ 0.0265, -0.0549, -0.1710, ..., 0.0329, -0.0428, 0.0193],
...,
[-0.1371, 0.0196, -0.0076, ..., -0.1137, -0.1058, 0.0396],
[ 0.0370, -0.0681, 0.0193, ..., -0.0464, 0.0921, -0.1209],
[-0.1082, -0.1462, 0.0414, ..., -0.0778, -0.0856, 0.0774]],
device='cuda:0', requires_grad=True)
), GradTrackingTensor(lvl=1, value=
Parameter containing:
tensor([[-0.1160, -0.0630, 0.1476, ..., -0.0603, -0.1395, -0.1528],
[ 0.1590, -0.1527, 0.1602, ..., 0.0061, 0.0968, 0.0363],
[-0.1380, 0.0860, -0.1754, ..., -0.0117, -0.0765, -0.0704],
...,
[ 0.1244, -0.1528, -0.1146, ..., 0.0456, 0.1050, 0.0627],
[ 0.1305, 0.1589, -0.1673, ..., 0.0688, 0.0474, -0.0307],
[ 0.0105, -0.0980, -0.0172, ..., -0.1360, 0.1762, -0.1558]],
device='cuda:0', requires_grad=True)
), GradTrackingTensor(lvl=1, value=
Parameter containing:
tensor([ 0.0698, 0.1423, -0.0210, -0.0220, 0.1001, 0.1764, -0.1697, 0.0057,
0.1385, -0.1068, -0.1130, -0.0255, -0.1064, 0.1258, 0.1567, -0.0853,
0.0319, 0.0120, 0.0847, -0.1106, 0.0755, 0.0322, 0.0279, -0.1470,
-0.1333, 0.0607, 0.0515, -0.1195, -0.1491, 0.0726, -0.1084, 0.1267,
-0.1031, 0.0062, 0.1640, -0.0104, -0.0157, -0.1091, 0.0904, 0.1325,
-0.1592, -0.0774, 0.0814, 0.0034, 0.0211, 0.1304, 0.1630, 0.0069,
-0.1333, 0.1403, 0.1562, 0.0679, -0.1202, 0.0201, 0.1482, -0.1630,
0.1039, -0.1758, 0.1112, 0.0051, -0.0909, 0.1661, 0.0383, -0.1568,
-0.0799, -0.0284, -0.1319, 0.1042, 0.0036, -0.0238, -0.0283, 0.0488,
0.0003, 0.1377, 0.1479, -0.1500, -0.0282, -0.0816, 0.0874, 0.0337,
0.0751, 0.1523, 0.0758, -0.1458, -0.0024, -0.0427, -0.0908, -0.1383,
-0.1672, -0.0800, 0.0409, -0.1399, -0.0732, 0.0321, -0.0251, 0.1068,
-0.0486, 0.0953, -0.0917, -0.0501, 0.1601, 0.0244, -0.1500, -0.0720,
-0.1479, -0.0894, -0.1014, 0.0321, -0.0648, 0.1300, -0.0359, 0.1623,
-0.0691, -0.1325, 0.1291, 0.0251, -0.0148, -0.0885, -0.1415, -0.0860,
-0.0983, -0.1115, -0.1256, 0.1620, 0.0293, 0.0540, -0.1512, -0.0097],
device='cuda:0', requires_grad=True)
), GradTrackingTensor(lvl=1, value=
Parameter containing:
tensor([ 0.0405, -0.0769, -0.1045, -0.0878, 0.1220, -0.1439, -0.1761, 0.0604,
0.0725, 0.0428, 0.1289, 0.1255, -0.0375, 0.1240, -0.0087, -0.0632,
0.0611, -0.0453, -0.1217, -0.1690, -0.0899, 0.0293, -0.0544, -0.1171,
-0.0123, 0.1762, -0.0029, -0.0878, 0.0648, -0.1616, 0.0643, 0.0523,
-0.0807, 0.0242, 0.0982, 0.1478, -0.0666, 0.0869, 0.0363, -0.0100,
-0.0016, 0.1506, 0.1727, 0.0422, 0.1144, -0.0533, -0.1611, 0.1124,
0.0476, -0.0143, 0.1005, 0.0768, -0.0520, 0.0885, -0.0570, -0.0359,
0.0745, -0.0665, -0.1128, -0.1228, 0.1578, -0.0029, 0.0960, 0.0956,
0.1746, 0.0738, -0.1099, 0.1381, -0.1351, 0.0500, -0.0044, 0.1273,
-0.1468, -0.0626, -0.0234, 0.1047, 0.1232, 0.0216, 0.1043, 0.0513,
0.1348, -0.0211, 0.1674, 0.1112, 0.1559, 0.0566, 0.0557, -0.1758,
-0.1657, 0.0520, -0.0968, -0.1095, 0.1301, -0.0020, -0.1110, 0.1186,
0.0253, 0.1311, 0.0609, 0.0973, -0.0177, -0.0587, 0.1651, 0.1012,
0.1693, -0.1229, 0.0474, -0.0748, 0.1236, 0.0510, -0.0586, 0.1208,
-0.1384, -0.0365, -0.0905, 0.0042, 0.1580, -0.0101, -0.1153, -0.1726,
-0.1128, -0.1615, -0.0982, -0.1030, -0.1070, 0.1587, -0.1468, -0.1594],
device='cuda:0', requires_grad=True)
)]
```
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 13.1.1 20230429
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.37
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.3.6-arch1-1-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 12
CPU(s) scaling MHz: 84%
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7202.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp sgx_lc md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0+cu118
[pip3] triton==2.0.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
python collect_env.py 4.94s user 0.91s system 123% cpu 4.747 total
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 5 |
2,379 | 103,160 |
Allow overriding __repr__ to call dataclass_repr (infinite recursion right now)
|
triaged, better-engineering, module: codegen
|
### 🐛 Describe the bug
dataclass_repr is great and I want it as the default for most complicated dataclasses I define. However, I cannot actually define `__repr__` to call into `dataclass_repr` because this triggers an infinite loop.
### Versions
master
cc @bhosmer @bdhirsh
| 0 |
2,380 | 103,150 |
Build fails at linking torch_shm_manager on aarch64
|
module: build, triaged
|
On a Neoverse N1 server CPU (aarch64) using NVIDIA Tesla V100S GPUs, I am trying to build pytorch version 1.11.0 with Cuda 11.3. It ultimately fails due to a linker error in torch_shm_manager. I am using this command:
```
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install
```
The error is the following:
```
[3631/3634] Linking CXX executable bin/torch_shm_manager
FAILED: bin/torch_shm_manager
: && /home/users/kaftan/anaconda3/envs/pt110cu113/bin/aarch64-conda-linux-gnu-c++ -fvisibility-inlines-hidden -std=c++17 -fmessage-length=0 -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -isystem /home/users/kaftan/anaconda3/envs/pt110cu113/include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -g -fno-omit-frame-pointer -O0 -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--allow-shlib-undefined -Wl,-rpath,/home/users/kaftan/anaconda3/envs/pt110cu113/lib -Wl,-rpath-link,/home/users/kaftan/anaconda3/envs/pt110cu113/lib -L/home/users/kaftan/anaconda3/envs/pt110cu113/lib -rdynamic -rdynamic caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir/manager.cpp.o -o bin/torch_shm_manager -Wl,-rpath,/home/users/kaftan/pytorch/build/lib:/usr/local/cuda-11.3/lib64: lib/libshm.so -lrt lib/libtorch.so -Wl,--no-as-needed,"/home/users/kaftan/pytorch/build/lib/libtorch_cpu.so" -Wl,--as-needed lib/libprotobufd.a -pthread -Wl,--no-as-needed,"/home/users/kaftan/pytorch/build/lib/libtorch_cuda.so" -Wl,--as-needed lib/libc10_cuda.so /usr/local/cuda-11.3/lib64/libcudart.so /usr/local/cuda-11.3/lib64/libnvToolsExt.so /usr/local/cuda-11.3/lib64/libcufft.so /usr/local/cuda-11.3/lib64/libcurand.so /usr/local/cuda-11.3/lib64/libcublas.so lib/libc10.so && :
/home/users/kaftan/anaconda3/envs/pt110cu113/bin/../lib/gcc/aarch64-conda-linux-gnu/10.4.0/../../../../aarch64-conda-linux-gnu/bin/ld: bin/torch_shm_manager: hidden symbol `__aarch64_cas4_sync' in /home/users/kaftan/anaconda3/envs/pt110cu113/bin/../lib/gcc/aarch64-conda-linux-gnu/10.4.0/libgcc.a(cas_4_5.o) is referenced by DSO
/home/users/kaftan/anaconda3/envs/pt110cu113/bin/../lib/gcc/aarch64-conda-linux-gnu/10.4.0/../../../../aarch64-conda-linux-gnu/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
[3632/3634] Linking CXX shared library lib/libtorch_python.so
ninja: build stopped: subcommand failed.
```
I have already disabled some elements that caused the build to fail, setting ```BUILD_TEST=0```, ```USE_BREAKPAD=0``` and ```_GLIBCXX_USE_CXX11_ABI=0``` in the environment.
The build summary is the following:
```
-- ******** Summary ********
-- General:
-- CMake version : 3.22.1
-- CMake command : /home/users/kaftan/anaconda3/envs/pt110cu113/bin/cmake
-- System : Linux
-- C++ compiler : /home/users/kaftan/anaconda3/envs/pt110cu113/bin/aarch64-conda-linux-gnu-c++
-- C++ compiler id : GNU
-- C++ compiler version : 10.4.0
-- Using ccache if found : ON
-- Found ccache : CCACHE_PROGRAM-NOTFOUND
-- CXX flags : -fvisibility-inlines-hidden -std=c++17 -fmessage-length=0 -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -isystem /home/users/kaftan/anaconda3/envs/pt110cu113/include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow
-- Build type : Debug
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-- CMAKE_PREFIX_PATH : /home/users/kaftan/anaconda3/envs/pt110cu113/lib/python3.9/site-packages;/home/users/kaftan/anaconda3/envs/pt110cu113;/usr/local/cuda-11.3
-- CMAKE_INSTALL_PREFIX : /home/users/kaftan/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 1.11.0
-- CAFFE2_VERSION : 1.11.0
-- BUILD_CAFFE2 : OFF
-- BUILD_CAFFE2_OPS : OFF
-- BUILD_CAFFE2_MOBILE : OFF
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_TENSOREXPR_BENCHMARK: OFF
-- BUILD_NVFUSER_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.9.12
-- Python executable : /home/users/kaftan/anaconda3/envs/pt110cu113/bin/python
-- Pythonlibs version : 3.9.12
-- Python library : /home/users/kaftan/anaconda3/envs/pt110cu113/lib/libpython3.9.a
-- Python includes : /home/users/kaftan/anaconda3/envs/pt110cu113/include/python3.9
-- Python site-packages: lib/python3.9/site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : False
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- USE_BLAS : 1
-- BLAS : open
-- BLAS_HAS_SBGEMM :
-- USE_LAPACK : 1
-- LAPACK : open
-- USE_ASAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : ON
-- Split CUDA : OFF
-- CUDA static link : OFF
-- USE_CUDNN : OFF
-- USE_EXPERIMENTAL_CUDNN_V8_API: OFF
-- CUDA version : 11.3
-- CUDA root directory : /usr/local/cuda-11.3
-- CUDA library : /usr/local/cuda-11.3/lib64/stubs/libcuda.so
-- cudart library : /usr/local/cuda-11.3/lib64/libcudart.so
-- cublas library : /usr/local/cuda-11.3/lib64/libcublas.so
-- cufft library : /usr/local/cuda-11.3/lib64/libcufft.so
-- curand library : /usr/local/cuda-11.3/lib64/libcurand.so
-- nvrtc : /usr/local/cuda-11.3/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda-11.3/include
-- NVCC executable : /usr/local/cuda-11.3/bin/nvcc
-- CUDA compiler : /usr/local/cuda-11.3/bin/nvcc
-- CUDA flags : -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__
-- CUDA host compiler :
-- CUDA --device-c : OFF
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : OFF
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_FFTW : OFF
-- USE_MKL : OFF
-- USE_MKLDNN : OFF
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NNPACK : ON
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : ON
-- USE_PYTORCH_QNNPACK : ON
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_WITH_OPENSSL : OFF
-- USE_TENSORPIPE : ON
-- USE_DEPLOY : OFF
-- USE_BREAKPAD : 0
-- Public Dependencies : caffe2::Threads
-- Private Dependencies : pthreadpool;cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;fp16;gloo;tensorpipe;foxi_loader;rt;fmt::fmt-header-only;kineto;gcc_s;gcc;dl
-- USE_COREML_DELEGATE : OFF
```
Please let me know if you need any more details on my build environment, thank you for your help.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (aarch64)
GCC version: (conda-forge gcc 10.4.0-17) 10.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.12 (main, Jun 1 2022, 11:39:41) [GCC 10.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: Tesla V100S-PCIE-32GB
GPU 1: Tesla V100S-PCIE-32GB
GPU 2: Tesla V100S-PCIE-32GB
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per socket: 80
Socket(s): 1
Stepping: r3p1
Frequency boost: disabled
CPU max MHz: 3300.0000
CPU min MHz: 1000.0000
BogoMIPS: 50.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
L1d cache: 5 MiB (80 instances)
L1i cache: 5 MiB (80 instances)
L2 cache: 80 MiB (80 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-79
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[conda] numpy 1.24.3 py39h8708280_0
[conda] numpy-base 1.24.3 py39h4a83355_0
cc @malfet @seemethere
| 1 |
2,381 | 103,148 |
Optimize the copy of Half to Float and Float to Half on CPU
|
module: cpu, open source, ciflow/trunk, topic: not user facing, ciflow/periodic
|
### Description
Optimize the copy of Half to Float and Float to Half on CPU.
### Testing
Single core:
shape | fp16 -> fp32 / ms | fp32 -> fp16 / ms | bf16 -> fp32 / ms | fp32 -> bf16 / ms
-- | -- | -- | -- | --
size: (1, 777) | 0.00345 | 0.00344 | 0.00411 | 0.00410
size: (2, 512) | 0.00355 | 0.00344 | 0.00431 | 0.00400
size: (10, 555) | 0.00473 | 0.00391 | 0.00562 | 0.00477
size: (1, 2048, 1024) | 0.488 | 0.480 | 0.498 | 0.499
size: (32, 100, 777) | 0.584 | 0.568 | 0.571 | 0.587
28 cores:
shape | fp16 -> fp32 / ms | fp32 -> fp16 / ms | bf16 -> fp32 / ms | fp32 -> bf16 / ms
-- | -- | -- | -- | --
size: (10, 555) | 0.00472 | 0.00369 | 0.00576 | 0.00481
size: (1, 2048, 1024) | 0.0189 | 0.0188 | 0.0173 | 0.0251
size: (64, 512, 1024) | 3.159 | 2.375 | 3.152 | 2.358
size: (32, 100, 777) | 0.0225 | 0.0195 | 0.0193 | 0.0261
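A minimal timing sketch of the assumed methodology (not the exact benchmark harness used for the tables above):
```python
import torch
from torch.utils.benchmark import Timer

torch.set_num_threads(1)  # single-core numbers; raise for the 28-core table
x = torch.randn(1, 2048, 1024).to(torch.half)
t = Timer(stmt="x.to(torch.float32)", globals={"x": x, "torch": torch})
print(t.timeit(100))  # reports per-iteration time for the fp16 -> fp32 copy
```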
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 4 |
2,382 | 103,131 |
error: βaligned_allocβ was not declared in this scope static_cast<char*>(aligned_alloc(FLATBUFFERS_MAX_ALIGNMENT, size)), free);
|
module: build, triaged
|
### 🐛 Describe the bug
[100%] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/init_flatbuffer_module.cpp.o
/data/meda_home/hongbin/pytorch/torch/csrc/init_flatbuffer_module.cpp: In function βstd::shared_ptr<char> copyStr(const string&)β:
/data/meda_home/hongbin/pytorch/torch/csrc/init_flatbuffer_module.cpp:37:26: error: βaligned_allocβ was not declared in this scope
static_cast<char*>(aligned_alloc(FLATBUFFERS_MAX_ALIGNMENT, size)), free);
^~~~~~~~~~~~~
gmake[2]: *** [caffe2/torch/CMakeFiles/torch_python.dir/build.make:1630: caffe2/torch/CMakeFiles/torch_python.dir/csrc/init_flatbuffer_module.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
gmake[1]: *** [CMakeFiles/Makefile2:4151: caffe2/torch/CMakeFiles/torch_python.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2
### Versions
(base) -bash-4.1$ python torch/utils/collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS release 6.9 (Final) (x86_64)
GCC version: (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
Clang version: Could not collect
CMake version: version 3.21.3
Libc version: glibc-2.10
Python version: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-2.6.32-696.el6.x86_64-x86_64-with-centos-6.9-Final
Is CUDA available: N/A
CUDA runtime version: 10.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.0a0+git664058f
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.12.0a0+git664058f pypi_0 pypi
cc @malfet @seemethere
| 1 |
2,383 | 103,117 |
Observed regression in DataLoader spawn from PyTorch 1.13 to PyTorch 2.0
|
high priority, module: performance, triaged, module: regression, module: data
|
## Issue description
Observed a performance regression between PyTorch 1.13.1 and PyTorch 2.0.0 when using DataLoader to spawn processes.
With the example code below, I have observed a 37.5% performance regression.
## Code example
```python
import time
from torch.utils.data import Dataset, IterableDataset, DataLoader
from torchvision.datasets import MNIST
def one_experiment(data_train):
    dataloader = DataLoader(
        dataset=data_train,
        batch_size=2,
        num_workers=8,
        drop_last=True,
        multiprocessing_context="spawn"
    )

    start_time = time.perf_counter()
    dataloader._get_iterator()
    end_time = time.perf_counter()
    elapsed_time = end_time - start_time
    print("DataLoader Create Process Time = ", elapsed_time, " seconds")
    return elapsed_time

def main() -> None:
    num_tries = 5
    total_time = 0
    results = []
    data_train = MNIST('/tmp/mnist_data', train=True, download=True)
    for _ in range(num_tries):
        elapsed_time = one_experiment(data_train)
        results.append(elapsed_time)
    total_time = sum(results)
    averaged_time = total_time / float(num_tries)
    print("Number of tries = ", num_tries)
    print("Total time = ", total_time)
    print("Averaged time = ", averaged_time)
    print("Individual time = ", results)

if __name__ == "__main__":
    main()
```
If I run the above script with PyTorch 1.13, the average time to spawn the processes is about 6 seconds. On the other hand, if I run the same code with PyTorch 2.0, the average spawn time becomes about 9 seconds.
## System Info
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1035-gcp-x86_64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
- PyTorch or Caffe2:
- PyTorch
- How you installed PyTorch (conda, pip, source):
- Source
- Build command you used (if compiling from source):
- Internal build system
- PyTorch version:
- 1.13 and 2.0
cc @ezyang @gchanan @zou3519 @VitalyFedyunin @ejguan @dzhulgakov
| 6 |
2,384 | 103,111 |
Turn on Inductor Max Pool2d Backward Lowering For Channels Last
|
feature, good first issue, triaged, oncall: pt2, module: inductor
|
### π The feature, motivation and pitch
We currently disable max_pool2d_with_indices_backward when [an input is in channels last](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/lowering.py#L3063-L3066) due to slow kernel generation.
This is reproducible in https://github.com/pytorch/pytorch/blob/main/benchmarks/dynamo/microbenchmarks/operatorbench.py with a few local changes, shared below.
`python /scratch/eellison/work/pytorch/benchmarks/dynamo/microbenchmarks/operatorbench.py --suite=timm --op=aten.max_pool2d_with_indices_backward.default --max-samples=5 --dtype=float16 --channels-last`
> Inductor Speedups : [0.8040992530735804, 0.8699831653183436, 1.059330068525701]
However, when we run with `TORCHINDUCTOR_COORDINATE_DESCENT_TUNING=1` those regressions turn into speedups:
> Inductor Speedups : [1.1356843331778068, 1.2388101486725653, 1.5219576909790553]
We should investigate updating our pointwise heuristics for channels-last kernels and then turn the lowering back on.
You'll need to get the changes from https://github.com/pytorch/pytorch/pull/103110 and disable the [fallback](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/lowering.py#L3063-L3066).
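For orientation, a minimal sketch of the kind of graph that exercises this lowering (not the operatorbench harness; the shape, dtype, and CUDA device are illustrative assumptions):
```python
import torch
import torch.nn.functional as F

def pool(x):
    return F.max_pool2d(x, kernel_size=3, stride=2)

compiled = torch.compile(pool)

# channels-last fp16 input, similar to what the timm suite feeds the op
x = torch.randn(64, 64, 112, 112, device="cuda", dtype=torch.float16)
x = x.to(memory_format=torch.channels_last).requires_grad_()

out = compiled(x)
out.sum().backward()  # exercises max_pool2d_with_indices_backward
```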
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225
| 0 |
2,385 | 103,109 |
Increased / more verbose type aliases for improved readability of user defined content
|
module: typing, triaged, enhancement, needs research
|
### π The feature, motivation and pitch
Right now PyTorch exports a few classes and has a few type aliases under `torch.types`.
While these are useful, for those working with PyTorch there may be a case for using `TypeAlias` to help clarify arguments and return types. For example, consider just a few of the following aliases:
```python
TorchDevice: TypeAlias = Union[Device, TorchDeviceTypes]
TorchTensor: TypeAlias = Union[tuple(torch._tensor_classes)] # could be just torch.Tensor, but doesn't offer all the hints
TorchTensors: TypeAlias = Sequence[TorchTensor]
TorchLayer: TypeAlias = torch.nn.Module
TorchLayers: TypeAlias = Sequence[TorchLayer]
TorchLoss: TypeAlias = torch.nn.Module
TorchLosses: TypeAlias = Sequence[TorchLoss]
StateDict: TypeAlias = dict
MaskTensor: TypeAlias = Union[torch.BoolTensor, torch.IntTensor]
HiddenState: TypeAlias = TorchTensor
CellState: TypeAlias = TorchTensor
GRUState: TypeAlias = HiddenState
LSTMStates: TypeAlias = Tuple[HiddenState, CellState]
RNNStates: TypeAlias = Union[GRUState, LSTMStates]
```
Suppose someone is defining a custom layer that users can initialize with either a `GRU` or an `LSTM`. Showing that the forward call returns `Tuple[Tensor, RNNStates]` may help developers more readily remember that `RNNStates` is either a single `tensor` or a `tuple`, which could otherwise be overlooked.
Likewise, `TorchLayer` and `TorchLoss`, while both aliases for `nn.Module`, would help clarify whether someone is passing a loss function or a layer into a model.
Does PyTorch need a `torch.aliases` submodule with a bunch of different aliases? Not technically. I do, however, find that on collaborative projects such aliases can make things much clearer. One could also just use longer type annotations, but I find those eventually clutter code, e.g.
```
def foo(..., rnn_states: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]], ...):
pass
```
It is just a preference.
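For illustration, a sketch of how a signature might read with a couple of minimal stand-ins for the aliases proposed above (these are not an existing `torch.aliases` module):
```python
from typing import Tuple, Union
import torch

# minimal stand-ins for the aliases proposed above
HiddenState = torch.Tensor
CellState = torch.Tensor
RNNStates = Union[HiddenState, Tuple[HiddenState, CellState]]

def forward(x: torch.Tensor, states: RNNStates) -> Tuple[torch.Tensor, RNNStates]:
    # the annotation signals that `states` is a single hidden state (GRU)
    # or a (hidden, cell) tuple (LSTM)
    ...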
### Alternatives
_No response_
### Additional context
I put it in a [PR](https://github.com/pytorch/pytorch/pull/102914#issuecomment-1578935718)
cc @ezyang @malfet @rgommers @xuzhao9 @gramster
| 7 |
2,386 | 103,104 |
PyTorch should not use `windows.8xlarge.nvidia.gpu` to test binary builds
|
module: ci, triaged
|
### π Describe the bug
Binary tests are tiny, so the smallest machine we can get is better; in addition, `windows.8xlarge.nvidia.gpu` runners are often queued. From [hud/metrics](https://hud.pytorch.org/metrics):
<img width="668" alt="image" src="https://github.com/pytorch/pytorch/assets/2453524/27bc37f2-6e51-4a04-a084-b4ef76f16b96">
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
| 0 |
2,387 | 103,101 |
Refactor mm_plus_mm to check conditions upfront
|
feature, good first issue, triaged, oncall: pt2, module: inductor
|
### π The feature, motivation and pitch
We always lower `x @ y + a @ b` to [tuned_mm_plus_mm](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/fx_passes/post_grad.py#L138-L139) even though we only lower to the fused op in special conditions, see https://github.com/pytorch/pytorch/blob/08c4a442fd589f380ac0dd6f1126d6e144f449e5/torch/_inductor/kernel/mm_plus_mm.py#L154-L159.
We could refactor this check to occur in the pattern matching pass so that these mms/adds aren't prevented from participating in other patterns or fusions.
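For reference, a hedged sketch of the user-level pattern this lowering targets (not inductor-internal code; shapes are arbitrary):
```python
import torch

def f(x, y, a, b):
    # candidate for the fused mm_plus_mm lowering
    return x @ y + a @ b

compiled = torch.compile(f)
out = compiled(torch.randn(64, 64), torch.randn(64, 64),
               torch.randn(64, 64), torch.randn(64, 64))
```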
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225
| 2 |
2,388 | 103,099 |
torch.compile specializes on output name
|
high priority, triaged, oncall: pt2
|
### π Describe the bug
x-posting https://discuss.pytorch.org/t/pytorch-compile-requires-fixed-input-naming/181323
```python
import torch
torch._dynamo.config.verbose=True
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(100, 10)

    def forward(self, x, step):
        return {f'output_{step}': self.lin(x[f'input_{step}'])}

mod = MyModule()
opt_mod = torch.compile(mod)

my_input = {
    'input_0': torch.ones([100]),
    'input_1': torch.ones([100])
}

for step in range(2):
    output = opt_mod(my_input, step)
    print(output.keys())
```
### Error logs
### Expected Output
```
dict_keys(['output_0'])
dict_keys(['output_1'])
```
### Actual Output
```
dict_keys(['output_0'])
dict_keys(['output_0'])
```
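One possible workaround sketch (an assumption on my part, not a fix for the underlying bug): keep the step-dependent dict keys outside the compiled region so only the tensor computation is captured. This reuses `my_input` from the repro above.
```python
class MyModuleNoKeys(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(100, 10)

    def forward(self, x):
        return self.lin(x)

opt_mod2 = torch.compile(MyModuleNoKeys())
for step in range(2):
    # dict keys are built in eager Python, so they cannot be baked into the graph
    output = {f'output_{step}': opt_mod2(my_input[f'input_{step}'])}
    print(output.keys())  # dict_keys(['output_0']), then dict_keys(['output_1'])
```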
### Minified repro
n/a
### Versions
.
cc @ezyang @gchanan @zou3519 @wconstab @bdhirsh @anijain2305
| 3 |
2,389 | 103,093 |
Inconsistent memory allocation using FSDP between PT 2.0 and Nightlies
|
high priority, triage review, oncall: distributed, triaged, module: fsdp
|
### π Describe the bug
I am running multi-node training of T5-11B using FSDP. Running this with 5 nodes, each with 8 A100 40 GB GPUs, works fine with PT 1.13.1 and PT 2.0; however, it runs into OOM with PyTorch nightlies (2.1.0.dev20230606+cu118), and even with nightlies from two weeks ago. The same issue persists even if I scale to 7-8 nodes. I would appreciate any thoughts on the root cause / debugging steps.
I also want to mention that PT 1.13.1 and PT 2.0 were tested with CUDA 11.7, whereas the PT nightlies are on CUDA 11.8.
**memory stats on different versions**
- PT 1.13.1 reserved memory: 37.043 GB, allocate memory: 21.1228
- PT 2.0 reserved memory: 37.6172 GB, allocate memory: 21.1387
- PT nightlies OOM
**Repro steps**
```bash
git clone https://github.com/HamidShojanazeri/examples.git
cd examples
git checkout repro
cd distributed/FSDP
pip install -r requirements.txt
sh download_dataset.sh
sbatch t5.slurm
```
PT 1.13 successful run[ logs](https://gist.github.com/HamidShojanazeri/901927f6b2290c3c3548f95e761d6355)
PT 2.0 successful run[ logs](https://gist.github.com/HamidShojanazeri/4b9f16cd758a05595cb9d3ca22536495)
PT Nightlies failure [logs](https://gist.github.com/HamidShojanazeri/fe9831f6f401524c4235c5a36d9d678b)
### Versions
```bash
Collecting environment information...
PyTorch version: 2.1.0.dev20230606+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.2.152
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2977.283
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-triton==2.1.0+9820899b38
[pip3] torch==2.1.0.dev20230606+cu118
[pip3] torch-model-archiver==0.8.0
[pip3] torch-workflow-archiver==0.2.8
[pip3] torchaudio==2.1.0.dev20230606+cu118
[pip3] torchpippy==0.1.1+3edf3ab
[pip3] torchserve==0.8.0
[pip3] torchvision==0.16.0.dev20230606+cu118
[pip3] triton==2.0.0
[pip3] vit-pytorch==1.2.2
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch-triton 2.1.0+9820899b38 pypi_0 pypi
[conda] torch 2.1.0.dev20230606+cu118 pypi_0 pypi
[conda] torch-model-archiver 0.8.0 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.8 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230606+cu118 pypi_0 pypi
[conda] torchpippy 0.1.1+3edf3ab pypi_0 pypi
[conda] torchserve 0.8.0 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230606+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
[conda] vit-pytorch 1.2.2 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 5 |
2,390 | 103,089 |
[OOM] Unable to convert 30B model to ONNX, using 4x A100's
|
module: onnx, triaged
|
### π Describe the bug
Unable to convert a 30B model to ONNX. I am using 4x A100s, 500 GB RAM, and 2.5 TB memory, and still run out of memory.
<img width="1510" alt="image" src="https://github.com/pytorch/pytorch/assets/124602977/b1dfb61d-d21b-47fa-86f0-62e9ebf6832b">
Here's the repro:
I believe this is reproable in any container, but here's the container setup step:
1) Create a container on Runpod from winglian/axolotl-runpod:main-py3.9-cu118-2.0.0
- Runpod.io -> My Templates -> New Template -> winglian/axolotl-runpod:main-py3.9-cu118-2.0.0
<img width="985" alt="Screenshot 2023-06-05 at 23 47 34" src="https://github.com/pytorch/pytorch/assets/124602977/8341a58e-f996-4607-98db-58a7ee317b8f">
Then deploy 4x A100 in Secure cloud, search for the Template just created:
<img width="307" alt="image" src="https://github.com/pytorch/pytorch/assets/124602977/923a11df-4fd3-4202-8a14-228a33773a67">
2) Once it loads, start the terminal and:
```
mkdir tmp && ln -s /workspace/tmp /tmp
pip install optimum && pip install onnx && pip install onnxruntime-gpu
git lfs install
git clone https://huggingface.co/ehartford/WizardLM-30B-Uncensored
```
3) Paste the following inference file using vim:
```
touch fp16_to_onnx.py
vim fp16_to_onnx.py
```
Paste this:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from optimum.onnxruntime import ORTModelForCausalLM
import argparse
import os
parser = argparse.ArgumentParser(description="Convert fp16 model to onnx")
parser.add_argument("model_dir", type=str, help="fp16 model folder")
parser.add_argument("--device", type=str, default="cuda:0", help="device")
args = parser.parse_args()
model_dir = args.model_dir
device = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_dir)
save_directory = "onnx_wiz/"
print("Loading")
ort_model = ORTModelForCausalLM.from_pretrained(
model_dir, export=True).to(device)
print("Saving")
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```
To exit vim, Esc -> Shift + Z -> Shift + Z
4) Now, run the conversion:
python fp16_to_onnx.py WizardLM-30B-Uncensored
This will take about 45 minutes, which already seems wrong, as it should take around 5 minutes; gpt2 takes 30 seconds to convert.
Then , it will fail with this:
<img width="1510" alt="image" src="https://github.com/pytorch/pytorch/assets/124602977/d1f795fb-4cc7-4427-893b-2ab275772cf3">
Can you please help unblock this? I have been trying to convert this model to ONNX for days already.
Many thanks
### Versions
```
CPU min MHz: 1500.0000
CPU min MHz: 1500.0000
BogoMIPS: 5600.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxs
r_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe p
opcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core
perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rd
t_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoin
vd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmul
qdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] No relevant packages
```
| 0 |
2,391 | 103,082 |
Ambiguity in causal-mask in scaled_dot_product_attention
|
oncall: transformer/mha
|
### π Describe the bug
When `is_causal=True`, `torch.nn.functional.scaled_dot_product_attention` is defined as an efficient implementation of the following
```
# Q: (N, ..., L, E)
# K: (N, ..., S, E)
# V: (N, ..., S, Ev)
attn_mask = torch.ones((L, S), dtype=torch.bool).tril(diagonal=0)
attn_mask = attn_mask.masked_fill(not attn_mask, -float('inf'))
attn_weight = torch.softmax((Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1))) + attn_mask, dim=-1)
out = attn_weight @ V
```
If `L != S`, the definition of attn_mask may be ambiguous.
For example, during the auto-regressive generation process of a GPT-like model, when kv-cache is enabled, usually `L=1`. In this case,
```python
In [1]: attn_mask = torch.ones((1, S), dtype=torch.bool).tril(diagonal=0); attn_mask
Out[1]: tensor([[ True, False, False, False, False, False, False, False]])
```
However, the correct `attn_mask` in this scenario should be `[True, True, ..., True]`.
I'm not sure whether this is a bug or a feature. I guess there must be a reason why the causal mask is designed this way; it would be helpful to discuss.
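For the kv-cache case, the mask the author expects could be built by anchoring the causal band to the end of the key sequence, e.g. (a sketch of the intended semantics, not the current `scaled_dot_product_attention` behaviour):
```python
import torch

L, S = 1, 8
# shift the diagonal so the single query attends to all S cached keys
attn_mask = torch.ones(L, S, dtype=torch.bool).tril(diagonal=S - L)
print(attn_mask)
# tensor([[True, True, True, True, True, True, True, True]])
```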
### Versions
2.0
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 0 |
2,392 | 103,073 |
torch.compile crash for tensor computation when tensor size is larger
|
needs reproduction, triaged, oncall: pt2
|
### π Describe the bug
torch.compile crashes during tensor computation when the tensor size increases from 10_000 to 100_000:
```python
import pyarrow as pa
import torch
def test():
    n = 100_000
    my_arrow = pa.Table.from_pydict(
        {f"c{i}": [float(x) for x in range(n)] for i in range(10)})
    torch_tensors = [torch.tensor(c.to_numpy()) for c in my_arrow.columns]

    def test_torch(tensors):
        t0 = tensors[0]
        r = t0
        for idx, t in enumerate(tensors):
            r += (t * t + idx) / 2
        return r

    test_torch_compiled = torch.compile(test_torch)
    result = test_torch_compiled(torch_tensors)
    print(result)

if __name__ == '__main__':
    test()
```
The script succeeds when n == 10_000 but crashes with exit code 139 when n == 100_000.
LLDB stack:
```
* thread #1
* frame #0: 0x00007ff80ee9b694 libsystem_platform.dylib`_platform_bzero$VARIANT$Haswell + 84
frame #1: 0x000000010a6f5f93 libomp.dylib`___kmp_allocate_align(unsigned long, unsigned long) + 66
frame #2: 0x000000010a70dea0 libomp.dylib`__kmp_allocate_thread + 426
frame #3: 0x000000010a709409 libomp.dylib`__kmp_allocate_team + 1587
frame #4: 0x000000010a70ad43 libomp.dylib`__kmp_fork_call + 5423
frame #5: 0x000000010a6ff6dd libomp.dylib`__kmpc_fork_call + 283
frame #6: 0x000000010377fc3c c2o4qtgl5x6zyriivqmcubbwcdqfyrfmw25gf72it576dqtdfimc.so`kernel + 188
frame #7: 0x00000001020f2d92 libffi.8.dylib`ffi_call_unix64 + 82
frame #8: 0x00000001020f2429 libffi.8.dylib`ffi_call_int + 761
frame #9: 0x00000001027137ef _ctypes.cpython-38-darwin.so`_ctypes_callproc + 671
frame #10: 0x000000010270e0f0 _ctypes.cpython-38-darwin.so`PyCFuncPtr_call + 272
frame #11: 0x0000000101c6c058 python3.8`_PyEval_EvalFrameDefault + 39112
frame #12: 0x0000000101b536d5 python3.8`_PyFunction_Vectorcall + 421
frame #13: 0x0000000101c6be73 python3.8`_PyEval_EvalFrameDefault + 38627
frame #14: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #15: 0x0000000101c6c73c python3.8`_PyEval_EvalFrameDefault + 40876
frame #16: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #17: 0x0000000101c6d935 python3.8`_PyEval_EvalFrameDefault + 45477
frame #18: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #19: 0x0000000101c6be73 python3.8`_PyEval_EvalFrameDefault + 38627
frame #20: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #21: 0x0000000101c6d935 python3.8`_PyEval_EvalFrameDefault + 45477
frame #22: 0x000000011a1ed23f libtorch_python.dylib`custom_eval_frame_shim + 159
frame #23: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #24: 0x0000000101c6be73 python3.8`_PyEval_EvalFrameDefault + 38627
frame #25: 0x000000011a1ed7ac libtorch_python.dylib`eval_custom_code + 220
frame #26: 0x000000011a1ed3ff libtorch_python.dylib`custom_eval_frame_shim + 607
frame #27: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #28: 0x0000000101c6d935 python3.8`_PyEval_EvalFrameDefault + 45477
frame #29: 0x0000000101b53a74 python3.8`_PyFunction_Vectorcall + 1348
frame #30: 0x0000000101c6be73 python3.8`_PyEval_EvalFrameDefault + 38627
frame #31: 0x0000000101b536d5 python3.8`_PyFunction_Vectorcall + 421
frame #32: 0x0000000101c6be73 python3.8`_PyEval_EvalFrameDefault + 38627
frame #33: 0x0000000101c609a8 python3.8`_PyEval_EvalCodeWithName + 712
frame #34: 0x0000000101ce0696 python3.8`run_mod + 166
frame #35: 0x0000000101cdf215 python3.8`pyrun_file + 133
frame #36: 0x0000000101cdedbc python3.8`pyrun_simple_file + 460
frame #37: 0x0000000101cdebc5 python3.8`PyRun_SimpleFileExFlags + 53
frame #38: 0x0000000101d02887 python3.8`pymain_run_file + 279
frame #39: 0x0000000101d0204b python3.8`pymain_run_python + 411
frame #40: 0x0000000101d01e65 python3.8`Py_RunMain + 37
frame #41: 0x0000000101b224c8 python3.8`main + 56
frame #42: 0x00000001031304fe dyld`start + 462
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:05:36) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Versions of relevant libraries:
[pip3] flake8==3.9.1
[pip3] flake8-bugbear==23.3.12
[pip3] flake8-quotes==3.3.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
2,393 | 103,070 |
Unexpected failure in LLVM JIT when running TorchScript model in C++
|
oncall: jit
|
### π Describe the bug
I have a [TorchScript model](https://github.com/pytorch/pytorch/files/11661294/model.zip) that I exported from Python and loaded in C++ via the following snippet (`network` is a member of a class):
```c++
network = torch::jit::load(request_msg.torch_script_path);
network.eval();
```
The model is a standard feedforward model with [1024, 512, 256, 256] linear units and ELU activations, producing 4 output values with a final linear layer. When I try to execute the loaded model via
```c++
std::vector<c10::IValue> inputs;
inputs.emplace_back(torch::zeros({1, 275}, torch::kFloat32));
auto network_effort_internal = network.forward(inputs);
```
I get the following error the second time I try to call the network:
```
terminate called after throwing an instance of 'c10::Error'
what(): valOrErr INTERNAL ASSERT FAILED at "../torch/csrc/jit/tensorexpr/llvm_jit.h":33, please report a bug to PyTorch. Unexpected failure in LLVM JIT: Unable to find target for this triple (no targets are registered)
```
Interestingly, I can prevent the error from appearing by cloning the network prior to each call
```
network = network.clone();
```
This is, however, not a solution, since cloning can be rather costly compared to the execution of the model. This error has also [been mentioned in the PyTorch discussion forum](https://discuss.pytorch.org/t/calling-forward-on-torchscript-model-multiple-times-leads-to-error/154990), but has not seen much discussion so far.
### Versions
I am using libtorch version 2.0.1. The model was exported from PyTorch 1.11.0, but I also checked with PyTorch 2.0.1 on the Python side with no change in behavior.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 3 |
2,394 | 103,060 |
Symbolic trace error about torch.nn.functional.pad
|
triaged, module: fx
|
### π Describe the bug
Symbolic trace failed with `torch.nn.functional.pad`.
I found a similar issue here: https://github.com/pytorch/vision/issues/6166. That exact issue has been fixed, but if we modify the code snippet as follows, the error still exists.
```python
import torch
import torch.fx
import torch.nn.functional as F
class CustomModule(torch.nn.Module):
    def forward(self, x):
        bs, c, h, w = x.shape
        return F.pad(x, (w, h))

m = CustomModule()
x = torch.rand(1, 3, 4, 4)
m_fx = torch.fx.symbolic_trace(m)
```
```
TypeError: pad(): argument 'pad' (position 2) must be tuple of ints, not tuple
```
It seems that if none of the padding sizes is a plain integer (here they all come from `x.shape`, so they are proxies during tracing), symbolic trace is not able to correctly parse the parameters.
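For contrast, a quick sanity check (not a fix): tracing succeeds when the padding sizes are plain integers rather than values pulled from `x.shape` during tracing.
```python
import torch
import torch.fx
import torch.nn.functional as F

class StaticPad(torch.nn.Module):
    def forward(self, x):
        return F.pad(x, (4, 4))  # literal ints, not proxied shape values

torch.fx.symbolic_trace(StaticPad())  # traces without error
```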
### Versions
PyTorch version: 1.13.0+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.76
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Versions of relevant libraries:
[pip3] flake8==3.9.2
[pip3] flake8-bugbear==21.4.3
[pip3] flake8-comprehensions==3.6.0
[pip3] flake8-polyfill==1.0.2
[pip3] flake8-tidy-imports==4.5.0
[pip3] horizon-plugin-pytorch==1.7.1.dev20230602+cu116.torch1130.f854a
[pip3] msgpack-numpy==0.4.8
[pip3] mypy==0.910
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-crf==0.7.2
[pip3] pytorch3d==0.7.2
[pip3] torch==1.13.0+cu116
[pip3] torchaudio==0.13.0+cu116
[pip3] torchmetrics==0.5.0
[pip3] torchvision==0.14.0+cu116
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py38h5eee18b_1
[conda] mkl_fft 1.3.6 py38h417a72b_1
[conda] mkl_random 1.2.2 py38h417a72b_1
[conda] msgpack-numpy 0.4.8 pypi_0 pypi
[conda] numpy 1.19.5 pypi_0 pypi
[conda] numpy-base 1.23.5 py38h060ed82_1
[conda] pytorch-crf 0.7.2 pypi_0 pypi
[conda] pytorch3d 0.7.2 pypi_0 pypi
[conda] torch 1.13.0+cu116 pypi_0 pypi
[conda] torchaudio 0.13.0+cu116 pypi_0 pypi
[conda] torchmetrics 0.5.0 pypi_0 pypi
[conda] torchvision 0.14.0+cu116 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 1 |
2,395 | 103,056 |
[Pytorch 2.0] torch::nn::Dropout output is incorrect on Windows
|
oncall: binaries, module: windows, module: cpp, triaged, module: regression
|
### π Describe the bug
Steps to reproduce:
1. refer to https://pytorch.org/cppdocs/installing.html
2. download pytorch 2.0.1 windows libtorch package from https://pytorch.org/
3. update the example-app.cpp
```cpp
#include <torch/torch.h>
#include <iostream>
int main() {
  auto m = torch::nn::Dropout(torch::nn::DropoutOptions().p(0.42));
  std::cout << "module : " << std::endl << m << std::endl << std::endl;
  std::cin.get();
}
```
4. build and run
```
mkdir build
cd build
cmake -G "Visual Studio 16 2019" -DCMAKE_PREFIX_PATH=/absolute/path/to/libtorch ..
cmake --build . --config Release
Release\example-app.exe
```
5. the p in the output is incorrect
if libtorch is libtorch-win-shared-with-deps-2.0.1+cu117, the p is a random value.
```
module :
torch::nn::Dropout(p=8.749e+99, inplace=false)
```
if libtorch is libtorch-win-shared-with-deps-2.0.1+cpu, the p is zero
`torch::nn::Dropout(p=0, inplace=false)`
The result is correct on Linux, or with LibTorch 1.13 on Windows:
```
module :
torch::nn::Dropout(p=0.42, inplace=true)
```
### Versions
Stable(2.0.1) LibTorch
The output isn't correct with CPU or CUDA binaries.
cc @seemethere @malfet @peterjc123 @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jbschlosser @ezyang @gchanan @zou3519
| 1 |
2,396 | 103,055 |
lit-llama lora fine tuning NetworkXUnbounded: Infinite capacity path, flow unbounded above
|
high priority, triaged, oncall: pt2, module: dynamic shapes
|
### π Describe the bug
lit-llama version 8aa65ba33e844c283c0a84b9758445fe0c6aab2d
Enable torch.compile with
```
diff --git a/finetune/lora.py b/finetune/lora.py
index 1873701..2d18ee8 100644
--- a/finetune/lora.py
+++ b/finetune/lora.py
@@ -73,6 +73,8 @@ def main(
mark_only_lora_as_trainable(model)
+ model = torch.compile(model, dynamic=True)
+
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
model, optimizer = fabric.setup(model, optimizer)
train(fabric, model, optimizer, train_data, val_data, tokenizer_path, out_dir)
```
Work around unrelated bug with
```
diff --git a/lit_llama/lora.py b/lit_llama/lora.py
index 0f644e2..3f46d1a 100644
--- a/lit_llama/lora.py
+++ b/lit_llama/lora.py
@@ -320,7 +320,7 @@ class MergedLinear(nn.Linear, LoRALayer):
self.lora_B.unsqueeze(-1), # (256, 2) -> (256, 2, 1)
groups=sum(self.enable_lora)
).transpose(-2, -1) # (64, 4, 64) @ (256, 2, 1) -> (64, 256, 64) -> (64, 64, 256)
- result += self.zero_pad(after_B) * self.scaling # (64, 64, 256) after zero_pad (64, 64, 384)
+ result = result + self.zero_pad(after_B) * self.scaling # (64, 64, 256) after zero_pad (64, 64, 384)
return result
```
Patch in Triton fix https://github.com/openai/triton/pull/1741
Use standard setup instructions, `python finetune/lora.py`
Fails with
```
NetworkXUnbounded: Infinite capacity path, flow unbounded above
```
The graph that was generated: https://gist.github.com/ezyang/71460c07dfc86d297090888e077bd88e
I think this is the corresponding joint graph https://gist.github.com/ezyang/65e55a66c9cee4120c1ba242bbaef358
This also affects finetune/adapter.py, so it's probably an issue in the llama model itself
cc @gchanan @zou3519 @msaroufim @wconstab @bdhirsh @anijain2305 @Chillee
### Versions
main
| 2 |
2,397 | 103,023 |
MPS bug: padding_idx in nn.Embedding does not prevent gradient accumulation
|
triaged, module: mps
|
### The `padding_idx` in `nn.Embedding` does not prevent gradient accumulation when run on MPS
Expected behaviour:
```python
import torch
embedding = torch.nn.Embedding(5, 3, padding_idx=0)
input_ids = torch.LongTensor([[0, 1, 1, 2]]) # padding_token, token_1, token_1, token_2
loss = torch.sum(embedding(input_ids))
loss.backward()
# expect to see zeros for the padding token, 2 for token_1, 1 for token_2, and 0 for other tokens
expected = torch.tensor([0, 2, 1, 0, 0]).repeat(3, 1).T
assert torch.all(torch.eq(embedding.weight.grad, expected)), embedding.weight.grad # all good
```
On MPS the gradient accumulates for the padding_idx:
```python
embedding = torch.nn.Embedding(5, 3, padding_idx=0).to('mps')
input_ids = torch.LongTensor([[0, 1, 1, 2]]).to('mps') # padding_token, token_1, token_1, token_2
loss = torch.sum(embedding(input_ids))
loss.backward()
# expect to see zeros for the padding token, 2 for token_1, 1 for token_2, and 0 for other tokens
expected = torch.tensor([0, 2, 1, 0, 0]).repeat(3, 1).T.to('mps')
assert torch.all(torch.eq(embedding.weight.grad, expected)), embedding.weight.grad # fails
```
```python
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[10], line 7
5 # expect to see zeros for the padding token, 2 for token_1, 1 for token_2, and 0 for other tokens
6 expected = torch.tensor([0, 2, 1, 0, 0]).repeat(3, 1).T.to('mps')
----> 7 assert torch.all(torch.eq(embedding.weight.grad, expected)), embedding.weight.grad
AssertionError: tensor([[1., 1., 1.],
[2., 2., 2.],
[1., 1., 1.],
[0., 0., 0.],
[0., 0., 0.]], device='mps:0')
```
This would lead to the padding vector being updated unexpectedly during training on MPS.
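Until this is fixed, one possible workaround sketch (my assumption, not an official recommendation) is to zero the padding row's gradient manually after `backward()`, reusing `embedding` from the MPS repro above:
```python
with torch.no_grad():
    if embedding.weight.grad is not None:
        embedding.weight.grad[embedding.padding_idx].zero_()
```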
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:12:31) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.3.1-arm64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] numpy==1.22.4
[pip3] torch==2.0.1
[conda] numpy 1.22.4 py310h0a343b5_0 conda-forge
[conda] pytorch 2.0.1 py3.10_0 pytorch
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
2,398 | 102,999 |
Preserve weight_g/weight_v accessors on new weight_norm
|
module: nn, triaged, module: nn.utils.parametrize
|
### π Describe the bug
Parametrizations don't let you control what the original parameters are called; they're always original0, original1, etc. For weight_norm, this new naming is a bit obtuse; the original naming of g/v was better. Not sure if this is actually worth fixing, holler if you think it is.
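For context, a small sketch of the naming difference (assuming the parametrization-based `torch.nn.utils.parametrizations.weight_norm` on current main):
```python
import torch
import torch.nn as nn

old = torch.nn.utils.weight_norm(nn.Linear(4, 4))
print(hasattr(old, "weight_g"), hasattr(old, "weight_v"))          # True True

new = torch.nn.utils.parametrizations.weight_norm(nn.Linear(4, 4))
params = new.parametrizations.weight
print(hasattr(params, "original0"), hasattr(params, "original1"))  # True True
```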
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @lezcano
### Versions
main
| 10 |
2,399 | 102,977 |
raise `RuntimeError` faster when loading an object with a torch CUDA tensor on a CPU-only machine
|
module: nn, module: serialization, triaged, actionable
|
### π The feature, motivation and pitch
Currently, when loading an object with a torch CUDA tensor on a CPU-only machine, torch raises a `RuntimeError`, as https://github.com/pytorch/pytorch/issues/16797 mentions. A problem in this situation is that torch spends time (presumably trying to load the tensor bytes) before raising that `RuntimeError`. This extra time is proportional to the size of the tensor, so when loading a huge tensor the extra time may be noticeable for performance-critical applications.
To give a concrete example, I have the following codes:
```python
import io
import pickle
import typing as t


def _safe_torch_tensor_loads(bs: bytes) -> t.Any:
    import torch

    f = io.BytesIO(bs)
    if not torch.cuda.is_available():
        return torch.load(f, map_location="cpu")
    else:
        return torch.load(f)


class FixTorchUnpickler(pickle.Unpickler):
    def find_class(self, module: str, name: str) -> t.Callable[[bytes], t.Any]:
        if module == "torch.storage" and name == "_load_from_bytes":
            return _safe_torch_tensor_loads
        else:
            return super().find_class(module, name)


def _fix_torch_loads(bs: bytes) -> t.Any:
    f = io.BytesIO(bs)
    unpickler = FixTorchUnpickler(f)
    return unpickler.load()


def loads_or_fix_torch(bs: bytes):
    try:
        return pickle.loads(bs)
    except RuntimeError:
        return _fix_torch_loads(bs)
```
where `_fix_torch_loads` directly loads an object containing CUDA tensors even on a CPU-only machine, and `loads_or_fix_torch` first tries to load the object and only falls back to `_fix_torch_loads` when it hits the `RuntimeError`. In my benchmark (I have a benchmark repo [here](https://github.com/larme/pytorch_unpickler_benchmark)), `loads_or_fix_torch` is 60%-70% slower than `_fix_torch_loads` for different sizes of objects. For a large enough tensor (like a (25000, 25000) fp32 tensor), the difference can be 500-600 ms. My application is distributed model inference serving, so 500-600 ms is quite a lot from the end user's point of view.
I'd like to help improve this situation, but I need some input from more experienced PyTorch developers. I tried to dive into `torch.serialization` and `torch.storage`, but they are a bit complex, so some pointers would be helpful!
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
2,400 | 102,971 |
Discussion and Design for Masked Loss Functions which can be used with PackedSequence training (but not exclusively)
|
module: loss, triaged, module: masked operators
|
### π The feature, motivation and pitch
The original idea for these comes from a [Github Gist] that helps reproduce [Issue 102911][packed seqs on mps] ([PackedSequences on MPS accelerator yields grad_y missing or crashes the kernel][packed seqs on mps]).
See: [PR][Pull Request]
[packed seqs on mps]: https://github.com/pytorch/pytorch/issues/102911
[packed sequence failure]: https://github.com/pytorch/pytorch/issues/97552
[loss error on m1]: https://github.com/pytorch/pytorch/issues/96416
[grad_y missing fix]: https://github.com/pytorch/pytorch/pull/96601
[gru nan]: https://github.com/pytorch/pytorch/issues/94691
[Github Gist]: https://gist.github.com/dsm-72/1cea0601145a8b92155d8d08c90bf998
[Results]: https://github.com/pytorch/pytorch/issues/94691#issuecomment-1574365231
[Pull Request]: https://github.com/pytorch/pytorch/pull/102915#issuecomment-1576748710
### Alternatives
_No response_
### Additional context
[packed seqs on mps]: https://github.com/pytorch/pytorch/issues/102911
The [Issue 102911][packed seqs on mps] stated above is related to:
- [Issue 96416][loss error on m1] ([Loss.backward() error when using MPS on M1 #96416][loss error on m1]),
- [Issue 94691][gru nan] ([Nan is output by GRU on mps #94691][gru nan]),
- [Issue 97552][packed sequence failure] ([PackedSequence failure with MPS #97552][packed sequence failure]), and
- [PR 96601][grad_y missing fix] ([[MPS] LSTM grad_y missing fix #96601][grad_y missing fix]).
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
### Description
When training, it can be useful to either pad the input **_or_** mask parts of the expected output to increase efficiency and/or robustness. The example above demonstrates this with an RNN for time series, where each sequence may have a variable length and padded sequences with `PackedSequence` are used to improve throughput. The use of padded output (since this is a Seq2Seq model, the padded input is also the output) or masked output motivates a loss function that can handle this without having to adjust the dataset, dataloaders, or training method.
This comes up often enough to motivate a base `MaskedLoss` class that can be used to quickly and easily generate other masked loss functions in a standardized way, e.g. `MaskedMSELoss`, `MaskedL1Loss`, etc.
The [PR][Pull Request] offers an example of how one might achieve this.
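To make the idea concrete, one possible shape for such a loss (a hedged sketch of the concept, not the PR's actual implementation):
```python
import torch
import torch.nn as nn

class MaskedMSELoss(nn.Module):
    def forward(self, input: torch.Tensor, target: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
        # mask is a bool tensor: True where an element should contribute
        sq_err = (input - target) ** 2
        sq_err = sq_err.masked_fill(~mask, 0.0)
        # average only over the unmasked elements
        return sq_err.sum() / mask.sum().clamp(min=1)
```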
| 6 |