Serial Number (int64, 1-6k) | Issue Number (int64, 75.6k-112k) | Title (string, 3-357 chars) | Labels (string, 3-241 chars) | Body (string, 9-74.5k chars) | Comments (int64, 0-867) |
---|---|---|---|---|---|
1,701 | 106,469 |
Extreme slowdown of torch.mm for certain sizes and strides with bfloat16
|
module: cuda, triaged, module: bfloat16
|
### 🐛 Describe the bug
I noticed I was getting some extreme slowdowns of matrix multiplications that usually take much less time for similar shapes. It was only for certain shapes and memory layouts, and I've been able to put together a small reproducible example.
Running the following code:
```python
import torch
from timeit import timeit
# 100000 isn't important and could be any big number
# 100 could be any similar number but the slowdown goes away with significantly different values
m = torch.ones(100000, 100, dtype=torch.bfloat16, device="cuda")
def mm(t):
    torch.mm(t, m)
    torch.cuda.synchronize()

for i in range(1, 21):
    t = torch.ones(100000, i, dtype=torch.bfloat16, device="cuda").T
    mm(t)
    print(i, timeit(lambda: mm(t), number=10))
```
Results in:
```
1 0.00016830721870064735
2 0.00011278688907623291
3 0.0027380590327084064
4 0.00010453397408127785
5 0.0028314038645476103
6 0.0001054578460752964
7 0.002929423004388809
8 0.0001083549577742815
9 0.0030427640303969383
10 0.00010291696526110172
11 0.003144121030345559
12 0.00010385108180344105
13 0.0032425778917968273
14 0.00010714191012084484
15 0.003299999050796032
16 0.00010801898315548897
17 0.00010303501039743423
18 0.00011960999108850956
19 0.0001004338264465332
20 0.00010727089829742908
```
As you can see, certain odd dimension lengths (3,5,7,9,11,13,15) are ~30x slower than you'd expect!
I'm not super familiar with how performant 2-byte aligned memory ops are compared to 4-byte ones, which could explain the odd lengths with bfloat16, but I would still hope for better than a 30x slowdown.
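For reference, here is a minimal diagnostic sketch (assuming a CUDA device is available) to test whether the slowdown is tied to the transposed, non-contiguous layout; the extra `.contiguous()` copy adds its own overhead, so this is a diagnostic rather than a fix:
```python
import torch
from timeit import timeit

m = torch.ones(100000, 100, dtype=torch.bfloat16, device="cuda")
t = torch.ones(100000, 3, dtype=torch.bfloat16, device="cuda").T  # one of the "slow" shapes

def mm(a):
    torch.mm(a, m)
    torch.cuda.synchronize()

mm(t); mm(t.contiguous())  # warm-up
print("transposed :", timeit(lambda: mm(t), number=10))
print("contiguous :", timeit(lambda: mm(t.contiguous()), number=10))
```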
### Versions
```
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.13.4
Libc version: glibc-2.28
Python version: 3.10.12 (main, Jun 12 2023, 08:08:55) [GCC 11.2.0] (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.129.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.216
BogoMIPS: 4402.78
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] msgpack-numpy==0.4.8
[pip3] mypy==1.2.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.2.0
[pip3] numpy==1.23.2
[pip3] numpyro==0.10.1
[pip3] torch==1.13.1+cu117
[conda] Could not collect
```
cc @ptrblck
| 1 |
1,702 | 106,467 |
nn.CrossEntropyLoss with invalid target corrupts memory, eventually leading to CUDA error: an illegal memory access
|
module: nn, module: loss, triaged
|
### 🐛 Describe the bug
The following code will, on occasion, generate an exit(-123), i.e. I have bad data.
```
assert predicted_line_mask.shape[0]==16
assert predicted_line_mask.shape[1]==4
assert predicted_line_mask.shape[2]==512
assert predicted_line_mask.shape[3]==512
assert all_line_mask.shape[0]==16
assert all_line_mask.shape[1]==512
assert all_line_mask.shape[2]==512
assert not torch.any(torch.isinf(predicted_line_mask))
if torch.any(all_line_mask > 3):
    print(all_line_mask.dtype)
    print(all_line_mask.min())
    print(all_line_mask.max())
    exit(-123)
assert not torch.any(all_line_mask > 3)
torch.cuda.synchronize()
loss1 = self.criterion_line(predicted_line_mask, all_line_mask)
```
Typical code does not have all these checks and looks like:
```
intermediate = self(image)
predicted_line_mask = self.conv(intermediate)
loss1 = self.criterion_line(predicted_line_mask, all_line_mask)
```
In this case, the code will most often run without reporting any corruption or strange behavior, even though the bad data is present on every run, but I suspect it is corrupting GPU memory. On occasional runs, it will report and exit with
`RuntimeError: CUDA error: an illegal memory access was encountered`
without any indication of where in the code something went wrong, leaving the impression that PyTorch just crashes on occasion.
Note that the bad data is present in all runs, but only occasional runs randomly crash.
Adding torch.cuda.synchronize() allows narrowing down the invalid memory access.
It would be nice if nn.CrossEntropyLoss could validate the range of its target against the dimensions of the input and report this error in the input data rather than silently corrupting memory.
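A minimal sketch of such a check that users can add today (the helper name is my own; note that `target.min()`/`target.max()` force a device-to-host sync, so this is best kept behind a debug flag):
```python
import torch

def check_ce_targets(logits: torch.Tensor, target: torch.Tensor) -> None:
    # Raise a clear error instead of letting an out-of-range class index
    # corrupt GPU memory inside nn.CrossEntropyLoss.
    num_classes = logits.shape[1]
    lo, hi = int(target.min()), int(target.max())  # forces a D2H sync
    if lo < 0 or hi >= num_classes:
        raise ValueError(
            f"targets must be in [0, {num_classes - 1}], got min={lo}, max={hi}"
        )

# usage with the tensors from the repro above:
# check_ce_targets(predicted_line_mask, all_line_mask)
# loss1 = self.criterion_line(predicted_line_mask, all_line_mask)
```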
### Versions
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.27
Python version: 3.10.8 (main, Nov 4 2022, 13:48:29) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.13.1
[pip3] torchelastic==0.2.2
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.14.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.22.3 py310hfa59a62_0
[conda] numpy-base 1.22.3 py310h9585f30_0
[conda] pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-lightning 1.5.10 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.14.1 py310 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 7 |
1,703 | 106,462 |
Enabling Transformer fast path for not batch_first
|
fb-exported, Stale, release notes: nn
|
Summary:
The fast path for the `forward()` method in `MultiheadAttention` only accepted `batch_first = True`. This diff enables fast path for `batch_first=False` as well.
Previously https://github.com/pytorch/pytorch/pull/85576 (approved by Christian but went stale and auto-closed)
Test Plan: Added unit test for fast path for both values of `batch_first` producing identical outputs. Test location - `//caffe2/test/test_native_mha.py`
Differential Revision: D47982531
| 11 |
1,704 | 106,457 |
AOTAutograd should detect false aliasing.
|
triaged, module: viewing and reshaping, oncall: pt2, module: aotdispatch
|
Today, if you write code like this:
```
@torch.compile
def f(x, y):
    x.add_(1)
    y.add_(1)
base = torch.ones(2, 2)
x = base[:, 0]
y = base[:, 1]
f(x, y)
```
Then AOTAutograd will attempt to perform some complicated "synthetic base" logic detailed [here](https://github.com/pytorch/pytorch/blob/b37a50afda55c5b73298016d10fca1f8c6f65055/torch/_functorch/aot_autograd.py#L2283).
The synthetic base logic is important for general cases where you have two inputs that alias the same memory, and a mutation on one input needs to be reflected to the other input.
But there are two problems with the example above:
(1) The repro above exhibits **false aliasing**. Even though `x` and `y` share the same storage, their memory locations are actually completely disjoint. We can generate significantly more efficient code if we detect this false aliasing. Otherwise, if we run the "synthetic base" logic, we will have to materialize two updated versions of `base` throughout the compiled graph for `f`.
(2) The synthetic base logic has a few complicated corner cases that it cannot handle. In particular, if your inputs alias each other and have non-trivial `._base` attributes, this is a case that we don't currently handle today. Fixing it is pretty non-trivial ([code](https://github.com/pytorch/pytorch/blob/main/torch/_functorch/aot_autograd.py#L1817)).
What should we do? In theory, we should be able to use the sizes/strides/storage_offset of our two inputs to figure out that they do **not** have any overlapping memory, even though they alias the same storage.
We have a version of this check in C++ today, but it only handles very simple cases because it needs to run in eager mode, which means it needs to be fast. It cannot handle the check above, because it bails out if either of its inputs are non-contiguous. Since we are compiling, we have the luxury of being okay with being a bit slower in how we check for overlapping memory.
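A brute-force sketch of such a compile-time check for two views of the same storage (the helper name is hypothetical; it enumerates element offsets, so it is only viable when we can afford to be slow, as in compilation):
```python
import itertools
import torch

def views_overlap(a: torch.Tensor, b: torch.Tensor) -> bool:
    # Return True if the two views touch any common storage element.
    # Far too slow for eager mode, but acceptable at compile time.
    def offsets(t: torch.Tensor) -> set:
        base = t.storage_offset()
        return {
            base + sum(i * s for i, s in zip(idx, t.stride()))
            for idx in itertools.product(*(range(d) for d in t.shape))
        }
    return bool(offsets(a) & offsets(b))

base = torch.ones(2, 2)
x, y = base[:, 0], base[:, 1]
print(views_overlap(x, y))        # False: same storage, disjoint elements
print(views_overlap(x, base[0]))  # True: both touch base[0, 0]
```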
Context: this appears to show up in the Shampoo optimizer. This could also potentially help other cases, such as:
- FSDP (uses flat params)
- LSTM (I believe the LSTM module has an optimization where its parameters are part of a single buffer)
cc @ezyang @msaroufim @wconstab @anijain2305
| 1 |
1,705 | 106,455 |
vmap, jacrev, jacfwd, hessian, etc., in libTorch
|
module: cpp, triaged, module: vmap, module: functorch
|
### 🚀 The feature, motivation and pitch
Apparently, right now there is a gap between the PyTorch APIs and the libTorch APIs regarding vmap and its related (previously functorch) capabilities. Moreover, I am afraid that the two sets of APIs are drifting further apart, especially in newer versions of PyTorch; see e.g. the comparison between PyTorch's grad and libTorch's grad:
https://pytorch.org/docs/stable/generated/torch.autograd.grad.html
https://pytorch.org/cppdocs/api/function_namespacetorch_1_1autograd_1a1e03c42b14b40c306f9eb947ef842d9c.html
A vmap implementation in libTorch would allow us to compute Jacobians of batched computations much more efficiently, which is often a necessary ingredient in the conventional scientific computing world (by which I mean communities like numerical methods for solving partial differential equations).
Some potentially related open issues:
https://github.com/pytorch/pytorch/issues/40208
https://github.com/pytorch/functorch/issues/767
We might be able to have someone take a stab at this next FY, and hopefully submit a PR. I would like to see what the PyTorch core development team thinks about this. Some pointers to get us started would be appreciated.
Just to prove I am not crazy about this and indeed this is possible, I have put together a repository to demonstrate the feasibility of this. Some examples are here:
https://github.com/hugary1995/jaxformtorch/blob/main/tests/test.cxx
I stopped there as I realized I would need to access some of the libTorch internals to get this to work for the most general case.
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 2 |
1,706 | 106,454 |
Add half specializations for load of sum
|
module: cpu, open source, module: half, ciflow/trunk, topic: not user facing, ciflow/periodic, ciflow/mps, ciflow/inductor
|
Add half specializations for load of sum
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
1,707 | 106,453 |
wip docker issue debug
|
Stale, with-ssh, topic: not user facing
|
Fixes #ISSUE_NUMBER
| 2 |
1,708 | 106,451 |
DISABLED test_cat_addmm (__main__.TestDoBench)
|
module: rocm, triaged, skipped
|
Platforms: rocm
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/inductor%2Ftest_max_autotune.py%3A%3ATestDoBench%3A%3Atest_cat_addmm)).
Same issue as https://github.com/pytorch/pytorch/issues/106026, still investigating the root cause. Currently it seems like the triton template tests will fail on ROCm GPUs upstream because the SM count is too low to return True in https://github.com/pytorch/pytorch/blob/main/torch/_inductor/utils.py#L655
cc: @jithunnair-amd
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang
| 1 |
1,709 | 106,450 |
Check for output_padding <= stride/dilation in ConvTranspose1d
|
module: convolution, triaged, module: padding
|
### 🚀 The feature, motivation and pitch
I'm working on a network that should output the exact same sequence length (L) as the input has. I use a CNN to extract feature information and shorten the sequence length, feed the results into an LSTM and use transpose convolutional layers to get the original sequence length again. For that I'm using two custom formulas for padding for each convolutional and transposed convolutional layer. This basically works like ``padding="same"``, but also for strides > 1 and for ConvTranspose1d, which doesn't have this option. The input sequence length then needs to be divisible by the stride to the power of the number of layers chosen. Everything works as intended **except** for using stride=1 and an even kernel_size when using transpose convolutional layers. The output padding in this case would need to be 1 and it would work flawlessly, but Pytorch doesn't allow that: ``RuntimeError: output padding must be smaller than either stride or dilation, but got output_padding_height: 0 output_padding_width: 1 stride_height: 1 stride_width: 1 dilation_height: 1 dilation_width: 1``. All other combinations of even and uneven kernel size and stride work as I intend. Would it be possible to check for ``output_padding <= stride`` instead of ``output_padding < stride``?
Code to test on:
```python
import torch
import math
def xpadding(l_in, l_out, stride, dilation, kernel_size):
"""Padding formula for Conv1d.
The formula is adapted from the formula for calculating the output sequence length (L_out)
from the Pytorch documentation. The desired Lout is: L_out = L_in/stride.
Args:
l_in: 'int', input sequence length
l_out: 'int', output sequence length
stride: 'int', strides/steps of kernel points
dilation: 'int', spacing between kernel points
kernel_size: 'int'
Returns:
padding: 'int', padding needed to achieve division without remainder
of sequence length, e.g. input seq_len: 10, stride: 2, expected/
necessary output seq_len: 5
"""
padding = math.ceil(((l_out - 1) * stride - l_in + dilation * (kernel_size - 1) + 1) / 2)
# math.ceil to avoid rounding half to even/bankers rounding, only needed for even L_in
return padding
def trans_padding(l_in, l_out, stride, dilation, kernel_size, output_pad):
"""Padding formula for TransposeConv1d.
The formula is adapted from the formula for calculating the output sequence length (L_out)
from the Pytorch documentation. The desired Lout is: L_out = L_in * stride. The output
padding equals zero, if the stride is uneven, and one otherwise.
Args:
l_in: 'int', input sequence length
l_out: 'int', output sequence length
stride: 'int', strides/steps of kernel points
dilation: 'int', spacing between kernel points
kernel_size: 'int'
output_pad: 'int', additional size added to one side of the output shape
Returns:
padding: 'int', padding needed to achieve multiplication "without remainder"
of sequence length, e.g. input seq_len: 5, stride: 2, expected/
necessary output seq_len: 10
"""
padding = ((l_in - 1) * stride - l_out + dilation * (kernel_size - 1) + output_pad + 1) / 2
return int(padding)
seq_len=21384
stride=1
layers = 2
x_in = seq_len / (stride ** layers)
print(x_in)
assert seq_len % stride ** layers == 0, "sequence length is not divisible by the stride to the power of layers"
x = torch.rand(2,4,seq_len)
kernel_size = 9
filter_size = 24
input_size=4
dilation=1
# CNN
# --------------
print("CNN\n---------------")
for layer in range(layers):
l_in = seq_len / stride ** layer
l_out = seq_len / stride ** (layer + 1)
print("input size:", l_in, "desired output size", l_out)
xpad = xpadding(l_in, l_out, stride, dilation, kernel_size)
conv = torch.nn.Conv1d(input_size, filter_size, kernel_size, stride=stride, padding=xpad, dilation=dilation)
x = conv(x)
input_size = filter_size
print(x.size())
# Transposed CNN
# ----------------
print("Transposed CNN\n---------------")
out_pad = 0 if (kernel_size+stride) % 2 == 0 else 1
print("output padding:", out_pad)
for layer in range(layers, 0, -1):
l_in = seq_len / stride ** layer
l_out = l_in * stride
tpad = trans_padding(l_in, l_out, stride, dilation, kernel_size, out_pad)
print("input size:", l_in, "desired output size", l_out)
transconv = torch.nn.ConvTranspose1d(24, 24, kernel_size, stride=stride, padding=tpad,
dilation=dilation, output_padding=out_pad)
x = transconv(x)
print(x.size())
```
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
1,710 | 106,449 |
Enable optional tensorList fallback to cpu.
|
triaged, open source, release notes: quantization, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor, module: export
|
Fixes #102081
Currently only Tensor and TensorList arguments are copied to the CPU when falling back to the CPU device; the index_put and index op fallbacks failed because OptionalTensorList is not copied to the CPU.
| 2 |
1,711 | 106,446 |
Fix Clang compilation error with Lib ATen for ppc64le
|
module: cpu, triaged, open source
|
This patch fixes errors while compiling Lib ATen with Clang for ppc64le.
I have used clang version 15.0.7.
The errors are as follows:
No matching function for call to 'vec_sel'
No matching function for call to 'vec_splats'
Excess elements in scalar initializer
Use of undeclared identifier 'vec_vsubudm'
It also fixes multiple errors within int64_t DEFINE_MEMBER_OP_AND_ONE
References:
https://releases.llvm.org/9.0.0/tools/clang/docs/AttributeReference.html
https://reviews.llvm.org/D81083
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 5 |
1,712 | 106,445 |
[dynamo] Support dict.get with no default specified, and dict.copy
|
triaged, open source, Stale, module: dynamo, ciflow/inductor
|
Previously, `.get(key, default)` was supported, but `.get(key)` was not.
Also adds `.copy`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @Xia-Weiwen @ipiszy
| 4 |
1,713 | 106,439 |
DISABLED test_aot_sequence_nr_dynamic_shapes (dynamo.test_aot_autograd.DynamicShapesAotAutogradFallbackTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aot_sequence_nr_dynamic_shapes&suite=DynamicShapesAotAutogradFallbackTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15538280405).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 18 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aot_sequence_nr_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 122 |
1,714 | 106,437 |
[PG NCCL][RFC] Pause before throwing exception
|
fb-exported, release notes: distributed (c10d)
|
Summary:
When this exception is thrown, torchx will kick in and take down the
remaining processes. Adding a bit of time before throwing it so that other
processes can also have their watchdogs fire, which can better contribute to
log understanding in terms of potential collective mismatches, shape
misalignment, etc.
Tested by running some jobs and ensuring all the watchdogs fire and report
errors in logs when collectives are misaligned, instead of just one.
Test Plan: CI
Differential Revision: D47975982
| 5 |
1,715 | 106,435 |
[JIT] .item() dict keys cause `RuntimeError: Cannot create dict for key type 'Scalar', only int, float, complex, Tensor, device and string keys are supported`
|
oncall: jit
|
### 🐛 Describe the bug
**Tasks**:
* [ ] (maybe) fix this - seems hard because torchscript doesn't know whether a tensor is a FloatTensor or an IntTensor
* [ ] Improve source attribution: maybe this error should be a FrontendError(? is that what it's called?) that traces back to the source that had this issue.
Repro:
```python
import torch
from typing import List, Tuple, Dict
def fn(x: List[torch.Tensor]):
    return {x[i].item(): i for i in range(len(x))}
torch.jit.script(fn)
```
Error:
```
Traceback (most recent call last):
File "/data/users/dberard/scripts/dict_comprehension.py", line 7, in <module>
torch.jit.script(fn)
File "/data/users/dberard/pytorch/torch/jit/_script.py", line 1377, in script
fn = torch._C._jit_script_compile(
RuntimeError: Cannot create dict for key type 'Scalar', only int, float, complex, Tensor, device and string keys are supported
```
### Versions
main branch as of aug 1, 2023
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
1,716 | 106,432 |
inconsistent dtype of scale and zero_point in observers
|
high priority, triage review, oncall: quantization, triaged
|
### 🐛 Describe the bug
In https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/observer.py#L323-L324, scale is float (32-bit) and zero_point is long (64-bit); we should make sure both of them are 32-bit.
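A quick way to observe the mismatch (a sketch, assuming the default MinMaxObserver goes through this code path; exact dtypes may differ for other observers):
```python
import torch
from torch.ao.quantization.observer import MinMaxObserver

obs = MinMaxObserver()
obs(torch.randn(16))                      # record min/max statistics
scale, zero_point = obs.calculate_qparams()
print(scale.dtype, zero_point.dtype)      # currently torch.float32 and torch.int64
```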
### Versions
master
cc @ezyang @gchanan @zou3519 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 0 |
1,717 | 106,427 |
`torch.nn.utils.clip_grad_norm_()` causes H2D sync with foreach ops.
|
module: nn, triaged, module: mta
|
`clip_grad_norm_()` has an optional `foreach` flag ([here](https://github.com/pytorch/pytorch/blob/ef0576f203bff957d7be976840244f45ff510d2c/torch/nn/utils/clip_grad.py#L14)), that specifies that a foreach kernel should be used.
Unfortunately, that foreach kernel (called [here](https://github.com/pytorch/pytorch/blob/ef0576f203bff957d7be976840244f45ff510d2c/torch/nn/utils/clip_grad.py#L76)) performs a H2D sync:
(1) `clip_coef_clamped` is a scalar-tensor
(2) `aten._foreach_mul_` has a `Scalar` overload and a `TensorList` overload, but not a `Tensor` overload.
(3) `torch._foreach_mul_` coerces the scalar-tensor into a Python scalar to call the Scalar overload, resulting in an `item()` call (causing a H2D sync).
I had a tentative patch up here to try to force `clip_grad_norm_()` to use the existing `_foreach_mul_.List` overload. It avoids the H2D sync, but ends up dispatching several smaller kernels, which is still not ideal for performance. Instead, we should add a dedicated `_foreach_mul_.Tensor` overload.
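A minimal sketch of a call that hits this path (assuming a CUDA device; the sync happens inside `clip_grad_norm_`, not in user code):
```python
import torch

model = torch.nn.Linear(8, 8, device="cuda")
model(torch.randn(4, 8, device="cuda")).sum().backward()

# The final `grad *= clip_coef_clamped` step goes through the Scalar overload
# of _foreach_mul_, which calls .item() on the clamped coefficient tensor and
# forces a host<->device sync.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0, foreach=True)
```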
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @crcrpar @mcarilli
| 15 |
1,718 | 106,409 |
[WIP][Not for landing now] TP benchmark for perf
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106409
* #107181
* #106524
| 1 |
1,719 | 106,408 |
PyTorchMPS not showing up in Instruments for `torch.mps.profiler`
|
triaged, module: mps
|
### 🐛 Describe the bug
Following these instructions: https://developer.apple.com/videos/play/wwdc2023/10050/?time=355, I'm not seeing the `PyTorchMPS` intervals showing up when trying to profile SDXL. I can see `events` in the `com.apple.Metal.AGXSignposts` but I would expect to see PyTorchMPS here, with intervals breaking down each layer's latency.
<img width="618" alt="image" src="https://github.com/pytorch/pytorch/assets/1981179/6de491b4-5cbb-46eb-b614-96a7419cc85f">
Here's the code I'm using to profile:
```
import os
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.to("mps")
prompt = "test prompt"
negative_prompt = "test negative prompt"
torch.mps.profiler.start(mode='interval', wait_until_completed=True)
print(os.getpid())
image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=1).images[0]
torch.mps.profiler.stop()
```
This may be related to the macOS 14 beta? I'm calling this from a notebook being run via VSCode, but I tested just running it from a standalone Python file via iTerm2 and saw a similar issue.
### Versions
PyTorch version: 2.1.0.dev20230801
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.10.12 (main, Jun 20 2023, 19:43:52) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==1.7.7
[pip3] torch==2.1.0.dev20230801
[pip3] torchaudio==2.1.0.dev20230801
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.16.0.dev20230801
[conda] numpy 1.24.3 py311hd0e4f0d_0
[conda] numpy-base 1.24.3 py311h947b413_0
[conda] numpydoc 1.5.0 py311hecd8cb5_0
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 8 |
1,720 | 106,393 |
[Pytorch][Vulkan] aten::gt
|
fb-exported, Stale, module: vulkan, release notes: vulkan, ciflow/periodic
|
Summary:
Add support for Vulkan [greater than](https://pytorch.org/docs/stable/generated/torch.gt.html) and its variants.
Note, gt is broadcastable one-way:
```
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
```
We also check that scalar types match. For non-in-place variants, the output is a bool tensor; for in-place variants, it is an int tensor (same as the input).
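For reference, a CPU illustration of the dtype behavior described above (a sketch, independent of the Vulkan backend and not part of this diff):
```python
import torch

a = torch.tensor([1, 5, 3])
b = torch.tensor([2, 2, 3])
print(torch.gt(a, b))   # tensor([False,  True, False]) -> bool output
print(a.gt_(b))         # in-place variant keeps the input dtype: tensor([0, 1, 0])
```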
Test Plan:
New tests:
```
lfq@lfq-mbp fbsource % buck run --target-platforms ovr_config//platform/macos:arm64-fbsource //xplat/caffe2:pt_vulkan_api_test_binAppleMac\#macosx-arm64 -c pt.vulkan_full_precision=1 -- --gtest_filter="*gt*"
Building: finished in 0.2 sec (100%) 265/265 jobs, 0/265 updated
Total time: 0.2 sec
BUILD SUCCEEDED
Running main() from xplat/third-party/gmock/googletest-1.12.1/googletest/src/gtest_main.cc
Note: Google Test filter = *gt*
[==========] Running 7 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 7 tests from VulkanAPITest
[ RUN ] VulkanAPITest.gt
[ OK ] VulkanAPITest.gt (67 ms)
[ RUN ] VulkanAPITest.gt_broadcast
[ OK ] VulkanAPITest.gt_broadcast (1 ms)
[ RUN ] VulkanAPITest.gt_broadcast_invalid
[ OK ] VulkanAPITest.gt_broadcast_invalid (0 ms)
[ RUN ] VulkanAPITest.gt_
[ OK ] VulkanAPITest.gt_ (3 ms)
[ RUN ] VulkanAPITest.gt_broadcast_
[ OK ] VulkanAPITest.gt_broadcast_ (0 ms)
[ RUN ] VulkanAPITest.gt_scalar
[ OK ] VulkanAPITest.gt_scalar (2 ms)
[ RUN ] VulkanAPITest.gt_scalar_
[ OK ] VulkanAPITest.gt_scalar_ (1 ms)
[----------] 7 tests from VulkanAPITest (77 ms total)
[----------] Global test environment tear-down
[==========] 7 tests from 1 test suite ran. (77 ms total)
[ PASSED ] 7 tests
```
All tests: https://www.internalfb.com/phabricator/paste/view/P797564233
```
xplat/caffe2/aten/src/ATen/test/vulkan_api_test.cpp:6751: Skipped
QueryPool is not available
[ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log (0 ms)
[----------] 324 tests from VulkanAPITest (6024 ms total)
[----------] Global test environment tear-down
[==========] 324 tests from 1 test suite ran. (6024 ms total)
[ PASSED ] 323 tests.
[ SKIPPED ] 1 test, listed below:
[ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log
```
clang-format on `BinaryOp.cpp`
Reviewed By: SS-JIA
Differential Revision: D47767546
| 5 |
1,721 | 106,389 |
[torch.compile] autograd.Function with multiple return values
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
```
import torch
import torch._dynamo
class Foo(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.sin(), x.sin()

    @staticmethod
    def backward(ctx, grad0, grad1):
        x, = ctx.saved_tensors
        return grad0 * x.cos() + grad1 * x.cos()

@torch.compile(backend='aot_eager', fullgraph=True)
def f(x):
    return Foo.apply(x)
x = torch.randn([], requires_grad=True)
f(x)
```
produces:
```
le.call_function(self, tx, args, kwargs)
771 return None
773 p_args = (
774 *(arg.as_proxy() for arg in args),
775 *(arg for arg in body_lifted_freevars.keys()),
776 )
--> 777 r = body_r.as_proxy().node.meta["example_value"]
778 example_value = r
780 _, p_kwargs = proxy_args_kwargs([], kwargs)
InternalTorchDynamoError: 'tuple' object has no attribute 'node'
```
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 4 |
1,722 | 106,388 |
Better export story for autograd.Function?
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
A trampoline appears in the exported graph of autograd.Function:
```py
import torch
import torch._dynamo
class Foo(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.sin()

    @staticmethod
    def backward(ctx, grad):
        x, = ctx.saved_tensors
        return grad * x.cos()

def f(x):
    return Foo.apply(x)
x = torch.randn([], requires_grad=True)
gm, *_ = torch._dynamo.export(f, x)
print(gm.code)
```
returns:
```
def forward(self, x):
arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
function_ctx = torch.autograd.function.FunctionCtx()
trampoline_autograd_apply = torch__dynamo_variables_misc_trampoline_autograd_apply(arg0); arg0 = None
return pytree.tree_unflatten([trampoline_autograd_apply], self._out_spec)
```
We probably want to turn autograd.Function into a legit HigherOrderOp at some point.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,723 | 106,387 |
[torch.compile] autograd.Function where we assign a Tensor directly to ctx
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
```py
import torch
class Foo(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.x = x
        return x.sin()

    @staticmethod
    def backward(ctx, grad):
        return grad * ctx.x.cos()

@torch.compile(backend='aot_eager')
def f(x):
    return Foo.apply(x)
x = torch.randn([], requires_grad=True)
f(x)
```
produces:
```
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] WON'T CONVERT f <ipython-input-1-1dbbf3a0bb5e> line 12
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] due to:
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] Traceback (most recent call last):
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] File "/raid/rzou/pt/debug-cpu/torch/_dynamo/output_gr
aph.py", line 1302, in lift_tracked_freevar_to_input
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] assert (
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] AssertionError: lift_tracked_freevar_to_input should no
t be called on root SubgraphTracer
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING]
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] from user code:
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] File "/raid/rzou/pt/debug-cpu/torch/_dynamo/variable
s/misc.py", line 241, in trampoline_autograd_bwd
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] return fn_cls.backward(*args, **kwargs)
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] File "<ipython-input-1-1dbbf3a0bb5e>", line 10, in ba
ckward
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] return grad * ctx.x.cos()
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING]
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for
more information
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING]
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING]
[2023-08-01 08:54:23,054] torch._dynamo.convert_frame: [WARNING] converting frame raised error, suppressing error
```
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 0 |
1,724 | 106,382 |
Installing torchvision for CPU leads to unwanted upgrade of torch + pip would not install nightly as it considers the release to be the latest (?)
|
oncall: binaries, triaged
|
### 🐛 Describe the bug
I have a torch CPU install, '2.0.0+cpu' (laptop without CUDA/GPU; no torchvision installed). Running the official `python3 -m pip install torchvision --index-url https://download.pytorch.org/whl/cpu` command downloads `torchvision-0.15.2%2Bcpu-cp38-cp38-linux_x86_64.whl` and then leads to an undesired large upgrade of the torch core to `torch-2.0.1%2Bcpu-cp38-cp38-linux_x86_64.whl`. Is this expected? Can it automatically install the matching version of torchvision? Or should one manually specify the right version from https://pytorch.org/get-started/previous-versions/?
Also, `torchvision` seems to have `requests` as a hard dependency, which is not ideal, as the `requests` module can sometimes be broken (because of broken certificate installs, etc.). I can understand that this module may be required when downloading a dataset or model, but not for a plain `import torchvision.transforms.functional as TF`.
### Versions
2.0.0+cpu
cc @seemethere @malfet
| 4 |
1,725 | 106,378 |
[dynamo] can't compile if tensor subclass implements __torch_function__ using super()
|
triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
I can't compile operations on a simple subclass of `torch.Tensor` if it implements `__torch_function__` like so:
```python
# fails
import torch
class MyTensor(torch.Tensor):
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
return super().__torch_function__(
func,
(torch.Tensor for _ in types),
tuple(torch.Tensor() for _ in args),
kwargs,
)
def add(a, b):
return torch.add(a, b)
t = MyTensor()
add(t, t) # works
torch.compile(add, backend="eager")(t, t) # RuntimeError
```
However, if I explicitly replace `super()` with `torch.Tensor` it works:
```python
# works
import torch
class MyTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        return torch.Tensor.__torch_function__(
            func,
            (torch.Tensor for _ in types),
            tuple(torch.Tensor() for _ in args),
            kwargs,
        )

def add(a, b):
    return torch.add(a, b)
t = MyTensor()
add(t, t) # works
torch.compile(add, backend="eager")(t, t) # works
```
This seems like a bug to me.
Or am I misunderstanding the usage of `super()` in this context?
Thanks!
### Error logs
```
/home/johannes/Documents/jina/docarrayv2/venv/bin/python /home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_185.py
Traceback (most recent call last):
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/variables/constant.py", line 62, in unpack_var_sequence
return [ConstantVariable(x, **options) for x in self.as_python_constant()]
TypeError: 'NoneType' object is not iterable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1726, in run
super().run()
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 576, in run
and self.step()
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 540, in step
getattr(self, inst.opname)(inst)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 342, in wrapper
return inner_fn(self, inst)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 965, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 474, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/variables/misc.py", line 702, in call_function
new_kwargs = args[3].items
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/variables/constant.py", line 43, in items
return self.unpack_var_sequence(tx=None)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/variables/constant.py", line 64, in unpack_var_sequence
raise NotImplementedError from e
NotImplementedError:
from user code:
File "/home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_185.py", line 8, in __torch_function__
return super().__torch_function__(
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_185.py", line 22, in <module>
torch.compile(add, backend="eager")(t, t) # RuntimeError
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/home/johannes/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_185.py", line 17, in add
return torch.add(a, b)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/johannes/Documents/jina/docarrayv2/venv/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
raise InternalTorchDynamoError() from e
torch._dynamo.exc.InternalTorchDynamoError
Process finished with exit code 1
```
### Minified repro
running `env TORCHDYNAMO_REPRO_AFTER="dynamo" python /path/to/my/file.py` gives the exact same error output, and is not generating any code. Maybe I am doing something wrong?
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~20.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 140
Model name: 11th Gen Intel(R) Core(TM) i7-1195G7 @ 2.90GHz
Stepping: 2
CPU MHz: 2789.703
CPU max MHz: 5000,0000
CPU min MHz: 400,0000
BogoMIPS: 5836.80
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 128 KiB
L2 cache: 5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.3.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.4
[pip3] torch==2.0.1+cpu
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 5 |
1,726 | 106,377 |
Command to reproduce error is incorrect
|
good first issue, module: tests, triaged, module: infra, module: testing
|
### 🐛 Describe the bug
```
To execute this test, run the following from the base repo dir: TEST_WITH_TORCHINDUCTOR=1 python test/test_torch.py -k test_nondeterministic_alert_NLLLoss_cuda
```
But I had to run it with
```
PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_torch.py -k test_nondeterministic_alert_NLLLoss_cuda
```
Ref CI failure: https://github.com/pytorch/pytorch/actions/runs/5678841216/job/15391208767?pr=106118
Ref PR: https://github.com/pytorch/pytorch/pull/106118
### Versions
master
cc @mruberry
| 2 |
1,727 | 106,375 |
nll_loss reference shouldn't be registered as a decomposition.
|
triaged, module: primTorch, module: decompositions
|
### 🐛 Describe the bug
`nll_loss` is a CompositeImplicit Op
https://github.com/pytorch/pytorch/blob/16df54239fecb97afbda30026931bbcac09542c4/aten/src/ATen/native/native_functions.yaml#L11043-L11046
But it is also registered as a decomposition:
https://github.com/pytorch/pytorch/blob/16df54239fecb97afbda30026931bbcac09542c4/torch/_refs/nn/functional/__init__.py#L734-L740
So `nll_loss` shouldn't be registered as a decomposition, and the reference should mirror the actual implementation by calling into other ops.
Also, note that this is a ref for `torch.nn.functional.nll_loss` which calls into `nll_loss_nd`.
cc: @lezcano
### Versions
master
cc @ezyang @mruberry @Lezcano @peterbell10 @SherlockNoMad
| 0 |
1,728 | 106,373 |
[ONNX] scatter_reduce does not support `include_self=False`
|
module: onnx, triaged
|
### 🐛 Describe the bug
The latest version of PyTorch [2023-08-01] supports the aten::scatter_reduce operator when exporting a model to ONNX, but the export fails with:
raise errors.OnnxExporterError(
"ONNX does not support include_self=False for scatter_reduce"
)
How can this error be fixed?
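One possible workaround sketch until the exporter supports it, assuming the destination tensor is freshly initialized: pre-fill it with the reduction identity (0 for "sum") so that `include_self=True` produces the same result and exports cleanly:
```python
import torch

src = torch.tensor([1., 2., 3., 4.])
index = torch.tensor([0, 0, 1, 1])

# Equivalent to scatter_reduce(..., include_self=False) on a fresh output,
# because the pre-filled identity contributes nothing to the reduction.
out = torch.zeros(2)
out = out.scatter_reduce(0, index, src, reduce="sum", include_self=True)
print(out)  # tensor([3., 7.])
```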
### Versions
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0.dev20230731+cu118
[pip3] torchaudio==2.1.0.dev20230731+cu118
[pip3] torchvision==0.16.0.dev20230731+cu118
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.1.0.dev20230731+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230731+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230731+cu118 pypi_0 pypi
| 1 |
1,729 | 106,370 |
Link compiled protobuf files to `protobuf::libprotobuf`
|
triaged, open source, Stale
|
This correctly propagates the required compile flags (such as `-DPROTOBUF_USE_DLLS` for shared builds of protobuf).
The manual extraction of the include path and adding that globally is also no longer required as that is part of that targets interface.
Fixes #106297
Fixes #106206
| 4 |
1,730 | 106,362 |
Calling the ops.aten.embedding_bag() function causes a silent crash
|
module: crash, module: nn, triaged
|
### 🐛 Describe the bug
```
def test2():
    import torch as t
    weight = t.tensor([
        [ 8.4520, -1.2693, -0.0696, -2.0721, -7.5149],
        [ 4.3191, -8.9345, 5.5872, 6.7340, 8.5114],
        [-2.1229, -7.3948, 2.0235, 4.9718, -8.9578],
        [-2.0429, -5.3951, -0.7872, -4.4299, -3.6789],
        [-2.8571, -8.5527, 7.3846, 7.5450, -1.4118],
        [-1.0249, -3.6731, -8.1276, -8.7583, 3.3449],
        [-4.9414, -5.7859, -0.7022, -2.9971, -2.9117],
        [ 0.2892, -1.9090, -3.0988, -4.3093, -7.3244],
        [ 7.5466, -3.6017, 2.3848, -3.1227, 0.7314],
        [ 8.3907, 4.1465, -7.7994, 3.5721, 8.5432]
    ])
    indices = t.tensor([
        [1, 6, 3, 6, 2],
        [5, 8, 4, 2, 8],
        [3, 0, 5, 2, 4],
        [8, 4, 3, 1, 2],
        [0, 3, 4, 2, 2]
    ])
    offsets = t.tensor([0, 2])
    mode = 2
    r = t.ops.aten.embedding_bag(
        weight, indices, offsets, mode=mode)
    print(r)
test2()
```
### Versions
2.0.1+CPU
cc @ezyang @gchanan @zou3519 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 6 |
1,731 | 106,361 |
DISABLED test_ddp_apply_optim_in_backward_ignored_params (__main__.TestDistBackendWithSpawn)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ddp_apply_optim_in_backward_ignored_params&suite=TestDistBackendWithSpawn) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15502591692).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ddp_apply_optim_in_backward_ignored_params`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_distributed_spawn.py`
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 5 |
1,732 | 106,360 |
Improving save_on_cpu's performance by overlapping memory transfers with compute
|
oncall: distributed, module: autograd, triaged, enhancement
|
### 🚀 The feature, motivation and pitch
The current implementation invokes h2d/d2h on the same stream as compute, essentially blocking the GPU from doing other computation. A straightforward approach to improve the performance is to call h2d/d2h on a different stream, allowing GPU's DMA engine to work concurrently with other computation.
I got a prototype working, but my limited knowledge of PyTorch internals prevented me from making it perfect. So I would like to get some help and suggestions, and hopefully get this merged into PyTorch. My code is shown below and the points for discussion are written in the comments.
```python3
from torch.autograd.graph import saved_tensors_hooks
import torch
from typing import Tuple
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import ActivationWrapper
from torch.distributed.fsdp._limiter_utils import _FreeEventQueue
class faster_save_on_cpu(saved_tensors_hooks):
copy = None
queue = _FreeEventQueue()
def __init__(self):
if faster_save_on_cpu.copy is None:
# make sure we only create the h2d/d2h stream once
faster_save_on_cpu.copy = torch.cuda.Stream()
def pack_to_cpu(tensor: torch.Tensor):
# Here, we may redundantly save full weights tensors to cpu.
# For example, by default, a linear layer will save its input, weight
# and bias for backward. However, if we already have FSDP, we probably
# don't want save the **unsharded** weight and bias to CPU and want to
# leave them on GPU to reduce memory transfers. Is there a way to skip
# saving weight tensors?
assert not tensor.is_sparse # not yet supported
if not tensor.is_cuda:
return None, tensor
self.limit_prefetch()
faster_save_on_cpu.copy.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(faster_save_on_cpu.copy):
cpu_tensor = tensor.to("cpu", non_blocking=True)
self.record_event()
tensor.record_stream(faster_save_on_cpu.copy)
return (tensor.device, cpu_tensor)
def unpack_from_cpu(packed: Tuple[torch.device, torch.Tensor]):
device, cpu_tensor = packed
if device is None:
return cpu_tensor
self.limit_prefetch()
with torch.cuda.stream(faster_save_on_cpu.copy):
# here, we must allocate tensor on the copy stream to allow
# copy to start early. If pytorch can provide a method to
# allocate an empty block **immediately** rather than reusing a
# previous block on the compute stream, the problem below can be solved.
tensor = cpu_tensor.to(device, non_blocking=True)
self.record_event()
# subsequent gradient computation need to wait for h2d to complete
torch.cuda.current_stream().wait_stream(faster_save_on_cpu.copy)
# Now we have a problem. Ideally, we want to avoid this memcpyD2D to save time and memory,
# but the tensor is allocated on a different stream as opposed to the compute stream, so
# to make this work, we need to call .record_stream after this tensor's lifetime ends
# during backward. However, autogard seems like a blackbox to me, making it not
# practical to insert a .record_stream(). As a workaround, we can clone this tensor.
# I've tried hacking the destructor of out_tensor by subclassing and overriding Tensor's __del__,
# but this introduced significant python overhead due to all subsequent tensors having to call
# .record_stream().
out_tensor = tensor.clone()
tensor.record_stream(torch.cuda.current_stream())
return out_tensor
super().__init__(pack_to_cpu, unpack_from_cpu)
def limit_prefetch(self):
# limit the number of h2d/d2h submitted to reduce peak memory usage, like FSDP's limit_all_gathers option
prev_ev = self.queue.dequeue_if_needed()
if prev_ev:
prev_ev.synchronize()
def record_event(self):
event = torch.cuda.Event()
event.record(faster_save_on_cpu.copy)
self.queue.enqueue(event)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @ezyang @albanD @zou3519 @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 8 |
1,733 | 106,359 |
backwards compatibility about class _LRScheduler
|
triaged, module: LrScheduler
|
### 🐛 Describe the bug
In older versions of PyTorch, StepLR was a subclass of _LRScheduler, but in the latest version it is not. Therefore, the following code, which previously worked, no longer behaves as expected.
```python
import torch
from torch.optim.lr_scheduler import StepLR

scheduler = StepLR(opt, step_size=100, gamma=0.1)
if isinstance(scheduler, torch.optim.lr_scheduler._LRScheduler):
    ...  # do something
else:
    raise RuntimeError("unsupported scheduler")
```
The related backward-compatibility code in the latest version of PyTorch:
```python
# Including _LRScheduler for backwards compatibility
# Subclass instead of assign because we want __name__ of _LRScheduler to be _LRScheduler (assigning would make it LRScheduler).
class _LRScheduler(LRScheduler):
pass
```
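A version-agnostic sketch that keeps such checks working on both old and new releases (the `LRSchedulerBase` alias is my own):
```python
import torch
from torch.optim.lr_scheduler import StepLR

# Prefer the new public base class when it exists, fall back to the old
# private name on older PyTorch releases.
LRSchedulerBase = getattr(
    torch.optim.lr_scheduler, "LRScheduler",
    torch.optim.lr_scheduler._LRScheduler,
)

opt = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=0.1)
scheduler = StepLR(opt, step_size=100, gamma=0.1)
print(isinstance(scheduler, LRSchedulerBase))  # True on both old and new versions
```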
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 43008K
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[pip3] warmup-scheduler-pytorch==0.1.2
| 0 |
1,734 | 106,344 |
[PyTorch][export] type check utils for Basic Python types
|
fb-exported, Stale, module: export
|
Differential Revision: D47935910
| 5 |
1,735 | 106,342 |
[CoDev Test] Pay no attention to this, just a noisy pr for testing ghimport
|
Stale, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106342
| 2 |
1,736 | 106,341 |
Transformer.generate_square_subsequent_mask has nan values on MPS device
|
triaged, module: NaNs and Infs, module: mps
|
### π Describe the bug
Square subsequent mask has nan values in place of zeroes when created on MPS device. To reproduce:
```
import torch.nn as nn
print(nn.modules.transformer.Transformer.generate_square_subsequent_mask(4, device='mps'))
```
prints:
```
tensor([[nan, -inf, -inf, -inf],
[nan, nan, -inf, -inf],
[nan, nan, nan, -inf],
[nan, nan, nan, nan]], device='mps:0')
```
However, the correct tensor is printed when generated on CPU, `nn.modules.transformer.Transformer.generate_square_subsequent_mask(4, device='cpu')` prints:
```
tensor([[0., -inf, -inf, -inf],
[0., 0., -inf, -inf],
[0., 0., 0., -inf],
[0., 0., 0., 0.]])
```
After looking into the `Transformer.generate_square_subsequent_mask` function, the problem appears to come from `torch.triu` when called with `diagonal=1`, since `torch.full` works as expected on MPS device.
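A minimal check of the suspected culprit (just a sketch; it assumes an MPS-capable machine):
```python
import torch

full = torch.full((4, 4), float("-inf"), device="mps")
# Expected: -inf strictly above the diagonal, zeros on and below it.
# Observed on MPS: nan in place of the zeros.
print(torch.triu(full, diagonal=1))
```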
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.17 (main, Jul 5 2023, 15:35:09) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[conda] numpy 1.25.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
1,737 | 106,339 |
ReduceLROnPlateau increases learning rate exponentially, causing training to diverge
|
triaged, module: LrScheduler
|
### π Describe the bug
I've been training some Segformer variants from HuggingFace via Lightning using the following optimizer setup.
```python
def configure_optimizers(self):
from torch.optim.lr_scheduler import ReduceLROnPlateau
optimizer = torch.optim.AdamW(self.parameters(), lr=self.hparams.learning_rate)
return {
"optimizer": optimizer,
"lr_scheduler": {
"scheduler": ReduceLROnPlateau(
optimizer,
mode="min",
verbose=True,
patience=self.hparams.learning_rate_schedule_patience,
),
"monitor": "val/loss",
},
}
```
This has worked fine, apart from a recent run where the loss suddenly exploded. I'm not sure what the trigger was because the validation curve largely looked OK and at the point where the LR exploded, had even smoothed out quite nicely.
<img width="1411" alt="image" src="https://github.com/pytorch/pytorch/assets/3159591/932ab660-3459-461a-95a1-49aec76500f3">
Regardless of some instability in training, it looks as though ReduceLROnPlateau progressively ramped the learning rate until it settled around 1, and everything else tanked with it.
Any idea why this occurred?
Here's a run where things worked as expected:
<img width="1376" alt="image" src="https://github.com/pytorch/pytorch/assets/3159591/5b0e876e-c564-4fb6-b556-e89ae906ba60">
### Versions
```
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.13.4
Libc version: glibc-2.28
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
Nvidia driver version: 470.57.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
Stepping: 3
CPU MHz: 2000.172
BogoMIPS: 4000.34
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] pytorch-lightning==2.0.5
[pip3] segmentation-models-pytorch==0.3.0
[pip3] torch==1.12.1
[pip3] torch-tb-profiler==0.4.0
[pip3] torchaudio==0.12.1
[pip3] torchgeo==0.4.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310ha2c4b55_0 conda-forge
[conda] mkl_fft 1.3.1 py310h2b4bcf5_1 conda-forge
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.4 pypi_0 pypi
[conda] pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
[conda] pytorch-lightning 2.0.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] segmentation-models-pytorch 0.3.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py310_cu113 pytorch
[conda] torchgeo 0.4.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.13.1 py310_cu113 pytorch
```
| 0 |
1,738 | 106,338 |
Inefficient code generated - does not use 256b registers
|
triaged, oncall: pt2, module: cpu inductor
|
### π Describe the bug
Followup on a discussion in #106219. cc: @jgong5
The following code gets generated by torch.compile (all scalar code relying on GCC vectorization?):
```
#include "ci5uspp363v3ky6jkccllm3bxudy2fkdpqinkqhmpehfihejs7ko.h"
extern "C" void kernel(float* in_out_ptr0,
const float* in_ptr0,
const float* in_ptr1,
const float* in_ptr2,
const float* in_ptr3,
const float* in_ptr4,
const float* in_ptr5,
const float* in_ptr6,
const float* in_ptr7,
const float* in_ptr8,
float* out_ptr0,
float* out_ptr1,
float* out_ptr2,
float* out_ptr3,
float* out_ptr4,
float* out_ptr5)
{
auto out_ptr6 = in_out_ptr0;
{
{
float tmp_acc0 = 0;
float tmp_acc1 = 0;
float tmp_acc2 = 0;
for(long i0=static_cast<long>(0L); i0<static_cast<long>(40L); i0+=static_cast<long>(1L))
{
auto tmp0 = in_ptr0[static_cast<long>(i0)];
auto tmp1 = in_ptr1[static_cast<long>(i0)];
auto tmp4 = in_ptr2[static_cast<long>(i0)];
auto tmp5 = in_ptr3[static_cast<long>(i0)];
auto tmp7 = in_ptr4[static_cast<long>(i0)];
auto tmp2 = decltype(tmp0)(tmp0 * tmp1);
auto tmp6 = tmp4 + tmp5;
auto tmp8 = decltype(tmp6)(tmp6 * tmp7);
auto tmp9 = std::numeric_limits<float>::infinity();
auto tmp10 = tmp8 == tmp9;
auto tmp11 = -std::numeric_limits<float>::infinity();
auto tmp12 = tmp8 == tmp11;
auto tmp13 = std::isnan(tmp8);
auto tmp14 = static_cast<float>(0.0);
auto tmp15 = tmp13 ? tmp14 : tmp8;
auto tmp16 = static_cast<float>(-3.4028234663852886e+38);
auto tmp17 = tmp12 ? tmp16 : tmp15;
auto tmp18 = static_cast<float>(3.4028234663852886e+38);
auto tmp19 = tmp10 ? tmp18 : tmp17;
auto tmp20 = static_cast<float>(-10.0);
auto tmp21 = max_propagate_nan(tmp19, tmp20);
auto tmp22 = static_cast<float>(10.0);
auto tmp23 = min_propagate_nan(tmp21, tmp22);
auto tmp24 = decltype(tmp0)(tmp0 * tmp23);
auto tmp26 = decltype(tmp1)(tmp1 * tmp23);
tmp_acc0 = tmp_acc0 + tmp2;
tmp_acc1 = tmp_acc1 + tmp24;
tmp_acc2 = tmp_acc2 + tmp26;
}
auto tmp3 = tmp_acc0;
out_ptr0[static_cast<long>(0L)] = tmp3;
auto tmp25 = tmp_acc1;
out_ptr1[static_cast<long>(0L)] = tmp25;
auto tmp27 = tmp_acc2;
out_ptr2[static_cast<long>(0L)] = tmp27;
}
}
{
auto tmp0 = out_ptr0[static_cast<long>(0L)];
out_ptr3[static_cast<long>(0L)] = tmp0;
}
{
auto tmp0 = out_ptr1[static_cast<long>(0L)];
out_ptr4[static_cast<long>(0L)] = tmp0;
}
{
auto tmp0 = out_ptr2[static_cast<long>(0L)];
out_ptr5[static_cast<long>(0L)] = tmp0;
}
{
{
float tmp_acc0 = 0;
for(long i0=static_cast<long>(0L); i0<static_cast<long>(3L); i0+=static_cast<long>(1L))
{
auto tmp0 = in_ptr5[static_cast<long>(i0)];
auto tmp1 = in_ptr6[static_cast<long>(i0)];
auto tmp2 = decltype(tmp0)(tmp0 * tmp1);
tmp_acc0 = tmp_acc0 + tmp2;
}
auto tmp3 = tmp_acc0;
out_ptr6[static_cast<long>(0L)] = tmp3;
}
}
{
auto tmp0 = in_ptr7[static_cast<long>(0L)];
auto tmp1 = out_ptr6[static_cast<long>(0L)];
auto tmp3 = in_ptr8[static_cast<long>(0L)];
auto tmp2 = tmp0 + tmp1;
auto tmp4 = tmp2 + tmp3;
auto tmp5 = decltype(tmp4)(1) / (decltype(tmp4)(1) + std::exp(-tmp4));
in_out_ptr0[static_cast<long>(0L)] = tmp5;
}
}
```
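For context, a minimal way to reproduce and inspect this kind of generated C++ (the function below is only a stand-in for the model that produced the kernel above):
```python
import torch

def fn(a, b):
    # roughly mimics the fused multiply/clamp/sum/sigmoid pattern above
    return torch.sigmoid((a * b).sum() + ((a + b) * a).clamp(-10.0, 10.0).sum())

compiled = torch.compile(fn)
compiled(torch.randn(40), torch.randn(40))
# Running with TORCH_LOGS=output_code dumps the generated C++ kernel to the log.
```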
Host properties for test:
processor : xx
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
[...]
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
vmx flags : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs pml ept_mode_based_exec tsc_scaling
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs
[...]
### Error logs
Generated code uses only 128b xmm registers when built with GCC:
```
0000000000001650 <kernel>:
1650: 41 54 push %r12
1652: 66 0f ef ed pxor %xmm5,%xmm5
1656: 44 0f 28 0d c2 09 00 movaps 0x9c2(%rip),%xmm9 # 2020 <_fini+0x7b0>
165d: 00
165e: 31 c0 xor %eax,%eax
1660: 55 push %rbp
1661: 44 0f 28 05 c7 09 00 movaps 0x9c7(%rip),%xmm8 # 2030 <_fini+0x7c0>
1668: 00
1669: 0f 28 e5 movaps %xmm5,%xmm4
166c: 0f 28 dd movaps %xmm5,%xmm3
166f: 0f 28 3d ca 09 00 00 movaps 0x9ca(%rip),%xmm7 # 2040 <_fini+0x7d0>
1676: 0f 28 35 d3 09 00 00 movaps 0x9d3(%rip),%xmm6 # 2050 <_fini+0x7e0>
167d: 53 push %rbx
167e: 48 89 fb mov %rdi,%rbx
1681: 4c 8b 54 24 20 mov 0x20(%rsp),%r10
1686: 48 89 f7 mov %rsi,%rdi
1689: 4c 8b 64 24 40 mov 0x40(%rsp),%r12
168e: 48 89 d6 mov %rdx,%rsi
1691: 48 8b 6c 24 48 mov 0x48(%rsp),%rbp
1696: 48 89 ca mov %rcx,%rdx
1699: 4c 8b 5c 24 50 mov 0x50(%rsp),%r11
169e: 48 8b 4c 24 28 mov 0x28(%rsp),%rcx
16a3: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
16a8: 0f 10 14 06 movups (%rsi,%rax,1),%xmm2
16ac: 44 0f 10 14 07 movups (%rdi,%rax,1),%xmm10
16b1: 44 0f 28 ee movaps %xmm6,%xmm13
16b5: 0f 10 0c 02 movups (%rdx,%rax,1),%xmm1
16b9: 44 0f 59 d2 mulps %xmm2,%xmm10
16bd: 41 0f 10 14 00 movups (%r8,%rax,1),%xmm2
16c2: 0f 58 ca addps %xmm2,%xmm1
16c5: 41 0f 10 14 01 movups (%r9,%rax,1),%xmm2
16ca: 0f 59 ca mulps %xmm2,%xmm1
16cd: 41 0f 58 da addps %xmm10,%xmm3
16d1: 0f 28 d1 movaps %xmm1,%xmm2
16d4: 44 0f c2 e9 05 cmpnltps %xmm1,%xmm13
16d9: 0f 28 c1 movaps %xmm1,%xmm0
16dc: 0f c2 d1 07 cmpordps %xmm1,%xmm2
16e0: 41 0f c2 c1 05 cmpnltps %xmm9,%xmm0
16e5: 0f 54 d1 andps %xmm1,%xmm2
16e8: 0f 54 d0 andps %xmm0,%xmm2
16eb: 41 0f 55 c1 andnps %xmm9,%xmm0
16ef: 0f 56 c2 orps %xmm2,%xmm0
16f2: 41 0f 28 d0 movaps %xmm8,%xmm2
16f6: 44 0f 28 d8 movaps %xmm0,%xmm11
16fa: 0f c2 d0 01 cmpltps %xmm0,%xmm2
16fe: 44 0f 28 e0 movaps %xmm0,%xmm12
1702: 44 0f c2 d8 03 cmpunordps %xmm0,%xmm11
1707: 44 0f c2 e7 01 cmpltps %xmm7,%xmm12
170c: 66 41 0f eb d3 por %xmm11,%xmm2
1711: 66 45 0f eb e3 por %xmm11,%xmm12
1716: 66 44 0f 6f f2 movdqa %xmm2,%xmm14
171b: 66 45 0f 6f dc movdqa %xmm12,%xmm11
1720: 66 41 0f df d5 pandn %xmm13,%xmm2
1725: 66 45 0f db f5 pand %xmm13,%xmm14
172a: 66 45 0f db de pand %xmm14,%xmm11
172f: 66 45 0f df e6 pandn %xmm14,%xmm12
1734: 44 0f 28 f6 movaps %xmm6,%xmm14
1738: 44 0f c2 f1 01 cmpltps %xmm1,%xmm14
173d: 66 0f 6f ca movdqa %xmm2,%xmm1
1741: 41 0f 28 d0 movaps %xmm8,%xmm2
1745: 0f 54 d1 andps %xmm1,%xmm2
1748: 0f 55 c8 andnps %xmm0,%xmm1
174b: 41 0f 54 c3 andps %xmm11,%xmm0
174f: 0f 56 ca orps %xmm2,%xmm1
1752: 0f 10 14 06 movups (%rsi,%rax,1),%xmm2
1756: 66 45 0f eb e6 por %xmm14,%xmm12
175b: 44 0f 55 d9 andnps %xmm1,%xmm11
175f: 0f 28 cf movaps %xmm7,%xmm1
1762: 44 0f 56 d8 orps %xmm0,%xmm11
1766: 41 0f 28 c4 movaps %xmm12,%xmm0
176a: 41 0f 54 cc andps %xmm12,%xmm1
176e: 41 0f 55 c3 andnps %xmm11,%xmm0
1772: 0f 56 c1 orps %xmm1,%xmm0
1775: 0f 10 0c 07 movups (%rdi,%rax,1),%xmm1
1779: 48 83 c0 10 add $0x10,%rax
177d: 0f 59 c8 mulps %xmm0,%xmm1
1780: 0f 59 c2 mulps %xmm2,%xmm0
1783: 0f 58 e1 addps %xmm1,%xmm4
1786: 0f 58 e8 addps %xmm0,%xmm5
1789: 48 3d a0 00 00 00 cmp $0xa0,%rax
178f: 0f 85 13 ff ff ff jne 16a8 <kernel+0x58>
1795: 0f 28 c5 movaps %xmm5,%xmm0
1798: 48 8b 44 24 58 mov 0x58(%rsp),%rax
179d: 0f 12 c5 movhlps %xmm5,%xmm0
17a0: 0f 58 c5 addps %xmm5,%xmm0
17a3: 0f 28 c8 movaps %xmm0,%xmm1
17a6: 0f c6 c8 55 shufps $0x55,%xmm0,%xmm1
17aa: 0f 58 c8 addps %xmm0,%xmm1
17ad: 0f 28 c4 movaps %xmm4,%xmm0
17b0: 0f 12 c4 movhlps %xmm4,%xmm0
17b3: 0f 58 c4 addps %xmm4,%xmm0
17b6: 0f 28 d0 movaps %xmm0,%xmm2
17b9: 0f c6 d0 55 shufps $0x55,%xmm0,%xmm2
17bd: 0f 58 d0 addps %xmm0,%xmm2
17c0: 0f 28 c3 movaps %xmm3,%xmm0
17c3: 0f 12 c3 movhlps %xmm3,%xmm0
17c6: 0f 58 c3 addps %xmm3,%xmm0
17c9: 0f 28 d8 movaps %xmm0,%xmm3
17cc: 0f c6 d8 55 shufps $0x55,%xmm0,%xmm3
17d0: 0f 58 c3 addps %xmm3,%xmm0
17d3: f3 41 0f 11 04 24 movss %xmm0,(%r12)
17d9: f3 0f 11 55 00 movss %xmm2,0x0(%rbp)
17de: f3 41 0f 11 0b movss %xmm1,(%r11)
17e3: f3 41 0f 10 04 24 movss (%r12),%xmm0
17e9: f3 0f 11 00 movss %xmm0,(%rax)
17ed: 48 8b 44 24 60 mov 0x60(%rsp),%rax
17f2: f3 0f 10 45 00 movss 0x0(%rbp),%xmm0
17f7: f3 0f 11 00 movss %xmm0,(%rax)
17fb: 48 8b 44 24 68 mov 0x68(%rsp),%rax
1800: f3 41 0f 10 03 movss (%r11),%xmm0
1805: f3 0f 11 00 movss %xmm0,(%rax)
1809: f3 0f 7e 09 movq (%rcx),%xmm1
180d: f3 41 0f 7e 02 movq (%r10),%xmm0
1812: 48 8b 44 24 30 mov 0x30(%rsp),%rax
1817: 0f 59 c1 mulps %xmm1,%xmm0
181a: f3 41 0f 10 4a 08 movss 0x8(%r10),%xmm1
1820: f3 0f 59 49 08 mulss 0x8(%rcx),%xmm1
1825: 0f 28 d0 movaps %xmm0,%xmm2
1828: 0f c6 c0 e5 shufps $0xe5,%xmm0,%xmm0
182c: f3 0f 58 c2 addss %xmm2,%xmm0
1830: f3 0f 58 c8 addss %xmm0,%xmm1
1834: f3 0f 11 0b movss %xmm1,(%rbx)
1838: f3 0f 10 00 movss (%rax),%xmm0
183c: 48 8b 44 24 38 mov 0x38(%rsp),%rax
1841: f3 0f 58 00 addss (%rax),%xmm0
1845: f3 0f 58 c1 addss %xmm1,%xmm0
1849: 0f 57 05 10 08 00 00 xorps 0x810(%rip),%xmm0 # 2060 <_fini+0x7f0>
1850: e8 db f7 ff ff call 1030 <expf@plt>
1855: f3 0f 10 0d a7 07 00 movss 0x7a7(%rip),%xmm1 # 2004 <_fini+0x794>
185c: 00
185d: f3 0f 58 c1 addss %xmm1,%xmm0
1861: f3 0f 5e c8 divss %xmm0,%xmm1
1865: f3 0f 11 0b movss %xmm1,(%rbx)
1869: 5b pop %rbx
186a: 5d pop %rbp
186b: 41 5c pop %r12
186d: c3 ret
Disassembly of section .fini:
0000000000001870 <_fini>:
1870: f3 0f 1e fa endbr64
1874: 48 83 ec 08 sub $0x8,%rsp
1878: 48 83 c4 08 add $0x8,%rsp
187c: c3 ret
```
### Minified repro
n/a
### Versions
fbcode
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
1,739 | 106,330 |
add private API for generating all CompositeImplicit decomps from the dispatcher
|
Stale
|
What do people think of this API / location?
Part of the reason it's a bit ugly is that there are three different places a decomp can live: the C++ dispatcher, the Python dispatcher, and the Python global decomp table.
This should be useful to QAT (cc @andrewor14) - the idea is that they want to be able to use pre_dispatch tracing to trace out an ATen graph above the dispatcher, but they want an easy way to be able to say "specify a decomp table that includes every CompositeImplicitAutograd op that would have decomposed if we had traced through the dispatcher".
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #105240
* __->__ #106330
* #106329
* #105865
| 17 |
1,740 | 106,315 |
pre_dispatch tracing: fix for nn.MultiheadAttention
|
Stale
|
Fixes https://github.com/pytorch/pytorch/issues/106302, see the issue for details.
One minor risk in this PR is that: if you pre_dispatch trace `nn.MultiheadAttention`, **and** your inputs are some custom torch_function subclass, we'll incorrectly assume that the fast-path is okay to use. Given that pre_dispatch is only used underneath our `torch.export` and `torch.compile` subsystems, and it runs in a second pass **after** dynamo has captured the graph, then I think the risk here is pretty low - in both of those cases, dynamo should have already desugared away any torch_function subclasses/modes before we get to our second round of tracing to run with pre_dispatch.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106315
* #106461
* #106460
| 6 |
1,741 | 106,310 |
Don't set CUDA_HOME when not compiled with CUDA support
|
open source, Stale, ciflow/trunk, release notes: cpp, topic: bug fixes
|
It doesn't make sense to set this (on import!) since CUDA cannot be used with PyTorch in this case; it only leads to messages like
> No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
when CUDA happens to be installed, which is at least confusing.
| 8 |
1,742 | 106,308 |
DISABLED test_cuda_assert_should_not_stop_common_distributed_test_suite_cuda (__main__.TestTestingCUDA)
|
oncall: distributed, module: ci, triaged, skipped
|
## Reason
PR https://github.com/pytorch/pytorch/pull/105206 broke one of the slow tests: "test_testing.py::TestTestingCUDA::test_cuda_assert_should_not_stop_common_distributed_test_suite_cuda".
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @seemethere @malfet @pytorch/pytorch-dev-infra
| 1 |
1,743 | 106,302 |
`torch.nn.modules.MultiheadAttention` yields different graph under pre_dispatch tracing
|
triaged, module: __torch_function__, module: export, pre_dispatch tracing
|
**The problem**
Repro posted at the bottom of the issue. Why does `pre_dispatch` tracing yield a different graph? Answer:
(1) `pre_dispatch` tracing uses `PreDispatchTorchFunctionMode` so that it can properly intercept and capture autograd and autocast API's during tracing, like no_grad/enable_grad ([here](https://github.com/pytorch/pytorch/blob/0af3203c728278920f6e64b22f63181c6f88628a/torch/fx/experimental/proxy_tensor.py#L520))
(2) `MultiheadAttention` has a series of checks to decide whether or not it can use a fast-path kernel during inference. One of them: it will ignore the fast path if any `TorchFunctionMode` is active ([here](https://github.com/pytorch/pytorch/blob/0af3203c728278920f6e64b22f63181c6f88628a/torch/nn/modules/activation.py#L1166))
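A small sketch of that interaction (the mode here is a made-up no-op, but any active TorchFunctionMode should trip the same check):
```python
import torch
from torch.overrides import TorchFunctionMode, has_torch_function

class NoopMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        return func(*args, **(kwargs or {}))

x = torch.randn(2, 2)
print(has_torch_function((x,)))  # False: plain tensors, fast path allowed
with NoopMode():
    # True while any mode is active, which is what makes MultiheadAttention
    # skip the _native_multi_head_attention fast path during pre_dispatch tracing.
    print(has_torch_function((x,)))
```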
repro:
```
import torch
from torch.nn.modules import MultiheadAttention
from torch.fx.experimental.proxy_tensor import make_fx
m = MultiheadAttention(4, 4, batch_first=True)
m.eval()
inp = torch.randn(4, 4, 4)
with torch.no_grad():
gm1 = make_fx(m, pre_dispatch=False)(inp, inp, inp)
# First graph has "torch.ops.aten._native_multi_head_attention.default" in the trace
print(gm1.code)
# Second graph does not
gm2 = make_fx(m, pre_dispatch=True)(inp, inp, inp)
print(gm2.code)
```
**Solutions**
The main question here is: "Is `MultiheadAttention` doing something unusual that we're okay with applying a one-off fix for"? User code can do all sorts of branching on whether or not its inputs have a `__torch_function__` or `__torch_dispatch__` enabled (or if there is a mode active), which can cause us to execute a different set of code when we trace compared to running eager mode.
(1) one-off fix for `MultiheadAttention`: we could augment the check [here](https://github.com/pytorch/pytorch/blob/0af3203c728278920f6e64b22f63181c6f88628a/torch/nn/modules/activation.py#L1165) to ignore the case when a pre_dispatch tracing TorchFunctionMode is active. This is actually a bit annoying, since we'd still probably like that check to return true if pre_dispatch is tracing **and** any other torch_function modes are active.
(2) Agree that "when users ask if torchfunction is enabled in some form, they're really asking about **user** torch_function code, and not our tracing infra - augment `has_torch_function(...)` to explicitly ignore the torchfunctionmode used by pre_dispatch tracing.
Open to suggestions - I don't have a strong opinion, although (2) is probably easier to implement.
cc @hameerabbasi @rgommers @peterbell10 @ezyang
| 4 |
1,744 | 106,298 |
Torch.onnx.export a fp16 model but get the output tensor fp32
|
module: onnx, triaged
|
### π Describe the bug
Hi there, I've just upgraded my torch from version 1.12.1 to 2.0.1.
I hit the problem mentioned in the title: in torch 2.0.1 the output tensor of the exported ONNX model becomes fp32 while the other tensors are fp16. However, all tensors/ops in the ONNX graph should be fp16, as they were when exported with 1.12.1.
The code is as follows:
```python
import torch

deploy = deploy.half()  # `deploy`, `dummy_a`, `dummy_b` come from the surrounding model setup
torch.onnx.export(
    deploy, opset_version=11, args=(dummy_a, dummy_b), f="export.onnx",
    input_names=["a", "b"], output_names=["c"], verbose=False
)
```
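For reference, a quick way to check the dtypes recorded for the graph outputs (a sketch; it assumes the `onnx` package is installed and the file name from the export call above):
```python
import onnx

model = onnx.load("export.onnx")
for out in model.graph.output:
    elem_type = out.type.tensor_type.elem_type
    # onnx.TensorProto.FLOAT16 == 10, onnx.TensorProto.FLOAT == 1
    print(out.name, elem_type, elem_type == onnx.TensorProto.FLOAT16)
```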
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3080
GPU 1: NVIDIA GeForce RTX 3080
GPU 2: NVIDIA GeForce RTX 3080
GPU 3: NVIDIA GeForce RTX 3080
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 28
On-line CPU(s) list: 0-27
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-10940X CPU @ 3.30GHz
Stepping: 7
CPU MHz: 4205.361
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6599.98
Virtualization: VT-x
L1d cache: 448 KiB
L1i cache: 448 KiB
L2 cache: 14 MiB
L3 cache: 19.3 MiB
NUMA node0 CPU(s): 0-27
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect
| 0 |
1,745 | 106,297 |
Can't build with non-static protobuf
|
module: build, triaged
|
### π Describe the bug
When building with SYSTEM protobuf (i.e. `-DBUILD_CUSTOM_PROTOBUF=OFF`) and a shared protobuf library the build will fail with link errors such as
```
ld: /tmp/pytorch-v1.13.1/build/lib/libtorch_cpu.so: undefined reference to `google::protobuf::internal::ThreadSafeArena::thread_cache_'
```
The reason is the same as in #106206: the generated protobuf files are not correctly linked against the `protobuf::libprotobuf` target, which would have defined `PROTOBUF_USE_DLLS`. This leads to the macro being defined in some translation units but not in the ones that expect it; see e.g. https://github.com/protocolbuffers/protobuf/blob/v23.0/src/google/protobuf/arena.cc#L522-L530
### Versions
1.13.1 - master
cc @malfet @seemethere
| 2 |
1,746 | 106,296 |
[xla hash update] update the pinned xla hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 4 |
1,747 | 106,294 |
torch DDP oom caused by weak protocol
|
oncall: distributed, module: memory usage, module: ddp
|
### π Describe the bug
The current torch DDP distributed training protocol is weak: it uses a simple TCP protocol, listening on the master port, to choose which action to execute. While this is efficient, it can cause problems when abnormal network traffic arrives.
We hit an OOM issue that made the machine quickly run out of CPU memory. We root-caused it to torch DDP mistaking an nmap scan message for an addHandler message and creating a very large mmapped pool to hold the incoming message.
Process to reproduce DDP OOM:
```
nmap -p[master port] -sS -sV [torch DDP master node IP]
```
The trigger trace is as:
```
https://github.com/pytorch/pytorch/blob/v2.0.1/torch/csrc/distributed/c10d/TCPStore.cpp#L588
-> https://github.com/pytorch/pytorch/blob/v2.0.1/torch/csrc/distributed/c10d/TCPStore.cpp#L281
-> https://github.com/pytorch/pytorch/blob/v2.0.1/torch/csrc/distributed/c10d/TCPStore.cpp#L384
-> https://github.com/pytorch/pytorch/blob/v2.0.1/torch/csrc/distributed/c10d/Utils.hpp#L659
inline std::string recvString(int socket) {
SizeType valueSize;
recvBytes<SizeType>(socket, &valueSize, 1);
std::vector<char> value(valueSize);
recvBytes<char>(socket, value.data(), value.size());
return std::string(value.data(), value.size());
}
```
In `recvString`, the nmap payload is parsed as a very large message size to receive (more than 1 TB), which leads torch to request 1 TB+ of memory from the system and results in the OOM.
It is very common to use nmap to scan ports in a data center, so should we consider making the torch DDP protocol more robust to this kind of "attack"? I think even adding a simple magic number to the handshake would greatly reduce the issue.
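A rough illustration of that guard, written in Python only for readability (the real change would live in the C++ TCPStore code; `MAGIC` and `MAX_LEN` are made-up values):
```python
import socket
import struct

MAGIC = 0x7C10D57   # hypothetical handshake constant
MAX_LEN = 1 << 20   # refuse absurd allocation requests

def recv_exact(conn: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_string(conn: socket.socket) -> str:
    (magic,) = struct.unpack("!I", recv_exact(conn, 4))
    if magic != MAGIC:
        raise ConnectionError("not a torch.distributed peer; dropping connection")
    (size,) = struct.unpack("!Q", recv_exact(conn, 8))
    if size > MAX_LEN:
        raise ConnectionError(f"refusing to allocate {size} bytes")
    return recv_exact(conn, size).decode()
```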
Thx
### Versions
Seems to me, since DDP is supported, this issue is existed.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 3 |
1,748 | 106,290 |
Added More Information About Adadelta Optimizer
|
triaged, open source, Stale, release notes: optimizer
|
I have added more information about the Adadelta optimizer so that developers can understand more quickly what it does.
My changed code looks like this:

| 3 |
1,749 | 106,287 |
Many tests in test/dynamo fail if run in the context of just 'pytest test/dynamo'
|
module: ci, triaged, module: dynamo
|
### π Describe the bug
```
FAILED [0.0085s] test/dynamo/test_misc.py::MiscTests::test_cond_nested - AssertionError: <class 'torch._dynamo.variables.torch.TorchVariable'>
FAILED [0.0073s] test/dynamo/test_recompile_ux.py::RecompileUxTests::test_drop_cache_on_skip - AssertionError: False is not true
FAILED [0.0126s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_fn_call_args - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0277s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_local_module - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0234s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_nonlocal_fn_call - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0202s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_nonlocal_module_class - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0238s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_nonlocal_module_fn_call - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0234s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_successful_inline - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0173s] test/dynamo/test_replay_record.py::ReplayRecordTests::test_unsuccessful_inline - AssertionError: no logs of level ERROR or higher triggered on torch._dynamo
FAILED [0.0941s] test/dynamo/test_repros.py::ReproTests::test_reformer_train - AssertionError: '3' != '1'
```
Many of these do not fail if you run them individually. Some of these tests have already been flagged as flaky by CI https://github.com/pytorch/pytorch/issues/101731
I already filed the ReplayRecordTests five days ago https://github.com/pytorch/pytorch/issues/105944
### Versions
main
cc @seemethere @malfet @pytorch/pytorch-dev-infra @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
1,750 | 106,271 |
RuntimeError: GlooDeviceFactory::makeUVDevice(): interface or hostname can't be empty
|
oncall: distributed
|
### π Describe the bug
File "C:\Users\me\Desktop\rvc5\RVC-beta0717\runtime\lib\site-packages\torch\distributed\distributed_c10d.py", line 994, in _new_process_group_helper
backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)
RuntimeError: GlooDeviceFactory::makeUVDevice(): interface or hostname can't be empty
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 536.67
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2496
DeviceID=CPU0
Family=198
L2CacheSize=4096
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2496
Name=11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.1
[pip3] torchcrepe==0.0.20
[pip3] torchgen==0.0.1
[pip3] torchvision==0.15.2
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 9 |
1,751 | 106,269 |
Make our source attribution debug prints more useful for Compiler Explorer
|
triaged, oncall: pt2
|
### π Describe the bug
Just for fun, I decided to redo an experiment @msaroufim did a year ago, which was how could we integrate PT2 with Compiler Explorer. I had a few objectives:
1. Make @Chillee's "shape viewer"; that is to say, write some PyTorch code, and then get it annotated variable by variable saying what the sizes of all intermediates are (without having to, e.g., insert prints everywhere). Should work with symbolic sizes.
2. Make a nice frontend for https://gist.github.com/ezyang/192c1b5eb57ff95a46d3b50aa46e3193 ; put in some PyTorch code, annotate it with ranks, find valid input shapes for the function.
3. Take various IRs that torch.compile stack can produce and let you introspect it, the same way usual Godbolt works
I cannibalized the existing Python bytecode disassembler to call into torch.compiler (https://gist.github.com/ezyang/bb4bbb5b54eebbff1dc8fd7583c5d2bc), which was pretty easy. To set up line-number correspondences, I had to update processAsm to parse out line numbers and annotate lines of the produced output with them:
```
override async processAsm(result) {
const lineRe = /^\s{0,4}(\d+)(.*)/;
const bytecodeLines = result.asm.split('\n');
const bytecodeResult: ParsedAsmResultLine[] = [];
let lastLineNo: number | undefined;
let sourceLoc: AsmResultSource | null = null;
for (const line of bytecodeLines) {
const match = line.match(lineRe);
if (match) {
const lineno = parseInt(match[1]);
sourceLoc = {line: lineno, file: null};
lastLineNo = lineno;
} else if (line) {
sourceLoc = {line: lastLineNo, file: null};
} else {
sourceLoc = {line: undefined, file: null};
lastLineNo = undefined;
}
bytecodeResult.push({text: line, source: sourceLoc});
}
return {asm: bytecodeResult};
}
```
Our logs are not super well equipped for doing this.
Let's start with (3). To do this, I need two ingredients: I need to print out the IR, and the IR needs to be annotated with line numbers. None of our log outputs accessible by TORCH_LOGS do this.
TORCH_LOGS=graph_code (this is the closest, but the attribution is ... totally useless?)
```
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] TRACED GRAPH
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] ===== __compiled_fn_0 =====
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] def forward(self, L_args_0_ : torch.Tensor):
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] l_args_0_ = L_args_0_
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /Users/ezyang/miniconda3/lib/python3.8/site-packages/torch/_dynamo/external_utils.py:17, code: return fn(*args, **kwargs)
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] mul = l_args_0_ * 2; l_args_0_ = None
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /Users/ezyang/miniconda3/lib/python3.8/site-packages/torch/_dynamo/external_utils.py:17, code: return fn(*args, **kwargs)
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] add = mul + 1; mul = None
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG] return (add,)
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-07-29 22:57:03,698] torch._dynamo.output_graph.__graph_code: [DEBUG]
```
TORCH_LOGS=aot_graphs (same problem as graph_code)
```
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] TRACED GRAPH
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] ===== Forward graph 0 =====
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] <eval_with_key>.4 from /Users/ezyang/miniconda3/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py:477 in wrapped class <lambda>(torch.nn.Module):
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] def forward(self, arg0_1: f32[2]):
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] # File: /Users/ezyang/miniconda3/lib/python3.8/site-packages/torch/_dynamo/external_utils.py:17, code: return fn(*args, **kwargs)
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] mul: f32[2] = torch.ops.aten.mul.Tensor(arg0_1, 2); arg0_1 = None
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO]
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] # File: /Users/ezyang/miniconda3/lib/python3.8/site-packages/torch/_dynamo/external_utils.py:17, code: return fn(*args, **kwargs)
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] add: f32[2] = torch.ops.aten.add.Tensor(mul, 1); mul = None
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO] return (add,)
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO]
[2023-07-29 23:16:30,543] torch._functorch.aot_autograd.__aot_graphs: [INFO]
```
TORCH_LOGS=output_code
```
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] extern "C" void kernel(const float* in_ptr0,
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] float* out_ptr0)
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] {
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] {
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] #pragma GCC ivdep
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] for(long i0=static_cast<long>(0L); i0<static_cast<long>(2L); i0+=static_cast<long>(1L))
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] {
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] auto tmp0 = in_ptr0[static_cast<long>(i0)];
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] auto tmp1 = static_cast<float>(2.0);
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] auto tmp2 = decltype(tmp0)(tmp0 * tmp1);
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] auto tmp3 = static_cast<float>(1.0);
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] auto tmp4 = tmp2 + tmp3;
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] out_ptr0[static_cast<long>(i0)] = tmp4;
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] }
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] }
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] }
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] ''')
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG]
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG]
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] async_compile.wait(globals())
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] del async_compile
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG]
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] def call(args):
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] arg0_1, = args
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] args.clear()
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] assert_size_stride(arg0_1, (2, ), (1, ))
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] buf0 = empty_strided((2, ), (1, ), device='cpu', dtype=torch.float32)
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] cpp_fused_add_mul_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()))
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] del arg0_1
[2023-07-29 22:59:22,230] torch._inductor.graph.__output_code: [DEBUG] return (buf0, )
```
We do have a print for tensor sizes; there's no way to directly get at it as an artifact, but anyway it looks like this:
```
[2023-07-29 23:17:55,235] torch._dynamo.output_graph.__graph_sizes: [DEBUG] TRACED GRAPH TENSOR SIZES
[2023-07-29 23:17:55,235] torch._dynamo.output_graph.__graph_sizes: [DEBUG] ===== __compiled_fn_0 =====
[2023-07-29 23:17:55,235] torch._dynamo.output_graph.__graph_sizes: [DEBUG] l_args_0_: (2,)
[2023-07-29 23:17:55,235] torch._dynamo.output_graph.__graph_sizes: [DEBUG] mul: (2,)
[2023-07-29 23:17:55,235] torch._dynamo.output_graph.__graph_sizes: [DEBUG] add: (2,)
```
So what are the problems?
1. Source code attribution doesn't seem to work if you compile()/exec() some source code. I can manually work around this by using an importlib loader to load the module being executed (a sketch of this workaround is below), so this isn't a hard blocker
2. Many of the IR outputs don't print line numbers. They should, so that I can setup the correspondences easily (inductor code output, tensor sizes)
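A sketch of that workaround (module and file names are made up):
```python
import importlib.util
import tempfile

import torch

source = "import torch\n\ndef f(x):\n    return x * 2 + 1\n"

# Write the user's source to a real file so inspect/linecache can find it later.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
    tmp.write(source)
    path = tmp.name

spec = importlib.util.spec_from_file_location("user_module", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

compiled = torch.compile(mod.f)
compiled(torch.randn(2))  # frames now point at a real file, so attribution can work
```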
cc @msaroufim @wconstab @bdhirsh @anijain2305 @williamwen42 @voznesenskym
### Versions
main
| 2 |
1,752 | 106,265 |
RuntimeError: Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0 'grad_y'
|
module: autograd, triaged, actionable, module: mps
|
### π Describe the bug
##Code
Encoder:
```
import torch
import torch.nn as nn
import torch.optim as optim
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, batch_first=True,dropout = dropout,)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
outputs, (hidden, cell) = self.rnn(embedded)
return hidden, cell
```
Decoder:
```
import torch
import torch.nn as nn
import torch.optim as optim
'''The Decoder class does a single step of decoding, i.e. it ouputs single token per time-step'''
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, batch_first=True,dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
input = input.unsqueeze(1)
embedded = self.dropout(self.embedding(input))
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
prediction = self.fc_out(output.squeeze(1))
return prediction, hidden, cell
```
Seq2Seq
```
import torch
import torch.nn as nn
import torch.optim as optim
import random
from utils import PrintDebug
from loadData2 import VOCAB_TRANSFORM,SRC_LANGUAGE,TGT_LANGUAGE
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
batch_size = trg.shape[0]
trg_len = trg.shape[1]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(batch_size,trg_len, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[:,0]
for t in range(1, trg_len):
output, hidden, cell = self.decoder(input, hidden, cell)
outputs[:,t] = output
teacher_force = random.random() < teacher_forcing_ratio
top1 = torch.argmax(output,axis=1)
input = trg[:,t] if teacher_force else top1
return outputs
```
The issue only happens when running the code on MPS on a Mac M2. On CPU the code runs fine and no errors pop up; only with MPS as the device does this happen, at `loss.backward()`. I found another issue with the same title (https://github.com/pytorch/pytorch/issues/103430) and I see that a PR was made to fix it, though I believe a fix is still needed.
```
File "/Users/deepakrishi/Documents/Repo/pytorch/SequentialData/run.py", line 105, in <module>
train_loss = train(MODEL, TRAIN_DATALOADER, OPTIMIZER, CRITERION, CLIP)
File "/Users/deepakrishi/Documents/Repo/pytorch/SequentialData/run.py", line 52, in train
loss.backward()
File "/Users/deepakrishi/miniconda3/envs/pytorch/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/Users/deepakrishi/miniconda3/envs/pytorch/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0 'grad_y'
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (main, Apr 20 2023, 13:58:42) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.4
[pip3] torch==2.0.1
[pip3] torcharrow==0.1.0
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[conda] numpy 1.21.4 pypi_0 pypi
[conda] pytorch 2.0.1 py3.10_0 pytorch
[conda] torcharrow 0.1.0 pypi_0 pypi
[conda] torchaudio 2.0.2 py310_cpu pytorch
[conda] torchdata 0.6.1 py310 pytorch
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.2 py310_cpu pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @kulinseth @malfet @DenisVieriu97 @razarmehr @abhudev
| 5 |
1,753 | 106,259 |
Fix sym node printing
|
triaged, open source, Stale, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes #103602
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
| 7 |
1,754 | 106,257 |
Type annotations for functional.py
|
Stale
|
Fixes #ISSUE_NUMBER
| 3 |
1,755 | 106,256 |
Runtime Error: Empty tensor
|
needs reproduction, triaged, module: mps
|
### π Describe the bug
M1 Max 64 GB RAM running Stable Diffusion in Automatic1111 with SDXL 1.0 model:
RuntimeError: [srcBuf length] > 0 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/OperationUtils.mm":277, please report a bug to PyTorch. Placeholder tensor is empty!
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.34.3)
CMake version: version 3.27.1
Libc version: N/A
Python version: 3.9.6 (default, Jul 7 2023, 20:26:47) [Clang 15.0.0 (clang-1500.0.34.3)] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
1,756 | 106,255 |
Network module momory is not released in C++ libtorch 2.0.1
|
module: windows, module: cpp, triaged
|
### π Describe the bug
We can easily define a network consisting of various nn::Module components as a class.
Calling a module's destructor fails to free memory.
void Dispose()
{
delete model;
}
At this point an exception is thrown that is not caught, and the program terminates.
We implemented separate functions for allocating and removing this network module object.
With torch 1.13, we confirmed that GPU memory was fully released by the `delete model` call.
However, when the same code is run with torch 2.0.1, the network class defined via the module holder fails to release the memory normally and the program terminates abnormally.
How can I free the model's memory in libtorch when developing object-oriented C++ software?
thank you
### Versions
Libtorch C++ 2.0.1
Windows 10, CUDA 11.8, RTX 3090 environment.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jbschlosser
| 1 |
1,757 | 106,251 |
Improve Error Message in MultiMarginLoss for Inconsistent Target Size
|
module: nn, good first issue, module: error checking, triaged, actionable
|
### π Describe the bug
While using the MultiMarginLoss function in PyTorch, I encountered a RuntimeError that could be more informative. The error message received was:
```
RuntimeError: inconsistent target size, got: [7]
```
This error occurred with the following code:
```python
import torch
import torch.nn as nn
ip_size = [8,8]
target_size = [7]
input_tensor = torch.rand(ip_size)
target_tensor = torch.rand(target_size).type(torch.LongTensor)
loss_function = nn.MultiMarginLoss()
# Compute loss
loss = loss_function(input_tensor, target_tensor)
```
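For contrast, a version with consistent sizes (the target needs one class index per row of the input):
```python
import torch
import torch.nn as nn

input_tensor = torch.rand(8, 8)            # (batch, num_classes)
target_tensor = torch.randint(0, 8, (8,))  # one class index per batch element
print(nn.MultiMarginLoss()(input_tensor, target_tensor))
```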
For a more effective debugging experience, I suggest revising the error message to provide more specifics about the size and dimension mismatches, similar to other loss functions error messages. A proposed change could be:
```python
"MultiMarginLoss: The size of input tensor ({input size}) must match the size of target tensor ({target size}) at non-singleton dimension {dimension}"
```
This message would not only communicate the problem but also provide the specific mismatched sizes and dimension, which would assist users in correcting their code.
Please let me know if there's any additional information you need or any other ways I can assist. If this behavior is expected and known, feel free to close this issue.
While this message indicates that there is an inconsistency with the target size, it does not provide specific details about the expected size or the mismatched dimensions.
### Versions
PyTorch version: 2.1.0.dev20230622+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 12.0.0-3ubuntu1~20.04.5
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2070
GPU 1: NVIDIA GeForce RTX 2070
GPU 2: NVIDIA GeForce RTX 2070
GPU 3: NVIDIA GeForce RTX 2070
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 1224.656
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4794.39
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230622+cu118
[pip3] torchaudio==2.1.0.dev20230622+cu118
[pip3] torchvision==0.16.0.dev20230622+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230622+cu118 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| 4 |
1,758 | 106,244 |
Change behavior of .users and .args of SchedulerNode to match the same API in fx.Node
|
fb-exported, Stale, module: inductor, module: dynamo, ciflow/inductor
|
Summary: Following the discussion in https://github.com/pytorch/pytorch/pull/100762/files#r1214655330, this PR changes behavior of `.users` and `.args` of SchedulerNode to match the same API in fx.Node, so that it's easier for people who are familiar with fx.Node APIs to get started on Inductor IR (SchedulerNode).
Differential Revision: D47892324
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @anijain2305
| 9 |
1,759 | 106,243 |
OneCycleLR's state_dict includes a full reference to the optimizer
|
module: optimizer, triaged, needs design, module: LrScheduler
|
### π Describe the bug
When reading PyTorch documentation (and the base `LRScheduler` class's implementation of `state_dict`), it is clear that the intention behind a learning rate scheduler's `state_dict` method is that it should _NOT_ include the optimizer.
However, the `OneCycleLR` does include (indirectly) a reference to the optimizer in its `state_dict`. This can be reproduced with this snippet of code:
```python
import os
import tempfile
import torch
def main():
"""Main."""
model = torch.nn.Linear(3, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(
optimizer=optimizer, max_lr=1e-5, total_steps=123
)
_, f = tempfile.mkstemp()
try:
torch.save(lr_scheduler.state_dict(), f)
# Evidently, when saving (and therefore, reloading) the LRS's state, the "anneal_func"
# is serialized as a bound method of the optimizer. This means it has a reference, via
# `__self__`, to the optimizer instance. This in turn means that the _entire_ optimizer
# object itself is pickled into the state - which means so is its reference to the
# optimizer!
lr_state = torch.load(f)
print(lr_state["anneal_func"].__self__.optimizer.param_groups)
finally:
os.remove(f)
if __name__ == "__main__":
main()
```
# Suggestion for a fix.
Override the `OneCycleLR.state_dict` to change how it serializes the `anneal_func`.
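A rough sketch of what that override could look like (illustrative only, not necessarily the eventual upstream fix):
```python
from torch.optim.lr_scheduler import OneCycleLR

class PatchedOneCycleLR(OneCycleLR):
    # Sketch of the suggested fix: serialize the annealing function by name
    # instead of as a bound method, so pickling the state never captures
    # `self` (and, through it, the optimizer).
    def state_dict(self):
        state = super().state_dict()
        state["anneal_func"] = state["anneal_func"].__name__
        return state

    def load_state_dict(self, state_dict):
        state_dict = dict(state_dict)
        name = state_dict.pop("anneal_func", None)
        super().load_state_dict(state_dict)
        if name is not None:
            self.anneal_func = getattr(self, name)
```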
### Versions
The snippet of code was run using the `pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime` docker container.
```
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz
Stepping: 4
CPU MHz: 1200.245
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 7599.80
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 8 MiB
L3 cache: 16.5 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.3 py310h5f9d8c6_1
[conda] numpy-base 1.24.3 py310hb5e798b_1
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchdata 0.6.1 py310 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.15.2 py310 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu117 pytorch```
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 1 |
1,760 | 106,240 |
[vision hash update] update the pinned vision hash
|
open source, Stale, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 5 |
1,761 | 106,239 |
[inductor] AOTInductor w/ saved weights
|
module: inductor, module: dynamo, ciflow/inductor, module: export, release notes: export
|
When generating the C++ code, we will unlift the parameters from the module inputs and write the constants to a separate file as a dictionary mapping constant name to constant tensor (the same file name where the code is generated, suffixed with `_constants.pt`). These constants can then be loaded through the `AOTInductorModelContainerSetConstants` function, which can be called before `AOTInductorModel.run_impl` to update the global `AOTInductorModel.constants_info_` dictionary of constant name to constant tensor. During `run_impl`, we can query `constants_info_` for the tensor values.
An example of the resulting C++ code is: [P805713957](https://www.internalfb.com/phabricator/paste/view/P805713957?lines=233%2C242).
This diff is mainly addressing the frontend of AOTInductor, where we save the constants to a different file. The changes to the cpp wrapper are for some initial testing to make sure the flow works. @muchulee8 will follow up with more legit changes on the codegen/cpp wrapper related to `AOTInductorModelContainer`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 6 |
1,762 | 106,237 |
ghstack + mergebot race condition
|
module: ci, triaged
|
### π Describe the bug
"@pytorchbot merge" command was issued for both https://github.com/pytorch/pytorch/pull/105713 and https://github.com/pytorch/pytorch/pull/105808, which are stacked on top of each other resulted in 3 commits instead of two:
- https://github.com/pytorch/pytorch/commit/ac6d8fb16e64868ec78548882ef5951a05e437f8
- https://github.com/pytorch/pytorch/commit/2e02dfae9a23664f351c49d4a1b4106e45c9f4e1
- https://github.com/pytorch/pytorch/commit/f15b6ec6d6d983cfc399b49b2f72e7ab5635c4e8 (which is a duplicate of https://github.com/pytorch/pytorch/commit/ac6d8fb16e64868ec78548882ef5951a05e437f8 )
Merge-bot should detect such slightly incorrect use cases and cancel concurrent merges on all but the topmost PR in the stack.
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra
| 0 |
1,763 | 106,234 |
[Compiled Autograd] Refactor duplicate symint handling
|
module: inductor, module: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106234
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 4 |
1,764 | 106,221 |
RProp improvement tracker
|
module: optimizer, triaged, actionable
|
As I was reading through RProp, I realized there are several things that would make it better:
- [ ] Finish converting step to be a Tensor instead of a Scalar (every other optimizer in torch.optim has made this leap a few years ago)
- [ ] The foreach implementation can be faster with the introduction of
- [ ] torch._foreach_sign
- [ ] torch._foreach_clamp
cc @vincentqb @jbschlosser @albanD @crcrpar
| 5 |
1,765 | 106,220 |
torch.compile does not work with torch.nn.functional.softmax?
|
triaged, oncall: pt2
|
### π Describe the bug
Hi,
I'm trying to compile our model with the torch.compile stack.
It looks like it's not compatible with softmax.
Here is the full error message.
```
rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] File "/mnt/xarfuse/uid-194481/e0143f7f-seed-bac47847-3826-49af-9334
-48541a87a0c7-ns-4026535691/torch/_subclasses/fake_tensor.py", line 1382, in dispatch
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] return decomposition_table[func](*args, **kwargs)
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] File "/mnt/xarfuse/uid-194481/e0143f7f-seed-bac47847-3826-49af-9334
-48541a87a0c7-ns-4026535691/torch/_refs/__init__.py", line 2309, in amax
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] return _reduction(
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] File "/mnt/xarfuse/uid-194481/e0143f7f-seed-bac47847-3826-49af-9334
-48541a87a0c7-ns-4026535691/torch/_refs/__init__.py", line 2097, in _reduction
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] raise RuntimeError(
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] RuntimeError: reducing over zero-size dimension for reduction operation
without identity
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING]
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] While executing %amax : [num_users=1] = call_function[target=torch.ops.aten
.amax.default](args = (%view_4, [-1], True), kwargs = {})
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] Original traceback:
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] File "/mnt/xarfuse/uid-194481/e0143f7f-seed-bac47847-3826-49af-9334
-48541a87a0c7-ns-4026535691/pyspeech/modules/emformer_attention.py", line 955, in <resume in forward>
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] rc_output_memory = self.prepare_attention_output(
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] File "/mnt/xarfuse/uid-194481/e0143f7f-seed-bac47847-3826-49af-9334
-48541a87a0c7-ns-4026535691/pyspeech/modules/emformer_attention.py", line 811, in prepare_attention_output
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING] attention_weights_float = torch.nn.functional.softmax(
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING]
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING]
[rank11]:[2023-07-28 04:35:51,728] torch._dynamo.convert_frame: [WARNING]
[rank11]:[2023-07-28 04:35:51,729] torch._dynamo.convert_frame: [WARNING] converting frame raised error, suppressing error
```
Could you help take a look?
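For what it's worth, here is a minimal sketch that may reproduce the same decomposition error, assuming the failing softmax is applied over a zero-size dimension (which is what the `reducing over zero-size dimension` message suggests):
```python
import torch
import torch.nn.functional as F

def attn(weights):
    return F.softmax(weights, dim=-1)

x = torch.zeros(4, 0)          # zero-size softmax dimension
print(attn(x).shape)           # eager works: torch.Size([4, 0])

compiled = torch.compile(attn, backend="inductor")
print(compiled(x).shape)       # expected to fail: aten.amax has no identity
                               # for an empty reduction
```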
### Versions
Meta internal
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
1,766 | 106,219 |
Decomposition of bmm, addmm, mm for dot product
|
fb-exported, module: inductor, ciflow/inductor, release notes: inductor
|
Summary: Decomposition of bmm, addmm, mm for dot product
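For context, a rough illustration of the idea (not the actual Inductor decomposition): when the matmul is effectively a dot product, `mm` can be rewritten as an elementwise multiply followed by a sum.
```python
import torch

a = torch.randn(1, 8)
b = torch.randn(8, 1)
ref = torch.mm(a, b)                                 # (1, 1) result
decomposed = (a * b.t()).sum(dim=-1, keepdim=True)   # mul + sum == dot product
torch.testing.assert_close(ref, decomposed)
```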
Test Plan:
sandcastle ; github ci
Differential Revision: D47881983
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 25 |
1,767 | 106,217 |
[dynamo.export] Assertion Error: Mutating module attribute during export.
|
triaged, oncall: pt2, module: export
|
Examples: #105530, #105531.
Some thoughts:
1. For concern over model being mutated during export, maybe we can backup the original attribute value and revert afterwards.
2. For concern over exported model soundness.
1. If the original value of the mutated attribute was never used, the graph can safely emit a variable to be used instead of the attribute.
2. Otherwise, the attribute can be lifted as additional graph input and output, with hinted connection to the original attribute which can be processed by downstream users of export.
Some of the functionality already exists. The remaining work seems to be mainly formalizing 2.2 and making it play well with the `Dynamo input/output is not consistent with traced input/output` check.
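For thought 1, a minimal sketch of what backing up and reverting module attributes around export could look like (the helper below is hypothetical, not an existing API):
```python
import contextlib
import copy
import torch

@contextlib.contextmanager
def preserve_module_attrs(module: torch.nn.Module):
    # Shallow snapshot of the module's attribute dict; attribute rebinding
    # done while tracing is reverted afterwards so it does not leak into
    # the user's model.
    snapshot = copy.copy(module.__dict__)
    try:
        yield module
    finally:
        module.__dict__.clear()
        module.__dict__.update(snapshot)

# Usage sketch:
#   with preserve_module_attrs(model):
#       gm, guards = torch._dynamo.export(model, *example_inputs)
```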
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
1,768 | 106,207 |
dynamo struggles with list of tensor elements
|
good first issue, triaged, module: dynamo
|
### π Describe the bug
```python
import torch

def fn():
    x = torch.tensor([1, 2, 3, 4, 5, 6], dtype=torch.int64)
    y = [x[0], x[2], x[4]]
    return torch.LongTensor(y)

# e.g. torch.compile(fn)() triggers the failure below
```
fails to compile throwing a
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <class 'torch.LongTensor'>(*([FakeTensor
(..., size=(), dtype=torch.int64), FakeTensor(..., size=(), dtype=torch.int64), FakeTensor(..., size=(), d
type=torch.int64)],), **{}):
The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy alloca
tion, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
```
The same failure occurs if we use `torch.as_tensor` in place of the `LongTensor` call. It actually passes if we replace it with a `torch.tensor` call.
### Versions
master
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
1,769 | 106,206 |
Build failure due to C++ version mismatch
|
module: build, triaged
|
### π Describe the bug
I'm building with GCC 12.2 and "system"/preinstalled protobuf.
The build fails on `caffe2.pb.cc` which includes protobuf which includes abseil which tries to use `std::string_view`.
Abseil was built without specific C++-standard flags, so it uses the compiler's default, which is C++17. However, PyTorch's C++ standard used to be C++14: https://github.com/pytorch/pytorch/blob/dff70a5e1a6e7a184a2e75c1b22a8e0ce0944b41/CMakeLists.txt#L40
Both Abseil and Protobuf correctly export (via INTERFACE properties) their required C++ standard, but PyTorch doesn't consistently use the CMake target `protobuf::libprotobuf`.
The 2 affected locations are:
- https://github.com/pytorch/pytorch/blob/dffa4e14b912914d461c1af5cea632460ca545aa/caffe2/proto/CMakeLists.txt#L9 which adds an (object-)library containing (generated) protobuf C++ files but that doesn't link against `protobuf::libprotobuf` and hence doesn't know about it's C++ standard
- https://github.com/pytorch/pytorch/blob/dffa4e14b912914d461c1af5cea632460ca545aa/cmake/ProtoBuf.cmake#L120-L122 which hides the issue by adding the protobuf include path (read from the target) globally (which is discouraged CMake practice)
I was able to resolve this by adding `target_link_libraries(Caffe2_PROTO PUBLIC protobuf::libprotobuf)` and removing https://github.com/pytorch/pytorch/blob/dffa4e14b912914d461c1af5cea632460ca545aa/cmake/ProtoBuf.cmake#L120-L122
However I'm wondering why the object-library is necessary at all (the objects are used once to create `caffe2_protos`)
Although this might not currently be an issue anymore after 36ac095ff8918bf7c208029bf6ad28418f1620c1 raised the standard to C++17, I still think it is wrong to simply ignore the C++ standard of (in this case) libprotobuf, along with any other INTERFACE options it might have set that may be required to compile it successfully on all systems.
### Versions
1.13.1 - master
cc @malfet @seemethere
| 0 |
1,770 | 106,202 |
Enable mypy check for torch/_inductor/codegen/cpp.py
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes [#105230](https://github.com/pytorch/pytorch/issues/105230)
Summary:
As suggested in [#105230](https://github.com/pytorch/pytorch/issues/105230) mypy checking is enabled in torch/_inductor/codegen/cpp.py.
Before the fix:
```
mypy --follow-imports=skip torch/_inductor/codegen/cpp.py
torch/_inductor/codegen/cpp.py:390:9: error: Attribute "current_node" already defined on line 383 [no-redef]
torch/_inductor/codegen/cpp.py:1184:38: error: Name "Tuple" is not defined [name-defined]
torch/_inductor/codegen/cpp.py:1184:38: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/_inductor/codegen/cpp.py:1189:15: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1204:16: error: Cannot determine type of "cse" [has-type]
torch/_inductor/codegen/cpp.py:1204:34: error: Cannot determine type of "loads" [has-type]
torch/_inductor/codegen/cpp.py:1219:9: error: Cannot determine type of "stores" [has-type]
torch/_inductor/codegen/cpp.py:1229:13: error: Cannot determine type of "loads" [has-type]
torch/_inductor/codegen/cpp.py:1237:13: error: Cannot determine type of "stores" [has-type]
torch/_inductor/codegen/cpp.py:1240:21: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1265:13: error: Cannot determine type of "stores" [has-type]
torch/_inductor/codegen/cpp.py:1269:18: error: Cannot determine type of "cse" [has-type]
torch/_inductor/codegen/cpp.py:1294:13: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1295:13: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1304:13: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1414:18: error: Cannot determine type of "loads" [has-type]
torch/_inductor/codegen/cpp.py:1414:30: error: Cannot determine type of "compute" [has-type]
torch/_inductor/codegen/cpp.py:1414:44: error: Cannot determine type of "stores" [has-type]
torch/_inductor/codegen/cpp.py:1414:57: error: Cannot determine type of "cse" [has-type]
torch/_inductor/codegen/cpp.py:1418:20: error: Cannot determine type of "cse" [has-type]
torch/_inductor/codegen/cpp.py:1450:22: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1507:22: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1660:25: error: Name "reduction_type" is not defined [name-defined]
torch/_inductor/codegen/cpp.py:1703:29: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1707:23: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1708:27: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1709:31: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1710:17: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1712:31: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1713:17: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1723:43: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:1849:37: error: "list" is not subscriptable, use "typing.List" instead [misc]
torch/_inductor/codegen/cpp.py:1856:38: error: "list" is not subscriptable, use "typing.List" instead [misc]
torch/_inductor/codegen/cpp.py:1864:28: error: "list" is not subscriptable, use "typing.List" instead [misc]
torch/_inductor/codegen/cpp.py:2081:13: error: Invalid signature "Callable[[Any], Any]" for "__getattr__" [misc]
torch/_inductor/codegen/cpp.py:2114:33: error: Need type annotation for "i32_iinfo" [var-annotated]
torch/_inductor/codegen/cpp.py:2122:33: error: Need type annotation for "f32_iinfo" [var-annotated]
torch/_inductor/codegen/cpp.py:2150:28: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
torch/_inductor/codegen/cpp.py:2151:28: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
torch/_inductor/codegen/cpp.py:2153:34: error: Item "None" of "Optional[Any]" has no attribute "__iter__" (not iterable) [union-attr]
torch/_inductor/codegen/cpp.py:2163:41: error: Argument 1 to "zip" has incompatible type "Optional[Any]"; expected "Iterable[Any]" [arg-type]
torch/_inductor/codegen/cpp.py:2163:56: error: Argument 2 to "zip" has incompatible type "Optional[Any]"; expected "Iterable[Any]" [arg-type]
torch/_inductor/codegen/cpp.py:2172:37: error: Need type annotation for "i32_iinfo" [var-annotated]
torch/_inductor/codegen/cpp.py:2189:32: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
torch/_inductor/codegen/cpp.py:2205:34: error: Value of type "Optional[Any]" is not indexable [index]
torch/_inductor/codegen/cpp.py:2636:29: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
torch/_inductor/codegen/cpp.py:2646:51: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
torch/_inductor/codegen/cpp.py:2699:46: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
torch/_inductor/codegen/cpp.py:2738:33: error: Incompatible types in assignment (expression has type "KernelGroup", variable has type "CppWrapperKernelGroup") [assignment]
torch/_inductor/codegen/cpp.py:2909:41: error: Incompatible types in assignment (expression has type "None", variable has type "Dict[str, str]") [assignment]
torch/_inductor/codegen/cpp.py:2910:27: error: Incompatible types in assignment (expression has type "None", variable has type "LoopLevel") [assignment]
torch/_inductor/codegen/cpp.py:2915:25: error: Incompatible types in assignment (expression has type "None", variable has type "CppKernel") [assignment]
torch/_inductor/codegen/cpp.py:3067:29: error: Incompatible types in assignment (expression has type "None", variable has type "List[LoopLevel]") [assignment]
torch/_inductor/codegen/cpp.py:3068:25: error: Incompatible types in assignment (expression has type "None", variable has type "CppKernel") [assignment]
torch/_inductor/codegen/cpp.py:3079:27: error: Incompatible types in assignment (expression has type "None", variable has type "LoopLevel") [assignment]
torch/_inductor/codegen/cpp.py:3080:52: error: Argument 1 to "zip" has incompatible type "Optional[Any]"; expected "Iterable[Any]" [arg-type]
torch/_inductor/codegen/cpp.py:3080:62: error: Argument 2 to "zip" has incompatible type "Optional[Any]"; expected "Iterable[Any]" [arg-type]
torch/_inductor/codegen/cpp.py:3082:16: error: Unsupported operand types for >= ("int" and "None") [operator]
torch/_inductor/codegen/cpp.py:3082:16: note: Right operand is of type "Optional[Any]"
torch/_inductor/codegen/cpp.py:3086:45: error: Argument 2 to "LoopNestWithSplit" has incompatible type "int"; expected "CppKernel" [arg-type]
torch/_inductor/codegen/cpp.py:3086:49: error: Argument 1 to "len" has incompatible type "Optional[Any]"; expected "Sized" [arg-type]
Found 59 errors in 1 file (checked 1 source file)
```
After the fix:
```
mypy --follow-imports=skip torch/_inductor/codegen/cpp.py
Success: no issues found in 1 source file
```
Reviewers: @eellison
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
1,771 | 106,197 |
Scalar Tensor lowering to Fake Tensor inside Inductor
|
triaged, module: inductor, module: dynamo
|
### π Describe the bug
In PR https://github.com/pytorch/pytorch/pull/105894, we change the quantization `scale` and `zero point` from scalars to scalar tensors. However, the scalar tensor for `scale` is lowered as a fake tensor and fails to execute.
Here is the software component to reproduce this issue:
- Test pytorch branch to reproduce this issue: https://github.com/leslie-fang-intel/pytorch/tree/leslie/quant_scale_scalar_tensor
- Test script: https://gist.github.com/leslie-fang-intel/a3220284b5206b7a54ffcb990f664286
- Inductor generated code is: https://gist.github.com/leslie-fang-intel/e801ff6aa7b521d52a3e729e748775e3
- Error message is:
```
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_inductor/compile_fx.py", line 861, in wrapper
return optimized_function(args_new)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_inductor/codecache.py", line 370, in __call__
return self.get_current_callable()(inputs)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_inductor/codecache.py", line 397, in _run_from_cache
return compiled_graph.compiled_artifact(inputs)
File "/tmp/torchinductor_root/i7/ci7tla2nhhmckgf4wli4jnteqsxiupg6w7rnvac3hj7z3nmrpewb.py", line 123, in call
buf1 = torch.ops.onednn.qconv2d_pointwise.tensor(buf0, constant3, constant4, constant2, constant0, constant1, None, [1, 1], [0, 0], [1, 1], 1, None, None, True, 'none', [], '')
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_subclasses/fake_tensor.py", line 1077, in __torch_dispatch__
return func(*args, **kwargs)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_ops.py", line 435, in __call__
return self._op(*args, **kwargs or {})
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_subclasses/fake_tensor.py", line 1199, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_subclasses/fake_tensor.py", line 1310, in dispatch
) = self.validate_and_convert_non_fake_tensors(func, converter, args, kwargs)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_subclasses/fake_tensor.py", line 1503, in validate_and_convert_non_fake_tensors
args, kwargs = tree_map_only(
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/utils/_pytree.py", line 397, in tree_map_only
return tree_map(map_only(ty)(fn), pytree)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/utils/_pytree.py", line 327, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/utils/_pytree.py", line 327, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/utils/_pytree.py", line 378, in inner
return f(x)
File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_subclasses/fake_tensor.py", line 1491, in validate
raise Exception(
Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in onednn.qconv2d_pointwise.tensor(tensor([...], size=(1, 3, 8, 8), dtype=torch.uint8), FakeTensor(..., size=()), FakeTensor(..., size=(), dtype=torch.int64), tensor([...], size=(128, 3, 3, 3), dtype=torch.int8, layout=torch._mkldnn), tensor([...], size=(128,)), tensor([...], size=(128,)), None, [1, 1], [0, 0], [1, 1], 1, None, None, True, 'none', [], '')
```
From the error message, it looks like the second and third parameters, `x_scale` and `x_zp`, have been converted to `FakeTensor`s.
### Versions
```
(inductor_quant) [root@CPX-4 inductor_quant]# python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+git0f71239
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.17
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.19.5-1.el7.elrepo.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel Genuine CPU
Stepping: 10
CPU MHz: 1200.521
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 39424K
NUMA node0 CPU(s): 0-27
NUMA node1 CPU(s): 28-55
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.1.0a0+git7ae3928
[pip3] torchvision==0.16.0a0+08c9938
[conda] mkl 2023.0.0 intel_25398 intel
[conda] mkl-include 2023.0.0 <pip>
[conda] mkl-include 2023.0.0 intel_25398 intel
[conda] mkl-service 2.4.0 py38h3605609_14 intel
[conda] mkl-static 2023.0.0 <pip>
[conda] mkl_fft 1.3.1 py38hcab1719_22 intel
[conda] mkl_random 1.2.2 py38hbf47bc3_22 intel
[conda] mkl_umath 0.1.1 py38hf66a691_32 intel
[conda] numpy 1.24.3 <pip>
[conda] numpy 1.22.3 py38hf0956d0_5 intel
[conda] numpy-base 1.22.3 py38h45c9ace_5 intel
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 4 |
1,772 | 106,193 |
[caffe2] Clean up platform-specific fbobjc deps/flags
|
fb-exported, Stale, topic: not user facing
|
Summary:
Replace `platform_deps` with `select()`s and use to make sure the `cpukernel_avx2`
dep is x86-specific.
https://fb.prod.workplace.com/groups/buck2users/posts/3469961626593527/
Test Plan:
```
$ buck2 build //xplat/rtc/media/tools/newton:newton_pcAppleMac#macosx-arm64 --target-platforms //xplat/rtc/webrtc/platforms:bwe-dbg-arm64
```
Differential Revision: D47844555
| 5 |
1,773 | 106,185 |
[Reland] fix building errors on FreeBSD
|
triaged, open source, Stale, release notes: build, topic: bug fixes
|
PR #105897 was reverted because we may have a function name conflict. This PR renames the function.
cc @kit1980 @PaliC
| 8 |
1,774 | 106,184 |
Revert "[quant][pt2e] store scale/zero_point as tensor attributes to support serialization (#105894)"
|
Stale, release notes: quantization
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #106291
* __->__ #106184
manual revert as mergebot refuses to revert https://github.com/pytorch/pytorch/pull/105894
This reverts commit 3ca71ed735257cb7ad377b57a45057c265893a40.
| 3 |
1,775 | 106,183 |
[dynamo.export] symbolic_shapes.GuardOnDataDependentSymNode
|
triaged, oncall: pt2, module: dynamic shapes
|
This was observed when exporting `BertForMaskedLM`. It turned out the model code does an input value check to produce a warning. Hence an intuitive workaround appears to be applying `@torch._dynamo.assume_constant_result` to the checker function. That indeed worked around the issue and enables export.
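A rough sketch of what that workaround looks like in user code (the function below is a hypothetical stand-in for the `warn_if_padding_and_no_attention_mask` checker quoted in the traceback):
```python
import torch
import torch._dynamo

@torch._dynamo.assume_constant_result
def has_pad_token_at_ends(input_ids: torch.Tensor, pad_token_id: int) -> bool:
    # Data-dependent check; marking it constant tells export to bake in the
    # result instead of guarding on the symbolic value.
    return bool(pad_token_id in input_ids[:, [-1, 0]])
```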
I'm wondering if it is technically feasible to create an "unsafe" config that, when opted in, automatically applies `assume_constant_result` to the call that triggers `GuardOnDataDependentSymNode`, essentially treating it as a constant and preventing the graph break.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
Repro script
```python
import torch
import torch._dynamo
import transformers
model = transformers.AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
gm, _ = torch._dynamo.export(model, aten_graph=True, return_dict=False, input_ids=torch.tensor([[1, 2, 3]]))
gm.print_readable()
# Or below after the api change
# torch._dynamo.export(model)(return_dict=False, input_ids=torch.tensor([[1, 2, 3]]))
```
```
GuardOnDataDependentSymNode: It appears that you're trying to get a value out of symbolic int/float whose value is data-dependent (and
thus we do not know the true value.) The expression we were trying to evaluate is Eq(i0, 1) (unhinted: Eq(i0, 1)). Scroll up to see
where each of these data-dependent accesses originally occurred.
from user code:
File "transformers/models/bert/modeling_bert.py", line 970, in
forward
self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
File "transformers/modeling_utils.py", line 3491, in
warn_if_padding_and_no_attention_mask
if self.config.pad_token_id in input_ids[:, [-1, 0]]:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
| 4 |
1,776 | 106,179 |
[ONNX] Set dynamic_shapes default to True
|
module: onnx, open source, Stale, release notes: onnx, topic: improvements
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106179
* #106178
TODO: Check benchmark result.
- Investigate regressions.
- Consolidate how internal dynamo dynamic shapes api should be exposed.
- Diagnostics.
| 5 |
1,777 | 106,173 |
`torch.ops.aten.split.Tensor._schema` return alias annotations are wrong
|
triaged, module: viewing and reshaping
|
The schema of `split.Tensor` is:
```
split.Tensor(Tensor(a -> *) self, SymInt split_size, int dim=0) -> Tensor(a)[]
```
where the output is a TensorList with the same alias annotation as the input (`a`).
It looks like `torch.ops.aten.split.Tensor`'s schema info doesn't capture this information:
```
>>> torch.ops.aten.split.Tensor._schema.returns[0].alias_info.before_set
set()
>>> torch.ops.aten.split.Tensor._schema.arguments[0].alias_info.before_set
{'a'}
>>> torch.ops.aten.split.Tensor._schema.arguments[0].alias_info.after_set
{'*'}
# This should return {'a'}, probably?
>>> torch.ops.aten.split.Tensor._schema.returns[0].alias_info.before_set
set()
>>> torch.ops.aten.split.Tensor._schema.returns[0].alias_info.after_set
set()
```
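For reference, the aliasing that the `Tensor(a -> *)` / `Tensor(a)[]` annotation describes is easy to confirm at runtime:
```python
import torch

x = torch.arange(6.)
parts = torch.split(x, 2)   # each output is a view of x
parts[0][0] = 100.
print(x[0])                 # tensor(100.) -- outputs alias the input storage
```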
| 0 |
1,778 | 106,171 |
torch.compile changes model output
|
triaged, oncall: pt2
|
### π Describe the bug
For some reason, torch.compile changes the model output. I checked multiple transformer models; here is one example:
```
import torch
import transformers
if __name__ == "__main__":
device = torch.device('cuda')
model = transformers.AutoModelForTokenClassification.from_pretrained(
"Jean-Baptiste/roberta-large-ner-english").to(device)
model.eval()
a = torch.randint(100, 2000, (128, 256), device=device)
with torch.no_grad():
out_not_compiled = model(input_ids=a, attention_mask=torch.ones_like(a)).logits
model = torch.compile(model)
with torch.no_grad():
out_compiled = model(input_ids=a, attention_mask=torch.ones_like(a)).logits
print(torch.sum(torch.abs(out_compiled - out_not_compiled))) # tensor(0.4234, device='cuda:0')
```
The difference is of the same order of magnitude on CPU and with other precisions (float16, bfloat16), with or without the torch.cuda.amp.autocast decorator.
Transformers versions:
```
transformers version: 4.31.0
Platform: Linux-5.15.0-1034-oracle-x86_64-with-glibc2.29
Python version: 3.8.10
Huggingface_hub version: 0.15.1
Safetensors version: 0.3.1
Accelerate version: 0.20.3
Accelerate config: not found
PyTorch version (GPU?): 2.0.0+cu117 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: does not matter (provide GPU results, same on CPU)
Using distributed or parallel set-up in script?: No
```
Related huggingface issue: https://github.com/huggingface/transformers/issues/25155
### Versions
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1034-oracle-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.105.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7J13 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2550.000
CPU max MHz: 3673.0950
CPU min MHz: 1500.0000
BogoMIPS: 4900.11
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-254
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==2.0.4
[pip3] torch==2.0.0
[pip3] torchmetrics==0.11.4
[pip3] triton==2.0.0
[conda] Could not collect
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
1,779 | 106,168 |
Automated submodule update: FBGEMM
|
triaged, open source, topic: not user facing
|
This is an automated pull request to update the first-party submodule for [pytorch/FBGEMM](https://github.com/pytorch/FBGEMM).
New submodule commit: https://github.com/pytorch/FBGEMM/commit/3579b4d627a17e0cbff39b553038e95830fc4685
Test Plan: Ensure that CI jobs succeed on GitHub before landing.
| 2 |
1,780 | 106,165 |
Add foreach functions to docs
|
open source, Stale
|
rel:
- #58833
| 3 |
1,781 | 106,164 |
distributed.batch_isend_irecv() crash when send/recv refers to itself
|
oncall: distributed, module: crash, module: c10d
|
### π Describe the bug
Hi, I am implementing an all-to-all wrapper for the gloo backend using batch_isend_irecv. The program crashes when the isend and irecv peer rank is set to the current rank. The traceback does not refer to any Python code.
```python
from typing import List
import torch
import torch.distributed as dist
def all_to_all(in_tensors: List[torch.Tensor]) -> List[torch.Tensor]:
world_size = dist.get_world_size()
rank = dist.get_rank()
part2size = [None] * world_size
sizes = [in_tensor.shape for in_tensor in in_tensors]
dist.all_gather_object(part2size, sizes)
op_list = []
buffer = [None] * world_size
for i in range(world_size):
# if i == rank: # This fix the crash
# buffer[i] = in_tensors[i]
# continue
op_list.append(dist.P2POp(dist.isend, in_tensors[i], i))
buffer[i] = torch.empty(part2size[i][rank], dtype=in_tensors[i].dtype)
op_list.append(dist.P2POp(dist.irecv, buffer[i], i))
reqs = dist.batch_isend_irecv(op_list)
_ = [req.wait() for req in reqs]
return buffer
```
This is a test case.
```python
# assumes an initialized process group with the gloo backend and world_size=4
data = [
[
torch.tensor([0, 1]),
torch.tensor([2, 3]),
torch.tensor([4]),
torch.tensor([5]),
],
[
torch.tensor([10, 11, 12]),
torch.tensor([13, 14]),
torch.tensor([15, 16]),
torch.tensor([17, 18]),
],
[
torch.tensor([20, 21]),
torch.tensor([22]),
torch.tensor([23]),
torch.tensor([24]),
],
[
torch.tensor([30, 31]),
torch.tensor([32, 33]),
torch.tensor([34, 35]),
torch.tensor([36]),
],
]
res = all_to_all(data[dist.get_rank()])
```
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.2.5
Python version: 3.7.16 (default, Mar 10 2023, 03:25:26) [GCC 7.3.1 20180712 (Red Hat 7.3.1-15)] (64-bit runtime)
Python platform: Linux-5.10.179-166.674.amzn2.x86_64-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 3149.801
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] No relevant packages
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
1,782 | 106,163 |
[RFC][WIP] Extend convert_to_unspecialized for module attr mutation to module fields mutated through BINARY_SUBSCR
|
Stale, module: inductor, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #106163
fixes
cc @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel @anijain2305
| 5 |
1,783 | 106,161 |
ROCm support for Windows
|
module: windows, module: rocm, triaged, enhancement
|
### π The feature, motivation and pitch
ROCm is now available on Windows: https://github.com/RadeonOpenCompute/ROCm/discussions/2347
### Alternatives
_No response_
### Additional context
_No response_
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 5 |
1,784 | 106,149 |
Automated submodule update: kineto
|
triaged, open source, topic: not user facing
|
This is an automated pull request to update the first-party submodule for [pytorch/kineto](https://github.com/pytorch/kineto).
New submodule commit: https://github.com/pytorch/kineto/commit/465ff4cd715aa81f08a4320820ea4965bb2fe9c1
Test Plan: Ensure that CI jobs succeed on GitHub before landing.
| 1 |
1,785 | 106,144 |
PyTorch nightly and OpenAI/Triton CUDA
|
high priority, needs reproduction, triaged, oncall: pt2
|
### π Describe the bug
If I am not wrong, I think the nightly PyTorch (CUDA 11.8) wheels are not compatible with the pinned Triton commit, as I am seeing something like https://github.com/openai/triton/issues/1955
See more:
https://discuss.pytorch.org/t/any-change-of-using-cuda-12-2/184461/6
If that is true, why is the CI not failing in tests on the PyTorch nightly CUDA 11.8 build?
### Versions
nightly
cc @ezyang @gchanan @zou3519 @msaroufim @wconstab @bdhirsh @anijain2305
| 12 |
1,786 | 106,141 |
Sparse COO indices are torch.Int64 -- is this necessary?
|
module: sparse, triaged
|
### π The feature, motivation and pitch
```
import torch
import numpy as np
a = np.eye(10)
t = torch.Tensor(a).to_sparse_coo()
print(t.indices().dtype)
```
returns:
```torch.int64```
A 64-bit int seems excessive for storing coordinate indices, especially given that the main intent of sparse formats is size reduction. Perhaps torch.int32 would suffice?
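For scale, a quick sketch of the index storage overhead (numbers assume a 1000x1000 identity matrix, i.e. 1000 non-zeros):
```python
import torch
import numpy as np

t = torch.Tensor(np.eye(1000)).to_sparse_coo()
idx = t.indices()                                         # shape (2, 1000), torch.int64
print(idx.numel() * idx.element_size())                   # 16000 bytes today
print(idx.numel() * idx.to(torch.int32).element_size())   # 8000 bytes if int32 sufficed
```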
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
1,787 | 106,137 |
`export(..., pre_dispatch=True)` for model in eval mode still inserts autograd ops
|
triaged, module: export, pre_dispatch tracing
|
Support for `export(..., pre_dispatch=True)` was added to make sure autograd (and autocast) functionality (e.g. `torch.no_grad`) continues to work on exported graphs. This is needed for training on an exported model. The way this works is that it inserts `torch._C._set_grad_enabled` ops [into the graph](https://github.com/pytorch/pytorch/blob/646fa36875821f2bcf4fbfbf669c1f4f9f69700d/torch/fx/experimental/proxy_tensor.py#L517-L536).
We have a use case where we want to use `export(..., pre_dispatch=True)` for models in eval mode. However, I just tried this out and it looks like I'm still seeing these `torch._C._set_grad_enabled` ops in the graph. This may lead to unexpected behavior during eval, where some gradients may be computed even though we don't actually need them.
**Minimal repro:**
```
import torch
import torch._dynamo
class ToyModel(torch.nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.linear1 = torch.nn.Linear(10, 10)
self.linear2 = torch.nn.Linear(10, 5)
def forward(self, x):
x = self.linear1(x)
with torch.no_grad():
x = self.linear2(x)
return x
example_inputs = (torch.randn(10, 10),)
m = ToyModel().eval()
m, _ = torch._dynamo.export(
m,
*example_inputs,
aten_graph=True,
pre_dispatch=True,
)
print(m)
```
**Output:**
```
def forward(self, x):
arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
_param_constant0 = self._param_constant0
_param_constant1 = self._param_constant1
linear_default = torch.ops.aten.linear.default(arg0, _param_constant0, _param_constant1); arg0 = _param_constant0 = _param_constant1 = None
_set_grad_enabled = torch._C._set_grad_enabled(False)
_param_constant2 = self._param_constant2
_param_constant3 = self._param_constant3
linear_default_1 = torch.ops.aten.linear.default(linear_default, _param_constant2, _param_constant3); linear_default = _param_constant2 = _param_constant3 = None
_set_grad_enabled_1 = torch._C._set_grad_enabled(True)
return pytree.tree_unflatten([linear_default_1], self._out_spec)
```
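A hedged sketch of one possible eval-only post-processing step (not an official API, and it assumes the grad-mode results are otherwise unused, as in the graph above):
```python
import torch

def strip_set_grad_enabled(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    # Drop the autograd-state toggles from the exported graph; for a
    # pure-eval model they are unnecessary, and the trailing
    # _set_grad_enabled(True) can even re-enable grad mode under an
    # outer torch.no_grad().
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target is torch._C._set_grad_enabled:
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```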
| 1 |
1,788 | 106,136 |
bc-linter false positive with TypeAliases
|
triaged, module: devx
|
### π Describe the bug
The BC-linter warned on my PR even though the actual type has not changed from `Dict[str, Any]`. In actuality, I just introduced a TypeAlias called StateDict for clarity.
https://github.com/pytorch/pytorch/pull/105953#issuecomment-1653715848
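Roughly the shape of the change that triggered the warning (illustrative only):
```python
from typing import Any, Dict

StateDict = Dict[str, Any]  # new alias; the runtime type is unchanged

def state_dict() -> StateDict:  # previously annotated as -> Dict[str, Any]
    return {}
```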
### Versions
main
cc @ZainRizvi @kit1980 @huydhn @clee2000
| 5 |
1,789 | 106,135 |
Registering function that takes const std::vector<c10::SymInt>& to SymInt[] schema gives confusing error message
|
triaged, module: dynamic shapes
|
### π Describe the bug
```
what(): Inferred operator schema for a C++ kernel function doesn't match the expected function schema.
operator: fbgemm::jagged_to_padded_dense
expected schema: fbgemm::jagged_to_padded_dense(Tensor values, Tensor[] offsets, int[] max_lengths, float padding_value=0.) -> Tensor
registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_cpu.cpp:1601
inferred schema: (Tensor _0, Tensor[] _1, SymInt[] _2, float _3) -> Tensor _0
registered at fbcode/deeplearning/fbgemm/fbgemm_gpu/src/jagged_tensor_ops/jagged_tensor_ops_autograd.cpp:867 reason: Type mismatch in argument 3: int[] vs SymInt[]
```
### Versions
main
| 0 |
1,790 | 106,130 |
torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.nonzero.default
|
triaged, module: fakeTensor
|
Hi team, I'm trying to capture a computational graph using FakeTensor, but encountered a DynamicOutputShapeException.
PyTorch Version: 2.0.1+cu118
```
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
import traceback
from torch.fx.experimental.symbolic_shapes import ShapeEnv
from torch._dynamo.output_graph import config
a = torch.ones([1, 0])
mode = FakeTensorMode()
a = mode.from_tensor(a)
try:
with mode:
b = torch.nonzero(a)
except Exception as e:
traceback.print_exc()
```
The stacktrace is:
```
Traceback (most recent call last):
File "<ipython-input-15-5efd30f1231c>", line 13, in <cell line: 11>
b = torch.nonzero(a)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 987, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 1162, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/fake_tensor.py", line 403, in dyn_shape
raise DynamicOutputShapeException(func)
torch._subclasses.fake_tensor.DynamicOutputShapeException: aten.nonzero.default
```
So, can anyone help me bypass this exception for the nonzero operator?
Any response will be appreciated, thanks.
| 0 |
1,791 | 106,129 |
Allow setting the setGraphExecutorOptimize default for all threads
|
oncall: jit
|
### π The feature, motivation and pitch
Our teams have had numerous issues with TorchScript optimization and have had to disable it to avoid breaking the code.
I recently debugged one such issue here: https://github.com/pytorch/pytorch/issues/106128
`torch::jit::setGraphExecutorOptimize` sets a thread-local variable, but the model may be run on a different thread. This was difficult to debug when a change to our test execution framework started running unit tests on new threads, but the model was only loaded once and reused across the tests.
Disabling optimization as a workaround would be much easier if it could be set globally for all threads.
### Alternatives
Surfacing and resolving optimizer bugs instead of working around them and sacrificing performance would be ideal, but when it blocks research work we need a stable workaround.
### Additional context
Refer to https://github.com/pytorch/pytorch/issues/106128 for an example.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,792 | 106,128 |
Torchscript optimizer incorrectly applies constant propagation to convert prim::ListConstruct() into prim::Constant
|
oncall: jit
|
### π Describe the bug
The following code fails when run from C++:
```python
import typing
import torch
@torch.jit.script
class Container:
def __init__(self) -> None:
self.values: typing.List[int] = []
assert len(self.values) == 0 # fails for the second or third instance Container
def append(self, value: int):
self.values.append(value)
class TorchscriptExample(torch.nn.Module):
@torch.jit.export
def create_container_object(self) -> Container:
return Container()
torch.jit.script(TorchscriptExample()).save("example.pt")
```
I also observed that this only fails with optimization enabled (the default).
With `torch::jit::setGraphExecutorOptimize(false);` it passes.
```c++
auto mod = torch::jit::script::Module(torch::jit::load("example.pt"));
auto c1 = mod.create_class("__torch__.Container", {});
auto c2 = mod.create_class("__torch__.Container", {});
if (c1.toObjectRef().getAttr("values").is(c2.toObjectRef().getAttr("values"))) {
throw std::runtime_error("expect different lists"); // fails here
}
```
By running with `PYTORCH_JIT_LOG_LEVEL='>>profiling_graph_executor_impl'` I saw:
```
[DEBUG profiling_graph_executor_impl.cpp:558] After DecomposeOps, before ConstantPropagation
[DEBUG profiling_graph_executor_impl.cpp:558] graph(%self.1 : __torch__.Container):
[DEBUG profiling_graph_executor_impl.cpp:558] %1 : NoneType = prim::Constant()
[DEBUG profiling_graph_executor_impl.cpp:558] %2 : str = prim::Constant[value="AssertionError: "]() # :0:0
[DEBUG profiling_graph_executor_impl.cpp:558] %3 : int = prim::Constant[value=0]()
[DEBUG profiling_graph_executor_impl.cpp:558] %4 : __torch__.Container[] = prim::ListConstruct()
[DEBUG profiling_graph_executor_impl.cpp:558] = prim::SetAttr[name="values"](%self.1, %4)
[DEBUG profiling_graph_executor_impl.cpp:558] %values.1 : __torch__.Container[] = prim::GetAttr[name="values"](%self.1)
[DEBUG profiling_graph_executor_impl.cpp:558] %6 : int = aten::len(%values.1)
[DEBUG profiling_graph_executor_impl.cpp:558] %7 : bool = aten::eq(%6, %3)
[DEBUG profiling_graph_executor_impl.cpp:558] = prim::If(%7)
[DEBUG profiling_graph_executor_impl.cpp:558] block0():
[DEBUG profiling_graph_executor_impl.cpp:558] -> ()
[DEBUG profiling_graph_executor_impl.cpp:558] block1():
[DEBUG profiling_graph_executor_impl.cpp:558] = prim::RaiseException(%2, %1)
[DEBUG profiling_graph_executor_impl.cpp:558] -> ()
[DEBUG profiling_graph_executor_impl.cpp:558] return (%1)
[DEBUG profiling_graph_executor_impl.cpp:560] After ConstantPropagation, before EliminateDeadCode
[DEBUG profiling_graph_executor_impl.cpp:560] graph(%self.1 : __torch__.Container):
[DEBUG profiling_graph_executor_impl.cpp:560] %1 : NoneType = prim::Constant()
[DEBUG profiling_graph_executor_impl.cpp:560] %2 : str = prim::Constant[value="AssertionError: "]() # :0:0
[DEBUG profiling_graph_executor_impl.cpp:560] %3 : int = prim::Constant[value=0]()
[DEBUG profiling_graph_executor_impl.cpp:560] %8 : __torch__.Container[] = prim::Constant[value=annotate(List[__torch__.Container], [])]()
[DEBUG profiling_graph_executor_impl.cpp:560] = prim::SetAttr[name="values"](%self.1, %8)
[DEBUG profiling_graph_executor_impl.cpp:560] %values.1 : __torch__.Container[] = prim::GetAttr[name="values"](%self.1)
[DEBUG profiling_graph_executor_impl.cpp:560] %6 : int = aten::len(%values.1)
[DEBUG profiling_graph_executor_impl.cpp:560] %7 : bool = aten::eq(%6, %3)
[DEBUG profiling_graph_executor_impl.cpp:560] = prim::If(%7)
[DEBUG profiling_graph_executor_impl.cpp:560] block0():
[DEBUG profiling_graph_executor_impl.cpp:560] -> ()
[DEBUG profiling_graph_executor_impl.cpp:560] block1():
[DEBUG profiling_graph_executor_impl.cpp:560] = prim::RaiseException(%2, %1)
[DEBUG profiling_graph_executor_impl.cpp:560] -> ()
[DEBUG profiling_graph_executor_impl.cpp:560] return (%1)
```
### Versions
PyTorch version: 2.0.1+cu118.post4
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.7 (main, Sep 9 2022, 04:02:34) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 1080
GPU 1: Quadro P2000
Nvidia driver version: 470.82.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz
Stepping: 1
CPU MHz: 1200.000
CPU max MHz: 3500.0000
CPU min MHz: 1200.0000
BogoMIPS: 5986.61
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 3 MiB
L3 cache: 30 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] fft-conv-pytorch==1.1.3
[pip3] gpytorch==1.9.1
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.3.0
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.7.3
[pip3] pytorch3d==0.7.4
[pip3] torch==2.0.1+cu118.post4
[pip3] torch-geometric==2.3.0
[pip3] torch-tb-profiler==0.4.1
[pip3] torch-tensorrt==1.4.0
[pip3] torchmetrics==0.11.0
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
1,793 | 106,126 |
Libtorch reports C10 errors when compiled in my own project
|
needs reproduction, oncall: binaries, module: cpp, triaged
|
### π Describe the bug
Hi guys,
Some errors occur when I compile Libtorch together with other third-party libraries, such as ONNX. The errors are as follows.
I don't know whether this is related to the torch flags, i.e. whether D_GLIBCXX_USE_CXX11_ABI is 1 or 0:
`-- TORCH_CXX_FLAGS: -D_GLIBCXX_USE_CXX11_ABI=1`
### Error logs
```
/home/xxx/code/dev/nn_tools/third_party/libtorch/include/c10/util/Exception.h:220:83: error: exception handling disabled, use '-fexceptions' to enable
220 | throw ::c10::err_type({__func__, __FILE__, static_cast<uint32_t>(__LINE__)}, msg)
/home/xxx/code/dev/nn_tools/third_party/libtorch/include/c10/core/DispatchKeySet.h:147:38: error: 'findFirstSet' is not a member of 'llvm'
147 | uint64_t firstKeyIndex = llvm::findFirstSet(masked_data);
/home/xxx/code/dev/nn_tools/third_party/libtorch/include/ATen/core/ivalue_inl.h:555:14: error: 'e' was not declared in this scope; did you mean 'llvm::numbers::e'?
555 | return e.what();
| ^
| llvm::numbers::e
```
As the log says, if I add `add_compile_options(-fexceptions)`, it then reports other `c10::detail::torchInternalAssertFail` undefined-reference errors:
```
quantizationCalibration.cpp:(.text._ZN3c1020intrusive_ptr_targetD2Ev[_ZN3c1020intrusive_ptr_targetD5Ev]+0xdd): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
/usr/bin/ld: quantizationCalibration.cpp:(.text._ZN3c1020intrusive_ptr_targetD2Ev[_ZN3c1020intrusive_ptr_targetD5Ev]+0x137): undefined reference to `c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, char const*)'
/usr/bin/ld: lib/libMLIRTop.a(quantizationCalibration.cpp.o): in function `c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_()':
quantizationCalibration.cpp:(.text._ZN3c1013intrusive_ptrINS_10TensorImplENS_19UndefinedTensorImplEE6reset_Ev[_ZN3c1013intrusive_ptrINS_10TensorImplENS_19UndefinedTensorImplEE6reset_Ev]+0xa): undefined reference to `c10::UndefinedTensorImpl::_singleton'
/usr/bin/ld: lib/libMLIRTop.a(quantizationCalibration.cpp.o): in function `std::_MakeUniq<torch::autograd::AutogradMeta>::__single_object std::make_unique<torch::autograd::AutogradMeta, c10::TensorImpl*, bool&>(c10::TensorImpl*&&, bool&)':
```
I have tested Libtorch standalone, and it compiles and runs correctly. So why does it report the above errors when I compile it within my own project?
Thanks in advance.
### Minified repro
_No response_
### Versions
The Libtorch version is 2.0.1, CPU and GPU version
cc @seemethere @malfet @jbschlosser @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
1,794 | 106,124 |
[feature request] Better argument checks and error messaging for `tensor.repeat`
|
module: error checking, triaged, actionable, release notes: python_frontend
|
### π Describe the bug
```python
import torch

erb_indices = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 5, 7, 7, 8, 10, 12, 13, 15, 18, 20, 24, 28, 31, 37, 42, 50, 56, 67]
torch.tensor(erb_indices).reciprocal().repeat(erb_indices)
# RuntimeError: Storage size calculation overflowed with sizes=[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 5, 7, 7, 8, 10, 12, 13, 15, 18, 20, 24, 28, 31, 37, 42, 50, 56, 2144]
```
It would be good if the error message were clearer.
Also, strangely, `torch.repeat` does not exist as a module-level function, unlike `torch.repeat_interleave`: `AttributeError: module 'torch' has no attribute 'repeat'`.
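For what it's worth, if the goal of the snippet above was to repeat each reciprocal by its own count (rather than tile the whole tensor), `repeat_interleave` expresses that directly — a sketch, assuming that was the intent:
```python
import torch

erb_indices = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 5, 7, 7, 8, 10, 12,
               13, 15, 18, 20, 24, 28, 31, 37, 42, 50, 56, 67]
t = torch.tensor(erb_indices, dtype=torch.float32)

# Repeat element i of the reciprocal erb_indices[i] times.
out = t.reciprocal().repeat_interleave(torch.tensor(erb_indices))
print(out.shape)  # torch.Size([481]) == sum(erb_indices)
```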
Btw, I also noticed that the shape checks for torch.hstack are much more permissive than those for torch.stack. This may be fine for compatibility with NumPy (or maybe there should even be a `strict` mode with tighter checks, or PyTorch should be stricter for the sake of consistency between torch.stack and torch.hstack, which look similar enough), but I suggest this be mentioned in the docs.
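To illustrate the difference in strictness mentioned above (a small example, not tied to any particular fix):
```python
import torch

a = torch.ones(2, 3)
b = torch.ones(2, 5)

# hstack only requires the non-concatenated dimensions to match, so this works:
print(torch.hstack([a, b]).shape)  # torch.Size([2, 8])

# stack requires all input shapes to be identical, so this raises:
# torch.stack([a, b])  # RuntimeError: stack expects each tensor to be equal size
```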
### Versions
'2.0.0+cpu'
cc @malfet @svekars @carljparker
| 3 |
1,795 | 106,121 |
Got an error when training models with more than one param_group in torch 2.0
|
module: optimizer, triaged, has workaround
|
### π Describe the bug
I create optimizer in this way:
```python
firstone = True
for name, module in model.named_modules():
    if isinstance(module, QuantModule_int2lora) and module.ignore_reconstruction is False:
        avg_delta = torch.sum(module.weight_quantizer.delta) / torch.numel(module.weight_quantizer.delta)
        params = [param for name, param in module.named_parameters() if 'lora' in name]
        if firstone:
            optimizer = torch.optim.AdamW(params, lr=avg_delta / 300)
            firstone = False
        else:
            optimizer.add_param_group({'params': params, 'lr': avg_delta / 300})
```
which will lead to an error in torch >= 2.0.0
```
File "/fs03/dl65/jing/Yefei/stable-diffusion-v1/./ldm/models/diffusion/plms.py", line 477, in p_sample_plms
self.optimizer.step()
File "/projects/dl65/jliu/conda_envs/stablediffusion/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 69, in wrapper
return wrapped(*args, **kwargs)
File "/projects/dl65/jliu/conda_envs/stablediffusion/lib/python3.8/site-packages/torch/optim/optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "/projects/dl65/jliu/conda_envs/stablediffusion/lib/python3.8/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
ret = func(self, *args, **kwargs)
File "/projects/dl65/jliu/conda_envs/stablediffusion/lib/python3.8/site-packages/torch/optim/adamw.py", line 171, in step
adamw(
File "/projects/dl65/jliu/conda_envs/stablediffusion/lib/python3.8/site-packages/torch/optim/adamw.py", line 321, in adamw
func(
File "/projects/dl65/jliu/conda_envs/stablediffusion/lib/python3.8/site-packages/torch/optim/adamw.py", line 568, in _multi_tensor_adamw
torch._foreach_addcdiv_(device_params, device_exp_avgs, denom, step_size)
TypeError: _foreach_addcdiv_() received an invalid combination of arguments - got (list, list, tuple, list), but expected one of:
* (tuple of Tensors self, tuple of Tensors tensor1, tuple of Tensors tensor2, tuple of Scalars scalars)
didn't match because some of the arguments have invalid types: (list of [Parameter, Parameter], list of [Tensor, Tensor], tuple of (Tensor, Tensor), list of [Tensor, Tensor])
* (tuple of Tensors self, tuple of Tensors tensor1, tuple of Tensors tensor2, Tensor scalars)
didn't match because some of the arguments have invalid types: (list of [Parameter, Parameter], list of [Tensor, Tensor], tuple of (Tensor, Tensor), list of [Tensor, Tensor])
* (tuple of Tensors self, tuple of Tensors tensor1, tuple of Tensors tensor2, Number value)
didn't match because some of the arguments have invalid types: (list of [Parameter, Parameter], list of [Tensor, Tensor], tupl
```
However, it works fine with torch==1.9.0. I wonder why?
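For anyone hitting the same thing: a sketch of a workaround that avoids the failing multi-tensor path, on the assumption that the tensor-valued learning rate (`avg_delta / 300` is a 0-dim tensor) is what ends up in the scalars argument of `_foreach_addcdiv_`. The module and delta below are stand-ins, not the real ones:
```python
import torch

# Stand-ins for the real quantized modules and their quantizer deltas.
model = torch.nn.Linear(4, 4)
avg_delta = torch.tensor(0.3)

# Pass a plain Python float instead of a 0-dim tensor as the learning rate.
lr = (avg_delta / 300).item()
optimizer = torch.optim.AdamW([model.weight], lr=lr)
optimizer.add_param_group({'params': [model.bias], 'lr': (avg_delta / 300).item()})

loss = model(torch.randn(2, 4)).sum()
loss.backward()
optimizer.step()  # no _foreach_addcdiv_ type error with float learning rates
```
Whether passing `foreach=False` to AdamW also sidesteps the error with a tensor learning rate is untested here.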
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6140M CPU @ 2.30GHz
Stepping: 4
CPU MHz: 1000.000
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4600.00
Virtualization: VT-x
L1d cache: 1.1 MiB
L1i cache: 1.1 MiB
L2 cache: 36 MiB
L3 cache: 49.5 MiB
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.4
[pip3] pytorchcv==0.0.67
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py38h5eee18b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.6 py38h417a72b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.2 py38h417a72b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy 1.24.3 py38hf6e8229_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy-base 1.24.3 py38h060ed82_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] pytorch 2.0.1 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-lightning 2.0.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py38_cu117 pytorch
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtriton 2.0.0 py38 pytorch
[conda] torchvision 0.15.2 py38_cu117 pytorch
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 5 |
1,796 | 106,119 |
Documentation fix in contributing.md file
|
triaged, open source, Stale, topic: not user facing
|
Fixes #104872 Errors in contributing.md file
| 6 |
1,797 | 106,114 |
Removal of Object Check in CUDAPluggableAllocator::raw_delete()
|
triaged, open source, Stale, topic: not user facing
|
I've observed that the current sanity check is designed to prevent deallocation of an illegal address and to ensure the correctness of the subsequent size and device information. `allocation_metadata_` is an unordered_map, which means a given address can correspond to only one active object.
However, if users employ custom allocators based on a-priori knowledge or profiling-based strategies, then, combined with Python's irregular garbage collection, it is plausible for multiple active objects to hold the same address. This does not necessarily lead to actual conflicts; it is simply a consequence of Python not deallocating the earlier objects in time.
In light of this, I would like to propose removing this mandatory check, or at least downgrading it to a warning. This is not to undermine the importance of maintaining object-metadata integrity or of checking for illegal addresses. Rather, it is a suggestion to let more of that responsibility rest with the user's custom allocator: a mature memory allocator should be able to manage object metadata on its own, its free() method primarily needs the pointer, and its own code paths already include sufficient checks.
This proposal is meant to foster a more flexible design space for user custom allocators. Please take it as a well-intended recommendation for open discussion.
| 5 |
1,798 | 106,112 |
MPS cumprod gradient is broken even when using cpu fallback on macos 13.2.1
|
triaged, module: mps
|
### π Describe the bug
While working on #104688 (which enables support for MPS-accelerated cumprod), I noticed that the gradient test of cumprod (`test_output_grad_match_cumprod_cpu_float32`) was failing only on the task [macos-12-py3-arm64-mps](https://github.com/pytorch/pytorch/actions/runs/5658964066/job/15332115951?pr=104688#logs). Counter-intuitively, this task seems to use macOS version 13.2.1 (see the Print Runner OS/HW Info section in the job).
Originally I assumed this was my mistake or a bug in the MPS cumprod op on 13.2.1. However, I tried a version of the PR where I fell back to the CPU for cumprod instead of using the MPS operation and noticed the issue was still there. The problem could still have been in my code, though, and I did not have a computer with macOS 13.2.1 to easily test on. So I created a [test pr](https://github.com/pytorch/pytorch/pull/106096) which changes none of the core PyTorch code from main but runs the failing test `test_output_grad_match_cumprod_cpu_float32` with `PYTORCH_ENABLE_MPS_FALLBACK` enabled.
You can see for yourself, by looking at the results of the macos-12-py3-arm64-mps job on that PR, that `test_output_grad_match_cumprod_cpu_float32` fails with the exact same error despite none of the actual PyTorch code changing. This confirms that the error is not in the code for #104688 but somewhere else in the gradient code for cumprod on MPS. The gradient of cumprod uses [a bunch of different ops besides cumprod](https://github.com/pytorch/pytorch/blob/6f1042c0496731694635f559b9ae48a0d79d9099/aten/src/ATen/native/ReduceOps.cpp#L629) when not creating a graph, so the problem could be in the MPS versions of any of those.
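For reference, here is a stripped-down version of the comparison the failing test performs (the shape and tolerance are illustrative guesses, not the test's actual values). It needs an Apple-silicon Mac, and `PYTORCH_ENABLE_MPS_FALLBACK=1` to exercise the fallback path:
```python
import torch

x_cpu = torch.randn(8, requires_grad=True)
x_mps = x_cpu.detach().to("mps").requires_grad_()

# Same scalar loss on both devices, then compare the cumprod gradients.
x_cpu.cumprod(0).sum().backward()
x_mps.cumprod(0).sum().backward()

print(torch.allclose(x_cpu.grad, x_mps.grad.cpu(), atol=1e-5))
```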
If you have access to a Mac with version 13.2.1, then reproducing should be as easy as:
```
PYTORCH_ENABLE_MPS_FALLBACK=1 pytest test/test_mps.py -k test_output_grad_match_cumprod_cpu_float32
```
Unfortunately I cannot fix this bug as I do not have access to the operating system it's breaking on except through the CI.
### Versions
In this particular case I don't think this makes much sense, because I reproduced this error via the CI, not on my computer. Since I cannot run `python collect_env.py` on the CI machine, I cannot report much info.
```
machdep.cpu.brand_string: Apple M1
kern.osproductversion: 13.2.1
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
1,799 | 106,111 |
Fix test flakiness in test_sparse_triangular_solve
|
triaged, open source, Stale, topic: not user facing
|
The test test_sparse_triangular_solve_cuda_DTYPE sometimes fails with `nan` values in the output tensor. The tests only fail when run together with the full test suite; they pass when run on their own. I think there is some flakiness in the memory management.
Running `empty_cache` before the test fully removes the flakiness and fixes the test. It might slow the test suite down by a few seconds, but that time should be recovered by avoiding the reruns that happen when the test fails on its first few attempts.
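A minimal sketch of the idea (the actual placement inside the test harness may differ): release cached allocator blocks, after synchronizing, right before the flaky test runs.
```python
import torch

if torch.cuda.is_available():
    torch.cuda.synchronize()   # make sure earlier tests' GPU work has finished
    torch.cuda.empty_cache()   # release cached blocks so stale memory can't be reused
```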
internal ref: /4185082
| 10 |
1,800 | 106,110 |
llama model failed for dynamic shape path
|
needs reproduction, triaged, oncall: pt2, module: dynamic shapes, module: dynamo
|
### π Describe the bug
When running the dynamic-shape path for the Llama model, an assertion error is reported:
```
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/home/xiaobing/pytorch-offical/benchmarks/dynamo/common.py", line 2226, in warmup
fn(model, example_inputs)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/eval_frame.py", line 310, in _fn
return fn(*args, **kwargs)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/eval_frame.py", line 470, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/convert_frame.py", line 559, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/convert_frame.py", line 130, in _fn
return fn(*args, **kwargs)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/convert_frame.py", line 378, in _convert_frame_assert
return _compile(
File "/home/xiaobing/pytorch-offical/torch/_dynamo/utils.py", line 194, in time_wrapper
r = func(*args, **kwargs)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/convert_frame.py", line 506, in _compile
check_fn = CheckFunctionManager(
File "/home/xiaobing/pytorch-offical/torch/_dynamo/guards.py", line 887, in __init__
guard.create(local_builder, global_builder)
File "/home/xiaobing/pytorch-offical/torch/_guards.py", line 216, in create
return self.create_fn(self.source.select(local_builder, global_builder), self)
File "/home/xiaobing/pytorch-offical/torch/_dynamo/guards.py", line 564, in SHAPE_ENV
guards = output_graph.shape_env.produce_guards(
File "/home/xiaobing/pytorch-offical/torch/fx/experimental/symbolic_shapes.py", line 2809, in produce_guards
raise ConstraintViolationError(f"Constraints violated!\n{err}")
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated!
1. Could not validate constraint RelaxedUnspecConstraint(L['inputs'][0].size()[0]) as L['inputs'][0].size()[0] was inferred to be constant (32). For more information about why it is constant, run with TORCH_LOGS=dynamic
```
### Error logs
_No response_
### Minified repro
```
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/torchbench.py --performance --float32 -dcpu --inference -n5 --inductor --timeout 9000 --dynamic-shapes --dynamic-batch-only --only llama
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitdcacff5
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.7
[pip3] dalle2-pytorch==1.12.4
[pip3] ema-pytorch==0.2.2
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] intel-extension-for-pytorch==2.1.0+git785443c
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-triton==2.1.0+9e3e10c5ed
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.2.1
[pip3] torch==2.1.0a0+git0391d8d
[pip3] torch-fidelity==0.3.0
[pip3] torch-scatter==2.1.1+pt20cpu
[pip3] torch-sparse==0.6.17+pt20cpu
[pip3] torch-struct==0.5
[pip3] torchaudio==2.1.0a0+d5b2996
[pip3] torchdata==0.7.0a0+f2bfd3d
[pip3] torchmetrics==0.11.4
[pip3] torchrec-nightly==2023.3.23
[pip3] torchtext==0.16.0a0+60bea66
[pip3] torchvision==0.16.0a0+d24db8c
[pip3] vector-quantize-pytorch==1.1.2
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.7 pypi_0 pypi
[conda] dalle2-pytorch 1.12.4 pypi_0 pypi
[conda] ema-pytorch 0.2.2 pypi_0 pypi
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] intel-extension-for-pytorch 2.1.0+git785443c pypi_0 pypi
[conda] mkl 2023.0.0 h6d00ec8_25399
[conda] mkl-include 2023.1.0 pypi_0 pypi
[conda] mkl-static 2023.1.0 pypi_0 pypi
[conda] numpy 1.24.3 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-triton 2.1.0+9e3e10c5ed pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.1 pypi_0 pypi
[conda] torch 2.1.0a0+git0391d8d dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-scatter 2.1.1+pt20cpu pypi_0 pypi
[conda] torch-sparse 0.6.17+pt20cpu pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.1.0a0+d5b2996 pypi_0 pypi
[conda] torchdata 0.7.0a0+f2bfd3d pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchrec-nightly 2023.3.23 pypi_0 pypi
[conda] torchtext 0.16.0a0+60bea66 pypi_0 pypi
[conda] torchvision 0.16.0a0+d24db8c pypi_0 pypi
[conda] vector-quantize-pytorch 1.1.2 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ipiszy
| 6 |