Serial Number (int64, 1-6k) | Issue Number (int64, 75.6k-112k) | Title (string, 3-357 chars) | Labels (string, 3-241 chars) | Body (string, 9-74.5k chars) | Comments (int64, 0-867)
---|---|---|---|---|---|
801 | 109,749 |
[Quantization] Add "quantization_tag" as metadata to fx proxy
|
release notes: quantization
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109749
Summary:
To make sure that quantization_tag is preserved through second-stage export, this PR adds it as a special metadata key that should be preserved.
Because quantization in the export path works on top of the pre-dispatch graph, the subsequent post-dispatch op decomposition will decompose ops that the quant workflow tagged. To keep the patterns identified by the quantizer recognizable even after those decompositions are applied, we must preserve "quantization_tag".
This lets backend delegates that quantized a model for a specific backend identify the "quantized" patterns.
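A minimal illustration of the idea (hypothetical helper, not the PR's actual implementation): copy a preserved metadata key such as `"quantization_tag"` from an original fx node onto the nodes that replace it during a decomposition-style rewrite.
```python
import torch.fx as fx

def propagate_quantization_tag(old_node: fx.Node, new_nodes):
    # Hypothetical helper: carry the tag forward so quantizer-identified
    # patterns remain identifiable after the op is decomposed.
    tag = old_node.meta.get("quantization_tag")
    if tag is not None:
        for node in new_nodes:
            node.meta["quantization_tag"] = tag
```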
Test Plan:
metadata porting tests
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: [D49481562](https://our.internmc.facebook.com/intern/diff/D49481562)
| 2 |
802 | 109,747 |
[RFC][TorchElastic] topology info in training apps/ranks
|
oncall: distributed, triaged, module: elastic
|
### 🚀 The feature, motivation and pitch
RFC PR: https://github.com/pytorch/rfcs/pull/57 ([preview]( https://github.com/pytorch/rfcs/pull/57/files?short_path=c8cb2fb#diff-c8cb2fba3757888ba87a91b1f9c5d39c5e19b64b6289994d0e6d2e699aac625e))
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @dzhulgakov @kiukchung @d4l3k @wconstab
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
803 | 109,745 |
[caffe2/torch] Package Importer with compatibility for Lazy Imports
|
fb-exported, release notes: package/deploy
|
Summary:
This adds Lazy Imports compatibility to the Package Importer, including support for "lazy side effects".
With lazily imported modules, child submodules do not immediately add attributes to their parents; Lazy Imports instead records "lazy attributes", which still need to be added at some point. Unlike vanilla CPython, which simply adds the attributes using `setattr()`, Cinder with Lazy Imports has two APIs for doing this that need to be called during the import process:
+ `_imp._maybe_set_parent_attribute()`
+ `_imp._set_lazy_attributes()`
This diff adds support for using these APIs while importing modules in the PyTorch package importer.
Test Plan:
This should be a noop if the library is not using Cinder and lazy imports.
---
Existing tests
Differential Revision: D46404107
| 4 |
804 | 109,739 |
[C10D] Report detected failures when emitting collective end events.
|
open source, release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109739
* #111239
* #111246
We leverage the fact that errors get tracked in Work to report them.
The error message by itself is not crazy useful and we might need to
include some form of error code in the mix (nccl failure, timeout, etc).
| 2 |
805 | 109,734 |
[dynamo] lift the constraint that cannot make_fx a dynamo compiled function
|
ciflow/trunk, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109734
* #109916
Since we've added support for FakeTensor and SymBool inputs in dynamo, this PR removes the constraint that a dynamo-compiled function cannot be run through make_fx.
Checking the commit history, the is_fx_tracing flag in dynamo [code here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/eval_frame.py#L382C1-L383C1) seems to be mainly used to provide a nice error message for symbolic_trace. This PR changes how we detect that we're inside symbolic_trace: it looks at the is_fx_tracing flag and checks whether any arguments are proxies. This allows make_fx to call into an optimized function.
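A rough sketch of the detection described above (names are assumptions, not the exact dynamo code): treat a call as happening under `symbolic_trace` only when the `is_fx_tracing` flag is set and at least one argument is an fx `Proxy`.
```python
import torch.fx as fx

def inside_symbolic_trace(is_fx_tracing: bool, args, kwargs) -> bool:
    # Only report "inside symbolic_trace" when fx tracing is active AND the
    # call is actually seeing proxy arguments.
    flat_args = list(args) + list(kwargs.values())
    return is_fx_tracing and any(isinstance(a, fx.Proxy) for a in flat_args)
```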
**Test Plan:**
See added test.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 7 |
806 | 109,728 |
Histogram Fixes for QAT
|
fb-exported, release notes: quantization, release notes: AO frontend
|
Summary: The histogram observer needs to avoid the in-place detach, followed by reshaping tensors for per-channel quant.
Test Plan:
Before:
```
File "/tmp/jetter.vxtgekjr/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/tmp/jetter.vxtgekjr/torch/ao/quantization/observer.py", line 1195, in forward
self.min_val.detach_().resize_(combined_min.shape)
RuntimeError: Can't detach views in-place. Use detach() instead. If you are using DistributedDataParallel (DDP) for training, and gradient_as_bucket_view is set as True, gradients are views of DDP buckets, and hence detach_() cannot be called on these gradients. To fix this error, please refer to the Optimizer.zero_grad() function in torch/optim/optimizer.py as the solution.
```
After:
no issue.
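A small illustration of the failure mode in the traceback above (stand-in tensors, not the observer's actual code): `detach_()` is rejected on an autograd view, while the out-of-place `detach()` works and the result can then be reshaped.
```python
import torch

base = torch.zeros(4, requires_grad=True)
view = base[:2]            # a differentiable view, similar to a DDP bucket view
try:
    view.detach_()         # RuntimeError: Can't detach views in-place ...
except RuntimeError as err:
    print(err)
fixed = view.detach()      # out-of-place detach is allowed
print(fixed.reshape(1, 2).shape)
```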
Differential Revision: D49466364
| 6 |
807 | 109,725 |
Profiler should implicitly synchronize gpu devices
|
oncall: profiler
|
### 🚀 The feature, motivation and pitch
There are currently some unit tests that are flaky due to a race condition when using the profiler. The pattern typically looks like this:
> with torch.profiler.profile() as p:
> actual = self.func(*inputs, **kwargs)
> keys = tuple([e.key for e in p.key_averages()])
> mta_called = any("multi_tensor_apply_kernel" in k for k in keys)
The intent here is to call a function and see whether a particular kernel was run. However, it starts the profiler, dispatches the kernel, and then immediately stops the profiler; there is no guarantee the kernel executed before the profiler was stopped. This is a pure race.
The proper fix is to add a `torch.cuda.synchronize()` inside the context block. However, it will be an ongoing struggle to identify every unit test that makes this mistake (including new ones). Would it be appropriate to have the context manager add an implicit device synchronize in `__exit__`?
Generally, a profiler should not change the behavior of what is being profiled. Would this case be a worthy exception, under the principle of least surprise, to benefit casual users?
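A sketch of the proposed test-side fix (the pattern only, not a specific PR): synchronize before leaving the profiler context so queued kernels are recorded.
```python
import torch

def profile_and_check(func, *inputs, **kwargs):
    with torch.profiler.profile() as p:
        out = func(*inputs, **kwargs)
        if torch.cuda.is_available():
            torch.cuda.synchronize()  # make sure dispatched kernels finish before __exit__
    keys = tuple(e.key for e in p.key_averages())
    return out, any("multi_tensor_apply_kernel" in k for k in keys)
```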
cc @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98 @jithunnair-amd @pruthvistony
### Alternatives
_No response_
### Additional context
This seems to be causing random failures for any of the `is_fastpath` tests e.g. `TestForeachCUDA::test_lerp_is_fastpath_True__foreach_lerp_cuda_float32`
| 6 |
808 | 109,724 |
assert_is_valid_input_type is too weak
|
triaged, module: dispatch
|
### 🐛 Describe the bug
It doesn't reject things like `const int&` even though this really shouldn't be allowed
### Versions
main
| 0 |
809 | 109,719 |
Make torch.cuda.graphs.is_current_stream_capturing() available in TorchScript
|
oncall: jit, module: cuda graphs
|
### 🚀 The feature, motivation and pitch
I would like to be able to detect if CUDA graph capturing is underway from inside a TorchScript model. Currently this example fails to script:
```python
import torch
import torch.nn as nn
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
is_capturing = (
x.is_cuda
and torch.cuda.graphs.is_current_stream_capturing()
)
if not is_capturing:
torch.cuda.synchronize()
y = x*2
return y
x = torch.ones(1).to("cuda").requires_grad_(True)
model = MyModel()
model = torch.jit.script(model)
stream = torch.cuda.Stream()
with torch.cuda.stream(stream):
#Warmup
y = model(x)
model = torch.cuda.make_graphed_callables(model, (x,), allow_unused_input=True)
y = model(x)
```
With the following error:
```shell
Traceback (most recent call last):
File "/home/raulp/ex.py", line 20, in <module>
model = torch.jit.script(model)
^^^^^^^^^^^^^^^^^^^^^^^
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/jit/_script.py", line 1324, in script
return torch.jit._recursive.create_script_module(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/jit/_recursive.py", line 559, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/jit/_recursive.py", line 636, in create_script_module_impl
create_methods_and_properties_from_stubs(
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/jit/_recursive.py", line 469, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/jit/_recursive.py", line 1010, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/jit/_script.py", line 1381, in script
fn = torch._C._jit_script_compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError:
Python builtin <built-in function _cuda_isCurrentStreamCapturing> is currently not supported in Torchscript:
File "/shared/raul/mambaforge/envs/torchmd-net-latest/lib/python3.11/site-packages/torch/cuda/graphs.py", line 32
If a CUDA context does not exist on the current device, returns False without initializing the context.
"""
return _cuda_isCurrentStreamCapturing()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'is_current_stream_capturing' is being compiled since it was called from 'MyModel.forward'
File "/home/raulp/ex.py", line 10
def forward(self, x):
is_capturing = (
x.is_cuda
~~~~~~~~~ <--- HERE
and torch.cuda.graphs.is_current_stream_capturing()
)
```
### Alternatives
Using this inline extension does the job:
```python
import torch
from torch.utils.cpp_extension import load_inline
import torch.nn as nn
cpp_source = '''
#include <torch/script.h>
#include <c10/cuda/CUDAStream.h>
#include <cuda_runtime_api.h>
bool is_stream_capturing() {
at::cuda::CUDAStream current_stream = at::cuda::getCurrentCUDAStream();
cudaStream_t cuda_stream = current_stream.stream();
cudaStreamCaptureStatus capture_status;
cudaError_t err = cudaStreamGetCaptureInfo(cuda_stream, &capture_status, nullptr);
if (err != cudaSuccess) {
throw std::runtime_error(cudaGetErrorString(err));
}
return capture_status == cudaStreamCaptureStatus::cudaStreamCaptureStatusActive;
}
static auto registry =
torch::RegisterOperators()
.op("torch_extension::is_stream_capturing", &is_stream_capturing);
'''
# Create an inline extension
torch_extension = load_inline(
"is_stream_capturing",
cpp_sources=cpp_source,
functions=["is_stream_capturing"],
with_cuda=True,
verbose=True,
)
@torch.jit.script
def check_stream_capturing():
return torch.ops.torch_extension.is_stream_capturing()
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
is_capturing = (
x.is_cuda
and check_stream_capturing()
)
if not is_capturing:
torch.cuda.synchronize()
y = x*2
return y
x = torch.ones(1).to("cuda").requires_grad_(True)
model = MyModel()
model = torch.jit.script(model)
stream = torch.cuda.Stream()
with torch.cuda.stream(stream):
#Warmup
y = model(x)
model = torch.cuda.make_graphed_callables(model, (x,), allow_unused_input=True)
y = model(x)
```
but has some downsides of its own and poses the question of how to ship the extension itself.
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mcarilli @ezyang
| 0 |
810 | 109,709 |
Regression on 2.1 RC ROCm: data parallel error on `torch._C._broadcast_coalesced`
|
high priority, oncall: binaries, in progress, module: rocm, oncall: releng, triaged
|
### 🐛 Describe the bug
Hi, I noticed today a bug in data parallel on a ROCm system that is not present on NVIDIA ones.
Reproduction: run the following script `CUDA_VISIBLE_DEVICES=0,1 python run_dp.py`:
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
import torch
from datasets import Dataset
import torch.nn as nn
model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-SST-2")
training_arguments = TrainingArguments(output_dir="foo", skip_memory_metrics=True, per_device_train_batch_size=4)
dummy = {}
sequence_length = 128
n_samples = 256
dummy["input_ids"] = torch.randint(
low=0,
high=10,
size=(n_samples, sequence_length))
dummy["attention_mask"] = torch.randint(
low=0,
high=2,
size=(n_samples, sequence_length))
dummy["labels"] = torch.randint(
low=0,
high=2,
size=(n_samples,))
task_dataset = Dataset.from_dict(dummy)
task_dataset.set_format(
type="torch", # for now we're using pytorch tensors
columns=list(task_dataset.features.keys()),
)
train_dataset = task_dataset
print("train_dataset", train_dataset)
print("train_dataset", train_dataset[0])
# Same issue with Transformers Trainer.
"""
trainer = Trainer(
model=model,
args=training_arguments,
train_dataset=train_dataset,
)
trainer.train()
"""
model = nn.DataParallel(model)
model.to("cuda")
inp = {
"input_ids": train_dataset[:8]["input_ids"].to("cuda"),
"attention_mask": train_dataset[:8]["attention_mask"].to("cuda"),
"labels": train_dataset[:8]["labels"].to("cuda")
}
print("inp", inp)
output = model(**inp)
```
raises
```
Traceback (most recent call last):
File "/home/user/transformers-regression/run_dp.py", line 58, in <module>
output = model(**inp)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 184, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 189, in replicate
return replicate(module, device_ids, not torch.is_grad_enabled())
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parallel/replicate.py", line 110, in replicate
param_copies = _broadcast_coalesced_reshape(params, devices, detach)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parallel/replicate.py", line 83, in _broadcast_coalesced_reshape
tensor_copies = Broadcast.apply(devices, *tensors)
File "/home/user/.local/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parallel/_functions.py", line 23, in forward
outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parallel/comm.py", line 57, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: NCCL Error 3: internal error
```
on torch 2.1 RC (`2.1.0+rocm5.6`), while no error is raised on 2.1 RC (`2.1.0+cu118`) on an NVIDIA system.
On 2.0.1, there is no issue (specifically on `rocm/pytorch:rocm5.6_ubuntu20.04_py3.8_pytorch_2.0.1`).
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0+rocm5.6
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.6.31061-8c743ae5d
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 16.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-5.6.0 23243 be997b2f3651a41597d7a41441fff8ade4ac59ac)
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI210
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.6.31061
MIOpen runtime version: 2.20.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7643 48-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 3640.9170
CPU min MHz: 1500.0000
BogoMIPS: 4600.40
Virtualization: AMD-V
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] pytorch-triton-rocm==2.1.0+34f8189eae
[pip3] torch==2.1.0+rocm5.6
[pip3] torchvision==0.15.2a0+fa99a53
[pip3] triton==2.0.0
[conda] No relevant packages
```
cc @ezyang @gchanan @zou3519 @kadeng @seemethere @malfet @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 8 |
811 | 109,706 |
Make standard container classes satisfy container Protocols.
|
module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
The container classes in [`torch/nn/modules/container`](https://github.com/pytorch/pytorch/blob/b30ee35a6f141d3247a24fd09f96ea50a7e2b3c7/torch/nn/modules/container.py) already provide the methods needed to use the corresponding `collections.abc` mixins. I propose to either subclass these or to satisfy the protocols manually (a minimal sketch follows the list below):
- `nn.Sequential` -> `MutableSequence[M]`
- `nn.ModuleList` -> `MutableSequence[M]`
- `nn.ModuleDict` -> `MutableMapping[str, M]`
- `nn.ParameterList` -> `Sequence[P]` (missing: `__delitem__` and `insert` for `MutableSequence[M]`)
- `nn.ParameterDict` -> `MutableMapping[str, P]`
- `M` would be a `TypeVar` bound to `nn.Module`.
- `P` would be a `TypeVar` bound to `nn.Parameter`.
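A minimal sketch of what satisfying one of these protocols could look like (an illustration under the assumption stated here, not how `torch.nn` is implemented today): `nn.ParameterDict` already defines `__getitem__`, `__setitem__`, `__delitem__`, `__iter__`, and `__len__`, so mixing in `MutableMapping` mainly adds the abc registration and the mapping mixin methods.
```python
from collections.abc import MutableMapping

import torch
import torch.nn as nn

class MappingParameterDict(nn.ParameterDict, MutableMapping):
    """Hypothetical subclass: every abstract method is inherited from ParameterDict."""

params = MappingParameterDict({"w": nn.Parameter(torch.zeros(2))})
print(isinstance(params, MutableMapping))  # True
```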
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
812 | 109,700 |
[inductor][cpu] performance regression
|
triaged, oncall: pt2, module: inductor, module: cpu inductor
|
<p>new_perf_regression 2023-09-17 compared with 2023-09-13 nightly release</p>
<p>Note: multi-thread scenario for models above the first *, single thread for models between the two *</p>
<p>new_perf_regression</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>hf_T5_generate</td>
<td>1</td>
<td>0.969546</td>
<td>1.571509982</td>
<td>1.523651217008172</td>
<td>381.9721</td>
<td>1</td>
<td>1.087152</td>
<td>1.3870738859999998</td>
<td>1.5079601493126717</td>
<td>205.124549</td>
<td>0.89</td>
<td>0.99</td>
<td>0.88</td>
<td>0.54</td>
</tr>
<tr>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
</tr>
<tr>
<td>lennard_jones</td>
<td>1</td>
<td>0.979394</td>
<td>7.776900000000001e-05</td>
<td>7.616649198600001e-05</td>
<td>9.569067</td>
<td>1</td>
<td>1.172059</td>
<td>6.3225e-05</td>
<td>7.4103430275e-05</td>
<td>9.954492</td>
<td>0.84</td>
<td>0.97</td>
<td>0.81</td>
<td>1.04</td>
</tr>
<tr>
<td>pyhpc_equation_of_state</td>
<td>1</td>
<td>19.094604</td>
<td>5.9531e-05</td>
<td>0.001136720870724</td>
<td>12.447889</td>
<td>1</td>
<td>22.186139</td>
<td>5.1889e-05</td>
<td>0.001151216566571</td>
<td>12.786719</td>
<td>0.86</td>
<td>1.01</td>
<td>0.87</td>
<td>1.03</td>
</tr>
<tr>
<td>pyhpc_isoneutral_mixing</td>
<td>1</td>
<td>50.675985</td>
<td>5.9053e-05</td>
<td>0.002992568942205</td>
<td>18.191096</td>
<td>1</td>
<td>58.979956</td>
<td>5.122e-05</td>
<td>0.0030209533463200003</td>
<td>18.466515</td>
<td>0.86</td>
<td>1.01</td>
<td>0.87</td>
<td>1.02</td>
</tr>
<tr>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
</tr>
</tbody>
</table>
<p>SW info</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>SW</th>
<th>Nightly commit</th>
<th>Main commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pytorch</td>
<td>0de2555</td>
<td></td>
</tr>
<tr>
<td>Torchbench</td>
<td>/</td>
<td>ffbbebb9</td>
</tr>
<tr>
<td>torchaudio</td>
<td>475b6ae</td>
<td></td>
</tr>
<tr>
<td>torchtext</td>
<td>142d029</td>
<td></td>
</tr>
<tr>
<td>torchvision</td>
<td>8636bf3</td>
<td></td>
</tr>
<tr>
<td>torchdata</td>
<td>eb9bf61</td>
<td></td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>0200b11</td>
<td>/</td>
</tr>
</tbody>
</table>
<p>Repro</p>
<a href=https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh>
inductor_single_run.sh </a>
<code>
bash inductor_single_run.sh multiple/single inference performance torchbench/huggingface/timm_models model float32 first static default 0
</code>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 5 |
813 | 109,699 |
[TorchScript] Support ScriptFunction arguments in torch.jit.script calls.
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
I'm trying to use TorchScript with code that uses an ordinary differential equation solver. This code basically has a loop that iteratively evaluates an ODE step function, which itself takes another function as part of its inputs and evaluates it multiple times at different points. The particular function to use in the ODE steps is decided at runtime from a set based on a condition.
Putting things together, it looks roughly like this:
```python
def solver(..., step_f1: callable, step_f2: callable):
if x:
for i in range(...):
y = step_f1(y, ...)
return y
else:
z = ...
for i in range(...):
y = step_f2(y, z, ...)
return y
def f1(...):
...
def f2(...):
...
def ode_step(func, ...):
# Invokes func as part of its logic.
...
```
This solver function would be normally used with some `func_f1`, `func_f2` like this:
```python
step_f1 = lambda *args: ode_step(f1, *args)
step_f2 = lambda *args: ode_step(f2, *args)
```
When exporting the model, because of the if condition and the for loops, the solver function itself cannot be traced and requires using `torch.jit.script`. However, since `f1` and `f2` would actually be expensive model evaluations I trace these:
```python
step_f1 = torch.jit.trace(lambda *args: ode_step(f1, *args))
step_f2 = torch.jit.trace(lambda *args: ode_step(f2, *args))
```
Ideally I would like to be able to use the same solver function for both use cases: one without `torch.jit.script` and with no tracing, and another when exporting the model where I manually call `torch.jit.script` and use traced functions. However, if I try to use `torch.jit.script` I get the runtime error "function cannot be used as a value" despite the fact that these are not really python functions but `torch.jit.ScriptFunction` from tracing.
I also found that this seems to be functionally possible, since I found a way to get `torch.jit.script` to use my traced functions, but it comes with nasty consequences. See the alternatives section below for more details.
Since this does affect script/trace composability and apparently it is already functionally possible, albeit somewhat broken, would it make sense to allow using `torch.jit.ScriptFunction` objects as valid argument types for torch.jit.script?
### Alternatives
Since `torch.jit.script` complains if I pass any `torch.jit.ScriptFunction` argument directly or in any nested function it calls, I tried something like this:
```python
def trace_functions(...):
step_f1 = torch.jit.trace(lambda *args: ode_step(f1, *args))
step_f2 = torch.jit.trace(lambda *args: ode_step(f2, *args))
def solver_local(...):
if x:
for i in range(...):
y = func_f1(y, ...)
return y
else:
z = ...
for i in range(...):
y = func_f2(y, z, ...)
return y
solver_script = torch.jit.script(solver_local)
```
In this case, the `torch.jit.ScriptFunction` objects `func_f1` and `func_f2` are captured in `solver_local` instead of passed as arguments. This version actually doesn't raise any exceptions, but it forces me to create an inner copy of `solver` that I cannot reuse either in other parts of the code, effectively duplicating the code maintenance of this function and making it error-prone.
Trying to use `torch.jit.script` with some other function, lambda, or result from `functools.partial` that calls `solver` and sets `func_f1` and `func_f2` does not work.
### Additional context
_No response_
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
814 | 109,697 |
[DDP + Dynamo] Traceable DDP hooks
|
oncall: distributed, triaged, oncall: pt2
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/pull/106738 has support to trace FSDP. Is there any plan to add similar support for traceable DDP along with using functional collectives?
With support for CompiledAutoGrad + traceable DDP hook I am expecting the complete graph including the backward graph + (mul + allreduce) for all buckets which are present in the DDP hooks to be available at the compiler backend. This helps the custom compiler backend to schedule the Allreduce calls for better compute + communication overlap.
def compiler_fn(gm):
return torch.compile(gm, backend="inductor", fullgraph=True)
ddp_m = DDP(m, gradient_as_bucket_view=True, static_graph=True)
ddp_m = torch.compile(ddp_m, backend="inductor")
with compiled_autograd.enable(compiler_fn):
loss = ddp_m(inp, target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
The current DDP hooks are not traceable; the first place dynamo tracing breaks is the following stack trace during the pre_forward hooks.
> inputs, kwargs = self._pre_forward(*inputs, **kwargs)
> self._lazy_init()
> self._lazy_init_ran = True.
The graph compilation also fails with this particular error:
> AttributeError: type object '_DDPSink' has no attribute '_compiled_autograd_key'
Other than this, there are multiple parts of the DDP backward hook implementation which are not traceable. The expectation is that with a traceable DDP hook + CompiledAutoGrad, by registering a comm hook that uses functional collectives, we can get a combined graph including the collective calls at the compiler backend.
@wconstab
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
815 | 109,691 |
Extends the functionality of `nn.BatchNorm1d`.
|
oncall: distributed, module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
This feature request proposes adding `NaiveSyncBatchNorm1d` to the PyTorch library: an extension of the existing `nn.BatchNorm1d` module, designed to support synchronization across multiple devices, either locally or globally.
The motivation is to enhance PyTorch's capabilities in distributed deep learning scenarios; the proposed `NaiveSyncBatchNorm1d` module would fill this gap.
I propose the addition of the `NaiveSyncBatchNorm1d` module to PyTorch. The module provides the following features (a minimal sketch of the synchronization follows the list):
- Synchronization of batch normalization statistics (mean and variance) across multiple devices.
- Support for both global synchronization and local synchronization based on user requirements.
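A minimal sketch of the kind of synchronization involved (assumed helper name; the linked pytorchvideo source below is the reference implementation): average the per-device batch statistics across a process group before normalizing.
```python
import torch
import torch.distributed as dist

def sync_batch_stats(mean: torch.Tensor, var: torch.Tensor, group=None):
    # Average mean/var across devices; with global_sync=False, a smaller
    # sub-group of num_sync_devices ranks would be passed as `group`.
    world_size = dist.get_world_size(group)
    if world_size == 1:
        return mean, var
    stacked = torch.cat([mean, var])
    dist.all_reduce(stacked, op=dist.ReduceOp.SUM, group=group)
    stacked /= world_size
    return stacked[: mean.numel()], stacked[mean.numel():]
```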
### Alternatives
While there are alternative ways to implement synchronized batch normalization, `NaiveSyncBatchNorm1d` offers a simple and efficient solution.
### Additional context
Here's an example of how `NaiveSyncBatchNorm1d` can be used in PyTorch:
```python
sync_bn = NaiveSyncBatchNorm1d(num_sync_devices=4, global_sync=False, num_features=64)
output = sync_bn(input_tensor)
```
Source: https://github.com/facebookresearch/pytorchvideo/blob/64e5a17ccefcd6b93ad331d1a9c2a130f179ff44/pytorchvideo/layers/batch_norm.py#L10C44-L10C44
I would like to create a PR for the same.
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 9 |
816 | 109,687 |
[RFC]: Moving most torch.compile backends out of core by 12/1/23
|
triaged, topic: bc breaking, dependency issue, oncall: pt2, module: dynamo
|
## TL;DR
If you own a backend in https://github.com/pytorch/pytorch/tree/main/torch/_dynamo/backends and we don't hear back from you **we would like to delete it from core by 12/1/23** - if you strongly disagree please let us know ASAP and explain below why you feel the custom registration mechanism is not suitable
This will be BC breaking
I realize this date might be frustrating, but I've been giving a heads up to all the POCs listed below since March of this year, so without a due date I worry this won't be prioritized.
## Problem
When `torch.compile()` was first released it came with a `backend` parameter that made it really easy to try out different compilers. The problem was that
1. Many backends were not tested in CI so it was unclear what their expected pass rate was on the HF, TIMM or torchbench suite
2. The backends may not be running on the right hardware relative to CI
3. Because those backends are not maintained by the pytorch core/compiler team we can't offer the same support model to fix bugs
4. The backends fail by default without some optional dependencies that we can't add as pytorch dependencies or to pytorch CI, so out of the box users still need to install extra things
5. There is a perception that backends in core are higher quality than backends out of core, which may or may not be the case; we don't want to be in the business of blessing some backends at the expense of others
## Solution
So the solution we proposed was an out of core registration mechanism highlighted here https://pytorch.org/docs/main/torch.compiler_custom_backends.html#registering-custom-backends which meant that end users of pytorch could retain the convenience of `torch.compile(..., backend="...")` without having backend specific code living in core. In fact we believe that having the code out of core will create a superior backend since it will increase dev velocity and the testing infrastructure can and should mimic what your end users care about. For us a good proxy for what users care about has been a combination of huggingface, timm and torchbench models https://github.com/pytorch/pytorch/tree/main/benchmarks/dynamo
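For reference, the registration mechanism linked above is roughly the following usage (a minimal sketch; see the linked docs for the supported entry points):
```python
import torch
from torch._dynamo import register_backend

@register_backend
def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # Compile or transform the FX graph here; returning gm.forward runs it eagerly.
    return gm.forward

# Users can then select the backend by name, exactly like an in-core backend.
compiled = torch.compile(lambda x: x.relu(), backend="my_backend")
```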
If registering a custom backend is too clunky, please provide feedback on this RFC and we can make it better; our goal should be to impose little to no friction on end users relative to the current in-core backends. Remember that even an existing in-core backend will fail by default unless you install something.
If the goal is to promote a new backend, again there is no requirement that the code live in `pytorch/pytorch`; we in fact have a precedent for promoting research backends like Hidet that used an out-of-core registration mechanism,
which we featured prominently on our blog and retweeted: https://pytorch.org/blog/introducing-hidet/
So we want to clearly separate marketing from technical reasons for code to be in core. The general principle for new dynamo/compile backends is: if your code will be maintained by pytorch core/compiler engineers it should be in core, and if not, the Linux foundation marketing team can help you out.
## Criteria to market a backend
Again, we don't want to be in the business of blessing or endorsing backends; what matters most is what customers use and care about, not what I care about. To that end, there are some quality guidelines that are helpful to follow in order for us to market a backend, and you can use this as a checklist before engaging with the Linux foundation:
1. The backend has a clear list of maintainers
2. The backend has been maintained for at least 6 months
3. The pass rate is comparable to inductor, see https://hud.pytorch.org/benchmark/compilers
4. The perf speedups are comparable to inductor, see https://hud.pytorch.org/benchmark/compilers
You can leverage our benchmark scripts as a starting point to prove out 3 and 4: https://github.com/pytorch/pytorch/tree/main/benchmarks/dynamo. Moving forward we'll use these criteria to, for example, include a backend here: https://pytorch.org/docs/main/torch.compiler.html - again, this does not mean that your backend is pytorch blessed/approved/supported; it's just a proxy for what we think is a good backend
## Backend Tracker
We already have a precedent for not accepting new backends https://github.com/pytorch/pytorch/pull/104426 but the next step is making sure that all backends in core are maintained by the pytorch core/compiler team
These are the backends that we should move out of core
* [x] IPEX - this is registered out core https://github.com/pytorch/pytorch/pull/104329 - owner @ashokei
* [x] NVfuser was already removed https://github.com/pytorch/pytorch/pull/105789 - owner @IvanYashchuk
* [ ] ONNXRT - owner @abock
* [ ] TensorRT - this is already a shim backend and can be removed in fact the out of core registration mechanism already exists https://github.com/pytorch/TensorRT/pull/2311 - owner @narendasan
* [ ] TorchXLA - owner @JackCaoG
* [ ] TVM - owner @shingjan
cc @ezyang @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @malfet @gottbrath @gchanan
| 1 |
817 | 109,684 |
[pytree] Make `optree` optional and populate members from `_pytree` when not available
|
triaged, open source, ciflow/trunk, release notes: releng, module: pytree, ciflow/inductor, module: export
|
Follow-up PR for #93139
- #93139
Changes:
- Remove `torch/utils/_pytree.py` from the lint ignore list.
- Move functions in `torch._functorch.pytree_hacks` to `torch.utils._pytree` and update imports.
- Remove duplicate registration for immutable collections.
- Make `optree` optional in `torch.utils._cxx_pytree` and populate members from `_pytree` when not available.
- Register pytree nodes in both Python pytree and CXX pytree.
- Format files and update type hints.
See also: https://github.com/pytorch/pytorch/pull/93139#discussion_r1329479085
> -> Diff 1 add new dep
> Diff 2 Integration behind disabled flag
> Diff 3 pass tests with flag enabled and disabled
> Diff 4 set flag to default on let soak a few weeks
> Diff 5 remove flag, remove old system
cc @zou3519 @avikchaudhuri @gmagogsfm
| 11 |
818 | 109,677 |
[WIP] Dynamo CPU backend under Windows
|
open source, module: inductor, module: dynamo, ciflow/inductor, release notes: dynamo
|
Adding dynamo CPU backend support under Windows #90768
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
819 | 109,675 |
[FSDP] UnpicklingError when calling save_state_dict in distributed run
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
We are using multi-node training with FSDP, and we got the following error during checkpointing through `torch/distributed/checkpoint/state_dict_saver.py`:
```
File "/opt/ml/code/checkpoints.py", line 153, in _save
checkpoint.save_state_dict(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 119, in save_state_dict
return distW.all_reduce("write", write_data, finish_checkpoint)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 236, in all_reduce
final_result = self.broadcast_object(result)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 88, in broadcast_object
dist.broadcast_object_list(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2277, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1970, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '\x00'.
```
Originally we thought it only happened when overwriting a checkpoint in an existing folder, but we recently observed the error even when writing to a fresh folder, so it is non-deterministic. It might be related to this issue: https://github.com/pytorch/pytorch/issues/109396
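For context, a minimal sketch of the save path in question (assumptions: `model` is an already-initialized FSDP module, the process group is set up, and the checkpoint path is made up; this mirrors the documented `save_state_dict` usage rather than our exact training code):
```python
import torch.distributed.checkpoint as dist_cp
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType

with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
    state_dict = {"model": model.state_dict()}

dist_cp.save_state_dict(
    state_dict=state_dict,
    storage_writer=dist_cp.FileSystemWriter("/tmp/checkpoint"),  # assumed path
)
```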
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 4 |
820 | 109,666 |
FSDP: ShardedStateDict support for world_size = 1
|
oncall: distributed, module: fsdp
|
### 🐛 Describe the bug
When iterating on FSDP code, it's sometimes useful to set world_size = 1 to sanity check some things before launching a larger job. However, this currently requires switching the state_dict code to use FULL_STATE_DICT instead of SHARDED_STATE_DICT, which is a bit inconvenient from a developer velocity perspective.
This can be repro'd by setting world_size = 1, using StateDictType.SHARDED_STATE_DICT, and trying to load a checkpoint.
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 1 |
821 | 109,665 |
Add Pass to move constructors from cpu to cuda
|
ciflow/trunk, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109665
Sometimes indexing tensors are constructed on CPU and then used to index a CUDA tensor. This prevents cudagraphs when it does not need to. This PR adds a pass that moves constructors from cpu->cuda when we can prove the downstream uses can be safely converted.
This PR allows us to cudagraph `clip` from the blueberries model, which improves perf from ~1.5x to ~4x.
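An illustration of the pattern the pass targets (illustrative code only, not the pass itself): the index tensor below is built on CPU and then used to index a CUDA tensor, which forces a host-to-device copy at use time and can prevent cudagraph capture.
```python
import torch

src = torch.randn(8, device="cuda")
idx = torch.tensor([0, 2, 4])              # constructed on CPU
out = src[idx]                             # idx is implicitly moved to CUDA here

# After the pass, the constructor is moved so the index is born on the device:
idx_cuda = torch.tensor([0, 2, 4], device="cuda")
out2 = src[idx_cuda]
```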
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 6 |
822 | 109,664 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
823 | 109,658 |
Inductor lowering error for aten fallbacks with multiple outputs
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
aten.sort and aten.topk are aten fallback ops in inductor and have MultiOutputLayout as their `layout`, which leads to the lowering error `'MultiOutputLayout' object has no attribute 'size'`.
Questions:
- I would expect ir.MultiOutputLayout to be a subclass of ir.Layout, but it's not. Why?
- I am surprised that we don't have a unit test catching this.
### Error logs
Traceback (most recent call last):
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/testing/_internal/common_utils.py", line 2406, in wrapper
method(*args, **kwargs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/deeplearning/aot_inductor/test/test_custom_ops.py", line 119, in test_export_extern_fallback_nodes
self._test_export_extern_fallback_nodes(dynamic_shape=False)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/deeplearning/aot_inductor/test/test_custom_ops.py", line 103, in _test_export_extern_fallback_nodes
so_path = torch._inductor.aot_compile(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/__init__.py", line 48, in aot_compile
result = compile_fx_aot(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 850, in compile_fx_aot
return compile_fx(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 951, in compile_fx
return compile_fx(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 980, in compile_fx
return compile_fx(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 1159, in compile_fx
return aot_autograd(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_dynamo/backends/common.py", line 55, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_functorch/aot_autograd.py", line 3891, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_functorch/aot_autograd.py", line 3429, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_functorch/aot_autograd.py", line 2212, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_functorch/aot_autograd.py", line 2392, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_functorch/aot_autograd.py", line 1573, in aot_dispatch_base
compiled_fw = compiler(fw_module, flat_args)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 1096, in fw_compiler_base
return inner_compile(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 246, in wrapper
return inner_compile(gm, example_inputs, **kwargs_patched)
File "/usr/local/fbcode/platform010/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_dynamo/repro/after_aot.py", line 80, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/debug.py", line 228, in inner
return fn(*args, **kwargs)
File "/usr/local/fbcode/platform010/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/fb/utils.py", line 101, in newFunction
return old_func(*args, **kwargs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 340, in compile_fx_inner
compiled_graph: CompiledFxGraph = fx_codegen_and_compile(
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/compile_fx.py", line 550, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/graph.py", line 975, in compile_to_fn
code, linemap = self.codegen()
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/graph.py", line 925, in codegen
self.scheduler.codegen()
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/scheduler.py", line 1782, in codegen
self.codegen_extern_call(node)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/scheduler.py", line 1705, in codegen_extern_call
node.codegen(V.graph.wrapper_code)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/ir.py", line 3835, in codegen
super().codegen(wrapper)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/ir.py", line 3381, in codegen
V.graph.wrapper_code.generate_extern_kernel_alloc(self, args)
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/codegen/wrapper.py", line 1378, in generate_extern_kernel_alloc
size = self.codegen_shape_tuple(tuple(extern_kernel.get_size()))
File "/data/users/bahuang/fbsource/buck-out/v2/gen/fbcode/575d10d63021fe15/deeplearning/aot_inductor/test/__test_custom_ops__/test_custom_ops#link-tree/torch/_inductor/ir.py", line 2366, in get_size
return list(self.layout.size)
AttributeError: 'MultiOutputLayout' object has no attribute 'size'
### Minified repro
```
import torch
from torch.export import export  # assumed import for the `export` call below

class ModelWithMultiOutput(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return torch.sort(x, dim=-1)

device = torch.device("cuda")
m = ModelWithMultiOutput().to(device=device)
x = torch.zeros(8, device=device)
user_args = (x,)
export_model = export(m, user_args)
with torch.no_grad():
    gm = export_model.graph_module
    so_path = torch._inductor.aot_compile(gm, list(user_args))
```
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
824 | 109,654 |
support for fp8 allgather FSDP
|
release notes: distributed (c10d)
|
Combo with https://github.com/facebookresearch/fairscale/pull/1136
| 4 |
825 | 109,653 |
[WIP] compiled autograd on inductor torchbench
|
release notes: releng, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109653

| 4 |
826 | 109,652 |
InstanceNorm does not catch dim mismatch
|
module: nn, triaged, actionable
|
### 🐛 Describe the bug
InstanceNorm with a specified input C works on tensors of arbitrary dimensions/channels (with the number of channels not necessarily matching C) without throwing a dimension mismatch error. For example:
```
m = torch.nn.InstanceNorm1d(64)
input = torch.randn(4, 80, 300)
output = m(input)
```
### Versions
PyTorch v2.0
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 1 |
827 | 109,650 |
Add ``onnx`` backend to ``torch.export`` API
|
module: onnx, open source, release notes: onnx, module: dynamo, ciflow/inductor, module: export
|
Related to https://github.com/pytorch/pytorch/issues/109131
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109650
* #109649
This PR adds the ONNX exporter backend for ``torch.export``.
The ONNX exporter will leverage Torch Dynamo under the hood, but produce
an ONNX model as a result of the export. The ONNX backend will also
enable ONNX Runtime to transparently execute the ONNX model.
An example usage is shown below:
```python
import torch
def func(x):
y = x + 1
z = y.relu()
return (y, z)
inputs = (torch.randn(1, 1, 2),)
export_options = torch.onnx.ExportOptions()
exported_onnx = torch.export.export(func, inputs, backend="onnx", options={"export_options": export_options})
onnxruntime_output = exported_onnx(*inputs)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
828 | 109,649 |
Add `backend` concept to `torch.export` API
|
module: bc-breaking, open source, topic: bc breaking, module: dynamo, module: export, suppress-bc-linter
|
Related to https://github.com/pytorch/pytorch/issues/109131
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #109650
* __->__ #109649
This PR proposes a way to allow ``torch.export.export`` API to support
multiple backends, similar in principle to the ``torch.compile`` API.
With the new API, one backend can produce, for example, ``GraphModule`` (default backend?)
while the ``onnx`` backend can produce an ``onnx.ModelProto``, unifying all export APIs
from different vendors into one cohesive API.
Backend implementations are separate, as they are today, and users could register
custom backends through ``register_backend`` or
list the available ones through ``list_backends``
Below is an example on how a model would be exported with the new API
```python
import torch
def f(x, y):
return x[0] + y
inp = ([torch.ones(1, 3)], torch.ones(1, 3))
exported_program = torch.export.export(f, inp, backend="dynamo", options={}) # ``backend`` and ``options`` are optionals
output = exported_program(*inp)
```
Besides the main `torch.export.export` API, there are others that need to be discussed:
* `torch.export.load` and `torch.export.save` are also backend specific and ONNX backend, for example, could also implement the same functionalities
* `constrain_as_size`, `constrain_as_value`, `dynamic_dim` that may or may not be reused by other backends
cc @ezyang @gchanan @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 8 |
829 | 109,642 |
Enable masked_scatter_backward for inductor
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109642
masked_scatter_backward was previously implemented as a
CompositeExplicitAutograd, which involved a decomp that calls
masked_select, and masked_select in general produces data-dependent
shapes that inductor doesn't support. But masked_scatter_backward
reshapes the return value of masked_select such that the end result has
a static shape again.
I have converted masked_scatter_backward into an aten op to avoid this
issue.
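A small illustration of the shape issue described above (illustrative only): the `masked_select` result has a data-dependent length, while scattering the values back yields the original static shape.
```python
import torch

x = torch.randn(4, 4)
mask = x > 0

selected = torch.masked_select(x, mask)
print(selected.shape)  # length depends on the data, only known at runtime

out = torch.zeros_like(x).masked_scatter(mask, selected)
print(out.shape)       # torch.Size([4, 4]) -- static again
```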
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
830 | 109,633 |
dynamo: break graph when "out" has complex dtype
|
open source, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109633
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
831 | 109,630 |
[Inductor] Move fake_tensors to the same device as example_inputs
|
topic: not user facing, module: inductor, ciflow/inductor
|
When I torch.export a model on GPU and then AOTInductor-compile it with CPU sample_inputs, I expect to hit the CPU compilation path; however, I got the Triton-compiled path instead.
This PR fixes this.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
832 | 109,627 |
[ONNX] Enable more OpInfo tests in fx
|
open source, topic: not user facing
|
### TODO
1. Tolerance
2. DType xfail
| 1 |
833 | 109,610 |
AsyncCompile loses useful exception backtrace in __get_result
|
good first issue, module: logging, triaged, oncall: pt2, module: inductor
|
### π Describe the bug
Steps to reproduce:
1. Inject a synthetic failure in some code that is invoked solely from the generated wrapper py file. E.g.,
```
diff --git a/torch/_inductor/triton_heuristics.py b/torch/_inductor/triton_heuristics.py
index badacf0d2a4..7364a5b8909 100644
--- a/torch/_inductor/triton_heuristics.py
+++ b/torch/_inductor/triton_heuristics.py
@@ -961,6 +961,7 @@ def pointwise(size_hints, meta, tile_hint=None, filename=None):
def reduction(size_hints, reduction_hint=False, meta=None, filename=None):
"""args to @triton.heuristics()"""
assert meta is not None
+ assert False
rnumel = size_hints[-1]
if len(size_hints) == 2:
contiguous_config = triton_config_reduction(
```
2. Trigger this codepath in a test program
```
import torch
@torch.compile(dynamic=True)
def f(x):
return (x + 1).sum()
f(torch.randn(20000, device='cuda'))
```
Here's the backtrace you get:
```
File "/tmp/torchinductor_ezyang/ww/cwwnfs543wa53bsz7p6na72nsllphuso5hdekryzodahas6hcwlu.py", line 112, in <module>
async_compile.wait(globals())
File "/data/users/ezyang/a/pytorch/torch/_inductor/codecache.py", line 1858, in wait
scope[key] = result.result()
File "/data/users/ezyang/a/pytorch/torch/_inductor/codecache.py", line 1705, in result
self.future.result()
File "/home/ezyang/local/a/pytorch-env/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/ezyang/local/a/pytorch-env/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError:
```
This is useless, we lost the stack trace associated with the exception in the other process.
In small scale cases, you can recover useful information with `TORCHINDUCTOR_COMPILE_THREADS=1 python n.py`
```
File "/data/users/ezyang/a/pytorch/torch/_inductor/codecache.py", line 1415, in load
mod = PyCodeCache.load(source_code)
File "/data/users/ezyang/a/pytorch/torch/_inductor/codecache.py", line 1281, in load
return cls.load_by_key_path(key, path, linemap)
File "/data/users/ezyang/a/pytorch/torch/_inductor/codecache.py", line 1303, in load_by_key_path
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_ezyang/i3/ci3j5qcext3jp3rjk5l45e5smehavbnh4cs36yj7zxe5th7tg2gx.py", line 10, in <module>
@reduction(
File "/data/users/ezyang/a/pytorch/torch/_inductor/triton_heuristics.py", line 964, in reduction
assert False
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError:
```
which is a nice workaround, but we should just fix this properly.
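One generic way to fix it (a minimal sketch, not the actual inductor change): format the traceback inside the worker and attach it to the exception the parent re-raises, since the traceback string survives pickling even when the traceback object does not.
```python
import traceback
from concurrent.futures import ProcessPoolExecutor, Future

def _call_with_traceback(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except Exception as e:
        # Carry the child-side stack trace back as text.
        raise RuntimeError(f"compile worker failed:\n{traceback.format_exc()}") from e

def submit_with_traceback(pool: ProcessPoolExecutor, fn, *args, **kwargs) -> Future:
    return pool.submit(_call_with_traceback, fn, *args, **kwargs)
```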
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 9 |
834 | 109,607 |
"RuntimeError: (*bias): last dimension must be contiguous" with F.scaled_dot_product_attention + torch.compile
|
triaged, oncall: pt2, module: inductor
|
### π Describe the bug
My code raises a RuntimeError when I try to use `F.scaled_dot_product_attention()` together with `torch.compile()`; please see the minimal example below. This is on the latest nightly version.
### Error logs
```
Traceback (most recent call last):
File "/mnt/c/sw/plb_hpc/src/bug.py", line 54, in <module>
static(query, key, value, bias, weights)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 406, in _fn
return fn(*args, **kwargs)
File "/mnt/c/sw/plb_hpc/src/bug.py", line 12, in fn
def fn(q, k, v, b, w):
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 406, in _fn
return fn(*args, **kwargs)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3905, in forward
return compiled_fn(full_args)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1482, in g
return f(*args)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2533, in runtime_wrapper
all_outs = call_func_with_args(
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1506, in call_func_with_args
out = normalize_as_list(f(args))
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1594, in rng_functionalization_wrapper
return compiled_fw(args)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 396, in __call__
return self.get_current_callable()(inputs)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 616, in run
return model(new_inputs)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 423, in _run_from_cache
return compiled_graph.compiled_artifact(inputs)
File "/tmp/torchinductor_caleb/zb/czb6rbgh2vvegcwcj2yimlnpwwzdkqfw3k727aos2kyyabox2th3.py", line 75, in call
buf2 = aten._scaled_dot_product_efficient_attention(arg0_1, arg1_1, arg2_1, buf1, False)
File "/home/caleb/miniconda3/envs/debug/lib/python3.10/site-packages/torch/_ops.py", line 692, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: (*bias): last dimension must be contiguous
```
### Minified repro
```python
import torch
import torch.nn.functional as F
from einops import rearrange
def fn(q, k, v, b, w):
# apply linear layer to bias
b = rearrange(b @ w, "B Q K H -> B H Q K")
# if you comment out this line, the error goes away
b = b.contiguous()
# attention
return F.scaled_dot_product_attention(
query=q,
key=k,
value=v,
attn_mask=b,
)
if __name__ == "__main__":
# setup
DEVICE = torch.device("cuda:0")
DTYPE = torch.float16
torch.manual_seed(999)
B = 3
H = 8
Q = 99
K = 80
D = 32
C_bias = 128
# inputs
query = torch.randn((B, H, Q, D), device=DEVICE, dtype=DTYPE)
key = torch.randn((B, H, K, D), device=DEVICE, dtype=DTYPE)
value = torch.randn((B, H, K, D), device=DEVICE, dtype=DTYPE)
bias = torch.randn((B, Q, K, C_bias), device=DEVICE, dtype=DTYPE)
weights = torch.randn((C_bias, H), device=DEVICE, dtype=DTYPE)
# eager version is okay
fn(query, key, value, bias, weights)
# compiled version causes error
static = torch.compile(fn)
static(query, key, value, bias, weights)
```
### Versions
```
PyTorch version: 2.2.0.dev20230918
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 536.23
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 10
On-line CPU(s) list: 0-9
Thread(s) per core: 2
Core(s) per socket: 5
Socket(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 113
Model name: AMD Ryzen 5 3600 6-Core Processor
Stepping: 0
CPU MHz: 3593.258
BogoMIPS: 7186.51
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 160 KiB
L1i cache: 160 KiB
L2 cache: 2.5 MiB
L3 cache: 16 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr virt_ssbd arat umip rdpid
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.2.0.dev20230918
[pip3] triton==2.1.0
[conda] blas 1.0 mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.25.2 py310ha4c1d20_0 conda-forge
[conda] pytorch 2.2.0.dev20230918 py3.10_cuda11.8_cudnn8.7.0_0 pytorch-nightly
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchtriton 2.1.0+6e4932cda8 py310 pytorch-nightly
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
835 | 109,601 |
[inductor] Update triton pin
|
open source, with-ssh, ciflow/trunk, topic: not user facing, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110911
* #109132
* #106581
* __->__ #109601
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 5 |
836 | 109,598 |
ensure uint8 is honoured for cpu operations in dynamo
|
module: cpu, triaged, open source, module: amp (automated mixed precision), NNC, release notes: quantization, topic: not user facing, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor, module: export
|
Fixes #105264
follow on from pull request #108513
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @EikanWang @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
837 | 109,592 |
test_memory_timeline fails on PPC due to extra temporaries
|
triaged, module: POWER, oncall: profiler
|
### π Describe the bug
Running the test on PPC64le with `python profiler/test_memory_profiler.py -k test_memory_timeline` results in a failure. Printing the full diff yields an extra temporary between the first 2:
```
+ create TEMPORARY 11(v0) 0.7 kB
+ destroy TEMPORARY 11(v0) 0.7 kB
```
Raising the minimum size from 256 to 1024 to filter this allocation makes the test pass, so I assume the test is too sensitive.
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux Server release 7.6 (Maipo) (ppc64le)
GCC version: (GCC) 12.2.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.17
Python version: 3.10.8 (main, Jul 25 2023, 10:52:38) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-4.14.0-115.19.1.el7a.ppc64le-ppc64le-with-glibc2.17
cc @robieta @chaekit @aaronenyeshi @nbcsm @guotuofeng @guyang3532 @gaoteng-git @tiffzhaofb @dzhulgakov @davidberard98
| 0 |
838 | 109,590 |
[WIP] Trace model attribute mutation
|
module: dynamo, ciflow/inductor, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109590
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
839 | 109,586 |
Max pool with negative integer inputs and channels_last memory layout gives the wrong values
|
module: nn, triaged
|
### π Describe the bug
When running the max pool operation on integral tensors with negative values and the channels_last memory format, windows whose true maximum is negative incorrectly return zero.
``` python
import torch
t = torch.arange(24).view(1,1,4,6) - 12
expected = torch.max_pool2d(t, 2)
actual = torch.max_pool2d(t.clone(memory_format=torch.channels_last), 2)
assert torch.all(expected == actual).item()
# Note that the assert fails because
# expected is
# tensor([[[[-5, -3, -1],
# [ 7, 9, 11]]]])
# actual is
# tensor([[[[ 0, 0, 0],
# [ 7, 9, 11]]]])
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230730+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.6 (https://github.com/conda-forge/clangdev-feedstock ceeebe884c3cfd7160cf5a43e147f94439fafee3)
CMake version: version 3.25.3
Libc version: glibc-2.35
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 60
On-line CPU(s) list: 0-59
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7763 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 30
Socket(s): 1
Stepping: 1
BogoMIPS: 4890.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip pku ospke vaes vpclmulqdq rdpid arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.9 MiB (30 instances)
L1i cache: 1.9 MiB (30 instances)
L2 cache: 15 MiB (30 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 2
NUMA node0 CPU(s): 0-29
NUMA node1 CPU(s): 30-59
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] torch==2.1.0.dev20230730+cpu
[pip3] torchvision==0.16.0.dev20230730+cpu
[pip3] triton==2.0.0
[conda] numpy 1.25.2 py310ha4c1d20_0 conda-forge
[conda] torch 2.1.0.dev20230730+cpu pypi_0 pypi
[conda] torchvision 0.16.0.dev20230730+cpu pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
840 | 109,585 |
[Torch-Onnx] Exporting the operator 'quantized::conv_transpose2d' to ONNX opset version 13 is not supported.
|
module: onnx, triaged
|
### π Describe the bug
Exporting the operator 'quantized::conv_transpose2d' to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
After I added the quantized::conv_transpose2d symbolic function as follows:
```
# @parse_args("v", "v", "f", "f")
# def convert_quant_conv_transpose2d(g, q_input, q_weight, output_scale, output_zero_point):
# inp, input_scale, _, _ = dequantize_helper(g, q_input)
# unpacked_inputs = _unpack_param(q_weight)#2 6 51 52
# output = opset9.conv2d(g, input, weight, bias, stride, padding, dilation, groups)
# return quantize_helper(g, output, op_scale, op_zero_point)
# torch.onnx.register_custom_op_symbolic(
# 'quantized::conv_transpose2d', convert_quant_conv_transpose2d, 13)
```
I found that the packed params need to be unpacked.
unpackQuantizedWeightsHelper may need to support quantized::conv_transpose; then I can support this op by adding a symbolic function.
If anyone can help, thanks!
### Versions
```
import torch
import random
x = torch.ones(2, 6, 51, 52)#(5, 3)
scale = random.uniform(0., 2.)
zp = random.randint(0, 5)
input_shapes = [x.shape]
spatial_size = len(input_shapes[0]) - 2
in_channels= input_shapes[0][1]
out_channels =6
stride = [1,1]
padding = [0,0]
dilation =[1,1]
groups = 1
bias = True
output_padding = [0,0]
kernel_size=3
kernel_size = [kernel_size] * spatial_size
weight_shape = [input_shapes[0][1], out_channels // groups] + list(kernel_size)
w = torch.quantize_per_tensor(torch.randn(weight_shape), scale=0.5, zero_point=2, dtype=torch.qint8)
class test_module(torch.nn.Module):
def __init__(self):
super(test_module, self).__init__()
self.deconv = torch.nn.quantized.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
if bias:
b = torch.randn(out_channels)
else:
b = torch.zeros(out_channels)
self.deconv.set_weight_bias(w, b)
self.deconv.scale = scale
self.deconv.zero_point = zp
def forward(self, x):
x = torch.quantize_per_tensor(x, scale=2.1, zero_point=2, dtype=torch.quint8)
return self.deconv(x)
model1 = test_module()
input=x
out1=model1(input)
print('..')
# traced_model= torch.jit.trace(model,input_batch,strict=False)
traced_model= torch.jit.trace(model1,input)
traced_model.save('torch_quant_convtranspose.pt')
# script_model=torch.jit.script(model)
# script_model.save('deep_lab_v3_script.pt')
# For inference
model1.eval()
from torch.onnx import register_custom_op_symbolic
from torch.onnx.symbolic_helper import parse_args,quantized_args,dequantize_helper,_unpack_list
from torch.onnx import symbolic_opset9 as opset9
# @parse_args("v", "v", "f", "f")
# def convert_quant_conv_transpose2d(g, q_input, q_weight, output_scale, output_zero_point):
# inp, input_scale, _, _ = dequantize_helper(g, q_input)
# unpacked_inputs = _unpack_param(q_weight)#2 6 51 52
# #output = opset9.conv2d(g, input, weight, bias, stride, padding, dilation, groups)
# #return quantize_helper(g, output, op_scale, op_zero_point)
#
#
# torch.onnx.register_custom_op_symbolic(
# 'quantized::conv_transpose2d', convert_quant_conv_transpose2d, 13)
# optionally, if you want to export the model to ONNX:
torch.onnx.export(traced_model, input, "torch_conv_transpose.onnx", opset_version = 13)
```
| 0 |
841 | 109,583 |
[dynamo][jagged tensor] Slow compilation time for a helper function of jagged tensor
|
triaged, oncall: pt2, module: dynamic shapes
|
### π Describe the bug
~~~
import copy
import torch
mylist = []
@torch.compile(backend="eager")
def foo(stride_per_key_per_rank):
return [sum(stride) for stride in stride_per_key_per_rank]
x = list(range(100))
x = [copy.deepcopy(x) for _ in range(100)]
foo(x)
~~~
This type of pattern is common, and it is very expensive for Dynamo to trace, which makes compilation really slow.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
NA
cc @ezyang @msaroufim @wconstab @bdhirsh
| 2 |
842 | 109,582 |
Make Dropout take a dim=... argument
|
module: nn, triaged, needs research, has workaround
|
### π The feature, motivation and pitch
Currently PyTorch has Dropout, Dropout1d, Dropout2d and Dropout3d, for handling data of shape (*, D), (N, C, D), (N, C, W, H) or (N, C, W, H, Z) respectively.
However none of them support the case where I want to do channel dropout but have shape, say, (B, N, C, D).
It would be much easier if the standard Dropout class just took a `dim` argument for specifying the level at which to do dropout.
For example, `dim=-1` could be the default, and would correspond to the current Dropout behavior.
`dim=-2` would be like Dropout1d, but supporting more than one batch dimension, e.g. (B, N, C, D).
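As a minimal sketch of the proposed semantics (assumption: the mask varies over all dimensions up to and including `dim` and is broadcast over the trailing ones, mirroring Dropout1d/2d/3d), this is also a usable workaround today:
```python
import torch
import torch.nn as nn

class DimDropout(nn.Module):
    """Drop entire slices along `dim`, broadcasting the mask over trailing dims."""
    def __init__(self, p: float = 0.5, dim: int = -1):
        super().__init__()
        self.p, self.dim = p, dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return x
        dim = self.dim if self.dim >= 0 else x.dim() + self.dim
        # Mask shape: full shape up to and including `dim`, then 1s (broadcast) after it.
        mask_shape = tuple(x.shape[: dim + 1]) + (1,) * (x.dim() - dim - 1)
        keep = (torch.rand(mask_shape, device=x.device) >= self.p).to(x.dtype)
        return x * keep / (1.0 - self.p)

# dim=-1 behaves like nn.Dropout; dim=-2 on (B, N, C, D) drops whole channels.
```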
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 4 |
843 | 109,581 |
torch.optim.Adafactor
|
feature, module: optimizer, triaged, needs research
|
### π The feature, motivation and pitch
Models are getting harder to fit on a limited number of GPUs and ADAM doesn't help since its memory overhead is 2N where N is the number of parameters in a model

We don't like to merge optimizers into core because they rarely stand the test of time, but ADAM has. A memory-efficient alternative that's been in use at many startups I've talked to, at Twitter, and at larger companies (https://github.com/google-research/t5x/blob/main/t5x/adafactor.py) is Adafactor; see the discussion here https://twitter.com/HamelHusain/status/1702004478369743125 - it has also come up in a GitHub issue before: https://github.com/pytorch/pytorch/issues/98434
Assigning to myself since I want a starter task in optimizers
### Alternatives
There is a working implementation in fairseq which Huggingface has borrowed https://github.com/facebookresearch/fairseq/blob/main/fairseq/optim/adafactor.py#L66 which is a good starting point
### Additional context
You might be wondering why we should merge yet another optimizer, since optimizers are plagued by lack of reproducibility and sensitivity to hyperparameters.
However, ADAM has stood the test of time, but it also has a high memory overhead: for each parameter you need to store the first and second moments, so if your model has N parameters then your optimizer state is 2N. Also, few people change its hyperparameters; in fact the hyperparams have remained the same since torch days:
* https://github.com/pytorch/pytorch/blob/main/torch/optim/adam.py
* https://github.com/torch/optim/blob/master/adam.lua
So, as Blueberries projects move towards finetuning after the PyTorch conference, and as more members of the community try to fit larger models on their GPUs, it's critical we find a memory-efficient alternative to ADAM. That is why Adafactor might just be a strong candidate for the default optimizer we recommend to people finetuning with a limited number of GPUs - and this includes ourselves in the torchbench CI. Adafactor comes with its own LR scheduler and will help us avoid dramatically reducing batch sizes in our benchmarks.
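To make the memory argument concrete, here is a rough sketch of Adafactor's factored second moment for a single matrix parameter (simplified: no relative step sizes or update clipping from the paper, so treat it as an assumption-laden illustration rather than the fairseq implementation): instead of an n x m second-moment buffer, only an n-vector and an m-vector are stored.
```python
import torch

def adafactor_matrix_step(param, grad, row_ema, col_ema, lr=1e-3, beta2=0.999, eps=1e-30):
    sq = grad * grad + eps
    # O(n + m) state instead of ADAM's O(n * m) second moment.
    row_ema.mul_(beta2).add_(sq.mean(dim=1), alpha=1 - beta2)   # shape (n,)
    col_ema.mul_(beta2).add_(sq.mean(dim=0), alpha=1 - beta2)   # shape (m,)
    # Rank-1 reconstruction of the second moment from the two factored statistics.
    v_hat = torch.outer(row_ema, col_ema) / row_ema.mean()
    param.add_(grad / v_hat.sqrt(), alpha=-lr)
```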
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 3 |
844 | 109,580 |
Support FloatFunctional subclasses in eager mode
|
release notes: quantization, release notes: AO frontend
|
Summary:
X-link: https://github.com/facebookresearch/d2go/pull/618
Allow eager mode prepare to insert observer for classes subclassing `FloatFunctional`
Differential Revision: D48377683
| 2 |
845 | 109,579 |
[Android: React Native] couldn't find DSO to load: libtorch-code-gen.so when loading model
|
oncall: mobile
|
### π Describe the bug

my build.gradle:
```
.....
// exclude to prevent duplicate classes
implementation ('org.pytorch:pytorch_android_lite:1.9.0'){
exclude module: 'fbjni-java-only'
}
implementation ('org.pytorch:pytorch_android_torchvision:1.9.0'){
exclude module: 'fbjni-java-only'
}
```
My code to load model
```
this.module = LiteModuleLoader.load(assetFilePath(context, "optimized_model.ptl"));
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture=9
CurrentClockSpeed=1382
DeviceID=CPU0
Family=205
L2CacheSize=5120
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=1382
Name=11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.25.2
[conda] Could not collect
| 0 |
846 | 109,578 |
Add Half support for AvgPool2d on CPU
|
module: cpu, open source, module: half, ciflow/trunk, topic: not user facing, ciflow/mps, ciflow/inductor
|
Add Half support for AvgPool2d (both channels last and channels first) on CPU
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
847 | 109,577 |
ONNX Export error
|
module: onnx, triaged
|
### π Describe the bug
@thiagocrepaldi I hit the error `ValueError: Expected more than 1 value per channel when training` when setting training=torch.onnx.TrainingMode.TRAINING on my export() call. It seems that this error is caused by batch norm; how can I fix this issue? See #95867.
I cannot reopen that issue, so I am creating a new one here. Thanks a lot.
### Versions
PyTorch version: 1.13.0a0+git49444c3
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA TITAN RTX
GPU 1: NVIDIA TITAN RTX
GPU 2: NVIDIA TITAN RTX
GPU 3: NVIDIA TITAN RTX
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Stepping: 4
CPU MHz: 2318.408
CPU max MHz: 3000.0000
CPU min MHz: 800.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 20 MiB
L3 cache: 27.5 MiB
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.23.1
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==1.13.0a0+git49444c3
[pip3] torchvision==0.15.1a0+42759b1
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] numpy 1.23.1 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 1.13.0a0+git49444c3 pypi_0 pypi
[conda] torchvision 0.15.1a0+42759b1 pypi_0 pypi
| 1 |
848 | 109,573 |
[Not for merge][Repro] Unbacked symint in Inductor size_hint output
|
module: inductor, ciflow/inductor
|
Repro script:
```python
# Test 1: tolist -> zeros -> cat
import torch
import torch.export
import torch._dynamo.config
def f(x):
a, b = x.tolist()
torch.export.constrain_as_size(a)
torch.export.constrain_as_size(b)
return torch.cat([torch.zeros(a, device="cuda"), torch.zeros(b, device="cuda")])
with (
torch._dynamo.config.patch(
dynamic_shapes=True,
capture_dynamic_output_shape_ops=True,
capture_scalar_outputs=True,
),
):
compiled = torch.compile(f, fullgraph=True, dynamic=True)
x = torch.tensor([1, 4], device="cuda")
ref = f(x)
actual = compiled(x)
print(f"ref: {ref}, actual: {actual}")
assert torch.allclose(ref, actual)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
849 | 109,569 |
[WIP] fix: added check for convolution output shape wrt kernel_size and input length
|
open source
|
Fixes #109552
TODO: check more convolution types and add tests
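For context, a sketch of the kind of check this adds, using the standard Conv1d output-length formula from the docs (the placement and exact error wording here are assumptions):
```python
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

def check_conv1d_shape(l_in, kernel_size, stride=1, padding=0, dilation=1):
    l_out = conv1d_out_len(l_in, kernel_size, stride, padding, dilation)
    if l_out < 1:
        raise RuntimeError(
            f"Calculated output length ({l_out}) is too small for input length "
            f"{l_in} with kernel_size={kernel_size}, stride={stride}, "
            f"padding={padding}, dilation={dilation}"
        )
```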
| 3 |
850 | 109,565 |
Adding T4 GPUs to inductor nightly benchmarks
|
open source, topic: not user facing
|
1. Forks the nightly benchmarks workflow to support multiple hardware types.
2. Adds g4dn instances with T4 Turing GPUs to the list of tested devices (Inference only)
| 1 |
851 | 109,563 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
852 | 109,552 |
[fake/meta] Bad meta kernel for conv1d
|
triaged, oncall: pt2, module: fakeTensor
|
### π Describe the bug
```
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = torch.nn.Conv1d(6, 6, 6, groups=2)
def forward(self, x):
return self.conv(x)
mod = M()
opt_mod = torch.compile(mod, backend="aot_eager")
opt_mod(torch.rand(1, mod.conv.in_channels, mod.conv.weight.shape[1]))
```
### Error logs
```
File "/data/users/anijain/pytorch/torch/_dynamo/utils.py", line 1416, in run_node
return nnmodule(*args, **kwargs)
File "/data/users/anijain/pytorch/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/anijain/pytorch/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/anijain/pytorch/torch/nn/modules/conv.py", line 310, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/data/users/anijain/pytorch/torch/nn/modules/conv.py", line 306, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
File "/data/users/anijain/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 1304, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 1547, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/data/users/anijain/pytorch/torch/_subclasses/fake_tensor.py", line 737, in conv
out = func(**kwargs)
File "/data/users/anijain/pytorch/torch/_ops.py", line 458, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/anijain/pytorch/torch/_meta_registrations.py", line 2021, in meta_conv
out = input_tensor.new_empty(shape_out)
File "/data/users/anijain/pytorch/torch/_refs/__init__.py", line 4569, in new_empty
return torch.empty(
torch._dynamo.exc.TorchRuntimeError: Failed running call_module L__self___conv(*(FakeTensor(..., size=(1, 6, 3)),), **{}):
Trying to create tensor with negative dimension -2: [1, 6, -2]
```
### Minified repro
_No response_
### Versions
NA
cc @ezyang @msaroufim @wconstab @bdhirsh @eellison
| 1 |
853 | 109,551 |
[wip]: fsspec remote code cache
|
topic: not user facing, module: inductor, ciflow/inductor
|
The applications for this PR are around reducing cold start times
1. Going from training to inference in case you're using the same machine
2. Distributed
3. CI where the majority of some code does not change and you'd like to reduce test time
fsspec is amazing and supports the widest possible range of file systems. In my tests I mocked S3 and local paths, but more setups could work; it's already a PyTorch dependency: https://download.pytorch.org/whl/nightly/fsspec/
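As a tiny illustration of the idea (assumed cache layout, not the PR's actual code): keyed by a content hash, an artifact can be written to and read from any fsspec-supported location, whether s3://, gs://, or a local directory.
```python
import hashlib
import fsspec

def _cache_path(root: str, source_code: str) -> str:
    key = hashlib.sha256(source_code.encode()).hexdigest()
    return f"{root}/{key}.bin"

def put_artifact(root: str, source_code: str, artifact: bytes) -> None:
    with fsspec.open(_cache_path(root, source_code), "wb") as f:
        f.write(artifact)

def get_artifact(root: str, source_code: str):
    try:
        with fsspec.open(_cache_path(root, source_code), "rb") as f:
            return f.read()
    except FileNotFoundError:
        return None  # cache miss
```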
In case the test is skipped in CI, here is a local run:
```
(sourcetorch) ubuntu@ip-172-31-1-136:~/pytorch/test/inductor$ pytest -s test_cache.py
============================================================================ test session starts ============================================================================
platform linux -- Python 3.10.10, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/ubuntu/pytorch
configfile: pytest.ini
plugins: hypothesis-6.75.1
collected 4 items
test_cache.py ....
============================================================================= 4 passed in 2.74s =============================================================================
(sourcetorch) ubuntu@ip-172-31-1-136:~/pytorch/test/inductor$
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
854 | 109,550 |
[foreach] check for empty tensors before dispatching to MTA
|
release notes: foreach_frontend, topic: bug fixes
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109550
* #109534
| 2 |
855 | 109,545 |
Update torchbench pin
|
ciflow/trunk, topic: not user facing, ciflow/inductor, merging
|
To include https://github.com/pytorch/benchmark/pull/1907
| 4 |
856 | 109,539 |
Torch FX SubgraphMatcher Any / Oneof Patterns
|
triaged, module: fx, module: fx.passes
|
### π The feature, motivation and pitch
I'm using the [SubgraphMatcher](https://github.com/pytorch/pytorch/blob/aed9bee0413dac190452fbfa9ab2a44b6e6843f5/torch/fx/passes/utils/matcher_utils.py#L51) to match torch.fx graphs. I'd like some sort of set-of-nodes functionality when authoring patterns (i.e. match a convolution -> batch norm -> {relu, gelu, ... any activation}) without having to author one pattern per node type.
This was [previously implemented](https://github.com/pytorch/pytorch/pull/82853), but [then backed out](https://github.com/pytorch/pytorch/pull/83922) (for reasons I did not see). Is this on the roadmap, or is there an alternative way to achieve this functionality that I'm not aware of?
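One workaround in the meantime (a rough sketch, assuming the current `SubgraphMatcher(pattern_graph)` / `.match(graph)` interface; not the requested feature) is to generate one pattern per activation programmatically and union the matches:
```python
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.passes.utils.matcher_utils import SubgraphMatcher

def make_pattern(act):
    def pattern(x, w):
        return act(F.conv2d(x, w))
    return symbolic_trace(pattern)

def match_any_activation(gm, activations=(F.relu, F.gelu)):
    matches = []
    for act in activations:
        matcher = SubgraphMatcher(make_pattern(act).graph)
        matches.extend(matcher.match(gm.graph))
    return matches
```
This works but scales poorly as the set of node types grows, which is why a built-in any/oneof pattern would be nicer.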
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| 0 |
857 | 109,528 |
attn_output_weights sometimes returns `None`
|
triaged, module: multi-headed-attention
|
### π Describe the bug
```python
attn_output, attn_output_weights = F.multi_head_attention_forward(
query, key, value, self.embed_dim, self.num_heads,
self.in_proj_weight, self.in_proj_bias,
self.bias_k, self.bias_v, self.add_zero_attn,
self.dropout, self.out_proj.weight, self.out_proj.bias,
training=self.training,
key_padding_mask=key_padding_mask,
need_weights=need_weights,
attn_mask=attn_mask,
average_attn_weights=average_attn_weights,
is_causal=is_causal)
```
attn_output_weights = None sometimes
### Versions
2.0.1
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] easy-torch==1.3.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==2.0.8
[pip3] torch==2.0.1
[pip3] torch-summary==1.4.5
[pip3] torchaudio==2.0.2
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.1.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.22.3 pypi_0 pypi
[conda] pytorch 2.0.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 2.0.8 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.0.2 py310_cu118 pytorch
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.1.2 pypi_0 pypi
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.2 py310_cu118 pytorch
| 0 |
858 | 109,521 |
DRAFT
|
release notes: quantization
|
This is https://github.com/pytorch/pytorch/pull/107782 but without GH stack
| 2 |
859 | 109,520 |
`TORCH_DISTRIBUTED_DEBUG=DETAIL` raises a RuntimeError on `_start_coalescing()`
|
oncall: distributed
|
### π Describe the bug
When using `TORCH_DISTRIBUTED_DEBUG=DETAIL`, calling `group._start_coalescing(device)` with an NCCL group raises:
```
RuntimeError: Backend nccl does not implement startCoalescing
```
The root cause is likely similar to #75011
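A minimal repro sketch (to be launched with two ranks via torchrun; the private `_start_coalescing(device)` call is spelled as in the report above, and `_get_default_group()` is a private helper, so treat the details as assumptions):
```python
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group("nccl")
    device = torch.device("cuda", int(os.environ["LOCAL_RANK"]))
    group = dist.distributed_c10d._get_default_group()
    # With TORCH_DISTRIBUTED_DEBUG=DETAIL the wrapper process group is used,
    # which reportedly does not implement startCoalescing.
    group._start_coalescing(device)

if __name__ == "__main__":
    main()
```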
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+29c30b1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 19.3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] nvidia-pytriton==0.3.0
[pip3] pytorch-lightning==2.0.7
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+29c30b1
[pip3] torch-tensorrt==2.0.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchmetrics==0.9.1
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0+440fd1b
[pip3] tritonclient==2.37.0.9383150
[conda] Could not collect
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
860 | 109,514 |
_assert_bound_is_rational can fail
|
triaged, module: dynamic shapes
|
### π Describe the bug
This test
```
def test_tensor_symfloat(self):
def f(a):
r = torch.tensor(a.size(0) * 1.0)
assert r.dtype is torch.float
return r
r = str(make_fx(f, tracing_mode="symbolic")(torch.randn(2)).code).strip()
# NB: this specializes, which is fine, the point is to make sure the
# dtype inference is correct
self.assertExpectedInline(r, """""")
```
fails with
```
File "/data/users/ezyang/a/pytorch/test/test_proxy_tensor.py", line 1052, in f
r = torch.tensor(a.size(0) * 1.0)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 991, in guard_float
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/recording.py", line 227, in wrapper
return fn(*args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 3798, in evaluate_expr
static_expr = self._maybe_evaluate_static(expr)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 1557, in wrapper
return fn_cache(self, *args, **kwargs)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 3478, in _maybe_evaluate_static
_assert_bound_is_rational(new_expr, out)
File "/data/users/ezyang/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 603, in _assert_bound_is_rational
assert bound.lower.is_rational or bound.lower.is_Boolean or not bound.lower.is_finite or expr.has(sympy.Pow), (bound, expr)
AssertionError: (ValueRanges(lower=2.00000000000000, upper=9.22337203685478e+18, is_bool=False), 1.0*shape_0 + 1.0)
```
cc @ysiraichi
### Versions
main
| 1 |
861 | 109,506 |
[torch.optim/C++] Add NAdam optimizer
|
triaged, open source, release notes: optim
|
Adds Adadelta optimizer for #107224.
cc @albanD @jbschlosser @soumith @iramazanli @vincentqb @janeyx99
| 1 |
862 | 109,505 |
[dynamo] torch._dynamo.exc.Unsupported: call_function BuiltinVariable(setattr) [TensorVariable(), ConstantVariable(str), ConstantVariable(bool)] {}
|
triaged, oncall: pt2, module: aotdispatch, module: dynamo
|
### π Describe the bug
Occurs when you do
```
foo.requires_grad = True
```
in a `torch.compile`'d region.
This occurred while implementing the torch.tensor polyfill.
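A minimal repro sketch (assumption: any assignment to `requires_grad` on a tensor inside the compiled region triggers it, as of the version this was filed against):
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    y = torch.ones(3)
    y.requires_grad = True   # setattr on a TensorVariable -> Unsupported
    return y + x

f(torch.randn(3))
```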
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 7 |
863 | 109,504 |
[dynamo] torch._dynamo.exc.Unsupported: comparison SymNodeVariable() <built-in function is_> ListVariable()
|
good first issue, triaged, oncall: pt2, module: dynamo
|
### π Describe the bug
Self-explanatory.
Occurred when implementing the self-referentiality check in the torch.tensor reference implementation:
```
if cur_item is obj:
raise TypeError("new(): self-referential lists are incompatible")
```
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 5 |
864 | 109,503 |
Report NameError when name is not defined, rather than unimplemented
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109503
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
865 | 109,497 |
Very big differences in output of `torch.lobpcg` (values and run-time) compared to SciPy on a very ill-conditioned Laplacian matrix
|
triaged, module: linear algebra
|
### π Describe the bug
The input matrix is dumped from the https://www.vision.huji.ac.il/SpectralMatting/ Octave code and represents a matting Laplacian matrix - very ill-conditioned, with 9 nonzero entries per row (smallest eigenvalue of 0, largest eigenvalue of 9) and a lot of very near-zero eigenvalues.
The output of torch.lobpcg seems much more correct than scipy.sparse.linalg.lobpcg, but very different :) And also much slower. Any ideas why this might happen? @lobpcg @pearu, maybe you would have an advice? I thought that torch.lobpcg was essentially a port of scipy.sparse.linalg.lobpcg, no? Is the default value of `tol` determined differently?
Is there anywhere an example of how such laplacian systems should be preconditioned?
- In https://github.com/scipy/scipy/issues/19235#issuecomment-1722516740 I noted that octave produces output much faster (despite that both SciPy and Octave use ARPACK under the hood).
- In https://github.com/scikit-learn/scikit-learn/blob/7f9bad99d6e0a3e8ddf92a7e5561245224dab102/sklearn/manifold/_spectral_embedding.py#L346 I found that scikit-learn applies LOBPCG on input with diag shifted by 1e-5 - but doing this does not improve the SciPy's output (the correct smallest eigvals are all ~0, not ~5)
[L_octave.mat.zip](https://github.com/pytorch/pytorch/files/12648072/L_octave.mat.zip)
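Before the full repro below, a rough sketch of one preconditioning option on the SciPy side: a Jacobi (diagonal) preconditioner combined with the scikit-learn-style diagonal shift. Whether this is an appropriate preconditioner for matting Laplacians is exactly the open question above.
```python
import scipy.sparse
import scipy.sparse.linalg

def jacobi_preconditioner(L, shift=1e-5):
    # Inverse of the (shifted) diagonal of the Laplacian, applied elementwise.
    inv_d = 1.0 / (L.diagonal() + shift)
    def apply(x):
        # handle both (n,) vectors and (n, k) blocks
        return x * inv_d if x.ndim == 1 else x * inv_d[:, None]
    return scipy.sparse.linalg.LinearOperator(L.shape, matvec=apply, matmat=apply)

# usage: scipy.sparse.linalg.lobpcg(L, X, M=jacobi_preconditioner(L), largest=False)
```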
```python
import time
import scipy
import numpy as np
import torch
eigs_num=50;
i, j, v, m, n = map(scipy.io.loadmat('L_octave.mat').get, ['i', 'j', 'v', 'm', 'n'])
i = i.squeeze(1).astype('int32') - 1
j = j.squeeze(1).astype('int32') - 1
v = v.squeeze(1).astype('float32')
m = int(m)
n = int(n)
print(i.shape, j.shape, v.shape, int(m), int(n))
L = scipy.sparse.coo_matrix((v, (i, j)), shape = (m, n))
np.random.seed(42)
X = np.random.normal(size=(m, eigs_num )).astype('float32')
tic = time.time()
eigval = scipy.sparse.linalg.eigsh(L, eigs_num, which='SM', return_eigenvectors=False)
print(eigval)
print('scipy.sparse.linalg.eigsh', time.time() - tic)
tic = time.time()
eigval, eigvec = scipy.sparse.linalg.lobpcg(L, X, maxiter = 1000, largest = False)
print(eigval)
print('scipy.sparse.linalg.logpcg', time.time() - tic)
tic = time.time()
eigval, eigvec = torch.lobpcg(torch.sparse_coo_tensor(torch.stack([torch.as_tensor(i), torch.as_tensor(j)]), torch.as_tensor(v), size = L.shape), X = torch.as_tensor(X), k = eigs_num, largest = False, niter = 1000)
print(eigval)
print('torch.lobpcg', time.time() - tic)
```
output:
```
(3188406,) (3188406,) (3188406,) 128400 128400
[4.5030788e-04 4.2453315e-04 4.1517158e-04 3.8193684e-04 3.8071713e-04
3.4807756e-04 3.4010404e-04 3.2250569e-04 2.7762813e-04 2.7402333e-04
2.6094113e-04 2.3607175e-04 2.2584437e-04 2.0594637e-04 2.0127479e-04
1.9350117e-04 1.8327046e-04 1.7235774e-04 1.6348275e-04 1.5041916e-04
1.4643844e-04 1.3529375e-04 1.2902766e-04 1.2850901e-04 1.2593951e-04
1.0800803e-04 1.0034799e-04 9.7508135e-05 9.6017233e-05 8.9910092e-05
8.8589368e-05 7.2698742e-05 7.0671063e-05 6.5970075e-05 6.1649480e-05
5.8771104e-05 5.7743473e-05 5.7501289e-05 4.6025161e-05 4.5723471e-05
4.1400210e-05 3.8752623e-05 3.6583613e-05 2.6378684e-05 2.2586199e-05
1.6014528e-05 1.0540600e-05 9.0244203e-06 7.9671645e-06 6.7056526e-08]
scipy.sparse.linalg.eigsh 266.0989365577698
[5.5764084 5.5895004 5.5967197 5.599278 5.6036086 5.6075673 5.612417
5.622675 5.6258216 5.627 5.629461 5.63777 5.641882 5.6433797
5.6449986 5.648331 5.651633 5.6590447 5.659555 5.670805 5.6723266
5.6753798 5.6767316 5.6811604 5.685404 5.688234 5.6918545 5.693018
5.694217 5.7012777 5.703604 5.7075367 5.708476 5.712419 5.7155733
5.717812 5.723774 5.728133 5.7328434 5.736811 5.7427206 5.7465687
5.7523456 5.754298 5.7585073 5.7666636 5.7695017 5.772464 5.7793713
5.783919 ]
scipy.sparse.linalg.logpcg 1.3224408626556396
tensor([8.7688e-05, 1.0392e-04, 1.1558e-04, 1.0228e-04, 1.0984e-04, 1.2023e-04,
1.3466e-04, 1.4477e-04, 1.3094e-04, 1.3702e-04, 1.3840e-04, 1.5007e-04,
1.2557e-04, 1.3130e-04, 1.4117e-04, 1.5102e-04, 1.5749e-04, 1.7784e-04,
1.8517e-04, 1.8209e-04, 2.0007e-04, 1.8993e-04, 1.9568e-04, 2.1500e-04,
2.1919e-04, 2.2342e-04, 2.3812e-04, 2.4824e-04, 2.8042e-04, 2.2152e-04,
2.0526e-04, 2.2028e-04, 2.5449e-04, 2.8130e-04, 2.9647e-04, 3.0863e-04,
3.4404e-04, 3.4340e-04, 3.7165e-04, 3.6539e-04, 4.2566e-04, 4.3179e-04,
4.2837e-04, 3.1011e-04, 4.0076e-04, 4.4517e-04, 4.9045e-04, 5.1192e-04,
5.5956e-04, 6.4424e-04])
torch.lobpcg 123.57711958885193
```
### Versions
torch.__version__
'2.1.0.dev20230802+cpu'
scipy.__version__
'1.10.1'
np.__version__
'1.24.2'
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| 17 |
866 | 109,494 |
Performance degradation on AMD + A800 when computation is small
|
module: performance, module: cuda, triaged
|
### π Describe the bug
Simply running torch.matmul on different machines: when the GEMM is large, we get almost **equal performance**.
However, when it is small, **the A800 + AMD machine is unstable and slow.**
Test Code:
```
import torch
import time
n_test = 10
rep = 10;
M = 512; N = 512; K = 512;
x = torch.randn((M,K),device='cuda', dtype=torch.float16)
y = torch.randn((K,N),device='cuda', dtype=torch.float16)
for i in range(n_test):
torch.cuda.synchronize()
t_start = time.perf_counter_ns()
for _ in range(rep):
z = torch.matmul(x, y)
torch.cuda.synchronize()
_t = (time.perf_counter_ns() - t_start)/1e6
print(f'test-{i}, torch.matmul for ({M},{K}) @ ({K},{N}): {_t:.4}ms')
```
Testing results, M,N,K = (512,512,512):
**AMD + A800:**
<img width="872" alt="image" src="https://github.com/pytorch/pytorch/assets/33060203/6f9e7953-3ffe-4d64-b2c8-934de190d80e">
**Intel + A100:**
<img width="694" alt="image" src="https://github.com/pytorch/pytorch/assets/33060203/14d7a171-a55e-4d88-a974-7ae14462a4a2">
Testing results, M,N,K = (5120,5120,512):
**AMD + A800:**
<img width="883" alt="image" src="https://github.com/pytorch/pytorch/assets/33060203/093c5e86-4c2b-4682-9fc6-d1637c5f0252">
**Intel + A100:**
<img width="677" alt="image" src="https://github.com/pytorch/pytorch/assets/33060203/416f0f59-93c5-4096-9c8e-92573100f28c">
### Versions
**Intel + A100 env:**
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3400.000
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] numpy==1.22.4
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==2.0.0
[pip3] torch==2.0.1
[pip3] torch-tensorrt==1.4.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[pip3] tritonclient==2.30.0
[conda] Could not collect
**AMD + A800 env:**
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-128-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA Graphics Device
Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1796.581
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.50
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==2.0.0
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.1+cu117
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.0.1
[pip3] torchsde==0.2.5
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[pip3] tritonclient==2.34.0
cc @ptrblck
| 2 |
867 | 109,493 |
Avoid cuda stubs libraries being RPATHed
|
triaged, module: mkldnn, open source, release notes: build
|
Rebase of #87593 which is a continuation of #68912
The new function `link_cuda_libraries` is a drop-in replacement for `target_link_libraries` which calls the latter and does the check and manual-rpath-setting on the libraries.
I also made it work with CMake versions prior to 3.20 by using `get_filename_component` instead of `cmake_path`.
Fixes #35418
@kumpera in the other PR you mentioned that some changes should/need to be split off. What would this need to look like? The changes depend on each other: the three CMakeLists.txt files require the new module/file in the `cmake` folder, and `test_torch.py` will fail without those changes.
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen
| 3 |
868 | 109,492 |
[inductor] Remove `is_big_gpu` check
|
triaged, open source, module: inductor, ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/109489
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 2 |
869 | 109,491 |
Fix MultiProcess failure on nodes with 1 GPU
|
triaged, open source
|
The decorator(s) call `sys.exit` when the test function is called, which is AFTER the `setup` call that forks the processes and (potentially) uses a GPU/NCCL-based barrier, which requires "n GPUs" to be present before we ever check whether "n GPUs" are available.
Rewrite those decorators to use `unittest.skipIf`, which will not even enter the `setup` function.
This also exposed that `require_n_gpus_for_nccl_backend` is the same as `nccl_skip_if_lt_x_gpu`, but the former has a better name, so I removed the latter.
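A minimal sketch of the `skipIf`-based style described above (illustrative only; the actual helper in this PR lives in the distributed test utils):
```python
import unittest

import torch


def skip_if_lt_x_gpu(x):
    # Evaluated before setUp runs, so no worker processes are forked
    # (and no NCCL barrier is attempted) on under-provisioned nodes.
    return unittest.skipIf(
        torch.cuda.device_count() < x,
        f"Requires at least {x} GPUs",
    )
```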
Fixes #89686
Note that the torch.cuda.is_available() check is redundant, as torch.cuda.device_count() returns zero when CUDA is unavailable and the former may already be implemented via the latter, i.e. is_available() == (device_count() > 0).
Reopened from #89750 after rebase
| 2 |
870 | 109,489 |
Investigate Strictness of torch.compile `is_big_gpu`
|
triaged, oncall: pt2, module: inductor
|
### 🐛 Describe the bug
The check was introduced in https://github.com/pytorch/pytorch/pull/90738 to fix: https://github.com/pytorch/torchdynamo/issues/2015
However, this causes Triton matmul template codegen not to occur on smaller GPUs (SMs < 80), which includes most consumer GPUs.
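For context, the gate is essentially an SM-count threshold; a rough illustration of why consumer parts fall below it (the actual inductor helper may differ in detail):
```python
import torch

def has_enough_sms(index: int = 0, min_sms: int = 80) -> bool:
    props = torch.cuda.get_device_properties(index)
    # A100/H100-class parts have well over 80 SMs; most consumer GPUs do not.
    return props.multi_processor_count >= min_sms

print(has_enough_sms())  # False on e.g. an RTX 4070 Laptop GPU
```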
We make two observations:
1. This check was a hotfix that is too strict for most cases
2. V100 is on the testing path of Triton, so it is likely that this issue is fixed
Hence, my proposal is to get rid of the check entirely, as long as CI passes.
CC @jansel as prev discussed (and original code author)
### Error logs
```
[WARNING] not enough SMs to use max_autotune_gemm mode
```
### Minified repro
Use torch.compile on small GPU
### Versions
<details>
PyTorch version: 2.2.0a0+gitc29c493
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU max MHz: 5400.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] jax-triton==0.1.4
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0a0+git84680cb
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.1.0
[conda] Could not collect
</details>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 8 |
871 | 109,488 |
[bug] FALLBACK path has been taken inside: runCudaFusionGroup
|
oncall: jit
|
### 🐛 Describe the bug
We are facing a similar issue to https://github.com/pytorch/pytorch/issues/88050.
```sh
WARNING/MainProcess] /opt/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1130: UserWarning: FALLBACK path has been taken inside: runCudaFusionGroup. This is an indication that codegen Failed for some reason.
```
After setting PYTORCH_NVFUSER_DISABLE=fallback, I get the following:
```sh
[ERROR] RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: stride == cur_contig_stride || size == 1 || (still_rightmost && stride == 1) || (!still_rightmost && stride % word_size == 0) INTERNAL ASSERT FAILED at "../third_party/nvfuser/csrc/executor_utils.cpp":621, please report a bug to PyTorch. Vectorization of T0_g[ iS63{( ceilDiv(( ceilDiv(( ceilDiv(( 1 * ( T0.size[1] * ( T0.size[2] * 1 ) ) ), 2) ), 1) ), 128) )}, iS62{1}, iS60{2}, iS64{128} ] with word size 2 not possible due to invalid stride. Domain: iS60{2}, stride: 42
Traceback (most recent call last):
File "/back/src/jobs/timeout_runner.py", line 33, in run_with_timeout
behaviour(shared_dict)
File "/back/src/jobs/quick_api_pipeline_job.py", line 32, in <lambda>
behaviour=lambda shared_dict: __do_work(photo_redis_key, options, shared_dict),
File "/back/src/jobs/quick_api_pipeline_job.py", line 87, in __do_work
output_data, medias = pipeline_service.process_face(
File "/back/src/serve/pipeline_service.py", line 164, in process_face
runner.run_beautifying(client=beautify_client)
File "/back/src/serve/pipeline_run.py", line 132, in run_beautifying
beautified_raw = client.beautify(self.__previous_segmap())
File "/back/src/serve/clients/beautify_client.py", line 29, in beautify
beautified_tensors = self.service.beautify(segmap.tensor_one_hot)
File "/back/src/serve/models_unified/src/service/beautify_service.py", line 60, in beautify
out_seg = self.codec_model.decoder(out_seg, seg_mask)
File "/opt/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: stride == cur_contig_stride || size == 1 || (still_rightmost && stride == 1) || (!still_rightmost && stride % word_size == 0) INTERNAL ASSERT FAILED at "../third_party/nvfuser/csrc/executor_utils.cpp":621, please report a bug to PyTorch. Vectorization of T0_g[ iS63{( ceilDiv(( ceilDiv(( ceilDiv(( 1 * ( T0.size[1] * ( T0.size[2] * 1 ) ) ), 2) ), 1) ), 128) )}, iS62{1}, iS60{2}, iS64{128} ] with word size 2 not possible due to invalid stride. Domain: iS60{2}, stride: 42
```
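A possible workaround while this is investigated (hedged sketch; I believe this is the relevant TorchScript toggle, but please correct me if not):
```python
import torch

# Disable the nvFuser backend for TorchScript so the failing fusion
# (runCudaFusionGroup) is never generated; falls back to the default executor.
torch._C._jit_set_nvfuser_enabled(False)
```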
### Versions
```sh
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.107+-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 2
Core(s) per socket: 3
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 96 KiB
L1i cache: 96 KiB
L2 cache: 768 KiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-5
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==2.0.1+cu117
[pip3] torchvision==0.15.2+cu117
[pip3] triton==2.0.0
[conda] Could not collect
```
```[tasklist]
### Tasks
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 1 |
872 | 109,487 |
Fix access to uninitialized memory in VSX vector functions for quantized values
|
module: cpu, triaged, open source, ciflow/trunk, release notes: quantization, topic: bug fixes
|
Similar to #89833, those functions may access uninitialized memory, leading to undefined behavior/results.
Initialize with zeros as done before.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 4 |
873 | 109,485 |
[POC] Add caching for faketensor propagation
|
ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #109726
* __->__ #109485
* #108841
| 5 |
874 | 109,484 |
[dynamo][symbolic shapes] Long compilation time for KJT helper function
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
Code - https://gist.github.com/anijain2305/94f59df4e5267fefa046b8cb718be2cd
The input has a list of integers that changes from run to run; this list goes to torch.split, triggering a large number of symbolic shape guards. I think Dynamo should skip compiling this type of function. I'm looking for ideas on how to proceed here.
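One possible direction, sketched roughly (hypothetical helper name; assumes carving the split out of the compiled region is acceptable):
```python
import torch

@torch._dynamo.disable
def split_by_lengths(values, lengths):
    # Each data-dependent int in `lengths` would otherwise become a
    # symbolic shape guard; keeping this helper eager avoids the blow-up.
    return torch.split(values, lengths)
```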
@voznesenskym @ezyang
### Error logs
_No response_
### Minified repro
_No response_
### Versions
NA
cc @ezyang @msaroufim @wconstab @bdhirsh
| 1 |
875 | 109,481 |
[xla hash update] update the pinned xla hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 4 |
876 | 109,480 |
[Docs][Distributed] Add migration notes for `--local-rank` option style change for `torchrun` in PyTorch 2.0
|
module: docs, triaged, open source, release notes: distributed (tools)
|
Fixes https://github.com/pytorch/pytorch/pull/94505#issuecomment-1722777767
cc @svekars @carljparker
| 3 |
877 | 109,478 |
ProcessGroup is not automatically destroyed when the process exits
|
oncall: distributed
|
### 🐛 Describe the bug
I may have discovered a bug while developing and registering a custom ProcessGroup backend.
I found that the destructor of the ProcessGroup subclass object is not called when the program exits.
The destroy_process_group() function must be called explicitly to destroy the corresponding ProcessGroup object.
In my current tests the problem appears with 2.1, but not with 2.0.1 or earlier versions.
The following is a simple NCCL use case. To see whether the destructor is called, add printing, etc. to the destructor `~ProcessGroupNCCL` beforehand (`torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp`).
```
import os
import sys
import gc
import torch
import torch.distributed as dist
from torch.testing._internal.common_utils import TestCase, run_tests
class Test(TestCase):
world_size_2p = 1
data = torch.randn(10, 20)
@classmethod
def _init_dist_nccl(cls, rank, world_size):
dist.init_process_group(
backend='nccl', init_method= "tcp://127.0.0.1:29502", world_size=world_size, rank=rank)
return dist
def _test_alltoall_2p(cls, rank, data, world_size, init_pg):
pg = init_pg(rank, world_size)
print(torch.distributed.distributed_c10d.GroupMember.WORLD)
print("----------")
print("sys", sys.getrefcount(torch.distributed.distributed_c10d.GroupMember.WORLD))
print("gc", gc.get_referrers(torch.distributed.distributed_c10d.GroupMember.WORLD))
#pg.destroy_process_group()
def _test_multiprocess_2p(self, f, init_pg):
ws = self.world_size_2p
data = self.data
expected = []
self._test_alltoall_2p(0, data, ws, init_pg)
def test_alltoall_2p_dist(self):
self._test_multiprocess_2p(
Test._test_alltoall_2p,
Test._init_dist_nccl)
print('test_alltoall_2p_dist ')
if __name__ == '__main__':
run_tests()
```
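As a workaround, explicitly calling `destroy_process_group()` (or registering it with `atexit`) does run the destructor — a minimal sketch:
```python
import atexit

import torch.distributed as dist


def init_process_group_with_cleanup(rank, world_size):
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://127.0.0.1:29502",
        world_size=world_size,
        rank=rank,
    )
    # Without this, the ProcessGroupNCCL destructor is not called on exit in 2.1.
    atexit.register(dist.destroy_process_group)
```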
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git591cb77
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.1 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-184-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz
Stepping: 7
CPU MHz: 1001.086
CPU max MHz: 2601.0000
CPU min MHz: 1000.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] numpy==1.22.4
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.0a0+git591cb77
[pip3] torchvision==0.7.0
[conda] cudatoolkit 11.3.1 h2bc3f7f_2 https://repo.anaconda.com/pkgs/main
[conda] mkl 2023.0.0 pypi_0 pypi
[conda] mkl-include 2023.0.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.1.0a0+git591cb77 pypi_0 pypi
[conda] torchvision 0.9.0a0+8fb5838 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 3 |
878 | 109,477 |
[DTensor] optimizer step performance is still too bad
|
oncall: distributed, module: dtensor
|
### 🐛 Describe the bug
Following #109101, I find that the performance of the optimizer step with DTensor still does not meet expectations, even with fixes #109306 and #109428, and it is actually not related to the overhead of gradient averaging. To verify, I created a benchmark script using a model with only a single `torch.nn.Linear(32, 5).to(device)` layer on a single GPU:
- The optimizer step overhead is 531 µs for native tensors
- The optimizer step overhead is 4.82 ms for DTensor
The native-tensor optimizer uses `multiply_tensor_apply_kernel`, while the DTensor optimizer uses `vectorized_elementwise_kernel` and there are extra Memcpy DtoD operations. I think this may be related to the implementation of DTensor. I would like to know if there is still room for improvement.
```python
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed._tensor import (
DTensor,
DeviceMesh,
distribute_tensor,
distribute_module,
Shard,
Replicate
)
import torch.multiprocessing as mp
import os
import time
WORLD_SIZE = 1
ITER_TIME = 20
def _data_parallel_fn(
name: str,
module: nn.Module,
device_mesh: DeviceMesh,
) -> None:
for name, param in module.named_parameters():
dist_spec = ([Replicate()])
dist_param = torch.nn.Parameter(
distribute_tensor(param, device_mesh, dist_spec)
)
name = '_'.join(name.split('.'))
module.register_parameter(name, dist_param)
def native_tensor_baseline():
in_shape = [20, 32]
output_shape = [20, 5]
device = torch.device("cuda", 0)
torch.cuda.set_device(device)
torch.cuda.set_per_process_memory_fraction(1.0, device)
model = torch.nn.Linear(32, 5).to(device)
nn.init.ones_(model.weight)
nn.init.zeros_(model.bias)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, amsgrad=False)
x = torch.randn(*in_shape).to(device).requires_grad_()
y_grad = torch.randn(*output_shape).to(device)
# warm up
y = model(x)
optimizer.zero_grad()
y.backward(y_grad)
optimizer.step()
torch.cuda.synchronize(device)
start = time.time()
for i in range(ITER_TIME):
print(f"---------------{i}--------------------")
torch.cuda.nvtx.range_push("model_iter"+str(i))
torch.cuda.nvtx.range_push("forward")
y = model(x)
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_push("zero_grad")
optimizer.zero_grad()
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_push("backward")
y.backward(y_grad)
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_push("optimizer_step")
optimizer.step()
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
end = time.time()
max_reserved_memory = torch.cuda.max_memory_reserved(device)
max_allocated_memory = torch.cuda.max_memory_allocated(device)
print(f"{ITER_TIME} iterations, latency {(end - start)/ITER_TIME*1000} ms, max reserved {max_reserved_memory/1024/1024/1024:8.2f} GiB, max allocated {max_allocated_memory/1024/1024/1024:8.2f} GiB")
def demo_data_parallel(rank, world_size):
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355"
dist.init_process_group("nccl", rank=rank, world_size=world_size)
in_shape = [20, 32]
output_shape = [20, 5]
device = torch.device("cuda", rank)
torch.cuda.set_device(device)
mesh = DeviceMesh("cuda", torch.arange(world_size))
model = torch.nn.Linear(32, 5).to(device)
nn.init.ones_(model.weight)
nn.init.zeros_(model.bias)
model = distribute_module(model, mesh, _data_parallel_fn, input_fn=None, output_fn=None)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, amsgrad=False)
x = torch.randn(*in_shape).to(device).requires_grad_()
y_grad = torch.randn(*output_shape).to(device)
x = distribute_tensor(x, mesh, [Shard(0)])
y_grad = distribute_tensor(y_grad, mesh, [Shard(0)])
# warm up
y = model(x)
optimizer.zero_grad()
y.backward(y_grad)
optimizer.step()
torch.cuda.synchronize(device)
start = time.time()
for i in range(ITER_TIME):
print(f"---------------{i}--------------------")
torch.cuda.nvtx.range_push("model_iter"+str(i))
torch.cuda.nvtx.range_push("forward")
y = model(x)
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_push("zero_grad")
optimizer.zero_grad()
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_push("backward")
y.backward(y_grad)
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_push("optimizer_step")
optimizer.step()
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
torch.cuda.synchronize(device)
torch.cuda.nvtx.range_pop()
end = time.time()
max_reserved_memory = torch.cuda.max_memory_reserved(device)
max_allocated_memory = torch.cuda.max_memory_allocated(device)
print(f"rank {rank}, {ITER_TIME} iterations, latency {(end - start)/ITER_TIME*1000} ms, max reserved {max_reserved_memory/1024/1024/1024:8.2f} GiB, max allocated {max_allocated_memory/1024/1024/1024:8.2f} GiB")
dist.destroy_process_group()
if __name__ == "__main__":
print(f"==========Navtive Tensor 1 GPU==========")
native_tensor_baseline()
print(f"==========DTensor 1 GPU==========")
mp.spawn(demo_data_parallel, args=(WORLD_SIZE,), nprocs=WORLD_SIZE, join=True)
```
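As a side note (not part of the benchmark above), the multi-tensor path can be requested explicitly for native tensors; whether DTensor parameters can take this path at all is part of my question:
```python
import torch

model = torch.nn.Linear(32, 5).cuda()
# foreach=True selects the multi-tensor Adam implementation, which is what
# produces the multi-tensor apply kernels mentioned in the native-tensor trace.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, foreach=True)
```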
### Versions
docker image: [nvcr.io/nvidia/pytorch:23.08-py3](http://nvcr.io/nvidia/pytorch:23.08-py3)
torch version: '2.1.0a0+29c30b1'
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 5 |
879 | 109,469 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
880 | 109,466 |
Revert "[inductor] let codegen not rely on node order (#107320)"
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109466
This reverts commit 556bfe7cb547f8073c86982e186c780ba5a53a8a.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
881 | 109,465 |
Revert "[inductor] Fix inputs with existing offsets (#108168)"
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109465
This reverts commit 2c87ef3dbfe1e5bd9ee4b6a9e4aa5030dc714284.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
882 | 109,464 |
[Decomposition] hann_window.periodic
|
fb-exported, ciflow/inductor
|
Summary: Add decomposition for hann_window.periodic and include it in core_aten_decompositions
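For reference, a rough sketch of the math being decomposed (illustrative only; the registered decomposition in this diff may differ):
```python
import math

import torch


def hann_window_periodic(window_length, *, dtype=torch.float32, device=None):
    if window_length == 0:
        return torch.empty(0, dtype=dtype, device=device)
    if window_length == 1:
        return torch.ones(1, dtype=dtype, device=device)
    n = torch.arange(window_length, dtype=dtype, device=device)
    # The periodic variant divides by N rather than N - 1.
    return 0.5 - 0.5 * torch.cos(2 * math.pi * n / window_length)
```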
Test Plan:
Phabricator + OSS Tests
```
EXPECTTEST_ACCEPT=1 python3 test/test_decomp.py -k test_quick_hann_window
```
Reviewed By: SS-JIA
Differential Revision: D48939996
| 12 |
883 | 109,462 |
Inconsistent behavior for in-place operations on coalesced sparse tensors
|
module: sparse, triaged
|
### 🐛 Describe the bug
When working with coalesced sparse COO tensors, calling `.div_` keeps the tensor coalesced, but `.mul_` does not. However, calling `.mul` returns a coalesced tensor.
```python
>>> i = torch.tensor([[0, 1, 1], [2, 0, 2]])
>>> v = torch.tensor([3, 4, 5], dtype=torch.float32)
>>> t = torch.sparse_coo_tensor(i, v, [2, 4]).coalesce()
>>> t.mul(2).is_coalesced()
True
>>> t.div(2).is_coalesced()
True
>>> t.div_(2).is_coalesced()
True
>>> t.mul_(2).is_coalesced()
False
```
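A workaround sketch (a scalar in-place multiply does not change the indices, so re-coalescing is cheap; arguably the flag should not be dropped in the first place):
```python
import torch

i = torch.tensor([[0, 1, 1], [2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
t = torch.sparse_coo_tensor(i, v, [2, 4]).coalesce()

t.mul_(2)
t = t.coalesce()  # restores is_coalesced() == True without changing the values
print(t.is_coalesced())  # True
```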
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.26
Python version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 3028.998
BogoMIPS: 5599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] robust-loss-pytorch==0.0.2
[pip3] torch==2.0.1
[pip3] torch-dct==0.1.6
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numexpr 2.8.3 mkl_py311h0b1cd97_1 conda-forge
[conda] numpy 1.24.4 py311h64a7726_0 conda-forge
[conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_3 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] robust-loss-pytorch 0.0.2 pypi_0 pypi
[conda] torch-dct 0.1.6 pypi_0 pypi
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.15.2 py311_cu118 pytorch
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 9 |
884 | 109,460 |
[BUG][pytree] treespec serialization for locally defined classes and namedtuple types
|
triaged, module: pytree
|
### 🐛 Describe the bug
Serializing a treespec object means losslessly converting it into a byte/char stream. The serialized representation can then be losslessly recovered (deserialized) into the original treespec object. Serialization and deserialization are very useful for dumping object state into files (checkpoints) or for multiprocess communication (distributed training).
The serialization and deserialization are reverse operations. The most common methods in Python are `json` (serialize to `str`) and `pickle` (serialize to `bytes`).
------
### Limitation for `pickle`
`pickle` serializes `class`es and `function`s into their `__module__` and `__qualname__`, which can be used during recovery in another process. However, this does not work for locally defined classes and functions: the identifier will contain `*.<locals>.*` and `pickle` will raise an error when dumping these objects.
```python
import pickle
def main():
def func():
return 1
pickle.dumps(func)
main()
```
```console
$ python3 test.py
Traceback (most recent call last):
File "/home/user/Projects/pytorch/test.py", line 11, in <module>
main()
File "/home/user/Projects/pytorch/test.py", line 8, in main
pickle.dumps(func)
AttributeError: Can't pickle local object 'main.<locals>.func'
```
```python
import pickle
from collections import namedtuple
def main():
Point = namedtuple('Point', ['x', 'y'])
pickle.dumps(Point)
main()
```
```console
$ python3 test.py
Traceback (most recent call last):
File "/home/user/Projects/pytorch/test.py", line 11, in <module>
main()
File "/home/user/Projects/pytorch/test.py", line 8, in main
pickle.dumps(Point)
_pickle.PicklingError: Can't pickle <class '__main__.Point'>: attribute lookup Point on __main__ failed
```
------
### Current implementation
The current implementation of the treespec in `torch.utils._pytree` converts the treespec object into a serializable context and then dumps it into a JSON string. For locally defined `class`es and `namedtuple` types, it dumps "how to create the same type" rather than saving the `__module__` and `__qualname__`. This way, we can serialize locally defined `namedtuple` types into their names and fields without raising errors.
```python
from collections import namedtuple
import pickle
import torch.utils._pytree as pytree
GlobalPoint = namedtuple('GlobalPoint', ['x', 'y'])
def main():
assert pickle.loads(pickle.dumps(GlobalPoint)) is GlobalPoint # round trip for pickle
Point = namedtuple('Point', ['x', 'y'])
tree = Point(x=0, y=1)
print(Point)
_, spec = pytree.tree_flatten(tree)
serialized = pytree.treespec_dumps(spec)
deserialized = pytree.treespec_loads(serialized)
print(serialized) # -> json string
print(deserialized) # -> recovered treespec
print(deserialized == spec) # -> False
print(type(tree) == type(pytree.tree_unflatten([0, 1], deserialized))) # -> False
print(pytree.treespec_loads(serialized) == pytree.treespec_loads(serialized)) # -> False
main()
```
```console
$ python3 test.py
<class '__main__.Point'>
[1, {"type": "collections.namedtuple", "context": {"class_name": "Point", "fields": ["x", "y"]}, "children_spec": [{"type": null, "context": null, "children_spec": []}, {"type": null, "context": null, "children_spec": []}]}]
TreeSpec(namedtuple, <class 'torch.utils._pytree.Point'>, [*,
*])
False
False
False
```
The `<class '__main__.Point'>` type is changed to `<class 'torch.utils._pytree.Point'>` after deserialization. And it creates a new type every time we call `treespec_loads`:
```python
pytree.treespec_loads(serialized) == pytree.treespec_loads(serialized) # -> False
```
Each time we recover a treespec object representing a namedtuple, we create a new namedtuple type from the fields rather than referring to the original type. This has a lot of overhead and puts a lot of burden on the gc module. Also, the round-trip property no longer holds.
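One possible direction, sketched very roughly (not a concrete proposal; the fallback keeps today's behavior): serialize the type by reference (`__module__` + `__qualname__`) and resolve it at load time, reconstructing only when the lookup fails.
```python
import importlib
from collections import namedtuple


def serialize_namedtuple_type(cls):
    return {
        "module": cls.__module__,
        "qualname": cls.__qualname__,
        "fields": list(cls._fields),
    }


def deserialize_namedtuple_type(ctx):
    try:
        obj = importlib.import_module(ctx["module"])
        for name in ctx["qualname"].split("."):
            obj = getattr(obj, name)
        return obj  # round trip: the original type object
    except (ImportError, AttributeError):
        # Locally defined types cannot be looked up; fall back to reconstruction.
        return namedtuple(ctx["qualname"].rsplit(".", 1)[-1], ctx["fields"])
```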
### Versions
<details>
<summary>Environment</summary>
```text
Collecting environment information...
PyTorch version: 2.2.0.dev20230915
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz
Stepping: 6
CPU MHz: 3500.000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp_epp avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[conda] blas 1.0 mkl
[conda] brotlipy 0.7.0 py311h9bf148f_1002 pytorch-nightly
[conda] cffi 1.15.1 py311h9bf148f_3 pytorch-nightly
[conda] cryptography 38.0.4 py311h46ebde7_0 pytorch-nightly
[conda] filelock 3.9.0 py311_0 pytorch-nightly
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mpmath 1.2.1 py311_0 pytorch-nightly
[conda] numpy 1.24.3 pypi_0 pypi
[conda] packaging 22.0 py311_0 pytorch-nightly
[conda] pluggy 1.0.0 py311_1 pytorch-nightly
[conda] pysocks 1.7.1 py311_0 pytorch-nightly
[conda] pytest 7.1.2 py311_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cpu pytorch-nightly
[conda] requests 2.28.1 py311_0 pytorch-nightly
[conda] tomli 2.0.1 py311_0 pytorch-nightly
[conda] urllib3 1.26.14 py311_0 pytorch-nightly
```
</details>
cc @zou3519
| 1 |
885 | 109,457 |
Training results from using MPS backend are poor compared to CPU and CUDA
|
needs reproduction, triaged, module: mps
|
### 🐛 Describe the bug
I have a Mac M1 GPU and I've been trying to replicate the results in [this google colab notebook](https://colab.research.google.com/drive/1_X7O2BkFLvqyCdZzDZvV2MB0aAvYALLC), which uses a transformer-type architecture for time series forecasting. The notebook comes from [this repo](https://github.com/zhouhaoyi/Informer2020). I'm aware that certain issues regarding mps vs cpu and cuda have been raised in the past, such as [this issue using LSTMs on mps](https://github.com/pytorch/pytorch/issues/92615). This code does not use LSTMs, and I'm having a hard time identifying the exact PyTorch method that is causing the problem.
In summary, when I run the training phase in the notebook above, I get bad results using the mps backend compared to my Mac M1 CPU as well as CUDA on google colab. (The speed difference between mps and cuda is a separate issue.) This is the training loss that I get:
```
Use GPU: mps
>>>>>>>start training : informer_ETTh1_ftM_sl96_ll48_pl24_dm512_nh8_el2_dl1_df2048_atprob_fc5_ebtimeF_dtTrue_mxTrue_exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 8521
val 2857
test 2857
iters: 100, epoch: 1 | loss: 0.5028459
speed: 0.2630s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 393.6544s
iters: 200, epoch: 1 | loss: 0.5656900
speed: 0.2554s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 356.7680s
Epoch: 1 cost time: 68.72129678726196
Epoch: 1, Steps: 266 | Train Loss: 0.5757653 Vali Loss: 1.2751399 Test Loss: 1.0501673
Validation loss decreased (inf --> 1.275140). Saving model ...
Updating learning rate to 0.0001
iters: 100, epoch: 2 | loss: 0.9338713
speed: 0.6757s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 831.7784s
iters: 200, epoch: 2 | loss: 0.9569200
speed: 0.2548s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 288.1538s
Epoch: 2 cost time: 67.94761419296265
Epoch: 2, Steps: 266 | Train Loss: 0.9442326 Vali Loss: 1.4567285 Test Loss: 1.1148015
EarlyStopping counter: 1 out of 3
Updating learning rate to 5e-05
iters: 100, epoch: 3 | loss: 0.9309775
speed: 0.6728s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 649.2323s
iters: 200, epoch: 3 | loss: 1.1810662
speed: 0.2581s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 223.2210s
Epoch: 3 cost time: 68.36700391769409
Epoch: 3, Steps: 266 | Train Loss: 0.9918775 Vali Loss: 1.4783044 Test Loss: 1.1274033
EarlyStopping counter: 2 out of 3
Updating learning rate to 2.5e-05
iters: 100, epoch: 4 | loss: 1.0169864
speed: 0.6707s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 468.7928s
iters: 200, epoch: 4 | loss: 0.8940969
speed: 0.2543s[/iter](https://file+.vscode-resource.vscode-cdn.net/iter); left time: 152.3422s
Epoch: 4 cost time: 67.84642291069031
Epoch: 4, Steps: 266 | Train Loss: 0.9954335 Vali Loss: 1.4753704 Test Loss: 1.1228207
EarlyStopping counter: 3 out of 3
Early stopping
>>>>>>>testing : informer_ETTh1_ftM_sl96_ll48_pl24_dm512_nh8_el2_dl1_df2048_atprob_fc5_ebtimeF_dtTrue_mxTrue_exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 2857
test shape: (89, 32, 24, 7) (89, 32, 24, 7)
test shape: (2848, 24, 7) (2848, 24, 7)
mse:1.0500280857086182, mae:0.7747534513473511
```
In contrast, the loss obtained using cuda, as shown in the google colab notebook, is
```
Use GPU: cuda:0
>>>>>>>start training : informer_ETTh1_ftM_sl96_ll48_pl24_dm512_nh8_el2_dl1_df2048_atprob_fc5_ebtimeF_dtTrue_exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 8521
val 2857
test 2857
iters: 100, epoch: 1 | loss: 0.3484939
speed: 0.0800s/iter; left time: 119.7995s
iters: 200, epoch: 1 | loss: 0.3274963
speed: 0.0773s/iter; left time: 108.0117s
Epoch: 1 cost time: 20.9348361492157
Epoch: 1, Steps: 266 | Train Loss: 0.3885468 Vali Loss: 0.6522534 Test Loss: 0.6147651
Validation loss decreased (inf --> 0.652253). Saving model ...
Updating learning rate to 0.0001
iters: 100, epoch: 2 | loss: 0.2812596
speed: 0.1925s/iter; left time: 236.9607s
iters: 200, epoch: 2 | loss: 0.2148246
speed: 0.0797s/iter; left time: 90.1093s
Epoch: 2 cost time: 21.145679235458374
Epoch: 2, Steps: 266 | Train Loss: 0.2568903 Vali Loss: 0.6256742 Test Loss: 0.5904896
Validation loss decreased (0.652253 --> 0.625674). Saving model ...
Updating learning rate to 5e-05
iters: 100, epoch: 3 | loss: 0.2377600
speed: 0.1962s/iter; left time: 189.2880s
iters: 200, epoch: 3 | loss: 0.1902070
speed: 0.0812s/iter; left time: 70.2002s
Epoch: 3 cost time: 21.539507389068604
Epoch: 3, Steps: 266 | Train Loss: 0.2063076 Vali Loss: 0.6280525 Test Loss: 0.5942183
EarlyStopping counter: 1 out of 3
Updating learning rate to 2.5e-05
iters: 100, epoch: 4 | loss: 0.1939584
speed: 0.1975s/iter; left time: 138.0229s
iters: 200, epoch: 4 | loss: 0.1632166
speed: 0.0813s/iter; left time: 48.7252s
Epoch: 4 cost time: 21.655611753463745
Epoch: 4, Steps: 266 | Train Loss: 0.1804204 Vali Loss: 0.6678267 Test Loss: 0.6165376
EarlyStopping counter: 2 out of 3
Updating learning rate to 1.25e-05
iters: 100, epoch: 5 | loss: 0.1675630
speed: 0.1995s/iter; left time: 86.3623s
iters: 200, epoch: 5 | loss: 0.1715067
speed: 0.0826s/iter; left time: 27.5224s
Epoch: 5 cost time: 21.931761741638184
Epoch: 5, Steps: 266 | Train Loss: 0.1663280 Vali Loss: 0.6778610 Test Loss: 0.6354805
EarlyStopping counter: 3 out of 3
Early stopping
>>>>>>>testing : informer_ETTh1_ftM_sl96_ll48_pl24_dm512_nh8_el2_dl1_df2048_atprob_fc5_ebtimeF_dtTrue_exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 2857
test shape: (89, 32, 24, 7) (89, 32, 24, 7)
test shape: (2848, 24, 7) (2848, 24, 7)
mse:0.5913315415382385, mae:0.5571437478065491
```
The mse and the mae using cuda are much lower than that of mps backend. The plots obtained from cuda approximate much better the actual time series data as well.
The GitHub code is mostly written for CUDA, but I used grep to search the repo and enabled the 'mps' device by adding the following code in the file `exp_basic.py`:
```
def _acquire_device(self):
if self.args.use_gpu:
if torch.cuda.is_available():
os.environ["CUDA_VISIBLE_DEVICES"] = str(
self.args.gpu) if not self.args.use_multi_gpu else self.args.devices
device = torch.device('cuda:{}'.format(self.args.gpu))
print('Use GPU: cuda:{}'.format(self.args.gpu))
elif getattr(torch,'has_mps',False):
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
device = "mps"
print('Use GPU: mps')
```
I believe the error is in the training stage, perhaps in the gradient computation: if I load the network weights obtained from training on the CPU and evaluate the model with the mps backend on the test data (without gradient computations), I can reproduce the CPU output on the test data.
I also used grep on the whole repository to find which PyTorch methods are being used, in order to identify the source of the discrepancy.
Here's the list of methods I found: Conv1d, LayerNorm, Dropout, init.kaiming_normal_, Embedding, Parameter, Linear, BatchNorm1d, ELU, MaxPool1d, torch.softmax, MSELoss, functional.relu.
I checked GitHub issues for any mps-backend problems with these methods but couldn't find any.
Does anyone have suggestions on how I can locate the method that's causing the difference between mps and cpu/cuda?
Thanks!
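In case it helps, this is the kind of comparison sketch I have in mind (hypothetical helper; assumes two copies of the model with identical weights, one on cpu and one on mps):
```python
import torch


def compare_grads(model_cpu, model_mps, x, y_grad, atol=1e-4):
    out_cpu = model_cpu(x)
    out_cpu.backward(y_grad)
    out_mps = model_mps(x.to("mps"))
    out_mps.backward(y_grad.to("mps"))
    # Report the parameters whose gradients disagree beyond atol.
    for (name, p_cpu), (_, p_mps) in zip(
        model_cpu.named_parameters(), model_mps.named_parameters()
    ):
        diff = (p_cpu.grad - p_mps.grad.cpu()).abs().max().item()
        if diff > atol:
            print(f"{name}: max grad diff {diff:.3e}")
```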
### Versions
python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2a0
[conda] numpy 1.25.2 py311he598dae_0
[conda] numpy-base 1.25.2 py311hfbfe69c_0
[conda] pytorch 2.0.1 py3.11_0 pytorch
[conda] torchaudio 2.0.2 py311_cpu pytorch
[conda] torchvision 0.15.2 cpu_py311he74fb5d_0
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
886 | 109,453 |
Inconsistent Behavior of `ConvTranspose2d` on CPU and CUDA
|
needs reproduction, module: nn, triaged
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
# ConvTranspose spec
in_channels = 192
out_channels = 192
kernel_size = 5
stride = 2
padding = 0
input_height = 8
input_width = 12
cuda = torch.device("cuda:0")
cpu = torch.device("cpu")
conv2d_transpose = nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding=0)
# Initialization
torch.nn.init.normal_(conv2d_transpose.weight)
torch.nn.init.normal_(conv2d_transpose.bias)
conv2d_transpose.to(cuda)
x = torch.rand((1, 192, 19, 27))
x = x.to(cuda)
# Check
conv2d_transpose.train(False)
cuda_out = torch.abs(conv2d_transpose(x) - conv2d_transpose(x)).sum()
print(cuda_out)
conv2d_transpose.to(cpu)
x = x.to(cpu)
cpu_out = torch.abs(conv2d_transpose(x) - conv2d_transpose(x)).sum()
print(cpu_out)
```
The code initializes a ConvTranspose2d layer, sets its weights and biases, and computes `conv2d_transpose(x)` twice on each device, comparing the two runs. However, the cpu_out and cuda_out results are inconsistent (cuda_out is nonzero), which suggests a potential issue with the ConvTranspose2d implementation.
Expected Behavior:
I expect the results of conv2d_transpose(x) to be the same on both CPU and CUDA devices, so cuda_out and cpu_out should be 0.0.
Actual Behavior:
In the actual behavior, cuda_out is not 0.0, indicating that the two convolution calls produce different values. This suggests that unexpected floating-point errors may accumulate during the computation.
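If the nonzero cuda_out comes from a non-deterministic cuDNN algorithm choice (an assumption, not confirmed here), forcing deterministic kernels before the check is one way to test that — a minimal sketch:
```python
# If non-deterministic cuDNN algorithm selection is the cause, the
# self-comparison on CUDA should become exactly 0.0 with these settings.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True, warn_only=True)

conv2d_transpose.to(cuda)
x = x.to(cuda)
print(torch.abs(conv2d_transpose(x) - conv2d_transpose(x)).sum())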
Environment:
PyTorch version: 2.0.1
CUDA version: 11.8
Operating system: ubuntu 22.04
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:34:09) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA TITAN Xp
GPU 1: NVIDIA TITAN Xp
GPU 2: NVIDIA TITAN Xp
GPU 3: NVIDIA TITAN Xp
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 11 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-msssim==1.0.0
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] cudatoolkit 11.8.0 h4ba93d1_12 conda-forge
[conda] numpy 1.24.4 py311h64a7726_0 conda-forge
[conda] pytorch-msssim 1.0.0 pypi_0 pypi
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torchaudio 2.0.2+cu118 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
887 | 109,452 |
[dynamo]Scuba log some debug info about list of integer inputs
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109452
* #109410
* #109411
Some helper functions for jagged tensors take lists of integers, which upset dynamic shapes because their contents change from run to run. Log some debug information to help write heuristics.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
888 | 109,448 |
Backward pass of inverse FFT is sometimes incorrect on GPU
|
high priority, module: cuda, triaged, module: third_party, module: correctness (silent), module: fft
|
### π Describe the bug
On GPU, if you compute FFT of a tensor followed by inverse FFT of the tensor, the backward pass is (sometimes) incorrect. This only happens when the tensor does not have a gradient on it already, and when the inverse FFT is the last operation in a sequence before computing the backward pass.
I believe the problem is that the gradient of the input tensor is being initialized with the value of the tensor itself, instead of with zeros.
Here's a minimal example to reproduce it:
```Python
import torch
device = 'cuda'
x = torch.randn(32, dtype=torch.cfloat, device=device, requires_grad=True)
dout = torch.zeros(32, dtype=torch.cfloat, device=device)
# compute iFFT(FFT(x))
out = torch.fft.ifft(torch.fft.fft(x))
out.backward(dout, retain_graph=True)
print('Gradient of iFFT(FFT(x)) should be FFT(iFFT(dout))')
dx = torch.fft.fft(torch.fft.ifft(dout))
print('Difference between x.grad and what it should be. This should be zero!')
print((x.grad - dx).abs().max())
print('Difference between x.grad and x. This should be non-zero.')
print((x.grad - x).abs().max())
```
Expected output:
```
Gradient of iFFT(FFT(x)) should be FFT(iFFT(dout))
Difference between x.grad and what it should be. This should be zero!
tensor(4.7777e-07, device='cuda:0')
Difference between x.grad and x. This should be non-zero.
tensor(1.7879, device='cuda:0', grad_fn=<MaxBackward1>)
```
Actual output:
```
Gradient of iFFT(FFT(x)) should be FFT(iFFT(dout))
Difference between x.grad and what it should be. This should be zero!
tensor(1.7879, device='cuda:0')
Difference between x.grad and x. This should be non-zero.
tensor(4.7777e-07, device='cuda:0', grad_fn=<MaxBackward1>)
```
This bug goes away if the last operation is not an iFFT, or if `x.grad` is initialized before the first call to FFT.
For example, adding these lines before the call to `out = torch.fft...` fixes it:
```
x.backward(dout, retain_graph=True)
x.grad.data.zero_()
```
Switching the order of FFT/iFFT, or adding some other operation afterwards also both fix the bug:
* `out = torch.fft.fft(torch.fft.ifft(x))` is correct
* `out = torch.fft.ifft(torch.fft.fft(x)) * 2` is correct
This bug does not happen on CPU, so I suspect something is broken in the backward pass in C++/CUDA for the inverse FFT, in the case where the gradient on the input tensor is not initialized.
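One way to confirm which side is wrong (a sketch in double precision, independent of the reproduction above) is to let gradcheck compare the analytic gradient against finite differences:
```python
# gradcheck compares autograd's gradient with numerical finite differences;
# a failure here on GPU (but not CPU) would point at the iFFT backward kernel.
x64 = torch.randn(8, dtype=torch.cdouble, device=device, requires_grad=True)
print(torch.autograd.gradcheck(lambda t: torch.fft.ifft(torch.fft.fft(t)), (x64,)))
```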
### Versions
I'm using NVIDIA PyTorch Docker image, version 23.05.
```
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-fast-transformers==0.4.0
[pip3] pytorch-lightning==2.0.1.post0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.0
[pip3] torch-optimizer==0.3.0
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchdata==0.6.0
[pip3] torchmetrics==0.11.3
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0.dev20221202
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @ptrblck @mruberry @peterbell10
| 9 |
889 | 109,447 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
890 | 109,446 |
torch pollutes libgomp symbols when import _C
|
oncall: binaries, triaged, module: openmp, module: third_party
|
### π Describe the bug
In torch's `__init__.py`, the _C library is imported in `RTLD_GLOBAL` mode, which pollutes the global namespace with the symbols of the bundled libgomp. When other libraries are imported later, their libgomp symbols and functions resolve to the libgomp shipped with pytorch. This name pollution causes bugs when using pytorch together with libraries that rely on their own libgomp: one may get segfaults or strange results due to libgomp version conflicts.
I noticed that in `__init__.py`, this behavior is controlled by the variable `USE_GLOBAL_DEPS`. Disabling this option can solve the problem. Is there a way to configure this variable at runtime?
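One workaround that may help until this is configurable (a sketch — the library path is an assumption for this particular system) is to preload the desired libgomp before importing torch, so that later symbol resolution under `RTLD_GLOBAL` binds to it instead of the bundled copy:
```python
import ctypes
# Load the system libgomp first so its symbols win over torch's bundled copy.
ctypes.CDLL("/usr/lib/x86_64-linux-gnu/libgomp.so.1", mode=ctypes.RTLD_GLOBAL)
import torch  # noqa: E402
```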
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.27.4
Libc version: glibc-2.31
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.11.0-36-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050 Ti
Nvidia driver version: 470.199.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 28
On-line CPU(s) list: 0-27
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-7940X CPU @ 3.10GHz
Stepping: 4
CPU MHz: 3100.000
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 6199.99
Virtualization: VT-x
L1d cache: 448 KiB
L1i cache: 448 KiB
L2 cache: 14 MiB
L3 cache: 19.3 MiB
NUMA node0 CPU(s): 0-27
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rd
tscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr p
dcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti ssbd mb
a ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clf
lushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window h
wp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] cudatoolkit 10.1.243 h6bb024c_0
[conda] mkl 2019.0 118
[conda] mkl-service 1.1.2 py37h90e4bf4_5
[conda] mkl_fft 1.0.4 py37h4414c95_1
[conda] mkl_random 1.0.1 py37h4414c95_1
[conda] numpy 1.21.6 <pip>
[conda] numpy 1.15.4 py37h99e49ec_0
[conda] numpy-base 1.15.4 py37h2f8d375_0
[conda] numpydoc 0.8.0 py37_0
[conda] pytorch 1.3.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
cc @seemethere @malfet
| 1 |
891 | 109,445 |
Memory usage steadily increasing when using back propagation with sparse CSR parameter matrices on CPU
|
module: sparse, module: memory usage, triaged
|
### π Description of the Bug
I encountered steadily increasing RAM usage when using back propagation with sparse CSR matrices on CPU. Below is the reduced code that reproduces this behavior for me. As you can see, I was trying to train an RNN with sparse parameter matrices (in my actual use case they really were sparse). When running this code with the non-working line, RAM usage keeps increasing at a rate of about 100 MB/s on my machine.
```python
import torch
class my_net(torch.nn.Module):
    def __init__(self):
        super(my_net, self).__init__()
        #self.A = torch.nn.Parameter(torch.rand(100, 100)) # works
        #self.A = torch.nn.Parameter(torch.rand(100, 100).to_sparse_coo()) # works
        self.A = torch.nn.Parameter(torch.rand(100, 100).to_sparse_csr()) # doesn't work
        self.B = torch.nn.Parameter(torch.rand(1, 100))

    def forward(self, u, state): # u not used for simplicity
        state = torch.sparse.mm(self.A, state)
        y = torch.sparse.mm(self.B, state)
        return state, y

rnn_net = my_net()
for epoch in range(1000000):
    print(f"epoch {epoch}")
    rnn_net.zero_grad()
    state = torch.zeros(100, 1)
    for ii in range(2):
        state, output = rnn_net.forward(torch.tensor([[0]]), state)
    output.backward()
```
Notably, when using a dense or sparse COO layout, it works just fine. I tried some possible solutions, such as manual garbage collection and detaching the state and output variables from the computational graph in the outer loop. Nothing short of calling detach() on the state in the inner loop worked for me, which would obviously defeat the purpose of an RNN. I therefore concluded that this is likely unintended behavior.
My setup is in no way special. I just installed PyTorch using pip and ran my code.
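A quick way to quantify the growth (a sketch assuming `psutil` is installed) is to sample the process RSS around the training loop shown above:
```python
import os
import psutil

proc = psutil.Process(os.getpid())
rss_start = proc.memory_info().rss
# ... run a fixed number of epochs of the loop above ...
rss_end = proc.memory_info().rss
print(f"RSS grew by {(rss_end - rss_start) / 1e6:.1f} MB")
```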
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Linux Mint 21.2 (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 3400,0000
CPU min MHz: 400,0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualisation: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.5
[pip3] numpydoc==1.5.0
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.1
[pip3] torchsparsegradutils==0.1.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] No relevant packages
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 3 |
892 | 109,443 |
RNN Documentation is Confusing / Wrong
|
module: docs, module: nn, module: rnn, triaged, actionable
|
### π The doc issue
The docs on https://pytorch.org/docs/stable/generated/torch.nn.RNN.html say:
> For each element in the input sequence, each layer computes the following function:
>
> h(t) = tanh(x(t) * W_ih^T + b_ih + h(t-1) * W_hh^T + b_hh)
>
> Where:
>
> h(t) is the hidden state at time t.
> x(t) is the input at time t.
> h(t-1) is the hidden state of _the previous layer_ at time t-1 or the initial hidden state at time 0.
I'd like to point out the ambiguity surrounding the term "the previous layer" in the context above. The term "layer" here appears to be inconsistent with the definition used in the "num_layers" argument.
For an RNN with `num_layers=1`, the phrase "hidden state of the previous layer at time t-1" could be misinterpreted. Does this mean that we always refer to the initial hidden state since there's no preceding layer? This would imply the RNN isn't recurrent.
Upon testing a `num_layers=1` RNN, I observed that changing the input at t=1 did indeed influence the output at t=10, confirming the network's recurrent nature. Thus, I suspect the correct phrasing should be that x(t) is taken "from the previous layer".
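A minimal sketch of that check (standalone, not the exact code I used):
```python
import torch

rnn = torch.nn.RNN(input_size=4, hidden_size=4, num_layers=1)
x = torch.zeros(10, 1, 4)        # (seq_len, batch, input_size)
x2 = x.clone()
x2[1] += 1.0                     # perturb the input at t=1
out1, _ = rnn(x)
out2, _ = rnn(x2)
print((out1[9] - out2[9]).abs().max())  # non-zero: t=1 influences t=10
```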
One last thing: It would be useful to have a bit of information on how the RNN is implemented.
Is there a trick to evaluate it over a whole sequence as a batch operation?
Or is there a python/C loop under the hood, making `L` matrix multiplications sequentially for each layer?
### Suggest a potential alternative/fix
> h(t) is the hidden state at time t.
> x(t) is the input at time t _or output from the previous layer_.
> h(t-1) is the hidden state of _the same layer_ at time t-1 or the initial hidden state at time 0.
cc @svekars @carljparker @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @zou3519
| 4 |
893 | 109,442 |
CPU memory cannot get released after `torch.compile` (caused by importing `AsyncCompile`)
|
triaged, oncall: pt2, module: inductor
|
### π Describe the bug
I noticed that the following line (presumably executed upon first compilation) creates child processes that never terminate. This causes problems because (CPU) memory released in the main process (but allocated before the child processes were created) is still referenced in the child processes.
```python
from torch._inductor.codecache import AsyncCompile
```
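A sketch of how the lingering workers can be observed (it assumes `psutil` is installed; whether they appear at import time or only at the first compile is exactly the uncertainty noted above):
```python
import os
import psutil
from torch._inductor.codecache import AsyncCompile  # noqa: F401

children = psutil.Process(os.getpid()).children(recursive=True)
print([(p.pid, p.name()) for p in children])
```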
### Error logs
NA
### Minified repro
NA
### Versions
```
root@92383b32f00e:~# CUDA_VISIBLE_DEVICES="" python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.27.0
Libc version: glibc-2.27
Python version: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 3783.358
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] pytorch-minimize==0.0.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.0
[pip3] torchdata==0.6.0
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.15.0
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] pytorch-cuda 11.7 h778d358_3 pytorch
[conda] pytorch-minimize 0.0.2 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.0 py310_cu117 pytorch
[conda] torchdata 0.6.0 py310 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.15.0 py310 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0.15.0 py310_cu117 pytorch
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 1 |
894 | 109,441 |
Improve IDE Type Hinting for torch.Tensor class methods
|
open source
|
Adding _tensor.pyi to the torch package lets IDEs such as PyCharm pick up type-hinting information that was previously not found because it was contained in torch._C.__init__.pyi.
This commit will enable autocomplete for all torch.Tensor class methods for the PyCharm IDE
Fixes #109438
| 3 |
895 | 109,440 |
[FSDP] supports QLora finetuning
|
feature, triaged, module: fsdp
|
### π The feature, motivation and pitch
Currently FSDP rejects tensor parameters with dtype uint8: is_floating_point() only allows torch.float64, torch.float32, torch.float16, and torch.bfloat16.
```
torch/distributed/fsdp/flat_param.py:720
>>> if dtype is None and not tensor.is_floating_point():
>>>     raise ValueError("Cannot flatten integer dtype tensors")
```
In the QLoRA setup, the base model is loaded with uint8 dtype, which makes QLoRA + FSDP incompatible.
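To illustrate why the check trips (a trivial sketch):
```python
import torch

w = torch.zeros(4, 4, dtype=torch.uint8)   # e.g. a quantized base-model weight
print(w.is_floating_point())               # False -> "Cannot flatten integer dtype tensors"
```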
### Alternatives
I have considered simply removing this dtype check, but I am concerned about unknown side effects.
### Additional context
_No response_
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
| 0 |
896 | 109,437 |
Can dtensor flexibly modify the layout via devicemesh?
|
oncall: distributed, module: dtensor
|
### π The feature, motivation and pitch
Hi,torch team:
A large tensor cannot be stored on a single device and usually needs to be chunked globally. It would be great if DTensor could flexibly change the layout according to a DeviceMesh (or torch.chunk)!
<img width="1016" alt="image" src="https://github.com/pytorch/pytorch/assets/48181529/17401024-cab9-4e63-aae4-e1abcfb28707">
### Alternatives
```python
@spawn_threads_and_init_comms
def shard_big_tensor(world_size):
    mesh = DeviceMesh("cpu", [[0, 1], [2, 3]])
    mesh1 = DeviceMesh("cpu", [0, 1, 2, 3])
    big_tensor = torch.range(0, 15).reshape(4, 4)
    dtensor = distribute_tensor(big_tensor, mesh, [Shard(0), Shard(1)])
    print(f"on rank: {dist.get_rank()}", dtensor.to_local())
    dtensor1 = distribute_tensor(dtensor, mesh1, [Shard(0)])
    print(f"on rank: {dist.get_rank()}", dtensor1.to_local())
```
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 2 |
897 | 109,432 |
gh-108197 Update in `AdaptiveMaxPool2d` func of `pytorch/torch/nn/modules/pooling.py`
|
triaged, open source, release notes: nn, topic: docs
|
Fixes #108197
| 4 |
898 | 109,427 |
[Dynamo] Match closures by code ID
|
triaged, open source, Merged, Reverted, ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
|
Closes https://github.com/pytorch/pytorch/issues/107866
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 20 |
899 | 109,425 |
Cannot export a quantized model that permutes a quantized tensor to ONNX
|
module: onnx, oncall: quantization, triaged
|
### π Describe the bug
I'm attempting to export a quantized model to ONNX that contains a number of tensor permutations. The input (and expected output) of these permutations are quantized tensors.
I can successfully quantize the model via `prepare_fx` and `convert_fx`, and generate a ScriptModule via `torch.jit.script`. However, attempts to export the quantized model to ONNX fail. ONNX (and ONNX Runtime) support permutation (they call it transpose) of quantized tensors, but it appears the symbolic function itself doesn't support the quantized version of the base operator.
This simple model reproduces the error.
```
import io
import torch
from torch.ao.quantization import (
    get_default_qconfig_mapping,
    get_default_qconfig,
)
from torch.ao.quantization.quantize_fx import (
    prepare_fx,
    convert_fx,
)

class TransposeModel(torch.nn.Module):
    def __init__(self, dims: torch.types._size):
        super().__init__()
        self.dims = dims

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x = torch.permute(x, self.dims)
        x = x.permute(self.dims)
        return x

if __name__ == "__main__":
    model = TransposeModel(dims=(0, 2, 1))
    qconfig = get_default_qconfig("qnnpack")
    qconfig_mapping = get_default_qconfig_mapping("qnnpack") \
        .set_global(qconfig)
    inputs = {
        "x": torch.reshape(torch.arange(0, 99, dtype=torch.float32), (1, 3, 33)),
    }
    prepared = prepare_fx(
        model=model,
        qconfig_mapping=qconfig_mapping,
        example_inputs=tuple((v for v in inputs.values())),
    )
    quantized = convert_fx(
        graph_module=prepared,
        qconfig_mapping=qconfig_mapping,
    )
    quantized.graph.print_tabular()
    script_module = torch.jit.script(
        obj=quantized,
        example_inputs=[tuple((v for v in inputs.values()))],
    )
    with io.BytesIO() as f:
        torch.onnx.export(
            f=f,
            model=script_module,
            args=[v for v in inputs.values()],
            input_names=["x"],
            output_names=["x"],
            verbose=True,
        )
```
The error itself:
```
torch.onnx.errors.SymbolicValueError: ONNX symbolic expected the output of `%permute : Tensor(*, *, *) = onnx::Transpose[perm=[0, 2, 1]](%quantize_per_tensor), scope: torch.fx.graph_module.GraphModule:: # <eval_with_key>.6:8:14
` to be a quantized tensor. Is this likely due to missing support for quantized `onnx::Transpose`. Please create an issue on https://github.com/pytorch/pytorch/issues [Caused by the value 'permute defined in (%permute : Tensor(*, *, *) = onnx::Transpose[perm=[0, 2, 1]](%quantize_per_tensor), scope: torch.fx.graph_module.GraphModule:: # <eval_with_key>.6:8:14
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Transpose'.]
(node defined in File "<eval_with_key>.6", line 8
_input_zero_point_0 = self._input_zero_point_0
quantize_per_tensor = torch.quantize_per_tensor(x, _input_scale_0, _input_zero_point_0, torch.quint8); x = _input_scale_0 = _input_zero_point_0 = None
permute = quantize_per_tensor.permute((0, 2, 1)); quantize_per_tensor = None
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
dequantize_1 = permute.dequantize(); permute = None
return dequantize_1
)
Inputs:
#0: quantize_per_tensor defined in (%quantize_per_tensor : Tensor = prim::TupleConstruct(%7, %6, %5), scope: torch.fx.graph_module.GraphModule:: # <eval_with_key>.6:7:26
) (type 'Tensor')
Outputs:
#0: permute defined in (%permute : Tensor(*, *, *) = onnx::Transpose[perm=[0, 2, 1]](%quantize_per_tensor), scope: torch.fx.graph_module.GraphModule:: # <eval_with_key>.6:8:14
) (type 'Tensor')
```
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5879.8818
CPU min MHz: 3000.0000
BogoMIPS: 8982.37
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 1 |
900 | 109,422 |
Use weakref in fast tracebacks
|
triaged, open source, topic: not user facing, module: dynamo
|
I think this fixes https://github.com/pytorch/pytorch/issues/107469#issuecomment-1713031756
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 13 |