Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---
2,401 | 102,966 |
How to work around the error "don't have an op for vulkan_prepack::create_linear_context"?
|
module: build, triaged, module: vulkan, ciflow/periodic
|
### 🐛 Describe the bug
I have a modified ResNet-50 network that I want to run on Android using the Vulkan backend.
The custom build of PyTorch with USE_VULKAN=1 works fine, but I get the error message "We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case." during the "optimize_for_mobile" API invocation.
What's the problem here, and how can I deal with it?
(I tried on both release 1.13 and release v2.0.1 tags, but got the same error message above).
```
git clone -b release/1.13 --recursive https://github.com/pytorch/pytorch
cd pytorch
git submodule sync
git submodule update --init --recursive
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build # or cmake-gui build
BUILD_LITE_INTERPRETER=0 USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python setup.py develop
BUILD_LITE_INTERPRETER=0 ANDROID_ABI=arm64-v8a USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 bash ./scripts/build_android.sh
BUILD_LITE_INTERPRETER=0 ANDROID_ABI=arm64-v8a USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 bash ./scripts/build_pytorch_android.sh
```
```
>>> import torch
>>> import os
>>>
>>> from torch.utils.mobile_optimizer import optimize_for_mobile
>>>
>>> #file_dir = '.'
>>> file_dir = '../pytorch-script/'
>>> model = torch.jit.load(file_dir + '/modified-resnet50-image.pt')
>>> model.eval()
RecursiveScriptModule(original_name=ImageModel)
>>> script_model = torch.jit.script(model)
>>> script_model_vulkan = optimize_for_mobile(script_model, backend='vulkan')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/DataExt/devroot/src/pytorch/torch/utils/mobile_optimizer.py", line 67, in optimize_for_mobile
optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: 0 INTERNAL ASSERT FAILED at "/mnt/DataExt/devroot/src/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case. Argument types: Tensor, Tensor,
Candidates:
>>> exit()
```
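For reference, one way to check whether the Vulkan prepack ops made it into a given desktop build is to look them up in the JIT operator registry. This is only a diagnostic sketch, not a confirmed fix; it relies on a private helper that may change between releases, and the op name is taken from the error above.
```python
import torch

# Sketch: list registered vulkan_prepack ops. An empty result suggests optimize_for_mobile
# is running against a torch build that does not register these ops.
registered = {schema.name for schema in torch._C._jit_get_all_schemas()}
print("vulkan_prepack::create_linear_context" in registered)
print(sorted(name for name in registered if name.startswith("vulkan_prepack::")))
```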
### Versions
Collecting environment information...
PyTorch version: 2.0.0a0+gite9ebda2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.11.3 (main, Apr 19 2023, 23:54:32) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A4000
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 45 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218N CPU @ 2.30GHz
Stepping: 7
CPU MHz: 2294.609
BogoMIPS: 4589.21
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 16 MiB
L3 cache: 22 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.13.0a0+gitd922c29
[conda] blas 1.0 mkl
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.24.3 py311h08b1b3b_1
[conda] numpy-base 1.24.3 py311hf175353_1
[conda] torch 1.13.0a0+gitd922c29 pypi_0 pypi
cc @malfet @seemethere
| 2 |
2,402 | 102,963 |
torch.svd fails on large matrices
|
module: cuda, triaged, module: cublas, module: linear algebra
|
### 🐛 Describe the bug
I had to compute the SVD of a 300000x10000 matrix (3B elements) on an A100 GPU, but it fails...
```
>>> import torch
>>> device = 'cuda:5'
>>> a = torch.randn(300000, 10000, device=device)
>>> torch.svd(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: cusolver error: CUSOLVER_STATUS_INVALID_VALUE, when calling `cusolverDnSgesvdj_bufferSize(handle, jobz, econ, m, n, A, lda, S, U, ldu, V, ldv, lwork, params)`
```
Please note that this bug is not always present, but it seems like most of the runs fail.
Maybe the issue is that 3b > maxint and the CUDA code still uses int32_t somewhere?
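A possible workaround sketch while this is open, assuming the failure really is a 32-bit overflow inside the cuSOLVER path: reduce the problem with a thin QR factorization first and take the SVD of the small R factor, since A = QR = (Q @ U_r) S V^T.
```python
import torch

# Workaround sketch (assumption: the overflow is in cuSOLVER's buffer-size computation).
device = "cuda:5"
a = torch.randn(300000, 10000, device=device)
q, r = torch.linalg.qr(a, mode="reduced")           # q: (300000, 10000), r: (10000, 10000)
u_r, s, vh = torch.linalg.svd(r, full_matrices=False)
u = q @ u_r                                          # left singular vectors of the original matrix
```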
### Versions
Collecting environment information...
PyTorch version: 1.13.0a0+936e930
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-64-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.42.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 107
On-line CPU(s) list: 0-106
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 107
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7662 64-Core Processor
Stepping: 0
CPU MHz: 1996.104
BogoMIPS: 3992.20
Virtualization: AMD-V
L1d cache: 6.7 MiB
L1i cache: 6.7 MiB
L2 cache: 53.5 MiB
L3 cache: 1.7 GiB
NUMA node0 CPU(s): 0-52
NUMA node1 CPU(s): 53-106
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni
pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Versions of relevant libraries:
[pip3] functorch==1.13.0a0+936e930
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.13.0a0+936e930
[pip3] torch-tensorrt==1.3.0a0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[conda] Could not collect
```
cc @ptrblck @csarofeen @xwang233 @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @Lezcano
| 1 |
2,403 | 102,958 |
[Inductor] add debugging tools
|
triaged, open source, Stale, topic: not user facing, intel, module: inductor, ciflow/inductor
|
Add debugging tools for Inductor.
### Description
- Graph merger: Merge the FX graphs of a model into a single large FX graph, so graphs can be compared quickly between different versions of PyTorch.
- Graph matching: To show what each kernel does, this tool matches each cpp kernel with its FX graph operators and adds the corresponding operators as comments before each kernel in the cpp output code.
#### Example result of graph merger:
```
GRAPH_INDEX:0
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: f32[2, 3]):
# File: test_cpu_repro.py:1845, code: x = x.pow(5)
pow_1: f32[2, 3] = torch.ops.aten.pow.Tensor_Scalar(arg0_1, 5); arg0_1 = None
return (pow_1,)
GRAPH_INDEX:1
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: f32[2, 3]):
# File: test_cpu_repro.py:1847, code: return torch.sin(x)
sin: f32[2, 3] = torch.ops.aten.sin.default(arg0_1); arg0_1 = None
return (sin,)
```
#### Example result of graph matching:
```
#line 42: pow_1 = torch.ops.aten.pow.Tensor_Scalar(arg0_1, 5); arg0_1 = None
#line 43: sin = torch.ops.aten.sin.default(pow_1); pow_1 = None
cpp_fused_pow_sin_0 = async_compile.cpp('''
#include "/tmp/torchinductor_root/mq/cmqzxwuyo7ryvun3egqos5jq5ak4fue7d2jbopbqs7pgpkhdpfh4.h"
extern "C" void kernel(const float* in_ptr0,
float* out_ptr0)
{
{
for(long i0=static_cast<long>(0L); i0<static_cast<long>(48L); i0+=static_cast<long>(16L))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<long>(i0));
auto tmp1 = tmp0 * tmp0;
auto tmp2 = tmp1 * tmp1;
auto tmp3 = tmp2 * tmp0;
auto tmp4 = tmp3.sin();
tmp4.store(out_ptr0 + static_cast<long>(i0));
}
#pragma omp simd simdlen(8)
for(long i0=static_cast<long>(48L); i0<static_cast<long>(60L); i0+=static_cast<long>(1L))
{
auto tmp0 = in_ptr0[static_cast<long>(i0)];
auto tmp1 = decltype(tmp0)(tmp0 * tmp0);
auto tmp2 = decltype(tmp1)(tmp1 * tmp1);
auto tmp3 = decltype(tmp2)(tmp2 * tmp0);
auto tmp4 = std::sin(tmp3);
out_ptr0[static_cast<long>(i0)] = tmp4;
}
}
}
''')
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @Xia-Weiwen @ngimel
| 4 |
2,404 | 102,953 |
TypeError: (): incompatible function arguments
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
Hello, I am customizing process group backends using cpp extensions according to the PyTorch tutorial [Customize Process Group Backends Using Cpp Extensions - PyTorch Tutorials 2.0.1+cu117 documentation](https://pytorch.org/tutorials/intermediate/process_group_cpp_extension_tutorial.html). But an error occurred. It seems to be a type error, but I find the type is right.
**In addition, I tried on the PyTorch Release 23.05 image and found no issues. However, when I uninstalled torch2.0.0 and reinstalled it through pip install, I was able to reproduce the type error. Does this mean that installing torch through pip install may miss some dependencies, causing type errors?**
The specific error message is shown below:
```
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 895, in init_process_group
default_pg = _new_process_group_helper(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1034, in _new_process_group_helper
backend_class = creator_fn(store, group_rank, group_size, timeout)
TypeError: (): incompatible function arguments. The following argument types are supported:
1. (arg0: c10d::Store, arg1: int, arg2: int, arg3: datetime.timedelta) -> c10d::Backend
Invoked with: <torch.distributed.distributed_c10d.PrefixStore object at 0x7f8db2ec9230>, 1, 4, datetime.timedelta(seconds=1800)
```
### Versions
nvcr.io/nvidia/pytorch:23.05-py3
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,405 | 102,948 |
[onnx] aten::cumprod cannot be exported to ONNX
|
module: onnx, triaged, OSS contribution wanted
|
### 🐛 Describe the bug
aten::cumprod fails to be exported to ONNX
```
import torch
from torch import nn
class Test(nn.Module):
    def forward(self, x):
        return torch.cumprod(x, dim=-1)

if __name__ == '__main__':
    model = Test()
    torch.onnx.export(model, torch.randn(1, 1, 1024, 65), "cumprod.onnx")
```
error message:
```
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::cumprod' to ONNX opset version 14 is not supported.
```
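A hedged workaround sketch until a symbolic is added: for strictly positive inputs (an assumption about this model's value range, not a general fix), cumprod can be rewritten as exp(cumsum(log(x))), which only uses ops with ONNX symbolics.
```python
import torch
from torch import nn

# Workaround sketch: cumprod(x) == exp(cumsum(log(x))) for x > 0; Log, CumSum and Exp export fine.
class CumprodViaLog(nn.Module):
    def forward(self, x):
        return torch.exp(torch.cumsum(torch.log(x), dim=-1))

torch.onnx.export(CumprodViaLog(), torch.rand(1, 1, 1024, 65) + 0.1, "cumprod_workaround.onnx")
```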
### Versions
PyTorch version: 2.0.1+cu117
python : 3.9
| 1 |
2,406 | 102,947 |
torch.onnx.export error ------RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
|
module: onnx, triaged
|
```
/home/xxx/anaconda3/envs/prediction/lib/python3.11/site-packages/torch/onnx/utils.py:689: UserWarning: Constant folding in symbolic shape inference fails: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:439.)
_C._jit_pass_onnx_graph_shape_type_inference(
============= Diagnostic Run torch.onnx.export version 2.0.0+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/home/xxx/depoly_and_infer/convert_onnx.py", line 15857, in <module>
torch.onnx.export(
File "/home/xxx/anaconda3/envs/prediction/lib/python3.11/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/xxx/anaconda3/envs/prediction/lib/python3.11/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/home/xxx/anaconda3/envs/prediction/lib/python3.11/site-packages/torch/onnx/utils.py", line 1180, in _model_to_graph
params_dict = _C._jit_pass_onnx_constant_fold(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
Code here:
```python
#step 1, ok
prob , coord = model(target_history_mcg_input_data.cuda(),
target_history_lstm_data.cuda(),
target_history_lstm_data_diff.cuda(),
other_history_mcg_input_data.cuda(),
other_agent_history_scatter_numbers.cuda(),
other_agent_history_scatter_idx.cuda(),
other_history_lstm_data.cuda(),
other_history_lstm_data_diff.cuda(),
road_network_embeddings.cuda(),
road_network_scatter_numbers.cuda(),
road_network_scatter_idx.cuda())
print(prob)
print(coord)
#step 2, error
torch.onnx.export(
model,
(target_history_mcg_input_data.cuda(),
target_history_lstm_data.cuda(),
target_history_lstm_data_diff.cuda(),
other_history_mcg_input_data.cuda(),
other_agent_history_scatter_numbers.cuda(),
other_agent_history_scatter_idx.cuda(),
other_history_lstm_data.cuda(),
other_history_lstm_data_diff.cuda(),
road_network_embeddings.cuda(),
road_network_scatter_numbers.cuda(),
road_network_scatter_idx.cuda()),
onnx_path,
input_names=["target_history_mcg_input_data",
"target_history_lstm_data",
"target_history_lstm_data_diff",
"other_history_mcg_input_data",
"other_agent_history_scatter_numbers",
"other_agent_history_scatter_idx",
"other_history_lstm_data",
"other_history_lstm_data_diff",
"road_network_embeddings",
"road_network_scatter_numbers",
"road_network_scatter_idx"],
output_names=["probas", "coordinates"],
dynamic_axes= None,
opset_version= 11,
custom_opsets={"torch_scatter":11},
# verbose=True,
)
```
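A hedged workaround sketch, not a confirmed fix: either export from a CPU copy of the model and inputs, or disable constant folding so the `_jit_pass_onnx_constant_fold` step that mixes devices is skipped. `model`, `example_inputs`, and `onnx_path` below stand in for the objects from the snippet above.
```python
import torch

# Sketch only: exporting from CPU copies (or passing do_constant_folding=False)
# avoids the CPU/CUDA mix seen during constant folding.
model_cpu = model.cpu().eval()
cpu_inputs = tuple(t.cpu() for t in example_inputs)
torch.onnx.export(
    model_cpu,
    cpu_inputs,
    onnx_path,
    opset_version=11,
    do_constant_folding=False,
)
```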
| 2 |
2,407 | 102,938 |
Support for efficiently processing categorical distributions with varying dimensions
|
module: distributions, feature, triaged, needs research
|
### 🚀 The feature, motivation and pitch
Say I have a list of probability distribution tensors of varying size. It would be nice if `Categorical` could accept a flattened tensor `probs_flat` containing all the distributions, along with a pointer `ptr` that indicates the starting index of each distribution. Standard functions can efficiently be computed, e.g. entropy can be computed as `-segment(probs_flat.log() * probs_flat, ptr)`, where `segment` is something like `segment_add_csr` from `torch_scatter` (I'm not sure if torch provides this function out of the box).
I'm not too sure of the best way to integrate this with the existing code. Just an idea!
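A rough sketch of what the proposed flattened layout could look like using only stock ops (`index_add_` plus `repeat_interleave` standing in for `segment_add_csr`; the `ptr` offsets follow the proposal above):
```python
import torch

# Sketch of the "flattened probs + ptr" entropy computation with stock ops only.
probs_list = [torch.rand(n).softmax(0) for n in (3, 5, 2)]
probs_flat = torch.cat(probs_list)
ptr = torch.tensor([0, 3, 8, 10])          # CSR-style start offsets (plus final end offset)

sizes = ptr[1:] - ptr[:-1]
seg_id = torch.repeat_interleave(torch.arange(len(sizes)), sizes)
entropy = torch.zeros(len(sizes)).index_add_(0, seg_id, -probs_flat * probs_flat.log())
print(entropy)                             # one entropy value per distribution
```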
### Alternatives
Let `probs_list` be a list of probability distribution tensors, and `xs` be a list of samples from their respective distributions. There are currently two options to e.g. evaluate the log-probabilities of the samples:
1. (evaluate for each)
```Python
log_probs = torch.stack([
    Categorical(probs=probs).log_prob(x)
    for probs, x in zip(probs_list, xs)
])
```
2. (pad, stack, evaluate)
```Python
max_size = max(probs.numel() for probs in probs_list)
probs_padded_list = []
for probs in probs_list:
    probs_padded = probs.new_zeros(max_size)
    probs_padded[:probs.numel()] = probs
    probs_padded_list += [probs_padded]
pi = Categorical(torch.stack(probs_padded_list))
log_probs = pi.log_prob(torch.stack(xs))
```
### Additional context
This can be useful in RL, where the number of possible actions varies between environment steps, as a less wasteful alternative to action masking.
cc @fritzo @neerajprad @alicanb @nikitaved
| 0 |
2,408 | 102,936 |
torch.cuda.is_available() returns False on GTX 1650 with cuda 11.7 and torch==2.0.0+cpu
|
module: build, triaged
|
### 🐛 Describe the bug
I get False when I run torch.cuda.is_available().
nvidia-smi
**+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.98 Driver Version: 535.98 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1650 WDDM | 00000000:01:00.0 On | N/A |
| 30% 39C P0 15W / 75W | 1145MiB / 4096MiB | 1% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+**
I even installed CUDA 11.7 and nvidia-smi still shows 12.2.
When I run pip show torch it gives me version 2.0.1+cu117, which is why I installed CUDA 11.7, but I previously had 12.2.
**C:\Users\ambsu>pip show torch
Name: torch
Version: 2.0.1+cu117**
py -m torch.utils.collect_env
gives
**Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0+cpu
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2
[conda] Could not collect**
Everything is showing different versions.
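A quick diagnostic sketch of what the mixed versions imply: the interpreter that reports False is importing the CPU-only wheel, so the result is expected until a single matching +cu117 install is in place.
```python
import torch

# Diagnostic sketch: a CPU-only wheel (torch==2.0.0+cpu) cannot see the GPU.
print(torch.__version__)          # e.g. '2.0.0+cpu' -> built without CUDA support
print(torch.version.cuda)         # None for CPU-only wheels
print(torch.cuda.is_available())  # stays False for such a build
```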
### Error logs
**Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0+cpu
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2
[conda] Could not collect**
nvidia-smi
**+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.98 Driver Version: 535.98 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1650 WDDM | 00000000:01:00.0 On | N/A |
| 30% 39C P0 15W / 75W | 1145MiB / 4096MiB | 1% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+**
### Minified repro
```python
import torch
print(torch.cuda.is_available())  # False
```
### Versions
cuda 11.7 gtx
torch 2.0.1 and 2.0.0
cc @malfet @seemethere @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
2,409 | 102,921 |
Unbox expectedml
|
triaged
|
### 🐛 Describe the bug
Stable Diffusion failed to generate an image. In an unusual twist, the console asks me to submit a bug report to PyTorch. So I am, although I don't exactly have details. SD has been acting up this evening, despite doing just fine for two days. I've never seen this before. This occurred while I was using Neurogen 1.1.
### Versions
The script pops up and disappears. If there is an output somewhere, I can't find it.
| 2 |
2,410 | 102,911 |
PackedSequences on MPS accelerator yields `grad_y` missing or crashes the kernel.
|
triaged, module: mps
|
### 🐛 Describe the bug
# PackedSequences on MPS accelerator yields `grad_y` missing or crashes the kernel.
This issue is related to [Issue 96416][loss error on m1] ([Loss.backward() error when using MPS on M1 #96416][loss error on m1]), [Issue 94691][gru nan] ([Nan is output by GRU on mps #94691][gru nan]), [Issue 97552][packed sequence failure] ([PackedSequence failure with MPS #97552][packed sequence failure]), and [PR 96601][grad_y missing fix] ([[MPS] LSTM grad_y missing fix #96601][grad_y missing fix]).
To faithfully reproduce the error I have provided a self contained [Github Gist][Github Gist].
In addition, I already pasted some [results][Results] showing that this does consistently yield `Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0 'grad_y'` or crash a Jupyter Notebook kernel.
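A self-contained sketch of the failing pattern, distilled from the linked gist (the sizes are assumptions and it needs an MPS machine to reproduce):
```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence

# Sketch: LSTM over a PackedSequence on the MPS device.
device = torch.device("mps")
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).to(device)
x = torch.randn(4, 10, 8, device=device, requires_grad=True)
lengths = torch.tensor([10, 8, 6, 4])  # lengths stay on CPU, as required
packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=True)
out, _ = lstm(packed)
out.data.sum().backward()  # reported to raise the missing `grad_y` error or crash the kernel
```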
A `env.yml` file is provided in the [Github Gist][Github Gist] and using `collect_env.py`:
```sh
# For security purposes, please check the contents of collect_env.py before running it.
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 19:01:19) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.0.1
[pip3] torchmetrics==0.11.4
[conda] numpy 1.24.3 py310haa1e00c_0 conda-forge
[conda] pytorch 2.0.1 py3.10_0 pytorch
[conda] pytorch-lightning 2.0.2 pyhd8ed1ab_0 conda-forge
[conda] torchmetrics 0.11.4 pyhd8ed1ab_0 conda-forge
```
Of note @alexdremov has started to fix the error ([[MPS] LSTM grad_y missing fix #96601][grad_y missing fix]), but it may not work with PackedSequences. Quote:
> What worries me is that the example uses packed sequences. Not sure, but this may be the problem as MPS LSTM does not support it.
>
> Concerning GRU, it simply does not work on MPS ;). No need to even test it.
[packed sequence failure]: https://github.com/pytorch/pytorch/issues/97552
[loss error on m1]: https://github.com/pytorch/pytorch/issues/96416
[grad_y missing fix]: https://github.com/pytorch/pytorch/pull/96601
[gru nan]: https://github.com/pytorch/pytorch/issues/94691
[Github Gist]: https://gist.github.com/dsm-72/1cea0601145a8b92155d8d08c90bf998
[Results]: https://github.com/pytorch/pytorch/issues/94691#issuecomment-1574365231
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 7 |
2,411 | 102,905 |
nn.ChannelShuffle1d
|
module: nn, triaged, needs research
|
### 🚀 The feature, motivation and pitch
nn.ChannelShuffle operates on 2D tensors. Either:
- Introduce nn.ChannelShuffle1d, nn.ChannelShuffle2d, etc
- Adjust nn.ChannelShuffle to support 1d as well
### Alternatives
Convert 1D -> 2D artificially before applying ChannelShuffle:
```python
channel_shuffle = nn.ChannelShuffle(4)
x = torch.arange(64).reshape(1, 8, 8)
x = channel_shuffle(x[..., None])[..., 0]
```
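For completeness, a shape-agnostic sketch of what an nn.ChannelShuffle1d could do under the hood (a plain reshape/transpose illustration, not the actual implementation):
```python
import torch

# Sketch: channel shuffle for (N, C, L), (N, C, H, W), etc.
def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    n, c, *rest = x.shape
    return (x.reshape(n, groups, c // groups, *rest)
             .transpose(1, 2)
             .reshape(n, c, *rest))

x = torch.arange(64.).reshape(1, 8, 8)   # (N, C, L)
print(channel_shuffle(x, 4).shape)       # torch.Size([1, 8, 8])
```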
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
2,412 | 102,904 |
Unable to checkpoint model and optimizer state when using Hybrid Sharding Strategy
|
high priority, oncall: distributed, triaged, module: fsdp, module: distributed_checkpoint
|
### 🐛 Describe the bug
Fully Sharded Data Parallel (FSDP) is a wrapper for sharding module parameters across data parallel workers. It supports various sharding strategies for distributed training including ShardingStrategy.HYBRID_SHARD. In this strategy, the parameters are fully sharded within a node and replicated across nodes.
Experimenting with ShardingStrategy.HYBRID_SHARD, we were able to successfully complete multiple iterations (forward and backward passes). However, trying to checkpoint the model and optimizer states throws errors.
Relevant Code Snippet to retrieve the model and optimizer states:
```
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy, StateDictType
with FSDP.state_dict_type(
    model,
    StateDictType.SHARDED_STATE_DICT,
):
    model_state = model.state_dict()
    optim_state = FSDP.optim_state_dict(model, optimizer)
```
Relevant Code Snippet to save the model and optimizer states:
```
from torch.distributed._shard.checkpoint import (
FileSystemWriter,
SavePlan,
save_state_dict,
)
from torch.distributed.checkpoint.default_planner import DefaultSavePlanner
save_state_dict(state_dict=model, storage_writer=writer, planner=DefaultSavePlanner())
if optimizer is not None:
    torch.save(optimizer, os.path.join(save_name, f"optim_{rank}.pth"))
```
Error while retrieving the optimizer state:
```
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2072, in all_gather_object
    object_list[i] = _tensor_to_object(tensor, tensor_size)
    object_list[i] = _tensor_to_object(tensor, tensor_size)
dist.all_gather_object(object_list, processed_state)
```
Error while writing model state:
```
save_state_dict(state_dict=model, storage_writer=writer, planner=DefaultSavePlanner())
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 104, in save_state_dict
central_plan = distW.reduce_scatter("plan", local_step, global_step)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 175, in reduce_scatter
all_data = self.gather_object(local_data)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 106, in gather_object
dist.gather_object(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2166, in gather_object
gather(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2713, in gather
work = default_pg.gather(output_tensors, input_tensors, opts)
```
### Versions
PyTorch Nightly Build - 05-31-2023
@lessw2020 @chauhang
cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,413 | 102,898 |
After dynamo minifier generates repros that don't entirely match what we minified over
|
triaged, module: dynamo
|
### 🐛 Describe the bug
The after dynamo minifier tests minified graphs by feeding them directly to the backend in question. However, this is not what the repro script does: the repro script runs dynamo (torch.compile) on the serialized FX graph, with the backend set. If running dynamo on a dynamo'ed FX graph is not idempotent, then you can end up with a slightly different dynamo graph, which could in turn mean you end up with a repro that doesn't actually repro.
Fortunately, for the most part, Dynamo'ing a serialized FX graph is idempotent. But there's one notable case I recently ran into where it isn't: when the graph is empty! (In that case we never bother calling the graph at all.) In my case (testing) this is easy enough to work around; I'm not sure if there are other cases that actually matter in practice.
### Versions
main
cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy
| 0 |
2,414 | 102,894 |
BCELoss and BCEWithLogitsLoss differ when one of the input logits is float("inf")
|
module: nn, triaged, module: edge cases
|
### 🐛 Describe the bug
Based on the docs, it seems like `BCEWithLogitsLoss` is supposed to be the more numerically stable sibling of `BCELoss`. However, when one of the input logits to `BCEWithLogitsLoss` is `float("inf")`, the result is `nan`, whereas `sigmoid(inf) = 1.0`, so it should probably be well defined. When I use `BCELoss(sigmoid(x), target)` it gives the desired result.
```
import torch
import torch.nn.functional as F

x = torch.tensor([float("inf"), 1.0])
y = torch.tensor([1.0, 0.0])
loss = F.binary_cross_entropy_with_logits(x, y)
print (loss)
loss = F.binary_cross_entropy(x.sigmoid(), y)
print (loss)
print(torch.nn.BCEWithLogitsLoss()(x, y))
print(torch.nn.BCELoss()(x.sigmoid(), y))
```
Output:
```
tensor(nan)
tensor(0.6566)
tensor(nan)
tensor(0.6566)
```
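A sketch of where the NaN likely comes from, assuming the kernel uses the usual log-sum-exp-style stable formulation (this mirrors the documented formula, not necessarily the exact C++ code):
```python
import torch

# Sketch of the failure mode, assuming loss = max(x, 0) - x * y + log(1 + exp(-|x|)).
# With x = +inf and y = 1, the first two terms give inf - inf = nan.
x, y = torch.tensor(float("inf")), torch.tensor(1.0)
print(x.clamp(min=0) - x * y + torch.log1p(torch.exp(-x.abs())))  # tensor(nan)
```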
### Versions
1.12.1
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
2,415 | 102,878 |
[dynamo] Diffusers - Graph break on OrderedDict
|
triaged, module: dynamo
|
UPDATE - There are simpler repros to fix in the comments below. The one in the first post requires more interaction because of how dataclass and OrderedDict work.
~~~
import diffusers
from diffusers.utils import BaseOutput
import dataclasses
from dataclasses import dataclass
import torch
@dataclass
class Transformer2DModelOutput(BaseOutput):
    """
    Args:
        sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete):
            Hidden states conditioned on `encoder_hidden_states` input. If discrete, returns probability distributions
            for the unnoised latent pixels.
    """

    sample: torch.FloatTensor

def fn(x):
    return Transformer2DModelOutput(sample=x)
x = torch.randn(4)
ref = fn(x)
opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
res = opt_fn(x)
~~~
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy
| 6 |
2,416 | 102,870 |
Inductor: delete code that extracts out sizevars by inspecting tensor inputs to find a size that handled it
|
triaged, better-engineering, module: inductor
|
### 🐛 Describe the bug
This is no longer needed since we now pass in all free symbols as explicit arguments.
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225
| 0 |
2,417 | 102,853 |
pdb but for dynamo (and time travel debugging)
|
feature, triaged, oncall: pt2
|
### 🐛 Describe the bug
Pitch: In normal eager PyTorch, I can breakpoint() inside Python code and then single step forward.
If I torch.compile() a module, I can no longer do this. What I would like is a way to do this, and single step, but in the context of dynamo symbolic tracing.
Because dynamo symbolic tracing is non-destructive, it should also be possible to time travel debug (e.g., step backwards).
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
2,418 | 102,852 |
The operator 'aten::poisson' is not currently implemented for the MPS device
|
triaged, module: mps
|
### 🚀 The feature, motivation and pitch
I am running some MCMC sampling using torch. Currently, sampling from a Poisson distribution is not supported on the MPS device when the intensity parameter is a tensor (say 2D); essentially, it cannot handle a vector/matrix of rates.
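A minimal sketch of the call that triggers it on an MPS machine (the shape and rate value are arbitrary):
```python
import torch

# Minimal repro sketch on an MPS device.
rate = torch.full((2, 3), 4.0, device="mps")
sample = torch.poisson(rate)  # reported to raise NotImplementedError for aten::poisson on MPS
```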
I get this error:
NotImplementedError: The operator 'aten::poisson' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op.
WARNING: this will be slower than running natively on MPS.
### Alternatives
_No response_
### Additional context
_No response_
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
2,419 | 102,839 |
Dynamo should only unroll loops by a preset factor (unless otherwise explicitly instructed)
|
triaged, oncall: pt2
|
### 🐛 Describe the bug
Today, dynamo is willing to unroll loops for arbitrarily many iterations. This can lead to extremely large graphs which in turn lead to slow compile times, for example, https://github.com/pytorch/pytorch/issues/102622
Instead, we should unroll only by a pre-set factor, e.g., 5 or something. If we notice that we have looped too many times (and the loop is generating graph), we should graph break, forcing us to generate separate code for the loop body. (In the further future, we could automatically host side loop'ify the graph, so we could avoid having to do guard tests on entry.)
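A small illustration of the unrolling behaviour described above (the backend and loop length are arbitrary):
```python
import torch

# Illustration: every iteration of the Python loop is traced, so a long loop
# produces one very large graph (and a correspondingly long compile).
@torch.compile(backend="eager")
def f(x):
    for _ in range(1000):
        x = x + 1
    return x

f(torch.randn(8))
```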
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
2,420 | 102,833 |
LibTorch-Lite 1.13.0.1 Crash on iOS 12 on app startup
|
triaged, module: ios
|
### 🐛 Describe the bug
When the app starts on iOS 12.5.7 (the latest version for iPhone 5s, 6 and some iPads), it crashes with the following logs:
```
dyld: Symbol not found: ___chkstk_darwin
Referenced from: /var/containers/Bundle/Application/XXXX/PlantNet.app/PlantNet
Expected in: /usr/lib/libSystem.B.dylib
in /var/containers/Bundle/Application/XXXX/PlantNet.app/PlantNet
(lldb)
```
The [Podspec](https://github.com/CocoaPods/Specs/blob/master/Specs/c/c/3/LibTorch-Lite/1.13.0.1/LibTorch-Lite.podspec.json) indicates iOS 12.0 as the deployment target.
#### Impact
All iOS users on 12.5.7 cannot use the app, as it crashes on start after the update.
Potential workarounds are time-consuming, involving releasing a specific version for them on the App Store before publishing a new app that drops support for iOS 12.
Stacktrace as follow:
```
# 0 __abort_with_payload
dyld`:
0x1087f2410 <+0>: mov x16, #0x209
0x1087f2414 <+4>: svc #0x80
-> 0x1087f2418 <+8>: b.lo 0x1087f2430 ; <+32>
0x1087f241c <+12>: stp x29, x30, [sp, #-0x10]!
0x1087f2420 <+16>: mov x29, sp
0x1087f2424 <+20>: bl 0x1087f1850 ; cerror_nocancel
0x1087f2428 <+24>: mov sp, x29
0x1087f242c <+28>: ldp x29, x30, [sp], #0x10
0x1087f2430 <+32>: ret
```
```
# 5 _dyld_start
dyld`:
0x1087b1000 <+0>: mov x28, sp
0x1087b1004 <+4>: and sp, x28, #0xfffffffffffffff0
0x1087b1008 <+8>: mov x0, #0x0
0x1087b100c <+12>: mov x1, #0x0
0x1087b1010 <+16>: stp x1, x0, [sp, #-0x10]!
0x1087b1014 <+20>: mov x29, sp
0x1087b1018 <+24>: sub sp, sp, #0x10
0x1087b101c <+28>: ldr x0, [x28]
0x1087b1020 <+32>: ldr x1, [x28, #0x8]
0x1087b1024 <+36>: add x2, x28, #0x10
0x1087b1028 <+40>: adrp x4, -1
0x1087b102c <+44>: add x4, x4, #0x0
0x1087b1030 <+48>: adrp x3, 90
0x1087b1034 <+52>: ldr x3, [x3, #0xad8]
0x1087b1038 <+56>: sub x3, x4, x3
0x1087b103c <+60>: mov x5, sp
0x1087b1040 <+64>: bl 0x1087b1088 ; dyldbootstrap::start(macho_header const*, int, char const**, long, macho_header const*, unsigned long*)
-> 0x1087b1044 <+68>: mov x16, x0
0x1087b1048 <+72>: ldr x1, [sp]
0x1087b104c <+76>: cmp x1, #0x0
0x1087b1050 <+80>: b.ne 0x1087b105c ; <+92>
0x1087b1054 <+84>: add sp, x28, #0x8
0x1087b1058 <+88>: br x16
0x1087b105c <+92>: mov x30, x1
0x1087b1060 <+96>: ldr x0, [x28, #0x8]
0x1087b1064 <+100>: add x1, x28, #0x10
0x1087b1068 <+104>: add x2, x1, x0, lsl #3
0x1087b106c <+108>: add x2, x2, #0x8
0x1087b1070 <+112>: mov x3, x2
0x1087b1074 <+116>: ldr x4, [x3]
0x1087b1078 <+120>: add x3, x3, #0x8
0x1087b107c <+124>: cmp x4, #0x0
0x1087b1080 <+128>: b.ne 0x1087b1074 ; <+116>
0x1087b1084 <+132>: br x16
```
### Versions
LibTorch-Lite 1.13.0.1
React Native 0.70.9
| 0 |
2,421 | 102,832 |
TypeError: (): incompatible function arguments
|
needs reproduction, module: cpp-extensions, triaged
|
### 🐛 Describe the bug
Hello, I am customizing process group backends using cpp extensions according to the PyTorch tutorial [Customize Process Group Backends Using Cpp Extensions - PyTorch Tutorials 2.0.1+cu117 documentation](https://pytorch.org/tutorials/intermediate/process_group_cpp_extension_tutorial.html). But an error occurred. It seems to be a type error, but I find the type is right. So how can I fix it? The following is the detailed error:
```
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 895, in init_process_group
    default_pg = _new_process_group_helper(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1029, in _new_process_group_helper
    backend_class = creator_fn(backend_prefix_store, group_rank, group_size, timeout)
TypeError: (): incompatible function arguments. The following argument types are supported:
    1. (arg0: c10d::Store, arg1: int, arg2: int, arg3: datetime.timedelta) -> c10d::Backend
```
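For context, a registration sketch along the lines of the linked tutorial; `dummy_collectives` and `createBackendDummy` are assumed names for the compiled cpp extension and its factory, and the (store, rank, world_size, timeout) calling convention comes from the error text above. A mismatch between the torch the extension was built against and the torch installed at runtime can trigger this TypeError.
```python
import os
import torch.distributed as dist
import dummy_collectives  # hypothetical: the cpp extension built from the tutorial

# Register the custom backend under a name, then initialize a process group with it.
dist.Backend.register_backend("dummy", dummy_collectives.createBackendDummy)

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("dummy", rank=0, world_size=1)
```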
### Versions
torch.__version__: 2.0.0+cu117
cc @malfet @zou3519
| 7 |
2,422 | 102,830 |
Unknow error when using `make_graphed_callables`
|
triaged, module: cuda graphs
|
### 🐛 Describe the bug
When I try to run the following code,
```python
import torch
def test_func(t1: torch.Tensor, t2: torch.Tensor):
    return t1 + t2
r1 = torch.randn([1, 4, 8, 3, 3]).to('cuda')
r2 = torch.randn([1, 4, 8, 3, 3]).to('cuda')
tmp = torch.randn([1, 4, 8, 2]).to('cuda').requires_grad_()
r2[..., 1, 1] = tmp[..., 1]
cudagraph_func = torch.cuda.make_graphed_callables(test_func, (r1, r2))
```
I got the error msg:
```
result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: operation would make the legacy stream depend on a capturing blocking stream
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
PyTorch version: 2.0.0a0+1767026
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
cc @mcarilli @ezyang
| 2 |
2,423 | 102,821 |
Unable to resume job using FSDP with 64 nodes, errors appeared during loading sharded optimizer state dict
|
triaged, module: fsdp
|
### 🐛 Describe the bug
I am following this example code to save and load optimizer states using load_sharded_optimizer_state_dict: https://github.com/pytorch/pytorch/blob/c75e064dd6a2f800476bc84d4f27d7f49cedd055/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py
The resuming went well on 1-node and 8-node jobs. But when I tested it on 64 nodes (512 workers) with a large model size, two errors appeared on two workers.
Here is the first error:
> optim_state = dist_cp_op.load_sharded_optimizer_state_dict(
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/checkpoint/optimizer.py", line 282, in load_sharded_optimizer_state_dict
state_dict[key] = _shard_tensor(
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/_shard/api.py", line 70, in _shard_tensor
st = sharding_spec.shard(tensor, src_rank=src_rank, process_group=process_group)
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/_shard/sharding_spec/chunk_sharding_spec.py", line 181, in shard
dist.scatter(
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2774, in scatter
_check_tensor_list(scatter_list, "scatter_list")
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 599, in _check_tensor_list
raise RuntimeError(
RuntimeError: Invalid function argument. Expected parameter `scatter_list` to be of type List[torch.Tensor].
And second error:
> File "/src/shopqa/generative/workflows/seq2seq/train/snapshot.py", line 363, in load_model_optim_final
optim_state = dist_cp_op.load_sharded_optimizer_state_dict(
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/checkpoint/optimizer.py", line 282, in load_sharded_optimizer_state_dict
state_dict[key] = _shard_tensor(
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/_shard/api.py", line 70, in _shard_tensor
st = sharding_spec.shard(tensor, src_rank=src_rank, process_group=process_group)
File "/opt/conda/lib/python3.9/site-packages/torch/distributed/_shard/sharding_spec/chunk_sharding_spec.py", line 172, in shard
assert local_tensor is not None
AssertionError
Also to confirm the saved checkpoint is not corrupted, I ran the same resuming job on 8 nodes with 64 workers. The checkpoint was loaded without errors.
I am wondering whether there is anything missing in how I use load_sharded_optimizer_state_dict in large-scale training?
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.26
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1522.122
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology
nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch in
vpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] botorch==0.6.0
[pip3] flake8==6.0.0
[pip3] gpytorch==1.9.1
[pip3] mypy==0.991
[pip3] mypy-boto3-batch==1.26.34
[pip3] mypy-boto3-ec2==1.26.96
[pip3] mypy-boto3-iam==1.26.91
[pip3] mypy-boto3-s3==1.26.62
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.5.10
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.0.1
[pip3] torch-model-archiver==0.7.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.1.0.dev20230523+cu117
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.10.3
[pip3] torchserve==0.7.1
[pip3] torchsnapshot-nightly==2023.3.15
[pip3] torchtext==0.15.1
[pip3] torchvision==0.16.0.dev20230523+cu117
[pip3] torchx-nightly==2023.4.21
[pip3] triton==2.0.0
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230523+cu117 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchsnapshot-nightly 2023.3.15 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230523+cu117 pypi_0 pypi
[conda] torchx-nightly 2023.4.21 pypi_0 pypi
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 11 |
2,424 | 102,814 |
mark_dynamic may error too aggressively
|
triaged, oncall: pt2, module: dynamic shapes
|
### 🐛 Describe the bug
Suppose you have pseudocode like this:
```
import torch

B = 4
x = torch.randn(B, 16)
y = torch.randn(B, 16)
torch._dynamo.mark_dynamic(x, 0)

@torch.compile()
def f(x, y):
    return x + y

f(x, y)
```
This will fail, because the addition with y will cause x to get specialized.
One might reasonably argue that this is what you want to happen, because you asked for x to be dynamic, and it wasn't. But in non-synthetic examples, it might be quite difficult to identify all of the locations you need to mark_dynamic to ensure all of the unifications work out. In particular, I was trying to understand why we fail to maintain dynamic batch size in HuggingFace T5, and the reason is that in generation, there are a decent number of graph breaks in some of the preparatory code which allocates tensors that also have batch size (remember, eager mode does not currently propagate dynamic dim). Understanding where you have to insert `mark_dynamic` also requires you to understand where all the graph breaks are happening, not an easy feat!
It would be simple enough to delete the "require that computation involving this tensor must be dynamic" test (or perhaps, keep it but under some other name; in fact, ShapeEnv is directly factored this way). But could there be something even better we can do? One observation is we might still want mark dynamic to interact with `assume_static_by_default`; the desire is to avoid a recompilation by noting that a kernel should be compiled dynamically from the getgo.
In this case, there is something we can do. We could allocate dynamic dimensions for all shapes in the beginning, and then infer all the ones that should be dynamic (e.g., because they were marked dynamic). For all other unconstrained dimensions, we would monomorphize them (that's the assume static by default). We give up the benefits of performing dynamo tracing with static shapes (something that is quite profitable for compile time), but save a recompile. This may be a case of penny-wise pound-foolish though, depending on the relative costs of allocating lots of symbols vs eating a recompile. A hybrid approach is to optimistically trace with everything static, and only if a mark_dynamic is violated, we trigger a retrace with everything dynamic by default.
As a short term unblocker, it probably makes sense to special case T5 a little so that it goes the slow route (but then successfully compiles dynamic batch size). Maybe if we can eliminate enough graph breaks things will get better.
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
2,425 | 102,812 |
[DTensor] Error in distribute_module with module._apply
|
oncall: distributed, triaged
|
### 🐛 Describe the bug
In DTensor's README, the use case of distribute_module is given, but I found an error when I tried it. The specific code is as follows:
```python
mesh = DeviceMesh(device_type="cuda", mesh=[0, 1])
def shard_params(mod_name, mod, mesh):
    rowwise_placement = [Shard(0)]

    def to_dist_tensor(t):
        return distribute_tensor(t, mesh, rowwise_placement)

    mod._apply(to_dist_tensor)
sharded_module = distribute_module(MyModule(), mesh, partition_fn=shard_params)
```
The example code in README is:
https://github.com/pytorch/pytorch/blob/683753fb0fc842afa9c43bc2a4576053af529429/torch/distributed/_tensor/README.md?plain=1#L124-L129
And the error is:
```
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at path/c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xae (0x7f3d8f9b1e6e in path/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xf3 (0x7f3d8f967a0b in path/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x3f2 (0x7f3d9e168e72 in path/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x14ef4 (0x7f3d9e131ef4 in path/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x18358 (0x7f3d9e135358 in path/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x1870c (0x7f3d9e13570c in path/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x479024 (0x7f3d6d079024 in path/torch/lib/libtorch_python.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0xd (0x7f3d8f98d79d in path/torch/lib/libc10.so)
frame #8: <unknown function> + 0x72cfd8 (0x7f3d6d32cfd8 in path/torch/lib/libtorch_python.so)
frame #9: THPVariable_subclass_dealloc(_object*) + 0x2b5 (0x7f3d6d32d305 in path/torch/lib/libtorch_python.so)
frame #10: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4e07e0]
frame #11: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4e085a]
frame #12: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4e085a]
frame #13: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4f1908]
frame #14: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4f1569]
frame #15: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4f152d]
frame #16: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x4ce758]
frame #17: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x506c83]
frame #18: _PyEval_EvalFrameDefault + 0x2412 (0x4da402 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #19: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #20: _PyFunction_Vectorcall + 0x19c (0x4e807c in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #21: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #22: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #23: _PyEval_EvalFrameDefault + 0x399 (0x4d8389 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #24: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #25: _PyFunction_Vectorcall + 0x19c (0x4e807c in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #26: _PyEval_EvalFrameDefault + 0x1150 (0x4d9140 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #27: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #28: PyEval_EvalCodeEx + 0x39 (0x585d79 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #29: PyEval_EvalCode + 0x1b (0x585d3b in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #30: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x5a5a91]
frame #31: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x5a4a9f]
frame #32: PyRun_StringFlags + 0x7b (0x5a24ab in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #33: PyRun_SimpleStringFlags + 0x3b (0x4509c4 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #34: Py_RunMain + 0x278 (0x5a1ad8 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #35: Py_BytesMain + 0x39 (0x579dd9 in /home/shao/miniconda3/envs/torchmaster/bin/python)
frame #36: <unknown function> + 0x29d90 (0x7f3d9f629d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #37: __libc_start_main + 0x80 (0x7f3d9f629e40 in /lib/x86_64-linux-gnu/libc.so.6)
frame #38: /home/shao/miniconda3/envs/torchmaster/bin/python() [0x579c8d]
Traceback (most recent call last):
File "dtensor_high.py", line 47, in <module>
main()
File "dtensor_high.py", line 44, in main
mp.spawn(example, args=(world_size,), nprocs=world_size)
File "path/torch/multiprocessing/spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "path/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "path/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "path/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "path/torch/distributed/distributed_c10d.py", line 3136, in scatter
work = default_pg.scatter(output_tensors, input_tensors, opts)
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "path/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/test/dtensor/dtensor_high.py", line 32, in example
sharded_module = distribute_module(MyModule(), mesh, partition_fn=shard_params)
File "path/torch/distributed/_tensor/api.py", line 519, in distribute_module
partition_fn(name, submod, device_mesh)
File "/home/test/dtensor/dtensor_high.py", line 30, in shard_params
mod._apply(to_dist_tensor)
File "path/torch/nn/modules/module.py", line 824, in _apply
param_applied = fn(param)
File "/home/shao/workspace/test/dtensor/dtensor_high.py", line 29, in to_dist_tensor
return distribute_tensor(t, mesh, rowwise_placement)
File "path/torch/distributed/_tensor/api.py", line 433, in distribute_tensor
output = placement._shard_tensor(local_tensor, device_mesh, idx)
File "path/torch/distributed/_tensor/placement_types.py", line 168, in _shard_tensor
mesh.scatter(output, scatter_list, mesh_dim=mesh_dim)
File "path/torch/distributed/_tensor/device_mesh.py", line 289, in scatter
fut = scatter(
File "path/torch/distributed/c10d_logger.py", line 52, in wrapper
"args": f"{args}, {kwargs}",
File "path/torch/_tensor.py", line 427, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "path/torch/_tensor_str.py", line 669, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "path/torch/_tensor_str.py", line 600, in _str_intern
tensor_str = _tensor_str(self, indent)
File "path/torch/_tensor_str.py", line 352, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "path/torch/_tensor_str.py", line 137, in __init__
nonzero_finite_vals = torch.masked_select(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitd9c8f9a
Is debug build: False
CUDA used to build PyTorch: 11.5
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-43-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
arch: x86_64
Versions of relevant libraries:
[pip3] torch==2.1.0a0+gitd9c8f9a
[conda] torch 2.1.0a0+gitd9c8f9a dev_0 <develop>
[conda] torchtriton 2.0.0 py38 pytorch
[conda] torchvision 0.15.2 py38_cu117 pytorch
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,426 | 102,811 |
`torch.poisson(torch.tensor([torch.inf]))` returns 0
|
module: distributions, module: error checking, triaged, module: edge cases
|
### ๐ Describe the bug
For `torch.poisson`, a rate of `torch.inf` results in `0`:
```python
torch.poisson(torch.tensor([torch.inf]))
```
Output:
```
tensor([0.])
```
And interestingly
```python
torch.poisson(torch.tensor([torch.inf - 1]))
```
Output:
```
tensor([9.2234e+18])
```
While for `torch.binomial`
```python
torch.binomial(torch.tensor([torch.inf]), torch.tensor([0.5]))
```
Output:
```
tensor([nan])
```
It seems that torch intends to return `nan` for `inf`? But `binomial` is somewhat inconsistent with `poisson`, and I believe `0` is not a good return value for an `inf` lambda/rate, since `0` is technically a valid sample but is nearly impossible for a *large* rate.
I don't know whether it's a bug or works as intended, so I'm tagging it as a bug for now.
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2 (arm64)
GCC version: Could not collect
Clang version: 12.0.1
CMake version: version 3.24.1
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-13.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
cc @fritzo @neerajprad @alicanb @nikitaved @malfet
| 3 |
2,427 | 102,805 |
Do smarter layout decisions with concatenate.
|
triaged, oncall: pt2, module: inductor
|
### ๐ The feature, motivation and pitch
Today, we always decide to turn the result of a concatenate into a contiguous layout (see https://github.com/pytorch/pytorch/blob/main/torch/_inductor/ir.py#L2578).
This makes sense if we're concatenating along the last dim, since all of our data is stored in contiguous tensors.
However, if we're concatenating along outer dims, and particularly if that outer dim is very small (like with stack, which lowers into a concatenate with dim size 1), then this will result in each element being stored in noncontiguous memory layouts, which may be quite unperformant to write into.
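To make the stack-to-concatenate relationship concrete, here is a purely illustrative eager-level equivalence (this is not Inductor's lowering code):
```python
import torch

a, b = torch.randn(3, 4), torch.randn(3, 4)
# stack is equivalent to concatenating the inputs along a new outer dim of size 1
stacked = torch.stack([a, b], dim=0)
catted = torch.cat([a.unsqueeze(0), b.unsqueeze(0)], dim=0)
print(torch.equal(stacked, catted))  # True
```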
Instead of always making it contiguous, we could do some more intelligent decisions on the output layout depending on the shape of the concatenate.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225
| 0 |
2,428 | 102,803 |
Improve shape padding in training.
|
feature, triaged, topic: performance, oncall: pt2, module: inductor
|
### ๐ The feature, motivation and pitch
Our shape padding currently doesn't really do that well in training. For code like
```
def f(x):
    x = x + 1
    return torch.mm(x, x) + 1
```
we get a graph that looks like this in the backwards.
```
def forward(self, full_default_8: f16[4095, 1], full_default_9: f16[1, 4095], full_default_10: f16[4096, 1], full_default_11: f16[1, 4096], permute: f16[4095, 4095], tangents_1: f16[4095, 4095]):
# No stacktrace found for following nodes
cat_default_4: f16[4095, 4096] = torch.ops.aten.cat.default([permute, full_default_8], 1)
cat_default_5: f16[4096, 4095] = torch.ops.aten.cat.default([tangents_1, full_default_9])
cat_default_6: f16[4096, 4096] = torch.ops.aten.cat.default([cat_default_5, full_default_10], 1); cat_default_5 = None
cat_default_7: f16[4096, 4096] = torch.ops.aten.cat.default([cat_default_4, full_default_11]); cat_default_4 = None
mm_default_5: f16[4096, 4096] = torch.ops.aten.mm.default(cat_default_7, cat_default_6); cat_default_7 = cat_default_6 = None
slice_tensor_6: f16[4095, 4096] = torch.ops.aten.slice.Tensor(mm_default_5, 0, 0, -1); mm_default_5 = None
slice_tensor_7: f16[4095, 4096] = torch.ops.aten.slice.Tensor(slice_tensor_6, 1, 0, 9223372036854775807); slice_tensor_6 = None
slice_tensor_4: f16[4095, 4096] = torch.ops.aten.slice.Tensor(slice_tensor_7, 0, 0, 9223372036854775807); slice_tensor_7 = None
slice_tensor_5: f16[4095, 4095] = torch.ops.aten.slice.Tensor(slice_tensor_4, 1, 0, -1); slice_tensor_4 = None
cat_default: f16[4095, 4096] = torch.ops.aten.cat.default([tangents_1, full_default_8], 1); tangents_1 = full_default_8 = None
cat_default_1: f16[4096, 4095] = torch.ops.aten.cat.default([permute, full_default_9]); permute = full_default_9 = None
cat_default_2: f16[4096, 4096] = torch.ops.aten.cat.default([cat_default_1, full_default_10], 1); cat_default_1 = full_default_10 = None
cat_default_3: f16[4096, 4096] = torch.ops.aten.cat.default([cat_default, full_default_11]); cat_default = full_default_11 = None
mm_default_2: f16[4096, 4096] = torch.ops.aten.mm.default(cat_default_3, cat_default_2); cat_default_3 = cat_default_2 = None
slice_tensor_2: f16[4095, 4096] = torch.ops.aten.slice.Tensor(mm_default_2, 0, 0, -1); mm_default_2 = None
slice_tensor_3: f16[4095, 4096] = torch.ops.aten.slice.Tensor(slice_tensor_2, 1, 0, 9223372036854775807); slice_tensor_2 = None
slice_tensor: f16[4095, 4096] = torch.ops.aten.slice.Tensor(slice_tensor_3, 0, 0, 9223372036854775807); slice_tensor_3 = None
slice_tensor_1: f16[4095, 4095] = torch.ops.aten.slice.Tensor(slice_tensor, 1, 0, -1); slice_tensor = None
# File: ../t.py:8, code: return torch.mm(x, x) + 1
add_2: f16[4095, 4095] = torch.ops.aten.add.Tensor(slice_tensor_1, slice_tensor_5); slice_tensor_1 = slice_tensor_5 = None
return [add_2]
```
The problem is that the joint graph looks like
```
add = ...
mm: f16[4095, 4095] = torch.ops.aten.mm.default(add, add)
permute: f16[4095, 4095] = torch.ops.aten.permute.default(add, [1, 0])
mm_1: f16[4095, 4095] = torch.ops.aten.mm.default(permute, tangents_1); permute = None
permute_1: f16[4095, 4095] = torch.ops.aten.permute.default(add, [1, 0]); add = None
mm_2: f16[4095, 4095] = torch.ops.aten.mm.default(tangents_1, permute_1); tangents_1 = permute_1 = None
```
So even if we, say, pad `add` in the forward pass, the way the tensor is used in the backward pass depends on `permute(add)`, which has *not* been padded.
Ideally we should be able to avoid this, either by having a smarter joint graph pattern, or by doing our matmul padding on the pre-autograd graph? Not totally sure.
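For reference, a minimal eager-mode sketch of the padding arithmetic the pass relies on (with small, made-up sizes; this is not the Inductor pass itself):
```python
import torch

# Pad an awkward inner size (127) up to a friendlier one (128), run the mm, slice back.
x = torch.randn(127, 127)
x_pad = torch.nn.functional.pad(x, (0, 1, 0, 1))   # zero-pad both dims to (128, 128)
out = torch.mm(x_pad, x_pad)[:127, :127]           # the padded row/col only contributes zeros
print(torch.allclose(out, torch.mm(x, x), atol=1e-4))  # True, up to float32 round-off
```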
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225
| 0 |
2,429 | 102,775 |
DISABLED test_make_fx_symbolic_exhaustive_special_bessel_j0_cpu_float32 (__main__.TestProxyTensorOpInfoCPU)
|
triaged, module: flaky-tests, skipped, module: ProxyTensor
|
Platforms: linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_make_fx_symbolic_exhaustive_special_bessel_j0_cpu_float32&suite=TestProxyTensorOpInfoCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_make_fx_symbolic_exhaustive_special_bessel_j0_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_proxy_tensor.py`
| 131 |
2,430 | 102,774 |
Introduce OptimizerInfos to test_optim (pt1)
|
Stale, topic: not user facing
|
This PR does two things:
1. Introduces OptimizerInfos, BUT does not use them.
2. Imports all the optimizers at the top and avoids re-accessing them later.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #102774
| 3 |
2,431 | 102,772 |
Support In-place Triangular Matrix Multiplication
|
triaged, enhancement, module: linear algebra
|
### ๐ The feature, motivation and pitch
The cuBLAS API supports Triangular Matrix Multiplication (trmm) https://docs.nvidia.com/cuda/cublas/index.html?highlight=trmm#cublasxt-t-trmm which is particularly cool because you can do `X = T X` _in-place!_.
The trmm function supports both upper and lower triangular matrices, and they can be stored in the same tensor T. (Using upper and lower half.)
This gives a nice way to do general matrix multiplication in-place using the LU-decomposition. Alternatively, if we are using an MLP we may just learn the triangular matrices L and U directly.
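To make the LU idea concrete, here is a small out-of-place sketch using today's API (sizes are made up; an in-place trmm would let the two triangular products below reuse X's storage instead of allocating):
```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 4)
X = torch.randn(4, 3)

P, L, U = torch.linalg.lu(A)   # A = P @ L @ U, L unit lower triangular, U upper triangular
Y = U @ X                      # triangular product -> trmm candidate
Y = L @ Y                      # triangular product -> trmm candidate
Y = P @ Y                      # apply the row permutation
print(torch.allclose(Y, A @ X, atol=1e-4))  # True
```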
### Alternatives
For systems without cuda support, I believe BLAS also supports trmm.
It's always possible to just fall back to normal matrix mult and allocate a new matrix.
### Additional context
See also the discussion here about in-place matrix multiplication: https://stackoverflow.com/questions/25450809/is-there-an-algorithm-to-multiply-square-matrices-in-place
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 3 |
2,432 | 102,763 |
Followup on the extra graph breaks for yolov3 model caused by layout optimization
|
triaged, module: inductor
|
### ๐ The feature, motivation and pitch
Layout optimization increases the number of graph breaks for yolov3 from 12 to 13, but the number of unique graph breaks does not change.
Run
```
python benchmarks/dynamo/torchbench.py --backend inductor --amp --accuracy --only yolov3 --training
```
You will find the number of graph breaks to be 13 in the csv file (the number of unique graph breaks is 7).
Disable layout optimization and rerun:
```
TORCHINDUCTOR_LAYOUT_OPTIMIZATION=0 python benchmarks/dynamo/torchbench.py --backend inductor --amp --accuracy --only yolov3 --training
```
We can then find the number of graph breaks to be 12 in the csv file (the number of unique graph breaks is still 7).
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel
| 2 |
2,433 | 102,761 |
Pytorch Build images for RISCV64 Devices in the nightly builds
|
module: build, feature, triaged
|
### ๐ The feature, motivation and pitch
Please provide nightly builds for RISCV64 CPUs in a similar fashion to the other nightly builds you provide.
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| 0 |
2,434 | 102,753 |
Error: no matching constructor for initialization of 'at::OptionalIntArrayRef'
|
module: build, triaged, module: macos
|
### ๐ Describe the bug
I am using the PyTorch C++ API. My code includes the line
```c++
Tensor realGrid = torch::fft::irfftn(recipGrid, {gridSize[0], gridSize[1], gridSize[2]}, c10::nullopt, "forward");
```
where `recipGrid` is a 3D tensor and `gridSize` is an array of ints. Using gcc on Linux this compiles and works as expected. But compiling with clang on Mac, it fails with a compilation error:
```
/Users/peastman/workspace/NNPOps/src/pytorch/pme/pmeCPU.cpp:302:57: error: no matching constructor for initialization of 'at::OptionalIntArrayRef' (aka 'OptionalArrayRef<long long>')
Tensor realGrid = torch::fft::irfftn(recipGrid, {gridSize[0], gridSize[1], gridSize[2]}, c10::nullopt, "forward");
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:71:22: note: candidate template ignored: could not match 'initializer_list<type-parameter-0-0>' against 'int'
constexpr explicit OptionalArrayRef(
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:67:22: note: candidate constructor template not viable: no known conversion from 'int' to 'c10::in_place_t' for 1st argument
constexpr explicit OptionalArrayRef(in_place_t ip, Args&&... args) noexcept
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:26:13: note: candidate constructor not viable: requires 1 argument, but 3 were provided
constexpr OptionalArrayRef(nullopt_t) noexcept {}
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:28:3: note: candidate constructor not viable: requires single argument 'other', but 3 arguments were provided
OptionalArrayRef(const OptionalArrayRef& other) = default;
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:30:3: note: candidate constructor not viable: requires single argument 'other', but 3 arguments were provided
OptionalArrayRef(OptionalArrayRef&& other) = default;
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:32:13: note: candidate constructor not viable: requires single argument 'other', but 3 arguments were provided
constexpr OptionalArrayRef(const optional<ArrayRef<T>>& other) noexcept
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:35:13: note: candidate constructor not viable: requires single argument 'other', but 3 arguments were provided
constexpr OptionalArrayRef(optional<ArrayRef<T>>&& other) noexcept
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:38:13: note: candidate constructor not viable: requires single argument 'value', but 3 arguments were provided
constexpr OptionalArrayRef(const T& value) noexcept
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:50:13: note: candidate constructor template not viable: requires single argument 'value', but 3 arguments were provided
constexpr OptionalArrayRef(U&& value) noexcept(
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:62:22: note: candidate constructor template not viable: requires single argument 'value', but 3 arguments were provided
constexpr explicit OptionalArrayRef(U&& value) noexcept(
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/c10/util/OptionalArrayRef.h:24:13: note: candidate constructor not viable: requires 0 arguments, but 3 were provided
constexpr OptionalArrayRef() noexcept {}
^
/Users/peastman/miniconda3/envs/openmm/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/fft.h:186:44: note: passing argument to parameter 's' here
at::OptionalIntArrayRef s=c10::nullopt,
^
1 error generated.
```
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.22.2
Libc version: N/A
Python version: 3.9.9 | packaged by conda-forge | (main, Dec 20 2021, 02:41:06) [Clang 11.1.0 ] (64-bit runtime)
Python platform: macOS-13.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] openmmtorch==1.0
[pip3] torch==1.12.1
[pip3] torchani==2.2.3.dev2+g3dfbaf4
[conda] numpy 1.21.6 py39h690d673_0 conda-forge
[conda] openmmtorch 1.0 pypi_0 pypi
[conda] pytorch 1.12.1 cpu_py39hf1faf6a_1 conda-forge
[conda] torchani 2.2.3.dev2+g3dfbaf4 dev_0 <develop>
cc @malfet @seemethere @albanD
| 1 |
2,435 | 102,751 |
DISABLED test_ddp_has_finalized (__main__.TestDistBackendWithSpawn)
|
triaged, skipped
|
Platforms: rocm
ex https://ossci-raw-job-status.s3.amazonaws.com/log/13919698024
I think it started at https://github.com/pytorch/pytorch/pull/100773
```
2023-06-01T13:49:24.8049674Z distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_has_finalized <- ../../../../opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/distributed/distributed_test.py [W Module.cpp:1349] Warning: cuDNN Benchmark limit is not supported in MIOpen and will have no effect. (function operator())
2023-06-01T13:49:24.8050524Z libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
2023-06-01T13:49:24.8051003Z libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
2023-06-01T13:49:24.8052046Z [W reducer.cpp:1319] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
2023-06-01T13:49:24.8053669Z [W reducer.cpp:1319] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
2023-06-01T13:49:24.8054874Z [E ProcessGroupNCCL.cpp:456] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=4, OpType=ALLREDUCE, NumelIn=2048, NumelOut=2048, Timeout(ms)=5000) ran for 5219 milliseconds before timing out.
2023-06-01T13:49:24.8055652Z [E ProcessGroupNCCL.cpp:470] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
2023-06-01T13:49:24.8056300Z [E ProcessGroupNCCL.cpp:476] To avoid data inconsistency, we are taking the entire process down.
2023-06-01T13:49:24.8056991Z [E ProcessGroupNCCL.cpp:829] [Rank 1] NCCL watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=4, OpType=ALLREDUCE, NumelIn=2048, NumelOut=2048, Timeout(ms)=5000) ran for 5219 milliseconds before timing out.
2023-06-01T13:49:24.8057581Z SIGABRT(6), PID: 8353, Thread 8353:
2023-06-01T13:49:24.8058243Z frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f0e4e3befab in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8059049Z frame #1: <unknown function> + 0x14420 (0x7f0e70f76420 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8059588Z frame #2: <unknown function> + 0x58d76 (0x7f0e0af4bd76 in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8060169Z frame #3: <unknown function> + 0x58c1e (0x7f0e0af4bc1e in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8060759Z frame #4: <unknown function> + 0x4e909 (0x7f0e0af41909 in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8061220Z frame #5: <unknown function> + 0xe3b5 (0x7f0e346cc3b5 in /opt/rocm/lib/libroctracer64.so.4)
2023-06-01T13:49:24.8061654Z frame #6: <unknown function> + 0x346493 (0x7f0e4ccba493 in /opt/rocm/hip/lib/libamdhip64.so.5)
2023-06-01T13:49:24.8062116Z frame #7: <unknown function> + 0xd89df (0x7f0e4ca4c9df in /opt/rocm/hip/lib/libamdhip64.so.5)
2023-06-01T13:49:24.8062585Z frame #8: hipEventSynchronize + 0x1da (0x7f0e4ca4f7ca in /opt/rocm/hip/lib/libamdhip64.so.5)
2023-06-01T13:49:24.8063240Z frame #9: <unknown function> + 0x1bdc673 (0x7f0e50049673 in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libtorch_hip.so)
2023-06-01T13:49:24.8064164Z frame #10: c10d::Logger::calculate_avg_time(long&, long&, c10d::Timer&, c10d::Timer::Event, c10d::Timer::Event) + 0x27 (0x7f0e5e262a47 in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
2023-06-01T13:49:24.8064993Z frame #11: c10d::Logger::set_runtime_stats_and_log() + 0x5ef (0x7f0e5e26764f in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
2023-06-01T13:49:24.8065719Z frame #12: <unknown function> + 0xc246ac (0x7f0e6950e6ac in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
2023-06-01T13:49:24.8066419Z frame #13: <unknown function> + 0x3f5b5d (0x7f0e68cdfb5d in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
2023-06-01T13:49:24.8066889Z frame #14: PyCFunction_Call + 0x52 (0x4f5652 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8067334Z frame #15: _PyObject_MakeTpCall + 0x3bb (0x4e0c8b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8067735Z frame #16: /opt/conda/envs/py_3.8/bin/python() [0x4f53fd]
2023-06-01T13:49:24.8068153Z frame #17: _PyEval_EvalFrameDefault + 0x49a9 (0x4dc999 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8068589Z frame #18: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8069047Z frame #19: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8069446Z frame #20: /opt/conda/envs/py_3.8/bin/python() [0x4f5234]
2023-06-01T13:49:24.8069812Z frame #21: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8070245Z frame #22: _PyEval_EvalFrameDefault + 0x1f7b (0x4d9f6b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8070700Z frame #23: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8071151Z frame #24: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8071529Z frame #25: /opt/conda/envs/py_3.8/bin/python() [0x4f5234]
2023-06-01T13:49:24.8071916Z frame #26: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8072361Z frame #27: _PyEval_EvalFrameDefault + 0x1f7b (0x4d9f6b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8072789Z frame #28: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8073235Z frame #29: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8073632Z frame #30: /opt/conda/envs/py_3.8/bin/python() [0x4f5234]
2023-06-01T13:49:24.8074021Z frame #31: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8074430Z frame #32: _PyEval_EvalFrameDefault + 0x1f7b (0x4d9f6b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8074884Z frame #33: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8075450Z frame #34: _PyObject_FastCallDict + 0x21b (0x4e03db in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8075865Z frame #35: _PyObject_Call_Prepend + 0x60 (0x4f1fd0 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8076270Z frame #36: /opt/conda/envs/py_3.8/bin/python() [0x5ab347]
2023-06-01T13:49:24.8076673Z frame #37: _PyObject_MakeTpCall + 0x3bb (0x4e0c8b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8077122Z frame #38: _PyEval_EvalFrameDefault + 0x4907 (0x4dc8f7 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8077547Z frame #39: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8077992Z frame #40: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8078424Z frame #41: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8078832Z frame #42: _PyEval_EvalFrameDefault + 0x1f7b (0x4d9f6b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8079302Z frame #43: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8079707Z frame #44: /opt/conda/envs/py_3.8/bin/python() [0x4f51bb]
2023-06-01T13:49:24.8080202Z frame #45: _PyEval_EvalFrameDefault + 0x399 (0x4d8389 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8080629Z frame #46: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8081034Z frame #47: /opt/conda/envs/py_3.8/bin/python() [0x4f51bb]
2023-06-01T13:49:24.8081439Z frame #48: _PyEval_EvalFrameDefault + 0x399 (0x4d8389 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8081854Z frame #49: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8082303Z frame #50: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8082745Z frame #51: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8083150Z frame #52: /opt/conda/envs/py_3.8/bin/python() [0x4f5234]
2023-06-01T13:49:24.8083514Z frame #53: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8083962Z frame #54: _PyEval_EvalFrameDefault + 0x1f7b (0x4d9f6b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8084412Z frame #55: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8084835Z frame #56: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8085287Z frame #57: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8085733Z frame #58: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8086182Z frame #59: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8086603Z frame #60: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8087135Z frame #61: _PyEval_EvalFrameDefault + 0x399 (0x4d8389 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8087589Z frame #62: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8088042Z frame #63: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8088257Z
2023-06-01T13:49:24.8088393Z SIGABRT(6), PID: 8353, Thread 8358:
2023-06-01T13:49:24.8089062Z frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f0e4e3befab in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8089728Z frame #1: <unknown function> + 0x14420 (0x7f0e70f76420 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8090203Z frame #2: ioctl + 0xb (0x7f0e70d2a3ab in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8090745Z frame #3: <unknown function> + 0xe4840 (0x7f0e0afd7840 in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8091467Z frame #4: <unknown function> + 0xde525 (0x7f0e0afd1525 in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8092045Z frame #5: <unknown function> + 0x71de5 (0x7f0e0af64de5 in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8092598Z frame #6: <unknown function> + 0x55756 (0x7f0e0af48756 in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8093171Z frame #7: <unknown function> + 0x657ca (0x7f0e0af587ca in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8093743Z frame #8: <unknown function> + 0x27f7b (0x7f0e0af1af7b in /opt/rocm-5.4.2/lib/libhsa-runtime64.so.1)
2023-06-01T13:49:24.8094306Z frame #9: <unknown function> + 0x8609 (0x7f0e70f6a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8094785Z frame #10: clone + 0x43 (0x7f0e70d35133 in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8095006Z
2023-06-01T13:49:24.8095140Z SIGABRT(6), PID: 8353, Thread 8361:
2023-06-01T13:49:24.8095803Z frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f0e4e3befab in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8096724Z frame #1: c10::FatalSignalHandler::fatalSignalHandler(int) + 0x152 (0x7f0e4e3bf4f2 in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8097355Z frame #2: <unknown function> + 0x14420 (0x7f0e70f76420 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8097870Z frame #3: gsignal + 0xcb (0x7f0e70c5900b in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8098349Z frame #4: abort + 0x12b (0x7f0e70c38859 in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8098749Z frame #5: <unknown function> + 0xb135a (0x7f0e3482735a in /opt/conda/envs/py_3.8/lib/libstdc++.so.6)
2023-06-01T13:49:24.8099216Z frame #6: <unknown function> + 0xb13c5 (0x7f0e348273c5 in /opt/conda/envs/py_3.8/lib/libstdc++.so.6)
2023-06-01T13:49:24.8099678Z frame #7: <unknown function> + 0xb134f (0x7f0e3482734f in /opt/conda/envs/py_3.8/lib/libstdc++.so.6)
2023-06-01T13:49:24.8100327Z frame #8: <unknown function> + 0x4930f0 (0x7f0e4e9000f0 in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libtorch_hip.so)
2023-06-01T13:49:24.8100811Z frame #9: <unknown function> + 0xdbbf4 (0x7f0e34851bf4 in /opt/conda/envs/py_3.8/lib/libstdc++.so.6)
2023-06-01T13:49:24.8101372Z frame #10: <unknown function> + 0x8609 (0x7f0e70f6a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8101878Z frame #11: clone + 0x43 (0x7f0e70d35133 in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8102098Z
2023-06-01T13:49:24.8102229Z SIGABRT(6), PID: 8353, Thread 8363:
2023-06-01T13:49:24.8102851Z frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f0e4e3befab in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8103509Z frame #1: <unknown function> + 0x14420 (0x7f0e70f76420 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8104021Z frame #2: __poll + 0x4f (0x7f0e70d2899f in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8104650Z frame #3: <unknown function> + 0x4996 (0x7f0e7089d996 in /opt/conda/envs/py_3.8/lib/python3.8/lib-dynload/select.cpython-38-x86_64-linux-gnu.so)
2023-06-01T13:49:24.8105093Z frame #4: /opt/conda/envs/py_3.8/bin/python() [0x4f5ab4]
2023-06-01T13:49:24.8105511Z frame #5: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8105971Z frame #6: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8106395Z frame #7: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8106845Z frame #8: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8107307Z frame #9: _PyEval_EvalCodeWithName + 0x2f1 (0x4d6fb1 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8107759Z frame #10: _PyFunction_Vectorcall + 0x19c (0x4e807c in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8108297Z frame #11: _PyEval_EvalFrameDefault + 0x49a9 (0x4dc999 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8108742Z frame #12: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8109177Z frame #13: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8109587Z frame #14: _PyEval_EvalFrameDefault + 0x1f7b (0x4d9f6b in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8110040Z frame #15: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8110490Z frame #16: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8110935Z frame #17: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8111364Z frame #18: _PyEval_EvalFrameDefault + 0x6b2 (0x4d86a2 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8111818Z frame #19: _PyFunction_Vectorcall + 0x106 (0x4e7fe6 in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8112220Z frame #20: /opt/conda/envs/py_3.8/bin/python() [0x4f5234]
2023-06-01T13:49:24.8112610Z frame #21: PyObject_Call + 0x24a (0x4f768a in /opt/conda/envs/py_3.8/bin/python)
2023-06-01T13:49:24.8113061Z frame #22: /opt/conda/envs/py_3.8/bin/python() [0x5bc827]
2023-06-01T13:49:24.8113449Z frame #23: /opt/conda/envs/py_3.8/bin/python() [0x5bc7d4]
2023-06-01T13:49:24.8113970Z frame #24: <unknown function> + 0x8609 (0x7f0e70f6a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8114450Z frame #25: clone + 0x43 (0x7f0e70d35133 in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8114670Z
2023-06-01T13:49:24.8114807Z SIGABRT(6), PID: 8353, Thread 8368:
2023-06-01T13:49:24.8115450Z frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f0e4e3befab in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8116118Z frame #1: <unknown function> + 0x14420 (0x7f0e70f76420 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8116594Z frame #2: __poll + 0x4f (0x7f0e70d2899f in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8117116Z frame #3: <unknown function> + 0x98e5619 (0x7f0e33c03619 in /opt/rocm-5.4.2/lib/librccl.so.1)
2023-06-01T13:49:24.8117662Z frame #4: <unknown function> + 0x8609 (0x7f0e70f6a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8118135Z frame #5: clone + 0x43 (0x7f0e70d35133 in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8118354Z
2023-06-01T13:49:24.8118610Z SIGABRT(6), PID: 8353, Thread 8369:
2023-06-01T13:49:24.8119258Z frame #0: c10::FatalSignalHandler::stacktraceSignalHandler(bool) + 0x8b (0x7f0e4e3befab in /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/lib/libc10.so)
2023-06-01T13:49:24.8119915Z frame #1: <unknown function> + 0x14420 (0x7f0e70f76420 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8120430Z frame #2: pthread_cond_wait + 0x216 (0x7f0e70f71376 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8120979Z frame #3: <unknown function> + 0x98e4164 (0x7f0e33c02164 in /opt/rocm-5.4.2/lib/librccl.so.1)
2023-06-01T13:49:24.8121524Z frame #4: <unknown function> + 0x8609 (0x7f0e70f6a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
2023-06-01T13:49:24.8122028Z frame #5: clone + 0x43 (0x7f0e70d35133 in /lib/x86_64-linux-gnu/libc.so.6)
2023-06-01T13:49:24.8122245Z
2023-06-01T13:49:24.8122662Z [2023-06-01 10:11:09,272] torch.testing._internal.common_distributed: [ERROR] Encountered error while trying to get traceback for process 1: [Errno 32] Broken pipe
```
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/distributed%2Ffsdp%2Ftest_fsdp_core.py%3A%3ATestParityWithDDP%3A%3Atest_delayed_optim_step_offload_true_shard_grad_op)).
| 2 |
2,436 | 102,741 |
Unlocking PyTorch's Power: README.md in Multiple Languages!
|
triaged, topic: new features, topic: docs
|
### ๐ The doc issue
We are a diverse group of students, each with different cultural backgrounds. As newcomers to the world of PyTorch, we realized the importance of having the README documentation available in our native languages.
### Suggest a potential alternative/fix
If you find our idea interesting and relevant, we would be happy to submit a pull request with the completed translations from our forked version of the project. We have already taken the initiative to translate the documentation into different languages, including German, Portuguese, Arabic, and Turkish, to ensure that beginners from diverse backgrounds can benefit from a localized experience.
By merging our translations into the main project, we can contribute to making PyTorch more accessible and user-friendly for a wider audience. We believe that providing comprehensive documentation in multiple languages will empower individuals to grasp the concepts more efficiently and foster a sense of inclusivity within the PyTorch community.
We look forward to your response and the opportunity to collaborate in making PyTorch a globally accessible resource for aspiring learners.
We encourage individuals from other nationalities to join us in this effort, extending the benefits to their own communities and facilitating the learning experience for everyone. Let's collaborate and contribute to a more inclusive and user-friendly PyTorch community.
| 3 |
2,437 | 102,740 |
parameterizations.orthogonal does not work as intended with nn.GRU or nn.LSTM
|
module: rnn, triaged, module: nn.utils.parametrize
|
### ๐ Describe the bug
This is a follow-up to [a post on the forum](https://discuss.pytorch.org/t/how-to-use-orthogonal-parameterization-with-gru/180487).
I am attempting to create a GRU that uses orthogonal parameterization for the weight matrices stored in `weight_hh_l0`. From the [GRU documentation](https://pytorch.org/docs/stable/generated/torch.nn.GRU.html), I know that `weight_hh_l0` contains three weight matrices `W_hr`, `W_hz`, and `W_hn` concatenated together. The problem that I'm facing is that because the GRU module stores the matrices in a concatenated format, applying the orthogonal parameterization via `torch.nn.utils.parametrizations.orthogonal` yields a matrix that is entirely 0 for the bottom two-thirds of the rows.
```
import torch
from torch import nn
from torch.nn.utils.parametrizations import orthogonal
model_dim = 2
n_batch = 4
max_time = 20
input_tensor = torch.randn((max_time, n_batch, model_dim))
net = nn.GRU(model_dim, model_dim, num_layers=1, batch_first=False)
net = orthogonal(net, name=f"weight_hh_l0")
net.forward(input_tensor)
for name, param in net.named_parameters():
    print(name)
    print(param)
```
Output:
```
parametrizations.weight_hh_l0.original
Parameter containing:
tensor([[-1.,  0.],
        [ 0., -1.],
        [ 0.,  0.],
        [ 0.,  0.],
        [ 0.,  0.],
        [ 0.,  0.]], requires_grad=True)
```
Clearly, I do not wish to constrain two-thirds of my weights to be 0s, as this defeats the purpose of using a GRU.
I have solved this by writing a [custom parameterization](https://discuss.pytorch.org/t/how-to-use-orthogonal-parameterization-with-gru/180487/8?u=user0). However, I feel that this is not a great solution.
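For reference, here is a minimal sketch of the kind of custom parametrization involved (my illustration, not the linked forum code): it orthogonalizes each gate's square `(hidden, hidden)` block of `weight_hh_l0` separately instead of the whole stacked matrix.
```python
import torch
from torch import nn
from torch.nn.utils import parametrize

class BlockwiseOrthogonal(nn.Module):
    """Orthogonalize each square block of a stacked RNN weight via QR."""
    def forward(self, w):
        h = w.shape[1]                                    # hidden size; w is (3*h, h) for a GRU
        blocks = [torch.linalg.qr(block).Q for block in w.split(h, dim=0)]
        return torch.cat(blocks, dim=0)

gru = nn.GRU(2, 2, num_layers=1)
parametrize.register_parametrization(gru, "weight_hh_l0", BlockwiseOrthogonal())
```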
In my opinion, the functionality that ships with PyTorch should be compatible with other functionality that ships with PyTorch. The exceptions to this principle should be rare & documented clearly in the relevant documentation pages. There are several alternative ways that this goal can be achieved. These are the possible solutions that I see to this problem:
1. `torch.nn.utils.parametrizations.orthogonal` raises an exception when applied to a non-square matrix. This is consistent with the textbook mathematical definition of orthogonal, which stipulates that orthogonal matrices are square matrices. The goal of this exception is to alert users when they are applying orthogonal outside of its intended context. It will also make it impossible to apply `torch.nn.utils.parametrizations.orthogonal` in an inappropriate context (e.g. GRU, LSTM), which is desirable because that application is obviously not an intended usage of this function. Right now, the result is a semantic bug, so this change will protect naive users from making an error; I suspect there are many users who have committed this mistake and are unaware of it. Instead, raising an exception will alert users that to use `torch.nn.utils.parametrizations.orthogonal`, either they will have to change their model or write custom parameterizations. I do not foresee any real cost to making this change, because the way that it is implemented now implies that non-square weight matrices are constrained to have blocks of 0s, so users are not gaining any appreciable value from `torch.nn.utils.parametrizations.orthogonal` admitting non-square matrices.
2. Revise the documentation to alert users that `torch.nn.utils.parametrizations.orthogonal` may result in undesirable behavior when applied to certain PyTorch modules, including `nn.GRU` and `nn.LSTM`. This will make an effort to inform users that they'll have to write their own code to achieve certain goals, at least.
3. Change the GRU (and LSTM) classes to store weight matrices separately, instead of in the concatenated format. This would allow users to apply `torch.nn.utils.parametrizations.orthogonal` to any or all of these matrices & the results will be orthogonal, so it will "just work." Presumably there's some reason why the matrices are concatenated, but in the event that users care about using _parameterizations_ of the weight matrices, the concatenation choice requires writing & maintaining custom code to split, transform, and concatenate these matrices. It also means that applying `torch.nn.utils.parametrizations.orthogonal` creates semantic bugs (per above).
4. Extend `torch.nn.utils.parametrizations.orthogonal` in some way so that it is _not naïve_ about the kinds of weights it is being applied to. This does not seem like a great choice, because it means the function needs to inspect and make a decision about how to parameterize weights, and that seems hard to develop & maintain. (Right now, the signature only "sees" the weights matrix, so it doesn't have context about the originating class of the weights.) It may also be hard to communicate to users how it works & why it works the way it does.
### Versions
```Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3.1 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 (main, Aug 11 2022, 13:49:25) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-13.3.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[conda] Could not collect
```
cc @zou3519
| 5 |
2,438 | 102,732 |
Building NCCL with `make -l $MAX_JOBS` slows down builds
|
module: build, triaged
|
We build NCCL with a cpu load limit (see https://github.com/pytorch/pytorch/blame/main/cmake/External/nccl.cmake#L37) to avoid OOMs, but this has some unfortunate consequences for devs using Meta-managed devservers. Mine has lots of RAM and CPU, but is very "noisy" with background processes (I currently have a 3% cpu utilization but a load average of 137). I think `make -l` inspects the latter number, and essentially decides to play it safe and build NCCL serially.
The end result? My e2e clean build time is 17m50s, but if I remove `-l` it drops to 6m20s. (This is with no particular build time optimizations either!)
I'd like to find a way to keep the OOM-avoidance properties of `-l` for users that need them, while not unnecessarily penalizing developers with big, noisy boxes (i.e. most Meta engineers :-p )
cc @malfet @seemethere @peterbell10 @huydhn since y'all have worked on this area in the past :-)
| 1 |
2,439 | 102,731 |
[FSDP] When amp is enabled, there is a noticeable difference during training between `FSDP` and `DDP`
|
triaged, module: fsdp
|
### ๐ Describe the bug
When enabling amp training with PyTorch native autocast, I noticed there seems to be an obvious difference between the DDP-based model and the FSDP-based model.
Here is a minimal example to reproduce the case:
```python
import copy
import os

import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast
from torch.distributed import init_process_group
from torch.distributed.fsdp.fully_sharded_data_parallel import \
    FullyShardedDataParallel as FSDP
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import AdamW


def build_optimizer(model: nn.Module, paramwise=True):
    base_lr = 1e-4
    if paramwise:
        param_groups = []
        for name, param in model.named_parameters():
            if name.endswith('weight'):
                param_groups.append({'params': [param], 'lr': base_lr * 0.1})
            else:
                param_groups.append({'params': [param], 'lr': base_lr})
        optimizer = AdamW(param_groups, lr=base_lr)
    else:
        optimizer = AdamW(model.parameters(), lr=base_lr)
    return optimizer


class ToyModel(nn.Module):

    def __init__(self, data_preprocessor=None):
        super().__init__()
        self.linear1 = nn.Linear(2, 2)
        self.norm = nn.BatchNorm1d(2)
        self.linear2 = nn.Linear(2, 1)

    def forward(self, inputs):
        if isinstance(inputs, list):
            inputs = torch.stack(inputs)
        outputs = self.linear1(inputs)
        outputs = self.norm(outputs)
        outputs = self.linear2(outputs)
        return outputs


if __name__ == '__main__':
    init_process_group()
    torch.cuda.set_device(int(os.getenv('LOCAL_RANK')))
    # model: nn.Module = MODELS.build(cfg.model)
    model1 = ToyModel().cuda()
    model2 = copy.deepcopy(model1)
    # train_dataloader = Runner.train_dataloader()
    device_id = torch.cuda.current_device()
    ddp_model = DDP(model1, device_ids=[device_id])
    fsdp_model = FSDP(model2, device_id=device_id, use_orig_params=True, sync_module_states=True)
    ddp_optim_wrapper = build_optimizer(ddp_model)
    fsdp_optim_wrapper = build_optimizer(fsdp_model)
    ddp_scaler = GradScaler()
    fsdp_scaler = GradScaler()
    with autocast():
        for step in range(10):
            data = torch.randn(2, 2).to(f'cuda:{device_id}')
            ddp_loss = ddp_model(data).sum()
            fsdp_loss = fsdp_model(data).sum()
            ddp_scaler.scale(ddp_loss).backward()
            fsdp_scaler.scale(fsdp_loss).backward()
            ddp_scaler.step(ddp_optim_wrapper)
            ddp_scaler.update()
            fsdp_scaler.step(fsdp_optim_wrapper)
            fsdp_scaler.update()
            ddp_optim_wrapper.zero_grad()
            fsdp_optim_wrapper.zero_grad()
            print(f'step: {step} rank: {device_id} ddp_loss: {ddp_loss}, fsdp_loss: {fsdp_loss}')
```
Run the script with:
```bash
torchrun --nproc-per-node 4 work_dirs/test.py
```
You'll find that ddp_loss is obviously different from fsdp_loss; however, this does not happen when training with fp32. In practice, this discrepancy means my model converges normally when using DDP but encounters NaN losses when using FSDP.
I've read the introduction of FSDP [here](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html?highlight=fsdp). I find the discrepancy/error to be unusual since the gathered parameters supplied to operations such as conv and linear in FSDP should be identical to those in DDP.
I've also tried to replace `GradScaler` with `ShardedGradScaler` for ddp_scaler, but the difference still exists.
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 8.5.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.17
Python version: 3.10.9 (main, Mar 8 2023, 10:47:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 2250.000
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.42
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.0
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] torch==2.0.0+cu118
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.1+cu118
[pip3] torchdistx==0.3.0.dev0+cu118
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.1+cu118
[conda] Could not collect
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 1 |
2,440 | 102,730 |
Best practices clarification for initialization strategies
|
module: nn, triaged, module: initialization, needs research, module: python frontend
|
### ๐ The doc issue
PyTorch provides a few initialization strategies. However, changing the default (Xavier), while made easier via the `nn.init` submodule, can result in code duplication. Is it worthwhile to introduce decorators to more easily define new layers / redefine existing layers?
```python
import torch.nn as nn
from typing import (TypeAlias, List, TypeVar, Callable)

TorchLayer: TypeAlias = nn.Module


def apply_init_to_layer(
    torch_layer: TorchLayer, init_func: Callable[[TorchLayer], TorchLayer],
    *args, **kwargs
):
    layer = torch_layer(*args, **kwargs)
    layer.apply(init_func)
    return layer


def apply_kaiming_init(layer: TorchLayer):
    for name, param in layer.named_parameters():
        if 'weight' in name:
            nn.init.kaiming_normal_(param.data)
        elif 'bias' in name:
            param.data.fill_(0)


def apply_xavier_init(layer: TorchLayer):
    for name, param in layer.named_parameters():
        if 'weight' in name:
            nn.init.xavier_normal_(param.data)
        elif 'bias' in name:
            param.data.fill_(0)


def kaiming_init(torch_layer: TorchLayer) -> Callable[..., TorchLayer]:
    def init_layer(*args, **kwargs) -> TorchLayer:
        layer = apply_init_to_layer(torch_layer, apply_kaiming_init, *args, **kwargs)
        return layer
    return init_layer
```
with existing layers:
```python
kLSTM = kaiming_init(nn.LSTM)
kGRU = kaiming_init(nn.GRU)
kLinear = kaiming_init(nn.Linear)
```
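Usage would then look like the following (a small sketch that assumes the definitions above are in scope):
```python
# The wrapped constructors accept the same arguments as the originals,
# but the returned modules come back Kaiming-initialized with zero biases.
layer = kLinear(16, 32)
rnn = kGRU(input_size=8, hidden_size=8, num_layers=1)
print(layer.bias.abs().max())  # the bias was filled with zeros
```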
### Suggest a potential alternative/fix
See above
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
2,441 | 102,727 |
DISABLED test_Conv2d_dilated_cuda_tf32 (__main__.TestNN)
|
module: nn, module: cuda, triaged, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_Conv2d_dilated_cuda_tf32&suite=TestNN) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_Conv2d_dilated_cuda_tf32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nn.py`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck
| 9 |
2,442 | 102,694 |
Exporting the operator 'aten::fused_moving_avg_obs_fake_quant' to ONNX opset version 13 is not supported
|
module: onnx, triaged
|
### ๐ The feature, motivation and pitch
May I know if there is a way to fix this or when it will be added? I'm trying to export a quantized model, but it always fails because ONNX opset version 13 doesn't support `fused_moving_avg_obs_fake_quant`.
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
2,443 | 102,673 |
Fix dynamo-related debug Python 3.11 failures
|
triaged, bug, release notes: dynamo
|
Multiple `PYTORCH_TEST_WITH_DYNAMO=1` tests raise asserts on debug Python 3.11 or otherwise fail. These should be fixed.
Script to run all tests: https://gist.github.com/williamwen42/c430d1a5667c270c209be51b24490d6e (Each individual test will stop running at the first abort/segfault. To continue, skip the failing test.)
Tests to fix:
- [ ] `python test/test_optim.py -k test_adagrad_sparse` (`Python/ceval.c:2431: _PyEval_EvalFrameDefault: Assertion 'EMPTY()' failed.`)
For reference, these tests also fail without `PYTORCH_TEST_WITH_DYNAMO=1` on 3.11:
- [ ] `python test/test_tensor_creation_ops.py -k test_non_writable_buffer_cpu_bool` (`Fatal Python error: _Py_CheckSlotResult: Slot getbuffer of type bytes succeeded with an exception set`) (debug 3.11 only) (also fails in debug 3.10)
- [ ] `python test/functorch/test_ops.py -k test_vmapvjpvjp_linalg_tensorsolve_cpu_float32` (`AssertionError: Tensor-likes are not close!`) (debug 3.11 only)
- [ ] Tests `test_bool_indices_cpu` and `test_bool_indices_cuda` in `python test/test_indexing.py` (`AssertionError: Scalars are not equal!`) (only occurs when the entire test suite is run -- running individually passes, fails locally on release 3.11, passes in CI) (https://github.com/pytorch/pytorch/issues/103355)
- [ ] `python test/test_ops_gradients.py -k test_fn_grad_linalg_det_singular_cpu_float64` (`torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,`) (debug 3.11 only)
- [ ] `python test/test_sparse_csr.py -k test_invalid_input_csr_large_cpu` (`AssertionError: "32-bit integer overflow in nnz" does not match "value cannot be converted to type int32 without overflow"`) (skipped in CI)
| 2 |
2,444 | 102,670 |
Investigate the perf drop on timm for dynamic shape when layout optimization is enabled
|
triaged, topic: performance, oncall: pt2, module: inductor
|
### ๐ The feature, motivation and pitch
We see a 14% perf win on timm models with layout optimization, but enabling dynamic shapes causes a 6% perf drop instead.
Theoretically we should also see wins with dynamic shape.
I've done some early investigation on gernet_l.
Here are the perf numbers:
static shape + layout opt: 1.474x
static shape without layout opt: 1.165x
dyn shape + layout opt: 0.912x
dyn shape without layout opt: 1.102x
Dive into the dynamic shape runs further.
For dyn shape + layout opt
- Abs latency: 80.241734 ms
- Fwd wrapper: https://gist.github.com/shunting314/147285c6ba55576ad528b6bc3c02aff7
- Fwd wrapper profiling result: https://gist.github.com/shunting314/dcb6d8a80739487a8a782f84a3c01976
- Fwd wrapper total wall time 28.636 ms
- Bwd wrapper: https://gist.github.com/shunting314/6218248272f9519d497a91403b676103
- Bwd wrapper profiling result: https://gist.github.com/shunting314/d7afef45fcfa7e5b7b776b61ed9fa7a2
- Bwd wrapper Total wall time 49.491 ms
For dyn shape without layout opt:
- Abs latency: 66.486952 ms
- Fwd wrapper: https://gist.github.com/shunting314/f0a0c038850a94283ae3a87975e75e2b
- Fwd wrapper profiling result: https://gist.github.com/shunting314/e649dc72df2c9a247952d91e60a8de10
- Fwd wrapper Total wall time 20.838 ms
- Bwd wrapper: https://gist.github.com/shunting314/5dbc4ae04c1b8c695d31651af6185509
- Bwd wrapper profiling result: https://gist.github.com/shunting314/41adfa37fd1cde3600f40ffebe4532fc
- Bwd wrapper Total wall time 44.032 ms
We can see that with layout opt we slow down by ~14 ms overall, while the fwd wrapper slows down by ~8 ms and the bwd wrapper slows down by ~5.5 ms. I'll dive into the fwd wrapper as the next step.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @jansel @Chillee @eellison
### Alternatives
_No response_
### Additional context
_No response_
| 3 |
2,445 | 102,667 |
Duplicate parameters (_flat_params and original params) in the state_dict when using `use_orig_params=True` and `StateDictType.LOCAL_STATE_DICT`
|
triaged, module: fsdp
|
### ๐ Describe the bug
When you set `use_orig_params=True` and `StateDictType.LOCAL_STATE_DICT` and try to get a model's state_dict using `with FSDP.state_dict_type(module=model, state_dict_type=StateDictType.LOCAL_STATE_DICT):`, the returned state dict contains BOTH the `_flat_params*`-keyed parameters and the parameters with their original names. This is an issue because it takes up unnecessary memory and causes failures if you set `strict=True` in `load_state_dict`.
Minimal Repro:
```python
import torch
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import StateDictType
import torch.distributed as dist
import os
rank, world_size = int(os.environ['RANK']), int(os.environ['WORLD_SIZE'])
print((f'Torch version is {torch.__version__}' if rank==0 else ''))
torch.cuda.set_device(f'cuda:{rank}')
dist.init_process_group(backend='nccl', world_size=world_size, rank=rank)
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(4, 4, bias=False),
)
unwrapped_model = MyModel().cuda()
wrapped_model = FSDP(
module=unwrapped_model,
use_orig_params=True
)
with FSDP.state_dict_type(module=wrapped_model, state_dict_type=StateDictType.LOCAL_STATE_DICT):
sd = wrapped_model.state_dict()
print(f'\nState Dict for Rank {rank}:')
for k,v in sd.items():
print('\t' + k + ': ', v.shape)
# This line errors out because there are extra parameters in the state_dict.
wrapped_model.load_state_dict(sd, strict=True)
```
I talked to @awgu over slack and he said `StateDictType.LOCAL_STATE_DICT` is not really recommended anymore, but I figured I would post this here for posterity.
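For reference, a minimal sketch of the sharded state dict path that is usually recommended instead, assuming the same `wrapped_model` as in the repro above (sketch only, not part of the original report):
```python
# SHARDED_STATE_DICT keys entries by the original parameter names and avoids
# the duplicate _flat_params entries described above.
with FSDP.state_dict_type(module=wrapped_model, state_dict_type=StateDictType.SHARDED_STATE_DICT):
    sharded_sd = wrapped_model.state_dict()
    wrapped_model.load_state_dict(sharded_sd)
```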
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 5 2023, 14:15:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7513 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1505.415
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5199.90
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.1
[pip3] torch-optimizer==0.3.0
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.3
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[pip3] vit-pytorch==0.35.8
[conda] Could not collect
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 0 |
2,446 | 102,663 |
Test test_vjp_nn_functional_scaled_dot_product_attention_cuda_float32 fails with `query: last dimension must be contiguous` on H100
|
triaged, oncall: transformer/mha, module: functorch
|
### ๐ Describe the bug
Test `TestOperatorsCUDA.test_vjp_nn_functional_scaled_dot_product_attention_cuda_float32` fails with `query: last dimension must be contiguous` on **H100**
Also the schema test is still failing https://github.com/pytorch/pytorch/issues/102029#issuecomment-1558278905
```python
root@afe28e73ccd4:/opt/pytorch/pytorch# TORCH_SHOW_CPP_STACKTRACES=1 python test/functorch/test_ops.py -v -k TestOperatorsCUDA.test_vjp_nn_functional_scaled_dot_product_attention_cuda_float32
test_vjp_nn_functional_scaled_dot_product_attention_cuda_float32 (__main__.TestOperatorsCUDA) ... /opt/pytorch/pytorch/torch/_functorch/deprecated.py:73: UserWarning: We've integrated functorch into PyTorch. As the final step of the integration, functorch.vjp is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use torch.func.vjp instead; see the PyTorch 2.0 release notes and/or the torch.func migration guide for more details https://pytorch.org/docs/master/func.migrating.html
warn_deprecated('vjp')
ERROR
======================================================================
ERROR: test_vjp_nn_functional_scaled_dot_product_attention_cuda_float32 (__main__.TestOperatorsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 2179, in wrapper
method(*args, **kwargs)
File "/opt/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 427, in instantiated_test
raise rte
File "/opt/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 414, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 1154, in wrapper
fn(*args, **kwargs)
File "/opt/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 904, in test_wrapper
return test(*args, **kwargs)
File "/opt/pytorch/pytorch/test/functorch/test_ops.py", line 670, in test_vjp
_test(op)
File "/opt/pytorch/pytorch/test/functorch/test_ops.py", line 660, in _test
out_noncontig, vjp_fn = vjp(noncontig_fn, *noncontig_primals)
File "/opt/pytorch/pytorch/torch/_functorch/deprecated.py", line 74, in vjp
return _impl.vjp(func, *primals, has_aux=has_aux)
File "/opt/pytorch/pytorch/torch/_functorch/eager_transforms.py", line 264, in vjp
return _vjp_with_argnums(func, *primals, has_aux=has_aux)
File "/opt/pytorch/pytorch/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/opt/pytorch/pytorch/torch/_functorch/eager_transforms.py", line 291, in _vjp_with_argnums
primals_out = func(*primals)
File "/opt/pytorch/pytorch/test/functorch/test_ops.py", line 128, in wrapped
result = f(*_args, **kwargs)
File "/opt/pytorch/pytorch/torch/testing/_internal/opinfo/core.py", line 1097, in __call__
return self.op(*args, **kwargs)
File "/opt/pytorch/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13237, in <lambda>
wrapper_set_seed(torch.nn.functional.scaled_dot_product_attention, *args, **kwargs),
File "/opt/pytorch/pytorch/torch/testing/_internal/common_methods_invocations.py", line 8906, in wrapper_set_seed
return op(*args, **kwargs)
RuntimeError: query: last dimension must be contiguous
Exception raised from _efficient_attention_forward at /opt/pytorch/pytorch/aten/src/ATen/native/transformers/cuda/attention.cu:862 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xae (0x7f91acbb014e in /opt/pytorch/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x7d (0x7f91acb65cdb in /opt/pytorch/pytorch/torch/lib/libc10.so)
frame #2: at::native::_efficient_attention_forward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<long>, double, long, bool, c10::optional<double>, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&) + 0x2b55 (0x7f9159594d35 in /opt/pytorch/pytorch/torch/lib/libtorch_cuda.so)
frame #3: <unknown function> + 0x2d24a70 (0x7f91597aaa70 in /opt/pytorch/pytorch/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x2d24ba7 (0x7f91597aaba7 in /opt/pytorch/pytorch/torch/lib/libtorch_cuda.so)
frame #5: at::_ops::_efficient_attention_forward::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<long>, double, long, bool, c10::optional<double>, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&) + 0x295 (0x7f917834be75 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #6: at::native::_scaled_dot_product_efficient_attention_cuda(at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, bool, c10::optional<double>) + 0x19b (0x7f915955ee3b in /opt/pytorch/pytorch/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0x2d2414a (0x7f91597aa14a in /opt/pytorch/pytorch/torch/lib/libtorch_cuda.so)
frame #8: <unknown function> + 0x2d241e1 (0x7f91597aa1e1 in /opt/pytorch/pytorch/torch/lib/libtorch_cuda.so)
frame #9: at::_ops::_scaled_dot_product_efficient_attention::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, bool, c10::optional<double>) + 0xd1 (0x7f917828c401 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x3af504f (0x7f9179ca204f in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #11: <unknown function> + 0x3af5cc9 (0x7f9179ca2cc9 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x11c2bb7 (0x7f917736fbb7 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x11c31e9 (0x7f91773701e9 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #14: at::functorch::GradInterpreterPtr::sendToNextInterpreterImpl(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*, bool) + 0x3b (0x7f917737093b in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x12cc42a (0x7f917747942a in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #16: <unknown function> + 0x12c7a8b (0x7f9177474a8b in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #17: at::_ops::_scaled_dot_product_efficient_attention::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, bool, c10::optional<double>) + 0x21f (0x7f917828c54f in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #18: <unknown function> + 0x3af504f (0x7f9179ca204f in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #19: <unknown function> + 0x3af5bc4 (0x7f9179ca2bc4 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #20: at::_ops::_scaled_dot_product_efficient_attention::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, bool, c10::optional<double>) + 0x1ae (0x7f917834b4be in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #21: at::native::scaled_dot_product_attention(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, double, bool, c10::optional<double>) + 0x116a (0x7f9177d3951a in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #22: <unknown function> + 0x2709fc9 (0x7f91788b6fc9 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #23: <unknown function> + 0x11c2bb7 (0x7f917736fbb7 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #24: <unknown function> + 0x11c3a5d (0x7f9177370a5d in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #25: <unknown function> + 0x12cc2a7 (0x7f91774792a7 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #26: <unknown function> + 0x12c58ac (0x7f91774728ac in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #27: at::_ops::scaled_dot_product_attention::call(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, double, bool, c10::optional<double>) + 0x487 (0x7f91781d4427 in /opt/pytorch/pytorch/torch/lib/libtorch_cpu.so)
frame #28: <unknown function> + 0x5da5a6 (0x7f91810fe5a6 in /opt/pytorch/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
----------------------------------------------------------------------
Ran 1 test in 0.631s
FAILED (errors=1)
```
### Versions
cuda 12.1
H100
torch_commit: https://github.com/pytorch/pytorch/commit/7c2641d5f1081811e664267406df1059687aad7a
test failures started on 5/19/23
see also #102029
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 @Fuzzkatt @ptrblck @ngimel
| 2 |
2,447 | 102,653 |
torchscript dataclasses have bad support for class types as fields
|
oncall: jit
|
### ๐ Describe the bug
In `_dataclass_impls.py`, there's a step where the `__init__` function signature is collected and then serialized as a string. During that process, it seems like the serialized names are not always readable by PyTorch. In the example below, the type for `MyClass` gets serialized as `__main__.MyClass`. I'm assuming that the failure happens later on when Torchscript's RCB isn't able to find `__main__.MyClass`.
A fix for this might be to find a different way to annotate the signature when it is serialized, e.g. by using the annotation types found on the dataclass fields themselves.
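A minimal sketch of that alternative, using a hypothetical helper (not the actual `_dataclass_impls.py` code); the original repro follows below.
```python
import dataclasses

def annotations_from_fields(cls):
    # Hypothetical helper: read annotations straight off the dataclass fields
    # instead of stringifying inspect.signature(cls.__init__), so class-typed
    # fields keep a name the TorchScript resolution callback can look up.
    return {f.name: f.type for f in dataclasses.fields(cls)}
```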
```python
from dataclasses import dataclass
from typing import Dict
import torch
class MyClass:
x: int
y: float
def __init__(self, x: int, y: float):
self.x = x
self.y = y
def eval(self) -> float:
return self.y**self.x
@torch.jit.script
@dataclass
class Batch:
float_features: Dict[int, torch.Tensor]
mc: MyClass
def __eq__(self, other: "Batch") -> bool:
for k in self.float_features:
if k in other.float_features:
if not torch.equal(self.float_features[k], other.float_features[k]):
return False
else:
return False
return self.mc == other.mc
class ASD(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, batch: Batch) -> torch.Tensor:
return batch.float_features
model = ASD()
torch.jit.script(model)
```
### Versions
pytorch 2.0 / main branch as of submission
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,448 | 102,631 |
Error when exporting to onnx for albert-base-v2, issue with attention_mask
|
module: onnx, triaged
|
### ๐ Describe the bug
When attempting to export the pretrained model below (which uses [albert-base-v2](https://huggingface.co/albert-base-v2)) to ONNX, I get an error about not being able to unsqueeze the attention mask, even though it is clearly a list AND the original model produces outputs as expected.
Code:
```
input_ids = [2, 184, 20, 745, 2667, 11318, 18, 37, 4335, 4914,3]
token_type_ids = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
attention_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
input_ids_tensor = torch.tensor([input_ids])
token_type_ids_tensor = torch.tensor([token_type_ids])
attention_mask_tensor = torch.tensor([attention_mask])
cur_state_dict = torch.load("./albert_model/checkpoint-69203/pytorch_model.bin", map_location="cpu")
model = AutoModelForSequenceClassification.from_pretrained("./albert_model/checkpoint-69203")
model.load_state_dict(cur_state_dict, strict=False)
model.eval()
with torch.no_grad():
output = model(input_ids_tensor, token_type_ids_tensor, attention_mask_tensor)
print("output: ", output)
torch.onnx.export(model,
(input_ids_tensor, token_type_ids, attention_mask),
"new_onnx_model",
export_params=True,
opset_version=16,
do_constant_folding=True,
input_names = ["input_ids", "token_type_ids", "attention_mask"],
output_names = ["logits", "scores"])
```
The error message from `torch.onnx.export` doesn't make sense because the `attention_mask` should be a list:
<img width="598" alt="image" src="https://github.com/pytorch/pytorch/assets/135020113/54ad99f3-b711-490f-bad1-a9cdc4619195">
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.1.105
CUDA_MODULE_LOADING set to: LAZY
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-nlp==0.5.0
[pip3] pytorch-pretrained-bert==0.6.2
[pip3] torch==1.13.1+cu116
[pip3] torchaudio==0.13.1+cu116
[pip3] torchvision==0.14.1+cu116
[conda] Could not collect
| 1 |
2,449 | 102,625 |
[inductor] Memory planning
|
topic: not user facing, module: inductor, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #102625
* #111402
* #111117
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @anijain2305
| 3 |
2,450 | 102,622 |
Extremely slow will_fusion_create_cycle on nanogpt_generate
|
triaged, bug, oncall: pt2, module: inductor
|
### ๐ Describe the bug
nanogpt_generate in the benchmark suite compiles extremely slowly. Looking at py-spy top, it looks like we spend a lot of time in will_fusion_create_cycle. I noticed the model graph has 40k nodes, which probably triggers some bad asymptotics.
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225
| 5 |
2,451 | 102,610 |
Cannot invoke prims.sum with output_dtype
|
triaged, module: primTorch
|
### ๐ Describe the bug
Running
```python
import torch
import traceback
import sys
a = torch.zeros([3])
# Fine
torch.ops.prims.sum(a, dims=[0])
print("Trying with output_dtype keyword arg", file=sys.stderr)
"""
Prints
TypeError: sum() received an invalid combination of arguments - got (Tensor, list, output_dtype=torch.dtype), but expected one of:
* (Tensor input, *, torch.dtype dtype)
didn't match because some of the keywords were incorrect: output_dtype
* (Tensor input, tuple of ints dim, bool keepdim, *, torch.dtype dtype, Tensor out)
* (Tensor input, tuple of names dim, bool keepdim, *, torch.dtype dtype, Tensor out)
"""
try:
torch.ops.prims.sum(a, dims=[0], output_dtype=torch.float64)
except:
traceback.print_exc()
pass
print("Trying with dtype keyword", file=sys.stderr)
# RuntimeError: Unknown keyword argument 'dtype' for operator 'prims::sum'. Schema: prims::sum(Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor
try:
torch.ops.prims.sum(a, dims=[0], dtype=torch.float64)
except:
traceback.print_exc()
pass
print("Trying with positional argument", file=sys.stderr)
# RuntimeError: prims::sum() takes 2 positional argument(s) but 3 was/were given. Declaration: prims::sum(Tensor inp, int[]? dims, *, ScalarType? output_dtype=None) -> Tensor
try:
torch.ops.prims.sum(a, [0], torch.float64)
except:
traceback.print_exc()
pass
```
fails for every way of providing an output dtype to the prims.sum operation.
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230526+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 15.0.7
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-166-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD EPYC Processor
CPU family: 23
Model: 1
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
Stepping: 2
BogoMIPS: 5988.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 virt_ssbd arat
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 16 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0.dev20230526+cpu
[pip3] torchvision==0.16.0.dev20230526+cpu
[conda] Could not collect
cc @ezyang @mruberry @ngimel @Lezcano @peterbell10
| 0 |
2,452 | 102,609 |
[prims] torch.ops.aten.le decomposition confuses scalars and tensors
|
triaged, module: primTorch
|
### ๐ Describe the bug
Running
```python
from torch import nn
import torch
from torch._decomp import get_decompositions
from torch.fx.experimental.proxy_tensor import make_fx
from torch._functorch.compile_utils import strip_overloads
class Model(nn.Module):
def forward(self, x):
return torch.ops.aten.le(x, 0)
model = Model()
arg = torch.ones([4])
model(arg)
print("JIT script on original model")
torch.jit.script(model)
fx_g = make_fx(
model,
decomposition_table=get_decompositions([torch.ops.aten.le]))(arg)
# shows: le = torch.ops.prims.le.default(arg0_1, 0.0);
print(fx_g.code)
fx_g(arg)
strip_overloads(fx_g)
print("JIT script on decomposed fx graph:")
torch.jit.script(fx_g) # ERROR is produced here
```
shows the error
```
JIT script on original model
def forward(self, arg0_1):
le = torch.ops.prims.le.default(arg0_1, 0.0); arg0_1 = None
return le
JIT script on decomposed fx graph:
.venv-3.10/lib/python3.10/site-packages/torch/jit/_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn("The TorchScript type system doesn't support "
Traceback (most recent call last):
File "repro_decomp_le.py", line 41, in <module>
torch.jit.script(fx_g)
File ".venv-3.10/lib/python3.10/site-packages/torch/jit/_script.py", line 1284, in script
return torch.jit._recursive.create_script_module(
File ".venv-3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File ".venv-3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File ".venv-3.10/lib/python3.10/site-packages/torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
prims::le(Tensor self, Tensor other) -> Tensor:
Expected a value of type 'Tensor' for argument 'other' but instead found type 'float'.
:
File "<eval_with_key>.1", line 5
def forward(self, arg0_1):
le = torch.ops.prims.le(arg0_1, 0.0); arg0_1 = None
~~~~~~~~~~~~~~~~~~ <--- HERE
return le
```
### Versions
PyTorch version: 2.1.0.dev20230526+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 15.0.7
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-166-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD EPYC Processor
CPU family: 23
Model: 1
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
Stepping: 2
BogoMIPS: 5988.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 virt_ssbd arat
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 16 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0.dev20230526+cpu
[pip3] torchvision==0.16.0.dev20230526+cpu
[conda] Could not collect
cc @ezyang @mruberry @ngimel @Lezcano @peterbell10
| 4 |
2,453 | 102,604 |
Fix sparse windows
|
module: sparse, module: windows, module: mkl, open source, ciflow/binaries, release notes: releng, ciflow/periodic
|
Fix https://github.com/pytorch/pytorch/issues/97352.
This PR changes the way linking to Intel MKL is done and updates MKL on Windows to mkl-2021.4.0.
For both conda and pip there are MKL packages against which you can link dynamically: mkl-devel contains the static versions of the libraries and MKL contains the DLLs needed at runtime. MKL DLLs and static libs starting with 2021.4.0 have the version in their names (for MKL 2023 we have mkl_core.2.dll and for 2021.4.0 we have mkl_core.1.dll), so it is possible to have multiple versions installed and they will work properly.
For the wheel build I added a dependency on the wheel MKL package, for conda a dependency on the conda MKL package, and for libtorch I copied the MKL binaries into libtorch.
In order to test this PR I had to use a custom builder: https://github.com/pytorch/builder/pull/1467
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 2 |
2,454 | 102,600 |
Support for activation checkpoint on demand in custom function
|
module: checkpoint, triaged
|
### ๐ The feature, motivation and pitch
The pack hook always runs after the forward pass of a custom function, so ``Rule 5. Stop recomputation as soon as we've recomputed the saved tensors we know we need.`` does not work if we only need to save the inputs for the backward pass.
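A minimal sketch of the scenario, assuming a toy custom function that only saves its input (hypothetical example, not from the original report):
```python
import torch
from torch.utils.checkpoint import checkpoint

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # only the input is saved for backward
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(4, requires_grad=True)
# Under non-reentrant checkpointing, the pack hook only sees the saved input
# after Square's forward has finished, so recomputation cannot stop earlier.
y = checkpoint(lambda t: Square.apply(t).sum(), x, use_reentrant=False)
y.backward()
```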
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
2,455 | 102,596 |
Jetson NX with torch 1.12.0 :cannot import name 'ProcessGroup' from 'torch.distributed'.
|
oncall: distributed
|
### ๐ Describe the bug
`from torch.distributed import ProcessGroup`
error: cannot import name 'ProcessGroup' from 'torch.distributed'.

### Versions
Device: jetson NX,
jetpack:5.1.1
torch: 1.12.0
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 7 |
2,456 | 102,591 |
Dynamo should feed optimized upstream graph's output to downstream graph for DDP
|
oncall: distributed
|
### ๐ Describe the bug
When a model is wrapped with DDP, dynamo will introduce graph breaks. Say we have g1 and g2 due to graph breaks. Right now dynamo passes the original g1's output to g2 for compiling, but ideally we should use the optimized g1's output to compile g2.
BTW, normal graph breaks (those not caused by DDP) do pass the optimized upstream graph's output to the downstream graph for compiling.
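For terminology, a toy (non-DDP) sketch just to make the g1/g2 naming concrete; the explicit `graph_break()` call is only for illustration:
```python
import torch

def f(x):
    y = x.sin()                  # compiled into graph g1
    torch._dynamo.graph_break()  # forces a graph break
    return y.cos()               # compiled into graph g2

compiled = torch.compile(f)
out = compiled(torch.randn(8))
```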
To repro,
1. patch https://github.com/pytorch/pytorch/pull/102376 or wait for it to be merged
2. run ```TORCHINDUCTOR_KEEP_OUTPUT_STRIDE=0 python test/inductor/test_layout_optim.py -k test_2conv_with_graph_break```. BTW, disabling `TORCHINDUCTOR_KEEP_OUTPUT_STRIDE` is needed to reveal the problem.
### Versions
https://gist.github.com/shunting314/59cfc853a417952ebdc4941bfd3c41c3
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 0 |
2,457 | 102,559 |
Mergebot should merge non-stacked PR
|
triaged, enhancement, module: devx
|
### ๐ Describe the bug
Otherwise dependabot gets confused when its PRs are committed into trunk.
### Versions
NA
cc @ZainRizvi @kit1980 @huydhn @clee2000
| 1 |
2,458 | 102,556 |
test_functional_autograd_benchmark.py::TestFunctionalAutogradBenchmark::test_fast_tasks passes with all NaNs
|
module: autograd, triaged, module: functorch
|
The test seems to compare results with and without functorch, but everything is printed as NaNs:
Results for model resnet18 on task vjp: nans (var: nan)
Results for model resnet18 on task vjp using Functorch: nans (var: nan)
See https://github.com/pytorch/pytorch/actions/runs/5123048143/jobs/9213382637 for example logs.
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
2,459 | 102,534 |
[RFC] Add third-party malloc library to improve pytorch memory performance on Windows
|
module: performance, module: windows, module: cpu, triaged, intel
|
### ๐ The feature, motivation and pitch
This doc is requesting comments on adding a third-party malloc library to improve PyTorch memory performance on Windows.
While debugging the issue https://github.com/pytorch/pytorch/issues/62387, we figured out that the major performance gap between Windows and Linux is that Windows has poor memory allocation performance.
I also wrote a simple malloc benchmark project, [bench_malloc](https://github.com/xuhancn/bench_malloc), which shows that a third-party malloc (tc_malloc) can improve memory allocation performance on Windows.

After that, I evaluated some popular third-party malloc libraries and made a brief summary here:

From the summary, we can only select two candidate libraries:
# Option 1: tc_malloc from gperftools.
1. It has the best performance, improving the benchmark from the original 11.1s to 2.9s.
2. It needs some code changes upstreamed to gperftools: https://github.com/gperftools/gperftools/pull/1396. I already submitted them, but have not received a response so far.
> Note: I found the gperftools repo has been inactive for more than one year; the latest commit is https://github.com/gperftools/gperftools/commit/bf8b714bf5075d0a6f2f28504b43095e2b1e11c5 on May 31, 2022
# Option 2: mimalloc
1. It improves performance from the original 11.1s to 3.9s. It is not as good as tc_malloc, but still a huge improvement.
2. It has better compatibility with PyTorch and can be integrated into PyTorch directly.
3. I have a PR for this option: https://github.com/pytorch/pytorch/pull/102595
### Alternatives
# Option 3: Implement a caching memory allocator for CPU in PyTorch.
1. It has no additional dependency on a third-party library.
2. It needs a lot of effort to develop and test.
3. Its principle is similar to tc_malloc and mimalloc; optimizing an existing third-party library is better.
### Additional context
---
My proposal
1. I'm not sure whether the PR for gperftools can be accepted. We can't wait on option 1 indefinitely.
2. We can enable option 2 (mimalloc) first to optimize PyTorch on Windows.
3. Design a build option to switch the malloc library, such as system malloc, mimalloc, and (later) tc_malloc.
4. Maybe we can select option 2 and enable mimalloc, and then optimize mimalloc for PyTorch.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ngimel
| 4 |
2,460 | 102,533 |
Segfault when running vulkan program linked against libtorch
|
triaged, module: vulkan, ciflow/periodic
|
### ๐ Describe the bug
Creating a `VulkanInstance` in a program linked to `libtorch` gives a segmentation fault at program exit.
Here is minimal C++ source code to reproduce.
```cpp
#include <vulkan/vulkan.h>
int main()
{
const uint32_t validationLayerCount = 1;
const char * validationLayers[] = {
"VK_LAYER_KHRONOS_validation"
};
VkApplicationInfo appInfo = {
.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.apiVersion = VK_API_VERSION_1_0
};
VkInstanceCreateInfo instanceCreateInfo = {
.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pApplicationInfo = &appInfo,
.enabledLayerCount = validationLayerCount,
.ppEnabledLayerNames = validationLayers
};
VkInstance instance;
if (vkCreateInstance(&instanceCreateInfo, NULL, &instance) != VK_SUCCESS) {
return 1;
}
vkDestroyInstance(instance, NULL);
}
```
and CMakeList.txt
```cmake
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(main)
find_package(Torch REQUIRED)
find_package(Vulkan REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(main main.cpp)
target_link_libraries(main ${TORCH_LIBRARIES} ${Vulkan_LIBRARIES})
set_property(TARGET main PROPERTY CXX_STANDARD 14)
```
I compile the program with `cmake -DCMAKE_BUILD_TYPE=Debug -S . -B build -DCMAKE_PREFIX_PATH=${LIBTORCH_PATH} && cmake --build build` where `LIBTORCH_PATH` is the path to libtorch downloaded from `https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcpu.zip` (CXX-11 ABI for CPU only).
I execute the program by running `gdb build/main`. The program quickly segfaults, see stacktrace below (`bt` in gdb session)
```
#0 0x00007fffe02a547e in __GI___libc_free (mem=0x1) at ./malloc/malloc.c:3368
#1 0x00007fffe5f02cbf in llvm::cl::Option::~Option() () from /home/cdeln/lib/libtorch-cpu/lib/libtorch_cpu.so
#2 0x00007fffe0245a56 in __cxa_finalize (d=0x7ffff74ca000) at ./stdlib/cxa_finalize.c:83
#3 0x00007fffe18eb433 in __do_global_dtors_aux () from /home/cdeln/lib/libtorch-cpu/lib/libtorch_cpu.so
#4 0x00007fffffffdc20 in ?? ()
#5 0x00007ffff7fc924e in _dl_fini () at ./elf/dl-fini.c:142
Backtrace stopped: frame did not save the PC
```
The issue is more or less identical to [this libtorch + sdl bug](https://github.com/pytorch/pytorch/issues/71283).
### Versions
Here is output from `collect_env.py`
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.6 (main, Apr 26 2023, 09:09:46) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-1235U
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 4
CPU max MHz: 4400.0000
CPU min MHz: 400.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
L1d cache: 352 KiB (10 instances)
L1i cache: 576 KiB (10 instances)
L2 cache: 6.5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fft-conv-pytorch==1.1.3
[pip3] numpy==1.24.3
[pip3] pytorch3d==0.7.4
[pip3] torch==2.0.0
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect
```
| 0 |
2,461 | 102,524 |
[MPS] Fix MPS sorting issue with strided view tensors
|
triaged, open source, release notes: mps, ciflow/mps
|
Fixes #101878
The issue with the `sort_stable_out_mps` function is that it does not handle strided views of the input tensor correctly. Specifically, if the input tensor is a strided view, the function will produce incorrect results.
This fix ensures that the input tensor is contiguous before passing it to the MPS calculation. A test case is also added. Let me know if this is a proper solution.
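Conceptually (a sketch only, not the actual ATen change), the behavior being fixed is sorting a strided view, e.g.:
```python
import torch

# Runs on CPU as an illustration: a transposed (strided) view is forced
# contiguous before sorting, which is what the MPS path now does internally.
x = torch.arange(12.0).reshape(3, 4).t()       # strided view
values, indices = x.contiguous().sort(dim=0)   # sort a contiguous copy
```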
| 4 |
2,462 | 102,517 |
[feature request] PyTorch support for sub-interpreters with PEP 684 accepted and release in Python 3.12
|
triaged, module: python frontend
|
### ๐ The feature, motivation and pitch
It appears that sub-interpreter support for Python in PEP 684 https://discuss.python.org/t/pep-684-a-per-interpreter-gil/ is advancing and will be merged soon (https://github.com/python/cpython/blob/main/Misc/NEWS.d/3.12.0b1.rst, https://github.com/python/cpython/pull/104210, https://github.com/ericsnowcurrently/multi-core-python/wiki/0-The-Plan)
Any updates on eventual support of sub-interpreters in PyTorch?
Also, a question would be about zero-copy in-RAM IPC data exchange: maybe via named pipes? mmap? shm?
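For comparison, one zero-copy mechanism that exists today across processes (though not across sub-interpreters) is shared-memory tensors; a small sketch:
```python
import torch

t = torch.randn(1024)
t.share_memory_()  # moves the storage into shared memory
# The tensor can now be handed to another process (e.g. via torch.multiprocessing)
# without copying the underlying buffer.
```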
I found older issues asking the same:
- https://github.com/pytorch/pytorch/issues/23490
- https://github.com/pytorch/pytorch/issues/10950
This might be important for working in hosted multi-threaded environments where every thread runs its own Python / PyTorch code. This is also related to the current torch.compile / export / mini-interpreters story: https://discuss.pytorch.org/t/torch-compiles-deployment-story-to-non-python-host-processes/180943. This scenario was explicitly supported with TorchScript / TorchJit, but seems not yet developed for torch.compile, even though it's an important use case too (similar to the APIs OnnxRuntime currently offers).
cc @albanD @colesbury
### Alternatives
_No response_
### Additional context
_No response_
| 7 |
2,463 | 102,515 |
pytorch java api documentation is not clear and does not cover example
|
module: docs, triaged, oncall: mobile, oncall: java
|
### ๐ The doc issue
The PyTorch Java API documentation is unclear and does not include examples, which makes it hard to understand and leaves the documentation incomplete.
### Suggest a potential alternative/fix
- more examples could be included
- how-tos in Java
- more descriptions
cc @svekars @carljparker
| 0 |
2,464 | 102,511 |
Faster BatchSampler with big batch size
|
module: dataloader, triaged
|
### ๐ The feature, motivation and pitch
In issue #76950, didicout made some improvements to `BatchSampler` in `torch.utils.data.sampler`, which greatly boost the speed when `drop_last=True`. I was interested in the code (as follows) from ejguan, which tried to speed up the `drop_last=False` case but failed. I made some improvements to make it effective.
```python3
def special_next(it):
try:
return next(it)
except StopIteration:
return PlaceHolderObj()
sampler_iter = iter(self.sampler)
if drop_last:
...
else:
while True:
batch = [special_next(sampler_iter) for _ in range(self.batch_size)]
if not isinstance(batch[-1], PlaceHolderObj):
yield batch
else:
idx = 0
for d in batch:
if isinstance(d, PlaceHolderObj):
break
idx += 1
if idx > 0:
yield batch[0: idx]
break
```
@ejguan
### Alternatives
Here is my improved code for the `drop_last=False` case:
```python3
def __iter__(self) -> Iterator[List[int]]:
# Implemented based on the benchmarking in https://github.com/pytorch/pytorch/pull/76951
if self.drop_last:
sampler_iter = iter(self.sampler)
while True:
try:
batch = [next(sampler_iter) for _ in range(self.batch_size)]
yield batch
except StopIteration:
break
else:
batch = [PlaceHolderObj()] * self.batch_size
sampler_iter = iter(self.sampler)
try:
while True:
for i in range(self.batch_size):
batch[i] = next(sampler_iter)
yield batch
batch = [PlaceHolderObj()] * self.batch_size
except StopIteration:
idx = 0
for value in batch:
if isinstance(value, PlaceHolderObj):
break
else:
idx += 1
if idx > 0:
yield batch[:idx]
```
### Additional context
My test env and setting:
```
os: Ubuntu 22.04
cpu: Intelยฎ Coreโข i7 13700k
python: 3.8.16 and 3.10.0
pytorch: 2.0.1
```
Test result of python 3.8.16:
```
batch_size drop_last current(avg) current(std) new(avg) new(std) speedup(new vs current)
------------ ----------- -------------- -------------- ---------- ---------- -------------------------
4 False 0.004253 0.0001049 0.005348 6.951e-05 -20.46%
8 False 0.003916 8.08e-05 0.003916 6.295e-05 0.02%
16 False 0.003652 6.122e-05 0.003019 3.849e-05 20.95%
32 False 0.00345 4.221e-05 0.002511 4.816e-05 37.43%
64 False 0.003246 4.454e-05 0.002201 3.64e-05 47.52%
128 False 0.003213 5.322e-05 0.002088 3.325e-05 53.89%
640 False 0.003319 4.307e-05 0.002169 2.648e-05 53.01%
6400 False 0.00341 4.636e-05 0.002392 6.074e-05 42.53%
```
Test result of python 3.10.0:
```
batch_size drop_last current(avg) current(std) new(avg) new(std) speedup(new vs current)
------------ ----------- -------------- -------------- ---------- ---------- -------------------------
4 False 0.004014 0.0001283 0.004929 5.085e-05 -18.57%
8 False 0.003872 0.0001085 0.003761 5.127e-05 2.95%
16 False 0.003491 6.523e-05 0.002881 3.742e-05 21.20%
32 False 0.003201 6.749e-05 0.002447 4.072e-05 30.78%
64 False 0.00333 8.508e-05 0.002371 5.883e-05 40.45%
128 False 0.003085 8.315e-05 0.002196 3.641e-05 40.48%
640 False 0.003074 6.132e-05 0.002207 3.811e-05 39.26%
6400 False 0.003101 5.371e-05 0.002343 2.617e-05 32.34%
```
The following testing code was adapted from didicout:
```python3
# coding: utf-8
# Copyright (c) Antfin, Inc. All rights reserved.
from typing import List, Union, Iterable, Iterator
from torch.utils.data import SequentialSampler, Sampler
class OldBatchSampler(Sampler[List[int]]):
r"""Wraps another sampler to yield a mini-batch of indices.
Args:
sampler (Sampler or Iterable): Base sampler. Can be any iterable object
batch_size (int): Size of mini-batch.
drop_last (bool): If ``True``, the sampler will drop the last batch if
its size would be less than ``batch_size``
Example:
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True))
[[0, 1, 2], [3, 4, 5], [6, 7, 8]]
"""
def __init__(self, sampler: Union[Sampler[int], Iterable[int]], batch_size: int, drop_last: bool) -> None:
# Since collections.abc.Iterable does not check for `__getitem__`, which
# is one way for an object to be an iterable, we don't do an `isinstance`
# check here.
if not isinstance(batch_size, int) or isinstance(batch_size, bool) or \
batch_size <= 0:
raise ValueError("batch_size should be a positive integer value, "
"but got batch_size={}".format(batch_size))
if not isinstance(drop_last, bool):
raise ValueError("drop_last should be a boolean value, but got "
"drop_last={}".format(drop_last))
self.sampler = sampler
self.batch_size = batch_size
self.drop_last = drop_last
def __iter__(self) -> Iterator[List[int]]:
# Implemented based on the benchmarking in https://github.com/pytorch/pytorch/pull/76951
if self.drop_last:
sampler_iter = iter(self.sampler)
while True:
try:
batch = [next(sampler_iter) for _ in range(self.batch_size)]
yield batch
except StopIteration:
break
else:
batch = [0] * self.batch_size
idx_in_batch = 0
for idx in self.sampler:
batch[idx_in_batch] = idx
idx_in_batch += 1
if idx_in_batch == self.batch_size:
yield batch
idx_in_batch = 0
batch = [0] * self.batch_size
if idx_in_batch > 0:
yield batch[:idx_in_batch]
def __len__(self) -> int:
# Can only be called if self.sampler has __len__ implemented
# We cannot enforce this condition, so we turn off typechecking for the
# implementation below.
# Somewhat related: see NOTE [ Lack of Default `__len__` in Python Abstract Base Classes ]
if self.drop_last:
return len(self.sampler) // self.batch_size # type: ignore[arg-type]
else:
return (len(self.sampler) + self.batch_size - 1) // self.batch_size # type: ignore[arg-type]
class PlaceHolderObj:
pass
class NewBatchSampler(Sampler[List[int]]):
def __init__(self, sampler: Union[Sampler[int], Iterable[int]], batch_size: int, drop_last: bool) -> None:
# Since collections.abc.Iterable does not check for `__getitem__`, which
# is one way for an object to be an iterable, we don't do an `isinstance`
# check here.
if not isinstance(batch_size, int) or isinstance(batch_size, bool) or \
batch_size <= 0:
raise ValueError("batch_size should be a positive integer value, "
"but got batch_size={}".format(batch_size))
if not isinstance(drop_last, bool):
raise ValueError("drop_last should be a boolean value, but got "
"drop_last={}".format(drop_last))
self.sampler = sampler
self.batch_size = batch_size
self.drop_last = drop_last
def __iter__(self) -> Iterator[List[int]]:
# Implemented based on the benchmarking in https://github.com/pytorch/pytorch/pull/76951
if self.drop_last:
sampler_iter = iter(self.sampler)
while True:
try:
batch = [next(sampler_iter) for _ in range(self.batch_size)]
yield batch
except StopIteration:
break
else:
batch = [PlaceHolderObj()] * self.batch_size
sampler_iter = iter(self.sampler)
try:
while True:
for i in range(self.batch_size):
batch[i] = next(sampler_iter)
yield batch
batch = [PlaceHolderObj()] * self.batch_size
except StopIteration:
idx = 0
for value in batch:
if isinstance(value, PlaceHolderObj):
break
else:
idx += 1
if idx > 0:
yield batch[:idx]
def __len__(self) -> int:
# Can only be called if self.sampler has __len__ implemented
# We cannot enforce this condition, so we turn off typechecking for the
# implementation below.
# Somewhat related: see NOTE [ Lack of Default `__len__` in Python Abstract Base Classes ]
if self.drop_last:
return len(self.sampler) // self.batch_size # type: ignore[arg-type]
else:
return (len(self.sampler) + self.batch_size - 1) // self.batch_size # type: ignore[arg-type]
def _iter_on_current_sampler(batch_size, drop_last):
for _ in OldBatchSampler(SequentialSampler(range(DATA_SIZE)), batch_size=batch_size, drop_last=drop_last):
pass
def _iter_on_new_sampler(batch_size, drop_last):
for _ in NewBatchSampler(SequentialSampler(range(DATA_SIZE)), batch_size=batch_size, drop_last=drop_last):
pass
if __name__ == '__main__':
import numpy as np
import timeit
from tabulate import tabulate
DATA_SIZE = 99999
AVG_TIMES = 100
result = []
DROP_LAST = False
for BATCH_SIZE in (4, 8, 16, 32, 64, 128, 640, 6400):
print(f"batch size: {BATCH_SIZE}, drop last: {DROP_LAST}")
timer = timeit.Timer(lambda: _iter_on_current_sampler(BATCH_SIZE, DROP_LAST))
cost0 = timer.repeat(AVG_TIMES, 1)
cost0_avg = float(np.average(cost0))
cost0_std = float(np.std(cost0))
print(f"time cost(current):{cost0_avg}. origin data: {cost0}")
timer = timeit.Timer(lambda: _iter_on_new_sampler(BATCH_SIZE, DROP_LAST))
cost1 = timer.repeat(AVG_TIMES, 1)
cost1_avg = float(np.average(cost1))
cost1_std = float(np.std(cost1))
print(f"time cost(new):{cost1_avg}. origin data: {cost1}")
speedup_percent = "%.2f" % ((1 / cost1_avg - 1 / cost0_avg) * cost0_avg * 100) + "%"
print(f"speedup: {speedup_percent}\n")
current_row = [BATCH_SIZE, DROP_LAST,
"%.3e" % cost0_avg, "%.3e" % cost0_std,
"%.3e" % cost1_avg, "%.3e" % cost1_std,
speedup_percent]
result.append(current_row)
print(tabulate(result, headers=("batch_size", "drop_last", "current(avg)", "current(std)", "new(avg)", "new(std)",
"speedup(new vs current)")))
```
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 0 |
2,465 | 102,507 |
[Inductor] [CPU] hf_Longformer performance regression > 10% on 2023-05-28 nightly release
|
triaged, oncall: pt2, module: cpu inductor
|
### ๐ Describe the bug
Performance regression: **hf_Longformer** [TorchInductor CPU Performance Dashboard](https://github.com/pytorch/pytorch/issues/93531#issuecomment-1567662304) on 2023-05-28 as below:
| 2023-05-28 | | | | 2023-05-25 | | | | Result Comp | | |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| batch_size | speedup | inductor | eager | batch_size | speedup | inductor | eager | speedup ratio | eager ratio | inductor ratio |
|1 |1.172147 |0.082288565 |0.096454295 |1 |1.358556 |0.070512074 |0.095794601 |0.86 |0.99 |0.86
SW information:
SW | Nightly commit | Main commit
-- | -- | --
Pytorch|[3dedcb3](https://github.com/pytorch/pytorch/commit/3dedcb3)|[c3ea8cc](https://github.com/pytorch/pytorch/commit/c3ea8cc)
Torchbench|/|[987bbec3](https://github.com/pytorch/benchmark/commit/987bbec3)
torchaudio|[894388d](https://github.com/pytorch/audio/commit/894388d)|[72d3fe0](https://github.com/pytorch/audio/commit/72d3fe0)
torchtext|[3cf1db6](https://github.com/pytorch/text/commit/3cf1db6)| [0dc07eb](https://github.com/pytorch/text/commit/0dc07eb)
torchvision|[50fb988](https://github.com/pytorch/vision/commit/50fb988)|[abc40ef](https://github.com/pytorch/vision/commit/abc40ef)
torchdata|[a754743](https://github.com/pytorch/data/commit/a754743)|[ba31745](https://github.com/pytorch/data/commit/ba31745)
dynamo_benchmarks|[174d01b](https://github.com/pytorch/pytorch/commit/174d01b)|/
graph:
2023_05_28
[graph.txt](https://github.com/pytorch/pytorch/files/11595496/graph.txt)
2023_05_25
[graph.txt](https://github.com/pytorch/pytorch/files/11595499/graph.txt)
### Versions
repro:
```bash
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/torchbench.py --performance --inference --float32 -dcpu -n50 --inductor --no-skip --dashboard --only hf_Longformer --cold_start_latency
```
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 3 |
2,466 | 102,501 |
[Utils][tensorboard]Enhancement: Include 'max_outputs' parameter in torch.utils.tensorboard.summary's 'image' method
|
triaged, module: tensorboard
|
### ๐ The feature, motivation and pitch
## Issue Description ##
The `image` function in `torch.utils.tensorboard.summary` does not have a `max_outputs` parameter, even though it is mentioned in the function's documentation. The `max_outputs` parameter is important for controlling the number of images included in the summary, but it is currently missing in the implementation.
## code in origin tensorboard ##
```def image(name, data, step=None, max_outputs=3, description=None)```
https://github.com/tensorflow/tensorboard/blob/fd75a84d30b85bdc80986da01fbf014f363ca3b7/tensorboard/plugins/image/summary_v2.py#L27
## code in torch.utils.tensorboard ##
``` def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats="CHW")```
https://github.com/pytorch/pytorch/blob/0e72ada9bba2693e55fd7177ab4d2af826d7d15f/torch/utils/tensorboard/writer.py#LL563C4-L565C7
According to the documentation, the `max_outputs` parameter should determine the maximum number of images to be included in the summary. However, in the current implementation, there is no handling or usage of this parameter.
https://github.com/pytorch/pytorch/blob/0e72ada9bba2693e55fd7177ab4d2af826d7d15f/torch/utils/tensorboard/summary.py#L424
## Expected Behavior ##
I would expect the `image` function in `torch.utils.tensorboard.summary` to accept and utilize the `max_outputs` parameter, similar to the functionality provided by the native TensorBoard.
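Until such a parameter exists, a minimal caller-side sketch of the intended behavior (illustrative only, not a proposed API change; it uses the existing `add_images` writer method and a local `max_outputs` variable) would be to truncate the batch before logging:
```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
images = torch.rand(16, 3, 64, 64)  # a batch of 16 RGB images

max_outputs = 3  # the limit the missing parameter would enforce
# Only log the first `max_outputs` images of the batch.
writer.add_images("samples", images[:max_outputs], global_step=0, dataformats="NCHW")
writer.close()
```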
## Steps to Reproduce (if applicable) ##
N/A
## Environment Information (if applicable) ##
N/A
## Additional Context ##
I believe adding the `max_outputs` parameter to the `image` function in `torch.utils.tensorboard.summary` would enhance the functionality and compatibility of the module with the native TensorBoard implementation.
### Alternatives
_No response_
### Additional context
## Additional Context ##
I believe the addition of the 'max_outputs' parameter to the 'image' method in torch.utils.tensorboard.summary would greatly enhance the flexibility and compatibility of the module. Currently, users are unable to control the number of images included in the summary, which can lead to excessive or insufficient visualizations in certain scenarios.
By introducing the 'max_outputs' parameter, users would have the ability to specify the maximum number of images to be included in the summary, aligning with the behavior of the native TensorBoard implementation. This would provide consistency and allow for better customization and interpretation of the summary results.
Additionally, the 'max_outputs' parameter is already mentioned in the function's documentation, indicating that it was likely intended to be implemented but was unintentionally omitted.
I believe this enhancement would greatly benefit PyTorch users who rely on the tensorboard.summary module for visualizations and analysis of their models.
| 0 |
2,467 | 102,498 |
[REQUEST] - Update Multiprocessing best practices with CPU device
|
module: multiprocessing, triaged, docs-hackathon, topic: docs
|
### ๐ The doc issue
The current documentation on `torch.multiprocessing` focuses primarily on CUDA devices. Users are very likely to encounter oversubscription issues when following the instructions on a CPU device. On a CPU device, it is usually necessary to do explicit thread affinity binding as well as NUMA control, just as explained in [Tuning Guide for AI on the 4th Generation Intel® Xeon® Scalable Processors](https://www.intel.com/content/www/us/en/developer/articles/technical/tuning-guide-for-ai-on-the-4th-generation.html)
The example from [mnist_hogwild](https://github.com/pytorch/examples/tree/main/mnist_hogwild) should also be updated at the same time, to give clearer hands-on guidance on how to do multiprocessing on a CPU device.
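As a rough, hedged sketch of the oversubscription concern (illustrative only, not proposed documentation text): each spawned worker should be limited to its own share of cores, for example by capping intra-op threads per process:
```python
import torch
import torch.multiprocessing as mp

def worker(rank, threads_per_proc):
    # Cap intra-op parallelism so N processes don't each spawn one thread per core.
    torch.set_num_threads(threads_per_proc)
    x = torch.randn(1024, 1024)
    (x @ x).sum()  # stand-in for real CPU work

if __name__ == "__main__":
    nproc = 4
    # Heuristic split of the default intra-op thread budget across workers.
    threads_per_proc = max(1, torch.get_num_threads() // nproc)
    mp.spawn(worker, args=(threads_per_proc,), nprocs=nproc)
```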
### Suggest a potential alternative/fix
We aim to complete the document as part of PyTorch Docathon 2023. cc @VitalyFedyunin @jingxu10
| 0 |
2,468 | 102,479 |
torch.onnx.errors.CheckerError: The model does not have an ir_version set properly.
|
module: onnx, triaged
|
### ๐ Describe the bug
torch=1.13.1
onnx=1.14.0
[libprotobuf ERROR /opt/conda/conda-bld/pytorch_1670525552411/work/third_party/protobuf/src/google/protobuf/message_lite.cc:457] onnx_torch.ModelProto exceeded maximum protobuf size of 2GB: 8266614040
Traceback (most recent call last):
File "/data03/zhangxiaolei/anaconda3/envs/LLM/lib/python3.8/site-packages/torch/onnx/utils.py", line 1652, in _export
_C._check_onnx_proto(proto, full_check=True)
RuntimeError: The model does not have an ir_version set properly.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "model_construct_conformer.py", line 399, in <module>
cli_main()
File "model_construct_conformer.py", line 395, in cli_main
main(args)
File "model_construct_conformer.py", line 203, in main**
return _main(cfg, sys.stdout)
File "model_construct_conformer.py", line 299, in _main
torch.onnx.export(
File "/data03/anaconda3/envs/LLM/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/data03/anaconda3/envs/LLM/lib/python3.8/site-packages/torch/onnx/utils.py", line 1654, in _export
raise errors.CheckerError(e)
torch.onnx.errors.CheckerError: The model does not have an ir_version set properly.
### Versions
torch=1.13.1
onnx=1.14.0
| 3 |
2,469 | 102,459 |
Matrix multiplication performance regression in case of an additional dimension of size 1
|
module: dependency bug, triaged, module: cublas
|
### ๐ Describe the bug
`A.unsqueeze(0) @ B.unsqueeze(0)` can be about 70 times slower than `A @ B` depending on the dimensions of matrices `A` and `B`.
Code for reproducing:
```python
from time import time
import torch
A = torch.randn([20, 4000000], dtype=torch.float64, device='cuda')
B = torch.randn([4000000, 20], dtype=torch.float64, device='cuda')
torch.cuda.synchronize()
for i in range(3):
    start = time()
    a = A @ B
    torch.cuda.synchronize()
    print(time() - start) # 0.064
    start = time()
    b = (A.unsqueeze(0) @ B.unsqueeze(0)).squeeze(0)
    torch.cuda.synchronize()
    print(time() - start) # 4.375
    start = time()
    c = torch.einsum("in,nj->ij", A, B)
    torch.cuda.synchronize()
    print(time() - start) # 4.375
print(torch.isclose(a, b).all().item()) # True
print((a == b).all().item()) # False
print((b == c).all().item()) # True
```
This happens on a machine with NVIDIA GeForce RTX 2080 Ti and
```
PyTorch version: 2.0.1
CUDA used to build PyTorch: 12.1
OS: Arch Linux (x86_64)
CUDA runtime version: 12.1.105
```
There is no difference in execution times (always 0.064 s) and `(a == b).all().item()` evaluates to `True` on a similar machine with
```
PyTorch version: 1.13.1
CUDA used to build PyTorch: 11.8
OS: Arch Linux (x86_64)
CUDA runtime version: 11.8.89
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 13.1.1 20230429
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.37
Python version: 3.11.3 (main, Apr 5 2023, 15:52:25) [GCC 12.2.1 20230201] (64-bit runtime)
Python platform: Linux-6.3.2-arch1-1-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 530.41.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.8.0
/usr/lib/libcudnn_adv_infer.so.8.8.0
/usr/lib/libcudnn_adv_train.so.8.8.0
/usr/lib/libcudnn_cnn_infer.so.8.8.0
/usr/lib/libcudnn_cnn_train.so.8.8.0
/usr/lib/libcudnn_ops_infer.so.8.8.0
/usr/lib/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 2920X 12-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 59%
CPU max MHz: 3500.0000
CPU min MHz: 2200.0000
BogoMIPS: 6988.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 768 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 32 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.0.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.1a0
[conda] Could not collect
```
cc @csarofeen @ptrblck @xwang233
| 0 |
2,470 | 102,457 |
Batching rule for `aten::_scaled_dot_product_efficient_attention`
|
triaged, module: vmap
|
### ๐ The feature, motivation and pitch
Hi, I am trying to take batched gradients of a vector output given by `_scaled_dot_product_efficient_attention` but saw the error
```
/site-packages/optimum/bettertransformer/models/attention.py:56: UserWarning: There is a performance drop because we have not yet implemented the batching rul
e for aten::_scaled_dot_product_efficient_attention. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)
sdpa_result = torch.nn.functional.scaled_dot_product_attention
```
when running my code. Implementing this would really increase throughput in our application! I would also be happy to take a stab at implementing it, if there is a document describing what I need to do at a high level.
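For context, a minimal sketch (assumed shapes and code, not the reporter's) of the vmap-over-grad pattern that hits this fallback; it needs a CUDA device and assumes `scaled_dot_product_attention` dispatches to the memory-efficient kernel for these shapes and dtype:
```python
import torch
import torch.nn.functional as F
from torch.func import grad, vmap

def scalar_out(q, k, v):
    # Reduce the attention output to a scalar so grad() can differentiate it.
    return F.scaled_dot_product_attention(q, k, v).sum()

# Per-example gradients w.r.t. the queries, batched with vmap over dim 0.
q = torch.randn(8, 4, 16, 32, device="cuda")
k = torch.randn(8, 4, 16, 32, device="cuda")
v = torch.randn(8, 4, 16, 32, device="cuda")
per_sample_grads = vmap(grad(scalar_out))(q, k, v)
print(per_sample_grads.shape)  # torch.Size([8, 4, 16, 32])
```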
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519
| 11 |
2,471 | 102,453 |
RuntimeError using torch.nn.functional.pad when using MPS
|
triaged, module: mps
|
### ๐ Describe the bug
I'm having an issue with `torch.nn.functional.pad` when using the _MPS backend_ on an Apple Silicon Mac. I encountered this issue when trying to run a project made by others ([ECCV2022-RIFE](https://github.com/megvii-research/ECCV2022-RIFE)). Moreover, I have found a closed issue #87277 that reports a very similar bug to mine. The only difference is that I'm hitting this problem when using pad with `mode='replicate'` (instead of the default `'constant'`).
Here is a minimal example to demonstrate the problem:
```
import torch
y = torch.rand((1, 1, 3, 32, 32))
y_pad = torch.nn.functional.pad(y, (5, 5, 5, 5, 5, 5), mode='replicate')
```
When the above code is run on CPU by default, everything works well and no error is raised.
But when the same code is run with the device set to MPS (as shown below):
```
import torch
y = torch.rand((1, 1, 3, 32, 32), device='mps')
y_pad = torch.nn.functional.pad(y, (5, 5, 5, 5, 5, 5), mode='replicate')
```
The following error is raised:
```
Traceback (most recent call last):
File "/Users/gordon/PycharmProjects/ECCV2022-RIFE/DEBUG.py", line 3, in <module>
y_pad = torch.nn.functional.pad(y, (5, 5, 5, 5, 5, 5), mode='replicate')
RuntimeError: Argument #8: Padding size should be less than the corresponding input dimension, but got: padding (5, 5) at dimension 2 of input 5
```
Although I think this is probably a bug, I'm sorry for causing inconvenience if it turns out to be a misjudgment.
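As a stopgap, a minimal workaround sketch (assuming the tensor fits in host memory) is to run the padding on the CPU, where the report above shows it works, and move the result back to MPS:
```python
import torch

y = torch.rand((1, 1, 3, 32, 32), device='mps')
# Pad on CPU to sidestep the MPS error, then move the result back to MPS.
y_pad = torch.nn.functional.pad(y.cpu(), (5, 5, 5, 5, 5, 5), mode='replicate').to('mps')
print(y_pad.shape)  # torch.Size([1, 1, 13, 42, 42])
```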
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 19:01:19) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[conda] No relevant packages
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
2,472 | 102,447 |
Add additional "sigmoid" approximation to GeLu activation?
|
module: nn, triaged, enhancement, needs research
|
### ๐ The feature, motivation and pitch
Hi,
I was just randomly browsing [tinygrad](https://github.com/geohot/tinygrad) and saw the "quick_gelu" function, noticing that this implementation is not part of PyTorch itself.
PyTorch only seems to have the `tanh` approximation, so this is a feature request to add the second approximation also mentioned in the paper [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415v4).
I am not an AI expert, but this was just a simple thing I noticed and could gladly implement :)
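For reference, a minimal sketch of what such a user-level sigmoid approximation could look like (illustrative only, not a proposed PyTorch API; the constant 1.702 is the one from the paper, also listed under Alternatives below):
```python
import torch

def quick_gelu(x: torch.Tensor) -> torch.Tensor:
    # Sigmoid-based GELU approximation: x * sigmoid(1.702 * x)
    return x * torch.sigmoid(1.702 * x)

x = torch.randn(8)
print(quick_gelu(x))
print(torch.nn.functional.gelu(x))  # exact (erf-based) GELU, for comparison
```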
### Alternatives
* Add a `sigmoid` approximation also, `x*sigmoid(1.702 * x)`
* Do nothing :)
### Additional context
Basically this commit:
https://github.com/pytorch/pytorch/commit/f7a1ca8476b4788265f7b4e2d9d088937dd1f033
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
2,473 | 102,445 |
Add pin_memory and is_pinned to NT
|
Stale
|
TODO:
- torch.nested.nested_tensor(..., pin_memory=True) doesn't work yet.
- tests
| 7 |
2,474 | 102,438 |
DDP multi node multi gpu inconsistent params
|
oncall: distributed
|
### ๐ Inconsistent params
@[alexeib](https://github.com/alexeib) @[rohan-varma](https://github.com/rohan-varma) @[mrshenli](https://github.com/mrshenli)
I also reported this on [huggingface issue](https://github.com/huggingface/accelerate/issues/1481#issue-1728799167)
I pretrain on multiple nodes with multiple GPUs on a cluster managed by SLURM. It worked the first time, but now it gives me the error message:
RuntimeError: DDP expects same model across all ranks, but **Rank 0** has `237 params`, while **rank 1** has inconsistent `0 params`.
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.3 (main, Apr 19 2023, 23:54:32) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1498.421
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4500.08
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.24.3 py311h08b1b3b_1
[conda] numpy-base 1.24.3 py311hf175353_1
[conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py311_cu118 pytorch
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.15.2 py311_cu118 pytorch
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,475 | 102,437 |
discuss.pytorch.org signup issue
|
module: docs, triaged
|
### ๐ Describe the bug
"New registrations are not allowed from your IP address (maximum limit reached). Contact a staff member."
I do not have a registered account, yet it says otherwise. On the website there is no contact information at all.
### Question
How can I sign up still?
cc @svekars @carljparker
| 8 |
2,476 | 102,436 |
multiple mps for base X86 Mac with multiples gpus
|
triaged, module: mps
|
### ๐ The feature, motivation and pitch
## Overview
Your app can use multiple GPUs on an Intel-based Mac, including any built-in and external GPUs. Start by getting a list of all the system's available GPUs, and then submit workloads to those appropriate for your app's tasks.
> **Note**: Mac computers with Apple silicon have a single, high-performance, and energy-efficient GPU.
### Get a List of GPU Devices
Your app can get an array of [`MTLDevice`](https://developer.apple.com/documentation/metal/mtldevice) instances, each of which represents an available GPU, by calling the [`MTLCopyAllDevices()`](https://developer.apple.com/documentation/metal/1433367-mtlcopyalldevices) function.
```swift
let devices = MTLCopyAllDevices()
```
However, that function provides a list of GPUs that are available at that moment in time. To get the current list and register for device update notifications, provide a handler to Metal by calling the [`MTLCopyAllDevicesWithObserver(_:_:)`](https://developer.apple.com/documentation/metal/2869731-mtlcopyalldeviceswithobserver) function.
```swift
let (devices, observer) = MTLCopyAllDevicesWithObserver() { (device, notification) in
    self.device(device, issued: notification)
}
```
Metal calls your handler to tell your app when the system adds or removes an `MTLDevice` from the system.
> **Note**: Metal calls your app's handler when a device may change its state in the future, such as when a person makes a safe disconnect request. For more information, see [Handling External GPU Additions and Removals](https://developer.apple.com/documentation/metal/gpu_devices_and_work_submission/multi-gpu_systems/handling_external_gpu_additions_and_removals).
Your app can deregister its observer when it no longer needs GPU device updates from the system by calling the [`MTLRemoveDeviceObserver(_:)`](https://developer.apple.com/documentation/metal/2869724-mtlremovedeviceobserver) function.
```swift
MTLRemoveDeviceObserver(observer)
```
### Identify Each GPU by Type
Each GPU on a Mac computer's system can be one of three types: integrated, discrete, or external. You can identify each `MTLDevice` instance's type by inspecting its [`isLowPower`](https://developer.apple.com/documentation/metal/mtldevice/1433409-islowpower) and [`isRemovable`](https://developer.apple.com/documentation/metal/mtldevice/2889851-isremovable) properties.
GPU Type | isLowPower | isRemovable
-- | -- | --
Integrated | true | false
Discrete | false | false
External | false | true
[Docs Related](https://developer.apple.com/documentation/metal/gpu_devices_and_work_submission/multi-gpu_systems/finding_multiple_gpus_on_an_intel-based_mac)
### Alternatives
As you can see, there are 2021 Intel-based Mac Pros on the market, with costs reaching up to $70,000, that demonstrate remarkable performance when training on a single GPU using PyTorch. There are also many in the community who will continue to use MacBook Pros for a significant period, so it would be incredible to add support for multiple GPUs. Despite the advent of new ARM-based models, there remains extensive activity in the Intel sector, and we can expect support from Apple to continue for quite some time. A straightforward check would be: if the system is ARM/ARM64 it has a single GPU; if it is x86, check whether it has access to multiple GPUs.
### Additional context
_No response_
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
2,477 | 102,435 |
torch.distributed.all_reduce() has inconsistent behavior
|
oncall: distributed, triaged
|
### ๐ Describe the bug
This code does not perform an all_reduce() reduction across workers:
```
torch.distributed.all_reduce( total_loss.cuda(), op=torch.distributed.ReduceOp.AVG )
total_loss = total_loss_cuda.cpu()
```
while this code performs the operation correctly:
```
total_loss_cuda = total_loss.cuda()
torch.distributed.all_reduce( total_loss_cuda, op=torch.distributed.ReduceOp.AVG )
total_loss = total_loss_cuda.cpu()
```
In my opinion, this is a bug, at least a usability one since I don't see a reason why the temporary returned by total_loss.cuda() in the first version shouldn't be usable as an argument to torch.distributed.all_reduce(). Version: PyTorch/1.12.0-CUDA-11.7.
### Versions
collect_pyenv (the code is running on a large supercomputer, so collect_pyenv might not be entirely accurate/useful):
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.7 (Green Obsidian) (x86_64)
GCC version: (GCC) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28
Python version: 3.10.4 (main, Oct 4 2022, 08:48:24) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 83%
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.93
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0
[conda] Could not collect
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,478 | 102,430 |
Add support ONNX Opset 19
|
module: onnx, triaged
|
### ๐ The feature, motivation and pitch
ONNX v1.14.0 has just released opset 19. Please add support for exporting DeformConv and AdaptiveAveragePool2d to opset 19.
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
2,479 | 102,423 |
Add default device function interface for device-aware apis
|
triaged, module: dispatch
|
_Originally posted by @mikaylagawarecki in https://github.com/pytorch/pytorch/pull/102155#pullrequestreview-1446953358_
Hi @mikaylagawarecki, some of the current community functions, in both C++ and Python, have a default device of CUDA.
E.g. `_pin_memory` has a default dispatch key of kCUDA,
`DispatchKeySet _dk = c10::DispatchKeySet(c10::computeDispatchKey(c10::nullopt, self.layout(), device.value_or(at::kCUDA)));`
and `fork_rng` on the Python side has a default device_type of `cuda`,
`def fork_rng(devices=None, enabled=True, _caller="fork_rng", _devices_kw="devices", device_type="cuda")`, etc.
For third-party hardware devices, such as privateuse1, we need to pass additional device parameters, e.g. `fork_rng(device_type='xxx')` or `pin_memory('xxx')`, when using the above functions. Some other functions have the same problem: the default device is CUDA, which leads to incompatible API usage.
We want to add a default device function upstream. On third-party devices, by setting the default device through such an interface, the above functions could be called directly, e.g. `fork_rng()`, without needing to set the device_type explicitly. This would improve the compatibility of community code across different devices.
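As an illustration of the request, here is a hedged user-side sketch only; `set_default_device_type` and `fork_rng_default` are hypothetical names, not existing PyTorch APIs. Only `torch.random.fork_rng` and its `device_type` keyword from the signature quoted above are assumed to be real:
```python
import torch

_DEFAULT_DEVICE_TYPE = "cuda"  # hypothetical process-wide default

def set_default_device_type(device_type: str) -> None:
    global _DEFAULT_DEVICE_TYPE
    _DEFAULT_DEVICE_TYPE = device_type

def fork_rng_default(devices=None, enabled=True):
    # Forward to the real fork_rng, filling in the configured default device
    # type so callers on privateuse1 backends don't have to repeat it.
    return torch.random.fork_rng(devices=devices, enabled=enabled,
                                 device_type=_DEFAULT_DEVICE_TYPE)

# Usage on a third-party backend (illustrative):
# set_default_device_type("xxx")
# with fork_rng_default():
#     ...
```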
| 4 |
2,480 | 102,417 |
_view_func but without keeping original view tensor alive
|
module: autograd, triaged, needs design, module: viewing and reshaping
|
### ๐ Describe the bug
Tensor._view_func lets you reapply a view tensor's view func to a different tensor. That's good, but often what I want is to hold onto this view function **while freeing the original tensor** (e.g., if I'm using this to reconstruct a tensor later from a new base.) There's no way to do this right now.
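For context, a rough sketch of the pattern in question; `_view_func` is a private API and its exact call convention is assumed here, so treat this as illustrative only:
```python
import torch

base = torch.randn(4, requires_grad=True)
view = base.view(2, 2)  # a view tensor that carries a view func

new_base = torch.randn(4, requires_grad=True)
# Reapply the recorded view to a different base (assumed call convention).
rebuilt = view._view_func(new_base)

# The limitation described above: to call view._view_func later, `view`
# (and therefore `base`) must be kept alive; the view func cannot be
# detached from the original tensors so they can be freed.
```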
### Versions
main
cc @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 1 |
2,481 | 102,402 |
[Composable] Unified way to check if modules are managed by composable API
|
oncall: distributed, release notes: distributed (composable)
|
### ๐ The feature, motivation and pitch
Today to check if a top-level module is managed by a composable API (replicate, fully_shard, trec_shard), the `_get_registry(module)` API can be used. However, this is limited when considering the case of nested wrapping. i.e. if I have the following:
```
class UnitModule:
    self.lin1 = nn.Linear(...)
m = UnitModule()
fully_shard(m)
```
then I can check that `m` is managed by FSDP composable API, but there's no easy way for me to tell that `self.lin1` is also sharded.
The generalization that if `_get_registry(module)` returns True then all of module's children are also managed by that composable API is not true in practice. For example, I could've called `replicate()` or `trec_shard` on `self.lin1` and it would be managed by another composable API. Or, I could've specified `fully_shard` to ignore `self.lin1` as well.
The following API could work, but seems a bit cumbersome:
```
def get_fsdp_managed_modules(root_module):
    fsdp_mods = []
    fsdp_roots = [m for m in root_module.modules() if 'fully_shard' in _get_registry(m)]
    for m in fsdp_roots:
        if 'replicate' not in _get_registry(m) and 'trec_shard' not in _get_registry(m) and not _is_fsdp_ignored(m):
            fsdp_mods.append(m)
    return fsdp_mods
```
In general there are a couple pieces of information that we probably want:
- whether a particular module is the root module of a composable (fsdp, sharded, replicate) subtree
- whether a module is not (necessarily) the root, but still managed by a composable API
- whether a module is explicitly ignored by any of the composable APIs.
Would love suggestions / feedback on this issue.
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 0 |
2,482 | 102,400 |
Unexpected Behavior when using torch.isclose()
|
triaged, module: NaNs and Infs, module: testing
|
### ๐ Describe the bug
When using `self.assertEqual` within test_transformers, I was observing some weird behavior for certain test cases. I tracked this down to `torch.isclose()` and have a minimal repro:
``` Python
import torch
def main():
a = torch.tensor([0.0001], dtype=torch.float16, device="cuda")
b = torch.tensor([0.0], dtype = torch.float16, device="cuda")
rtol = (torch.abs(a - b) / torch.min(abs(a), abs(b))).item()
print(rtol)
# This will be torch.inf due to the division by zero
print(torch.isclose(a, b, atol=1, rtol=rtol))
# >> False
if __name__ == "__main__":
main()
```
This test will fail regardless of what you set atol to. I think this is counterintuitive to what one would expect:
From the docs: https://github.com/pytorch/pytorch/blob/0ed22fce978d7aae4a62ee86d9f85fa7314b1dd5/aten/src/ATen/native/TensorCompare.cpp#L287
``` C++
// (1) A is equal to B, with NaNs comparing equal when equal_nan is true.
// (2) The error abs(A - B) is finite and less than the max error
// (atol + abs(rtol * B)).
```
I think the problem is the second term. In this case (A-B) is finite and `atol + abs(rtol * B) = 1 + abs(inf * 0)`.
The second term becomes NaN:
```
>>> import torch
>>> a = torch.tensor([torch.inf])
>>> b = torch.tensor([0.0])
>>> a
tensor([inf])
>>> b
tensor([0.])
>>> a * b
tensor([nan])
```
This might truly be a case of undefined behavior where we make no assumptions, but I would think that someone using isclose with that atol expects the values to be considered equal. I could be wrong. My argument would be: add 1 to everything to shift the problem away from being centered around 0, and the results would be as expected.
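A minimal sketch of that shift-by-1 argument (illustrative only; float32 is used here and the device argument is dropped, so the +1 offset isn't lost to float16 rounding):
```python
import torch

a = torch.tensor([0.0001 + 1.0], dtype=torch.float32)
b = torch.tensor([0.0 + 1.0], dtype=torch.float32)

# Same rtol construction as above, but min(|a|, |b|) is now 1.0, so rtol is finite.
rtol = (torch.abs(a - b) / torch.min(a.abs(), b.abs())).item()
print(rtol)                                    # ~1e-4 instead of inf
print(torch.isclose(a, b, atol=1, rtol=rtol))  # tensor([True])
```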
### Versions
Nightly
| 1 |
2,483 | 102,374 |
Hooks not working in version 2.0.1+cu118
|
module: nn, triaged
|
### ๐ Forward Hooks are not getting called
When writing forward hooks on a layer of a model, I realized they are not being called at all.
```python
# inside __init__ function
for name, module in self.model.named_modules():
    if attention_layer_name in name:
        print(f"Module hook registered for: {name}")
        module.register_forward_hook(lambda m, i, o: attentions.append(o))
```
Expected behavior: when I forward an example through the model, the hooks should be called after the specifically hooked layers finish their forward pass.
I did try it on torch version 1.13.1, and it seems to work fine there.
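For reference, a minimal self-contained check (a hypothetical toy model, not the reporter's code) that can be used to verify whether forward hooks fire on a given build:
```python
import torch
import torch.nn as nn

outputs = []
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
# Register a hook on every Linear submodule and record its output.
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(lambda m, i, o: outputs.append(o))

model(torch.randn(2, 4))
print(len(outputs))  # expected: 1 (the Linear layer's output was captured)
```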
### Versions
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 11 |
2,484 | 102,373 |
[cuda] Switching CI to CUDA 12.1 timing out linux-bionic-cuda12.1-py3.10-gcc7 / test (distributed, 3, 3, linux.8xlarge.nvidia.gpu)
|
module: cuda, module: ci, triaged
|
### ๐ Describe the bug
Following the PR switching CI from CUDA 11.8 to CUDA 12.1: https://github.com/pytorch/pytorch/pull/102178
Failure of linux-bionic-cuda12.1-py3.10-gcc7 / test (distributed, 3, 3, linux.8xlarge.nvidia.gpu): https://github.com/pytorch/pytorch/actions/runs/5071677184/jobs/9113227787
```
023-05-24T23:05:58.3284344Z distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_join_gpu [2023-05-24 22:25:58,524] torch.testing._internal.common_distributed: [ERROR] Process 0 timed out with traceback:
2023-05-24T23:05:58.3284800Z
2023-05-24T23:05:58.3284961Z Thread 0x00007fe2aaccc700 (most recent call first):
2023-05-24T23:05:58.3285257Z <no Python frame>
2023-05-24T23:05:58.3285416Z
2023-05-24T23:05:58.3285571Z Thread 0x00007fe2b371e700 (most recent call first):
2023-05-24T23:05:58.3285931Z <no Python frame>
2023-05-24T23:05:58.3286098Z
2023-05-24T23:05:58.3286273Z Current thread 0x00007fe2d8b0a700 (most recent call first):
2023-05-24T23:05:58.3286879Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 620 in _event_listener
2023-05-24T23:05:58.3287377Z File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 953 in run
2023-05-24T23:05:58.3287791Z File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
2023-05-24T23:05:58.3288226Z File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 973 in _bootstrap
2023-05-24T23:05:58.3288465Z
2023-05-24T23:05:58.3288623Z Thread 0x00007fe3825f9080 (most recent call first):
2023-05-24T23:05:58.3289167Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1401 in _pre_forward
2023-05-24T23:05:58.3289808Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1531 in forward
2023-05-24T23:05:58.3290429Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511 in _call_impl
2023-05-24T23:05:58.3291053Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502 in _wrapped_call_impl
2023-05-24T23:05:58.3291673Z File "/var/lib/jenkins/workspace/test/distributed/optim/test_zero_redundancy_optimizer.py", line 1016 in _test_zero_join
2023-05-24T23:05:58.3292215Z File "/var/lib/jenkins/workspace/test/distributed/optim/test_zero_redundancy_optimizer.py", line 1122 in test_zero_join_gpu
2023-05-24T23:05:58.3292886Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 109 in wrapper
2023-05-24T23:05:58.3293553Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543 in wrapper
2023-05-24T23:05:58.3294211Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657 in run_test
2023-05-24T23:05:58.3294872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 636 in _run
2023-05-24T23:05:58.3295374Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/process.py", line 108 in run
2023-05-24T23:05:58.3295840Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/process.py", line 314 in _bootstrap
2023-05-24T23:05:58.3296278Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/spawn.py", line 129 in _main
2023-05-24T23:05:58.3296737Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/spawn.py", line 116 in spawn_main
2023-05-24T23:05:58.3297107Z File "<string>", line 1 in <module>
2023-05-24T23:05:58.3297286Z
2023-05-24T23:05:58.3297632Z [2023-05-24 22:25:58,524] torch.testing._internal.common_distributed: [ERROR] Process 1 timed out with traceback:
2023-05-24T23:05:58.3297913Z
2023-05-24T23:05:58.3298093Z Current thread 0x00007fb1a6d38700 (most recent call first):
2023-05-24T23:05:58.3298690Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 620 in _event_listener
2023-05-24T23:05:58.3299184Z File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 953 in run
2023-05-24T23:05:58.3299594Z File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
2023-05-24T23:05:58.3300028Z File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 973 in _bootstrap
2023-05-24T23:05:58.3300271Z
2023-05-24T23:05:58.3300426Z Thread 0x00007fb250796080 (most recent call first):
2023-05-24T23:05:58.3300956Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114 in forward
2023-05-24T23:05:58.3301962Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511 in _call_impl
2023-05-24T23:05:58.3302727Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502 in _wrapped_call_impl
2023-05-24T23:05:58.3303371Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217 in forward
2023-05-24T23:05:58.3303991Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511 in _call_impl
2023-05-24T23:05:58.3304603Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502 in _wrapped_call_impl
2023-05-24T23:05:58.3305250Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1372 in _run_ddp_forward
2023-05-24T23:05:58.3305895Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1535 in forward
2023-05-24T23:05:58.3306519Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511 in _call_impl
2023-05-24T23:05:58.3307140Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502 in _wrapped_call_impl
2023-05-24T23:05:58.3307673Z File "/var/lib/jenkins/workspace/test/distributed/optim/test_zero_redundancy_optimizer.py", line 1016 in _test_zero_join
2023-05-24T23:05:58.3308347Z File "/var/lib/jenkins/workspace/test/distributed/optim/test_zero_redundancy_optimizer.py", line 1122 in test_zero_join_gpu
2023-05-24T23:05:58.3309017Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 109 in wrapper
2023-05-24T23:05:58.3309666Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543 in wrapper
2023-05-24T23:05:58.3310329Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657 in run_test
2023-05-24T23:05:58.3310991Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 636 in _run
2023-05-24T23:05:58.3311469Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/process.py", line 108 in run
2023-05-24T23:05:58.3311933Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/process.py", line 314 in _bootstrap
2023-05-24T23:05:58.3312399Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/spawn.py", line 129 in _main
2023-05-24T23:05:58.3312859Z File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/spawn.py", line 116 in spawn_main
2023-05-24T23:05:58.3313208Z File "<string>", line 1 in <module>
2023-05-24T23:05:58.3313390Z
2023-05-24T23:05:58.3313504Z FAILED [300.0435s] [ 95%]
2023-05-24T23:05:58.3314080Z distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_model_parallel_parameters_as_bucket_view_False SKIPPED [8.1162s] (Need at least 4 CUDA devices) [ 97%]
2023-05-24T23:05:58.3314917Z distributed/optim/test_zero_redundancy_optimizer.py::TestZeroRedundancyOptimizerDistributed::test_zero_model_parallel_parameters_as_bucket_view_True SKIPPED [4.8112s] (Need at least 4 CUDA devices) [100%]
```
### Versions
2.1.0 nightly
cc @ngimel @seemethere @malfet @pytorch/pytorch-dev-infra @ptrblck
| 2 |
2,485 | 102,368 |
Issue with ShufflerIterDataPipe in torch 1.13.1
|
module: dataloader, triaged, module: data
|
### ๐ Describe the bug
Hi, I am trying to fine-tune a model in PyTorch using multiple GPUs. For this purpose I load a base model from `transformers` and I use the library `accelerate` to handle the multi-gpu setting. I have also created a new dataset class which inherits from `IterableDataset`. In that class, I define a shuffle function which makes use of `ShufflerIterDataPipe`, and I use it to shuffle my dataset. From this dataset, I define a dataloader (using `DataLoader` from pytorch) and I go through it with the help of a for loop.
TL;DR : The code works fine with PyTorch `1.10.1` but fails with PyTorch `1.13.1` due to `ShufflerIterDataPipe`
Here is my code
```python
import torch
from torch.utils.data import IterableDataset
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
from accelerate import Accelerator
from transformers import AutoModelForCausalLM
class ConstantLengthDataset(IterableDataset):
def __init__(
self,
dataset,
):
self.dataset = dataset
def __iter__(self):
for element in self.dataset :
yield element
def shuffle(self, buffer_size=1000):
return ShufflerIterDataPipe(self, buffer_size=buffer_size)
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small")
train_data = [torch.randint(0, 20000, (1024,)) for _ in range(10000)]
train_dataset = ConstantLengthDataset(train_data,).shuffle(buffer_size=100)
train_dataloader = DataLoader(train_dataset, batch_size=4)
accelerator = Accelerator()
model, train_dataloader = accelerator.prepare(model, train_dataloader)
for step, batch in enumerate(train_dataloader, start=1):
print(batch.shape)
print("step = "+str(step+1))
```
However, when I use PyTorch `1.13.1`, I have an issue:
- When I do not shuffle my dataset, which implies not using `ShufflerIterDataPipe`, my code works perfectly whether I use 1 GPU or 4 GPUs.
- When I do shuffle my dataset, which implies using `ShufflerIterDataPipe`, my code works only when I use 1 GPU. And it does not work when I use more GPUs (typically 4 or 8). The error is
`RuntimeError: Trying to create tensor with negative dimension -327818479245320786: [-327818479245320786]`
On the other hand, when I use PyTorch `1.10.1` there is no issue when using `ShufflerIterDataPipe`, whether I run my code with 1 or 4 GPUs.
Can you give me some information about why it happens? Maybe there is a problem with `ShufflerIterDataPipe` when I use multiple GPUs. The update from torch `1.10.1` to torch `1.13.1` may have changed something.
The whole trace in case of error
```
File "/fsx/armel/miscellaneous/minimal.py", line 35, in <module>
for step, batch in enumerate(train_dataloader, start=1):
File "/fsx/armel/miniconda3/envs/env/lib/python3.10/site-packages/accelerate/data_loader.py", line 528, in __iter__
next_batch, next_batch_info = self._fetch_batches(main_iterator)
File "/fsx/armel/miniconda3/envs/env/lib/python3.10/site-packages/accelerate/data_loader.py", line 506, in _fetch_batches
broadcast_object_list(batch_info)
File "/fsx/armel/miniconda3/envs/env/lib/python3.10/site-packages/accelerate/utils/operations.py", line 339, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/fsx/armel/miniconda3/envs/env/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2090, in broadcast_object_list
object_tensor = torch.empty( # type: ignore[call-overload]
RuntimeError: Trying to create tensor with negative dimension -327818479245320786: [-327818479245320786]
```
The expected output should be like
```
torch.Size([4, 1024])
step = 2
torch.Size([4, 1024])
step = 3
torch.Size([4, 1024])
step = 4
torch.Size([4, 1024])
step = 5
torch.Size([4, 1024])
step = 6
torch.Size([4, 1024])
step = 7
torch.Size([4, 1024])
step = 8
torch.Size([4, 1024])
step = 9
torch.Size([4, 1024])
step = 10
torch.Size([4, 1024])
...
```
### Versions
Environment with torch `1.13.1`
```
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.13.1
[pip3] torch-struct==0.5
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.24.3 py310hd5efca6_0
[conda] numpy-base 1.24.3 py310h8e6c178_0
[conda] pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 0.13.1 py310_cu116 pytorch
[conda] torchvision 0.14.1 py310_cu116 pytorch
```
Environment with torch `1.10.1`
```
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.3
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 510.73.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.3 py39hf6e8229_1
[conda] numpy-base 1.24.3 py39h060ed82_1
[conda] pytorch 1.10.1 py3.9_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.1 py39_cu113 pytorch
[conda] torchvision 0.11.2 py39_cu113 pytorch
```
cc @SsnL @VitalyFedyunin @ejguan @NivekT @dzhulgakov
| 2 |
2,486 | 102,360 |
PyTorch hangs at import when used together with TensorFlow
|
triaged, topic: binaries
|
### ๐ Describe the bug
Code:
```python
import tensorflow
import torch
```
This hangs in some cases in the `import torch`.
Importing in the other order, or just importing Torch on its own, does not hang. However, I'm reporting this here because the stack trace still looks suspicious.
Specifically, on my system, Ubuntu 22.04, using the distribution's Python 3.10, I have TensorFlow 2.12 and PyTorch 2.0.1. The same happens with Python 3.11.
The stacktrace of the hang:
```
0x00007ffff7d51992 in __GI___libc_read (fd=0, buf=0x7fffffffa6f7, nbytes=1) at ../sysdeps/unix/sysv/linux/read.c:26
26 ../sysdeps/unix/sysv/linux/read.c: No such file or directory.
(gdb) bt
#0 0x00007ffff7d51992 in __GI___libc_read (fd=0, buf=0x7fffffffa6f7, nbytes=1) at ../sysdeps/unix/sysv/linux/read.c:26
#1 0x00007ffff43af518 in std::random_device::_M_getval() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#2 0x00007fff5d55a6ef in _GLOBAL__sub_I_IpcFabricConfigClient.cpp () from /u/zeyer/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
#3 0x00007ffff7fc947e in call_init (l=<optimized out>, argc=argc@entry=3, argv=argv@entry=0x7fffffffd1c8, env=env@entry=0x555557198d40) at ./elf/dl-init.c:70
#4 0x00007ffff7fc9568 in call_init (env=0x555557198d40, argv=0x7fffffffd1c8, argc=3, l=<optimized out>) at ./elf/dl-init.c:33
...
```
The `fd=0` coming from `std::random_device::_M_getval` looks very suspicious to me. It looks like the `std::random_device` is not properly initialized? Code [here](https://github.com/gcc-mirror/gcc/blob/d156c6054200237b707f9fb44ae9958d926b0745/libstdc%2B%2B-v3/src/c%2B%2B11/random.cc#L582) and [here](https://github.com/pytorch/kineto/blob/6bf8eeb9955a2e975e494ae8dffd2259680e771b/libkineto/src/IpcFabricConfigClient.cpp#L26).
In other cases, I have also seen the error "random_device could not be read". This seems to be very related; maybe it got another uninitialized `_M_fd` value.
I also reported this here: https://github.com/rwth-i6/returnn/issues/1339
Some related issues:
https://discuss.pytorch.org/t/random-device-could-not-be-read/138697 (very related)
https://github.com/JohnSnowLabs/spark-nlp/issues/5943
https://discuss.tensorflow.org/t/tensorflow-linux-wheels-are-being-upgraded-to-manylinux2014/8339
https://github.com/h2oai/datatable/issues/2453
https://github.com/robjinman/pro_office_calc/issues/5
https://github.com/boostorg/fiber/issues/249
https://github.com/microsoft/LightGBM/issues/1516
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: 15.0.7
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 980
Nvidia driver version: 530.41.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU(s) scaling MHz: 42%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 9 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-5
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] numpy==1.23.5
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] triton==2.0.0
[conda] Could not collect
| 8 |
2,487 | 102,355 |
Data type mismatch in `batch_isend_irecv` docstring example
|
oncall: distributed
|
### ๐ The doc issue
In the example usage of [`batch_isend_irecv`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.batch_isend_irecv):
https://github.com/pytorch/pytorch/blob/6c9b94dcda696984552769da6cf578ccca0cfbdd/torch/distributed/distributed_c10d.py#L1709-L1718
`send_tensor` is `int64` while `recv_tensor` is `float32`. The resulting `recv_tensor` shows incorrect data.
To reproduce:
```python
import os
import torch
import torch.distributed as dist
def main():
device = "cuda"
rank = int(os.environ["RANK"])
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")
send_tensor = (torch.arange(2) + 2 * rank).to(device)
recv_tensor = torch.randn(2).to(device)
print(
f"(rank {rank}) send_tensor before: {send_tensor}\n"
f" recv_tensor before: {recv_tensor}\n",
end=""
)
send_op = dist.P2POp(dist.isend, send_tensor, (rank + 1) % world_size)
recv_op = dist.P2POp(dist.irecv, recv_tensor, (rank - 1 + world_size) % world_size)
reqs = dist.batch_isend_irecv([send_op, recv_op])
for req in reqs:
req.wait()
torch.cuda.synchronize()
print(f"(rank {rank}) recv_tensor after: {recv_tensor}\n", end="")
if __name__ == "__main__":
main()
```
(the extra `synchronize` is to fix #82450 prior to PyTorch 1.13)
`torchrun --nproc_per_node 2 ./batch_isend_irecv_gpu.py` shows
```
(rank 0) recv_tensor after: tensor([2.8026e-45, 0.0000e+00], device='cuda:0')
(rank 1) recv_tensor after: tensor([0., 0.], device='cuda:1')
```
`torchrun --nproc_per_node 8 ./batch_isend_irecv_gpu.py` shows
```
(rank 4) recv_tensor after: tensor([8.4078e-45, 0.0000e+00], device='cuda:4')
(rank 1) recv_tensor after: tensor([0., 0.], device='cuda:1')
(rank 2) recv_tensor after: tensor([2.8026e-45, 0.0000e+00], device='cuda:2')
(rank 3) recv_tensor after: tensor([5.6052e-45, 0.0000e+00], device='cuda:3')
(rank 6) recv_tensor after: tensor([1.4013e-44, 0.0000e+00], device='cuda:6')
(rank 7) recv_tensor after: tensor([1.6816e-44, 0.0000e+00], device='cuda:7')
(rank 5) recv_tensor after: tensor([1.1210e-44, 0.0000e+00], device='cuda:5')
(rank 0) recv_tensor after: tensor([1.9618e-44, 0.0000e+00], device='cuda:0')
```
due to misinterpreting int64 data as float32. NCCL doesn't check for datatype mismatches, [unlike MPI](https://clang.llvm.org/extra/clang-tidy/checks/mpi/type-mismatch.html)
### Suggest a potential alternative/fix
Simply change data type to:
```python
send_tensor = (torch.arange(2, dtype=torch.float32) + 2.0 * rank).to(device)
recv_tensor = torch.randn(2, dtype=torch.float32).to(device)
```
Then the result becomes correct:
```
(rank 0) recv_tensor after: tensor([2., 3.], device='cuda:0')
(rank 1) recv_tensor after: tensor([0., 1.], device='cuda:1')
```
```
(rank 1) recv_tensor after: tensor([0., 1.], device='cuda:1')
(rank 0) recv_tensor after: tensor([14., 15.], device='cuda:0')
(rank 7) recv_tensor after: tensor([12., 13.], device='cuda:7')
(rank 5) recv_tensor after: tensor([8., 9.], device='cuda:5')
(rank 3) recv_tensor after: tensor([4., 5.], device='cuda:3')
(rank 6) recv_tensor after: tensor([10., 11.], device='cuda:6')
(rank 4) recv_tensor after: tensor([6., 7.], device='cuda:4')
(rank 2) recv_tensor after: tensor([2., 3.], device='cuda:2')
```
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 2 |
2,488 | 102,354 |
"Examples" in "batch_isend_irecv" should be modified to get the correct results
|
oncall: distributed
|
https://github.com/pytorch/pytorch/blob/6c9b94dcda696984552769da6cf578ccca0cfbdd/torch/distributed/distributed_c10d.py#LL1709C13-L1709C13
    Examples:
        >>> # xdoctest: +SKIP("no rank")
        >>> send_tensor = torch.arange(2) + 2 * rank
        >>> recv_tensor = torch.randn(2)
        >>> send_op = dist.P2POp(dist.isend, send_tensor, (rank + 1)%world_size)
        >>> recv_op = dist.P2POp(dist.irecv, recv_tensor, (rank - 1 + world_size)%world_size)
        >>> reqs = batch_isend_irecv([send_op, recv_op])
        >>> for req in reqs:
        >>>     req.wait()
        >>> recv_tensor
Here "send_tensor" and "recv_tensor" are in different dtypes, which makes it fail to get the correct results on GPUs.
For example, one might need to add an explicit dtype declaration:
        >>> send_tensor = torch.arange(2, dtype=torch.float) + 2 * rank
        >>> recv_tensor = torch.randn(2, dtype=torch.float)
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 1 |
2,489 | 102,339 |
[Dynamo] Better graph-break message for unsupported ctx managers
|
good first issue, triaged, oncall: pt2, module: dynamo
|
### ๐ Describe the bug
Repro:
```
import torch
import torch._dynamo
import logging
import warnings
torch._logging.set_logs(dynamo=logging.DEBUG)
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
return self.layer(x)
x = torch.randn(10, 10)
m = MyModule()
print(m(x))
opt_m = torch.compile(backend="eager", fullgraph=True)(m)
print(opt_m(x))
```
Error:
```
torch._dynamo.exc.Unsupported: setattr(UserDefinedObjectVariable) <slot wrapper '__setattr__' of 'object' objects>
```
The root cause is that we can't handle the context manager, so we fall back to eager. We need to figure out a better way to handle unsupported context managers or print better logs.
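A possible workaround sketch in the meantime (this is my assumption, not a fix proposed in this issue): keep the unsupported context manager outside of the compiled region so Dynamo never has to trace it.
```python
import warnings
import torch

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        # no warnings.catch_warnings() inside the compiled forward
        return self.layer(x)

opt_m = torch.compile(backend="eager", fullgraph=True)(MyModule())
with warnings.catch_warnings():          # context manager applied around the call instead
    warnings.simplefilter("ignore")
    print(opt_m(torch.randn(10, 10)))
```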
### Versions
N/A
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov @ngimel
| 1 |
2,490 | 102,337 |
Tensors that share same underlying storage to also share gradient storage
|
module: autograd, triaged, needs research, module: python frontend
|
### ๐ The feature, motivation and pitch
Hello Pytorch Team!
I have a question concerning the way gradients are handled, in particular the grad attribute. It's a bit hard to explain, but in short I want "views" of a given parameter to have "synced" gradients with the original parameter (shared memory storage). The following script tries to mimic what I want to do.
```python
import torch
from torch import nn
def hack_to_sync_grad(param, *view_args, **view_kwargs):
with torch.no_grad():
param_view = param.view(*view_args, **view_kwargs)
# We need sliced tensor to also get the gradient in order to run optimizer on them
# TODO @thomasw21: It's really hard to make sure that our sliced view keeps the same memory space as the original gradient
def sync_grad(orig_grad):
assert orig_grad.is_contiguous()
if param_view.grad is None:
# The gradient was reset to None, we need to reset the coalesced_tensor.grad as well
param.grad = None
if param.grad is None:
param_view.grad = orig_grad.view(*view_args, **view_kwargs)
else:
assert param_view.grad is not None
# TODO @thomasw21: How do I check that it's still the same memory allocation?
# One way is to re-assign the existing pointer
param_view.grad = param.grad.view(*view_args, **view_kwargs)
return orig_grad
param.register_hook(sync_grad)
return param_view
def main():
model = nn.Linear(2,3)
weight_view = model.weight.view(-1) # This fails as it doesn't propagate gradients to this view
# weight_view = hack_to_sync_grad(model.weight, -1) # This is fairly hacky since it relies on the backward hook to sync
random_input = torch.randn(5,2)
model(random_input).sum().backward()
for name, param in model.named_parameters():
assert param.grad is not None
# My goal is such that a view of a parameters should also be able to get the "sharded" gradient
assert weight_view.grad is not None
assert weight_view.grad.untyped_storage().data_ptr() == model.weight.grad.untyped_storage().data_ptr()
if __name__ == "__main__":
main()
```
In this example I'm using `view` but I'd like to support things like slices as well. So very concretely what I'm interested in is customising a grad getter. This would allow one to have a syncing mechanism that doesn't rely on the backward hook for syncing.
Solution found as of now:
- Creating a new property of a new class:
```python
class ViewTensor(torch.Tensor):
"""We want view tensor"""
def __new__(cls, data, *view_args, **view_kwargs):
new_tensor = super().__new__(cls, data.view(*view_args, **view_kwargs))
new_tensor.orig_data = data
new_tensor.view_args = view_args
new_tensor.view_kwargs = view_kwargs
return new_tensor
@property
def grad(self):
return self.orig_data.grad.view(*self.view_args, **self.view_kwargs)
@grad.setter
def grad(self, value):
self.orig_data.grad = value.view_as(self.orig_data)
@grad.deleter
def grad(self):
del self.orig_data.grad
def use_new_tensor_class(param, *view_args, **view_kwargs):
with torch.no_grad():
param_view = ViewTensor(param, *view_args, **view_kwargs)
return param_view
```
The only issue is that property is not something we can set on an instance, hence the need to define a new class. Maybe this solution can generalize by allowing for torch.Tensor to pass grad getter/setter/deleter
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 with whom I initially discussed
### Alternatives
_No response_
### Additional context
_No response_
| 12 |
2,491 | 102,334 |
There is a memory leak in torch.load
|
module: memory usage, module: serialization, triaged
|
### ๐ Describe the bug
test code:
```python
import torch
import gc
# `@profile` is provided by memory_profiler (e.g. run the script with `python -m memory_profiler`)
@profile
def my_func():
ckpt = torch.load("best.ckpt", map_location=torch.device('cpu'))
del ckpt
gc.collect()
gc.collect()
if __name__ == '__main__':
my_func()
```

There is a memory leak in torch.load and the memory cannot be freed.
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Aug 5 2022, 15:21:02) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-12.6-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==0.9.2
[conda] Could not collect
cc @mruberry @mikaylagawarecki
| 10 |
2,492 | 102,333 |
transformer encoder-layer, the sample-Independent attn_mask(dim=3) has different behaviors when training and validating
|
triaged, oncall: transformer/mha
|
in torch.nn.modules.transformer
line 469 TransformerEncoderLayer.forward()
https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/transformer.py#L500
it supports attn_mask with dim=3, whose first dim represents the different samples in the batch;
however, inside this function (at line 543) it invokes attention.merge_masks(), which only supports attn_mask with dim=2 and does nothing for a raw input attn_mask with dim=3.
https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/transformer.py#L588
**encoderlayer.forward(): the input arg _src_mask_ is described in Transformer.forward(), which says:**
```
def forward(self, src: Tensor, tgt: Tensor, src_mask: Optional[Tensor] = None, tgt_mask: Optional[Tensor] = None,
memory_mask: Optional[Tensor] = None, src_key_padding_mask: Optional[Tensor] = None,
tgt_key_padding_mask: Optional[Tensor] = None, memory_key_padding_mask: Optional[Tensor] = None) -> Tensor:
r"""Take in and process masked source/target sequences.
Args:
src: the sequence to the encoder (required).
tgt: the sequence to the decoder (required).
src_mask: the additive mask for the src sequence (optional).
tgt_mask: the additive mask for the tgt sequence (optional).
memory_mask: the additive mask for the encoder output (optional).
src_key_padding_mask: the Tensor mask for src keys per batch (optional).
tgt_key_padding_mask: the Tensor mask for tgt keys per batch (optional).
memory_key_padding_mask: the Tensor mask for memory keys per batch (optional).
Shape:
- src: :math:`(S, E)` for unbatched input, :math:`(S, N, E)` if `batch_first=False` or
`(N, S, E)` if `batch_first=True`.
- tgt: :math:`(T, E)` for unbatched input, :math:`(T, N, E)` if `batch_first=False` or
`(N, T, E)` if `batch_first=True`.
- src_mask: :math:`(S, S)` or :math:`(N\cdot\text{num\_heads}, S, S)`.   <-- (note this line)
- tgt_mask: :math:`(T, T)` or :math:`(N\cdot\text{num\_heads}, T, T)`.
- memory_mask: :math:`(T, S)`.
- src_key_padding_mask: :math:`(S)` for unbatched input otherwise :math:`(N, S)`.
- tgt_key_padding_mask: :math:`(T)` for unbatched input otherwise :math:`(N, T)`.
- memory_key_padding_mask: :math:`(S)` for unbatched input otherwise :math:`(N, S)`.
Note: [src/tgt/memory]_mask ensures that position i is allowed to attend the unmasked
positions. If a BoolTensor is provided, positions with ``True``
are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
is provided, it will be added to the attention weight.
[src/tgt/memory]_key_padding_mask provides specified elements in the key to be ignored by
the attention. If a BoolTensor is provided, the positions with the
value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
- output: :math:`(T, E)` for unbatched input, :math:`(T, N, E)` if `batch_first=False` or
`(N, T, E)` if `batch_first=True`.
Note: Due to the multi-head attention architecture in the transformer model,
the output sequence length of a transformer is same as the input sequence
(i.e. target) length of the decoder.
where S is the source sequence length, T is the target sequence length, N is the
batch size, E is the feature number
"""
```
**the invocation point:**
```
if not why_not_sparsity_fast_path:
merged_mask, mask_type = self.self_attn.merge_masks(src_mask, src_key_padding_mask, src)
```
**the definition:**
```
def merge_masks(self, attn_mask: Optional[Tensor], key_padding_mask: Optional[Tensor],
query: Tensor) -> Tuple[Optional[Tensor], Optional[int]]:
r"""
Determine mask type and combine masks if necessary. If only one mask is provided, that mask
and the corresponding mask type will be returned. If both masks are provided, they will be both
expanded to shape ``(batch_size, num_heads, seq_len, seq_len)``, combined with logical ``or``
and mask type 2 will be returned
Args:
attn_mask: attention mask of shape ``(seq_len, seq_len)``, mask type 0
key_padding_mask: padding mask of shape ``(batch_size, seq_len)``, mask type 1
query: query embeddings of shape ``(batch_size, seq_len, embed_dim)``
Returns:
merged_mask: merged mask
mask_type: merged mask type (0, 1, or 2)
"""
```
The code logic is simple enough that I did not include a use case; please start with torch.nn.modules.transformer, line 469 to line 543.
I briefly checked the latest version; the bug probably still exists.
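A minimal sketch of the call pattern being discussed; the shapes are assumptions (batch N=4, sequence length S=5, embedding E=8, 2 heads):
```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
src = torch.randn(4, 5, 8)

# per-sample (dim=3) additive mask of shape (N * num_heads, S, S), as documented for src_mask
src_mask = torch.zeros(4 * 2, 5, 5)

# training-mode path: the 3-dim mask goes through the regular attention code
layer.train()
out = layer(src, src_mask=src_mask)
print(out.shape)

# switching to layer.eval() is where the fast path reaches merge_masks() (line 543),
# which, per this report, only accounts for dim=2 masks
```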
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.25.1
Libc version: N/A
Python version: 3.9.16 (main, Mar 8 2023, 04:29:24) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[conda] numpy 1.23.5 py39h1398885_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy-base 1.23.5 py39h90707a3_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] pytorch 2.0.0 py3.9_0 pytorch
[conda] torchaudio 2.0.0 py39_cpu pytorch
[conda] torchvision 0.15.0 py39_cpu pytorch
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 0 |
2,493 | 102,319 |
Re-enable Low Memory Dropout
|
triaged, oncall: pt2, module: inductor
|
### ๐ Describe the bug
https://github.com/pytorch/pytorch/pull/100064 caused a 3% perf drop and a 4% memory regression in HF training because low-memory dropout was disabled
### Versions
master
cc @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10
| 1 |
2,494 | 102,285 |
[Dynamo]Outdated logging setting
|
module: docs, triaged, oncall: pt2
|
### ๐ The doc issue
In https://pytorch.org/docs/stable/dynamo/troubleshooting.html, it says using
torch._dynamo.config.log_level = logging.INFO
to enable debug/info logging, which no longer works.
### Suggest a potential alternative/fix
Per https://github.com/pytorch/pytorch/blob/0833f475ce7e42b1dd11af577f276b804fc2b158/torch/_dynamo/config.py#L12
we should use torch._logging.set_logs(dynamo=<level>)
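For example, a short sketch of the replacement call (assuming INFO-level output is what is wanted):
```python
import logging
import torch

# new-style logging configuration; replaces torch._dynamo.config.log_level
torch._logging.set_logs(dynamo=logging.INFO)
```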
cc @svekars @carljparker @ezyang @msaroufim @wconstab @ngimel @bdhirsh @anijain2305
| 1 |
2,495 | 102,272 |
after add /path_to_libtorch/libtorch/lib to LD_LIBRARY_PATH, I can't import torch_scatter.
|
module: build, module: cpp, module: abi, triaged
|
### ๐ Describe the bug
To compile and use libtorch, I add libtorch/lib to $LD_LIBRARY_PATH. However, this causes an error when I import torch_scatter. The following is the error:
>>> import torch_scatter
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/torch_scatter/__init__.py", line 16, in <module>
torch.ops.load_library(spec.origin)
File "/usr/lib/python3.8/site-packages/torch/_ops.py", line 643, in load_library
ctypes.CDLL(path)
File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/python3.8/site-packages/torch_scatter/_version_cuda.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs
It looks like there is a conflict.
Once I remove "/path_to_libtorch/libtorch/lib" from LD_LIBRARY_PATH, the problem disappears. However, I am then not able to use libtorch anymore.
### Versions
python 3.8
torch == 2.0.0+cu117
torchvision==0.15.1+cu117
torchaudio==2.0.1+cu117
cc @malfet @seemethere @jbschlosser
| 1 |
2,496 | 102,269 |
Import of torch breaks standard multiprocessing
|
module: dependency bug, module: multiprocessing, triaged, module: openmp
|
### ๐ Describe the bug
## Instructions To Reproduce the Issue:
When importing torch on my laptop, multiprocessing breaks for some reason. I cannot reproduce this issue on another device, so it may be a combination with the OS or hardware. But when this import is included, all spun-up processes run on CPU cores 0 and 1. When this import is removed, all CPU cores are used again. I first noticed this issue when training became extremely slow due to data not loading.
Minimal install:
```
conda create -n test pip tqdm
conda activate test
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```
Minimal code for multiprocessing:
```py
import os
from multiprocessing import Pool
import random
from tqdm import tqdm
import torch
def heavy(x):
return [random.randint(1, 100) * x for _ in range(10000000)]
with Pool(os.cpu_count()) as pool:
_ = list(tqdm(pool.imap_unordered(heavy, list(range(100))), total=100))
```
While running the above code, the following command gives these outputs with and without the import:
```
ps -C python -o %cpu,%mem,cmd,psr
```
with:
```
%CPU %MEM CMD PSR
0.0 0.0 /home/stefan/.mambaforge/en 0
0.0 0.0 /home/stefan/.mambaforge/en 14
7.1 0.5 python test.py 6
11.0 0.3 python test.py 1
10.7 0.3 python test.py 1
10.7 0.3 python test.py 1
10.7 0.3 python test.py 1
10.6 0.3 python test.py 1
10.8 0.3 python test.py 1
10.6 0.3 python test.py 0
10.6 0.3 python test.py 0
10.7 0.3 python test.py 1
10.7 0.3 python test.py 1
10.7 0.3 python test.py 1
10.6 0.3 python test.py 0
10.5 0.3 python test.py 0
10.6 0.3 python test.py 1
10.6 0.3 python test.py 0
10.6 0.3 python test.py 0
10.6 0.3 python test.py 0
10.5 0.3 python test.py 0
10.7 0.3 python test.py 0
10.5 0.3 python test.py 0
```
without:
```
%CPU %MEM CMD PSR
0.0 0.0 /home/stefan/.mambaforge/en 0
0.0 0.0 /home/stefan/.mambaforge/en 14
64.5 9.9 python test.py 11
78.8 0.3 python test.py 18
88.7 0.5 python test.py 16
90.4 0.5 python test.py 12
87.7 0.4 python test.py 15
81.2 0.5 python test.py 6
83.0 0.4 python test.py 17
74.1 0.6 python test.py 10
76.5 0.0 python test.py 14
89.7 0.6 python test.py 14
80.8 0.1 python test.py 7
72.4 0.6 python test.py 11
85.7 0.1 python test.py 5
82.8 0.6 python test.py 13
72.2 0.1 python test.py 13
86.0 0.5 python test.py 4
80.6 0.5 python test.py 2
78.3 0.4 python test.py 8
74.5 0.3 python test.py 10
76.4 0.4 python test.py 0
71.2 0.1 python test.py 19
```
If it's an interaction with the OS, please let me know what other logs I need to provide.
## Expected behavior:
I would expect the import to have no effect on multiprocessing, especially since nothing is done with torch. I would expect it to behave the same as on my other devices that don't have this issue. I'm mainly trying to find what difference in setup/architecture is causing this weird behavior.
Thanks in advance for any help
Possible duplicate of: https://github.com/pytorch/pytorch/issues/101850, but also breaks normal multiprocessing
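One cheap check that might narrow this down (my assumption is that the CPU affinity mask itself is being narrowed at import time, which child processes then inherit) is to print the process affinity before and after the import:
```python
import os

print("before:", os.sched_getaffinity(0))  # expect the full set of CPU ids
import torch
print("after: ", os.sched_getaffinity(0))  # if this shrinks to {0, 1}, the import changed the affinity mask
```
Worker processes spawned by `Pool` inherit the parent's affinity, which would explain every worker piling onto cores 0 and 1.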
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 08:57:19) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 5000,0000
CPU min MHz: 400,0000
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11,5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.24.3 py311h64a7726_0 conda-forge
[conda] pytorch 2.0.1 py3.11_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py311_cu117 pytorch
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.15.2 py311_cu117 pytorch
```
cc @VitalyFedyunin
| 12 |
2,497 | 102,261 |
ExponentialLR unexpectedly calls `step()` when init argument `last_epoch` is larger than -1
|
module: optimizer, triaged, actionable, module: LrScheduler
|
### ๐ Describe the bug
Currently, the init function of ``torch.optim.lr_scheduler._LRScheduler`` calls ``self.step()`` once. This causes a mismatch between the learning rate used by the optimizer and the closed_form_lr of ExponentialLR when the init argument `last_epoch` is larger than -1.
```python
import torch
model = torch.nn.Linear(3, 3)
optim = torch.optim.AdamW(model.parameters())
sched = torch.optim.lr_scheduler.ExponentialLR(optim, gamma=0.999)
optim.step()
sched.step()
print("Optim & sched:")
print(optim.state_dict())
print(optim.param_groups[0]["lr"])
print(sched.state_dict())
print(sched._get_closed_form_lr())
print("")
# As if we are restoring from a checkpoint
optim2 = torch.optim.AdamW(model.parameters())
optim2.load_state_dict(optim.state_dict())
# Init scheduler with last_epoch=0
sched2 = torch.optim.lr_scheduler.ExponentialLR(optim2, gamma=0.999, last_epoch=0)
print("Optim2 & sched2:")
print(optim2.state_dict())
print(optim2.param_groups[0]["lr"])
print(sched2.state_dict())
print(sched2._get_closed_form_lr())
print("")
```
As the result shows, ``optim2`` has lr 0.000998001, but the closed form lr of ``sched2`` is 0.000999. This behavior causes confusion and inconsistency when one resumes training from a checkpoint.
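A sketch of a resume pattern that avoids the mismatch (this is a workaround I would use, not necessarily the intended fix): create the scheduler with the default `last_epoch=-1` and then restore it from its own `state_dict`, so the extra `step()` in `__init__` is not applied on top of the restored state.
```python
# continuing from the snippet above
optim3 = torch.optim.AdamW(model.parameters())
optim3.load_state_dict(optim.state_dict())
sched3 = torch.optim.lr_scheduler.ExponentialLR(optim3, gamma=0.999)  # default last_epoch=-1
sched3.load_state_dict(sched.state_dict())                            # restore scheduler state afterwards
print(optim3.param_groups[0]["lr"])   # ~0.000999
print(sched3._get_closed_form_lr())   # ~[0.000999], now consistent
```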
### Versions
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| 1 |
2,498 | 102,227 |
lintrunner should fail on badly formatted docstrings
|
triaged, enhancement, oncall: pt2, module: devx
|
As I was working on this PR https://github.com/pytorch/pytorch/pull/102182 and introducing lots of new docstrings, lintrunner didn't care about the badly formatted docstrings, but the doc test failed.
It feels like lintrunner should have failed on this too:
https://github.com/pytorch/pytorch/actions/runs/5074217066/jobs/9114389856?pr=102182
```
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.allow_in_graph:6: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.allow_in_graph:25: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.is_enabled:5: WARNING: Block quote ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.allow_in_graph:42: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.list_backends:9: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.list_backends:18: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:docstring of torch._dynamo.eval_frame.OptimizedModule:1: WARNING: Inline emphasis start-string without end-string.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:docstring of torch._dynamo.eval_frame.OptimizedModule:1: WARNING: Inline strong start-string without end-string.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.explain:13: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.explain:24: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.is_enabled:4: WARNING: Block quote ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.disable:5: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py:docstring of torch._dynamo.eval_frame.OptimizedModule:1: WARNING: Block quote ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.disable:14: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.compile:1: WARNING: Block quote ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.disable:22: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.is_enabled:2: WARNING: Block quote ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.disable:34: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.compile:1: WARNING: Block quote ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.list_mode_options:7: ERROR: Unexpected indentation.
/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/compiler/__init__.py:docstring of torch.compiler.list_mode_options:11: ERROR: Unexpected indentation.
```
cc @ezyang @wconstab @ngimel @bdhirsh @anijain2305 @ZainRizvi @kit1980 @huydhn @clee2000
| 6 |
2,499 | 102,211 |
[ONNX] test_op_consistency.py doesn't support constant inputs
|
module: onnx, triaged, onnx-triaged
|
### ๐ Describe the bug
The current op test design in the TorchScript exporter assumes all inputs/args are model inputs, so the JIT tracer traces them all even if they are not tensors.
```python
class SingleOpModel(torch.nn.Module):
"""Test model to wrap around a single op for export."""
def __init__(self, op, kwargs):
super().__init__()
self.operator = op
self.kwargs = kwargs
def forward(self, *args):
return self.operator(*args, **self.kwargs)
```
This results in issues for ops like `scatter_reduce`, which has `dim` in its arguments. Usually, it is used like:
```python
input.scatter_reduce(0, index, x, reduce='amax')
```
So `dim` wouldn't be traced. However, if it is passed in as a model input, it will be traced, and inside the symbolic function we will not get a prim::Constant for `dim`.
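For reference, a small runnable sketch of the usual call pattern, where `dim` is a plain Python int and so ends up as a constant in the traced graph (the concrete tensors are made up):
```python
import torch

x = torch.zeros(3)
index = torch.tensor([0, 1, 1])
src = torch.tensor([1.0, 2.0, 3.0])

# dim=0 is a Python constant here, not a model input
out = x.scatter_reduce(0, index, src, reduce="amax")
print(out)  # tensor([1., 3., 0.])
```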
cc @justinchuby
### Versions
Nightly
| 0 |
2,500 | 102,207 |
skipIfTorchInductor Tracking Issue
|
good first issue, triaged, oncall: pt2, module: inductor
|
### ๐ Describe the bug
We have a long list of known failures in test_torch.py.
./test/test_torch.py:
- [x] 'test_scalar_check' - "segfaults"
- [x] 'test_conv_transposed_backward_agnostic_to_memory_format' - "Please convert all Tensors to FakeTensors"
- [ ] 'test_sync_warning' - "FIXME"
- [x] 'test_copy_transpose_math_view' - https://github.com/pytorch/pytorch/issues/110291
- [x] 'test_errors_index_copy' - "FIXME"
- [x] 'test_scatter_reduce_non_unique_index' - "FIXME"
- [x] 'test_dim_function_empty' - "RuntimeError: Trying to create tensor with negative dimension -1: [-1]" - https://github.com/pytorch/pytorch/pull/106994
- [x] 'test_ternary_op_mem_overlap' - "FIXME"
- [x] 'test_multinomial_device_constrain' - "out_wrapper does not check devices correctly"
- [x] 'test_multinomial_gpu_device_constrain' - "out_wrapper does not check devices correctly"
- [x] 'test_memory_format_propagation_rules' - "To be supported"
- [x] 'test_memory_format_empty_like' - "To be supported"
- [x] 'test_memory_format_operators' - "To be supported"
- [x] 'test_pin_memory_from_constructor' - "pin_memory isn't yet supported in TorchInductor"
- [x] 'test_memory_format_to' - "To be supported"
- [x] 'test_memory_format_type' - "To be supported"
- [x] 'test_memory_format_clone' - "To be supported"
- [x] 'test_memory_format_factory_like_functions_preserve' - "To be supported" (@masnesral)
- [x] 'test_memory_format_type_shortcuts' - "To be supported"
- [x] 'test_memory_format_cpu_and_cuda_ops' - "To be supported"
- [ ] 'test_hook_remove' - "FIXME"
- [x] 'test_assertRaisesRegex_ignore_msg_non_native_device' - "random_.from needs to be renamed"
- [x] 'test_advancedindex_mixed_cpu_devices' - "FIXME"
- [x] 'test_advancedindex_mixed_devices_error' - "FIXME"
- [ ] #108181 and https://github.com/pytorch/pytorch/issues/108798
- [ ] 'test_typed_storage_deprecation_warning' - "FIXME"
- [x] 'test_pin_memory' - "pin_memory isn't yet supported in TorchInductor"
- [x] 'test_memory_format' - "To be supported"
- [x] 'test_upsample_nearest2d_meta' - "https://github.com/pytorch/pytorch/issues/97414"
- [x] 'test_copy_broadcast' - "FIXME"
- [x] 'test_copy_many_to_one' - https://github.com/pytorch/pytorch/pull/108989
- [x] 'test_copy_float16' - "FIXME"
- [x] 'test_to' - "FIXME"
- [x] 'test_slice' - "FIXME"
All the non-deterministic tests are low priority, but they might be easy enough to fix:
- [ ] (one issue)
'test_nondeterministic_alert_AvgPool3d' - "aot-autograd issue"
'test_nondeterministic_alert_AdaptiveAvgPool2d' - "aot-autograd issue"
'test_nondeterministic_alert_AdaptiveAvgPool3d' - "aot-autograd issue"
'test_nondeterministic_alert_MaxPool3d' - "aot-autograd issue"
'test_nondeterministic_alert_AdaptiveMaxPool2d' - "aot-autograd issue"
'test_nondeterministic_alert_FractionalMaxPool2d' - "aot-autograd issue"
'test_nondeterministic_alert_FractionalMaxPool3d' - "aot-autograd issue"
'test_nondeterministic_alert_interpolate_linear' - "aot-autograd issue"
'test_nondeterministic_alert_interpolate_bilinear' - "aot-autograd issue"
'test_deterministic_interpolate_bilinear' - "aot-autograd issue"
'test_nondeterministic_alert_interpolate_bicubic' - "aot-autograd issue"
'test_nondeterministic_alert_interpolate_trilinear' - "aot-autograd issue"
'test_nondeterministic_alert_ReflectionPad1d' - "aot-autograd issue"
'test_nondeterministic_alert_ReflectionPad2d' - "aot-autograd issue"
'test_nondeterministic_alert_ReflectionPad3d' - "aot-autograd issue"
'test_nondeterministic_alert_ReplicationPad1d' - "aot-autograd issue"
'test_nondeterministic_alert_ReplicationPad2d' - "aot-autograd issue"
'test_nondeterministic_alert_ReplicationPad3d' - "aot-autograd issue"
'test_nondeterministic_alert_CTCLoss' - "aot-autograd issue"
'test_nondeterministic_alert_EmbeddingBag_max' - "aot-autograd issue"
'test_nondeterministic_alert_cumsum' - "aot-autograd issue"
'test_nondeterministic_alert_put_accumulate' - "warning is logged from the FallbackKernel: torch.ops.aten.put_.default when warn_only=True"
'test_nondeterministic_alert_grid_sample_2d' - "aot-autograd issue"
'test_nondeterministic_alert_grid_sample_3d' - "aot-autograd issue"
./test/test_modules.py:
- [x] 'test_cpu_gpu_parity' - "to be fixed"
- [x] 'test_memory_format' - "to be fixed"
- [x] 'test_non_contiguous_tensors' - "to be fixed"
These are lower priority than the failures in:
./test/test_ops.py:
- [x] 'test_out_warning' - "rng mismatch"
- [x] 'test_out' - "rng mismatch"
- [x] 'test_variant_consistency_eager' - "complex error"
- [x] 'test_complex_half_reference_testing' - "complex error"
- [x] 'test_non_standard_bool_values' - "Inductor does not support view with dtype yet"
The root cause of the following is https://github.com/pytorch/pytorch/issues/107861#issuecomment-1708955469
- [x] 'test_conj_view' - "complex error" (fails with aot_eager) (@int3)
- [x] 'test_neg_view' - "grad error " (fails with aot_eager)
- [x] 'test_neg_conj_view' - "grad error" (fails with aot_eager)
Tests that take too long (defer for now; eventually find a way to run them periodically)
Covered by torchinductor opinfo tests
- [x] 'test_python_ref_meta' - "Takes too long for inductor" (skip or find way to run periodically)
- [x] 'test_python_ref' - "Takes too long for inductor" (skip or find way to run periodically)
- [x] 'test_python_ref_torch_fallback' - "Takes too long for inductor" (skip or find way to run periodically)
- [x] 'test_python_ref_executor' - "Takes too long for inductor" (skip or find way to run periodically)
- [x] 'test_python_ref_errors' - "Takes too long for inductor" (skip or find way to run periodically)
./test/test_ops_fwd_gradients.py:
- [ ] 'test_inplace_forward_mode_AD' - "to be fixed" (lower pri)
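As referenced above, the entries in these lists correspond to tests carrying a skip decorator roughly like the sketch below. This is a minimal illustration only: it assumes `skipIfTorchInductor` is importable from `torch.testing._internal.common_utils` alongside the other skip helpers, and the test body is made up.

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests, skipIfTorchInductor


class TestExample(TestCase):
    # The string is the reason recorded next to each entry above, e.g. "FIXME".
    @skipIfTorchInductor("FIXME")
    def test_copy_float16(self):
        src = torch.randn(4, dtype=torch.float16)
        dst = torch.empty(4, dtype=torch.float32)
        dst.copy_(src)                      # cross-dtype copy_ is allowed
        self.assertEqual(dst, src.to(torch.float32))


if __name__ == "__main__":
    run_tests()
```

Fixing an entry typically means resolving the underlying failure and then removing (or narrowing) that decorator so the test runs under inductor again.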
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
| 1 |