Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---
501 | 110,669 |
Backward pass for Nested Tensors using flash attention in sdpa fails
|
triaged, module: nestedtensor, oncall: transformer/mha, module: multi-headed-attention
|
### 🐛 Describe the bug
I am trying to use Flash Attention in PyTorch via torch.backends.cuda.sdp_kernel with nested tensors. Here is a minimal example:
```python
import random
import torch
import math

class TransformerModel(torch.nn.Module):
    def __init__(self, num_layers, model_dim, ff_dim, num_heads):
        super().__init__()
        self.layers = torch.nn.ModuleList([FastTransformerLayer(model_dim, ff_dim, num_heads) for i in range(num_layers)])

    def forward(self, x):
        for l in self.layers:
            x = l(x)
        x = x.to_padded_tensor(padding=0)
        lens = (x!=0).sum(axis=1)[:,0]
        mean_x = x.sum(axis=1)/lens[:,None]
        return mean_x

class FastMHA(torch.nn.Module):
    def __init__(self, model_dim, num_heads):
        super().__init__()
        self.Q = torch.nn.Linear(model_dim, model_dim)
        self.K = torch.nn.Linear(model_dim, model_dim)
        self.V = torch.nn.Linear(model_dim, model_dim)
        self.O = torch.nn.Linear(model_dim, model_dim)
        self.num_heads = num_heads
        self.head_dim = model_dim // num_heads

    def forward(self, q, k, v):
        bsz, dim = q.size(0), q.size(2)
        q, k, v = self.Q(q), self.K(k), self.V(v)
        q = q.view(bsz,-1,self.num_heads,self.head_dim).transpose(1,2)
        k = k.view(bsz,-1,self.num_heads,self.head_dim).transpose(1,2)
        v = v.view(bsz,-1,self.num_heads,self.head_dim).transpose(1,2)
        with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
            attn_output = torch.nn.functional.scaled_dot_product_attention(q,k,v)
        attn_output = attn_output.transpose(1,2).view(bsz,-1,dim)
        attn_output = self.O(attn_output)
        return attn_output

class FastTransformerLayer(torch.nn.Module):
    def __init__(self, model_dim, ff_dim, num_heads):
        super().__init__()
        self.mha = FastMHA(model_dim, num_heads)
        self.ff = torch.nn.Sequential(torch.nn.Linear(model_dim, ff_dim), torch.nn.ReLU(), torch.nn.Linear(ff_dim, model_dim))
        self.ln1 = torch.nn.LayerNorm(model_dim)
        self.ln2 = torch.nn.LayerNorm(model_dim)

    def forward(self, x):
        x_att = self.mha(x, x, x)
        x = self.ln1(x + x_att)
        x_ff = self.ff(x)
        x = self.ln2(x + x_ff)
        return x

D = 768
BS = 16
MIN_SEQLEN = 50
MAX_SEQLEN = 300
FFDIM = 2048
N_STEPS = 1000

model = TransformerModel(num_layers=4, model_dim=D, ff_dim=FFDIM, num_heads=12)
model.to(device='cuda:0', dtype=torch.float16)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=0.0001,
                              betas=(0.9,0.95),
                              weight_decay=0.05)

for t in range(N_STEPS):
    xlens = torch.randint(low=MIN_SEQLEN, high=MAX_SEQLEN, size=(BS,), device='cuda:0')
    x = torch.nested.nested_tensor([torch.randn((xl,D)) for xl in xlens]).to(device='cuda:0', dtype=torch.float16)
    y = torch.randn((BS,D), device=x.device, dtype=x.dtype)
    y_pred = model(x)
    loss = criterion(y_pred, y)
    print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
This is the error I get:
```
Traceback (most recent call last):
File "minimal_train.py", line 84, in <module>
loss.backward()
File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention_backward' with arguments from the 'NestedTensorCUDA' backend.
This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build).
If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions.
'aten::_scaled_dot_product_flash_attention_backward' is only available for these backends: [CPU, CUDA, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:31188 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:44143 [kernel]
Meta: registered at /dev/null:219 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:16976 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_4.cpp:13056 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:157 [backend fallback]
```
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2200.000
CPU max MHz: 3800.0000
CPU min MHz: 2200.0000
BogoMIPS: 7586.00
Virtualization: AMD-V
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.9.post0
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.1.2
[pip3] torchvision==0.15.2
[pip3] triton==2.1.0
[conda] Could not collect
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @erichan1 @mikaylagawarecki
| 1 |
502 | 110,664 |
[sparse] Shape mismatch when doing matmul with semi-structured sparse and non-contiguous dense input
|
module: sparse, triaged
|
### 🐛 Describe the bug
When doing a semi-structured sparse @ dense matmul, we sometimes see a shape error because PyTorch calculates the shape of the returned matrix incorrectly. It appears to be off by a factor of 4.
This only happens when we run on non-contiguous inputs, which occurs when we have an MLP layer and we sparsify both layers.
```
======================================================================
ERROR: test_mlp_backend_cutlass_cuda (__main__.TestSparseSemiStructuredCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2388, in wrapper
method(*args, **kwargs)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2388, in wrapper
method(*args, **kwargs)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 428, in instantiated_test
raise rte
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 415, in instantiated_test
result = test(self, **param_kwargs)
File "/home/jessecai/local/AA/pytorch/test/test_sparse_semi_structured.py", line 306, in test_mlp
sparse_result = model(input)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/nn/modules/container.py", line 215, in forward
input = module(input)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jessecai/local/miniconda3/envs/pt-tutorial/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias)
RuntimeError: shape '[64, 768, 3072]' is invalid for input of size 37748736
To execute this test, run the following from the base repo dir:
python test/test_sparse_semi_structured.py -k test_mlp_backend_cutlass_cuda
```
Code to reproduce:
```python
import torch
import torch.nn as nn
from torch.sparse import SparseSemiStructuredTensor, to_sparse_semi_structured

# Assumptions for a standalone run: rand_sparse_semi_structured_mask is the
# helper defined in test/test_sparse_semi_structured.py, and the repro targets
# the CUTLASS backend on a CUDA device.
device = "cuda"
backend = "cutlass"

SparseSemiStructuredTensor._FORCE_CUTLASS = backend == "cutlass"
input = torch.rand(64, 768, 768, device=device).half()
model = (
    nn.Sequential(
        nn.Linear(768, 3072),
        nn.Linear(3072, 768),
    )
    .half()
    .to(device)
)
for i in range(2):
    m, n = model[i].weight.shape
    mask = rand_sparse_semi_structured_mask(
        m, n, device=device, dtype=torch.bool
    )
    # set masked weight
    model[i].weight = nn.Parameter(model[i].weight * mask)

dense_result = model(input)

for i in range(2):
    model[i].weight = nn.Parameter(to_sparse_semi_structured(model[i].weight))

sparse_result = model(input)

assert torch.allclose(dense_result, sparse_result, rtol=1e-3, atol=1e-3)
```
### Versions
Collecting environment information...
PyTorch version: 2.2.0a0+git50054b1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
Clang version: 16.0.6 (Red Hat 16.0.6-1.el9)
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.1
/usr/lib64/libcudnn_adv_infer.so.8.8.1
/usr/lib64/libcudnn_adv_train.so.8.8.1
/usr/lib64/libcudnn_cnn_infer.so.8.8.1
/usr/lib64/libcudnn_cnn_train.so.8.8.1
/usr/lib64/libcudnn_ops_infer.so.8.8.1
/usr/lib64/libcudnn_ops_train.so.8.8.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] botorch==0.9.2
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] gpytorch==1.11
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] pytorch-lightning==2.0.9
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.1.0+cu121
[pip3] torchaudio==2.1.0+cu121
[pip3] torchdata==0.6.1
[pip3] torchmetrics==1.1.2
[pip3] torchmultimodal-nightly==2023.8.31
[pip3] torchrl==0.1.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.16.0+cu121
[pip3] torchx==0.5.0
[pip3] triton==2.1.0
[pip3] triton-nightly==2.1.0.dev20230822000928
[conda] blas 1.0 mkl
[conda] botorch 0.9.2 pypi_0 pypi
[conda] cpuonly 2.0 0 pytorch-test
[conda] gpytorch 1.11 pypi_0 pypi
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-include 2023.1.0 h06a4308_46343
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch-test
[conda] pytorch-lightning 2.0.9 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch-test
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] pytorch-triton 2.1.0+6e4932cda8 pypi_0 pypi
[conda] torch 2.1.0+cu121 pypi_0 pypi
[conda] torchaudio 2.1.0+cu121 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 1.1.2 pypi_0 pypi
[conda] torchmultimodal-nightly 2023.8.31 pypi_0 pypi
[conda] torchrl 0.1.1 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.16.0+cu121 pypi_0 pypi
[conda] torchx 0.5.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
[conda] triton-nightly 2.1.0.dev20230822000928 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
503 | 110,662 |
[WIP][DDP] Use compiled_autograd to trace DDP backward allreduce
|
release notes: distributed (c10d), module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110662
* #109647
Use compiled_autograd to trace DDP backward allreduce.
Differential Revision: [D49428482](https://our.internmc.facebook.com/intern/diff/D49428482/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
504 | 110,656 |
[ROCm] Properly set atol/rtol settings for test_Conv2d_groups tests
|
module: rocm, triaged, open source, ciflow/trunk, topic: not user facing, rocm
|
This test was using the default 1e-5 atol even for torch.half and torch.bfloat16 data types, which is impractical for half/bfloat16 hardware like MI200 and Navi 2x/3x.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 6 |
505 | 110,649 |
opinfo split is confusing
|
triaged, module: testing
|
I'm looking for `_refs.fft.ifft`. It's not in common_method_invocations.py (which has a lot of the _refs OpInfos). It's not in `opinfo/refs.py`. I found it in `opinfo/definitions/fft.py`.
Can we assert that the OpInfos are consistently grouped by file?
| 1 |
506 | 110,643 |
Pytorch 2.1.0 CUDA 12.x docker image missing
|
oncall: releng, triaged, module: docker
|
### 🚀 The feature, motivation and pitch
Please add CUDA 12.x official images.
### Alternatives
_No response_
### Additional context
https://github.com/pytorch/pytorch/issues/91122#issuecomment-1749534410
| 5 |
507 | 110,642 |
Raise TypeErrors if Tensor cannot be cast to scalar
|
release notes: python_frontend, topic: bc breaking
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110642
* #110619
* #110618
From Python documentation on [built-in exceptions](https://docs.python.org/3/library/exceptions.html#TypeError):
> Passing arguments of the wrong type (e.g. passing a [list](https://docs.python.org/3/library/stdtypes.html#list) when an [int](https://docs.python.org/3/library/functions.html#int) is expected) should result in a [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError), but passing arguments with the wrong value (e.g. a number outside expected boundaries) should result in a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError)
which should result in the following behavior:
```python
>>> float(torch.tensor([1, 2]))
TypeError: only one element tensors can be converted to Python scalars
>>> torch.tensor([1,2]).item()
TypeError: a Tensor with 2 elements cannot be converted to Scalar
>>> float(torch.tensor(1.5j))
ValueError: value cannot be converted to type double without overflow
```
Tested by:
- `test_errors_masked_fill` and `test_errors_item` in `test_ops`
- `test_interpolate_undefined_behavior_casting` in `test_nn.py`
- `test__complex__should_not_work` in `numpy_tests/core/test_multiarray.py`
- `test_simple_scalar_cast` and `test_randperm` in `test_tensor_creation_ops.py`
- `test_pointwise_op_with_tensor_of_scalarlist_overload` in `test_foreach.py`
- [`aten/src/ATen/test/scalar_test.cpp`](https://github.com/pytorch/pytorch/blob/261cae793a19d91fda0d44d2c2bd8cd7e9a0b93f/aten/src/ATen/test/scalar_test.cpp#L58)
Fixes https://github.com/pytorch/pytorch/issues/110605
| 5 |
508 | 110,641 |
`pytest test/dynamo -v ` fails locally
|
high priority, triage review, oncall: pt2
|
This fails locally for me (and for @voznesenskym, so it's probably not just us). We're not sure why CI is green. The number of failures is not deterministic; I get between 10 and 150 each time I run this.
Possibly related:
- https://github.com/pytorch/pytorch/issues/103213
Example failures:

cc @ezyang @gchanan @kadeng @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
509 | 110,636 |
[discussion] Have PyTorch functions support python scalars (like NumPy) + introduce convenience constants like `torch.pi` and `torch.e`
|
triaged, module: numpy, module: python frontend
|
### 🚀 The feature, motivation and pitch
OP: https://github.com/pytorch/pytorch/pull/110351#issuecomment-1742016370
As an example, it would be nice to have `torch.sqrt(python scalar) -> python scalar` without having to dispatch between `torch.sqrt` and `math.sqrt`, and to enable a bit more polymorphic code.
Another idea is to also have `torch.pi` and other constants (like NumPy), in order to avoid importing numpy or math just to get these constants.
Please close this if it's a duplicate. I tried to search for similar previous discussions, but the keywords are a bit too generic :(
For torch.sqrt specifically, the polymorphic equivalent currently exists: `x ** 0.5`, which works for both tensor and Python scalar inputs. But it would be useful to have this behavior for many (at least the simple) functions like torch.exp and so forth.
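For illustration, this is the kind of type-dispatch shim user code ends up writing today to get that polymorphism; the `sqrt` helper below is hypothetical, not an existing torch API:
```python
import math
import torch

def sqrt(x):
    # Hypothetical helper: route tensors through torch and plain Python
    # numbers through math, which is the boilerplate this request would remove.
    if isinstance(x, torch.Tensor):
        return torch.sqrt(x)
    return math.sqrt(x)

print(sqrt(2.0))                # 1.4142135623730951 (Python float in, float out)
print(sqrt(torch.tensor(4.0)))  # tensor(2.) (tensor in, tensor out)
```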
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @rgommers @albanD
| 13 |
510 | 110,632 |
[DO_NOT_MERGE] test torchxla's xla pin update
|
triaged, open source, topic: not user facing, ciflow/inductor
|
torchxla changed its XLA version. Testing it in PyTorch CI to ensure it's good.
| 1 |
511 | 110,630 |
Memory efficient attention for tensors where the last dimension is not divisible by 8
|
triaged, oncall: transformer/mha, module: multi-headed-attention
|
### 🚀 The feature, motivation and pitch
Currently, using `scaled_dot_product_attention` and the memory efficient kernel requires that the last dimension of the inputs is divisible by 8. Typically, this corresponds to the dimension per head in multihead attention, for example when using the `[batch, head, seq, dim]` convention.
Using inputs that do not conform to this requirement results in a `RuntimeError: No available kernel. Aborting execution.` and a warning: `UserWarning: Mem efficient attention requires last dimension of inputs to be divisible by 8.`
It would be great if this requirement could be relaxed, for example by only being divisible by 2. The [TPU implementation associated with the paper](https://github.com/google-research/google-research/tree/master/memory_efficient_attention) appears to work with arbitrary dimensions, but this might not be the case for GPUs.
It would also be helpful if these requirements would be documented (the [documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) appears to be missing in this regard).
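Until the requirement is relaxed, one possible stopgap (an assumption-laden sketch, not an official API) is to zero-pad the head dimension up to the next multiple of 8 and slice the padding off the output; it relies on the `scale=` argument (available since 2.1) to keep the softmax scaling tied to the unpadded dimension:
```python
import math
import torch
import torch.nn.functional as F

def sdpa_pad_to_multiple(q, k, v, multiple=8, **kwargs):
    # Zero columns appended to Q/K leave the attention scores unchanged, and
    # zero columns appended to V only add zero output columns, which are
    # sliced off below. The softmax scale is pinned to the original head dim.
    d = q.size(-1)
    kwargs.setdefault("scale", 1.0 / math.sqrt(d))
    pad = (-d) % multiple
    if pad:
        q, k, v = (F.pad(t, (0, pad)) for t in (q, k, v))
    return F.scaled_dot_product_attention(q, k, v, **kwargs)[..., :d]

Q = torch.rand((10, 128, 123, 2), device='cuda', dtype=torch.bfloat16)
K = torch.rand((10, 128, 123, 2), device='cuda', dtype=torch.bfloat16)
V = torch.rand((10, 128, 123, 2), device='cuda', dtype=torch.bfloat16)
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    O = sdpa_pad_to_multiple(Q, K, V, dropout_p=0)
```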
### Alternatives
The Flash attention kernel supports this feature, but it is missing some others, e.g. attention masks.
### Additional context
A minimal example:
```python
import torch
import torch.nn.functional as F
qkv_size = (10, 128, 123, 2)
Q = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
K = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
V = torch.rand(size=qkv_size, device='cuda', dtype=torch.bfloat16)
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
```
The output
```
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Memory efficient kernel not used because: (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:350](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:350).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Mem efficient attention requires last dimension of inputs to be divisible by 8. Got Query.size(-1): 2, Key.size(-1): 2, Value.size(-1): 2 instead. (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:128](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:128).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Flash attention kernel not used because: (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:352](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:352).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
[/tmp/ipykernel_16779/975066207.py:2](https://file+.vscode-resource.vscode-cdn.net/tmp/ipykernel_16779/975066207.py:2): UserWarning: Flash attention has been runtime disabled. (Triggered internally at [/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/sdp_utils_cpp.h:439](https://file+.vscode-resource.vscode-cdn.net/opt/conda/conda-bld/pytorch_1696146114277/work/aten/src/ATen/native/transformers/sdp_utils_cpp.h:439).)
O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[34], line 2
1 with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
----> 2 O = F.scaled_dot_product_attention(Q, K, V, attn_mask=None, dropout_p=0)
RuntimeError: No available kernel. Aborting execution.
```
This is using PyTorch 2.2.0.dev20231001, CUDA 11.8, and an Ampere GPU.
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki
| 0 |
512 | 110,627 |
[aimp][pt2] allow FakeTensor to find non-meta common devices
|
fb-exported, topic: not user facing, ciflow/inductor
|
Summary: In some cases, we have an inference model with embedding weights stored on the meta device in order to allow low-memory transformations on the model during publishing. Other parameters are on the cpu device. When tracing with PT2, this causes an error like "Unhandled FakeTensor Device Propagation for aten.mm.default, found two different devices meta, cpu". In this case, we know the parameters placed on the meta device are not final, so we want to ignore them when finding a common FakeTensor device.
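For readers unfamiliar with the setup, here is a minimal illustrative sketch of the mixed-device situation described above (module and sizes are made up, not taken from the diff):
```python
import torch

class PublishedModel(torch.nn.Module):
    # Embedding weights parked on the meta device as low-memory placeholders,
    # next to ordinary cpu parameters; tracing such a model with PT2 is what
    # hits the FakeTensor common-device propagation described above.
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(1000, 16, device="meta")
        self.proj = torch.nn.Linear(16, 4)

    def forward(self, idx):
        return self.proj(self.emb(idx))
```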
Differential Revision: D49654689
| 4 |
513 | 110,626 |
[experiment] Shard in build?
|
topic: not user facing
|
Fixes #ISSUE_NUMBER
| 1 |
514 | 110,624 |
[CUDA] Errors when building with cuda 12.2
|
module: sparse, module: build, module: cuda, triaged
|
# Summary
While attempting to build PyTorch from source with CUDA Toolkit 12.2, I ran into a number of issues:
``` Shell
In file included from /home/drisspg/meta/pytorch/aten/src/ATen/cuda/CUDASparseDescriptors.cpp:4:
/home/drisspg/meta/pytorch/aten/src/ATen/cuda/CUDASparseDescriptors.h:101:46: error: 'cusparseDestroyBsrsv2Info' is deprecated: The routine will be removed in the next major release [-Werror,-Wdeprecated-declarations]
: public CuSparseDescriptor<bsrsv2Info, &cusparseDestroyBsrsv2Info> {
^
/usr/local/cuda-12.2/include/cusparse.h:468:1: note: 'cusparseDestroyBsrsv2Info' has been explicitly marked deprecated here
CUSPARSE_DEPRECATED
^
/usr/local/cuda-12.2/include/cusparse.h:108:12: note: expanded from macro 'CUSPARSE_DEPRECATED'
[[deprecated("The routine will be removed in the next major release")]]
```
This is confirmed here: https://docs.nvidia.com/cuda/cusparse/#cusparsedestroybsrsv2info-deprecated
Interestingly they don't provide a path forward.
### Potential issues with Symbool
``` Shell
en/src/ATen/cuda/cub-RadixSortKeys.cu.o
/home/drisspg/meta/pytorch/c10/core/SymBool.h:66:8: error: no viable conversion from returned value of type 'optional<typename std::decay<const bool &>::type>' to function return type 'optional<bool>'
return c10::make_optional(data_);
```
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @malfet @seemethere @ptrblck
| 1 |
515 | 110,611 |
torch.compile CPU backend is slower than eager for several transcendental functions
|
triaged, module: inductor, module: cpu inductor
|
I tried a handful of somewhat important transcendentals (sigmoid, tanh, log, logit), and they all appear to be slower than eager. On my 96c/192t Intel(R) Xeon(R) Platinum 8339HC CPU:
```
$ OMP_NUM_THREADS=1 numactl -C 94 python bench.py
log
eager: 363.3282147347927
compiled: 1091.864187270403
sigmoid
eager: 756.7794565111399
compiled: 872.3638635128736
logit
eager: 731.156948953867
compiled: 1594.5508629083633
tanh
eager: 475.9031366556883
compiled: 867.8419776260853
$ python bench.py
log
eager: 28.686294332146645
compiled: 106.62426799535751
sigmoid
eager: 91.70412458479404
compiled: 108.63539576530457
logit
eager: 25.92829428613186
compiled: 88.3821751922369
tanh
eager: 28.41925248503685
compiled: 113.94776217639446
```
```python
import time
import torch

def do_bench(fn, iters=1000):
    fn()  # warmup
    s = time.perf_counter()
    for _ in range(iters):
        fn()
    e = time.perf_counter()
    return (e - s) / iters * 1e6

x = torch.clamp(torch.rand(1 << 20), 1e-6, 1 - 1e-6)  # good range for log, logit

for fn in [torch.log,
           torch.sigmoid,
           torch.logit,
           torch.tanh,
           ]:
    cfn = torch.compile(fn)
    print(fn.__name__)
    eager = do_bench(lambda: fn(x))
    compiled = do_bench(lambda: cfn(x))
    print(f"eager: {eager}")
    print(f"compiled: {compiled}")
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
516 | 110,610 |
DISABLED test_type_promotion__foreach_sub (__main__.ForeachTests)
|
triaged, module: flaky-tests, skipped, module: inductor
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_type_promotion__foreach_sub&suite=ForeachTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17422466410).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_type_promotion__foreach_sub`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_foreach.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
517 | 110,609 |
[skip ci] voz/fsdp_autograd3 tracker PR
|
release notes: mps, ciflow/mps, module: inductor, module: dynamo, ciflow/inductor
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
518 | 110,605 |
ValueError issued instead of TypeError when tensor is cast to a scalar
|
module: error checking, triaged, module: numpy
|
### 🐛 Describe the bug
When using `math.isfinite` with a tensor, a `ValueError` is raised instead of a `TypeError`. This creates bugs (see #109819). Fixing this issue will fix the other one as well.
```python
>>> import math, torch
>>> a = torch.randn(5)
>>> math.isfinite(a)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[3], line 1
----> 1 math.isfinite(a)
ValueError: only one element tensors can be converted to Python scalars
>>> math.isfinite(a.numpy())
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 math.isfinite(a.numpy())
TypeError: only size-1 arrays can be converted to Python scalars
```
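In the meantime, a defensive pattern applications can use (a sketch, not from the report) to get the documented `TypeError` regardless of which exception `__float__` raises:
```python
import torch

def to_python_float(t: torch.Tensor) -> float:
    # Reject multi-element tensors up front with the exception type Python
    # reserves for wrong argument *types*, then convert the single element.
    if t.numel() != 1:
        raise TypeError(
            f"a Tensor with {t.numel()} elements cannot be converted to a Python scalar"
        )
    return float(t.item())
```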
### Versions
`PyTorch version: 2.0.1`
cc @malfet @mruberry @rgommers
| 4 |
519 | 110,602 |
AOTAutograd logging: log autograd graphs
|
module: logging, triaged, module: aotdispatch
|
### 🐛 Describe the bug
For debugging purposes, it would be helpful to have something like this:
```
diff --git a/torch/_functorch/aot_autograd.py b/torch/_functorch/aot_autograd.py
index 84a176222ef..b094952890e 100644
--- a/torch/_functorch/aot_autograd.py
+++ b/torch/_functorch/aot_autograd.py
@@ -1313,6 +1313,9 @@ def create_joint(
backward_out = []
# Call the backwards pass
if grad_primals:
+ from torchviz import make_dot
+ for out in needed_outs:
+ log.warning("DOT GRAPH\n%s", make_dot(out))
with fx_traceback.preserve_node_meta():
# for full graph export, we always export a joint graph where we assume no tangents are needed.
if aot_config.no_tangents:
```
cc @bdhirsh
### Versions
main
| 3 |
520 | 110,600 |
Add mixed dtypes MM implementation based on CUTLASS upstream
|
module: cuda, open source, topic: new features, topic: not user facing, matrix multiplication, enable-mem-leak-check
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110600
cc @ptrblck
| 2 |
521 | 110,599 |
[torch.compile] Multiple set operations don't work
|
good first issue, triaged, oncall: pt2, module: dynamo, release notes: dynamo
|
### 🐛 Describe the bug
Originally reported by @fmassa
Examples:
```py
@torch.compile(backend="eager", fullgraph=True)
def f(x):
    y = set({1, 2, 3})
    if 1 in y:
        return x
    return x - 1

x = torch.randn(3)
f(x)
```
```py
y = set({1, 2, 3})

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    if x in y:
        return 1
    return 0

x = torch.randn(3)
f(x)
```
### Error logs
_No response_
### Minified repro
_No response_
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
522 | 110,597 |
[PyTorch 2.1 regression] TorchScript behavior changed from 2.0.1 (and older) to 2.1
|
high priority, triage review, oncall: jit, module: regression
|
### 🐛 Describe the bug
The ONNX export caught a behavior change in the TorchScript API after PyTorch 2.1 was released. Before, `torch._C.Value.node.mustBeNone()` returned `False` for a model with `aten::new_zeros` ops, while after PyTorch 2.1 it returns `True`, changing the execution path of the ONNX model export.
There is nothing in the [PyTorch 2.1 Release Notes](https://github.com/pytorch/pytorch/releases/tag/v2.1.0) that mentions a behavior change of this nature.
Reproduction:
```
import torch.nn as nn
import torch
import onnxruntime as ort
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(50, 64)

    def forward(self, x):
        inp = x.new_zeros(x.shape)
        return self.emb(inp)
model = MyModel()
inp = torch.Tensor([[2, 5, 6], [3, 2, 5]]).to(torch.int64)
torch.onnx.export(model, (inp,), "model.onnx", opset_version=9)
```
The repro uses `torch.onnx.export` because it calls `torch.jit.trace` and provides an easy entry point to print `new_zeros` arguments.
The ONNX error is as shown:
```bash
/opt/pytorch/torch/onnx/utils.py:1703: UserWarning: The exported ONNX model failed ONNX shape inference. The model will not be executable by the ONNX Runtime. If this is unintended and you believe there is a bug, please report an issue at https://github.com/pytorch/pytorch/issues. Error reported by strict ONNX shape inference: [ShapeInferenceError] (op_type:Gather, node name: /emb/Gather): indices typestr: Tind, has unsupported type: tensor(float) (Triggered internally at /opt/pytorch/torch/csrc/jit/serialization/export.cpp:1445.)
_C._check_onnx_proto(proto)
Traceback (most recent call last):
File "repro_jit_type.py", line 18, in <module>
torch.onnx.export(model, (inp,), "model.onnx", opset_version=9)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 460, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from model.onnx failed:This is an invalid model. Type Error: Type 'tensor(float)' of input parameter (/Constant_output_0) of operator (Gather) in node (/emb/Gather) is invalid.
```
The error above doesn't point to TorchScript directly, but it shows that the execution path has changed for the ONNX exporter.
Looking into it further and printing the `new_zeros`'s `dtype` argument (from within the [ONNX exporter code](https://github.com/pytorch/pytorch/blob/144cda7f068854dd870c9567781aa2aca6d5e4cf/torch/onnx/symbolic_opset9.py#L388) which has an easy entry point to the `new_zeros` TorchScript node), we can see that on PyTorch 2.1, the `dtype` arg of `new_zeros` is captured as:
```python
> /opt/pytorch/torch/onnx/symbolic_opset9.py(3889)new_zeros()
-> self_dtype = symbolic_helper._try_get_scalar_type(self)
(Pdb) dtype
15 defined in (%15 : NoneType = prim::Constant(), scope: __main__.MyModel::
)
```
However, in PyTorch 2.0.1 and older, it was:
```python
> /opt/conda/envs/ptca/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py(3708)new_zeros()
-> self_dtype = symbolic_helper._try_get_scalar_type(self)
(Pdb) dtype
15 defined in (%15 : Long(device=cpu) = onnx::Constant[value={4}](), scope: __main__.MyModel::
)
```
### Versions
PyTorch 2.1
cc @ezyang @gchanan @zou3519 @kadeng @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 18 |
523 | 110,595 |
Incorrect docstring / documentation for torch.nn.functional.scaled_dot_product_attention in 2.1
|
module: docs, triaged, module: multi-headed-attention
|
### 📚 The doc issue
I originally noticed this error from following the documentation page at:
'https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#torch.nn.functional.scaled_dot_product_attention'
where I noticed that the written 'efficient implementation' was not using the mask correctly:
```
# Efficient implementation equivalent to the following:
def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None) -> torch.Tensor:
    # Efficient implementation equivalent to the following:
    L, S = query.size(-2), key.size(-2)
    scale_factor = 1 / math.sqrt(query.size(-1)) if scale is None else scale
    attn_bias = torch.zeros(L, S, dtype=query.dtype)
    if is_causal:
        assert attn_mask is None
        temp_mask = torch.ones(L, S, dtype=torch.bool).tril(diagonal=0)
        attn_bias.masked_fill_(temp_mask.logical_not(), float("-inf"))
        attn_bias.to(query.dtype)
    if attn_mask is not None:
        if attn_mask.dtype == torch.bool:
            attn_mask.masked_fill_(attn_mask.logical_not(), float("-inf"))
        else:
            attn_bias += attn_mask
    attn_weight = query @ key.transpose(-2, -1) * scale_factor
    attn_weight += attn_bias
    attn_weight = torch.softmax(attn_weight, dim=-1)
    attn_weight = torch.dropout(attn_weight, dropout_p, train=True)
    return attn_weight @ value
```
The error is that if `attn_mask.dtype == torch.bool`, the `attn_mask` has no effect on `attn_weight`: it never modifies `attn_bias` and is not referenced in the final five lines. This documentation page is derived from the docstring in the code, which contains the same error.
If we look at the current documentation page representing the docstring for the unstable version of the package at: https://pytorch.org/docs/main/generated/torch.nn.functional.scaled_dot_product_attention.html#torch.nn.functional.scaled_dot_product_attention, we can see:
```
# Efficient implementation equivalent to the following:
def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None) -> torch.Tensor:
    # Efficient implementation equivalent to the following:
    L, S = query.size(-2), key.size(-2)
    scale_factor = 1 / math.sqrt(query.size(-1)) if scale is None else scale
    attn_bias = torch.zeros(L, S, dtype=query.dtype)
    if is_causal:
        assert attn_mask is None
        temp_mask = torch.ones(L, S, dtype=torch.bool).tril(diagonal=0)
        attn_bias.masked_fill_(temp_mask.logical_not(), float("-inf"))
        attn_bias.to(query.dtype)
    if attn_mask is not None:
        if attn_mask.dtype == torch.bool:
            attn_bias.masked_fill_(attn_mask.logical_not(), float("-inf"))
        else:
            attn_bias += attn_mask
    attn_weight = query @ key.transpose(-2, -1) * scale_factor
    attn_weight += attn_bias
    attn_weight = torch.softmax(attn_weight, dim=-1)
    attn_weight = torch.dropout(attn_weight, dropout_p, train=True)
    return attn_weight @ value
```
where, notably, `masked_fill_` is applied to `attn_bias` instead of `attn_mask`.
N.B.: I haven't taken the time to check whether the actual implementation follows the (incorrect) docstring.
### Suggest a potential alternative/fix
The fix is clear: change the docstring used in v2.1 to use `attn_bias.masked_fill_(attn_mask.logical_not(), float("-inf"))`, as in the unstable version, rather than `attn_mask.masked_fill_(attn_mask.logical_not(), float("-inf"))`.
It may be worth considering whether this change can be explicitly made to the online documentation before doing a v2.1.1 where this error is amended.
cc @svekars @carljparker
| 3 |
524 | 110,594 |
Multiprocessing takes forever after on .get() with mp.Queue() (Possible Deadlock)
|
needs reproduction, module: multiprocessing, triaged
|
### 🐛 Bug
This bug came up when I decided to train ColBERT on a custom dataset, but training was taking forever. While diagnosing the problem, I found that it uses torch.multiprocessing to divide tasks, and whenever a task queue is formed, the code gets stuck on the `get()` method.
```Python
#Sample code to reproduce the Problem
import torch
import torch.multiprocessing as mp
try:
    mp.set_start_method('spawn', force=True)
except RuntimeError:
    print('Hello')
return_value_queue = mp.Queue()
#return_values = sorted([return_value_queue.get() for _ in all_procs]) #The Code gets stuck here
print(return_value_queue.get()) #To Reproduce
```
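For contrast, here is a minimal producer/consumer sketch (not the ColBERT code, just an assumption about the intended pattern) where `get()` does return; without any process calling `put()`, as in the snippet above, `get()` blocks indefinitely, and `get(timeout=...)` can turn the silent hang into a `queue.Empty` error for diagnosis:
```python
import torch.multiprocessing as mp

def worker(q, rank):
    q.put(rank)  # a producer must put() something for get() to return

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(q, r)) for r in range(4)]
    for p in procs:
        p.start()
    print(sorted(q.get() for _ in procs))  # returns once each worker has put()
    for p in procs:
        p.join()
```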
### Versions
torch version = 1.13.1+cu117
cc @VitalyFedyunin
| 7 |
525 | 110,593 |
DISABLED test_cond_with_quantization (__main__.MiscTests)
|
oncall: quantization, triaged, module: flaky-tests, skipped, module: unknown
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cond_with_quantization&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17411684051).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cond_with_quantization`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 3 |
526 | 110,590 |
wip - Hook new guard system
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110590
* #110735
* #110589
* #110265
* #108839
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
527 | 110,589 |
[dynamo][guard-refactor] TypeGuardAccessor
|
module: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110590
* #110735
* __->__ #110589
* #110265
* #108839
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
528 | 110,588 |
libtorch.so: error adding symbols: file in wrong format
|
needs reproduction, module: build, triaged
|
### 🐛 Describe the bug
Ubuntu 22.04.2 LTS
Architecture aarch64
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
cmake version 3.22.1
libtorch version 1.9.1
When I tried to build libtorch in the above environment, I got an error like this:
[ 95%] Linking CXX shared library libstt_sdk.so
/usr/bin/ld: ../third_party/libtorch_cpu/lib/libtorch.so: error adding symbols: file in wrong format
collect2: error: ld returned 1 exit status
Can you tell me what happened?
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.19.0-1025-aws-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Stepping: r3p1
BogoMIPS: 243.75
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
L1d cache: 128 KiB (2 instances)
L1i cache: 128 KiB (2 instances)
L2 cache: 2 MiB (2 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.0
[conda] Could not collect
cc @malfet @seemethere
| 1 |
529 | 110,586 |
gh-110507 Add Dtype Support and Clarify Constraints in `torch.nn.softshrink` Documentation
|
triaged, module: doc infra, open source, topic: docs
|
Fixes #110507
cc @ezyang @zou3519 @holly1238 @svekars
| 8 |
530 | 110,582 |
Switch eigen to GitHub mirror
|
topic: not user facing, test-config/default
|
I found this mirror https://github.com/eigen-mirror/eigen/tree/master that we can use. Should we make the switch? One less flaky source is better than two, e.g. https://hud.pytorch.org/pr/pytorch/pytorch/110530#17400038975
Please let me know if there are any concerns, as the GitLab one is still considered the official source.
| 4 |
531 | 110,579 |
Added a unittest for ModuleWrapPolicy callable.
|
triaged, open source, topic: not user facing
|
Fixes #109266 .
cc @rohan-varma for the issue.
| 2 |
532 | 110,578 |
[WIP] [TD] New heuristic: Historical correlation with TestClass failures
|
release notes: releng, suppress-api-compatibility-check
|
Fixes #ISSUE_NUMBER
| 1 |
533 | 110,577 |
Support de-functionalizing _c10d_functional.all_reduce in AOTInductor
|
module: inductor, ciflow/inductor, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110577
* #110570
Differential Revision: [D49939256](https://our.internmc.facebook.com/intern/diff/D49939256/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @avikchaudhuri @gmagogsfm
| 1 |
534 | 110,574 |
Replace int with DeviceIndex for device indices
|
open source, ciflow/binaries, topic: not user facing
|
Use DeviceIndex more thoroughly to catch possible type inconsistency.
| 4 |
535 | 110,573 |
Unify torch.SymInt and torch.types.SymInt
|
topic: not user facing, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110573
* #110572
* #110112
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
536 | 110,572 |
Enable more mypy import following for torch/_inductor/
|
topic: not user facing, module: inductor, module: dynamo, ciflow/inductor, module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110572
I've excluded importing several modules & added a bunch of
type-ignore comments to make this more tractable. I think it's valuable
to turn this on sooner rather than later given that it greatly improves
our coverage; `follow_imports=skip` actually makes a lot of things into
Any types when we have more precise type info.
I also set us up to ignore errors from torch modules that are already
handled through the MYPY lintrunner command. This was not done
just for perf reasons: `mypy-nofollow.ini` is actually stricter in some
ways than the `mypy.ini`, so MYPYNOFOLLOW was raising errors in
the rest of the code.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @avikchaudhuri @gmagogsfm
| 1 |
537 | 110,570 |
Native c10d_functional ops
|
release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110570
This PR introduces a native version of c10d_functional ops. The main goal is to add collective support in AOTInductor and allow collective ops to work in multi-threaded native runtimes.
The native version also incorporated API improvements we wished to implement in Python c10d_functional:
- Removed `ranks` and `group_size` from collective op signatures which were proven to be redundant.
- Use tensor storage as opposed to `void*` to resolve in-flight work.
The native process group registration/resolution mechanism is only used for native c10d_functional in this PR. It will become the single source of truth in upcoming PRs.
The upcoming PRs will implement Inductor/AOTInductor support for c10d_functional, after which native c10d_functional will replace Python c10d_functional.
Differential Revision: [D49932255](https://our.internmc.facebook.com/intern/diff/D49932255/)
| 1 |
538 | 110,569 |
automate the full source tarball release asset (sdist)
|
oncall: releng, triaged
|
### π The feature, motivation and pitch
This release asset - pytorch-vX.Y.Z.tar.gz - appears a few weeks after an official release, like an afterthought, yet is crucial for source-based packages, because it contains all Git submodules: https://github.com/stefantalpalaru/gentoo-overlay/blob/c69c2c2be65abe76649493af8c1099fb406806e7/sci-libs/caffe2/caffe2-2.0.1-r101.ebuild#L14-L16
Please generate it automatically, using a CI workflow, and make sure it runs when a new release is created.
### Alternatives
_No response_
### Additional context
_No response_
| 6 |
539 | 110,564 |
[Inductor] ABI-fy some aten fallback kernels
|
module: inductor, ciflow/inductor
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
540 | 110,558 |
[Optimus][pt2] Initial opportunity finder
|
fb-exported, module: inductor, ciflow/inductor
|
Summary:
Add a new fx pass to help analyze model.
In this initial version, we can find some opportunities for horizontal fusion. Will add more
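For illustration, this is what horizontal fusion means in the usual sense (an assumption on my part; the snippet is not taken from the diff): two independent ops that read the same input can be merged into one larger op.
```python
import torch

# Illustration of horizontal fusion (assumed meaning; not from the diff):
# two independent Linear layers over the same input can be served by one matmul
# with concatenated weights, followed by a split.
x = torch.randn(8, 16)
a = torch.nn.Linear(16, 32, bias=False)
b = torch.nn.Linear(16, 64, bias=False)

fused_w = torch.cat([a.weight, b.weight], dim=0)  # (96, 16)
fused_out = x @ fused_w.t()                       # one GEMM instead of two
ya, yb = fused_out.split([32, 64], dim=1)

assert torch.allclose(ya, a(x), atol=1e-5)
assert torch.allclose(yb, b(x), atol=1e-5)
```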
Test Plan:
torch level: P832449483
aten level:
fwd P832451192
bwd P832452174
Differential Revision: D49408003
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 4 |
541 | 110,552 |
Call RECORD_KERNEL_FUNCTION_DTYPE
|
fb-exported, release notes: linalg_frontend
|
Summary:
Modified the .cpp, then run the tracer command to generate the rest:
cd ~/fbsource/fbcode && buck run caffe2/torch/fb/mobile/cli:cli -- --gen_model_config --model_name MultitaskPeopleSegmentation --model_version 9000 --asset_name PYTORCH_MODEL
Test Plan: CI
Differential Revision: D49926634
| 20 |
542 | 110,548 |
[For jansel] [Do not review] .data -> set data fn
|
release notes: distributed (fsdp)
|
Fixes #ISSUE_NUMBER
| 1 |
543 | 110,546 |
Create nested _sdpa_flash
|
module: cpu, ciflow/inductor
|
# Summary
Follow up PR on #110527 and #110533
This creates a new aten function for SDPA_flash that only has a nested registration. This is because the information needed for backward is different between the tensor layouts.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
544 | 110,544 |
[CI] Add inductor workflow for rocm
|
module: rocm, open source, topic: not user facing, module: inductor, ciflow/inductor
|
This PR is to create a separate CI job for inductor UTs on ROCm.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 6 |
545 | 110,543 |
Clean way to distinguish python subclass NT vs. C++ NT
|
triaged, module: nestedtensor
|
## Issue description
We introduced a jagged layout backed NT as a python tensor subclass in #108314. Since the `NestedTensor` and `AutogradNestedTensor` dispatch keys are set for this subclass, calling `t.is_nested` will return True for both the subclass and the traditional C++-side NT. We need a clean way to distinguish the two for cases where they should be handled differently on the python side (e.g. during multiprocessing serialization).
This currently works but it's ugly:
```python
from torch.nested._internal.nested_tensor import NestedTensor
if isinstance(t, NestedTensor):
# python subclass
...
elif t.is_nested:
# C++ subclass
...
```
Relevant comment link: https://github.com/pytorch/pytorch/pull/110292#discussion_r1344297126
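Until an official API exists, a tiny wrapper can at least centralize the check above. The helper name below is made up and is not an existing torch API:
```python
# Hypothetical helper that just wraps the check above; the name is made up
# and is not an existing torch API.
def nested_tensor_kind(t):
    from torch.nested._internal.nested_tensor import NestedTensor
    if isinstance(t, NestedTensor):
        return "python_subclass"  # jagged-layout NT from #108314
    if t.is_nested:
        return "cpp"              # traditional C++-side NT
    return "not_nested"
```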
cc @cpuhrsch @bhosmer @drisspg @soulitzer
| 4 |
546 | 110,541 |
On the correctness of torch.signal.windows.cosine
|
triaged, module: numpy, topic: bc breaking, topic: docs
|
As of writing, `torch.signal.windows.cosine` is implementing the following function:
$h[n] = \sin(\frac{\pi(n+0.5)}{M}),$
where $M$ is the length of the window. (The denominator is $M+1$ when `sym=False`, but my point will be the same.)
This implementation is identical to scipy's, but it deviates from the definition I'm familiar with, which is
$h[n] = \sin(\frac{\pi n}{M-1}).$
As far as I know, the cosine window is the square root of the Hann window. If we use the current implementation, this equality is not satisfied, while the latter definition does indeed satisfy it. For what it's worth, MATLAB also follows the latter definition. I could not find an explanation for scipy's deviation.
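A quick numeric check of that claim (a sketch; it assumes the `torch.signal.windows` factories accept a `dtype` keyword as documented):
```python
import math
import torch

M = 16
n = torch.arange(M, dtype=torch.float64)

hann = torch.signal.windows.hann(M, dtype=torch.float64)       # symmetric Hann
current = torch.signal.windows.cosine(M, dtype=torch.float64)  # scipy-style sin(pi*(n+0.5)/M)
proposed = torch.sin(math.pi * n / (M - 1))                    # the latter definition

print(torch.allclose(current**2, hann))   # expected False under the current implementation
print(torch.allclose(proposed**2, hann))  # expected True: square root of the Hann window
```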
### Solution
I suppose there are two choices:
1. Continue following scipy, but make it clear in the documentation that it deviates from the correct definition.
2. Modify the code to implement the correct definition, and perhaps mention how it deviates from scipy in the docs.
cc @mruberry @rgommers
| 2 |
547 | 110,539 |
Add bandwidth to extern kernel calc
|
fb-exported, ciflow/trunk, topic: not user facing, module: inductor, ciflow/inductor
|
Summary: - Modify the result of get_estimated_runtime() for ExternKernelSchedulerNode to count both bytes and FLOPs and return the maximum of the two.
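One plausible reading of "the maximum of the two" is a roofline-style estimate where each count is first converted into a time; the sketch below uses placeholder peak numbers and is not the actual inductor code.
```python
# Roofline-style sketch with placeholder peak numbers (not the actual inductor code):
# convert byte traffic and FLOP count into time estimates and take the slower one.
def estimated_runtime_s(num_bytes, num_flops,
                        mem_bw_bytes_per_s=2.0e12,   # assumed memory bandwidth
                        peak_flops_per_s=3.0e14):    # assumed peak compute throughput
    mem_time = num_bytes / mem_bw_bytes_per_s
    compute_time = num_flops / peak_flops_per_s
    return max(mem_time, compute_time)

print(estimated_runtime_s(num_bytes=1e9, num_flops=1e9))   # memory-bound case
print(estimated_runtime_s(num_bytes=1e6, num_flops=1e13))  # compute-bound case
```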
Reviewed By: xmfan
Differential Revision: D48987490
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 9 |
548 | 110,537 |
[MPS] Unsupported operand type for * with complex tensors
|
triaged, module: complex, module: mps
|
### π Describe the bug
```python
import torch
import torch.nn as nn
class MyModule(nn.Module):
def __init__(self, x, y):
super().__init__()
self.register_parameter("x", nn.Parameter(x))
self.register_buffer("y", y)
x = torch.rand(3, dtype=torch.complex64)
y = torch.rand(3, dtype=torch.complex64)
model = MyModule(x, y).to(device="mps")
model.x * model.y
```
returns
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 17
15 device = torch.device("mps")
16 model = MyModule(x, y).to(device)
---> 17 model.forward()
Cell In[1], line 11, in MyModule.forward(self)
10 def forward(self):
---> 11 return self.x * self.y
TypeError: unsupported operand type(s) for *: 'Parameter' and 'Tensor'
```
### Versions
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.0 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.1.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 2 |
549 | 110,525 |
performance drop because batching rule for aten::_scaled_dot_product_attention_math is not yet implemented
|
module: vmap, oncall: transformer/mha, module: functorch
|
### π Describe the bug
Pretty much doing what the stacktrace is asking. I am trying to write a batched implementation of the DETR code in the original paper (https://arxiv.org/pdf/2005.12872.pdf), where a single set of queries is decoded over a batch containing several images, without using a custom transformer implementation. torch.vmap seemed like the ideal function for the job. As far as I can tell, my implementation works, but I get the following stacktrace:
```bash
/anaconda3/envs/detr_env/lib/python3.11/site-packages/torch/nn/functional.py:5373: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::_scaled_dot_product_attention_math. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343904035/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
```
This "bug" can be reproduced with the following code:
```python
from torch.nn import Transformer
from torch import vmap, rand
transformer = Transformer()
vmap_transformer = vmap(transformer, in_dims=(1, None), randomness="same")
src, tgt = rand((32, 8, 512)), rand((32, 512))
out = vmap_transformer(src, tgt)
```
I am on PyTorch CPU; I've seen other people get similar errors on GPUs, but I apologize if that makes this bug report irrelevant.
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Fedora Linux 38 (Workstation Edition) (x86_64)
GCC version: (GCC) 13.2.1 20230728 (Red Hat 13.2.1-1)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.37
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.5-200.fc38.x86_64-x86_64-with-glibc2.37
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU(s) scaling MHz: 89%
CPU max MHz: 3400.0000
CPU min MHz: 400.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] numpy==1.26.0
[pip3] segmentation-models-pytorch==0.3.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.26.0 py311h08b1b3b_0
[conda] numpy-base 1.26.0 py311hf175353_0
[conda] pytorch 2.0.1 py3.11_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] segmentation-models-pytorch 0.3.3 pypi_0 pypi
[conda] torchaudio 2.0.2 py311_cpu pytorch
[conda] torchvision 0.15.2 py311_cpu pytorch
cc @zou3519 @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @Chillee @samdow @kshitij12345 @janeyx99
| 3 |
550 | 110,524 |
[dynamo] Implement set in terms of dict
|
open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #108420
* __->__ #110524
* #111196
* #110523
* #110522
This allows us to heavily simplify the implementation of set, which was
"quite unique". Now we represent a set as a dict where all its values
are None.
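A minimal standalone sketch of the representation idea (not the actual dynamo code):
```python
# A set is modeled as a dict whose values are all None, so the existing dict
# machinery (guards, iteration, reconstruction) can be reused.
def set_to_dict(s):
    return {item: None for item in s}

def dict_to_set(d):
    return set(d.keys())

s = {1, 2, 3}
d = set_to_dict(s)
assert dict_to_set(d) == s
assert 2 in d and 4 not in d  # membership works the same way
```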
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
551 | 110,523 |
[dynamo] Simplify add_dict in preparation to refactor it with call_set
|
open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #108420
* #110524
* #111196
* __->__ #110523
* #110522
The previous implementation had a fair amount of repeated code, and did
things like calling `add_options` where options was always empty (which
is fine, as the guards are already set within ConstDictVariable).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
552 | 110,522 |
[dynamo] [easy] Move Set to dicts.py
|
open source, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #108420
* #110524
* #111196
* #110523
* __->__ #110522
A set is more of a dict than a list if you ask me.
This comes before the refactor where we implement sets and dicts via the
same logic.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
553 | 110,516 |
Torch Nested Issue With Backward Pass In Transpose
|
triaged, has workaround, module: nestedtensor
|
### π Describe the bug
When transposing a nested tensor and doing a matrix multiplication, the backward pass requires the tensor to be contiguous. I'm guessing this has to do with transpose making tensors non-contiguous after the operation, which is fine for the forward pass, but the backward pass may have a bug for nested tensors when going through the backward transpose operation.
```python
import torch
X1 = torch.nested.as_nested_tensor([
torch.tensor([[1,2],[3,4.],[5,6]]),
torch.tensor([[7,8],[9,10]])
]).requires_grad_().cuda()
X2 = torch.nested.as_nested_tensor([
torch.tensor([[1,2],[3,4.],[5,6]]),
torch.tensor([[7,8],[9,10]])
]).requires_grad_().cuda()
Y = X2.transpose(-1, -2).contiguous()@X1
L = sum([i.sum() for i in Y])
L.backward()
```
```
Traceback (most recent call last):
File "<pyshell#51>", line 1, in <module>
L.backward()
File "../torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File ".../torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: empty_like only supports contiguous memory format for Nested Tensors
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: (Rev6, Built by MSYS2 project) 13.1.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Ti Laptop GPU
Nvidia driver version: 537.42
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2500
DeviceID=CPU0
Family=207
L2CacheSize=11776
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2500
Name=12th Gen Intel(R) Core(TM) i9-12900HK
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cu118
[pip3] torch-geometric==2.3.1
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchdiffeq==0.2.3
[pip3] torchsde==0.2.5
[pip3] torchtext==0.6.0
[pip3] torchvision==0.15.2+cu118
[pip3] vector-quantize-pytorch==1.6.19
[conda] Could not collect
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer
| 2 |
554 | 110,515 |
DynamicQuantizedLinear shows incorrect qscheme after applying eager mode dynamic quantization
|
oncall: quantization, triaged
|
### π Describe the bug
I've been playing around with quantization for a little bit, and I wanted to verify that the qscheme (`torch.per_tensor_symmetric`) I defined for my weight and activation observers aligned with the modules I was quantizing. I quantized just my linear layer (`nn.Linear`), and I noticed that it printed the wrong qscheme (`torch.per_tensor_affine`) when I viewed it after quantization. For sanity's sake, I looked at the weight, and it appeared the correct qscheme was applied (the `scale` was an appropriate value and the `zero_point` was 0), but for some reason it still said the qscheme was `per_tensor_affine`.
To double check further, I changed the qscheme for the observers to `torch.per_tensor_affine`, and when I viewed the weights again it appeared the correct qscheme was applied (the `scale` was an appropriate value and the `zero_point` was an arbitrary non-zero number). Overall, it appears to me that the application of the qscheme works in practice, but the information displayed when you print the model (or a module in the model) is wrong, which can be misleading.
These calls are what show the information when printing the `DynamicQuantizedLinear` module:
```python
# quant_venv/lib/python3.10/site-packages/torch/ao/nn/quantized/dynamic/modules/linear.py
def _weight_bias(self):
return self._packed_params._weight_bias()
def weight(self):
return self._weight_bias()[0]
#...
def extra_repr(self):
extra_repr_str = 'in_features={}, out_features={}, dtype={}'.format(
self.in_features, self.out_features, self._packed_params.dtype
)
if self._packed_params.dtype == torch.qint8:
extra_repr_str += ', qscheme={}'.format(self.weight().qscheme())
return extra_repr_str
# quant_venv/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/linear.py
@torch.jit.export
def _weight_bias(self):
if self.dtype == torch.qint8:
return torch.ops.quantized.linear_unpack(self._packed_params)
```
Below is example to replicate this issue:
```python
import torch
import torch.nn as nn
import torch.ao.quantization as quantize
class DummyModel(nn.Module):
def __init__(self) -> None:
super().__init__()
self.linear = nn.Linear(100, 1)
self.relu = nn.ReLU()
def forward(self, x):
return self.relu(self.linear(x))
dummy_model = DummyModel()
weight_observer = quantize.MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric,
quant_min=-127, quant_max=127)
activation_observer = quantize.PlaceholderObserver.with_args(dtype=torch.quint8, is_dynamic=True,
quant_min=0, quant_max=255)
dynamic_qconfig = quantize.QConfig(weight=weight_observer, activation=activation_observer)
module_mappings = {nn.Linear: dynamic_qconfig}
dynamic_dummy = quantize.quantize_dynamic(dummy_model, qconfig_spec=module_mappings,
dtype=torch.qint8, inplace=False)
dynamic_quantized_linear = dynamic_dummy.linear
print(dynamic_quantized_linear)
print(dynamic_quantized_linear.weight())
```
Output:
```bash
DynamicQuantizedLinear(in_features=100, out_features=1, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
tensor([[ 0.0741, 0.0101, -0.0226, -0.0936, -0.0640, -0.0491, -0.0078, -0.0250,
0.0242, 0.0936, -0.0764, -0.0780, -0.0086, -0.0086, 0.0156, 0.0850,
-0.0702, -0.0218, 0.0491, -0.0835, -0.0055, -0.0078, 0.0226, -0.0312,
-0.0959, -0.0398, 0.0429, 0.0078, 0.0562, 0.0406, 0.0140, 0.0913,
0.0265, 0.0148, 0.0187, 0.0936, -0.0101, 0.0842, -0.0016, 0.0094,
0.0312, 0.0928, 0.0741, 0.0780, -0.0827, -0.0460, -0.0195, 0.0710,
0.0148, -0.0179, -0.0538, 0.0515, -0.0312, 0.0218, 0.0359, -0.0452,
-0.0881, -0.0437, -0.0452, -0.0733, 0.0991, -0.0359, 0.0686, -0.0749,
-0.0507, 0.0936, -0.0304, 0.0741, 0.0140, 0.0140, -0.0039, -0.0507,
0.0725, 0.0975, 0.0117, -0.0140, 0.0140, -0.0265, 0.0842, 0.0172,
-0.0398, -0.0608, -0.0936, -0.0086, -0.0265, -0.0913, -0.0164, -0.0281,
0.0234, 0.0920, 0.0187, -0.0211, 0.0562, -0.0281, -0.0398, -0.0702,
0.0218, 0.0421, -0.0148, -0.0359]], size=(1, 100),
dtype=torch.qint8, quantization_scheme=torch.per_tensor_affine,
scale=0.0007800564053468406, zero_point=0)
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 528.24
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7700X 8-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
BogoMIPS: 8982.97
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 6 |
555 | 110,514 |
Remove some CUDA nvcc suppression
|
triaged, open source, ciflow/trunk, topic: not user facing, ciflow/periodic
| null | 9 |
556 | 110,511 |
[ROCM][CI] Introduce tests-to-include as rocm-test workflow input
|
module: rocm, open source, ciflow/trunk, topic: not user facing
|
Fixes https://github.com/pytorch/pytorch/issues/110181
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 6 |
557 | 110,507 |
doc modification of torch.nn.softshrink api
|
module: docs, module: nn, triaged
|
### π The doc issue
Hi! I am trying to build an API that calls `torch.nn.Softshrink` (or any other non-linear activation function), and I want to pass a torch tensor together with its dtype. However, the documentation does not state which dtypes the input tensor supports for this activation function. Here is the documentation for [softshrink](https://pytorch.org/docs/2.1/generated/torch.nn.Softshrink.html#torch.nn.Softshrink). Could you help by adding the supported dtypes for the input tensor?
Also, for the softshrink activation, what is the constraint on the `lambd` value, i.e. what is its allowed range (? < lambd < ?)?
### Suggest a potential alternative/fix
Add the supported input dtypes to the documentation for all activation functions.
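In the meantime, one can probe which dtypes are accepted empirically (a sketch; this is an illustration only, not a statement of the officially supported set):
```python
import torch
import torch.nn.functional as F

# Empirical probe of accepted dtypes on CPU; not a statement of official support.
for dtype in (torch.float16, torch.bfloat16, torch.float32, torch.float64):
    x = torch.randn(4, dtype=dtype)
    try:
        F.softshrink(x, lambd=0.5)
        print(dtype, "ok")
    except RuntimeError as e:
        print(dtype, "unsupported:", e)

# lambd is expected to be non-negative; a negative value should raise an error.
```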
cc @svekars @carljparker @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
558 | 110,506 |
[dynamo] Slow compile times for optimizers due to for loops
|
module: optimizer, triaged, module: dynamo
|
### π Describe the bug
Dynamo tracing time is:
- 70 seconds on 200 param Adam
- 162 seconds on 1000 param SGD
As identified in https://github.com/pytorch/pytorch/pull/110353#issuecomment-1746729070, this is due to dynamo needing to trace an expensive for loop.
If instead this for loop can be written in a way that can be easily traced (e.g. by tracing a `map` over a single lambda, similar to foreach over the optimizer main loop), then we are likely to speed up compilation times across all optimizers by a significant factor.
Example here: https://github.com/pytorch/pytorch/blob/31d635803b8d72433ea275d3c36bf829b158d5ec/torch/optim/sgd.py#L39
Example dynamo logs (which trace the same computation ad infinitum):
```
[2023-10-04 08:14:53,891] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST grad [ListIteratorVariable()]
[2023-10-04 08:14:53,891] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR is_sparse [ListIteratorVariable(), TensorVariable()]
[2023-10-04 08:14:53,892] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE YIELD_VALUE None [ListIteratorVariable(), ConstantVariable(bool)]
[2023-10-04 08:14:53,892] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_TOP None [ListIteratorVariable(), ConstantVariable(NoneType)]
[2023-10-04 08:14:53,892] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE JUMP_ABSOLUTE 4 [ListIteratorVariable()]
[2023-10-04 08:14:53,892] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE FOR_ITER 18 [ListIteratorVariable()]
[2023-10-04 08:14:53,895] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE STORE_FAST grad [ListIteratorVariable(), TensorVariable()]
[2023-10-04 08:14:53,895] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST grad [ListIteratorVariable()]
[2023-10-04 08:14:53,895] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR is_sparse [ListIteratorVariable(), TensorVariable()]
[2023-10-04 08:14:53,896] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE YIELD_VALUE None [ListIteratorVariable(), ConstantVariable(bool)]
[2023-10-04 08:14:53,896] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_TOP None [ListIteratorVariable(), ConstantVariable(NoneType)]
[2023-10-04 08:14:53,896] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE JUMP_ABSOLUTE 4 [ListIteratorVariable()]
[2023-10-04 08:14:53,896] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE FOR_ITER 18 [ListIteratorVariable()]
[2023-10-04 08:14:53,900] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE STORE_FAST grad [ListIteratorVariable(), TensorVariable()]
[2023-10-04 08:14:53,900] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST grad [ListIteratorVariable()]
[2023-10-04 08:14:53,900] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR is_sparse [ListIteratorVariable(), TensorVariable()]
[2023-10-04 08:14:53,901] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE YIELD_VALUE None [ListIteratorVariable(), ConstantVariable(bool)]
[2023-10-04 08:14:53,901] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_TOP None [ListIteratorVariable(), ConstantVariable(NoneType)]
[2023-10-04 08:14:53,901] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE JUMP_ABSOLUTE 4 [ListIteratorVariable()]
[2023-10-04 08:14:53,901] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE FOR_ITER 18 [ListIteratorVariable()]
```
CC: @mlazos @jansel
### Repro
```python
import time
import torch
from torch.optim import Adam, SGD
def compile_opt(opt_compiled):
torch._dynamo.eval_frame.TorchPatcher.patch()
step_fn = opt_compiled.step.__wrapped__
def fn():
step_fn(opt_compiled)
return torch.compile(fn, backend="inductor", fullgraph=True)
optim_cls = SGD
NUM_PARAMS = 1000
kwargs = { "lr": 0.01, "foreach": True }
torch._dynamo.reset()
# torch._inductor.metrics.reset()
input = torch.ones([10, 10], device="cuda:0")
model = torch.nn.Sequential(
*[torch.nn.Linear(10, 10, device="cuda:0") for _ in range(NUM_PARAMS)]
)
input = torch.ones([10, 10], device="cuda:0")
model(input).sum().backward()
opt_compiled = optim_cls(model.parameters(), **kwargs)
compiled_step = compile_opt(opt_compiled)
with torch.set_grad_enabled(False):
start_time = time.time()
compiled_step()
print("compile opt took: %s seconds", time.time() - start_time)
```
### Versions
main
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 9 |
559 | 110,505 |
scaled_dot_product returns NaN arrays with eval()
|
oncall: transformer/mha
|
### π Describe the bug
Hello, I'm trying to use the built-in transformer encoder for my work.
The model works well in the training phase (with train()).
But when I try to evaluate my model (with eval() and no_grad()), it always returns all NaNs.
(I checked that the no_grad doesn't cause the problem; only eval() causes the problem.)
I traced the source code and found that the `dropout_p` in `nn.functional.multi_head_attention_forward` causes the problem.
The `dropout_p` is set to 0.1 with train(), and 0.0 with eval().
And the `scaled_dot_product` (nn.functional.py line 5287) function returns all NaNs with 0.0 of `dropout_p`.
So I manually changed the `dropout_p` to 1.0 in eval(), and then the `scaled_dot_product` function returned some float values (but I can't be sure they are correct values).
Is there any bug in the `scaled_dot_product` code for the `dropout_p`?
Or did I make some mistake in how I use the model?
I don't use any attn_mask or padding mask in this work.
my model is as follows:
```python
class RankNet(nn.Module):
def __init__(self, config):
super(RankNet, self).__init__()
f_dim = config['train']['f_dim']
d_model = config['train']['d_model']
self.autoencoder = AutoEncoder()
self.extractor = nn.Linear(f_dim, d_model)
self.pos_encoder = PositionalEncoding(d_model)
encoder_layers = TransformerEncoderLayer(d_model=d_model, nhead=8)
self.transformer_encoder = TransformerEncoder(encoder_layers, num_layers=2)
self.fc = nn.Linear(d_model, 1)
def forward(self, img1, input1, img2, input2):
reshaped_img1 = img1.view(-1, *img1.shape[2:]) # batch, 4, c, h, w => batch * 4, c, h, w
reshaped_img2 = img2.view(-1, *img2.shape[2:])
e1, d1 = self.autoencoder(reshaped_img1)
e2, d2 = self.autoencoder(reshaped_img2)
e1 = e1.view(int(e1.shape[0]/4), 4, -1)
e2 = e2.view(int(e2.shape[0]/4), 4, -1)
input1 = torch.cat((input1, e1), dim=2)
input1 = self.extractor(input1.view(-1, input1.shape[-1]))
input1 = input1.view(int(input1.shape[0]/4), 4, -1)
input2 = torch.cat((input2, e2), dim=2)
input2 = self.extractor(input2.view(-1, input2.shape[-1]))
input2 = input2.view(int(input2.shape[0]/4), 4, -1)
# batch first to seqence first
input1 = input1.transpose(0, 1)
input2 = input2.transpose(0, 1)
input1 = self.pos_encoder(input1)
input2 = self.pos_encoder(input2)
x1 = self.transformer_encoder(input1)
x2 = self.transformer_encoder(input2)
avg_pooled1 = torch.mean(x1, dim=0)
avg_pooled2 = torch.mean(x2, dim=0)
x1 = self.fc(avg_pooled1)
x2 = self.fc(avg_pooled2)
return torch.sigmoid(x1 - x2), d1, d2
```
And the evaluation process is as follows:
``` python
model.eval()
with torch.no_grad():
accs = 0
d1, d2 = None, None
for i, (img1, feature1, img2, feature2, label) in tqdm(enumerate(val_loader), desc=f'Evaluation'):
img1 = img1.to(self.device)
feature1 = feature1.to(self.device)
img2 = img2.to(self.device)
feature2 = feature2.to(self.device)
label = label.unsqueeze(1).to(self.device)
o, d1, d2 = model(img1, feature1, img2, feature2)
```
### Versions
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
Stepping: 6
CPU MHz: 800.000
CPU max MHz: 3100.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 2.6 MiB
L1i cache: 1.8 MiB
L2 cache: 70 MiB
L3 cache: 84 MiB
NUMA node0 CPU(s): 0-13,56-69
NUMA node1 CPU(s): 14-27,70-83
NUMA node2 CPU(s): 28-41,84-97
NUMA node3 CPU(s): 42-55,98-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==3.8.4
[pip3] numpy==1.24.3
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.0+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[pip3] tritonclient==2.35.0
[conda] Could not collect
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 1 |
560 | 110,500 |
[DEMO] cached allocate across devices
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110500
This is a demo of functionality that allows the caching allocator to create a single tensor that is allocated across all devices. The outer stride of the tensor steps across the allocations on different devices.
The implementation uses the expandable segment functionality to control the
mapping of memory to ensure a single stride to cross devices. For N devices,
it creates N blocks of virtual memory that are sequential. Then it performs
the same map/unmap operations in each block, but wiring memory from the
corresponding devices. We use the standard caching allocator to manage
the allocation as if it were on just one of the devices and then hack
in the extra stride that reveals that we also allocated memory on the other devices.
Bug - the tensor returned from _reveal_multiple_devices does not own the data. The original tensor needs to be kept around to keep it alive. This can probably be fixed using the right DataPtr callbacks to keep a handle on the right storage object.
```python
import torch

# Tensors allocated under this guard will be allocated as a copy per device;
# torch will think the tensor is allocated on whatever the 'current' device is.
with torch.cuda.memory._multi_device_allocator():
    x = torch.empty(3, 4, device='cuda')

# Use this function to get a tensor with a stride over all the devices.
# The second argument tells torch where it should run the kernels on it.
y = torch._C._reveal_multiple_devices(x, torch.device('cuda', 1))
print(y.size())  # 2, 3, 4

# note: no need for enable peer access!
y.fill_(4)
print(y)
```
| 1 |
561 | 110,498 |
[DO NOT MERGE][CUDNN][CUDNN V8 API] Testing submodule 1.0 update
|
module: cudnn, open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
cc @csarofeen @ptrblck @xwang233
| 1 |
562 | 110,497 |
Fix resume issue in CosineAnnealingWarmRestarts (#88791)
|
triaged, open source, release notes: optim
|
Fixes #88791, see #110493 for context.
This change is similar to #110493, but uses the modulo-based solution mentioned in the original issue.
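For reference, a minimal sketch of the modulo idea for the simple `T_mult == 1` case (not the code in this PR):
```python
import math

# Derive the position in the cycle from the epoch counter instead of tracking
# it incrementally, so resuming from an arbitrary epoch stays consistent.
def cosine_warm_restart_lr(epoch, base_lr, eta_min, T_0):
    T_cur = epoch % T_0
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * T_cur / T_0)) / 2

# epoch 0 and epoch T_0 both sit at the start of a cycle (lr == base_lr)
assert abs(cosine_warm_restart_lr(0, 0.1, 0.0, 10) - 0.1) < 1e-12
assert abs(cosine_warm_restart_lr(10, 0.1, 0.0, 10) - 0.1) < 1e-12
```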
| 3 |
563 | 110,496 |
expose sdpa helpers to python
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110496
* #110495
| 1 |
564 | 110,495 |
expose mem-eff to autograd
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110496
* __->__ #110495
| 1 |
565 | 110,485 |
[export] `torch.tensor(0)` should not get burned in as a constant
|
triaged, oncall: pt2, module: aotdispatch, module: dynamo, module: export
|
```
import torch
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
self.init_value = 0.1
def forward(self, x):
return x + torch.tensor(self.init_value)
def main() -> None:
m = Mod()
foo = torch.export.export(m, (torch.randn((2, 3)),))
print(foo.graph)
```
prints
```
graph():
%_lifted_tensor_constant0 : [num_users=1] = placeholder[target=_lifted_tensor_constant0]
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%lift_fresh_copy : [num_users=1] = call_function[target=torch.ops.aten.lift_fresh_copy.default](args = (%_lifted_tensor_constant0,), kwargs = {})
%add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg0_1, %lift_fresh_copy), kwargs = {})
return (add,)
```
I think `torch.tensor(0)` should behave like the other tensor factory functions and get preserved in the graph. This most directly preserves user intent. Besides, if someone wants it to be a constant they can just fold it later.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @avikchaudhuri @gmagogsfm
| 3 |
566 | 110,484 |
[export] Constant tensors should not get lifted to buffers
|
triaged, module: export
|
Today, if we have a constant tensor (e.g. generated by a `torch.tensor(0)` call), it will get lifted to be a buffer (https://github.com/pytorch/pytorch/blob/main/torch/_export/passes/lift_constant_tensor_pass.py).
This is potentially problematic: the state dict is a well-established abstraction in PyTorch, and mutating it due to export implementation details may be surprising for users. As a concrete example, we have many users that guard model compatibility based on state dict keys. If export adds a key based on some details of how tracing happened, those compatibility checks will break and users will be confused as to why.
Instead, we should handle tensor constants in the tensor specification. Nodes should be able to take constant tensors as inputs, and we should have a table mapping constant tensor handles to in-archive serialized blobs. The serialization/deserialization systems will need to be updated to properly load/save these tensors.
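A small repro-style sketch of the concern described above (the exact key names, and whether a key appears at all, depend on the export version):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + torch.tensor(1.0)  # tensor constant created at trace time

ep = torch.export.export(M(), (torch.randn(2),))
# Per the description above, the constant may appear as a lifted buffer in the
# exported state dict; the exact key name (if any) depends on the export version.
print(list(ep.state_dict.keys()))
print(list(M().state_dict().keys()))  # the eager module itself has no such key
```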
cc @avikchaudhuri @gmagogsfm
| 1 |
567 | 110,479 |
[FSDP] [Checkpointing] Loading optimizer state dict with use_orig_params True causes OOM
|
triaged, module: fsdp
|
### π Describe the bug
When use_orig_params is set to True in FSDP, the `FSDP.optim_state_dict_to_load` call uses much more GPU memory than when use_orig_params is False. This causes an OOM later when optimizer.load_state_dict clones the state dict.
There's also no way to offload this to CPU.
For a Llama 7B model on a single node, it produced 9GB extra allocation on the GPU after optim_state_dict_to_load when compared to use_orig_params=False.
```
2023-09-27 20:14:45 I [train_lib.py:642] Created optimizer
warm up iteration = 1000.0
2023-09-27 20:14:46 I [checkpoints.py:400] Loading checkpoint from /opt/ml/input/data/train/checkpoints/llama-7b-test/llama_fast-10steps/ ...
NCCL version 2.18.3+cuda11.8
2023-09-27 20:14:57 I [checkpoints.py:336] Loaded model state from disk
2023-09-27 20:14:57 I [checkpoints.py:337] Loaded state from disk: start_train_path_index 0, start_batch_index 10.
2023-09-27 20:15:08 I [learning_rates.py:122] Overriding learning rate value to 0.0003
2023-09-27 20:15:08 I [learning_rates.py:122] Overriding minimum learning rate value to 3e-05
2023-09-27 20:15:08 I [learning_rates.py:122] Overriding warmup iterations value to 1000.0
2023-09-27 20:15:08 I [learning_rates.py:122] Overriding total number of iterations value to 12500
2023-09-27 20:15:08 I [learning_rates.py:122] Overriding decay style value to cosine
2023-09-27 20:15:28 I [checkpoints.py:352] Loaded and sharded optimizer state from disk
2023-09-27 20:15:29 I [checkpoints.py:361] Converted optimizer state dict for FSDP
Traceback (most recent call last):
File "/opt/ml/code/train.py", line 13, in <module>
main()
File "/opt/ml/code/train.py", line 9, in main
train_lib.main(args)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/opt/ml/code/train_lib.py", line 658, in main
) = load_checkpoint(
File "/opt/ml/code/checkpoints.py", line 416, in load_checkpoint
loaded = _load_sharded(
File "/opt/ml/code/checkpoints.py", line 362, in _load_sharded
optimizer.load_state_dict(flattened_osd)
File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 379, in load_state_dict
state_dict = deepcopy(state_dict)
File "/opt/conda/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/lib/python3.10/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 118, in __deepcopy__
new_storage = self._typed_storage()._deepcopy(memo)
File "/opt/conda/lib/python3.10/site-packages/torch/storage.py", line 684, in _deepcopy
return self._new_wrapped_storage(copy.deepcopy(self._untyped_storage, memo))
File "/opt/conda/lib/python3.10/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/lib/python3.10/site-packages/torch/storage.py", line 98, in __deepcopy__
new_storage = self.clone()
File "/opt/conda/lib/python3.10/site-packages/torch/storage.py", line 112, in clone
return type(self)(self.nbytes(), device=self.device).copy_(self)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 4; 39.56 GiB total capacity; 36.98 GiB already allocated; 64.81 MiB free; 37.84 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Versions
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.26
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.186-179.751.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1996.805
BogoMIPS: 5999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.5 pypi_0 pypi
[conda] pytorch 2.0.1 sm_py3.10_cuda11.8_cudnn8.7.0_nccl2.18.3_0_smp_2.0.0b1_pt_2.0.1 s3://smdistributed-modelparallel-preview/smp-2.0.0b1-pt-2.0.1/build_artifacts/2023-09-15/smp-v2_preview
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 1.9.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu118 pytorch
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchtriton 2.0.0+b8b470bc59 py310 pytorch-nightly
[conda] torchvision 0.15.2 pypi_0 pypi
```
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu
| 7 |
568 | 110,477 |
[ONNX] Export and runtime error minifier
|
open source, release notes: onnx
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #110313
* __->__ #110477
* #110178
* #108376
| 1 |
569 | 110,476 |
[ONNX] Figure out aot inline strategy for Dort / onnxrt backend
|
module: onnx, triaged
|
`onnx.inliner` is applied to dort-exported intermediate ONNX models as a temporary solution to improve performance until runtimes adapt to the function format. This issue tracks related progress and discussions on this matter until the strategy is finalized.
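For reference, this is roughly how the inliner is invoked on a serialized model (assuming an onnx build that ships `onnx.inliner`; the paths are placeholders, and this is not the dort integration code):
```python
import onnx
import onnx.inliner  # assumes an onnx release that ships the inliner module

# Load the intermediate model produced by dort, inline its local functions,
# and save the flattened model for runtimes that don't handle function ops well.
model = onnx.load("dort_intermediate.onnx")           # placeholder path
inlined = onnx.inliner.inline_local_functions(model)
onnx.save(inlined, "dort_intermediate_inlined.onnx")  # placeholder path
```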
| 0 |
570 | 110,471 |
Support mutating constant attributes in export
|
ciflow/trunk, topic: not user facing, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110471
In this PR, we implement simulating setattr on nn.Module and HuggingFacePretrainedConfig objects while not changing the original object. For nn.Module, we do:
1. If the nn.Module has a custom setattr, we inline it
2. If the nn.Module has the default setattr, we inline it in torch.compile and simulate this behavior in export only. The reason is that inlining setattr will cause a graph break (https://github.com/pytorch/pytorch/blame/f2d7faf4ba92a6ed43890775ca6ca174ddbf99ea/torch/nn/modules/module.py#L1755). This is fine because the simulation only happens for constant attributes, so it is easy to keep track of.
For HuggingFacePreTrained, we do:
1. Inline setattr for torch.compile
2. Simulate the setattr in torch.export. This seems a bit problematic for attributes that are stored inside https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/configuration_utils.py#L254, but I am not really sure how likely these are to occur in practice. If the issue arises, we can revisit it.
In the future, I think we should create an ExportInstructionTranslator that knows how to deal with mutations.
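For illustration, this is the kind of program the change targets (a sketch, not a test from this PR):
```python
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.calls = 0  # plain constant (non-tensor) attribute

    def forward(self, x):
        self.calls = self.calls + 1  # setattr on a constant attribute
        return x * 2

m = Counter()
# Depending on the installed version, export may graph-break or raise here; with
# the simulation described above, tracing should not mutate the original module.
ep = torch.export.export(m, (torch.randn(2),))
print(m.calls)  # expected to remain 0 after export
```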
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
571 | 110,467 |
[optim] Better support subclassing usecase
|
release notes: optim
|
Need to add testing
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110467
| 1 |
572 | 110,465 |
Upgrade CI to ROCm5.7
|
module: rocm, open source, ciflow/trunk, topic: not user facing, ciflow/periodic, keep-going
|
This PR is to upgrade CI to ROCm5.7
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @zstreet87
| 11 |
573 | 110,461 |
Custom tensor attributes not preserved with registered functions
|
triaged, module: custom-operators, module: library
|
In the following snippet
```.py
import torch
class Fn(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
return x/x.coef
@staticmethod
def backward(ctx):
pass
def my_fn_impl(x):
x.coef = 2.
print("COEF", x.coef)
return Fn.apply(x)
from torch.library import Library
ns = 'capturable'
lib = Library(ns, "DEF")
fwd_name = lib.define(f"capturable::myop(Tensor x) -> Tensor")
lib.impl(fwd_name, my_fn_impl, 'CPU')
x = torch.randn(2, 3)
print(my_fn_impl(x)) #fine
print(torch.ops.capturable.myop(x)) # COEF is printed, but autograd fn says Tensor has no attribute coef
```
when `my_fn_impl` is registered on the CPU key, the `coef` attribute attached to the tensor in `my_fn_impl` is stripped by the autograd function
According to @zou3519
>For some reason, HermeticPyObjectTLS is set, which disables PyObject preservation (the thing that would make .coef show up on the Tensor)
cc @anjali411
| 4 |
574 | 110,455 |
Local build breakage on AWS cluster
|
module: build, triaged
|
### π Describe the bug
The build fails after https://github.com/pytorch/pytorch/pull/109986 was merged.
Repro command on AWS hpcaas:
`python setup.py clean && time CCACHE_DIR=/scratch/mlazos/.ccache_dir CXX=g++ CC=gcc USE_CUDA=1 USE_FBGEMM=0 BUILD_NVFUSER=0 USE_MKLDNN=0 python setup.py develop`
Adding BUILD_TEST=0 fixes the issue as a workaround, but this looks like it should be addressed.
### Versions
PyTorch version: 2.2.0a0+gitef5ff79
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Jan 11 2023, 16:05:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2999.998
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] CoCa-pytorch==0.0.6
[pip3] dalle2-pytorch==1.14.2
[pip3] ema-pytorch==0.0.10
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-transformers==1.2.0
[pip3] pytorch-warmup==0.1.0
[pip3] rotary-embedding-torch==0.1.5
[pip3] torch==2.2.0a0+gitef5ff79
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torch-optimizer==0.1.0
[pip3] torch-struct==0.5
[pip3] torchfile==0.1.0
[pip3] torchmetrics==0.10.0
[pip3] torchrec-nightly==2022.8.18
[pip3] torchx-nightly==2022.8.18
[pip3] triton==2.1.0
[pip3] vector-quantize-pytorch==0.9.2
[conda] blas 1.0 mkl
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] coca-pytorch 0.0.6 pypi_0 pypi
[conda] dalle2-pytorch 1.14.2 pypi_0 pypi
[conda] ema-pytorch 0.0.10 pypi_0 pypi
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.1.0 h06a4308_224
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] pytorch-transformers 1.2.0 pypi_0 pypi
[conda] pytorch-warmup 0.1.0 pypi_0 pypi
[conda] rotary-embedding-torch 0.1.5 pypi_0 pypi
[conda] torch 2.2.0a0+gitef5ff79 dev_0 <develop>
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-optimizer 0.1.0 pypi_0 pypi
[conda] torch-struct 0.5 pypi_0 pypi
[conda] torchaudio 2.1.0a0+406e9c8 dev_0 <develop>
[conda] torchdata 0.7.0a0+901b483 dev_0 <develop>
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchmetrics 0.10.0 pypi_0 pypi
[conda] torchrec-nightly 2022.8.18 pypi_0 pypi
[conda] torchtext 0.14.0a0+e1b6984 pypi_0 pypi
[conda] torchvision 0.16.0a0+498b9c8 dev_0 <develop>
[conda] torchx-nightly 2022.8.18 pypi_0 pypi
[conda] triton 2.1.0 dev_0 <develop>
[conda] vector-quantize-pytorch 0.9.2 pypi_0 pypi
cc @malfet @seemethere
| 5 |
575 | 110,453 |
[fuzzing result][fuzz_torch_jit_lite_interpreter] read-heap-buffer-overflow-far-from-bounds (size 4) in c10::IValue::IValue()
|
fb-exported, release notes: mobile
|
Summary: This diff fixes an OOB read found by fuzzing in torch/../jit/mobile
Test Plan:
CI and
```
arc lionhead crash reproduce 853835926354224
```
doesn't crash anymore.
Differential Revision: D49537377
| 2 |
576 | 110,452 |
[pytorch][PR][inductor] Change log to debug for Optimus
|
fb-exported, module: inductor, ciflow/inductor
|
Summary: The log breaks one of the ads-model export flows, so we change the log level to debug
Test Plan: see details in D49710166
Differential Revision: D49844303
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 8 |
577 | 110,450 |
`test_pytorch_onnx_onnxruntime_cuda.py` is not run in CI
|
module: onnx, module: cuda, module: tests, triaged
|
Currently this file fails at import time because of this import:
https://github.com/pytorch/pytorch/blob/4069d1de59bb5cd60c5364acd2835fbf9c6e601c/test/onnx/test_pytorch_onnx_onnxruntime_cuda.py#L17-L21
which imports the `*_OPSET_VERSION` constants from the wrong file
https://github.com/pytorch/pytorch/blob/4069d1de59bb5cd60c5364acd2835fbf9c6e601c/test/onnx/onnx_test_common.py#L436-L439
I am incidentally fixing the import issue in https://github.com/pytorch/pytorch/pull/110310#discussion_r1342669352, but the fact that it fails at all means it must not be run in CI.
cc @ptrblck @mruberry @ZainRizvi
| 0 |
578 | 110,449 |
[C10D] Split watchdog into CUDA and non-cuda calls.
|
open source, release notes: distributed (c10d)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110449
A common problem with the current watchdog is that it hangs while trying to abort NCCL or destroy events; those operations take CUDA locks that might be held by the main thread, which is starved due to a stuck collective.
We address this by introducing a second thread, called the monitor thread.
Here are some details of this design.
We introduce `started_` and `completed_` atomics that track progress.
We set them as part of isCompleted and isStarted. Note that we can't detect that a collective started if we're not recording those events.
Important visibility rule: we set `started_` then `completed_`, so we must read them in the opposite order.
Beyond that, we use a pair of queues so collectives can flow between them as they complete. The new queue is used for the sole purpose of carrying out the destruction on the monitor thread.
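A rough Python-level sketch of the design described above (purely illustrative; the real implementation lives in C++ inside ProcessGroupNCCL, and the names here are hypothetical):
```python
import queue
import threading

class Work:
    def __init__(self):
        self.started = False     # written first
        self.completed = False   # written second

    def is_completed(self):
        # Read in the opposite order of the writes: completed first, then started.
        completed = self.completed
        started = self.started
        return completed and started

progress_queue = queue.Queue()   # the watchdog polls progress here (no CUDA calls)
cleanup_queue = queue.Queue()    # finished work is handed over for destruction

def monitor_loop():
    # Only this thread touches CUDA (event destruction, NCCL abort), so the
    # watchdog can never hang on CUDA locks held by a stuck main thread.
    while True:
        work = cleanup_queue.get()
        if work is None:
            break
        # ... destroy events / abort communicators for `work` here ...

threading.Thread(target=monitor_loop, daemon=True).start()
```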
| 1 |
579 | 110,448 |
Explore Hybrid (CPU+GPU) Graphs in Scalar parameters
|
oncall: pt2, module: inductor
|
### π The feature, motivation and pitch
As observed across https://github.com/pytorch/pytorch/issues/107006#issuecomment-1741841283, https://github.com/pytorch/pytorch/pull/110345#discussion_r1344183030
Inductor currently is unable to reason properly about graphs that have tensors spread across CPU and GPU.
This forces us to coerce the scalar computations and their corresponding scalar inputs to GPU in Adam-like optimizers and in Adagrad/Adamax. Inductor then launches a single small prelude kernel. This choice may be suboptimal, as such a small kernel launch is unlikely to saturate the GPU.
https://github.com/pytorch/pytorch/blob/ff96f6d04f062e660224e7ba1f00da867d25d0de/test/inductor/test_compiled_optimizers.py#L128
One possible solution would be to have inductor generate CPU kernels for the scalar part of the computation, and GPU kernels for the main optimizer step.
### Alternatives
It is unclear if this incurs a large enough cost to justify the effort.
It is also unclear if saturating the GPU with this kernel is necessary - if multiple chunks/groups are launched, then a small kernel can be running concurrently alongside a big kernel launch.
@mlazos @janeyx99 for advice on whether this direction deserves more investigation.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 3 |
580 | 110,447 |
Using `torch.onnx.export` from file named `onnx.py` results in cryptic error message
|
module: onnx, triaged
|
### π Describe the bug
Right now, there's a massive footgun when using `torch.onnx.export()` from a file named `onnx.py`.
```python3
# File contents of onnx.py
import torch
import torch.nn as nn
torch.onnx.export(nn.Linear(10, 10), args=torch.ones(10), f='output.onnx')
```
This results in a few dozen lines of rather cryptic error messages:
```
Traceback (most recent call last):
File "/path/to/project/onnx.py", line 3, in <module>
torch.onnx.export(nn.Linear(10, 10), args=torch.ones(10), f='output.onnx')
File "/path/to/python3.9/site-packages/torch/__init__.py", line 1868, in __getattr__
return importlib.import_module(f".{name}", __name__)
File "/path/to/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/path/to/python3.9/site-packages/torch/onnx/__init__.py", line 57, in <module>
from ._internal.onnxruntime import (
File "/path/to/python3.9/site-packages/torch/onnx/_internal/onnxruntime.py", line 38, in <module>
import onnx
File "/path/to/project/onnx.py", line 3, in <module>
torch.onnx.export(nn.Linear(10, 10), args=torch.ones(10), f='output.onnx')
File "/path/to/python3.9/site-packages/torch/onnx/utils.py", line 516, in export
_export(
File "/path/to/python3.9/site-packages/torch/onnx/utils.py", line 1590, in _export
with exporter_context(model, training, verbose):
File "/path/to/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/path/to/python3.9/site-packages/torch/onnx/utils.py", line 179, in exporter_context
with select_model_mode_for_export(
File "/path/to/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/path/to/python3.9/site-packages/torch/onnx/utils.py", line 166, in setup_onnx_logging
is_originally_enabled = torch.onnx.is_onnx_log_enabled()
AttributeError: partially initialized module 'torch.onnx' has no attribute 'is_onnx_log_enabled' (most likely due to a circular import)
```
When `torch` imports `onnx` [here](https://github.com/pytorch/pytorch/blob/main/torch/onnx/_internal/onnxruntime.py#L34), the local file seems to be preferred over the pip installed `onnx`. According to #98271, I'm not the first one to encounter this issue, so we should probably throw a more descriptive error.
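A hedged sketch of the kind of check that could produce a friendlier error (the message and placement are my assumption, not the library's actual behaviour):
```python
import os
import sys

def _raise_if_onnx_is_shadowed():
    # If a local onnx.py shadows the installed onnx package, its __file__ points
    # into the user's working directory instead of site-packages.
    mod = sys.modules.get("onnx")
    if mod is not None and getattr(mod, "__file__", None):
        if os.path.dirname(os.path.abspath(mod.__file__)) == os.getcwd():
            raise ImportError(
                "A local file named onnx.py is shadowing the installed 'onnx' "
                "package; rename the script to avoid this circular import."
            )
```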
### Versions
torch==2.2.0.dev20231002+cu118
| 1 |
581 | 110,439 |
Torch.onnx.export of a module using positional and keyword arguments
|
module: onnx, triaged
|
### π Describe the bug
I defined a simple nn.Module with forward(..) function using positional and keyword arguments:
```
import torch
import torch.nn as nn
cuda0 = torch.device('cuda:0')
x = torch.tensor([[[[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]]]).to(device=cuda0)
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 1, 2)
        ).to(device=cuda0)
    def forward(self, cond, **kwargs):
        if (cond):
            return self.net(kwargs['input'])
        else:
            return torch.tensor(0).to(device=cuda0)
module = MyModule()
module(torch.tensor(True).to(device=cuda0), **{'input': x})
```
Next, I am trying to export this module to onnx:
```
torch.onnx.export(module,
args=(torch.tensor(True).to(device=cuda0), {'input': x}),
f='sample.onnx', input_names=['input'], output_names=['output'], export_params=True)
```
But that leads to an error:
`TypeError: forward() takes 2 positional arguments but 3 were given`
I suppose I am doing that according to the documentation:
> All but the last element of the tuple will be passed as non-keyword arguments, and named arguments will be set from the last element.
https://pytorch.org/docs/stable/onnx.html
I suppose the forward(..) signature with **kwargs is unusual, but I can't change it; that signature comes from MMDetection3D.
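One possible workaround (my assumption, not something the exporter documentation promises) is to export a thin wrapper whose forward declares the keyword argument explicitly and re-packs it into **kwargs; this reuses `module`, `x`, and `cuda0` from the snippet above:
```python
class ExportWrapper(nn.Module):
    def __init__(self, wrapped):
        super().__init__()
        self.wrapped = wrapped

    def forward(self, cond, input):
        # Re-pack the explicit argument into the **kwargs the wrapped module expects.
        return self.wrapped(cond, input=input)

torch.onnx.export(ExportWrapper(module),
                  args=(torch.tensor(True).to(device=cuda0), x),
                  f='sample_wrapped.onnx', input_names=['cond', 'input'],
                  output_names=['output'], export_params=True)
```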
### Versions
Collecting environment information...
PyTorch version: 1.8.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.1.105
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 535.113.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz
Stepping: 3
CPU MHz: 2900.000
CPU max MHz: 4300,0000
CPU min MHz: 800,0000
BogoMIPS: 5799.77
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1,5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] ament-flake8==0.9.8
[pip3] flake8==3.7.9
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.19.5
[pip3] torch==1.8.0+cu111
[pip3] torch-scatter==2.0.9
[pip3] torchex==0.1.0
[pip3] torchvision==0.9.0+cu111
[pip3] triton==2.0.0
| 2 |
582 | 110,437 |
[fx] Add cache_result option in ShapeProp which can help cache intermediate tensors in meta
|
fb-exported, release notes: fx
|
Summary: While authoring transformations, we want to compare the results before and after to make sure the transformation works as expected. This extends ShapeProp to cache intermediate results when `cache_result=True` is passed.
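A possible usage sketch; it assumes `cache_result` is accepted by the `ShapeProp` constructor, which is my reading of the summary rather than a verified API:
```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.shape_prop import ShapeProp

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

gm = symbolic_trace(M())
# With cache_result=True, intermediate tensors are kept in node.meta so that a
# transformed graph can later be compared against them node by node.
ShapeProp(gm, cache_result=True).propagate(torch.randn(2, 3))
```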
Test Plan:
buck run caffe2/test:fx
```
test_shape_prop_cache_result (test_fx.TestFX) ... ok
...
t_type_check_reshape_true (fx.test_gradual_type.TypeCheckerTest) ... ok
test_type_check_symbolic_inferenceconv2D_maxpool2d_flatten (fx.test_gradual_type.TypeCheckerTest) ... ok
test_type_check_transpose_False (fx.test_gradual_type.TypeCheckerTest) ... ok
test_type_check_transpose_true (fx.test_gradual_type.TypeCheckerTest) ... ok
test_type_maxpool2d_fully_static (fx.test_gradual_type.TypeCheckerTest) ... ok
test_type_typechecl_maxpool2d_3dinput (fx.test_gradual_type.TypeCheckerTest) ... ok
test_typecheck_basicblock (fx.test_gradual_type.TypeCheckerTest) ... ok
----------------------------------------------------------------------
Ran 1234 tests in 48.683s
```
Differential Revision: D49857532
| 4 |
583 | 110,436 |
Pytorch for Python 3.12 not available
|
module: build, triaged, module: python frontend
|
### π The feature, motivation and pitch
On Windows, a PyTorch installation for Python 3.12 is not available:
```
C:\> python312.exe -m pip install torch
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
```
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere @albanD
| 3 |
584 | 110,422 |
jacrev Issue when Using Cuda
|
triaged, module: functorch
|
### π Describe the bug
Hi all,
I am trying to use jacrev to get model Jacobians. However, jacrev does not work when the device is set to "cuda", while it works perfectly on CPU.
Please see the toy example below for reproducing the issue:
```
import torch
import torch.nn as nn
from torch.func import jacrev, functional_call
device = "cuda"
from torch.func import jacrev, functional_call
class LSTMModel(nn.Module):
    def __init__(self, input_size, lstm_units, dense_units):
        super(LSTMModel, self).__init__()
        self.lstm = nn.LSTM(input_size, lstm_units, batch_first=True)
        self.fc1 = nn.Linear(lstm_units, dense_units)
        self.fc2 = nn.Linear(dense_units, 1)
    def forward(self, x):
        lstm_out, _ = self.lstm(x)
        lstm_out = torch.tanh(lstm_out)
        fc1_out = torch.tanh(self.fc1(lstm_out))
        output = self.fc2(fc1_out)
        return output
input_size = 2
lstm_units = 32
dense_units = 16
model = LSTMModel(input_size, lstm_units, dense_units)
inputs = torch.randn(5, 100, 2)
model.to(device); inputs = inputs.to(device)
params = dict(model.named_parameters())
# jacrev computes jacobians of argnums=0 by default.
# We set it to 1 to compute jacobians of params
jacobians = jacrev(functional_call, argnums=1)(model, params, (inputs,))
```
and here is the full traceback when executing the above: (note that if you change the device to "cpu", it works without any issues).
```
Traceback (most recent call last):
File "C:\Users\kshamsaei\Desktop\jac_test.py", line 39, in <module>
jacobians = jacrev(functional_call, argnums=1)(model, params, (inputs,))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\_functorch\eager_transforms.py", line 489, in wrapper_fn
vjp_out = _vjp_with_argnums(func, *args, argnums=argnums, has_aux=has_aux)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\_functorch\vmap.py", line 39, in fn
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\_functorch\eager_transforms.py", line 291, in _vjp_with_argnums
primals_out = func(*primals)
^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\_functorch\functional_call.py", line 143, in functional_call
return nn.utils.stateless._functional_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\utils\stateless.py", line 262, in _functional_call
return module(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Desktop\jac_test.py", line 20, in forward
lstm_out, _ = self.lstm(x)
^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\modules\rnn.py", line 762, in forward
self._init_flat_weights()
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\modules\rnn.py", line 139, in _init_flat_weights
self.flatten_parameters()
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\modules\rnn.py", line 176, in flatten_parameters
unique_data_ptrs = {p.data_ptr() for p in self._flat_weights}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kshamsaei\Miniconda3\envs\torch\Lib\site-packages\torch\nn\modules\rnn.py", line 176, in <setcomp>
unique_data_ptrs = {p.data_ptr() for p in self._flat_weights}
^^^^^^^^^^^^
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows Server 2019 Standard
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:47:18) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.17763-SP0
Is CUDA available: True
CUDA runtime version: 5.5.0
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro P2000
Nvidia driver version: 511.09
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2101
DeviceID=CPU0
Family=179
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2101
Name=Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
ProcessorType=3
Revision=21764
Architecture=9
CurrentClockSpeed=2101
DeviceID=CPU1
Family=179
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2101
Name=Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
ProcessorType=3
Revision=21764
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2a0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46357
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl_fft 1.3.6 py311hf62ec03_1
[conda] mkl_random 1.2.2 py311hf62ec03_1
[conda] numpy 1.25.2 py311hdab7c0b_0
[conda] numpy-base 1.25.2 py311hd01c5d8_0
[conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8_0 pytorch
[conda] pytorch-cuda 11.8 h24eeafa_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchvision 0.15.2 cpu_py311haf6e6b9_0
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 4 |
585 | 110,418 |
bypass nvml for torch.cuda.device_count() if rocm
|
module: rocm, triaged, open source, ciflow/trunk, topic: not user facing, ciflow/periodic, rocm
|
This is a quick fix to suppress printing "UserWarning: Can't initialize NVML" when calling torch.cuda.device_count() if the [NVIDIA Management Library](https://developer.nvidia.com/nvidia-management-library-nvml) (nvml module) is installed alongside ROCm.
Fixes https://ontrack-internal.amd.com/browse/SWDEV-414997
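A minimal sketch of the kind of guard this implies (an assumption about the approach, not the actual patch):
```python
import torch

def _nvml_applicable() -> bool:
    # torch.version.hip is non-None on ROCm builds, where NVML should not be
    # consulted even if the pynvml module happens to be installed.
    return torch.version.hip is None
```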
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 5 |
586 | 110,417 |
dynamo support for functorch `grad` with pytree inputs.
|
open source, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110417
* #107618
* #109433
* #110290
This PR removes a graph break when `torch.func.grad` is used with pytree inputs. In summary: we flatten the inputs when adding graph inputs, then unflatten them before calling the original function.
The following example is extracted from `test_grad_pytree` in test/dynamo/test_higher_order_ops.py. There are no graph breaks after this PR.
```python
def fn(x):
    x1, x2 = x
    return x1.sin().sum() + x2
@torch.compile
def wrapper_fn(x):
    return torch.func.grad(fn)(x)
```
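The flatten/unflatten pattern itself looks roughly like this (a sketch using the public pytree utilities, not the Dynamo-internal code):
```python
import torch
import torch.utils._pytree as pytree

x = (torch.randn(3), torch.tensor(1.0))

# Flatten the pytree input into a flat list of tensors (the graph inputs) plus a spec.
flat_args, spec = pytree.tree_flatten(x)

# Before calling the original function, the flat tensors are unflattened back
# into the original structure.
restored = pytree.tree_unflatten(flat_args, spec)
```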
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
587 | 110,416 |
DynamicShapesExportTests::test_retracibility_dynamic_shapes times out with ASAN
|
oncall: pt2
|
The test has been timing out for some time, succeeding on retries.
Now it reliably times out:
see linux-jammy-py3.9-clang12-asan / test (default, 2, 6, linux.4xlarge) at
https://hud.pytorch.org/pytorch/pytorch/commit/16e3f158b947c2c14a98178670f22c047a40807c
> dynamo/test_dynamic_shapes.py::DynamicShapesExportTests::test_retracibility_dynamic_shapes <- test/dynamo/test_export.py Command took >30min, returning 124
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
588 | 110,413 |
Add support for `torch.Generator` type in TorchScript
|
triaged, open source, release notes: jit, ciflow/inductor, suppress-bc-linter
|
- Add support for `torch.Generator` type in TorchScript
- Add `generator` args to all `torch.nn.init` functions that call `uniform_` or `normal_`
- Add support for `torch.Generator` in LTC's TorchScript backend (CC: @wconstab)
CC: @eellison @davidberard98 @GlebKazantaev @behzad-a
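A small usage sketch of what the listed changes enable, assuming the `generator` keyword and the TorchScript `torch.Generator` support land as described above:
```python
import torch

# Reproducible init via the new `generator` kwarg on torch.nn.init functions.
g = torch.Generator()
g.manual_seed(0)
w = torch.empty(4, 4)
torch.nn.init.uniform_(w, a=-0.1, b=0.1, generator=g)

# torch.Generator as a first-class TorchScript type.
@torch.jit.script
def sample(gen: torch.Generator, n: int) -> torch.Tensor:
    return torch.randn([n], generator=gen)
```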
| 12 |
589 | 110,401 |
WIP / TST: allow testing torch._numpy under Dynamo
|
module: numpy, open source, topic: not user facing, module: dynamo, ciflow/inductor, release notes: dynamo, ciflow/slow
|
Use conditional imports: when running under dynamo, import the original NumPy, not torch._numpy; the original NumPy is what we want to trace, not our implementation.
With this, the test suite passes with and without `PYTORCH_TEST_WITH_DYNAMO=1` (modulo a couple of test modules which are not meant to be compiled, e.g. `test_nep50_examples`). There are two new decorators, `x{fail,pass}ifTorchDynamo`; an `xpass` in most cases indicates a graph break and a fallback to eager for things we do not implement.
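The conditional-import pattern described above looks roughly like this (the exact flag and its placement in the test suite are my assumption):
```python
from torch.testing._internal.common_utils import TEST_WITH_TORCHDYNAMO

if TEST_WITH_TORCHDYNAMO:
    # Under dynamo we want to trace the real NumPy (which dynamo maps onto torch._numpy).
    import numpy as np
else:
    # Otherwise exercise our reimplementation directly.
    import torch._numpy as np
```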
cc @mruberry @rgommers @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
590 | 110,400 |
[experiment] do not use slow test json
|
ciflow/trunk, module: dynamo, ciflow/inductor, ciflow/slow
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
591 | 110,396 |
Add scatter_mm and bsr_scatter_mm operations.
|
module: sparse, open source, release notes: sparse, topic: new features, topic: performance
|
This PR introduces the `scatter_mm` operation (which computes `mm` over arbitrary pairs of tensors given in batches) and uses it to implement `bsr_scatter_mm`, which is equivalent to `bsr_dense_mm` (the `mm` operation on BSR and strided tensors). The implementation is provided both in Triton (when tensor dimensions are multiples of 16) and in PyTorch (otherwise).
The figures below illustrate the performance differences of `bsr_scatter_mm` and `bsr_dense_mm` (GPU: `NVIDIA GeForce RTX 2060 SUPER`). The first figure represents the performance equilibrium point in BSR tensor sparsity at which value `bsr_scatter_mm` or `bsr_dense_mm` have the same performance characteristics as `torch.matmul`. The second figure represents speedups from using `bsr_scatter_mm` at its performance equilibrium points with respect to `bsr_dense_mm`.
<img src="https://github.com/pytorch/pytorch/assets/402156/526d182e-937f-4812-a6c4-904f52d6d5ab" width="48%"> <img src="https://github.com/pytorch/pytorch/assets/402156/ccb606ab-1f3f-4133-887c-b56285f4f168" width="48%">
The same figures for GPU card `NVIDIA A100-SXM4-80GB`:
<img src="https://github.com/pytorch/pytorch/assets/402156/25466f1d-df34-4d1c-a975-afb478e4d9f0" width="48%"> <img src="https://github.com/pytorch/pytorch/assets/402156/6ada91f0-a20f-4f0d-8a48-1f4ccc60d08e" width="48%">
In sum:
- `bsr_scatter_mm` is about 2x faster than `bsr_dense_mm` for small block sizes of 16 and 32 and large tensors [GPU: `NVIDIA GeForce RTX 2060 SUPER`].
- `bsr_scatter_mm` is up to 2x faster than `bsr_dense_mm` for small block sizes of 16 and large tensors [GPU: `NVIDIA A100-SXM4-80GB`].
- `bsr_dense_mm` is up to 20 % faster than `bsr_scatter_mm` for block sizes of 64 or larger [GPU: `NVIDIA GeForce RTX 2060 SUPER`].
- However, `bsr_dense_mm` fails with `OutOfResources` exception for block sizes of 256 or larger whereas `bsr_scatter_mm` succeeds.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111489
* #111470
* __->__ #110396
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 1 |
592 | 110,387 |
Different results for forward pass of two equal tensors through Conv2d
|
triaged, module: numerical-reproducibility
|
### π Describe the bug
Forwarding a tensor `img` through a simple PyTorch Conv2d model produces a different result than forwarding `img + torch.zeros_like(img)`.
Here is a minimal example: https://github.com/dozed/pytorch-issue-1/blob/main/test_issue.py
```python
# prepare input
img = torchvision.io.image.read_image('sky1024px.jpg')
img = FT.convert_image_dtype(img, torch.float32)
img = img.unsqueeze(dim=0)
# prepare input + zero
zeros = torch.zeros_like(img)
img_updated = img + zeros
# input tensors are identical
assert torch.allclose(img, img_updated)
assert torch.equal(img, img_updated)
# prepare model
conv1 = nn.Conv2d(in_channels=3, out_channels=129, kernel_size=3, padding=1)
conv2 = nn.Conv2d(in_channels=129, out_channels=4, kernel_size=1)
conv1.requires_grad_(False)
conv2.requires_grad_(False)
# forward 1
x = conv1(img)
result1 = conv2(x)
# forward 2
y = conv1(img_updated)
result2 = conv2(y)
# tensors after conv1 are equal
assert torch.allclose(x, y)
assert torch.equal(x, y)
# ISSUE: the results are not equal but should be, since only zeros are added
print(torch.linalg.norm(result1 - result2))
assert torch.allclose(result1, result2)
assert torch.equal(result1, result2)
```
### Versions
```
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 23.04 (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~23.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.37
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.37
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-1265U
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 37%
CPU max MHz: 4800,0000
CPU min MHz: 400,0000
BogoMIPS: 5376,00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 352 KiB (10 instances)
L1i cache: 576 KiB (10 instances)
L2 cache: 6,5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.3
[pip3] pytorch-caffe-models==0.1
[pip3] torch==2.0.1
[pip3] torchdata==0.6.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] torchviz==0.0.2
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.24.3 py311h08b1b3b_1
[conda] numpy-base 1.24.3 py311hf175353_1
[conda] pytorch 2.0.1 py3.11_cpu_0 pytorch
[conda] pytorch-caffe-models 0.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.2 py311_cpu pytorch
[conda] torchviz 0.0.2 pypi_0 pypi
```
| 4 |
593 | 110,381 |
Allow storages to alias even when the deleter is deleteNothing
|
triaged, open source, topic: not user facing
|
This improves the debugging experience for devices that provide a custom storage pointer.
| 5 |
594 | 110,380 |
Removed special case for reshape on the IPU
|
triaged, open source, ciflow/trunk, topic: not user facing
|
This is no longer needed by the IPU.
| 12 |
595 | 110,379 |
Pytorch LoadNativeLibrary issue
|
oncall: mobile
|
### π LoadNativeLibrary Crash in .ptl model
Hello,
I was trying to run an object detection app, but it crashes when the PyTorch model is loaded, right at the start:
```
mModule = LiteModuleLoader.load(
assetFilePath(
context,
ptlModel
))
```
4080 4080 F /system/bin/app_process64: runtime.cc:1709] LoadNativeLibrary failed for "libjavacore.so":
I'm using a trained model (.ptl).
The model is loaded similarly to the demo app:
https://github.com/pytorch/android-demo-app/blob/master/ObjectDetection/README.md
gradle:
implementation 'org.pytorch:pytorch_android_lite:1.13.0'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.13.0'
How can I fix this error?
### Versions
pytorch_android_lite:1.13.0
pytorch_android_torchvision_lite:1.13.0
| 0 |
596 | 110,378 |
[3/N] Clean up CMake target linking
|
triaged, open source, ciflow/binaries, release notes: distributed (c10d)
|
This PR simplifies handling of more targets.
cc @albanD @malfet
| 3 |
597 | 110,377 |
[xla hash update] update the pinned xla hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned xla hash.
| 4 |
598 | 110,375 |
[dynamo][nn_module] Save the nn module object as value
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110375
* #110535
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
599 | 110,366 |
Categorical Simplex constraint throws error for valid values
|
module: distributions, triaged
|
### π Describe the bug
When using `torch.distributions.Categorical` with a tensor of `torch.float16`, the Simplex validation fails.
Reproduces with the following code (on GPU)
```python
import torch
import torch.distributions as dist
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
values = torch.tensor([0.10089111328125, 0.1925048828125, 0.15234375, 0.12152099609375, 0.1510009765625, 0.1805419921875, 0.150634765625, 0.1253662109375, 0.1356201171875, 0.14404296875, 0.13671875, 0.142822265625, 0.171630859375, 0.1619873046875, 0.1708984375, 0.083251953125, 0.178955078125, 0.121337890625, 0.1776123046875, 0.1689453125, 0.2469482421875, 0.193603515625, 0.162841796875, 0.2244873046875, 0.2734375, 0.26806640625, 0.1658935546875, 0.191162109375, 0.2017822265625, 0.191650390625, 0.1575927734375, 0.131103515625, 0.1680908203125, 0.1553955078125, 0.2208251953125, 0.2529296875, 0.2352294921875, 0.29443359375, 0.3447265625, 0.355224609375, 0.346923828125, 0.343994140625, 0.3125, 0.19873046875, 0.20751953125, 0.210205078125, 0.214599609375, 0.1605224609375, 0.1568603515625, 0.1431884765625, 0.2003173828125, 0.32080078125, 0.359619140625, 0.425048828125, 0.327392578125, 0.412841796875, 0.4619140625, 0.39892578125, 0.2408447265625, 0.206787109375, 0.254150390625, 0.314453125, 0.1744384765625, 0.1451416015625, 0.15625, 0.156982421875, 0.169189453125, 0.25146484375, 0.388427734375, 0.321044921875, 0.307861328125, 0.5224609375, 0.6650390625, 0.404296875, 0.263427734375, 0.389404296875, 0.44140625, 0.4267578125, 0.358154296875, 0.1767578125, 0.1795654296875, 0.2413330078125, 0.27001953125, 0.263671875, 0.2724609375, 0.283447265625, 0.28076171875, 0.41455078125, 0.3759765625, 0.30029296875, 0.40478515625, 0.4375, 0.486083984375, 0.60009765625, 0.4970703125, 0.178955078125, 0.26025390625, 0.27197265625, 0.359619140625, 0.381103515625, 0.3330078125, 0.31689453125, 0.368896484375, 0.40087890625, 0.321044921875, 0.299072265625, 0.489013671875, 0.54638671875, 0.654296875, 0.70068359375, 0.36767578125, 0.15576171875, 0.163818359375, 0.30322265625, 0.376953125, 0.385986328125, 0.306396484375, 0.3271484375, 0.39599609375, 0.331787109375, 0.301025390625, 0.3671875, 0.423095703125, 0.465087890625, 0.68896484375, 0.46337890625, 0.3037109375, 0.195068359375, 0.2332763671875, 0.298828125, 0.408935546875, 0.453125, 0.3779296875, 0.30859375, 0.27490234375, 0.260986328125, 0.29248046875, 0.360595703125, 0.5068359375, 0.36669921875, 0.443603515625, 0.349365234375, 0.29345703125, 0.2353515625, 0.233642578125, 0.335693359375, 0.391845703125, 0.453857421875, 0.371826171875, 0.40673828125, 0.47265625, 0.5556640625, 0.464599609375, 0.51171875, 0.469482421875, 0.4462890625, 0.47216796875, 0.37744140625, 0.36669921875, 0.248291015625, 0.20654296875, 0.2359619140625, 0.295654296875, 0.34130859375, 0.33837890625, 0.4853515625, 0.4111328125, 0.371337890625, 0.31787109375, 0.4345703125, 0.5263671875, 0.376220703125, 0.421875, 0.36572265625, 0.3193359375, 0.198974609375, 0.1947021484375, 0.22119140625, 0.27685546875, 0.291748046875, 0.257080078125, 0.32666015625, 0.31201171875, 0.29931640625, 0.2421875, 0.344482421875, 0.42041015625, 0.339111328125, 0.31689453125, 0.2176513671875, 0.222412109375, 0.174560546875, 0.1805419921875, 0.2244873046875, 0.288818359375, 0.300537109375, 0.283203125, 0.34375, 0.387939453125, 0.348388671875, 0.2919921875, 0.245849609375, 0.313232421875, 0.3017578125, 0.33984375, 0.2264404296875, 0.2109375, 0.1680908203125, 0.188232421875, 0.25244140625, 0.256591796875, 0.3115234375, 0.34130859375, 0.36474609375, 0.298583984375, 0.273193359375, 0.2275390625, 0.236572265625, 0.237060546875, 0.239990234375, 0.2010498046875, 0.196044921875, 0.1680908203125, 0.1383056640625, 0.209228515625, 0.1905517578125, 0.1845703125, 0.206787109375, 0.1903076171875, 0.1800537109375, 0.1734619140625, 
0.15087890625, 0.1568603515625, 0.1729736328125, 0.1798095703125, 0.172119140625, 0.1512451171875, 0.1392822265625, 0.1326904296875, 0.129638671875, 0.1116943359375, 0.1904296875, 0.1793212890625, 0.2366943359375, 0.2391357421875, 0.197021484375, 0.2291259765625, 0.207763671875, 0.2169189453125, 0.2010498046875, 0.1904296875, 0.155029296875, 0.171875, 0.15380859375, 0.189697265625, 0.1802978515625])
values = values.to(device, torch.float16) # If this line is commented out, there is no error
q = dist.Categorical(values)
```
The error is
```
Expected parameter probs (Tensor of shape (256,)) of distribution Categorical(probs: torch.Size([256])) to satisfy the constraint Simplex(), but found invalid values:
tensor([0.0014, 0.0026, 0.0021, 0.0017, 0.0021, 0.0025, 0.0021, 0.0017, 0.0019, ...
File "<python_path>/lib/python3.10/site-packages/torch/distributions/distribution.py", line 62, in __init__
raise ValueError(
File "<python_path>/lib/python3.10/site-packages/torch/distributions/categorical.py", line 66, in __init__
super().__init__(batch_shape, validate_args=validate_args)
File "<project_path>/run.py", line 6, in <module>
q = dist.Categorical(values)
File "<python_path>/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "<python_path>/lib/python3.10/runpy.py", line 196, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
```
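A common way to sidestep the precision-sensitive simplex check, reusing `dist` and `values` from the snippet above (offered as a hedged workaround, not a fix for the underlying float16 validation issue):
```python
# Upcast just for construction; float16 probs rarely renormalize to exactly 1.
q = dist.Categorical(probs=values.float())

# Or pass unnormalized log-probabilities, which avoids the simplex constraint entirely.
q = dist.Categorical(logits=values.float().log())
```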
### Versions
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (GCC) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 470.129.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1710.557
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4389.92
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 10 MiB
L3 cache: 100 MiB
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] clip-anytorch==2.5.2
[pip3] dctorch==0.1.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] open-clip-torch==2.16.2
[pip3] rotary-embedding-torch==0.3.0
[pip3] torch==2.0.0
[pip3] torchdiffeq==0.2.3
[pip3] torchsde==0.2.6
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] clip-anytorch 2.5.2 pypi_0 pypi
[conda] dctorch 0.1.2 pypi_0 pypi
[conda] numpy 1.25.2 pypi_0 pypi
[conda] open-clip-torch 2.16.2 pypi_0 pypi
[conda] rotary-embedding-torch 0.3.0 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @fritzo @neerajprad @alicanb @nikitaved
| 0 |
600 | 110,363 |
nn.BatchNorm2d (track_running_stats = True) causes "modified by an in-place operation" error when in torch.nn.parallel.DistributedDataParallel
|
oncall: distributed, triaged
|
### π Describe the bug
I have already raised this issue on Torch Discussions [(link)](https://discuss.pytorch.org/t/runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation-batchnorm2d-track-running-stats/189071/1).
The linked post includes a snippet of code and a temporary working solution.
If we do a second forward pass on a model containing nn.BatchNorm2d with track_running_stats = True while wrapped in `torch.nn.parallel.DistributedDataParallel`, the code throws the error:
```RuntimeError: one of the variables needed for gradient computation has been modified by an in-place operation```
The only ways to prevent this error are:
1. Run with track_running_stats = False, or
2. Disable torch.nn.parallel.DistributedDataParallel, or
3. Modify the nn.BatchNorm2d implementation:
```
self.running_mean_copy = copy.deepcopy(self.running_mean)
self.running_var_copy = copy.deepcopy(self.running_var)
F.batch_norm(
input,
self.running_mean_copy if not self.training or self.track_running_stats else None,
self.running_var_copy if not self.training or self.track_running_stats else None,
self.weight,
self.bias,
bn_training,
exponential_average_factor,
self.eps,
)
self.running_mean = self.running_mean * 0 + self.running_mean_copy
self.running_var = self.running_var * 0 + self.running_var_copy
```
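A further mitigation that is often suggested for this failure mode (offered as an assumption, not verified against this exact setup) is to stop DDP from rewriting the buffers in place between forward passes by disabling buffer broadcasting; `local_rank` here is a hypothetical placeholder:
```python
model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],   # hypothetical rank variable
    broadcast_buffers=False,   # avoid in-place buffer rewrites between forward passes
)
```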
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-38-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2295 CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-msssim==1.0.0
[pip3] torch==1.13.1+cu117
[pip3] torchaudio==0.13.1+cu117
[pip3] torchvision==0.14.1+cu117
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch-msssim 1.0.0 pypi_0 pypi
[conda] torch 1.13.1+cu117 pypi_0 pypi
[conda] torchaudio 0.13.1+cu117 pypi_0 pypi
[conda] torchvision 0.14.1+cu117 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 1 |