Dataset Viewer

Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
1 | 111,756 |
userwarning: loky-backed parallel loops cannot be called in a multiprocessing with num_workers=1 but two dataloaders
| null |
### 🐛 Describe the bug
Setting num_workers=1 speeds up the dataloader a lot, but it doesn't seem to work when I have more than one dataloader. When I only have one, no warning appears and the enumeration takes only 0.3 s. However, when I have two dataloaders (train and val), the warning appears on every iteration and it now takes 100 s to enumerate the data loader, as when using spawn or no workers at all. It also happens when I just recreate that single dataloader.
userwarning: loky-backed parallel loops cannot be called in a multiprocessing with num_workers=1
Environment:
Google Colab
Python 3.10.12
torch==1.13.1
I also tried upgrading to torch 2.1.0 but it's the same.
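For reference, here is a minimal sketch of the setup described above; the dataset contents, sizes, and batch size are my assumptions, not the reporter's:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Minimal sketch of the reported setup (dataset contents, sizes, and batch size
# are assumptions for illustration; the reporter's actual dataset may differ).
train_ds = TensorDataset(torch.randn(1024, 8), torch.randint(0, 2, (1024,)))
val_ds = TensorDataset(torch.randn(256, 8), torch.randint(0, 2, (256,)))

train_loader = DataLoader(train_ds, batch_size=32, num_workers=1)
val_loader = DataLoader(val_ds, batch_size=32, num_workers=1)  # the second loader

# According to the report, enumeration is fast with a single loader but slow
# (and emits the loky warning) once a second loader exists.
for _ in train_loader:
    pass
```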
### Versions
--2023-10-22 05:26:33-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21737 (21K) [text/plain]
Saving to: ‘collect_env.py’
collect_env.py 100%[===================>] 21.23K --.-KB/s in 0s
2023-10-22 05:26:33 (114 MB/s) - ‘collect_env.py’ saved [21737/21737]
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.7
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] torch==1.13.1
[pip3] torchaudio==2.1.0+cu118
[pip3] torchdata==0.7.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.2.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.16.0
[pip3] torchvision==0.14.1
[pip3] triton==2.1.0
[conda] Could not collect
| 0 |
2 | 111,755 |
[dtensor] add device_mesh.device_type to make RNGStateTracker support CUDA-like devices
|
open source
|
[dtensor] Add device_mesh.device_type to make RNGStateTracker support CUDA-like devices
| 1 |
3 | 111,754 |
[dynamo] Better determinism of `ConfigModule` by walking using pytree
| null |
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/pull/111318
Currently, validation only occurs at the root. However, we should walk the pytree of each object to ensure types are respected.
In particular, we can convert unfriendly types (see the sketch after this list):
- function objects into `f"{__module__}{__name__}"` strings
- sort sets and walk their objects to make them deterministic.
- There isn't a canonical order, however; sets have `__hash__`, but this hash is based on `id`, which is non-deterministic, so sorting by hash is non-deterministic. Further, not all objects implement the `>` operator.
- In other words, there is no feasible way to make deterministic representations of sets.
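A tiny sketch of the first conversion (my illustration; the exact string format, including the dot separator, is an assumption rather than what the PR uses):
```python
import math

# Illustration only: turn a function object into a deterministic string key.
def fn_to_str(fn):
    return f"{fn.__module__}.{fn.__name__}"

print(fn_to_str(math.sqrt))  # "math.sqrt"
```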
| 0 |
4 | 111,753 |
[dynamo] AutogradFunctionMethodHigherOrderVariable check for new guards is broken
|
module: dynamo
|
AutogradFunctionMethodHigherOrderVariable has a check for new guards being added in the following places:
https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1091
https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1115
https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1122-L1123
As written, this check does nothing because we are storing a *reference* in pre_guards that just gets mutated, so `pre_guards is post_guards` evaluates to `True`.
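A standalone illustration (not from the issue) of why comparing a stored reference against the mutated object can never detect a change:
```python
# Illustration only (plain Python sets, not GuardsSet): storing a reference and
# then mutating the object leaves both names pointing at the same thing.
pre_guards = set()
post_guards = pre_guards           # same object under another name
post_guards.add("new_guard")
print(pre_guards is post_guards)   # True  -> "did guards change?" can never fire

# Cloning first gives the check something independent to compare against.
pre_guards = set()
snapshot = set(pre_guards)         # independent copy
pre_guards.add("new_guard")
print(snapshot == pre_guards)      # False -> the new guard is detected
```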
The following will make the code do as intended:
```patch
diff --git a/torch/_dynamo/variables/higher_order_ops.py b/torch/_dynamo/variables/higher_order_ops.py
index 2292c71f048..5d7d87f810c 100644
--- a/torch/_dynamo/variables/higher_order_ops.py
+++ b/torch/_dynamo/variables/higher_order_ops.py
@@ -1088,7 +1088,7 @@ class AutogradFunctionMethodHigherOrderVariable(TorchHigherOrderOperatorVariable
else:
fn = TorchVariable(self.value)
checkpoint = tx.copy_graphstate()
- pre_guards = tx.output.guards
+ pre_guards = tx.output.guards.clone()
graph_checkpoint = tx.output.graph
# TODO: Support kwargs
diff --git a/torch/_guards.py b/torch/_guards.py
index e532a32cdd2..4f9e874476e 100644
--- a/torch/_guards.py
+++ b/torch/_guards.py
@@ -483,6 +483,8 @@ class GuardsSet:
for o in others:
for g in o:
self.add(g, skip=1)
+ def clone(self):
+ return GuardsSet(set(self.inner))
class GuardsContext(Checkpointable[GuardsCheckpointState]):
```
However, it causes a test to fail:
```
____________________________________________ ReproTests.test_hf_xsoftmax_training ____________________________________________
Traceback (most recent call last):
File "/home/jansel/conda/envs/pytorch/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
yield
File "/home/jansel/conda/envs/pytorch/lib/python3.10/unittest/case.py", line 591, in run
self._callTestMethod(testMethod)
File "/home/jansel/conda/envs/pytorch/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
method()
File "/home/jansel/pytorch/torch/testing/_internal/common_utils.py", line 2453, in wrapper
method(*args, **kwargs)
File "/home/jansel/pytorch/test/dynamo/test_repros.py", line 3163, in test_hf_xsoftmax_training
self.assertEqual(dict(counters["frames"]), {"total": 1, "ok": 1})
File "/home/jansel/pytorch/torch/testing/_internal/common_utils.py", line 3356, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
The failure occurred for item ['ok']
To execute this test, run the following from the base repo dir:
python test/dynamo/test_repros.py -k test_hf_xsoftmax_training
```
I think this piece of code needs to be revisited, as I am not sure if the graph break it adds is correct.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305
| 0 |
5 | 111,752 |
Is it a good time to switch to CXX11_ABI?
| null |
### 🚀 The feature, motivation and pitch
Most CI jobs now use g++ >= 9, except the Android jobs, which use g++-8. Given this, is it possible to always use CXX11_ABI and get rid of the many checks in the build systems?
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
6 | 111,749 |
[dynamo] Expand _nonvar_fields names
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111749
This should be a small compile time optimization, since we won't need to
walk these fields in apply().
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
7 | 111,748 |
Allow to specify specific files for debug info
|
topic: not user facing
|
Building with `USE_CUSTOM_DEBINFO=torch/csrc/Module.cpp python setup.py develop` for example will provide debug info only for this file.
This makes it possible to enable debug symbols very quickly from a non-debug build by doing a clean and then a develop build (as long as you have ccache), and it avoids very large binaries that take a very long time to load in gdb.
| 1 |
8 | 111,747 |
New swap function
|
module: dynamo
|
This PR proposes a new approach to solving the problem that nn and optim are linked only by Python object identity.
The idea is to have a function that can swap the contents of two Tensors t1 and t2 while preserving all the old references.
This would allow us to swap `model.weight` with a new Tensor (which can be any subclass of Tensor and any TensorImpl; xla, sparse, and nested tensorimpls would work). The use within nn will be done in a follow-up.
This is done by swapping the whole content of the PyObject and then putting back the fields associated with external references (refcount, gc tracking and weakrefs).
Note that we have to properly handle all the cases where there is memory used before the public PyObject* pointer and where the PyObject is bigger due to dict/weakref being inlined (older CPython versions) or due to slots.
The main limitation of this approach is that the number of slots needs to match for the objects being swapped, which limits the usage of slots in subclasses.
This is a draft for now, to see what @colesbury thinks about doing this.
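To illustrate the problem being addressed (my sketch, not part of the PR): the optimizer holds parameters only by Python object identity, so replacing an attribute with a new Tensor silently disconnects it from the optimizer.
```python
import torch
from torch import nn, optim

# Sketch of the identity-link problem (not from the PR itself).
model = nn.Linear(2, 2)
opt = optim.SGD(model.parameters(), lr=0.1)

old_weight = model.weight
model.weight = nn.Parameter(torch.zeros(2, 2))  # new object, new identity

in_opt = [p for g in opt.param_groups for p in g["params"]]
print(any(p is model.weight for p in in_opt))  # False -> the new weight is not optimized
print(any(p is old_weight for p in in_opt))    # True  -> the optimizer still holds the old one
```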
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 3 |
9 | 111,746 |
[dynamo] add repro for functorch/fx interop issue (`allow_in_graph`)
|
open source, topic: not user facing, module: dynamo
|
Fixes https://github.com/pytorch/pytorch/issues/109025 by adding repro
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
10 | 111,745 |
[dynamo]: `nn.Module` recursively set `training` mode via `train` and `eval`
|
open source, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/109885
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
11 | 111,744 |
ninja: build stopped: subcommand failed
|
oncall: pt2
|
### 🐛 Describe the bug
When I try to build PyTorch from source on Linux, I face a confusing problem when I run `python setup.py install`.
Here are the error logs from running `python setup.py install` the second time.
### Error logs
(pytorch_install) [root@cn0 pytorch-1.7]# python setup.py install
Building wheel torch-1.7.0a0
-- Building version 1.7.0a0
cmake --build . --target install --config Release -- -j 96
[3/2123] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o
FAILED: caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../c10/.. -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -mavx2 -mfma -mavx -mf16c -std=gnu++14 -MD -MT caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o -MF caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o.d -o caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o -c ../caffe2/perfkernels/common_avx2.cc
../caffe2/perfkernels/common_avx2.cc:17:2: error: #error ( "You found a build system error: __AVX2__ is defined (via e.g. -mavx2) " "but CAFFE2_PERF_WITH_AVX2 is not defined.");
#error( \
^~~~~
[4/2123] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o
FAILED: caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../c10/.. -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -mavx512f -mavx512dq -mavx512vl -mavx2 -mfma -mavx -mf16c -std=gnu++14 -MD -MT caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o -MF caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o.d -o caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o -c ../caffe2/perfkernels/common_avx512.cc
../caffe2/perfkernels/common_avx512.cc:18:2: error: #error ( "You found a build system error: __AVX512F__, __AVX512DQ__, __AVX512VL__ " "is defined (via e.g. -mavx512f, -mavx512dq, and -mavx512vl) " "but CAFFE2_PERF_WITH_AVX512 is not defined.");
#error( \
^~~~~
[5/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic 
-Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o -MF caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o.d -o caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o -c ../caffe2/onnx/backend.cc
../caffe2/onnx/backend.cc:11:10: fatal error: onnx/optimizer/optimize.h: No such file or directory
#include "onnx/optimizer/optimize.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
[16/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic 
-Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o -c ../aten/src/ATen/native/xnnpack/Convolution.cpp
../aten/src/ATen/native/xnnpack/Convolution.cpp: In function ‘at::native::xnnpack::ContextConv2D at::native::xnnpack::internal::convolution2d::create(const at::Tensor&, const c10::optional<at::Tensor>&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, int64_t, bool, float, float)’:
../aten/src/ATen/native/xnnpack/Convolution.cpp:236:22: error: cannot convert ‘xnn_operator**’ to ‘xnn_caches_t {aka const xnn_caches*}’ for argument ‘21’ to ‘xnn_status xnn_create_deconvolution2d_nhwc_f32(uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, size_t, size_t, size_t, size_t, const float*, const float*, float, float, uint32_t, xnn_caches_t, xnn_operator**)’
&convolution_op); // operator
^
../aten/src/ATen/native/xnnpack/Convolution.cpp:264:22: error: cannot convert ‘xnn_operator**’ to ‘xnn_caches_t {aka const xnn_caches*}’ for argument ‘21’ to ‘xnn_status xnn_create_convolution2d_nhwc_f32(uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, size_t, size_t, size_t, size_t, const float*, const float*, float, float, uint32_t, xnn_caches_t, xnn_operator**)’
&convolution_op); // operator
^
[24/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic 
-Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o -c ../aten/src/ATen/native/xnnpack/Linear.cpp
../aten/src/ATen/native/xnnpack/Linear.cpp: In function ‘at::native::xnnpack::ContextLinear at::native::xnnpack::internal::linear::create(const at::Tensor&, const c10::optional<at::Tensor>&, float, float)’:
../aten/src/ATen/native/xnnpack/Linear.cpp:100:17: error: cannot convert ‘xnn_operator**’ to ‘xnn_caches_t {aka const xnn_caches*}’ for argument ‘10’ to ‘xnn_status xnn_create_fully_connected_nc_f32(size_t, size_t, size_t, size_t, const float*, const float*, float, float, uint32_t, xnn_caches_t, xnn_operator**)’
&linear_op); // operator
^
[53/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic 
-Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o -c ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp
../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp: In function ‘at::Tensor at::native::{anonymous}::qembeddingbag_byte_unpack(const at::Tensor&)’:
../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:77:11: error: ‘Fused8BitRowwiseQuantizedSBFloatToFloat’ is not a member of ‘fbgemm’
fbgemm::Fused8BitRowwiseQuantizedSBFloatToFloat(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:77:11: note: suggested alternative: ‘Fused8BitRowwiseQuantizedSBFloatToFloatOrHalf’
fbgemm::Fused8BitRowwiseQuantizedSBFloatToFloat(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fused8BitRowwiseQuantizedSBFloatToFloatOrHalf
../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp: In function ‘at::Tensor at::native::{anonymous}::_qembeddingbag_nbit_unpack_helper(const at::Tensor&, int)’:
../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:116:11: error: ‘FusedNBitRowwiseQuantizedSBHalfToFloat’ is not a member of ‘fbgemm’
fbgemm::FusedNBitRowwiseQuantizedSBHalfToFloat(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:116:11: note: suggested alternative: ‘FusedNBitRowwiseQuantizedSBHalfToFloatOrHalf’
fbgemm::FusedNBitRowwiseQuantizedSBHalfToFloat(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FusedNBitRowwiseQuantizedSBHalfToFloatOrHalf
[55/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o
/opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic 
-Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o -c ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp
../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp: In function ‘at::Tensor at::native::{anonymous}::qembeddingbag_byte_prepack(const at::Tensor&)’:
../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:116:11: error: ‘FloatToFused8BitRowwiseQuantizedSBFloat’ is not a member of ‘fbgemm’
fbgemm::FloatToFused8BitRowwiseQuantizedSBFloat(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:116:11: note: suggested alternative: ‘FloatOrHalfToFused8BitRowwiseQuantizedSBFloat’
fbgemm::FloatToFused8BitRowwiseQuantizedSBFloat(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FloatOrHalfToFused8BitRowwiseQuantizedSBFloat
../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp: In function ‘at::Tensor at::native::{anonymous}::_qembeddingbag_nbit_prepack_helper(const at::Tensor&, int, bool)’:
../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:187:13: error: ‘FloatToFusedNBitRowwiseQuantizedSBHalf’ is not a member of ‘fbgemm’
fbgemm::FloatToFusedNBitRowwiseQuantizedSBHalf(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:187:13: note: suggested alternative: ‘FloatOrHalfToFusedNBitRowwiseQuantizedSBHalf’
fbgemm::FloatToFusedNBitRowwiseQuantizedSBHalf(
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FloatOrHalfToFusedNBitRowwiseQuantizedSBHalf
[98/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Functions.cpp.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 717, in <module>
build_deps()
File "setup.py", line 308, in build_deps
build_caffe2(version=version,
File "/home/mpi_share/env/bz/pytorch-1.7/tools/build_pytorch_libs.py", line 62, in build_caffe2
cmake.build(my_env)
File "/home/mpi_share/env/bz/pytorch-1.7/tools/setup_helpers/cmake.py", line 345, in build
self.run(build_args, my_env)
File "/home/mpi_share/env/bz/pytorch-1.7/tools/setup_helpers/cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "/home/mpi_share/env/bz/anaconda3/envs/pytorch_install/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '96']' returned non-zero exit status 1.
### Minified repro
_No response_
### Versions
Sorry, there is also a bug when I use the given commands.
pytorch 1.7.0
cuda 10.2
cmake 3.18.4
ninja 1.10.2
python 3.8.2
It is difficult to update my CUDA version, so I am building PyTorch v1.7.0 to match the CUDA version.
If there is something you need to know, please tell me.
Thanks ^^
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 0 |
12 | 111,743 |
WIP Implement channels_last_3d convolution
|
module: cpu, open source
|
May contribute to #59168
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 3 |
13 | 111,742 |
Add CSR tensor with non-contiguous values support to CuSparseSpMatCsrDescriptor
|
module: sparse, open source, release notes: sparse, topic: new features
|
Fixes https://github.com/pytorch/pytorch/issues/111574
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111742
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 3 |
14 | 111,741 |
[dynamo] `{*}Tensor.__init__` from list of ndarray as `torch.stack(List[FakeTensor])`
|
open source, module: dynamo, ciflow/inductor
|
Follow up to https://github.com/pytorch/pytorch/pull/111665
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @lezcano
| 2 |
15 | 111,740 |
GPU computation is not equivalent
| null |
### 🐛 Describe the bug
The GPU computation is not equivalent, but it is equivalent on the CPU. Why, and how can I avoid this?
```python
import torch
import torch.nn as nn
hidden_states = torch.randn([4, 2048, 512])
v_proj = nn.Linear(512, 128, bias=False)
value_states = v_proj(hidden_states)
h1, h2 = torch.chunk(hidden_states, 2, dim=0)
v1 = v_proj(h1)
assert h1.equal(hidden_states[:2])
print(v1[0,0,0].item())
print(value_states[0,0,0].item())
assert v1.equal(value_states[:2])
hidden_states = torch.randn([4, 2048, 512]).cuda()
v_proj = nn.Linear(512, 128, bias=False).cuda()
value_states = v_proj(hidden_states)
h1, h2 = torch.chunk(hidden_states, 2, dim=0)
v1 = v_proj(h1)
assert h1.equal(hidden_states[:2])
print(v1[0,0,0].item())
print(value_states[0,0,0].item())
assert v1.equal(value_states[:2])
```
running results
```python
0.429298460483551
0.429298460483551
0.3757566213607788
0.37575680017471313
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
```
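As an aside (my addition, assuming this is ordinary floating-point reduction/tiling-order variation on the GPU rather than a correctness bug), a tolerance-based comparison is the usual way to check such results:
```python
import torch
import torch.nn as nn

# Sketch (my addition): bitwise equality is too strict when the same values are
# reduced in a different order; compare with a tolerance instead.
hidden_states = torch.randn([4, 2048, 512]).cuda()
v_proj = nn.Linear(512, 128, bias=False).cuda()

value_states = v_proj(hidden_states)
h1, _ = torch.chunk(hidden_states, 2, dim=0)
v1 = v_proj(h1)

print(torch.equal(v1, value_states[:2]))                           # may be False on GPU
print(torch.allclose(v1, value_states[:2], rtol=1e-4, atol=1e-5))  # typically True
```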
### Versions
```python
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22635-SP0
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 531.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2300
DeviceID=CPU0
Family=207
L2CacheSize=14336
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2300
Name=12th Gen Intel(R) Core(TM) i9-12900HX
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.3
[pip3] torch==2.1.0+cu121
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.1.0+cu121
[pip3] torchvision==0.16.0+cu121
[pip3] torchviz==0.0.2
[conda] Could not collect
```
| 0 |
16 | 111,739 |
grad is inf/nan when using torch.amp
| null |
### 🐛 Describe the bug
Below is a very simple example of using torch.amp, but the gradients are inf/nan.
```python
import torch
from torch.cuda.amp import GradScaler
from torch import optim
scaler = GradScaler()
a = torch.randn(2, 2, requires_grad=True, device="cuda")
b = torch.randn(2, 2, requires_grad=True, device="cuda")
optimizer = optim.Adam([a, b], lr=0.1)
with torch.autocast(device_type='cuda'):
c = a @ b
loss = c.sum()
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
print(a.grad)
```
running results:
```python
tensor([[-inf, nan],
[-inf, nan]], device='cuda:0')
```
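For comparison (my addition, not a claim about where the inf/nan comes from), the usual documented GradScaler recipe also calls `scaler.step` and `scaler.update`, which skip the optimizer step when inf/nan gradients are detected:
```python
import torch
from torch import optim
from torch.cuda.amp import GradScaler

# Minimal sketch of the standard GradScaler loop, shown for comparison with the
# report above (standard documented usage; not part of the original snippet).
scaler = GradScaler()
a = torch.randn(2, 2, requires_grad=True, device="cuda")
b = torch.randn(2, 2, requires_grad=True, device="cuda")
optimizer = optim.Adam([a, b], lr=0.1)

for _ in range(3):
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda"):
        loss = (a @ b).sum()
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)   # optional: inspect or clip the unscaled gradients here
    scaler.step(optimizer)       # skips the update if inf/nan gradients are detected
    scaler.update()              # adjusts the loss scale for the next iteration
```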
### Versions
```python
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22635-SP0
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU
Nvidia driver version: 531.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2300
DeviceID=CPU0
Family=207
L2CacheSize=14336
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2300
Name=12th Gen Intel(R) Core(TM) i9-12900HX
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.3
[pip3] torch==2.1.0+cu121
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.1.0+cu121
[pip3] torchvision==0.16.0+cu121
[pip3] torchviz==0.0.2
[conda] Could not collect
```
| 0 |
17 | 111,738 |
[dynamo] Implement `set.__contains__` for `Tensor` as object match of `FakeTensor`
|
open source, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/111556
The Dynamo implementation of `set.__contains__` previously used an `__eq__` match.
But this is wrong when an `__eq__` match does not imply a `__hash__` match, as is the case for `torch.Tensor`, leading to inconsistent results. See: https://github.com/pytorch/pytorch/issues/111542
Hence it is implemented here as a Tensor object match, i.e. a match on the proxy node's `'example_value'` FakeTensor.
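A small eager-mode illustration of the mismatch (my addition, not from the PR):
```python
import torch

# Illustration: value equality and identity-based hashing disagree for tensors.
a = torch.tensor(1.0)
b = torch.tensor(1.0)

print(bool(a == b))         # True  -> __eq__ compares values
print(hash(a) == hash(b))   # generally False -> __hash__ is identity-based
print(b in {a})             # False -> set membership follows the hash
print(a in {a})             # True  -> same object
```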
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
18 | 111,737 |
Support calling __torch_function__ attribute access
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111737
* #111731
* #111730
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
19 | 111,736 |
Implementation of Lion Optimizer.
| null |
### 🚀 The feature, motivation and pitch
The Lion optimizer is becoming a strong alternative to Adam and AdamW. It is more efficient because it does not track second-order moments and instead uses sign operations to update the weights, which saves memory and decreases training time. In some cases it outperforms Adam and AdamW, as shown in the paper.
The original paper for this is: https://arxiv.org/pdf/2302.06675.pdf
The RFCS PR for this is: https://github.com/pytorch/rfcs/pull/60
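For context, a minimal sketch of the update rule as I read it from the paper; the function name, signature, and hyper-parameter defaults are my assumptions, not the RFC's proposed API:
```python
import torch

# Sketch of the Lion update rule (paraphrased from arXiv:2302.06675); the name,
# signature, and defaults here are assumptions, not the RFC's proposed API.
@torch.no_grad()
def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    update = (beta1 * momentum + (1 - beta1) * grad).sign()  # only the sign is used
    param.add_(update + weight_decay * param, alpha=-lr)     # decoupled weight decay
    momentum.mul_(beta2).add_(grad, alpha=1 - beta2)         # momentum tracked with beta2

p = torch.randn(4)
m = torch.zeros_like(p)
lion_step(p, torch.randn(4), m)
```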
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
20 | 111,735 |
hack hack hack
|
ciflow/inductor, release notes: AO frontend
|
Fixes #ISSUE_NUMBER
| 1 |
21 | 111,734 |
Is the index_add_ function differentiable?
| null |
### 🚀 The feature, motivation and pitch
```python
verts_normals = torch.zeros_like(cornea_vertex)
vertices_faces = cornea_vertex[face_index]
faces_normals = torch.cross(
    vertices_faces[:, 2] - vertices_faces[:, 1],
    vertices_faces[:, 0] - vertices_faces[:, 1],
    dim=-1,
)
unit_faces_normals = safe_normalize(faces_normals)
verts_normals.index_add_(0, face_index[:, 0], unit_faces_normals)
verts_normals.index_add_(0, face_index[:, 1], unit_faces_normals)
verts_normals.index_add_(0, face_index[:, 2], unit_faces_normals)
```
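One hedged way to check differentiability directly is `torch.autograd.gradcheck` on a small double-precision example using the out-of-place `index_add` (illustrative shapes, independent of the snippet above):
```python
import torch

def f(src):
    out = torch.zeros(4, 3, dtype=torch.double)
    # Out-of-place variant so autograd can trace through the scatter-add
    return out.index_add(0, torch.tensor([0, 2, 2]), src)

src = torch.randn(3, 3, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(f, (src,)))  # True if analytic and numeric gradients match
```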
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
22 | 111,733 |
Bug: torch.compile fails to compile torch.func.vmap with reduction functions and raw python numbers
| null |
### 🐛 Describe the bug
`torch.compile` fails to compile a vmap transformation when a reduction function is combined with a plain Python number. The bug only appears with reduction functions, and several workarounds are shown in the following examples:
```python
import torch
torch._dynamo.reset()
torch._dynamo.config.capture_func_transforms=True
def foo(x):
return torch.vmap(lambda x: torch.sum(x) + 1e-2)(x) # Error
# return torch.vmap(lambda x: torch.mean(x) + 1e-2)(x) # Error
# return torch.vmap(lambda x: torch.std(x) + 1e-2)(x) # Error
# return torch.vmap(lambda x: torch.sum(x) + torch.tensor(1e-2))(x) # OK
# return torch.vmap(lambda x: torch.sum(x, 0, keepdim=True) + 1e-2)(x) # OK
# return torch.vmap(lambda x: torch.square(x) + 1e-2)(x) # OK
# return torch.vmap(lambda x: x + 1e-2)(x) # OK
torch.compile(foo, fullgraph=True)(torch.randn((3, 3), device='cuda:0'))
# foo(torch.randn((3, 3), device='cuda:0')) # OK
```
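A self-contained version of the `torch.tensor(1e-2)` workaround marked OK above, kept as a hedged sketch:
```python
import torch

torch._dynamo.config.capture_func_transforms = True

def foo(x):
    # Wrapping the Python scalar in a tensor avoids the failing path
    return torch.vmap(lambda row: torch.sum(row) + torch.tensor(1e-2))(x)

print(torch.compile(foo, fullgraph=True)(torch.randn(3, 3)))
```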
Error messages:
```
BackendCompilerFailed: backend='inductor' raised:
AssertionError: While executing %call : [num_users=1] = call_method[target=__call__](args = (%vmap_proxy, %l_x_), kwargs = {})
Original traceback:
File "/tmp/ipykernel_672249/1649664715.py", line 7, in foo
return torch.vmap(lambda x: torch.sum(x) + 1e-2)(x) # Error
File "/data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/apis.py", line 188, in wrapped
return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Traceback:
```
---------------------------------------------------------------------------
BackendCompilerFailed Traceback (most recent call last)
Cell In[66], line 16
7 return torch.vmap(lambda x: torch.sum(x) + 1e-2)(x) # Error
8 # return torch.vmap(lambda x: torch.mean(x) + 1e-2)(x) # Error
9 # return torch.vmap(lambda x: torch.std(x) + 1e-2)(x) # Error
10 # return torch.vmap(lambda x: torch.sum(x) + torch.tensor(1e-2))(x) # OK
11 # return torch.vmap(lambda x: torch.sum(x, 0, keepdim=True) + 1e-2)(x) # OK
12 # return torch.vmap(lambda x: torch.square(x) + 1e-2)(x) # OK
13 # return torch.vmap(lambda x: x + 1e-2)(x) # OK
---> 16 torch.compile(foo, fullgraph=True)(torch.randn((3, 3), device='cuda:0'))
17 # foo(torch.randn((3, 3), device='cuda:0')) # OK
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
326 dynamic_ctx.__enter__()
327 try:
--> 328 return fn(*args, **kwargs)
329 finally:
330 set_eval_frame(prior)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:490, in catch_errors_wrapper.<locals>.catch_errors(frame, cache_entry, frame_state)
487 return hijacked_callback(frame, cache_entry, hooks, frame_state)
489 with compile_lock, _disable_current_modes():
--> 490 return callback(frame, cache_entry, hooks, frame_state)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:133, in wrap_convert_context.<locals>._fn(*args, **kwargs)
131 cleanup = setup_compile_debug()
132 try:
--> 133 return fn(*args, **kwargs)
134 finally:
135 cleanup.close()
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:389, in convert_frame_assert.<locals>._convert_frame_assert(frame, cache_entry, hooks, frame_state)
376 compile_id = CompileId(frame_id, frame_compile_id)
378 signpost_event(
379 "dynamo",
380 "_convert_frame_assert._compile",
(...)
386 },
387 )
--> 389 return _compile(
390 frame.f_code,
391 frame.f_globals,
392 frame.f_locals,
393 frame.f_builtins,
394 compiler_fn,
395 one_graph,
396 export,
397 export_constraints,
398 hooks,
399 cache_size,
400 frame,
401 frame_state=frame_state,
402 compile_id=compile_id,
403 )
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:569, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_size, frame, frame_state, compile_id)
567 with compile_context(CompileContext(compile_id)):
568 try:
--> 569 guarded_code = compile_inner(code, one_graph, hooks, transform)
570 return guarded_code
571 except (
572 Unsupported,
573 TorchRuntimeError,
(...)
578 ValidationException,
579 ) as e:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/utils.py:189, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
187 with torch.profiler.record_function(f"{key} (dynamo_timed)"):
188 t0 = time.time()
--> 189 r = func(*args, **kwargs)
190 time_spent = time.time() - t0
191 compilation_time_metrics[key].append(time_spent)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:491, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
489 for attempt in itertools.count():
490 try:
--> 491 out_code = transform_code_object(code, transform)
492 orig_code_map[out_code] = code
493 break
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py:1028, in transform_code_object(code, transformations, safe)
1025 instructions = cleaned_instructions(code, safe)
1026 propagate_line_nums(instructions)
-> 1028 transformations(instructions, code_options)
1029 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:458, in _compile.<locals>
.transform(instructions, code_options)
456 try:
457 with tracing(tracer.output.tracing_context):
--> 458 tracer.run()
459 except (exc.RestartAnalysis, exc.SkipFrame):
460 raise
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2074, in InstructionTranslator.run(self)
2073 def run(self):
-> 2074 super().run()
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:724, in InstructionTranslatorBase.run(self)
719 try:
720 self.output.push_tx(self)
721 while (
722 self.instruction_pointer is not None
723 and not self.output.should_exit
--> 724 and self.step()
725 ):
726 pass
727 except BackendCompilerFailed:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:688, in InstructionTranslatorBase.step(self)
684 unimplemented(f"missing: {inst.opname}")
685 TracingContext.set_current_loc(
686 self.f_code.co_filename, self.lineno, self.f_code.co_name
687 )
--> 688 getattr(self, inst.opname)(inst)
690 return inst.opname != "RETURN_VALUE"
691 except Unsupported:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2162, in InstructionTranslator.RETURN_VALUE(self, inst)
2157 _step_logger()(
2158 logging.INFO,
2159 f"torchdynamo done tracing {self.f_code.co_name} (RETURN_VALUE)",
2160 )
2161 log.debug("RETURN_VALUE triggered compile")
-> 2162 self.output.compile_subgraph(
2163 self,
2164 reason=GraphCompileReason(
2165 "return_value", [self.frame_summary()], graph_break=False
2166 ),
2167 )
2168 self.output.add_output_instructions([create_instruction("RETURN_VALUE")])
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:833, in OutputGraph.compile_subgraph(self, tx, partial_convert, reason)
830 append_prefix_insts()
831 # optimization to generate better code in a common case
832 self.add_output_instructions(
--> 833 self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
834 + [create_instruction("UNPACK_SEQUENCE", arg=len(stack_values))]
835 )
836 else:
837 graph_output_var = self.new_var("graph_out")
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/contextlib.py:81, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
78 @wraps(func)
79 def inner(*args, **kwds):
80 with self._recreate_cm():
---> 81 return func(*args, **kwds)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:957, in OutputGraph.compile_and_call_fx_graph(self, tx, rv, root)
952 graph_tabular_log.debug("%s", lazy_format_graph_tabular(name, gm))
953 graph_sizes_log.debug(
954 "%s", LazyString(lambda: self.get_graph_sizes_log_str(name))
955 )
--> 957 compiled_fn = self.call_user_compiler(gm)
958 compiled_fn = disable(compiled_fn)
960 counters["stats"]["unique_graphs"] += 1
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/utils.py:189, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
187 with torch.profiler.record_function(f"{key} (dynamo_timed)"):
188 t0 = time.time()
--> 189 r = func(*args, **kwargs)
190 time_spent = time.time() - t0
191 compilation_time_metrics[key].append(time_spent)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1024, in OutputGraph.call_user_compiler(self, gm)
1022 unimplemented_with_warning(e, self.root_tx.f_code, msg)
1023 except Exception as e:
-> 1024 raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
1025 e.__traceback__
1026 ) from None
1028 signpost_event(
1029 "dynamo",
1030 "OutputGraph.call_user_compiler",
(...)
1036 },
1037 )
1039 return compiled_fn
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1009, in OutputGraph.call_user_compiler(self, gm)
1007 if config.verify_correctness:
1008 compiler_fn = WrapperBackend(compiler_fn)
-> 1009 compiled_fn = compiler_fn(gm, self.example_inputs())
1010 _step_logger()(logging.INFO, f"done compiler function {name}")
1011 assert callable(compiled_fn), "compiler_fn did not return callable"
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py:117, in wrap_backend_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs)
115 raise
116 else:
--> 117 compiled_gm = compiler_fn(gm, example_inputs)
119 return compiled_gm
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py:117, in wrap_backend_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs)
115 raise
116 else:
--> 117 compiled_gm = compiler_fn(gm, example_inputs)
119 return compiled_gm
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/__init__.py:1568, in _TorchCompileInductorWrapper.__call__(self, model_, inputs_)
1565 def __call__(self, model_, inputs_):
1566 from torch._inductor.compile_fx import compile_fx
-> 1568 return compile_fx(model_, inputs_, config_patches=self.config)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:1150, in compile_fx(model_, example_inputs_, inner_compile, config_patches, decompositions)
1143 tracing_context = (
1144 torch._guards.TracingContext.get() or torch._guards.TracingContext(fake_mode)
1145 )
1147 with V.set_fake_mode(fake_mode), torch._guards.tracing( # type: ignore[call-arg]
1148 tracing_context
1149 ), compiled_autograd.disable():
-> 1150 return aot_autograd(
1151 fw_compiler=fw_compiler,
1152 bw_compiler=bw_compiler,
1153 inference_compiler=inference_compiler,
1154 decompositions=decompositions,
1155 partition_fn=partition_fn,
1156 keep_inference_input_mutations=True,
1157 )(model_, example_inputs_)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/backends/common.py:55, in aot_autograd
.<locals>.compiler_fn(gm, example_inputs)
52 try:
53 # NB: NOT cloned!
54 with enable_aot_logging(), patch_config:
---> 55 cg = aot_module_simplified(gm, example_inputs, **kwargs)
56 counters["aot_autograd"]["ok"] += 1
57 return disable(cg)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:3891, in aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions, keep_inference_input_mutations, inference_compiler)
3875 aot_config = AOTConfig(
3876 fw_compiler=fw_compiler,
3877 bw_compiler=bw_compiler,
(...)
3887 no_tangents=False,
3888 )
3890 with compiled_autograd.disable():
-> 3891 compiled_fn = create_aot_dispatcher_function(
3892 functional_call,
3893 full_args,
3894 aot_config,
3895 )
3897 # TODO: There is something deeply wrong here; compiled_fn running with
3898 # the boxed calling convention, but aot_module_simplified somehow
3899 # historically returned a function that was not the boxed calling
3900 # convention. This should get fixed...
3901 def forward(*runtime_args):
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/utils.py:189, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
187 with torch.profiler.record_function(f"{key} (dynamo_timed)"):
188 t0 = time.time()
--> 189 r = func(*args, **kwargs)
190 time_spent = time.time() - t0
191 compilation_time_metrics[key].append(time_spent)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:3429, in create_aot_dispatcher_function(flat_fn, flat_args, aot_config)
3426 compiler_fn = partial(aot_wrapper_dedupe, compiler_fn=compiler_fn)
3427 # You can put more passes here
-> 3429 compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
3430 if aot_config.is_export:
3432 mutated_user_inp_locs = [
3433 idx - aot_config.num_params_buffers
3434 for idx in fw_metadata.mutated_inp_indices
3435 if idx >= aot_config.num_params_buffers
3436 ]
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:2212, in aot_wrapper_dedupe(flat_fn, flat_args, aot_config, compiler_fn, fw_metadata)
2209 break
2211 if ok:
-> 2212 return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
2214 # export path: ban duplicate inputs for now, add later if requested.
2215 if aot_config.is_export:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:2392, in aot_wrapper_synthetic_base(flat_fn, flat_args, aot_config, fw_metadata, needs_autograd, compiler_fn)
2390 # Happy path: we don't need synthetic bases
2391 if synthetic_base_info is None:
-> 2392 return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
2394 # export path: ban synthetic bases for now, add later if requested.
2395 if aot_config.is_export:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1558, in aot_dispatch_base(flat_fn, flat_args, aot_config, fw_metadata)
1557 def aot_dispatch_base(flat_fn, flat_args: List[Tensor], aot_config: AOTConfig, *, fw_metadata: ViewAndMutationMeta):
-> 1558 fw_module = aot_dispatch_base_graph(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
1560 disable_amp = torch._C._is_any_autocast_enabled()
1561 context = torch._C._DisableAutocast if disable_amp else nullcontext
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1533, in aot_dispatch_base_graph(flat_fn, flat_args, aot_config, fw_metadata)
1526 keep_mutations = aot_config.keep_inference_input_mutations
1527 fn_to_trace = fn_input_mutations_to_outputs(
1528 flat_fn,
1529 fw_metadata,
1530 keep_data_input_mutations=aot_config.keep_inference_input_mutations,
1531 )
-> 1533 fw_module = create_functionalized_graph(
1534 fn_to_trace,
1535 flat_args,
1536 meta=fw_metadata,
1537 aot_config=aot_config,
1538 trace_joint=False,
1539 )
1541 # As long as we opted to remove input mutations, then
1542 # there should be *NO* mutating ops in the graph at this point.
1543 copy_count = assert_functional_graph(fw_module.graph, allow_input_mutations=aot_config.keep_inference_input_mutations)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1420, in create_functionalized_graph(fn, args, meta, aot_config, trace_joint)
1417 helper, args = create_functionalized_rng_ops_wrapper(helper, args, trace_joint)
1419 with enable_python_dispatcher():
-> 1420 fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args)
1422 return fx_g
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:809, in make_fx.<locals>.wrapped(*args)
801 # We disable the autocast cache as the autocast cache causes type conversions on parameters to
802 # check a cache, which introduces untracked tensors into the graph
803 #
804 # We also disable tracing by any other tensor proxy-based tracers except the current. The
805 # purpose of `make_fx` is to produce graphmodules as a side effect; its internal execution is
806 # thus irrelevant to any external functional trace.
807 with decompose(decomposition_table), fake_tensor_mode, python_dispatcher_mode, pre_dispatch_mode, proxy_function_mode, \
808 sym_mode, proxy_mode, disable_autocast_cache(), disable_proxy_modes_tracing(enable_current=True):
--> 809 t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))
811 # TODO: kind of a bad way to do it, should maybe figure out a better way
812 if tracing_mode == "symbolic":
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_compile.py:24, in _disable_dynamo.<locals>.inner(*args, **kwargs)
20 @functools.wraps(fn)
21 def inner(*args, **kwargs):
22 import torch._dynamo
---> 24 return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
326 dynamic_ctx.__enter__()
327 try:
--> 328 return fn(*args, **kwargs)
329 finally:
330 set_eval_frame(prior)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/external_utils.py:17, in wrap_inline.<locals>.inner(*args, **kwargs)
15 @functools.wraps(fn)
16 def inner(*args, **kwargs):
---> 17 return fn(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:468, in dispatch_trace(root, tracer, concrete_args)
462 @torch._disable_dynamo
463 def dispatch_trace(
464 root: Union[torch.nn.Module, Callable],
465 tracer: Tracer,
466 concrete_args: Optional[Tuple[Any, ...]] = None,
467 ) -> GraphModule:
--> 468 graph = tracer.trace(root, concrete_args)
469 name = root.__class__.__name__ if isinstance(root, torch.nn.Module) else root.__name__
470 return GraphModule(tracer.root, graph, name)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
326 dynamic_ctx.__enter__()
327 try:
--> 328 return fn(*args, **kwargs)
329 finally:
330 set_eval_frame(prior)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/external_utils.py:17, in wrap_inline.<locals>.inner(*args, **kwargs)
15 @functools.wraps(fn)
16 def inner(*args, **kwargs):
---> 17 return fn(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py:817, in Tracer.trace(self, root, concrete_args)
810 for module in self._autowrap_search:
811 _autowrap_check(
812 patcher, module.__dict__, self._autowrap_function_ids
813 )
814 self.create_node(
815 "output",
816 "output",
--> 817 (self.create_arg(fn(*args)),),
818 {},
819 type_expr=fn.__annotations__.get("return", None),
820 )
822 self.submodule_paths = None
823 finally:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:485, in wrap_key.<locals>.wrapped(*proxies)
482 assert isinstance(m, ProxyTorchDispatchMode)
483 track_tensor_tree(flat_tensors, flat_proxies, constant=None, tracer=tracer)
--> 485 out = f(*tensors)
486 out = pytree.tree_map_only(
487 torch.Tensor,
488 lambda t: get_proxy_slot(t, tracer, t, lambda x: x.proxy),
489 out
490 )
491 out = pytree.tree_map_only(
492 (SymInt, SymFloat, SymBool),
493 lambda t: get_proxy_slot(t.node, tracer)(),
494 out
495 )
File <string>:1, in <lambda>(arg0)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1412, in create_functionalized_graph.<locals>.fwd_helper(*args)
1411 def fwd_helper(*args):
-> 1412 return functionalized_f_helper(*args)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1363, in create_functionalized_graph.<locals>.functionalized_f_helper(*args)
1360 torch._enable_functionalization(reapply_views=True)
1361 try:
1362 # Run the joint
-> 1363 f_outs = fn(*f_args)
1364 finally:
1365 torch._disable_functionalization()
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1165, in fn_input_mutations_to_outputs.<locals>.inner_fn(*args)
1164 def inner_fn(*args):
-> 1165 outs = fn(*args)
1166 assert len(meta.output_info) == len(outs)
1167 # The compiled fw will return mutated input tensors, *including* metadata-only mutation.
1168 # However, if keep_data_input_mutations is set, the compiled fw only needs to return metadata-mutated inputs.
1169 # (because data-only input mutations are handled directly in the compiled graph)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:3496, in create_functional_call.<locals>.functional_call(*args, **kwargs)
3492 warnings.filterwarnings(
3493 "ignore", "Anomaly Detection has been enabled."
3494 )
3495 with torch.autograd.detect_anomaly(check_nan=False):
-> 3496 out = Interpreter(mod).run(*args[params_len:], **kwargs)
3497 else:
3498 out = mod(*args[params_len:], **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/interpreter.py:138, in Interpreter.run(self, initial_env, enable_io_processing, *args)
135 continue
137 try:
--> 138 self.env[node] = self.run_node(node)
139 except Exception as e:
140 if self.extra_traceback:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/interpreter.py:195, in Interpreter.run_node(self, n)
193 assert isinstance(args, tuple)
194 assert isinstance(kwargs, dict)
--> 195 return getattr(self, n.op)(n.target, args, kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/interpreter.py:289, in Interpreter.call_method(self, target, args, kwargs)
287 # Execute the method and return the result
288 assert isinstance(target, str)
--> 289 return getattr(self_obj, target)(*args_tail, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/apis.py:188, in vmap.<locals>.wrapped(*args, **kwargs)
187 def wrapped(*args, **kwargs):
--> 188 return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/vmap.py:266, in vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
262 return _chunked_vmap(func, flat_in_dims, chunks_flat_args,
263 args_spec, out_dims, randomness, **kwargs)
265 # If chunk_size is not specified.
--> 266 return _flat_vmap(
267 func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs
268 )
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/vmap.py:38, in doesnt_support_saved_tensors_hooks.<locals>.fn(*args, **kwargs)
35 @functools.wraps(f)
36 def fn(*args, **kwargs):
37 with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 38 return f(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/vmap.py:379, in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs)
377 try:
378 batched_inputs = _create_batched_inputs(flat_in_dims, flat_args, vmap_level, args_spec)
--> 379 batched_outputs = func(*batched_inputs, **kwargs)
380 return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
381 finally:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/graph_module.py:678, in GraphModule.recompile.<locals>.call_wrapped(self, *args, **kwargs)
677 def call_wrapped(self, *args, **kwargs):
--> 678 return self._wrapped_call(self, *args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/graph_module.py:284, in _WrappedCall.__call__(self, obj, *args, **kwargs)
282 raise e.with_traceback(None)
283 else:
--> 284 raise e
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/graph_module.py:274, in _WrappedCall.__call__(self, obj, *args, **kwargs)
272 return self.cls_call(obj, *args, **kwargs)
273 else:
--> 274 return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
275 except Exception as e:
276 assert e.__traceback__
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py:795, in Tracer.trace.<locals>.module_call_wrapper(mod, *args, **kwargs)
788 return _orig_module_call(mod, *args, **kwargs)
790 _autowrap_check(
791 patcher,
792 getattr(getattr(mod, "forward", mod), "__globals__", {}),
793 self._autowrap_function_ids,
794 )
--> 795 return self.call_module(mod, forward, args, kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:425, in PythonKeyTracer.call_module(self, m, forward, args, kwargs)
422 def call_module(
423 self, m: torch.nn.Module, forward: Callable[..., Any], args: Tuple[Any, ...], kwargs: Dict[str, Any]
424 ) -> Any:
--> 425 return forward(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py:788, in Tracer.trace.<locals>.module_call_wrapper.<locals>.forward(*args, **kwargs)
787 def forward(*args, **kwargs):
--> 788 return _orig_module_call(mod, *args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File <eval_with_key>.429:6, in forward(self, select)
4 def forward(self, select):
5 sum_1 = torch.sum(select); select = None
----> 6 add = sum_1 + 0.01; sum_1 = None
7 return add
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs)
18 simple_call_counter[fn.__qualname__] = 0
19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 20 return fn(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:555, in ProxyTorchDispatchMode.__torch_dispatch__(self, func, types, args, kwargs)
552 @count
553 def __torch_dispatch__(self, func, types, args=(), kwargs=None):
554 with self.sym_mode.enable(False), set_original_aten_op(func):
--> 555 return self.inner_torch_dispatch(func, types, args, kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:580, in ProxyTorchDispatchMode.inner_torch_dispatch(self, func, types, args, kwargs)
577 if func in [prim.device.default]:
578 return func(*args, **kwargs)
--> 580 return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:262, in proxy_call(proxy_mode, func, pre_dispatch, args, kwargs)
260 if func in CURRENT_DECOMPOSITION_TABLE:
261 with proxy_mode:
--> 262 r = CURRENT_DECOMPOSITION_TABLE[func](*args, **kwargs)
263 if r is not NotImplemented:
264 return r
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_decomp/decompositions.py:1737, in _to_copy(x, dtype, layout, device, pin_memory, non_blocking, memory_format)
1735 x = torch._prims.device_put(x, device)
1736 if dtype is not None and not dtype_converted:
-> 1737 x = torch._prims.convert_element_type(x, dtype)
1738 dtype_converted = True
1739 # In case of dtype promotion, faketensor converted into tensor.
1740 # Need to convert into faketensor if input was a faketensor.
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_ops.py:448, in OpOverload.__call__(self, *args, **kwargs)
447 def __call__(self, *args, **kwargs):
--> 448 return self._op(*args, **kwargs or {})
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs)
18 simple_call_counter[fn.__qualname__] = 0
19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 20 return fn(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:555, in ProxyTorchDispatchMode.__torch_dispatch__(self, func, types, args, kwargs)
552 @count
553 def __torch_dispatch__(self, func, types, args=(), kwargs=None):
554 with self.sym_mode.enable(False), set_original_aten_op(func):
--> 555 return self.inner_torch_dispatch(func, types, args, kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:580, in ProxyTorchDispatchMode.inner_torch_dispatch(self, func, types, args, kwargs)
577 if func in [prim.device.default]:
578 return func(*args, **kwargs)
--> 580 return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:361, in proxy_call(proxy_mode, func, pre_dispatch, args, kwargs)
358 else:
359 args[0].proxy = proxy_out
--> 361 out = func(*args, **kwargs)
363 # In some circumstances, we will be tracing in a situation where a tensor
364 # is *statically* known to be a constant (currently, this only happens if
365 # you run torch.tensor; deterministic factory functions like torch.arange
(...)
382 # propagating const-ness. Similarly, we don't require the constant to
383 # live on CPU, but we could.
384 any_constant = pytree.tree_any_only(_ProxyTensor, lambda t: t.constant is not None, (f_args, f_kwargs))
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_ops.py:448, in OpOverload.__call__(self, *args, **kwargs)
447 def __call__(self, *args, **kwargs):
--> 448 return self._op(*args, **kwargs or {})
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs)
18 simple_call_counter[fn.__qualname__] = 0
19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 20 return fn(*args, **kwargs)
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1250, in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs)
1248 assert self not in _get_current_dispatch_mode_stack(), func
1249 try:
-> 1250 return self.dispatch(func, types, args, kwargs)
1251 except TypeError:
1252 log.exception("fake tensor raised TypeError")
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1470, in FakeTensorMode.dispatch(self, func, types, args, kwargs)
1464 if (
1465 "prims::" in func._schema.name
1466 and hasattr(func, "prim_meta_impl")
1467 and not stride_incorrect_op(func)
1468 ):
1469 with self:
-> 1470 return func.prim_meta_impl(*args, **kwargs)
1472 # Users can register FakeTensor rules for custom operators
1473 # Call them if they exist.
1474 if func.name() in torch._custom_op.impl.global_registry:
File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_prims/__init__.py:1993, in _convert_element_type_meta(a, dtype)
1991 def _convert_element_type_meta(a: TensorLikeType, dtype: torch.dtype) -> TensorLikeType:
1992 # Type checks
-> 1993 assert isinstance(a, TensorLike)
1994 assert isinstance(dtype, torch.dtype)
1996 # dtype conversion preserves dense strides
```
### Versions
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:40:35) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.52
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 510.60.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz
Stepping: 7
CPU MHz: 3399.999
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 5400.00
Virtualization: VT-x
L1d cache: 1.8 MiB
L1i cache: 1.8 MiB
L2 cache: 56 MiB
L3 cache: 77 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.1.0
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.1.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] cudatoolkit 11.8.0 h4ba93d1_12 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.26.0 py311h64a7726_0 conda-forge
[conda] pyg 2.4.0 py311_torch_2.1.0_cu118 pyg
[conda] pytorch 2.1.0 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.1.0 py311_cu118 pytorch
[conda] torchtriton 2.1.0 py311 pytorch
[conda] torchvision 0.16.0 py311_cu118 pytorch
| 0 |
23 | 111,732 |
Pass `ignored_params` at the leaf FSDP wrapping class call
|
open source, release notes: distributed (fsdp)
|
Fixes #111623
| 2 |
24 | 111,731 |
Support tracing base torch_function impl
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111737
* __->__ #111731
* #111730
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
25 | 111,730 |
TensorWithTFOverride inheritance from TensorVariable
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111737
* #111731
* __->__ #111730
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
26 | 111,729 |
An OOM where there should not be any OOM.
| null |
### 🐛 Describe the bug
I see similar types of errors asked about in quite a few places, and the advice given is usually useless. The suggestion below to tweak an environment variable is similarly unhelpful.
What confounds me is that the attempted allocation is tiny compared to the space still available. Why is this happening?
Traceback (most recent call last):
File "fine-tune.py", line 338, in <module>
train()
File "fine-tune.py", line 331, in train
trainer.train()
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/transformers/trainer.py", line 1591, in train
return inner_training_loop(
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/transformers/trainer.py", line 1726, in _inner_training_loop
model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1280, in prepare
result = self._prepare_deepspeed(*args)
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1662, in _prepare_deepspeed
engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/__init__.py", line 171, in initialize
engine = DeepSpeedEngine(args=args,
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 304, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1212, in _configure_optimizer
self.optimizer = self._configure_zero_optimizer(basic_optimizer)
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1473, in _configure_zero_optimizer
optimizer = DeepSpeedZeroOptimizer(
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 509, in __init__
self.initialize_optimizer_states()
File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 644, in initialize_optimizer_states
self.optimizer.step()
File "/home/developer/pytorch/torch/optim/lr_scheduler.py", line 69, in wrapper
return wrapped(*args, **kwargs)
File "/home/developer/pytorch/torch/optim/optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "/home/developer/pytorch/torch/optim/optimizer.py", line 33, in _use_grad
ret = func(self, *args, **kwargs)
File "/home/developer/pytorch/torch/optim/adamw.py", line 171, in step
adamw(
File "/home/developer/pytorch/torch/optim/adamw.py", line 321, in adamw
func(
File "/home/developer/pytorch/torch/optim/adamw.py", line 564, in _multi_tensor_adamw
exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 134.00 MiB (GPU 0; 11.93 GiB total capacity; 4.48 GiB already allocated; 6.69 GiB free; 4.77 GiB allowed; 4.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
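A hedged diagnostic sketch (not a fix): comparing what the driver reports as free with what the caching allocator has allocated and reserved, just before the failing step, can show whether the "allowed" cap in the message (e.g. a per-process memory fraction) or fragmentation is the limiting factor.
```python
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()
print(f"driver free: {free_bytes / 2**30:.2f} GiB of {total_bytes / 2**30:.2f} GiB")
print(f"allocated:   {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
print(f"reserved:    {torch.cuda.memory_reserved() / 2**30:.2f} GiB")
print(torch.cuda.memory_summary(abbreviated=True))
```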
### Versions
UBuntu 20.04
Python 3.8
Anaconda environment
Inside a docker
| 0 |
27 | 111,728 |
Not Implemented Issue
| null |
### 🚀 The feature, motivation and pitch
NotImplementedError: The operator 'aten::_unique2' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
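A minimal hedged sketch of the temporary workaround the message suggests; the variable generally has to be set before `torch` is imported for the fallback to take effect:
```python
import os

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # must precede the torch import

import torch

x = torch.randint(0, 5, (10,), device="mps")
print(torch.unique(x))  # aten::_unique2 falls back to the CPU instead of raising
```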
### Alternatives
Please add this
### Additional context
_No response_
| 0 |
28 | 111,727 |
[TESTING] Check Triton update after elementwise dedup fix
|
ciflow/trunk, topic: not user facing, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111727
This PR is patched over the current Triton pin: https://github.com/openai/triton/pull/2512 .
| 1 |
29 | 111,726 |
[dynamo] Remove VariableTracker.propagate
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111726
* #111725
* #111415
* #111614
* #111717
* #111306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
30 | 111,725 |
[dynamo] Remove VariableTracker.add_options
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111726
* __->__ #111725
* #111415
* #111614
* #111717
* #111306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
31 | 111,724 |
[torchx] Do not terminate parent process if exit code from child isn't valid
|
fb-exported
|
Summary:
There's no reason to terminate the parent process while trying to find the name of the signal received by the child process.
Let's make sure this is handled properly, which will ensure that the parent process can handle child failures.
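A hedged sketch of the kind of defensive lookup described here (the helper name and shape are assumptions, not the actual torchx code):
```python
import signal

def describe_exit(returncode: int) -> str:
    # Negative return codes conventionally encode the terminating signal
    if returncode < 0:
        try:
            return f"terminated by signal {signal.Signals(-returncode).name}"
        except ValueError:
            # Unknown / invalid code: report it instead of crashing the parent
            return f"terminated by unknown signal {-returncode}"
    return f"exited with code {returncode}"
```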
Test Plan: Unit tests.
Differential Revision: D50516668
| 6 |
32 | 111,722 |
Add cudagraph_mark_step_begin in torch.compiler, reference in error message
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111722
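Assuming the API lands as `torch.compiler.cudagraph_mark_step_begin()`, a hedged usage sketch with cudagraphs enabled via `mode="reduce-overhead"` (requires a CUDA device) might look like:
```python
import torch

@torch.compile(mode="reduce-overhead")
def step(x):
    return x * 2

x = torch.randn(8, device="cuda")
for _ in range(3):
    # Mark the start of a new iteration so cudagraph trees know earlier
    # outputs may now be overwritten (API name taken from this PR's title).
    torch.compiler.cudagraph_mark_step_begin()
    out = step(x)
```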
cc @chauhang
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
33 | 111,721 |
Constrain sdpa to fx strides
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111721
Fix for https://github.com/pytorch/pytorch/issues/109607. sdpa requires last dimension strides to be 1. Add constraint so that we run the op with the strides we observed in tracing.
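A hedged illustration of the stride requirement (not the inductor change itself): a transpose leaves the last dimension with a non-unit stride, and `.contiguous()` restores the layout the fused kernels expect.
```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 4, 16, 64)
k = torch.randn(2, 4, 16, 64)
v = torch.randn(2, 4, 64, 16).transpose(-1, -2)  # shape (2, 4, 16, 64)

print(v.stride(-1))  # not 1 after the transpose

# Making v contiguous gives the last dimension a stride of 1 again
out = F.scaled_dot_product_attention(q, k, v.contiguous())
```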
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
34 | 111,719 |
add dry run metrics to td strategies
|
topic: not user facing
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111719
Creates a decorator in order to emit metrics for dry runs on target determination strategies. @ZainRizvi does this seem reasonable?
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at c90a623</samp>
Add a new module `dry_run.py` to support dry run mode for test dependency strategies. This mode allows testing the logic and performance of different strategies without actually running the tests. This can help improve testing efficiency and quality.
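A hypothetical sketch of such a decorator (names like `emit_metric` and the strategy signature are assumptions, not the PR's actual helpers):
```python
import functools
import time

def dry_run_metrics(emit_metric):
    # Wrap a TD strategy so each dry-run invocation emits timing/count metrics
    def decorator(strategy_fn):
        @functools.wraps(strategy_fn)
        def wrapper(tests, *args, **kwargs):
            start = time.time()
            ranking = strategy_fn(tests, *args, **kwargs)
            emit_metric({
                "strategy": strategy_fn.__name__,
                "num_tests_in": len(tests),
                "num_tests_ranked": len(ranking),
                "duration_s": time.time() - start,
            })
            return ranking
        return wrapper
    return decorator
```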
| 1 |
35 | 111,718 |
Wrong way of checking if CustomModule is a subclass of torch.nn.Module
| null |
### 🐛 Describe the bug
When I build my custom module and try to add it to a sequential text-processing pipeline with [torchtext.transforms.Sequential](https://pytorch.org/text/stable/transforms.html#torchtext.transforms.Sequential), it raises an error even though I am subclassing correctly.
This is a fragment of the error stack trace:
[screenshot: error stack trace]
When I go directly to the code to see what happens, I find that the check for whether the provided module is a subclass of torch.nn.Module looks like this:
[screenshot: the offending check in the source]
This is a mistake, because the function for a subclass check is 'issubclass', not 'isinstance'. I changed the code and it worked as needed, so please check this bug out.
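A small illustration of the difference (hedged; which check is right depends on whether instances or classes are being passed to `Sequential`):
```python
import torch

class CustomModule(torch.nn.Module):
    def forward(self, x):
        return x

print(isinstance(CustomModule(), torch.nn.Module))  # True: an *instance* is a Module
print(issubclass(CustomModule, torch.nn.Module))     # True: the *class* subclasses Module
print(isinstance(CustomModule, torch.nn.Module))     # False: the class object is not an instance
```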
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 13.2.1 20230801
Clang version: 16.0.6
CMake version: version 3.27.5
Libc version: glibc-2.38
Python version: 3.11.5 (main, Sep 2 2023, 14:16:33) [GCC 13.2.1 20230801] (64-bit runtime)
Python platform: Linux-6.1.53-1-MANJARO-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050 Ti
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 3600X 6-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 70%
CPU max MHz: 4408,5928
CPU min MHz: 2200,0000
BogoMIPS: 7603,86
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] pytorch-lightning==2.1.0
[pip3] torch==2.1.0
[pip3] torchaudio==2.1.0
[pip3] torchdata==0.7.0
[pip3] torchmetrics==1.2.0
[pip3] torchtext==0.16.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] Could not collect
| 0 |
36 | 111,717 |
[dynamo] Lazily construct symbolic_locals
|
module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111726
* #111725
* #111415
* #111614
* __->__ #111717
* #111306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
37 | 111,716 |
Cannot pip install torch 2.0.1
| null |
### 🐛 Describe the bug
I was trying to follow the instructions on the [webpage](https://pytorch.org/get-started/previous-versions/) to install torch 2.0.1 using pip.
```
# ROCM 5.4.2 (Linux only)
pip install torch==2.0.1+rocm5.4.2 torchvision==0.15.2+rocm5.4.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/rocm5.4.2
# CUDA 11.7
pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu117
# CUDA 11.8
pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
# CPU only
pip install torch==2.0.1+cpu torchvision==0.15.2+cpu torchaudio==2.0.2 --index-url https://download
```
But it throws an error, e.g. for installing 2.0.1+cu117:
```
ERROR: Could not find a version that satisfies the requirement torch==2.0.1+cu117 (from versions: 1.13.0+cu117, 1.13.1+cu117)
ERROR: No matching distribution found for torch==2.0.1+cu117
```
Commands for other versions above throw similar errors.
### Versions
Attempt to install 2.0.1
| 0 |
38 | 111,715 |
[Export] Don't serialize missing args with default value
|
fb-exported, topic: not user facing, module: inductor, ciflow/inductor, module: export
|
Summary: Per https://docs.google.com/document/d/1FzWm-sHYwmRi3x_g036kOxd99KaYquUsA-L5JwOn8ys/edit
I wonder if this would break executorch? @larryliu0820
I see exir/serialize.py using export's GraphModuleSerializer.
Test Plan: Existing CIs
Differential Revision: D50519217
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 4 |
39 | 111,713 |
[dynamo] generic `is_` type shortcut is not appropriately guarded
|
bug, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
This hack
https://github.com/pytorch/pytorch/blob/5a2f97dee80ca27b732e12b61359d6e475a9c03b/torch/_dynamo/variables/builtin.py#L1310
in https://github.com/pytorch/pytorch/pull/104840
is too strong.
### Use-Cases
Support for tracing `is_` when there's type mismatch: https://github.com/pytorch/pytorch/issues/109504
Part of the way to: https://github.com/pytorch/pytorch/issues/111550
### Solution
Perhaps installing an aliasing check and guarding on the value as a Python constant would be good enough to handle the generic `is_` check without resorting to hacks like this.
### Repro
```python
import collections
def fn(x, y, z):
z += 1
return x is y, z
x = collections.OrderedDict({1: 2})
y = {1: 2}
z = torch.tensor([1])
opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
assert opt_fn(x, y, z) == fn(x, y, z) # Compile with x is y == False
assert opt_fn(x, x, z) == fn(x, x, z) # Does not recompile as input types are not guarded
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
### Versions
main
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 0 |
40 | 111,712 |
Re-enable some embedded bag tests
|
topic: not user facing
|
They were temporary disabled in 2019 by https://github.com/pytorch/pytorch/pull/26599
Maybe it has been fixed already...
<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at 1e49d84</samp>
> _`TestEmbeddingNN`_
> _CUDA tests restored_
> _Bug fixed in autumn breeze_
| 1 |
41 | 111,711 |
[aotinductor] 14k models: CppCompileError: C++ compile error
|
triaged, oncall: pt2
|
```
25 errors like: CppCompileError: C++ compile error (example ./generated/test_krrish94_nerf_pytorch.py:SinThetaByTheta # pytest ./generated/test_krrish94_nerf_pytorch.py -k test_001)
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
42 | 111,710 |
`fbgemm` update causes failures in `test_embedding.py`
|
high priority, triage review, module: regression, module: third_party
|
### 🐛 Describe the bug
```
% python3 test/nn/test_embedding.py -k test_EmbeddingBag_per_sample_weights_and_new_offsets_cpu_int32_int32_bfloat16
...
AssertionError: Tensor-likes are not close!
Mismatched elements: 4 / 10 (40.0%)
Greatest absolute difference: 9.1875 at index (3, 1) (up to 0.1 allowed)
Greatest relative difference: 0.57421875 at index (3, 1) (up to 0 allowed)
```
Reverting https://github.com/pytorch/FBGEMM/pull/1851 fixes the problem
### Versions
Nightly
cc @ezyang @gchanan @zou3519 @kadeng
| 2 |
43 | 111,709 |
lintrunner job time keeps growing
|
triaged, module: devx
|
For example:
Sep 29 https://hud.pytorch.org/pytorch/pytorch/commit/bc047ec906d8e1730e2ccd8192cef3c3467d75d1 - 18 mins
Oct 06 https://hud.pytorch.org/pytorch/pytorch/commit/65d40a72c4ff3cf5218dffda8b5da60ea2163890 - 22 mins
Today, Oct 20 https://hud.pytorch.org/pytorch/pytorch/commit/303c54dbd9921d78ed01116547c063b450338c74 - 26 mins
If we want to reduce the time, we need to investigate what's taking so long.
Two possible candidates are ruff and clangtidy.
It would be nice to have time split by linter in the lintrunner job logs.
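A rough local sketch for getting that per-linter split (the `--take` / `--all-files` flags are assumptions to be checked against `lintrunner --help`):
```python
import subprocess
import time

for linter in ["RUFF", "CLANGTIDY"]:
    start = time.time()
    subprocess.run(["lintrunner", "--take", linter, "--all-files"], check=False)
    print(f"{linter}: {time.time() - start:.1f}s")
```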
cc @ZainRizvi @huydhn @clee2000 @PaliC @malfet
| 3 |
44 | 111,706 |
DISABLED test_meta_outplace_fft_ifft_cpu_uint8 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_uint8&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17905842710).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 36 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_fft_ifft_cpu_uint8`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
45 | 111,704 |
Add more flexibility on print / output console
| null |
### 🚀 The feature, motivation and pitch
For many debugging use cases, printing tensors to the console is clearly useful. The current C++ API fixes the `std::cout` precision for floats to 4 decimals.
> float can hold up to 7 decimal digits accurately while double can hold up to 15
The only 'user' parameter of `torch::print()` is the `int64_t linesize = 80`.
### Alternatives
I see at least two major options that would be useful for common usage:
1. set the number of decimals for floating-point numbers (see `std::setprecision`).
2. make scientific output optional rather than automatic (see `std::fixed` / `std::scientific`).
### Additional context
I currently output (`std::cout << tensor`) two different 8x8 float tensors for comparison:
```
0.2360 0.0258 0.1689 0.1564 -0.2261 0.0567 0.1844 0.3033
0.2940 -0.2500 -0.0653 -0.0805 0.2112 -0.1635 0.2915 0.3023
0.2912 0.0944 0.1377 -0.1824 -0.1882 -0.2844 0.0189 -0.2718
-0.2812 -0.0292 -0.3035 -0.0724 0.1665 -0.2391 0.0724 -0.1974
-0.2716 0.1460 -0.3044 0.1312 0.2848 0.1549 0.2815 0.1874
0.0980 -0.1967 0.1135 0.2974 -0.1395 0.2800 -0.2298 0.2627
0.2153 0.1423 0.2779 0.0157 -0.3499 0.1718 0.2147 0.2121
0.2856 0.2004 0.0951 -0.0757 -0.3016 0.0643 -0.2685 -0.1260
[ CUDAFloatType{8,8} ]
0.2360 0.0258 0.1689 0.1564 -0.2261 0.0567 0.1844 0.3033
0.2940 -0.2500 -0.0653 -0.0805 0.2112 -0.1635 0.2915 0.3023
0.2912 0.0944 0.1377 -0.1824 -0.1882 -0.2844 0.0189 -0.2718
-0.2812 -0.0292 -0.3035 -0.0724 0.1665 -0.2391 0.0724 -0.1973
-0.2716 0.1460 -0.3044 0.1312 0.2848 0.1549 0.2815 0.1874
0.0980 -0.1967 0.1135 0.2974 -0.1395 0.2800 -0.2298 0.2627
0.2153 0.1423 0.2779 0.0157 -0.3499 0.1718 0.2147 0.2121
0.2856 0.2004 0.0951 -0.0757 -0.3016 0.0643 -0.2685 -0.1260
[ CUDAFloatType{8,8} ]
```
But `torch::allclose(actual, expected, rtol, atol);` (where `rtol = atol = 1e-5`) gives me `false`.
That `false` is probably correct, but the console output doesn't help with a quick check.
!!! Thank you for Torch !!!
| 0 |
46 | 111,695 |
Runnings SentenceTransformer encoding step causes Docker containers on Mac (Silicon) to crash with code 139
| null |
### 🐛 Describe the bug
Hi! Hopefully there isn't a similar issue already open. I couldn't find one after a search through the issues list. Feel free to mark as duplicate/close if it already exists.
I've created this repository with a minimal setup to reproduce the error: https://github.com/sabaimran/repro-torch-bug. You just have to clone it and run `docker-compose up` to see the error. Basically it runs the script below in a minimal Docker container:
```python
import logging
from typing import List

import torch
from langchain.embeddings import HuggingFaceEmbeddings

logger = logging.getLogger(__name__)


class EmbeddingsModel:
    def __init__(self):
        self.model_name = "sentence-transformers/multi-qa-MiniLM-L6-cos-v1"
        encode_kwargs = {"normalize_embeddings": True}
        if torch.cuda.is_available():
            # Use CUDA GPU
            device = torch.device("cuda:0")
        elif torch.backends.mps.is_available():
            # Use Apple M1 Metal Acceleration
            device = torch.device("mps")
        else:
            device = torch.device("cpu")
        self.device = device
        model_kwargs = {"device": device}
        self.embeddings_model = HuggingFaceEmbeddings(
            model_name=self.model_name, encode_kwargs=encode_kwargs, model_kwargs=model_kwargs
        )

    def embed_documents(self, docs: List[str]):
        logger.info(f"Using device: {self.device} to embed {len(docs)} documents")
        return self.embeddings_model.embed_documents(docs)


model = EmbeddingsModel()
embeddings = model.embed_documents(["this is a document", "so is this"])
print(f"Created embeddings of length {len(embeddings)}")
```
If you run this code inside of a Docker container (with the appropriate dependencies), it will fail with exit code 139.
Pinning the `torch` package to `2.0.1` circumvents the error. See this other relevant issue: https://github.com/docker/for-mac/issues/7016
### Versions
Collecting environment information...
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.2.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.11.4 (main, Jul 10 2023, 18:52:37) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.1
[pip3] torch==2.1.0
[pip3] torchvision==0.16.0
[conda] Could not collect
| 0 |
47 | 111,694 |
[Release/2.1.1][ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935
|
module: onnx, open source, release notes: onnx
|
Original PR: https://github.com/pytorch/pytorch/pull/110956
Fixes https://github.com/pytorch/pytorch/issues/110597
Summary:
* Generic code: torch._C.Value.node().mustBeNone() is encapsulated into the high-level API JitScalarType.from_value; _is_none was also extended to accept either None or torch._C.Value.node().mustBeNone(), so users don't have to call into the TorchScript API manually when implementing operators
* Specific to new_zeros (and *_like and new_* ops): when checking dtype, we must always use _is_none, which applies the check proposed by https://github.com/pytorch/pytorch/pull/110935
| 1 |
48 | 111,693 |
[export] 14k models: AssertionError: graph-captured input # 2, of type <class 'torch.nn.parameter.Parameter'>, is not among original inputs of types
|
triaged, oncall: pt2, module: export
|
167 errors like: AssertionError: graph-captured input # 2, of type <class 'torch.nn.parameter.Parameter'>, is not among original inputs of types: (<class 'torch.Tensor'>) (example ./generated/test_XPixelGroup_BasicSR.py:SPADEResnetBlock # pytest ./generated/test_XPixelGroup_BasicSR.py -k test_030)
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 1 |
49 | 111,692 |
DISABLED test_sigmoid (__main__.TestQuantizedOps)
|
oncall: quantization, triaged, module: macos, skipped
|
Platforms: mac, macos
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_quantization.py%3A%3ATestQuantizedOps%3A%3Atest_sigmoid)).
This test is failing on MacOS x86 https://hud.pytorch.org/pytorch/pytorch/commit/ca7d084ff9b67675cfff0d175ea6b96fcedc4950
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @malfet @albanD
| 1 |
50 | 111,691 |
[aotinductor] 14k models: TypeError: make_boxed_func..g() missing 1 required positional argument: 'args'
|
triaged, oncall: pt2
|
347 errors like: TypeError: make_boxed_func..g() missing 1 required positional argument: 'args' (example ./generated/test_ludwig_ai_ludwig.py:SequenceReducer # pytest ./generated/test_ludwig_ai_ludwig.py -k test_015)
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
51 | 111,689 |
[Quantization] Add a test for QAT + PTQ selective quantization in
|
release notes: quantization
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111689
xnnpack quantizer
Summary:
For some workflows you want to quantize some parts of the model via qat
and then continue eager mode training. After training, you want to
export the whole model and perform PTQ on the rest.
Test Plan:
test added
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: [D50510480](https://our.internmc.facebook.com/intern/diff/D50510480)
| 3 |
52 | 111,688 |
Document torch.from_file and fix UntypedStorage.from_file docs
|
release notes: python_frontend, topic: docs
|
Fixes https://github.com/pytorch/pytorch/issues/37439
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111688
cc @albanD
| 1 |
53 | 111,687 |
[Release/2.1.1][DCP] Remove _shard_tensor() call in load_sharded_optimizer_state_dict in optimizer.py #111096
| null |
Cherry pick into 2.1.1
[original PR: #111096](https://github.com/pytorch/pytorch/pull/111096)
_shard_tensor() calls into dist.all_gather_object() and this is causing optimizer state dict loading to be super slow. Workaround: call FSDP._shard_utils._create_chunk_sharded_tensor() to construct ShardedTensor without any communication.
| 1 |
54 | 111,686 |
RecursionError for backend='inductor' with a loop
|
oncall: pt2
|
### 🐛 Describe the bug
Running the following code causes RecursionError.
It's not a very practical example, but it works totally fine in eager mode and with `torch.jit.script`.
``` python
import torch
class Net(torch.nn.Module):
def forward(self, x):
for i in range(1000):
x = 1.0 * x
return x
net = Net()
net = torch.compile(net)
x = torch.tensor([1.0])
print(net(x))
```
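A possible (unverified) workaround sketch: the failure is a plain Python `RecursionError` raised inside Inductor's lowering, so raising the interpreter's recursion limit before compiling may let this admittedly extreme graph through. The limit value below is an arbitrary assumption.
```python
import sys
import torch

sys.setrecursionlimit(100_000)  # assumed to be large enough; the default is 1000

class Net(torch.nn.Module):
    def forward(self, x):
        for _ in range(1000):
            x = 1.0 * x
        return x

net = torch.compile(Net())
print(net(torch.tensor([1.0])))
```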
### Error logs
...
```
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 397, in <listcomp>
return fn(*[load(index) for load in loaders])
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 397, in inner_fn
return fn(*[load(index) for load in loaders])
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 397, in <listcomp>
return fn(*[load(index) for load in loaders])
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/ir.py", line 2393, in loader
return ops.load(self.name, indexer(index))
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 232, in inner
return OpsWrapper._wrap(getattr(_ops, name)(*new_args, **new_kwargs))
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 132, in inner
line = getattr(self.parent_handler, name)(*args, **kwargs)
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 69, in inner
fargs = [_arg_str(a) for a in args]
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 69, in <listcomp>
fargs = [_arg_str(a) for a in args]
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 59, in _arg_str
return sympy_str(a)
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/utils.py", line 395, in sympy_str
return str(expr)
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/core/_print_helpers.py", line 29, in __str__
return sstr(self, order=None)
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/printer.py", line 372, in __call__
return self.__wrapped__(*args, **kwargs)
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/str.py", line 999, in sstr
p = StrPrinter(settings)
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/printer.py", line 261, in __init__
self._settings = self._get_initial_settings()
File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/printer.py", line 252, in _get_initial_settings
settings = cls._default_settings.copy()
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RecursionError: maximum recursion depth exceeded while calling a Python object
```
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Fedora release 36 (Thirty Six) (x86_64)
GCC version: (conda-forge gcc 10.3.0-16) 10.3.0
Clang version: 11.1.0 (https://github.com/conda-forge/clangdev-feedstock 2816c2cf231a2d3a6d621af9bbb2c590c9e63fe7)
CMake version: version 3.26.1
Libc version: glibc-2.35
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-6.2.15-100.fc36.x86_64-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 52%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 6986.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.6.5
[pip3] flake8-comprehensions==3.3.0
[pip3] flake8-executable==2.0.4
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.5.0
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] numpydoc==1.5.0
[pip3] onnx==1.14.1
[pip3] pytorch-sphinx-theme==0.0.19
[pip3] torch==2.1.0
[pip3] torcheval-nightly==2022.12.27
[pip3] torchsnapshot-nightly==2022.11.28
[pip3] torchtnt==0.0.4
[pip3] triton==2.1.0
[conda] magma-cuda116 2.6.1 0 pytorch
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.1 pypi_0 pypi
[conda] numpydoc 1.5.0 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.19 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torcheval-nightly 2022.12.27 pypi_0 pypi
[conda] torchfix 0.1.1 pypi_0 pypi
[conda] torchsnapshot-nightly 2022.11.28 pypi_0 pypi
[conda] torchtnt 0.0.4 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 2 |
55 | 111,685 |
Disable dynamo when running generated opcheck tests
|
fb-exported
|
Summary: Use `TORCHDYNAMO_DISABLE=1` when running generated opcheck tests. Enable some `fbgemm::pack_segments` tests that errored out (with error `RuntimeError: expected int but got s0*s1**2`) because dynamo was being run in the opcheck tests.
Test Plan: `parsh -v --build-flags mode/dev-nosan //deeplearning/fbgemm/fbgemm_gpu:sparse_ops_test` then `run_tests("test_pack_segments")`
Differential Revision: D50508958
| 5 |
56 | 111,682 |
[BE]: ruff apply rule PLW1510 to find silent subprocess errors
|
open source, better-engineering, NNC, release notes: jit, module: dynamo, ciflow/inductor
|
Opts in to check=True or check=False to ensure non-zero exit codes are propagated.
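For context, this is the pattern the rule flags (an illustrative example, not taken from this PR's diff):
```python
import subprocess

# Before: a non-zero exit code is silently ignored (PLW1510).
subprocess.run(["make", "build"])

# After: opt in explicitly, so failures either raise or are knowingly ignored.
subprocess.run(["make", "build"], check=True)   # raises CalledProcessError on failure
subprocess.run(["make", "build"], check=False)  # documented as intentionally ignored
```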
cc @EikanWang @jgong5 @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
57 | 111,681 |
Make require_stride_order peek into AliasedLayout
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111681
Summary:
`require_stride_order` doesn't know how to handle storage with `AliasedLayout`. It always resorts to a copy even when the view refers to a storage with `FixedLayout`. This causes an unnecessary allocation + copy for collective outputs. Peeking into `AliasedLayout` in `require_stride_order` seems to be the proper way to address the issue.
Original program:
```python
import tempfile
import torch
import torch.distributed as dist
from torch.distributed._functional_collectives import * # noqa
from torch._inductor.utils import run_and_get_triton_code
def func(arg: torch.Tensor) -> torch.Tensor:
buf0 = arg + 42
out0 = torch.ops.c10d_functional.all_reduce(buf0, "avg", "default", [0], 1)
out0 = torch.ops.c10d_functional.wait_tensor(out0)
return out0
if __name__ == "__main__":
with tempfile.NamedTemporaryFile(delete=False) as tmpf:
dist.init_process_group(
backend="nccl", init_method=f"file://{tmpf.name}", rank=0, world_size=1
)
device = torch.device("cuda:0")
compiled = torch.compile(func)
print(run_and_get_triton_code(compiled, torch.rand(4, 4, device=device)))
torch.cuda.synchronize()
dist.destroy_process_group()
```
Before:
```python
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (4, 4), (4, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0) # no-op to ensure context
buf0 = empty_strided((4, 4), (4, 1), device='cuda', dtype=torch.float32)
# Source Nodes: [buf0], Original ATen: [aten.add]
stream0 = get_cuda_stream(0)
triton_poi_fused_add_0.run(arg0_1, buf0, 16, grid=grid(16), stream=stream0)
del arg0_1
buf1 = buf0; del buf0 # reuse
buf2_pg = c10d._find_or_create_pg_by_ranks_and_tag('default', [0], 1)
buf2 = buf1
buf2_work = dist.all_reduce(buf2, async_op=True, group=buf2_pg, op=fun_col_impl._str_to_reduce_op('avg'))
fun_col_impl._register_tensor_work(buf2, buf2_work)
buf1 = _wait_tensor(buf1)
buf3 = buf1
buf4 = empty_strided((4, 4), (4, 1), device='cuda', dtype=torch.float32)
# Source Nodes: [out0_1], Original ATen: [c10d_functional.wait_tensor]
triton_poi_fused_wait_tensor_1.run(buf3, buf4, 16, grid=grid(16), stream=stream0)
del buf1
del buf3
return (buf4, )
```
After:
```python
def call(args):
arg0_1, = args
args.clear()
assert_size_stride(arg0_1, (4, 4), (4, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0) # no-op to ensure context
buf0 = empty_strided((4, 4), (4, 1), device='cuda', dtype=torch.float32)
# Source Nodes: [buf0], Original ATen: [aten.add]
stream0 = get_cuda_stream(0)
triton_poi_fused_add_0.run(arg0_1, buf0, 16, grid=grid(16), stream=stream0)
del arg0_1
buf1 = buf0; del buf0 # reuse
buf2_pg = c10d._find_or_create_pg_by_ranks_and_tag('default', [0], 1)
buf2 = buf1
buf2_work = dist.all_reduce(buf2, async_op=True, group=buf2_pg, op=fun_col_impl._str_to_reduce_op('avg'))
fun_col_impl._register_tensor_work(buf2, buf2_work)
buf1 = _wait_tensor(buf1)
buf3 = buf1
del buf3
return (buf1, )
```
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
58 | 111,680 |
[pytorch-vulkan] Support zero-dim
|
fb-exported, module: vulkan, release notes: vulkan, ciflow/periodic
|
Summary:
1. Add zero-dim (Tensor with 1 element) support.
2. New operator `_local_scalar_dense` that maps a zero-dim tensor to a Scalar
3. `sum_dim`:
3.1. Add zero-dim support.
3.2. Fix a bug with negative indices when handling multi-dim reduction calls.
3.3. Add unit tests to cover the new cases.
4. Add `aten::sum` support.
5. Fix a bug in `add_tensor` (and other binary ops): when `other` is zero-dim, we now use broadcast instead.
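From the Python side, the new coverage roughly corresponds to the following usage (a hypothetical sketch; it assumes a Vulkan-enabled build where tensors can be moved with `.to("vulkan")`, and is not part of this diff's test plan):
```python
import torch

x = torch.ones(2, 3).to("vulkan")
s = x.sum()       # zero-dim Vulkan tensor (aten::sum support)
v = s.item()      # lowers to the new _local_scalar_dense path
print(v)          # 6.0
```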
Test Plan:
## Devserver
Full Paste: P858982150
```
[yipjustin@31799.od ~/fbsource (8593e7559)]$ LD_LIBRARY_PATH=third-party/swiftshader/lib/linux-x64/ buck2 run fbcode/mode/dev-nosan -c pt.has_backtraces=1 //xplat/caffe2:pt_vulkan_api_test_bin --
File changed: fbsource//xplat/caffe2/aten/src/ATen/test/vulkan_api_test.cpp
Buck UI: https://www.internalfb.com/buck2/90cad0ff-ac98-4dbf-8d6f-0e419c06208d
Network: Up: 43KiB Down: 1.4MiB (reSessionID-dfc3a318-fd1a-4ad6-b077-c454ebb4c6a8)
Jobs completed: 6. Time elapsed: 26.4s.
Cache hits: 0%. Commands: 2 (cached: 0, remote: 1, local: 1)
BUILD SUCCEEDED
Running main() from third-party/googletest/1.11.0/googletest/googletest/src/gtest_main.cc
[==========] Running 385 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 385 tests from VulkanAPITest
[ RUN ] VulkanAPITest.zero_size_tensor
[ OK ] VulkanAPITest.zero_size_tensor (9 ms)
[ RUN ] VulkanAPITest.zero_dim_tensor_1
[ OK ] VulkanAPITest.zero_dim_tensor_1 (84 ms)
[ RUN ] VulkanAPITest.zero_dim_tensor_2
[ OK ] VulkanAPITest.zero_dim_tensor_2 (22 ms)
[ RUN ] VulkanAPITest.local_scalar_dense
[ OK ] VulkanAPITest.local_scalar_dense (10 ms)
...
[ OK ] VulkanAPITest.lstm_prepack_success (2 ms)
[ RUN ] VulkanAPITest.querypool_flushed_shader_log
xplat/caffe2/aten/src/ATen/test/vulkan_api_test.cpp:7484: Skipped
QueryPool is not available
[ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log (0 ms)
[----------] 385 tests from VulkanAPITest (46915 ms total)
[----------] Global test environment tear-down
[==========] 385 tests from 1 test suite ran. (46915 ms total)
[ PASSED ] 382 tests.
[ SKIPPED ] 1 test, listed below:
[ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log
[ FAILED ] 2 tests, listed below:
[ FAILED ] VulkanAPITest.conv2d_pw_prepack
[ FAILED ] VulkanAPITest.conv2d_pw_prepack_bc
2 FAILED TESTS
YOU HAVE 7 DISABLED TESTS
```
## M1 MAC
P859975219
```
buck run //xplat/caffe2:pt_vulkan_api_test_binAppleMac\#macosx-arm64 --target-platforms ovr_config//platform/macos:arm64-fbsource -- --gtest_filter="*"
Using additional configuration options from .buckconfig.local
Building: finished in 0.2 sec (100%) 269/2875 jobs, 0/2875 updated
Total time: 0.2 sec
BUILD SUCCEEDED
Running main() from third-party/googletest/1.11.0/googletest/googletest/src/gtest_main.cc
[==========] Running 384 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 384 tests from VulkanAPITest
[ RUN ] VulkanAPITest.zero_size_tensor
[ OK ] VulkanAPITest.zero_size_tensor (40 ms)
[ RUN ] VulkanAPITest.zero_dim_tensor_1
[ OK ] VulkanAPITest.zero_dim_tensor_1 (7 ms)
[ RUN ] VulkanAPITest.zero_dim_tensor_2
[ OK ] VulkanAPITest.zero_dim_tensor_2 (1 ms)
[ RUN ] VulkanAPITest.local_scalar_dense
[ OK ] VulkanAPITest.local_scalar_dense (0 ms)
[ RUN ] VulkanAPITest.copy_to_texture
[ OK ] VulkanAPITest.copy_to_texture (45 ms)
...
[ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log (0 ms)
[----------] 384 tests from VulkanAPITest (5127 ms total)
[----------] Global test environment tear-down
[==========] 384 tests from 1 test suite ran. (5127 ms total)
[ PASSED ] 382 tests.
[ SKIPPED ] 1 test, listed below:
[ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log
[ FAILED ] 1 test, listed below:
[ FAILED ] VulkanAPITest.normal_large
1 FAILED TEST
YOU HAVE 5 DISABLED TESTS
```
Differential Revision: D50347338
| 2 |
59 | 111,679 |
[Release/2.1.1] [Test][ShardedTensor] Add test for corner case for chunk sharding spec #109626
|
topic: not user facing
|
Cherry pick https://github.com/pytorch/pytorch/pull/109626 into release/2.1.1
This adds a test case to cover the corner case of empty shards when creating ShardedTensor.
Original fix contributed by a user.
https://github.com/pytorch/pytorch/pull/108915
Cherry-pick PR for the fix above: https://github.com/pytorch/pytorch/pull/108915
| 1 |
60 | 111,678 |
AOT Inductor Does not Work with minifier
|
ciflow/inductor
|
### 🐛 Describe the bug
Because AOT Inductor attaches parameters to the GraphModule, it does not currently work with minifier.
> File "/opt/dlami/nvme/eellison/work/pytorch/torch/_dynamo/repro/after_aot.py", line 444, in repro_common
assert not any(mod.named_parameters())
### Versions
master
| 0 |
61 | 111,677 |
[2.1.1] Update NCCL to 2.18.6 for upstream bugfix
|
open source, topic: not user facing
|
This updates NCCL in PyTorch 2.1 with one tiny bugfix from this commit: https://github.com/NVIDIA/nccl/commit/4365458757e4107ecbf629b2fd6e0e19a5d237c2 It's a minor bugfix release; otherwise everything is exactly the same as the release currently in PyTorch. We already updated to 2.19 upstream.
| 1 |
62 | 111,676 |
[export] self.buffer += 1 raises error
|
triaged, module: export
|
```
import torch
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
self.register_buffer("foo", torch.ones(2, 3))
def forward(self, x: torch.Tensor) -> torch.Tensor:
self.foo += x
return self.foo
torch.export(Mod(), (torch.ones(2, 3),))
```
produces
```
Mutating module attribute foo during export.
from user code:
File "/tmp/ipykernel_578241/3307013751.py", line 9, in forward
self.foo += x
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
Changing `self.foo += x` to the equivalent `self.foo.add_(x)` works as expected.
The motivation behind disallowing attribute mutation makes sense. However, buffers should be mutable, and dynamo should be smart enough to recognize that `+=` desugars to `add_` when done on a tensor.
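For completeness, a minimal sketch of the workaround mentioned above (same module, with the in-place op spelled out):
```python
import torch

class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("foo", torch.ones(2, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.foo.add_(x)  # equivalent to `self.foo += x`, but exports without error
        return self.foo
```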
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 0 |
63 | 111,674 |
Dynamo Compile samples should record file/line that raised exception
| null |
### 🐛 Describe the bug
@voznesenskym and I were looking at https://fburl.com/scuba/dynamo_compile/7pzz3bi1 and we noticed that "Fail reason" doesn't include the file/line that raised the exception, which would be useful.
cc @yanboliang
### Versions
main
| 0 |
64 | 111,673 |
[quant][bc-breaking] Remove deprecated QConfigDynamic
|
release notes: quantization
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111673
Summary: QConfigDynamic was deprecated in PyTorch 1.12. It has
continued to cause confusion to users who wish to use dynamic
quantization. This commit removes this deprecated API and
requires users to use QConfig instead.
BC-breaking before:
```
qconfig = QConfigDynamic(
activation=default_dynamic_quant_observer,
weight=default_weight_observer,
)
```
BC-breaking after:
```
qconfig = QConfig(
activation=default_dynamic_quant_observer,
weight=default_weight_observer,
)
```
Test Plan:
python test/test_quantization.py
Reviewers: jerryzh168
Subscribers: jerryzh168, supriyar
| 3 |
65 | 111,669 |
Buffer overflow not prevented on MPS devices
| null |
### 🐛 Describe the bug
When indexing using an indexing tensor (or list), it is possible to read or write outside the valid range of the tensor.
Minimal example:
```
import torch
x = torch.arange(4, device=torch.device("mps"))
y = x[:2]
y[torch.tensor([3])] = -1
x[3]
```
This code should raise an IndexError and leave x unchanged, but it instead gives -1.
In this example, the overflow reaches a known memory location, but perhaps in general it can reach arbitrary memory on the GPU.
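Until this is fixed, a user-side guard can catch the bad write; a minimal sketch (assumes an MPS-enabled build, and that indices are validated against the view's length before assignment):
```python
import torch

x = torch.arange(4, device="mps")
y = x[:2]
idx = torch.tensor([3], device="mps")

if int(idx.max()) >= y.shape[0] or int(idx.min()) < -y.shape[0]:
    raise IndexError(f"index out of bounds for dimension of size {y.shape[0]}")
y[idx] = -1  # never reached in this example
```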
### Versions
Collecting environment information...
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.2 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.7
Libc version: N/A
Python version: 3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.5.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.1
[pip3] torch==2.1.0
[conda] numpy 1.26.1 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
| 0 |
66 | 111,667 |
[Release/2.1] Introduce is_big_gpu condition for test_max_autotune
|
open source, topic: not user facing, module: inductor
|
Fixes https://github.com/pytorch/pytorch/issues/111527
Other test files that rely on max_autotune mode being enabled already conditionalise the UT suite on this condition (e.g. test_select_algorithm). Proposing to add this condition for test_max_autotune.
Currently we are observing failures in these UTs on the ROCm runners, but on MI200+ these tests will pass again; context: https://github.com/pytorch/pytorch/pull/111381#issuecomment-1768048732
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
67 | 111,666 |
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::binary_cross_entropy' to ONNX opset version 14 is not supported.
|
module: onnx
|
### 🚀 The feature, motivation and pitch
Unable to export an ONNX model from https://github.com/xue-pai/FuxiCTR/tree/main/model_zoo/AFM. While exporting to ONNX, it throws torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::binary_cross_entropy'
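One workaround sometimes used for this class of error, sketched below on a stand-alone loss module (hypothetical; it assumes the model can be changed to emit logits, and `LossWrapper` is a placeholder, not FuxiCTR code): `aten::binary_cross_entropy_with_logits` does have an ONNX symbolic, so swapping the loss may make export possible.
```python
import torch
import torch.nn.functional as F

class LossWrapper(torch.nn.Module):
    def forward(self, logits, target):
        # exportable counterpart of F.binary_cross_entropy(torch.sigmoid(logits), target)
        return F.binary_cross_entropy_with_logits(logits, target)

torch.onnx.export(
    LossWrapper(), (torch.randn(4), torch.rand(4)), "loss.onnx", opset_version=14
)
```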
### Alternatives
_No response_
### Additional context
_No response_
| 1 |
68 | 111,665 |
[dynamo] Fix guard for ndarray calling `torch.as_tensor(None)`
|
open source, topic: not user facing, module: dynamo, ciflow/inductor
|
Fixes https://github.com/pytorch/pytorch/issues/111662
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @lezcano
| 1 |
69 | 111,663 |
[dynamo] Tracking: object identity
| null |
### 🚀 The feature, motivation and pitch
This covers many things:
1. Tensor Identity
2. User objects identity - usual objects, enums, builtins??
Use cases
- [ ] https://github.com/pytorch/pytorch/issues/111550
- [ ] https://github.com/pytorch/pytorch/issues/111556
Tensor Aliasing Methods and Obstacles
- [x] https://github.com/pytorch/pytorch/issues/111585
- [x] https://github.com/pytorch/pytorch/issues/111649
- [ ] https://github.com/pytorch/pytorch/issues/111544
Overall Obstacles and Discussion
- [x] https://github.com/pytorch/pytorch/issues/111542
- [ ] https://github.com/pytorch/pytorch/issues/111562 - not sure if we want to implement non-aliasing for general objects
| 0 |
70 | 111,662 |
torch.dynamo (caching?) issues with `Optional[np.ndarray]` arguments
|
module: numpy, module: dynamo
|
### 🐛 Describe the bug
```
$ cat nonz.py
import torch
import numpy as np
def fn(x=None):
if x is None:
x = np.ones(3)
return x**2
opt_fn = torch.compile(fn)
x = np.zeros((2, 2))
print(opt_fn(x))
print(opt_fn())
```
fails with
```
$ python nonz.py
[[0. 0.]
[0. 0.]]
ERROR RUNNING GUARDS fn nonz.py:9
lambda L, **___kwargs_ignored:
___guarded_code.valid and
___check_global_state() and
hasattr(__as_tensor(L['x']), '_dynamo_dynamic_indices') == False and
utils_device.CURRENT_DEVICE == None and
___skip_backend_check() or ___current_backend() == ___lookup_backend(139895339872512) and
___check_tensors(__as_tensor(L['x']), tensor_check_names=tensor_check_names)
Traceback (most recent call last):
File "nonz.py", line 20, in <module>
print(opt_fn())
File "/home/ev-br/repos/pytorch/torch/_dynamo/eval_frame.py", line 410, in _fn
return fn(*args, **kwargs)
File "<string>", line 7, in guard
RuntimeError: Could not infer dtype of NoneType
```
Curiously, exchanging the order of calls, i.e. making it
```
print(opt_fn())
print(opt_fn(x))
```
works fine and produces the correct result. Removing numpy from the equation (changing all `np.` to `torch.`) also works fine and produces the correct result.
### Versions
main
cc @mruberry @rgommers @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
71 | 111,661 |
Higher-level custom op API, V3
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111661
* #111660
* #111310
* #111659
* #111380
This PR introduces:
- a FunctionalBaseOp class.
- To define a new custom op, a user subclasses FunctionalBaseOp, adds
their device-specific implementations (by adding static methods to
their subclass), and adds an abstract impl by overriding the
`abstract` staticmethod.
- Under the hood, we take the class and all its methods and call the
corresponding torch.library.{define, impl} APIs to register it.
The class-based approach more closely resembles PyTorch C++ codegen;
we get all of the information about the operator up front in the class
and we can do things with this information, like add an "autograd not
implemented" kernel if the user didn't specify an autograd kernel (NB:
this functionality is not in this PR).
Please see the docstrings for example usages.
| 1 |
72 | 111,660 |
torch.library: Create helper function `is_functional_schema`
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111661
* __->__ #111660
* #111310
* #111659
* #111380
I will need this again soon.
Test Plan:
- existing tests
| 1 |
73 | 111,659 |
Change torch.library.impl to accept a device string
| null |
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111661
* #111660
* #111310
* __->__ #111659
* #111380
torch.library.impl now accepts a device string (e.g. "cpu", "cuda"). It
still accepts DispatchKey strings, but we no longer document this, because
using arbitrary DispatchKeys is more for the power users.
We map the device string to a DispatchKey and then register the impl for
said DispatchKey. A user may also specify multiple device strings at once
or specify "types=default" to get a CompositeExplicitAutograd registration.
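A rough sketch of what the decorator usage might look like with this change (the namespace, op name, and schema below are made up for illustration):
```python
import torch

lib = torch.library.Library("mylib", "DEF")
lib.define("my_sin(Tensor x) -> Tensor")

# With this PR, a plain device string ("cpu", "cuda", ...) can be passed
# instead of a DispatchKey string.
@torch.library.impl(lib, "my_sin", "cpu")
def my_sin_cpu(x):
    return torch.sin(x)
```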
Test Plan:
- new tests
| 1 |
74 | 111,657 |
[aotinductor] Update test utility to use AOTIModelRunner
|
module: inductor, module: dynamo, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111657
Summary: Use AOTIModelRunner provided by libtorch instead of the custom written RAIIModelContainer for testing. This change also makes running AOTInductor benchmarks on CPU possible.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 1 |
75 | 111,656 |
WIP Adding 512 to xblock size config
|
open source, module: inductor, ciflow/inductor
|
Trying to see perf improvements from adding 512 to the XBLOCK size configs:
inductor-A100-perf-nightly:
- https://github.com/pytorch/pytorch/actions/runs/6589467661
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
| 2 |
76 | 111,654 |
Static Linking C++, Op not available at runtime
| null |
### 🐛 Describe the bug
When linking with static libtorch and torchvision libraries, I am able to build, but at runtime, I get an error about an `Unknown builtin op: aten::mul`.
I have found references indicating that including <torchvision/vision.h> should cause the operators to be registered so they are linked in, but that doesn't seem to do the trick.
I've also found references indicating that forcing the linker to link the "whole archive" for libtorch_cpu.a should force it to include all the operators in the linked executable. I have done this, and it does overcome the problem - however, this feels a bit like a workaround, and we aren't able to use that as a long-term solution. When I link in the whole archive, the executable jumps from 87MB to 339MB.
I've also found some references suggesting calling `c10::RegisterOps`, or `torch::RegisterOps`, neither of which seem to exist. I found both `c10::RegisterOperators` and `torch::RegisterOperators`, but calling them doesn't seem to have any effect - admittedly, I might be using them incorrectly, all I did was add a call to `torch::RegisterOperators();` which didn't cause any build errors, but did not overcome the runtime "Unknown builtin op: aten::mul" error.
I tried to make a minimal example:
```c++
// According to: https://github.com/pytorch/vision/#c-api
// and https://github.com/pytorch/vision/issues/2915
// In order to get the torchvision operators registered with
// torch (eg. for the JIT), all you need to do is to ensure
// that you #include <torchvision/vision.h> in your project.
#include <vision.h>
#include <ATen/core/ivalue.h>
#include <fstream>
#include <torch/script.h>
#include <torch/torch.h>
#include <vector>
using namespace std;
int main(int argc, char* argv[])
{
torch::NoGradGuard noGradGuard;
// Load a trained model that has been converted to torchscript
ifstream modelFile("torchscriptModel.pt");
torch::jit::script::Module model;
model = torch::jit::load(modelFile);
modelFile.close();
// Set model to eval mode
model.eval();
// Generate a random inference image
float* imgPix = new float[100*100];
// Normally set image pixels here, left uninitialized for minimal example
// Convert image pixels to format required by forward
at::Tensor imgTensor = torch::from_blob(imgPix, {100, 100, 1});
at::Tensor imgTensorPermuted = imgTensor.permute({2, 0, 1});
imgTensorPermuted.unsqueeze_(0);
vector< at::Tensor > imageTensorVec;
imageTensorVec.push_back(imgTensorPermuted);
vector< torch::jit::IValue > inputToModel;
inputToModel.push_back(torch::cat(imageTensorVec));
at::Tensor forwardResult = model.forward(inputToModel).toTensor();
delete [] imgPix;
return 0;
}
```
To build this, I use the following command:
```
g++ minimalExample.cpp \
-D_GLIBCXX_USE_CXX11_ABI=1 \
-I /usr/src/vision/torchvision/csrc/ \
-I /usr/src/pytorch/build/lib.linux-x86_64-3.8/torch/include/torch/csrc/api/include/ \
-I /usr/src/pytorch/build/lib.linux-x86_64-3.8/torch/include/ \
-Wl,--start-group \
/usr/src/vision/build/libtorchvision.a \
/usr/src/pytorch/build/lib/libc10.a \
/usr/src/pytorch/build/lib/libtorch_cpu.a \
-Wl,--end-group \
/usr/src/pytorch/build/lib/libprotobuf.a \
/usr/src/pytorch/build/lib/libfbgemm.a \
/usr/src/pytorch/build/sleef/lib/libsleef.a \
/usr/src/pytorch/build/lib/libasmjit.a \
/usr/src/pytorch/build/lib/libonnx.a \
/usr/src/pytorch/build/lib/libonnx_proto.a \
/usr/src/pytorch/build/lib/libcpuinfo.a \
/usr/src/pytorch/build/lib/libclog.a \
/usr/src/pytorch/build/lib/libkineto.a \
/usr/src/pytorch/build/lib/libnnpack.a \
/usr/src/pytorch/build/lib/libpytorch_qnnpack.a \
/usr/src/pytorch/build/lib/libXNNPACK.a \
/usr/src/pytorch/build/lib/libpthreadpool.a \
-Wl,--start-group \
/opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_tbb_thread.a \
/opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_core.a \
/opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_blacs_openmpi_lp64.a \
/opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_intel_lp64.a \
/usr/src/onetbb_installed/lib64/libtbb.a \
-Wl,--end-group \
/usr/local/lib/libompitrace.a \
/usr/local/lib/libmpi.a \
/usr/local/lib/libopen-rte.a \
/usr/local/lib/libopen-pal.a \
/usr/local/lib/libz.a \
/usr/lib64/libc_nonshared.a \
-lrt \
-ldl \
-fopenmp \
-pthread \
-o minimalExample.exe
```
As I said, this will build successfully, but it does give a warning when building:
```
/usr/src/vision/torchvision/csrc/vision.h:10:40: warning: ‘_register_ops’ initialized and declared ‘extern’
extern "C" VISION_INLINE_VARIABLE auto _register_ops = &cuda_version;
^~~~~~~~~~~~~
```
When I run the executable, though, I get the following error:
```
$ ./minimalExample.exe
terminate called after throwing an instance of 'torch::jit::ErrorReport'
what():
Unknown builtin op: aten::mul.
Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript.
:
File "<string>", line 3
def mul(a : float, b : Tensor) -> Tensor:
return b * a
~~~~~ <--- HERE
def add(a : float, b : Tensor) -> Tensor:
return b + a
'mul' is being compiled since it was called from 'full_out_0_4'
File "<string>", line 3
def full_out_0_4(size:List[int], fill_value:number, *, out:Tensor) -> Tensor:
return torch.full(size, fill_value, out=out)
~~~ <--- HERE
Abort (core dumped)
```
The minimal example runs as expected, without error, if I link the `libtorch_cpu.a` whole archive, by changing the corresponding line in the build command to:
```
-Wl,--start-group \
/usr/src/vision/build/libtorchvision.a \
/usr/src/pytorch/build/lib/libc10.a \
-Wl,--whole-archive /usr/src/pytorch/build/lib/libtorch_cpu.a -Wl,--no-whole-archive \
-Wl,--end-group \
```
but as I said, the size of the executable jumps way higher, and seems like overkill.
I wasn't sure if this should be a forum post or an issue report, but given that I thought the include of <vision.h> was supposed to manage this, it felt more like an issue report to me.
### Versions
I'm not sure this is especially valuable in this situation. The example is running on an old OS with CPU-only support. The conversion to torchscript was done on a more modern machine with python and pytorch installed, but the machine I am running on is a severely stripped-down machine without python at all.
If I run the minimalExample.exe on the modern machine, it performs the same way though (i.e. errors at runtime without the whole-archive stuff, but runs successfully with the whole-archive stuff). So, here's the env for that machine in case it's helpful:
```
Collecting environment information...
PyTorch version: 1.13.0a0+git7c98e70
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1024-fips-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
GPU 2: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 535.98
cuDNN version: Probably one of the following:
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6354 CPU @ 3.00GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 804.039
CPU max MHz: 3600.0000
CPU min MHz: 800.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 1.7 MiB
L1i cache: 1.1 MiB
L2 cache: 45 MiB
L3 cache: 78 MiB
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.17.4
[pip3] torch==1.13.0a0+git7c98e70
[pip3] torchvision==0.14.0a0+5ce4506
[conda] Could not collect
```
| 0 |
77 | 111,653 |
s390x vectorization: implement atanh for complex vectorized data
|
module: cpu, open source
|
s390x vectorization: implement atanh for complex vectorized data
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
78 | 111,651 |
DISABLED test_meta_outplace_fft_ifft_cpu_int64 (__main__.TestMetaCPU)
|
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
|
Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_int64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17893852781).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_meta_outplace_fft_ifft_cpu_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_meta.py`
cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
| 1 |
79 | 111,650 |
FSDP CPU Offload + fp16 + sharded grad scaler crash / hang
|
oncall: distributed, triaged, module: fsdp
|
### 🐛 Describe the bug
I get the following when running the above combination:
```
ERROR:aiplatform.error_reporting.error_reporting:Exception Found: Could not run 'aten::_amp_foreach_non_finite_check_and_unscale_' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_amp_foreach_non_finite_check_and_unscale_' is only available for these backends: [CUDA, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
In my case, the error seems to be reported but the job doesn't crash and just hangs, which is interesting and might be related to other collectives going on?
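A minimal sketch of the combination being described (assumes a multi-GPU NCCL job launched via torchrun; the tiny model and optimizer are placeholders):
```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import CPUOffload, FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler

dist.init_process_group("nccl")
model = FSDP(
    torch.nn.Linear(8, 8).cuda(),
    cpu_offload=CPUOffload(offload_params=True),
    mixed_precision=MixedPrecision(param_dtype=torch.float16),
)
optim = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = ShardedGradScaler()

loss = model(torch.randn(4, 8).cuda()).sum()
scaler.scale(loss).backward()
scaler.step(optim)   # unscale hits _amp_foreach_non_finite_check_and_unscale_ on CPU grads
scaler.update()
```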
### Versions
main
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 1 |
80 | 111,649 |
[dynamo] higher-order ops do not preserve `FakeTensor` for in-place ops
|
triaged, module: fakeTensor, module: functorch, module: dynamo
|
### 🐛 Describe the bug
```python
def fn(z):
x = z.clone()
y = torch.vmap(torch.Tensor.acos_)(x)
# y's fake tensor is not x's fake tensor in terms of pyobjects
return y is x
fn_opt = torch.compile(backend="eager", fullgraph=True, dynamic=True)(fn)
z = torch.ones(4, 1)
self.assertEqual(fn(z), fn_opt(z))
```
### Solution
Not sure if this is a bug, but ideally we should not expect x's fake tensor to be different from y's, since keeping them identical could serve as a way to preserve object identity across recursive calls to FX.
One possible way to do this might be to reuse the FakeTensor from the inputs when doing fx tracing for higher order ops.
However, if it is unavoidable (e.g. as a result of fx tracing requirements), another solution is simply to propagate the storage of the original fake tensor to the higher order op fake tensor.
### Versions
main
cc @eellison @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 1 |
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
- Curated by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
Dataset Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
[More Information Needed]
Who are the source data producers?
[More Information Needed]
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]
- Downloads last month
- 38
Size of downloaded dataset files:
18.8 MB
Size of the auto-converted Parquet files:
7.29 MB
Number of rows:
6,000