| Column | Type | Min | Max |
| --- | --- | --- | --- |
| Serial Number | int64 | 1 | 6k |
| Issue Number | int64 | 75.6k | 112k |
| Title | string (length) | 3 | 357 |
| Labels | string (length) | 3 | 241 |
| Body | string (length) | 9 | 74.5k |
| Comments | int64 | 0 | 867 |
1
111,756
UserWarning: Loky-backed parallel loops cannot be called in a multiprocessing with num_workers=1 and two dataloaders
null
### 🐛 Describe the bug Setting num_workers=1 speeds dataloader a lot, but it doesn't seem to work when I have more than 1 dataloader. When I only have one, no warning appears and the enumeration only takes 0.3s. However, when I have 2 dataloader (train and val), the warning starts to appear on every iteration and it now takes 100s to enumerate the data loader, like when using spawn or without worker. It also happens when I just recreate that 1 dataloader. userwarning: loky-backed parallel loops cannot be called in a multiprocessing with num_workers=1 Environment: Google Colab Python 3.10.12 torch==1.13.1 I also tried upgrading to torch 2.1.0 but it's the same. ### Versions --2023-10-22 05:26:33-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 21737 (21K) [text/plain] Saving to: ‘collect_env.py’ collect_env.py 100%[===================>] 21.23K --.-KB/s in 0s 2023-10-22 05:26:33 (114 MB/s) - ‘collect_env.py’ saved [21737/21737] Collecting environment information... PyTorch version: 1.13.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.27.7 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.120+-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 525.105.17 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.00GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 Stepping: 3 BogoMIPS: 4000.34 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB (1 instance) L1i cache: 32 KiB (1 instance) L2 cache: 1 MiB (1 instance) L3 cache: 38.5 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0,1 Vulnerability Itlb multihit: Not 
affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Versions of relevant libraries: [pip3] numpy==1.26.1 [pip3] torch==1.13.1 [pip3] torchaudio==2.1.0+cu118 [pip3] torchdata==0.7.0 [pip3] torchinfo==1.8.0 [pip3] torchmetrics==1.2.0 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.16.0 [pip3] torchvision==0.14.1 [pip3] triton==2.1.0 [conda] Could not collect
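One way this warning can appear is when the dataset itself runs a joblib/loky-backed operation inside a DataLoader worker process; whether that matches the dataset used above is an assumption, since its code is not shown. A minimal sketch:

```python
import torch
from torch.utils.data import Dataset, DataLoader
from joblib import Parallel, delayed  # assumes joblib is what emits the warning


class LokyDataset(Dataset):
    def __len__(self):
        return 32

    def __getitem__(self, i):
        # Loky-backed parallelism inside a (daemonic) DataLoader worker falls
        # back to sequential execution and emits the UserWarning above.
        parts = Parallel(n_jobs=2)(delayed(float)(x) for x in range(4))
        return torch.tensor(parts)


train_loader = DataLoader(LokyDataset(), batch_size=8, num_workers=1)
val_loader = DataLoader(LokyDataset(), batch_size=8, num_workers=1)

for batch in train_loader:
    pass
for batch in val_loader:
    pass
```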
0
2
111,755
[dtensor] add device_mesh.device_type to make RNGStateTracker support CUDA-like devices
open source
[dtensor] Add device_mesh.device_type to make RNGStateTracker support CUDA-like devices
1
3
111,754
[dynamo] Better determinism of `ConfigModule` by walking using pytree
null
### 🚀 The feature, motivation and pitch https://github.com/pytorch/pytorch/pull/111318 Currently, validation only occurs at the root. However, we should walk the pytree of each object to ensure types are respected. In particular, we can convert unfriendly types: function objects into `f"{__module__}{__name__}"` strings, and sets into sorted, walked collections to make them deterministic. There isn't a canonical order for sets, however; sets rely on `__hash__`, but the default hash is based on `id`, which is non-deterministic, so sorting by hash is non-deterministic. Further, not all objects implement the `>` operator. In other words, there is no feasible way to make a fully deterministic representation of sets.
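A rough sketch of the kind of normalization being proposed; the use of `torch.utils._pytree` and the exact string format are assumptions, and the set handling is only best-effort, in line with the caveat above:

```python
import types
import torch.utils._pytree as pytree


def normalize(value):
    # Function objects become stable "<module>.<name>" strings.
    if isinstance(value, types.FunctionType):
        return f"{value.__module__}.{value.__name__}"
    # Sets get a best-effort deterministic form: a tuple sorted by repr.
    # As noted above, there is no truly canonical order for arbitrary sets.
    if isinstance(value, (set, frozenset)):
        return tuple(sorted(repr(normalize(v)) for v in value))
    return value


def normalize_config(config_dict):
    # Walk every value as a pytree instead of validating only at the root.
    return {k: pytree.tree_map(normalize, v) for k, v in config_dict.items()}
```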
0
4
111,753
[dynamo] AutogradFunctionMethodHigherOrderVariable check for new guards is broken
module: dynamo
AutogradFunctionMethodHigherOrderVariable has a check for new guards being added in the following places: https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1091 https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1115 https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1122-L1123 As written, this check does nothing because we are storing a *reference* in pre_guards that just gets mutated. So `pre_guards is post_guards` evaluates `True`. The following will make the code do as intended: ```patch diff --git a/torch/_dynamo/variables/higher_order_ops.py b/torch/_dynamo/variables/higher_order_ops.py index 2292c71f048..5d7d87f810c 100644 --- a/torch/_dynamo/variables/higher_order_ops.py +++ b/torch/_dynamo/variables/higher_order_ops.py @@ -1088,7 +1088,7 @@ class AutogradFunctionMethodHigherOrderVariable(TorchHigherOrderOperatorVariable else: fn = TorchVariable(self.value) checkpoint = tx.copy_graphstate() - pre_guards = tx.output.guards + pre_guards = tx.output.guards.clone() graph_checkpoint = tx.output.graph # TODO: Support kwargs diff --git a/torch/_guards.py b/torch/_guards.py index e532a32cdd2..4f9e874476e 100644 --- a/torch/_guards.py +++ b/torch/_guards.py @@ -483,6 +483,8 @@ class GuardsSet: for o in others: for g in o: self.add(g, skip=1) + def clone(self): + return GuardsSet(set(self.inner)) class GuardsContext(Checkpointable[GuardsCheckpointState]): ``` However, it causes a test to fail: ``` ____________________________________________ ReproTests.test_hf_xsoftmax_training ____________________________________________ Traceback (most recent call last): File "/home/jansel/conda/envs/pytorch/lib/python3.10/unittest/case.py", line 59, in testPartExecutor yield File "/home/jansel/conda/envs/pytorch/lib/python3.10/unittest/case.py", line 591, in run self._callTestMethod(testMethod) File "/home/jansel/conda/envs/pytorch/lib/python3.10/unittest/case.py", line 549, in _callTestMethod method() File "/home/jansel/pytorch/torch/testing/_internal/common_utils.py", line 2453, in wrapper method(*args, **kwargs) File "/home/jansel/pytorch/test/dynamo/test_repros.py", line 3163, in test_hf_xsoftmax_training self.assertEqual(dict(counters["frames"]), {"total": 1, "ok": 1}) File "/home/jansel/pytorch/torch/testing/_internal/common_utils.py", line 3356, in assertEqual raise error_metas.pop()[0].to_error( AssertionError: Scalars are not equal! Expected 1 but got 2. Absolute difference: 1 Relative difference: 1.0 The failure occurred for item ['ok'] To execute this test, run the following from the base repo dir: python test/dynamo/test_repros.py -k test_hf_xsoftmax_training ``` I think this piece of code needs to be revisited, as I am not sure if the graph break it adds is correct. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305
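A minimal, plain-Python illustration of the aliasing problem (not the actual dynamo types):

```python
# pre_guards and the live guards set are the same object, so the later
# "did new guards appear?" comparison can never fire.
guards = {"g1"}
pre_guards = guards                 # stores a reference, not a snapshot
guards.add("g2")                    # mutation during tracing
post_guards = guards
print(pre_guards is post_guards)    # True: the check is a no-op
print(pre_guards == post_guards)    # True: even equality cannot detect it

pre_snapshot = set(guards)          # what the proposed .clone() achieves
guards.add("g3")
print(pre_snapshot == guards)       # False: newly added guards are detected
```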
0
5
111,752
Is it a good time to switch to CXX11_ABI?
null
### 🚀 The feature, motivation and pitch Most CI jobs now use g++ >= 9, except the Android jobs, which still use g++-8. Given this situation, is it possible to always use CXX11_ABI and get rid of the many checks in the build systems? ### Alternatives _No response_ ### Additional context _No response_
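For reference, a given build's ABI setting can be inspected at runtime; this only reports what an existing wheel was built with and does not answer the CI question above:

```python
import torch

# True if the build was compiled with _GLIBCXX_USE_CXX11_ABI=1.
print(torch.compiled_with_cxx11_abi())
print(torch._C._GLIBCXX_USE_CXX11_ABI)
```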
0
6
111,749
[dynamo] Expand _nonvar_fields names
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111749 This should be a small compile time optimization, since we won't need to walk these fields in apply(). cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
7
111,748
Allow to specify specific files for debug info
topic: not user facing
Building with `USE_CUSTOM_DEBINFO=torch/csrc/Module.cpp python setup.py develop`, for example, will provide debug info only for this file. This makes it possible to enable debug symbols very quickly from a non-debug build by doing a clean followed by a develop build (as long as you have ccache), and it avoids very large binaries that take a very long time to load in gdb.
1
8
111,747
New swap function
module: dynamo
This PR proposes a new approach to the problem that nn and optim are linked only by Python object identity. The idea is to have a function that can swap the contents of two Tensors t1 and t2 while preserving all the old references. This would allow us to swap `model.weight` with a new Tensor (which can be any subclass of Tensor and any TensorImpl; xla, sparse and nested TensorImpls would work). The use within nn will be done in a follow-up. This is done by swapping the whole content of the PyObject and then putting back the fields associated with external references (refcount, GC tracking and weakrefs). Note that we have to properly handle all the cases where there is memory used before the public PyObject* pointer and where the PyObject is bigger because dict/weakref are inlined (older CPython versions) or because of slots. The main limitation of this approach is that the number of slots needs to match for the objects being swapped, which limits the use of slots in subclasses. This is a draft for now to see what @colesbury thinks about the approach. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
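A pure-Python analogue of the intended semantics (not the actual C-level implementation, which also has to repair refcounts, weakrefs and GC tracking):

```python
class Box:
    def __init__(self, value):
        self.value = value


def swap_contents(a, b):
    # Swap the *contents* of the two objects rather than rebinding names, so
    # every pre-existing reference to `a` now observes `b`'s old state.
    a.__dict__, b.__dict__ = b.__dict__, a.__dict__


a, b = Box("weight_v1"), Box("weight_v2")
alias = a                            # e.g. a reference held by an optimizer
swap_contents(a, b)
assert alias.value == "weight_v2"    # old reference sees the swapped content
```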
3
9
111,746
[dynamo] add repro for functorch/fx interop issue (`allow_in_graph`)
open source, topic: not user facing, module: dynamo
Fixes https://github.com/pytorch/pytorch/issues/109025 by adding repro cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
10
111,745
[dynamo]: `nn.Module` recursively set `training` mode via `train` and `eval`
open source, topic: not user facing, module: dynamo, ciflow/inductor
Fixes https://github.com/pytorch/pytorch/issues/109885 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
11
111,744
ninja: build stopped: subcommand failed
oncall: pt2
### 🐛 Describe the bug When I try to build pytorch from source in linux, I face a confusing problem when I run the 'python setup.py install'. Here are the error logs when I run the 'python setup.py install' the second time. ### Error logs (pytorch_install) [root@cn0 pytorch-1.7]# python setup.py installBuilding wheel torch-1.7.0a0 -- Building version 1.7.0a0 cmake --build . --target install --config Release -- -j 96 [3/2123] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o FAILED: caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../c10/.. -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -mavx2 -mfma -mavx -mf16c -std=gnu++14 -MD -MT caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o -MF 
caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o.d -o caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o -c ../caffe2/perfkernels/common_avx2.cc ../caffe2/perfkernels/common_avx2.cc:17:2: error: #error ( "You found a build system error: __AVX2__ is defined (via e.g. -mavx2) " "but CAFFE2_PERF_WITH_AVX2 is not defined."); #error( \ ^~~~~ [4/2123] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o FAILED: caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../c10/.. -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -mavx512f -mavx512dq -mavx512vl -mavx2 -mfma -mavx -mf16c -std=gnu++14 -MD -MT caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o -MF 
caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o.d -o caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o -c ../caffe2/perfkernels/common_avx512.cc ../caffe2/perfkernels/common_avx512.cc:18:2: error: #error ( "You found a build system error: __AVX512F__, __AVX512DQ__, __AVX512VL__ " "is defined (via e.g. -mavx512f, -mavx512dq, and -mavx512vl) " "but CAFFE2_PERF_WITH_AVX512 is not defined."); #error( \ ^~~~~ [5/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o FAILED: caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem 
../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o -MF caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o.d -o caffe2/CMakeFiles/torch_cpu.dir/onnx/backend.cc.o -c ../caffe2/onnx/backend.cc ../caffe2/onnx/backend.cc:11:10: fatal error: onnx/optimizer/optimize.h: No such file or directory #include "onnx/optimizer/optimize.h" ^~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. [16/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. 
-I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Convolution.cpp.o -c ../aten/src/ATen/native/xnnpack/Convolution.cpp ../aten/src/ATen/native/xnnpack/Convolution.cpp: In function 
‘at::native::xnnpack::ContextConv2D at::native::xnnpack::internal::convolution2d::create(const at::Tensor&, const c10::optional<at::Tensor>&, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, c10::IntArrayRef, int64_t, bool, float, float)’: ../aten/src/ATen/native/xnnpack/Convolution.cpp:236:22: error: cannot convert ‘xnn_operator**’ to ‘xnn_caches_t {aka const xnn_caches*}’ for argument ‘21’ to ‘xnn_status xnn_create_deconvolution2d_nhwc_f32(uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, size_t, size_t, size_t, size_t, const float*, const float*, float, float, uint32_t, xnn_caches_t, xnn_operator**)’ &convolution_op); // operator ^ ../aten/src/ATen/native/xnnpack/Convolution.cpp:264:22: error: cannot convert ‘xnn_operator**’ to ‘xnn_caches_t {aka const xnn_caches*}’ for argument ‘21’ to ‘xnn_status xnn_create_convolution2d_nhwc_f32(uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, uint32_t, size_t, size_t, size_t, size_t, const float*, const float*, float, float, uint32_t, xnn_caches_t, xnn_operator**)’ &convolution_op); // operator ^ [24/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. 
-I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/xnnpack/Linear.cpp.o -c ../aten/src/ATen/native/xnnpack/Linear.cpp ../aten/src/ATen/native/xnnpack/Linear.cpp: In function ‘at::native::xnnpack::ContextLinear 
at::native::xnnpack::internal::linear::create(const at::Tensor&, const c10::optional<at::Tensor>&, float, float)’: ../aten/src/ATen/native/xnnpack/Linear.cpp:100:17: error: cannot convert ‘xnn_operator**’ to ‘xnn_caches_t {aka const xnn_caches*}’ for argument ‘10’ to ‘xnn_status xnn_create_fully_connected_nc_f32(size_t, size_t, size_t, size_t, const float*, const float*, float, float, uint32_t, xnn_caches_t, xnn_operator**)’ &linear_op); // operator ^ [53/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem 
/home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp.o -c ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp: In function ‘at::Tensor at::native::{anonymous}::qembeddingbag_byte_unpack(const at::Tensor&)’: ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:77:11: error: ‘Fused8BitRowwiseQuantizedSBFloatToFloat’ is not a member of ‘fbgemm’ fbgemm::Fused8BitRowwiseQuantizedSBFloatToFloat( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:77:11: note: suggested alternative: ‘Fused8BitRowwiseQuantizedSBFloatToFloatOrHalf’ fbgemm::Fused8BitRowwiseQuantizedSBFloatToFloat( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Fused8BitRowwiseQuantizedSBFloatToFloatOrHalf ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp: In function ‘at::Tensor at::native::{anonymous}::_qembeddingbag_nbit_unpack_helper(const at::Tensor&, int)’: ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:116:11: error: ‘FusedNBitRowwiseQuantizedSBHalfToFloat’ is not a member of ‘fbgemm’ fbgemm::FusedNBitRowwiseQuantizedSBHalfToFloat( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../aten/src/ATen/native/quantized/cpu/qembeddingbag_unpack.cpp:116:11: note: suggested alternative: ‘FusedNBitRowwiseQuantizedSBHalfToFloatOrHalf’ fbgemm::FusedNBitRowwiseQuantizedSBHalfToFloat( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FusedNBitRowwiseQuantizedSBHalfToFloatOrHalf [55/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o /opt/rh/devtoolset-7/root/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 
-DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/mpi_share/env/bz/anaconda3/envs/pytorch_install/include/python3.8 -isystem ../cmake/../third_party/pybind11/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include -isystem /home/mpi_share/env/mlnx_sharp/hpcx-v2.7.0-gcc-MLNX_OFED_LINUX-5.1-0.6.6.0-redhat7.8-x86_64/ompi/include -isystem ../cmake/../third_party/cub -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format 
-Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp.o -c ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp: In function ‘at::Tensor at::native::{anonymous}::qembeddingbag_byte_prepack(const at::Tensor&)’: ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:116:11: error: ‘FloatToFused8BitRowwiseQuantizedSBFloat’ is not a member of ‘fbgemm’ fbgemm::FloatToFused8BitRowwiseQuantizedSBFloat( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:116:11: note: suggested alternative: ‘FloatOrHalfToFused8BitRowwiseQuantizedSBFloat’ fbgemm::FloatToFused8BitRowwiseQuantizedSBFloat( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FloatOrHalfToFused8BitRowwiseQuantizedSBFloat ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp: In function ‘at::Tensor at::native::{anonymous}::_qembeddingbag_nbit_prepack_helper(const at::Tensor&, int, bool)’: ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:187:13: error: ‘FloatToFusedNBitRowwiseQuantizedSBHalf’ is not a member of ‘fbgemm’ fbgemm::FloatToFusedNBitRowwiseQuantizedSBHalf( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp:187:13: note: suggested alternative: ‘FloatOrHalfToFusedNBitRowwiseQuantizedSBHalf’ fbgemm::FloatToFusedNBitRowwiseQuantizedSBHalf( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FloatOrHalfToFusedNBitRowwiseQuantizedSBHalf [98/2123] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Functions.cpp.o ninja: build stopped: subcommand failed. Traceback (most recent call last): File "setup.py", line 717, in <module> build_deps() File "setup.py", line 308, in build_deps build_caffe2(version=version, File "/home/mpi_share/env/bz/pytorch-1.7/tools/build_pytorch_libs.py", line 62, in build_caffe2 cmake.build(my_env) File "/home/mpi_share/env/bz/pytorch-1.7/tools/setup_helpers/cmake.py", line 345, in build self.run(build_args, my_env) File "/home/mpi_share/env/bz/pytorch-1.7/tools/setup_helpers/cmake.py", line 141, in run check_call(command, cwd=self.build_dir, env=env) File "/home/mpi_share/env/bz/anaconda3/envs/pytorch_install/lib/python3.8/subprocess.py", line 364, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '96']' returned non-zero exit status 1. ### Minified repro _No response_ ### Versions sry, there is also a bug when I use the given commands. pytorch 1.7.0 cuda 10.2 cmake 3.18.4 ninja 1.10.2 python 3.8.2 There are some difficulties to update my cuda version, so I just build the v1.7.0 pytorch to match the cuda version. If there is something you need to know, please tell me. 
Thanks ^^ cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
0
12
111,743
WIP Implement channels_last_3d convolution
module: cpu, open source
Maybe participates to #59168 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
3
13
111,742
Add CSR tensor with non-contiguous values support to CuSparseSpMatCsrDescriptor
module: sparse, open source, release notes: sparse, topic: new features
Fixes https://github.com/pytorch/pytorch/issues/111574 Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111742 cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
3
14
111,741
[dynamo] `{*}Tensor.__init__` from list of ndarray as `torch.stack(List[FakeTensor])`
open source, module: dynamo, ciflow/inductor
Follow up to https://github.com/pytorch/pytorch/pull/111665 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @lezcano
2
15
111,740
GPU computation is not equivalent
null
### 🐛 Describe the bug GPU computation is not equivalent, but it is equivalent on CPU. Why? And how can I avoid this? ```python import torch import torch.nn as nn hidden_states = torch.randn([4, 2048, 512]) v_proj = nn.Linear(512, 128, bias=False) value_states = v_proj(hidden_states) h1, h2 = torch.chunk(hidden_states, 2, dim=0) v1 = v_proj(h1) assert h1.equal(hidden_states[:2]) print(v1[0,0,0].item()) print(value_states[0,0,0].item()) assert v1.equal(value_states[:2]) hidden_states = torch.randn([4, 2048, 512]).cuda() v_proj = nn.Linear(512, 128, bias=False).cuda() value_states = v_proj(hidden_states) h1, h2 = torch.chunk(hidden_states, 2, dim=0) v1 = v_proj(h1) assert h1.equal(hidden_states[:2]) print(v1[0,0,0].item()) print(value_states[0,0,0].item()) assert v1.equal(value_states[:2]) ``` running results ```python 0.429298460483551 0.429298460483551 0.3757566213607788 0.37575680017471313 --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) ``` ### Versions ```python PyTorch version: 2.1.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Pro GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22635-SP0 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU Nvidia driver version: 531.14 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=2300 DeviceID=CPU0 Family=207 L2CacheSize=14336 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=2300 Name=12th Gen Intel(R) Core(TM) i9-12900HX ProcessorType=3 Revision= Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.3 [pip3] torch==2.1.0+cu121 [pip3] torch-tb-profiler==0.4.3 [pip3] torchaudio==2.1.0+cu121 [pip3] torchvision==0.16.0+cu121 [pip3] torchviz==0.0.2 [conda] Could not collect ```
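The printed values differ only in the last few digits, which is the size of discrepancy float32 matmuls usually show when different batch shapes dispatch to different kernels or reduction orders. A tolerance-based comparison (a sketch reusing the shapes from the report) avoids the assertion failure:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_states = torch.randn([4, 2048, 512], device="cuda")
v_proj = nn.Linear(512, 128, bias=False).cuda()

value_states = v_proj(hidden_states)
v1 = v_proj(hidden_states[:2])

# Exact equality is not guaranteed on GPU; compare within a tolerance instead.
print((v1 - value_states[:2]).abs().max())
print(torch.allclose(v1, value_states[:2], rtol=1e-5, atol=1e-6))
```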
0
16
111,739
grad is inf/nan when using torch.amp
null
### 🐛 Describe the bug Below is a very simple for using torch.amp, but the gradients are inf/nan. ```python import torch from torch.cuda.amp import GradScaler from torch import optim scaler = GradScaler() a = torch.randn(2, 2, requires_grad=True, device="cuda") b = torch.randn(2, 2, requires_grad=True, device="cuda") optimizer = optim.Adam([a, b], lr=0.1) with torch.autocast(device_type='cuda'): c = a @ b loss = c.sum() scaler.scale(loss).backward() scaler.unscale_(optimizer) print(a.grad) ``` running results: ```python tensor([[-inf, nan], [-inf, nan]], device='cuda:0') ``` ### Versions ```python PyTorch version: 2.1.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Pro GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22635-SP0 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU Nvidia driver version: 531.14 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=2300 DeviceID=CPU0 Family=207 L2CacheSize=14336 L2CacheSpeed= Manufacturer=GenuineIntel MaxClockSpeed=2300 Name=12th Gen Intel(R) Core(TM) i9-12900HX ProcessorType=3 Revision= Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.3 [pip3] torch==2.1.0+cu121 [pip3] torch-tb-profiler==0.4.3 [pip3] torchaudio==2.1.0+cu121 [pip3] torchvision==0.16.0+cu121 [pip3] torchviz==0.0.2 [conda] Could not collect ```
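For comparison, the usual GradScaler loop from the torch.cuda.amp documentation, where steps with inf/nan gradients are skipped by `scaler.step`; whether this changes the values printed above has not been verified here:

```python
import torch
from torch import optim
from torch.cuda.amp import GradScaler

scaler = GradScaler()
a = torch.randn(2, 2, requires_grad=True, device="cuda")
b = torch.randn(2, 2, requires_grad=True, device="cuda")
optimizer = optim.Adam([a, b], lr=0.1)

for _ in range(3):
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda"):
        loss = (a @ b).sum()
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)                   # only needed to inspect/clip grads
    torch.nn.utils.clip_grad_norm_([a, b], 1.0)
    scaler.step(optimizer)                       # skipped if inf/nan grads were found
    scaler.update()
```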
0
17
111,738
[dynamo] Implement `set.__contains__` for `Tensor` as object match of `FakeTensor`
open source, topic: not user facing, module: dynamo, ciflow/inductor
Fixes https://github.com/pytorch/pytorch/issues/111556 Dynamo implementation of `set.__contains__` previously used `__eq__` match. But this is wrong when `__eq__` match does not imply `__hash__` match, as is the case for `torch.Tensor`, leading to inconsistent results. See: https://github.com/pytorch/pytorch/issues/111542 Hence implement as Tensor object match i.e. proxy node `'example_value'` FakeTensor match. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
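A small eager-mode illustration of the inconsistency: `set` membership goes through `__hash__`, which is id-based for `torch.Tensor`, so equal-valued but distinct tensors are not members; this is the object-match behavior the PR makes dynamo follow:

```python
import torch

t1 = torch.tensor([1.0, 2.0])
t2 = torch.tensor([1.0, 2.0])   # equal values, different object

s = {t1}
print(t1 in s)                  # True: same object, same id-based hash
print(t2 in s)                  # False: different hash, __eq__ is never consulted
```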
2
18
111,737
Support calling __torch_function__ attribute access
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111737 * #111731 * #111730 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
19
111,736
Implementation of Lion Optimizer.
null
### 🚀 The feature, motivation and pitch The Lion optimizer is becoming a great alternative to AdamW and Adam. It is more efficient, as it does not use second-order moments and instead uses sign operations to update the weights. This saves memory and decreases training time. In some cases it is better than Adam and AdamW, as shown in the paper. The original paper is: https://arxiv.org/pdf/2302.06675.pdf The RFC PR for this is: https://github.com/pytorch/rfcs/pull/60 ### Alternatives _No response_ ### Additional context _No response_
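A minimal sketch of the update rule from the paper for a single parameter tensor; this is illustrative only and not the proposed torch.optim API:

```python
import torch


def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    with torch.no_grad():
        # Update direction: interpolate gradient and momentum, then keep only
        # its sign, so no second-moment state is needed.
        update = (beta1 * momentum + (1 - beta1) * grad).sign()
        param.mul_(1 - lr * weight_decay).add_(update, alpha=-lr)
        # The momentum buffer is refreshed with a separate factor beta2.
        momentum.mul_(beta2).add_(grad, alpha=1 - beta2)


p = torch.randn(10, requires_grad=True)
m = torch.zeros_like(p)
(p ** 2).sum().backward()
lion_step(p, p.grad, m)
```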
0
20
111,735
hack hack hack
ciflow/inductor, release notes: AO frontend
Fixes #ISSUE_NUMBER
1
21
111,734
Is the index_add_ function differentiable?
null
### 🚀 The feature, motivation and pitch ```python verts_normals = torch.zeros_like(cornea_vertex) vertices_faces = cornea_vertex[face_index] faces_normals = torch.cross( vertices_faces[:, 2] - vertices_faces[:, 1], vertices_faces[:, 0] - vertices_faces[:, 1], dim=-1, ) unit_faces_normals = safe_normalize(faces_normals) verts_normals.index_add_(0, face_index[:, 0], unit_faces_normals) verts_normals.index_add_(0, face_index[:, 1], unit_faces_normals) verts_normals.index_add_(0, face_index[:, 2], unit_faces_normals) ``` ### Alternatives _No response_ ### Additional context _No response_
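`index_add_` does participate in autograd; a quick sanity check with made-up shapes (since `cornea_vertex` and `safe_normalize` are not shown) shows gradients flowing back to the source tensor:

```python
import torch

# Sketch with made-up shapes: gradients flow through index_add_ to the source.
face_index = torch.tensor([[0, 1, 2], [2, 3, 0]])
unit_faces_normals = torch.randn(2, 3, dtype=torch.double, requires_grad=True)

verts_normals = torch.zeros(4, 3, dtype=torch.double)
verts_normals.index_add_(0, face_index[:, 0], unit_faces_normals)
verts_normals.index_add_(0, face_index[:, 1], unit_faces_normals)
verts_normals.index_add_(0, face_index[:, 2], unit_faces_normals)

verts_normals.sum().backward()
print(unit_faces_normals.grad)   # populated, i.e. differentiable w.r.t. the source
```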
0
22
111,733
Bug: torch.compile fails to compile torch.func.vmap with reduction functions and raw python numbers
null
### 🐛 Describe the bug `torch.compile` fails to compile vmap transformation with reduction functions and native python numbers. This bug was only found when using reduction functions, and there are several workarounds as shown in the following examples: ```python import torch torch._dynamo.reset() torch._dynamo.config.capture_func_transforms=True def foo(x): return torch.vmap(lambda x: torch.sum(x) + 1e-2)(x) # Error # return torch.vmap(lambda x: torch.mean(x) + 1e-2)(x) # Error # return torch.vmap(lambda x: torch.std(x) + 1e-2)(x) # Error # return torch.vmap(lambda x: torch.sum(x) + torch.tensor(1e-2))(x) # OK # return torch.vmap(lambda x: torch.sum(x, 0, keepdim=True) + 1e-2)(x) # OK # return torch.vmap(lambda x: torch.square(x) + 1e-2)(x) # OK # return torch.vmap(lambda x: x + 1e-2)(x) # OK torch.compile(foo, fullgraph=True)(torch.randn((3, 3), device='cuda:0')) # foo(torch.randn((3, 3), device='cuda:0')) # OK ``` Error messages: ``` BackendCompilerFailed: backend='inductor' raised: AssertionError: While executing %call : [num_users=1] = call_method[target=__call__](args = (%vmap_proxy, %l_x_), kwargs = {}) Original traceback: File "/tmp/ipykernel_672249/1649664715.py", line 7, in foo return torch.vmap(lambda x: torch.sum(x) + 1e-2)(x) # Error File "/data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/apis.py", line 188, in wrapped return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs) You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True ``` Traceback: ``` --------------------------------------------------------------------------- BackendCompilerFailed Traceback (most recent call last) Cell In[66], line 16 7 return torch.vmap(lambda x: torch.sum(x) + 1e-2)(x) # Error 8 # return torch.vmap(lambda x: torch.mean(x) + 1e-2)(x) # Error 9 # return torch.vmap(lambda x: torch.std(x) + 1e-2)(x) # Error 10 # return torch.vmap(lambda x: torch.sum(x) + torch.tensor(1e-2))(x) # OK 11 # return torch.vmap(lambda x: torch.sum(x, 0, keepdim=True) + 1e-2)(x) # OK 12 # return torch.vmap(lambda x: torch.square(x) + 1e-2)(x) # OK 13 # return torch.vmap(lambda x: x + 1e-2)(x) # OK ---> 16 torch.compile(foo, fullgraph=True)(torch.randn((3, 3), device='cuda:0')) 17 # foo(torch.randn((3, 3), device='cuda:0')) # OK File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs) 326 dynamic_ctx.__enter__() 327 try: --> 328 return fn(*args, **kwargs) 329 finally: 330 set_eval_frame(prior) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:490, in catch_errors_wrapper.<locals>.catch_errors(frame, cache_entry, frame_state) 487 return hijacked_callback(frame, cache_entry, hooks, frame_state) 489 with compile_lock, _disable_current_modes(): --> 490 return callback(frame, cache_entry, hooks, frame_state) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:133, in wrap_convert_context.<locals>._fn(*args, **kwargs) 131 cleanup = setup_compile_debug() 132 try: --> 133 return fn(*args, **kwargs) 134 finally: 135 cleanup.close() File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:389, in convert_frame_assert.<locals>._convert_frame_assert(frame, cache_entry, hooks, frame_state) 376 compile_id = CompileId(frame_id, frame_compile_id) 378 
signpost_event( 379 "dynamo", 380 "_convert_frame_assert._compile", (...) 386 }, 387 ) --> 389 return _compile( 390 frame.f_code, 391 frame.f_globals, 392 frame.f_locals, 393 frame.f_builtins, 394 compiler_fn, 395 one_graph, 396 export, 397 export_constraints, 398 hooks, 399 cache_size, 400 frame, 401 frame_state=frame_state, 402 compile_id=compile_id, 403 ) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:569, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_size, frame, frame_state, compile_id) 567 with compile_context(CompileContext(compile_id)): 568 try: --> 569 guarded_code = compile_inner(code, one_graph, hooks, transform) 570 return guarded_code 571 except ( 572 Unsupported, 573 TorchRuntimeError, (...) 578 ValidationException, 579 ) as e: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/utils.py:189, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:491, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform) 489 for attempt in itertools.count(): 490 try: --> 491 out_code = transform_code_object(code, transform) 492 orig_code_map[out_code] = code 493 break File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py:1028, in transform_code_object(code, transformations, safe) 1025 instructions = cleaned_instructions(code, safe) 1026 propagate_line_nums(instructions) -> 1028 transformations(instructions, code_options) 1029 return clean_and_assemble_instructions(instructions, keys, code_options)[1] File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:458, in _compile.<locals> .transform(instructions, code_options) 456 try: 457 with tracing(tracer.output.tracing_context): --> 458 tracer.run() 459 except (exc.RestartAnalysis, exc.SkipFrame): 460 raise File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2074, in InstructionTranslator.run(self) 2073 def run(self): -> 2074 super().run() File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:724, in InstructionTranslatorBase.run(self) 719 try: 720 self.output.push_tx(self) 721 while ( 722 self.instruction_pointer is not None 723 and not self.output.should_exit --> 724 and self.step() 725 ): 726 pass 727 except BackendCompilerFailed: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:688, in InstructionTranslatorBase.step(self) 684 unimplemented(f"missing: {inst.opname}") 685 TracingContext.set_current_loc( 686 self.f_code.co_filename, self.lineno, self.f_code.co_name 687 ) --> 688 getattr(self, inst.opname)(inst) 690 return inst.opname != "RETURN_VALUE" 691 except Unsupported: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2162, in InstructionTranslator.RETURN_VALUE(self, inst) 2157 _step_logger()( 2158 logging.INFO, 2159 f"torchdynamo done tracing {self.f_code.co_name} (RETURN_VALUE)", 2160 ) 2161 log.debug("RETURN_VALUE triggered 
compile") -> 2162 self.output.compile_subgraph( 2163 self, 2164 reason=GraphCompileReason( 2165 "return_value", [self.frame_summary()], graph_break=False 2166 ), 2167 ) 2168 self.output.add_output_instructions([create_instruction("RETURN_VALUE")]) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:833, in OutputGraph.compile_subgraph(self, tx, partial_convert, reason) 830 append_prefix_insts() 831 # optimization to generate better code in a common case 832 self.add_output_instructions( --> 833 self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root) 834 + [create_instruction("UNPACK_SEQUENCE", arg=len(stack_values))] 835 ) 836 else: 837 graph_output_var = self.new_var("graph_out") File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/contextlib.py:81, in ContextDecorator.__call__.<locals>.inner(*args, **kwds) 78 @wraps(func) 79 def inner(*args, **kwds): 80 with self._recreate_cm(): ---> 81 return func(*args, **kwds) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:957, in OutputGraph.compile_and_call_fx_graph(self, tx, rv, root) 952 graph_tabular_log.debug("%s", lazy_format_graph_tabular(name, gm)) 953 graph_sizes_log.debug( 954 "%s", LazyString(lambda: self.get_graph_sizes_log_str(name)) 955 ) --> 957 compiled_fn = self.call_user_compiler(gm) 958 compiled_fn = disable(compiled_fn) 960 counters["stats"]["unique_graphs"] += 1 File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/utils.py:189, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1024, in OutputGraph.call_user_compiler(self, gm) 1022 unimplemented_with_warning(e, self.root_tx.f_code, msg) 1023 except Exception as e: -> 1024 raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 1025 e.__traceback__ 1026 ) from None 1028 signpost_event( 1029 "dynamo", 1030 "OutputGraph.call_user_compiler", (...) 
1036 }, 1037 ) 1039 return compiled_fn File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/output_graph.py:1009, in OutputGraph.call_user_compiler(self, gm) 1007 if config.verify_correctness: 1008 compiler_fn = WrapperBackend(compiler_fn) -> 1009 compiled_fn = compiler_fn(gm, self.example_inputs()) 1010 _step_logger()(logging.INFO, f"done compiler function {name}") 1011 assert callable(compiled_fn), "compiler_fn did not return callable" File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py:117, in wrap_backend_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs) 115 raise 116 else: --> 117 compiled_gm = compiler_fn(gm, example_inputs) 119 return compiled_gm File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py:117, in wrap_backend_debug.<locals>.debug_wrapper(gm, example_inputs, **kwargs) 115 raise 116 else: --> 117 compiled_gm = compiler_fn(gm, example_inputs) 119 return compiled_gm File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/__init__.py:1568, in _TorchCompileInductorWrapper.__call__(self, model_, inputs_) 1565 def __call__(self, model_, inputs_): 1566 from torch._inductor.compile_fx import compile_fx -> 1568 return compile_fx(model_, inputs_, config_patches=self.config) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:1150, in compile_fx(model_, example_inputs_, inner_compile, config_patches, decompositions) 1143 tracing_context = ( 1144 torch._guards.TracingContext.get() or torch._guards.TracingContext(fake_mode) 1145 ) 1147 with V.set_fake_mode(fake_mode), torch._guards.tracing( # type: ignore[call-arg] 1148 tracing_context 1149 ), compiled_autograd.disable(): -> 1150 return aot_autograd( 1151 fw_compiler=fw_compiler, 1152 bw_compiler=bw_compiler, 1153 inference_compiler=inference_compiler, 1154 decompositions=decompositions, 1155 partition_fn=partition_fn, 1156 keep_inference_input_mutations=True, 1157 )(model_, example_inputs_) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/backends/common.py:55, in aot_autograd .<locals>.compiler_fn(gm, example_inputs) 52 try: 53 # NB: NOT cloned! 54 with enable_aot_logging(), patch_config: ---> 55 cg = aot_module_simplified(gm, example_inputs, **kwargs) 56 counters["aot_autograd"]["ok"] += 1 57 return disable(cg) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:3891, in aot_module_simplified(mod, args, fw_compiler, bw_compiler, partition_fn, decompositions, keep_inference_input_mutations, inference_compiler) 3875 aot_config = AOTConfig( 3876 fw_compiler=fw_compiler, 3877 bw_compiler=bw_compiler, (...) 3887 no_tangents=False, 3888 ) 3890 with compiled_autograd.disable(): -> 3891 compiled_fn = create_aot_dispatcher_function( 3892 functional_call, 3893 full_args, 3894 aot_config, 3895 ) 3897 # TODO: There is something deeply wrong here; compiled_fn running with 3898 # the boxed calling convention, but aot_module_simplified somehow 3899 # historically returned a function that was not the boxed calling 3900 # convention. This should get fixed... 
3901 def forward(*runtime_args): File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/utils.py:189, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs) 187 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 188 t0 = time.time() --> 189 r = func(*args, **kwargs) 190 time_spent = time.time() - t0 191 compilation_time_metrics[key].append(time_spent) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:3429, in create_aot_dispatcher_function(flat_fn, flat_args, aot_config) 3426 compiler_fn = partial(aot_wrapper_dedupe, compiler_fn=compiler_fn) 3427 # You can put more passes here -> 3429 compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 3430 if aot_config.is_export: 3432 mutated_user_inp_locs = [ 3433 idx - aot_config.num_params_buffers 3434 for idx in fw_metadata.mutated_inp_indices 3435 if idx >= aot_config.num_params_buffers 3436 ] File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:2212, in aot_wrapper_dedupe(flat_fn, flat_args, aot_config, compiler_fn, fw_metadata) 2209 break 2211 if ok: -> 2212 return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2214 # export path: ban duplicate inputs for now, add later if requested. 2215 if aot_config.is_export: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:2392, in aot_wrapper_synthetic_base(flat_fn, flat_args, aot_config, fw_metadata, needs_autograd, compiler_fn) 2390 # Happy path: we don't need synthetic bases 2391 if synthetic_base_info is None: -> 2392 return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2394 # export path: ban synthetic bases for now, add later if requested. 2395 if aot_config.is_export: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1558, in aot_dispatch_base(flat_fn, flat_args, aot_config, fw_metadata) 1557 def aot_dispatch_base(flat_fn, flat_args: List[Tensor], aot_config: AOTConfig, *, fw_metadata: ViewAndMutationMeta): -> 1558 fw_module = aot_dispatch_base_graph(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 1560 disable_amp = torch._C._is_any_autocast_enabled() 1561 context = torch._C._DisableAutocast if disable_amp else nullcontext File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1533, in aot_dispatch_base_graph(flat_fn, flat_args, aot_config, fw_metadata) 1526 keep_mutations = aot_config.keep_inference_input_mutations 1527 fn_to_trace = fn_input_mutations_to_outputs( 1528 flat_fn, 1529 fw_metadata, 1530 keep_data_input_mutations=aot_config.keep_inference_input_mutations, 1531 ) -> 1533 fw_module = create_functionalized_graph( 1534 fn_to_trace, 1535 flat_args, 1536 meta=fw_metadata, 1537 aot_config=aot_config, 1538 trace_joint=False, 1539 ) 1541 # As long as we opted to remove input mutations, then 1542 # there should be *NO* mutating ops in the graph at this point. 
1543 copy_count = assert_functional_graph(fw_module.graph, allow_input_mutations=aot_config.keep_inference_input_mutations) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1420, in create_functionalized_graph(fn, args, meta, aot_config, trace_joint) 1417 helper, args = create_functionalized_rng_ops_wrapper(helper, args, trace_joint) 1419 with enable_python_dispatcher(): -> 1420 fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 1422 return fx_g File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:809, in make_fx.<locals>.wrapped(*args) 801 # We disable the autocast cache as the autocast cache causes type conversions on parameters to 802 # check a cache, which introduces untracked tensors into the graph 803 # 804 # We also disable tracing by any other tensor proxy-based tracers except the current. The 805 # purpose of `make_fx` is to produce graphmodules as a side effect; its internal execution is 806 # thus irrelevant to any external functional trace. 807 with decompose(decomposition_table), fake_tensor_mode, python_dispatcher_mode, pre_dispatch_mode, proxy_function_mode, \ 808 sym_mode, proxy_mode, disable_autocast_cache(), disable_proxy_modes_tracing(enable_current=True): --> 809 t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs)) 811 # TODO: kind of a bad way to do it, should maybe figure out a better way 812 if tracing_mode == "symbolic": File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_compile.py:24, in _disable_dynamo.<locals>.inner(*args, **kwargs) 20 @functools.wraps(fn) 21 def inner(*args, **kwargs): 22 import torch._dynamo ---> 24 return torch._dynamo.disable(fn, recursive)(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs) 326 dynamic_ctx.__enter__() 327 try: --> 328 return fn(*args, **kwargs) 329 finally: 330 set_eval_frame(prior) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/external_utils.py:17, in wrap_inline.<locals>.inner(*args, **kwargs) 15 @functools.wraps(fn) 16 def inner(*args, **kwargs): ---> 17 return fn(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:468, in dispatch_trace(root, tracer, concrete_args) 462 @torch._disable_dynamo 463 def dispatch_trace( 464 root: Union[torch.nn.Module, Callable], 465 tracer: Tracer, 466 concrete_args: Optional[Tuple[Any, ...]] = None, 467 ) -> GraphModule: --> 468 graph = tracer.trace(root, concrete_args) 469 name = root.__class__.__name__ if isinstance(root, torch.nn.Module) else root.__name__ 470 return GraphModule(tracer.root, graph, name) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:328, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs) 326 dynamic_ctx.__enter__() 327 try: --> 328 return fn(*args, **kwargs) 329 finally: 330 set_eval_frame(prior) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_dynamo/external_utils.py:17, in wrap_inline.<locals>.inner(*args, **kwargs) 15 @functools.wraps(fn) 16 def inner(*args, **kwargs): ---> 17 return fn(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py:817, in 
Tracer.trace(self, root, concrete_args) 810 for module in self._autowrap_search: 811 _autowrap_check( 812 patcher, module.__dict__, self._autowrap_function_ids 813 ) 814 self.create_node( 815 "output", 816 "output", --> 817 (self.create_arg(fn(*args)),), 818 {}, 819 type_expr=fn.__annotations__.get("return", None), 820 ) 822 self.submodule_paths = None 823 finally: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:485, in wrap_key.<locals>.wrapped(*proxies) 482 assert isinstance(m, ProxyTorchDispatchMode) 483 track_tensor_tree(flat_tensors, flat_proxies, constant=None, tracer=tracer) --> 485 out = f(*tensors) 486 out = pytree.tree_map_only( 487 torch.Tensor, 488 lambda t: get_proxy_slot(t, tracer, t, lambda x: x.proxy), 489 out 490 ) 491 out = pytree.tree_map_only( 492 (SymInt, SymFloat, SymBool), 493 lambda t: get_proxy_slot(t.node, tracer)(), 494 out 495 ) File <string>:1, in <lambda>(arg0) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1412, in create_functionalized_graph.<locals>.fwd_helper(*args) 1411 def fwd_helper(*args): -> 1412 return functionalized_f_helper(*args) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1363, in create_functionalized_graph.<locals>.functionalized_f_helper(*args) 1360 torch._enable_functionalization(reapply_views=True) 1361 try: 1362 # Run the joint -> 1363 f_outs = fn(*f_args) 1364 finally: 1365 torch._disable_functionalization() File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:1165, in fn_input_mutations_to_outputs.<locals>.inner_fn(*args) 1164 def inner_fn(*args): -> 1165 outs = fn(*args) 1166 assert len(meta.output_info) == len(outs) 1167 # The compiled fw will return mutated input tensors, *including* metadata-only mutation. 1168 # However, if keep_data_input_mutations is set, the compiled fw only needs to return metadata-mutated inputs. 1169 # (because data-only input mutations are handled directly in the compiled graph) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py:3496, in create_functional_call.<locals>.functional_call(*args, **kwargs) 3492 warnings.filterwarnings( 3493 "ignore", "Anomaly Detection has been enabled." 
3494 ) 3495 with torch.autograd.detect_anomaly(check_nan=False): -> 3496 out = Interpreter(mod).run(*args[params_len:], **kwargs) 3497 else: 3498 out = mod(*args[params_len:], **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/interpreter.py:138, in Interpreter.run(self, initial_env, enable_io_processing, *args) 135 continue 137 try: --> 138 self.env[node] = self.run_node(node) 139 except Exception as e: 140 if self.extra_traceback: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/interpreter.py:195, in Interpreter.run_node(self, n) 193 assert isinstance(args, tuple) 194 assert isinstance(kwargs, dict) --> 195 return getattr(self, n.op)(n.target, args, kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/interpreter.py:289, in Interpreter.call_method(self, target, args, kwargs) 287 # Execute the method and return the result 288 assert isinstance(target, str) --> 289 return getattr(self_obj, target)(*args_tail, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/apis.py:188, in vmap.<locals>.wrapped(*args, **kwargs) 187 def wrapped(*args, **kwargs): --> 188 return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/vmap.py:266, in vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs) 262 return _chunked_vmap(func, flat_in_dims, chunks_flat_args, 263 args_spec, out_dims, randomness, **kwargs) 265 # If chunk_size is not specified. --> 266 return _flat_vmap( 267 func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs 268 ) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/vmap.py:38, in doesnt_support_saved_tensors_hooks.<locals>.fn(*args, **kwargs) 35 @functools.wraps(f) 36 def fn(*args, **kwargs): 37 with torch.autograd.graph.disable_saved_tensors_hooks(message): ---> 38 return f(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_functorch/vmap.py:379, in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs) 377 try: 378 batched_inputs = _create_batched_inputs(flat_in_dims, flat_args, vmap_level, args_spec) --> 379 batched_outputs = func(*batched_inputs, **kwargs) 380 return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func) 381 finally: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/graph_module.py:678, in GraphModule.recompile.<locals>.call_wrapped(self, *args, **kwargs) 677 def call_wrapped(self, *args, **kwargs): --> 678 return self._wrapped_call(self, *args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/graph_module.py:284, in _WrappedCall.__call__(self, obj, *args, **kwargs) 282 raise e.with_traceback(None) 283 else: --> 284 raise e File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/graph_module.py:274, in _WrappedCall.__call__(self, obj, *args, **kwargs) 272 return self.cls_call(obj, *args, **kwargs) 273 else: --> 274 return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 275 except Exception as e: 276 assert e.__traceback__ File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py:795, in Tracer.trace.<locals>.module_call_wrapper(mod, *args, **kwargs) 788 return 
_orig_module_call(mod, *args, **kwargs) 790 _autowrap_check( 791 patcher, 792 getattr(getattr(mod, "forward", mod), "__globals__", {}), 793 self._autowrap_function_ids, 794 ) --> 795 return self.call_module(mod, forward, args, kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:425, in PythonKeyTracer.call_module(self, m, forward, args, kwargs) 422 def call_module( 423 self, m: torch.nn.Module, forward: Callable[..., Any], args: Tuple[Any, ...], kwargs: Dict[str, Any] 424 ) -> Any: --> 425 return forward(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py:788, in Tracer.trace.<locals>.module_call_wrapper.<locals>.forward(*args, **kwargs) 787 def forward(*args, **kwargs): --> 788 return _orig_module_call(mod, *args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs) 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1517 else: -> 1518 return self._call_impl(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs) 1522 # If we don't have any hooks, we want to skip the rest of the logic in 1523 # this function, and just call forward. 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1525 or _global_backward_pre_hooks or _global_backward_hooks 1526 or _global_forward_hooks or _global_forward_pre_hooks): -> 1527 return forward_call(*args, **kwargs) 1529 try: 1530 result = None File <eval_with_key>.429:6, in forward(self, select) 4 def forward(self, select): 5 sum_1 = torch.sum(select); select = None ----> 6 add = sum_1 + 0.01; sum_1 = None 7 return add File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs) 18 simple_call_counter[fn.__qualname__] = 0 19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 ---> 20 return fn(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:555, in ProxyTorchDispatchMode.__torch_dispatch__(self, func, types, args, kwargs) 552 @count 553 def __torch_dispatch__(self, func, types, args=(), kwargs=None): 554 with self.sym_mode.enable(False), set_original_aten_op(func): --> 555 return self.inner_torch_dispatch(func, types, args, kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:580, in ProxyTorchDispatchMode.inner_torch_dispatch(self, func, types, args, kwargs) 577 if func in [prim.device.default]: 578 return func(*args, **kwargs) --> 580 return proxy_call(self, func, self.pre_dispatch, args, kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:262, in proxy_call(proxy_mode, func, pre_dispatch, args, kwargs) 260 if func in CURRENT_DECOMPOSITION_TABLE: 261 with proxy_mode: --> 262 r = CURRENT_DECOMPOSITION_TABLE[func](*args, **kwargs) 263 if r is not NotImplemented: 264 return r File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_decomp/decompositions.py:1737, in _to_copy(x, dtype, layout, device, pin_memory, non_blocking, memory_format) 1735 x = torch._prims.device_put(x, device) 1736 if dtype is not 
None and not dtype_converted: -> 1737 x = torch._prims.convert_element_type(x, dtype) 1738 dtype_converted = True 1739 # In case of dtype promotion, faketensor converted into tensor. 1740 # Need to convert into faketensor if input was a faketensor. File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_ops.py:448, in OpOverload.__call__(self, *args, **kwargs) 447 def __call__(self, *args, **kwargs): --> 448 return self._op(*args, **kwargs or {}) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs) 18 simple_call_counter[fn.__qualname__] = 0 19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 ---> 20 return fn(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:555, in ProxyTorchDispatchMode.__torch_dispatch__(self, func, types, args, kwargs) 552 @count 553 def __torch_dispatch__(self, func, types, args=(), kwargs=None): 554 with self.sym_mode.enable(False), set_original_aten_op(func): --> 555 return self.inner_torch_dispatch(func, types, args, kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:580, in ProxyTorchDispatchMode.inner_torch_dispatch(self, func, types, args, kwargs) 577 if func in [prim.device.default]: 578 return func(*args, **kwargs) --> 580 return proxy_call(self, func, self.pre_dispatch, args, kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/fx/experimental/proxy_tensor.py:361, in proxy_call(proxy_mode, func, pre_dispatch, args, kwargs) 358 else: 359 args[0].proxy = proxy_out --> 361 out = func(*args, **kwargs) 363 # In some circumstances, we will be tracing in a situation where a tensor 364 # is *statically* known to be a constant (currently, this only happens if 365 # you run torch.tensor; deterministic factory functions like torch.arange (...) 382 # propagating const-ness. Similarly, we don't require the constant to 383 # live on CPU, but we could. 
384 any_constant = pytree.tree_any_only(_ProxyTensor, lambda t: t.constant is not None, (f_args, f_kwargs)) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_ops.py:448, in OpOverload.__call__(self, *args, **kwargs) 447 def __call__(self, *args, **kwargs): --> 448 return self._op(*args, **kwargs or {}) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs) 18 simple_call_counter[fn.__qualname__] = 0 19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 ---> 20 return fn(*args, **kwargs) File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1250, in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs) 1248 assert self not in _get_current_dispatch_mode_stack(), func 1249 try: -> 1250 return self.dispatch(func, types, args, kwargs) 1251 except TypeError: 1252 log.exception("fake tensor raised TypeError") File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1470, in FakeTensorMode.dispatch(self, func, types, args, kwargs) 1464 if ( 1465 "prims::" in func._schema.name 1466 and hasattr(func, "prim_meta_impl") 1467 and not stride_incorrect_op(func) 1468 ): 1469 with self: -> 1470 return func.prim_meta_impl(*args, **kwargs) 1472 # Users can register FakeTensor rules for custom operators 1473 # Call them if they exist. 1474 if func.name() in torch._custom_op.impl.global_registry: File /data/shurui.gui/mambaforge/envs/CSR/lib/python3.11/site-packages/torch/_prims/__init__.py:1993, in _convert_element_type_meta(a, dtype) 1991 def _convert_element_type_meta(a: TensorLikeType, dtype: torch.dtype) -> TensorLikeType: 1992 # Type checks -> 1993 assert isinstance(a, TensorLike) 1994 assert isinstance(dtype, torch.dtype) 1996 # dtype conversion preserves dense strides ``` ### Versions PyTorch version: 2.1.0 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:40:35) [GCC 12.3.0] (64-bit runtime) Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 12.3.52 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe GPU 1: NVIDIA A100 80GB PCIe GPU 2: NVIDIA A100 80GB PCIe GPU 3: NVIDIA A100 80GB PCIe GPU 4: NVIDIA A100 80GB PCIe GPU 5: NVIDIA A100 80GB PCIe GPU 6: NVIDIA A100 80GB PCIe GPU 7: NVIDIA A100 80GB PCIe Nvidia driver version: 510.60.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 112 On-line CPU(s) list: 0-111 Thread(s) per core: 2 Core(s) per socket: 28 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz Stepping: 7 CPU MHz: 3399.999 CPU max MHz: 4000.0000 CPU min MHz: 1000.0000 BogoMIPS: 5400.00 Virtualization: VT-x L1d cache: 1.8 MiB L1i cache: 1.8 MiB L2 cache: 56 MiB L3 cache: 77 MiB NUMA node0 CPU(s): 
0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111 Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.26.0 [pip3] torch==2.1.0 [pip3] torch_geometric==2.4.0 [pip3] torchaudio==2.1.0 [pip3] torchvision==0.16.0 [pip3] triton==2.1.0 [conda] blas 2.116 mkl conda-forge [conda] blas-devel 3.9.0 16_linux64_mkl conda-forge [conda] cudatoolkit 11.8.0 h4ba93d1_12 conda-forge [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] libblas 3.9.0 16_linux64_mkl conda-forge [conda] libcblas 3.9.0 16_linux64_mkl conda-forge [conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch [conda] liblapack 3.9.0 16_linux64_mkl conda-forge [conda] liblapacke 3.9.0 16_linux64_mkl conda-forge [conda] mkl 2022.1.0 h84fe81f_915 conda-forge [conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge [conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge [conda] numpy 1.26.0 py311h64a7726_0 conda-forge [conda] pyg 2.4.0 py311_torch_2.1.0_cu118 pyg [conda] pytorch 2.1.0 py3.11_cuda11.8_cudnn8.7.0_0 pytorch [conda] pytorch-cuda 11.8 h7e8668a_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.1.0 py311_cu118 pytorch [conda] torchtriton 2.1.0 py311 pytorch [conda] torchvision 0.16.0 py311_cu118 pytorch
0
23
111,732
Pass `ignored_params` at the leaf FSDP wrapping class call
open source, release notes: distributed (fsdp)
Fixes #111623
2
24
111,731
Support tracing base torch_function impl
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #111737 * __->__ #111731 * #111730 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
25
111,730
TensorWithTFOverride inheritance from TensorVariable
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #111737 * #111731 * __->__ #111730 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
26
111,729
An OOM where there should not be any OOM.
null
### 🐛 Describe the bug I see similar type of errors being asked about in quite a few places, with advice given being usually useless. The suggestion below to muck around with an environment variable is similarly useless. What is confounding to me is that memory allocation is tiny in comparison to still available space. Why is this happening?   Traceback (most recent call last): File "fine-tune.py", line 338, in <module> train() File "fine-tune.py", line 331, in train trainer.train() File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/transformers/trainer.py", line 1591, in train return inner_training_loop( File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/transformers/trainer.py", line 1726, in _inner_training_loop model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer) File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1280, in prepare result = self._prepare_deepspeed(*args) File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1662, in _prepare_deepspeed engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/__init__.py", line 171, in initialize engine = DeepSpeedEngine(args=args, File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 304, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1212, in _configure_optimizer self.optimizer = self._configure_zero_optimizer(basic_optimizer) File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1473, in _configure_zero_optimizer optimizer = DeepSpeedZeroOptimizer( File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 509, in __init__ self.initialize_optimizer_states() File "/home/developer/mambaforge/envs/FinGPT/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 644, in initialize_optimizer_states self.optimizer.step() File "/home/developer/pytorch/torch/optim/lr_scheduler.py", line 69, in wrapper return wrapped(*args, **kwargs) File "/home/developer/pytorch/torch/optim/optimizer.py", line 280, in wrapper out = func(*args, **kwargs) File "/home/developer/pytorch/torch/optim/optimizer.py", line 33, in _use_grad ret = func(self, *args, **kwargs) File "/home/developer/pytorch/torch/optim/adamw.py", line 171, in step adamw( File "/home/developer/pytorch/torch/optim/adamw.py", line 321, in adamw func( File "/home/developer/pytorch/torch/optim/adamw.py", line 564, in _multi_tensor_adamw exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 134.00 MiB (GPU 0; 11.93 GiB total capacity; 4.48 GiB already allocated; 6.69 GiB free; 4.77 GiB allowed; 4.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ### Versions UBuntu 20.04 Python 3.8 Anaconda environment Inside a docker
0
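One detail in the traceback above that is easy to miss: the allocator reports "4.77 GiB allowed" even though 6.69 GiB is free, which usually indicates a per-process memory cap (for example set via `torch.cuda.set_per_process_memory_fraction`, possibly by a library in the stack) rather than a truly full GPU. A hedged sketch for inspecting the device and adjusting such a cap; the 0.9 below is an arbitrary example fraction:

```python
import torch

device = torch.device("cuda:0")

# Inspect what the process currently sees on the device.
free, total = torch.cuda.mem_get_info(device)
print(f"free={free / 2**30:.2f} GiB, total={total / 2**30:.2f} GiB")
print(torch.cuda.memory_summary(device, abbreviated=True))

# If earlier code (or a library) capped this process, the cap can be raised.
torch.cuda.set_per_process_memory_fraction(0.9, device)
```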
27
111,728
Not Implemented Issue
null
### 🚀 The feature, motivation and pitch NotImplementedError: The operator 'aten::_unique2' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS. ### Alternatives Please add this ### Additional context _No response_
0
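Until `aten::_unique2` gets an MPS kernel, the two workarounds implied by the error message can be written as below. This is a sketch, not an official recommendation; the fallback variable generally has to be set before `torch` is imported (or in the shell environment), and the explicit CPU round-trip avoids the global fallback entirely.

```python
import os
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")  # set before importing torch

import torch

x = torch.tensor([3, 1, 3, 2, 1], device="mps")

# Option 1: rely on the CPU fallback enabled above.
u1 = torch.unique(x)

# Option 2: run the unsupported op on CPU explicitly and move the result back.
u2 = torch.unique(x.cpu()).to("mps")

print(u1, u2)
```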
28
111,727
[TESTING] Check Triton update after elementwise dedup fix
ciflow/trunk, topic: not user facing, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111727 This PR is patched over the current Triton pin: https://github.com/openai/triton/pull/2512 .
1
29
111,726
[dynamo] Remove VariableTracker.propagate
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111726 * #111725 * #111415 * #111614 * #111717 * #111306 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
30
111,725
[dynamo] Remove VariableTracker.add_options
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #111726 * __->__ #111725 * #111415 * #111614 * #111717 * #111306 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
31
111,724
[torchx] Do not terminate parent process if exit code from child isn't valid
fb-exported
Summary: There's no reason to terminate the parent process trying to find the name of the signal received by the child process. Let's make sure this is handled properly, which then will ensure that parent process can process child failures. Test Plan: Unit tests. Differential Revision: D50516668
6
32
111,722
Add cudagraph_mark_step_begin in torch.compiler, reference in error message
module: inductor, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111722 cc @chauhang cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
1
33
111,721
Constrain sdpa to fx strides
module: inductor, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111721 Fix for https://github.com/pytorch/pytorch/issues/109607. sdpa requires last dimension strides to be 1. Add constraint so that we run the op with the strides we observed in tracing. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
2
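A small illustration of the stride requirement mentioned in the PR description above, using plain eager tensors (unrelated to the inductor change itself): a transposed view loses the unit stride in the last dimension, and `.contiguous()` restores it.

```python
import torch

q = torch.randn(2, 4, 8, 16)     # (batch, heads, seq, head_dim), contiguous
qt = q.transpose(-1, -2)         # a view: the last-dimension stride is no longer 1

print(q.stride())                # (512, 128, 16, 1)
print(qt.stride())               # (512, 128, 1, 16)
print(qt.contiguous().stride())  # (512, 128, 8, 1): unit stride in the last dimension again
```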
34
111,719
add dry run metrics to td strategies
topic: not user facing
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111719 Creates a decorator in order to emit metrics for dry runs on target determination strategies. @ZainRizvi does this seem reasonable? <!-- copilot:summary --> ### <samp>🤖 Generated by Copilot at c90a623</samp> Add a new module `dry_run.py` to support dry run mode for test dependency strategies. This mode allows testing the logic and performance of different strategies without actually running the tests. This can help improve testing efficiency and quality.
1
35
111,718
Wrong way of checking if CustomModule is a subclass of torch.nn.Module
null
### 🐛 Describe the bug When I build my custom module and try to add it to a sequential text processing, with "[torchtext.transforms.Sequential](https://pytorch.org/text/stable/transforms.html#torchtext.transforms.Sequential)" It raises an error even though I'm doing the sub classing correctly. This is a fragment of the error stack trace: ![image](https://github.com/pytorch/pytorch/assets/147768729/a5e62c5e-7cc8-40b4-aa3d-73bdfdd14da2) When I go directly to the code to see what happens I find that the method used for checking if the provided module is a subclass of torch.nn.Module I find this: ![image](https://github.com/pytorch/pytorch/assets/147768729/a9939b8c-9270-4c81-90ee-e25470f3d92e) This is a mistake because the function for doing the subclassing check is 'issubclass' instead of 'isinstance'. I changed the code and it worked as needed, so please check this bug out. ### Versions Collecting environment information... PyTorch version: 2.1.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Manjaro Linux (x86_64) GCC version: (GCC) 13.2.1 20230801 Clang version: 16.0.6 CMake version: version 3.27.5 Libc version: glibc-2.38 Python version: 3.11.5 (main, Sep 2 2023, 14:16:33) [GCC 13.2.1 20230801] (64-bit runtime) Python platform: Linux-6.1.53-1-MANJARO-x86_64-with-glibc2.38 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050 Ti Nvidia driver version: 535.104.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Vendor ID: AuthenticAMD Model name: AMD Ryzen 5 3600X 6-Core Processor CPU family: 23 Model: 113 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU(s) scaling MHz: 70% CPU max MHz: 4408,5928 CPU min MHz: 2200,0000 BogoMIPS: 7603,86 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es Virtualization: AMD-V L1d cache: 192 KiB (6 instances) L1i cache: 192 KiB (6 instances) L2 cache: 3 MiB (6 instances) L3 cache: 32 MiB (2 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-11 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec 
rstack overflow: Mitigation; safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.1 [pip3] pytorch-lightning==2.1.0 [pip3] torch==2.1.0 [pip3] torchaudio==2.1.0 [pip3] torchdata==0.7.0 [pip3] torchmetrics==1.2.0 [pip3] torchtext==0.16.0 [pip3] torchvision==0.16.0 [pip3] triton==2.1.0 [conda] Could not collect
0
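For readers following the report above, the distinction the reporter points at is plain Python: `isinstance` checks an object against a class, `issubclass` checks a class against a class, so the correct call depends on whether a container receives instances or types. A small sketch with a stand-in module (no torchtext involved):

```python
from torch import nn

class CustomModule(nn.Module):
    def forward(self, x):
        return x

m = CustomModule()

print(isinstance(m, nn.Module))             # True  -- an *instance* against a class
print(issubclass(CustomModule, nn.Module))  # True  -- a *class* against a class
print(isinstance(CustomModule, nn.Module))  # False -- a class is not an instance
# issubclass(m, nn.Module) would raise TypeError: issubclass() arg 1 must be a class
```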
36
111,717
[dynamo] Lazily construct symbolic_locals
module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #111726 * #111725 * #111415 * #111614 * __->__ #111717 * #111306 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
37
111,716
Cannot pip install torch 2.0.1
null
### 🐛 Describe the bug I was trying to follow the instruction on the [webpage](https://pytorch.org/get-started/previous-versions/) to install torch 2.0.1 using pip. ``` # ROCM 5.4.2 (Linux only) pip install torch==2.0.1+rocm5.4.2 torchvision==0.15.2+rocm5.4.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/rocm5.4.2 # CUDA 11.7 pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu117 # CUDA 11.8 pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118 # CPU only pip install torch==2.0.1+cpu torchvision==0.15.2+cpu torchaudio==2.0.2 --index-url https://download ``` But it would throw an error e.g. for installing 2.0.1+cu117 ``` ERROR: Could not find a version that satisfies the requirement torch==2.0.1+cu117 (from versions: 1.13.0+cu117, 1.13.1+cu117) ERROR: No matching distribution found for torch==2.0.1+cu117 ``` Commands for other versions above throw similar errors. ### Versions Attempt to install 2.0.1
0
38
111,715
[Export] Don't serialize missing args with default value
fb-exported, topic: not user facing, module: inductor, ciflow/inductor, module: export
Summary: Per https://docs.google.com/document/d/1FzWm-sHYwmRi3x_g036kOxd99KaYquUsA-L5JwOn8ys/edit I wonder if this would break executorch? @larryliu0820 I see exir/serialize.py using export's GraphModuleSerializer. Test Plan: Existing CIs Differential Revision: D50519217 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
4
39
111,713
[dynamo] generic `is_` type shortcut is not appropriately guarded
bug, oncall: pt2, module: dynamo
### 🐛 Describe the bug This hack https://github.com/pytorch/pytorch/blob/5a2f97dee80ca27b732e12b61359d6e475a9c03b/torch/_dynamo/variables/builtin.py#L1310 in https://github.com/pytorch/pytorch/pull/104840 is too strong. ### Use-Cases Support for tracing `is_` when there's a type mismatch: https://github.com/pytorch/pytorch/issues/109504 Part of the way to: https://github.com/pytorch/pytorch/issues/111550 ### Solution Perhaps installing an unalias check and guarding on it as a python constant would be enough to handle the generic `is_` check without resorting to hacks like this. ### Repro
```python
import collections

import torch

def fn(x, y, z):
    z += 1
    return x is y, z

x = collections.OrderedDict({1: 2})
y = {1: 2}
z = torch.tensor([1])

opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
assert opt_fn(x, y, z) == fn(x, y, z)  # Compile with x is y == False
assert opt_fn(x, x, z) == fn(x, x, z)  # Does not recompile as input types are not guarded
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng ### Versions main cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
0
40
111,712
Re-enable some embedded bag tests
topic: not user facing
They were temporarily disabled in 2019 by https://github.com/pytorch/pytorch/pull/26599. Maybe it has been fixed already... <!-- copilot:poem --> ### <samp>🤖 Generated by Copilot at 1e49d84</samp> > _`TestEmbeddingNN`_ > _CUDA tests restored_ > _Bug fixed in autumn breeze_
1
41
111,711
[aotinductor] 14k models: CppCompileError: C++ compile error
triaged, oncall: pt2
``` 25 errors like: CppCompileError: C++ compile error (example ./generated/test_krrish94_nerf_pytorch.py:SinThetaByTheta # pytest ./generated/test_krrish94_nerf_pytorch.py -k test_001) ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
1
42
111,710
`fbgemm` update causes failures in `test_embedding.py`
high priority, triage review, module: regression, module: third_party
### 🐛 Describe the bug ``` % python3 test/nn/test_embedding.py -k test_EmbeddingBag_per_sample_weights_and_new_offsets_cpu_int32_int32_bfloat16 ... AssertionError: Tensor-likes are not close! Mismatched elements: 4 / 10 (40.0%) Greatest absolute difference: 9.1875 at index (3, 1) (up to 0.1 allowed) Greatest relative difference: 0.57421875 at index (3, 1) (up to 0 allowed) ``` Reverting https://github.com/pytorch/FBGEMM/pull/1851 fixes the problem ### Versions Nightly cc @ezyang @gchanan @zou3519 @kadeng
2
43
111,709
lintrunner job time keeps growing
triaged, module: devx
For example: Sep 29 https://hud.pytorch.org/pytorch/pytorch/commit/bc047ec906d8e1730e2ccd8192cef3c3467d75d1 - 18 mins Oct 06 https://hud.pytorch.org/pytorch/pytorch/commit/65d40a72c4ff3cf5218dffda8b5da60ea2163890 - 22 mins Today, Oct 20 https://hud.pytorch.org/pytorch/pytorch/commit/303c54dbd9921d78ed01116547c063b450338c74 - 26 mins If we want to reduce the time, we need to investigate what's taking so long. Two possible candidates are ruff and clangtidy. It would be nice to have time split by linter in the lintrunner job logs. cc @ZainRizvi @huydhn @clee2000 @PaliC @malfet
3
44
111,706
DISABLED test_meta_outplace_fft_ifft_cpu_uint8 (__main__.TestMetaCPU)
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_uint8&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17905842710). Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 36 failures and 12 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_meta_outplace_fft_ifft_cpu_uint8` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_meta.py` cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
1
45
111,704
Add more flexibility on print / output console
null
### 🚀 The feature, motivation and pitch For a lot of debugging usage, printing tensors to the console is clearly useful. The current C++ API output fixes the `std::cout` precision for floats to 4 decimals. > float can hold up to 7 decimal digits accurately while double can hold up to 15 The only 'user' parameter of `torch::print()` is the `int64_t linesize = 80`. ### Alternatives I see at least two major options that would be useful for common usage: 1. set the number of decimals (see `std::setprecision`) for floating-point numbers. 2. make scientific output optional, not automatic (see `std::fixed` / `std::scientific`) ### Additional context I currently output (`std::cout << tensor`) two different 8x8 Float Tensors for comparison:
```
 0.2360  0.0258  0.1689  0.1564 -0.2261  0.0567  0.1844  0.3033
 0.2940 -0.2500 -0.0653 -0.0805  0.2112 -0.1635  0.2915  0.3023
 0.2912  0.0944  0.1377 -0.1824 -0.1882 -0.2844  0.0189 -0.2718
-0.2812 -0.0292 -0.3035 -0.0724  0.1665 -0.2391  0.0724 -0.1974
-0.2716  0.1460 -0.3044  0.1312  0.2848  0.1549  0.2815  0.1874
 0.0980 -0.1967  0.1135  0.2974 -0.1395  0.2800 -0.2298  0.2627
 0.2153  0.1423  0.2779  0.0157 -0.3499  0.1718  0.2147  0.2121
 0.2856  0.2004  0.0951 -0.0757 -0.3016  0.0643 -0.2685 -0.1260
[ CUDAFloatType{8,8} ]
 0.2360  0.0258  0.1689  0.1564 -0.2261  0.0567  0.1844  0.3033
 0.2940 -0.2500 -0.0653 -0.0805  0.2112 -0.1635  0.2915  0.3023
 0.2912  0.0944  0.1377 -0.1824 -0.1882 -0.2844  0.0189 -0.2718
-0.2812 -0.0292 -0.3035 -0.0724  0.1665 -0.2391  0.0724 -0.1973
-0.2716  0.1460 -0.3044  0.1312  0.2848  0.1549  0.2815  0.1874
 0.0980 -0.1967  0.1135  0.2974 -0.1395  0.2800 -0.2298  0.2627
 0.2153  0.1423  0.2779  0.0157 -0.3499  0.1718  0.2147  0.2121
 0.2856  0.2004  0.0951 -0.0757 -0.3016  0.0643 -0.2685 -0.1260
[ CUDAFloatType{8,8} ]
```
But `torch::allclose(actual, expected, rtol, atol);` (where `rtol = atol = 1e-5`) gives me `false`. This `false` is probably a `true`, but the console output doesn't help with a quick check. !!! Thank you for Torch !!!
0
46
111,695
Running SentenceTransformer encoding step causes Docker containers on Mac (Silicon) to crash with code 139
null
### 🐛 Describe the bug Hi! Hopefully there isn't a similar issue already open. I couldn't find one after a search through the issues list. Feel free to mark as duplicate/close if it already exists. I've created this repository with a minimal setup to reproduce the error: https://github.com/sabaimran/repro-torch-bug. You just have to clone it and run `docker-compose up` to see the error. Basically it runs the script below in a minimal Docker container: ```python import logging from typing import List import torch from langchain.embeddings import HuggingFaceEmbeddings logger = logging.getLogger(__name__) class EmbeddingsModel: def __init__(self): self.model_name = "sentence-transformers/multi-qa-MiniLM-L6-cos-v1" encode_kwargs = {"normalize_embeddings": True} if torch.cuda.is_available(): # Use CUDA GPU device = torch.device("cuda:0") elif torch.backends.mps.is_available(): # Use Apple M1 Metal Acceleration device = torch.device("mps") else: device = torch.device("cpu") self.device = device model_kwargs = {"device": device} self.embeddings_model = HuggingFaceEmbeddings( model_name=self.model_name, encode_kwargs=encode_kwargs, model_kwargs=model_kwargs ) def embed_documents(self, docs: List[str]): logger.info(f"Using device: {self.device} to embed {len(docs)} documents") return self.embeddings_model.embed_documents(docs) model = EmbeddingsModel() embeddings = model.embed_documents(["this is a document", "so is this"]) print(f"Created embeddings of length {len(embeddings)}") ``` If you run this code inside of a Docker container (with the appropriate dependencies), it will fail with exit code 139. Pinning the `torch` package to `2.0.1` circumvents the error. See this other relevant issue: https://github.com/docker/for-mac/issues/7016 ### Versions Collecting environment information... PyTorch version: 2.1.0 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.2.1 (arm64) GCC version: Could not collect Clang version: 14.0.3 (clang-1403.0.22.14.1) CMake version: version 3.26.4 Libc version: N/A Python version: 3.11.4 (main, Jul 10 2023, 18:52:37) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime) Python platform: macOS-13.2.1-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M2 Pro Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.1 [pip3] torch==2.1.0 [pip3] torchvision==0.16.0 [conda] Could not collect
0
47
111,694
[Release/2.1.1][ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935
module: onnx, open source, release notes: onnx
Original PR: https://github.com/pytorch/pytorch/pull/110956 Fixes https://github.com/pytorch/pytorch/issues/110597 Summary: * Generic code: `torch._C.Value.node().mustBeNone()` is encapsulated into the high-level API `JitScalarType.from_value`; `_is_none` was also extended to accept either `None` or `torch._C.Value.node.mustBeNone()`, so users don't have to call into the TorchScript API manually when implementing operators * Specific to `new_zeros` (and ops of the `*_like` and `new_*` families): when checking dtype, we must always use `_is_none`, as proposed by https://github.com/pytorch/pytorch/pull/110935
1
48
111,693
[export] 14k models: AssertionError: graph-captured input # 2, of type <class 'torch.nn.parameter.Parameter'>, is not among original inputs of types
triaged, oncall: pt2, module: export
167 errors like: AssertionError: graph-captured input # 2, of type <class 'torch.nn.parameter.Parameter'>, is not among original inputs of types: (<class 'torch.Tensor'>) (example ./generated/test_XPixelGroup_BasicSR.py:SPADEResnetBlock # pytest ./generated/test_XPixelGroup_BasicSR.py -k test_030) cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
1
49
111,692
DISABLED test_sigmoid (__main__.TestQuantizedOps)
oncall: quantization, triaged, module: macos, skipped
Platforms: mac, macos This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_quantization.py%3A%3ATestQuantizedOps%3A%3Atest_sigmoid)). This test is failing on MacOS x86 https://hud.pytorch.org/pytorch/pytorch/commit/ca7d084ff9b67675cfff0d175ea6b96fcedc4950 cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @malfet @albanD
1
50
111,691
[aotinductor] 14k models: TypeError: make_boxed_func..g() missing 1 required positional argument: 'args'
triaged, oncall: pt2
347 errors like: TypeError: make_boxed_func..g() missing 1 required positional argument: 'args' (example ./generated/test_ludwig_ai_ludwig.py:SequenceReducer # pytest ./generated/test_ludwig_ai_ludwig.py -k test_015) cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
1
51
111,689
[Quantization] Add a test for QAT + PTQ selective quantization in
release notes: quantization
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111689 xnnpack quantizer Summary: For some workflows you want to quantize some parts of the model via qat and then continue eager mode training. After training, you want to export the whole model and perform PTQ on the rest. Test Plan: test added Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D50510480](https://our.internmc.facebook.com/intern/diff/D50510480)
3
52
111,688
Document torch.from_file and fix UntypedStorage.from_file docs
release notes: python_frontend, topic: docs
Fixes https://github.com/pytorch/pytorch/issues/37439 Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111688 cc @albanD
1
53
111,687
[Release/2.1.1][DCP] Remove _shard_tensor() call in load_sharded_optimizer_state_dict in optimizer.py #111096
null
Cherry pick into 2.1.1 [original PR: #111096](https://github.com/pytorch/pytorch/pull/111096) _shard_tensor() calls into dist.all_gather_object() and this is causing optimizer state dict loading to be super slow. Workaround: call FSDP._shard_utils._create_chunk_sharded_tensor() to construct ShardedTensor without any communication.
1
54
111,686
RecursionError for backend='inductor' with a loop
oncall: pt2
### 🐛 Describe the bug Running the following code causes RecursionError. It's not a very practical example, but it works totally fine in eager mode and with `torch.jit.script`. ``` python import torch class Net(torch.nn.Module): def forward(self, x): for i in range(1000): x = 1.0 * x return x net = Net() net = torch.compile(net) x = torch.tensor([1.0]) print(net(x)) ``` ### Error logs ... ``` File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 397, in <listcomp> return fn(*[load(index) for load in loaders]) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 397, in inner_fn return fn(*[load(index) for load in loaders]) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/lowering.py", line 397, in <listcomp> return fn(*[load(index) for load in loaders]) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/ir.py", line 2393, in loader return ops.load(self.name, indexer(index)) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 232, in inner return OpsWrapper._wrap(getattr(_ops, name)(*new_args, **new_kwargs)) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 132, in inner line = getattr(self.parent_handler, name)(*args, **kwargs) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 69, in inner fargs = [_arg_str(a) for a in args] File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 69, in <listcomp> fargs = [_arg_str(a) for a in args] File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/virtualized.py", line 59, in _arg_str return sympy_str(a) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/torch/_inductor/utils.py", line 395, in sympy_str return str(expr) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/core/_print_helpers.py", line 29, in __str__ return sstr(self, order=None) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/printer.py", line 372, in __call__ return self.__wrapped__(*args, **kwargs) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/str.py", line 999, in sstr p = StrPrinter(settings) File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/printer.py", line 261, in __init__ self._settings = self._get_initial_settings() File "/home/sdym/.conda/envs/py39/lib/python3.9/site-packages/sympy/printing/printer.py", line 252, in _get_initial_settings settings = cls._default_settings.copy() torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: RecursionError: maximum recursion depth exceeded while calling a Python object ``` ### Minified repro _No response_ ### Versions Collecting environment information... 
PyTorch version: 2.1.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Fedora release 36 (Thirty Six) (x86_64) GCC version: (conda-forge gcc 10.3.0-16) 10.3.0 Clang version: 11.1.0 (https://github.com/conda-forge/clangdev-feedstock 2816c2cf231a2d3a6d621af9bbb2c590c9e63fe7) CMake version: version 3.26.1 Libc version: glibc-2.35 Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0] (64-bit runtime) Python platform: Linux-6.2.15-100.fc36.x86_64-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Vendor ID: AuthenticAMD Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores CPU family: 23 Model: 49 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU(s) scaling MHz: 52% CPU max MHz: 4368.1641 CPU min MHz: 2200.0000 BogoMIPS: 6986.81 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es Virtualization: AMD-V L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 16 MiB (32 instances) L3 cache: 128 MiB (8 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-63 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] flake8==6.0.0 [pip3] flake8-bugbear==23.6.5 [pip3] flake8-comprehensions==3.3.0 [pip3] flake8-executable==2.0.4 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.5.0 [pip3] flake8-simplify==0.19.3 [pip3] mypy==1.4.1 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.1 [pip3] numpydoc==1.5.0 [pip3] onnx==1.14.1 [pip3] pytorch-sphinx-theme==0.0.19 [pip3] torch==2.1.0 [pip3] torcheval-nightly==2022.12.27 [pip3] torchsnapshot-nightly==2022.11.28 
[pip3] torchtnt==0.0.4 [pip3] triton==2.1.0 [conda] magma-cuda116 2.6.1 0 pytorch [conda] mkl 2022.1.0 h84fe81f_915 conda-forge [conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge [conda] numpy 1.23.1 pypi_0 pypi [conda] numpydoc 1.5.0 pypi_0 pypi [conda] pytorch-sphinx-theme 0.0.19 pypi_0 pypi [conda] torch 2.1.0 pypi_0 pypi [conda] torcheval-nightly 2022.12.27 pypi_0 pypi [conda] torchfix 0.1.1 pypi_0 pypi [conda] torchsnapshot-nightly 2022.11.28 pypi_0 pypi [conda] torchtnt 0.0.4 pypi_0 pypi [conda] triton 2.1.0 pypi_0 pypi cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
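A possible stopgap, assuming the failure is just Python's default recursion limit being exceeded while inductor lowers the 1000 chained ops (this is an assumption and is not verified to fix the underlying issue), is to raise the limit before compiling:

```python
import sys
import torch

sys.setrecursionlimit(100_000)  # assumption: the lowering only needs a deeper Python stack

class Net(torch.nn.Module):
    def forward(self, x):
        for _ in range(1000):
            x = 1.0 * x
        return x

net = torch.compile(Net())
print(net(torch.tensor([1.0])))
```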
2
55
111,685
Disable dynamo when running generated opcheck tests
fb-exported
Summary: Use `TORCHDYNAMO_DISABLE=1` when running generated opcheck tests. Enable some `fbgemm::pack_segments` tests that errored out (with error `RuntimeError: expected int but got s0*s1**2`) because dynamo was being run in the opcheck tests. Test Plan: `parsh -v --build-flags mode/dev-nosan //deeplearning/fbgemm/fbgemm_gpu:sparse_ops_test` then `run_tests("test_pack_segments")` Differential Revision: D50508958
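As an illustration of the environment variable mentioned in the summary, a hedged sketch of wiring it into a plain pytest invocation (the test file and filter below are placeholders; the real test plan uses Buck as shown above):

```python
import os
import subprocess

# Placeholder command: the same environment variable should keep dynamo
# out of the generated opcheck tests regardless of the test runner.
env = dict(os.environ, TORCHDYNAMO_DISABLE="1")
subprocess.run(
    ["python", "-m", "pytest", "sparse_ops_test.py", "-k", "test_pack_segments"],
    env=env,
    check=True,  # surface failures instead of ignoring the exit code
)
```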
5
56
111,682
[BE]: ruff apply rule PLW1510 to find silent subprocess errors
open source, better-engineering, NNC, release notes: jit, module: dynamo, ciflow/inductor
Opts in to `check=True` or `check=False` to ensure nonzero exit codes are propagated cc @EikanWang @jgong5 @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
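For context, a minimal sketch of the pattern the rule enforces (the command is only an example):

```python
import subprocess

# Without an explicit check=..., a failing command returns silently and the
# nonzero exit code is easy to miss; PLW1510 asks callers to decide explicitly.
subprocess.run(["git", "status"], check=True)   # raise CalledProcessError on nonzero exit
subprocess.run(["git", "status"], check=False)  # explicitly opt out of the check
```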
1
57
111,681
Make require_stride_order peek into AliasedLayout
module: inductor, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111681 Summary: `require_stride_order` doesn't know how to handle storage with `AliasedLayout`. It always resorts to a copy even when the view refers to a storage with `FixedLayout`. This causes an unneccessary allocation + copy for collective outputs. Peeking into `AliasedLayout` in `require_stride_order` seems to be the proper way to address the issue. Original program: ```python import tempfile import torch import torch.distributed as dist from torch.distributed._functional_collectives import * # noqa from torch._inductor.utils import run_and_get_triton_code def func(arg: torch.Tensor) -> torch.Tensor: buf0 = arg + 42 out0 = torch.ops.c10d_functional.all_reduce(buf0, "avg", "default", [0], 1) out0 = torch.ops.c10d_functional.wait_tensor(out0) return out0 if __name__ == "__main__": with tempfile.NamedTemporaryFile(delete=False) as tmpf: dist.init_process_group( backend="nccl", init_method=f"file://{tmpf.name}", rank=0, world_size=1 ) device = torch.device("cuda:0") compiled = torch.compile(func) print(run_and_get_triton_code(compiled, torch.rand(4, 4, device=device))) torch.cuda.synchronize() dist.destroy_process_group() ``` Before: ```python def call(args): arg0_1, = args args.clear() assert_size_stride(arg0_1, (4, 4), (4, 1)) with torch.cuda._DeviceGuard(0): torch.cuda.set_device(0) # no-op to ensure context buf0 = empty_strided((4, 4), (4, 1), device='cuda', dtype=torch.float32) # Source Nodes: [buf0], Original ATen: [aten.add] stream0 = get_cuda_stream(0) triton_poi_fused_add_0.run(arg0_1, buf0, 16, grid=grid(16), stream=stream0) del arg0_1 buf1 = buf0; del buf0 # reuse buf2_pg = c10d._find_or_create_pg_by_ranks_and_tag('default', [0], 1) buf2 = buf1 buf2_work = dist.all_reduce(buf2, async_op=True, group=buf2_pg, op=fun_col_impl._str_to_reduce_op('avg')) fun_col_impl._register_tensor_work(buf2, buf2_work) buf1 = _wait_tensor(buf1) buf3 = buf1 buf4 = empty_strided((4, 4), (4, 1), device='cuda', dtype=torch.float32) # Source Nodes: [out0_1], Original ATen: [c10d_functional.wait_tensor] triton_poi_fused_wait_tensor_1.run(buf3, buf4, 16, grid=grid(16), stream=stream0) del buf1 del buf3 return (buf4, ) ``` After: ```python def call(args): arg0_1, = args args.clear() assert_size_stride(arg0_1, (4, 4), (4, 1)) with torch.cuda._DeviceGuard(0): torch.cuda.set_device(0) # no-op to ensure context buf0 = empty_strided((4, 4), (4, 1), device='cuda', dtype=torch.float32) # Source Nodes: [buf0], Original ATen: [aten.add] stream0 = get_cuda_stream(0) triton_poi_fused_add_0.run(arg0_1, buf0, 16, grid=grid(16), stream=stream0) del arg0_1 buf1 = buf0; del buf0 # reuse buf2_pg = c10d._find_or_create_pg_by_ranks_and_tag('default', [0], 1) buf2 = buf1 buf2_work = dist.all_reduce(buf2, async_op=True, group=buf2_pg, op=fun_col_impl._str_to_reduce_op('avg')) fun_col_impl._register_tensor_work(buf2, buf2_work) buf1 = _wait_tensor(buf1) buf3 = buf1 del buf3 return (buf1, ) ``` Test Plan: Reviewers: Subscribers: Tasks: Tags: cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
2
58
111,680
[pytorch-vulkan] Support zero-dim
fb-exported, module: vulkan, release notes: vulkan, ciflow/periodic
Summary: 1. Add zero-dim (Tensor with 1 element) support. 2. New operator `_local_scalar_dense` that map a zero-dim tensor into a Scalar 3. `sum_dim`: 3.1. Add zero-dim support. 3.2. Fix bug in negative indices when handling multi-dim reduction call 3.3. Add unittests to test new coverages 4. Add `aten::sum` support. 5. Change bug in `add_tensor` (and other binary ops), when `other` is zero dim, we will use broadcast instead. Test Plan: ## Devserver Full Paste: P858982150 ``` [yipjustin@31799.od ~/fbsource (8593e7559)]$ LD_LIBRARY_PATH=third-party/swiftshader/lib/linux-x64/ buck2 run fbcode/mode/dev-nosan -c pt.has_backtraces=1 //xplat/caffe2:pt_vulkan_api_test_bin -- File changed: fbsource//xplat/caffe2/aten/src/ATen/test/vulkan_api_test.cpp Buck UI: https://www.internalfb.com/buck2/90cad0ff-ac98-4dbf-8d6f-0e419c06208d Network: Up: 43KiB Down: 1.4MiB (reSessionID-dfc3a318-fd1a-4ad6-b077-c454ebb4c6a8) Jobs completed: 6. Time elapsed: 26.4s. Cache hits: 0%. Commands: 2 (cached: 0, remote: 1, local: 1) BUILD SUCCEEDED Running main() from third-party/googletest/1.11.0/googletest/googletest/src/gtest_main.cc [==========] Running 385 tests from 1 test suite. [----------] Global test environment set-up. [----------] 385 tests from VulkanAPITest [ RUN ] VulkanAPITest.zero_size_tensor [ OK ] VulkanAPITest.zero_size_tensor (9 ms) [ RUN ] VulkanAPITest.zero_dim_tensor_1 [ OK ] VulkanAPITest.zero_dim_tensor_1 (84 ms) [ RUN ] VulkanAPITest.zero_dim_tensor_2 [ OK ] VulkanAPITest.zero_dim_tensor_2 (22 ms) [ RUN ] VulkanAPITest.local_scalar_dense [ OK ] VulkanAPITest.local_scalar_dense (10 ms) ... [ OK ] VulkanAPITest.lstm_prepack_success (2 ms) [ RUN ] VulkanAPITest.querypool_flushed_shader_log xplat/caffe2/aten/src/ATen/test/vulkan_api_test.cpp:7484: Skipped QueryPool is not available [ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log (0 ms) [----------] 385 tests from VulkanAPITest (46915 ms total) [----------] Global test environment tear-down [==========] 385 tests from 1 test suite ran. (46915 ms total) [ PASSED ] 382 tests. [ SKIPPED ] 1 test, listed below: [ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log [ FAILED ] 2 tests, listed below: [ FAILED ] VulkanAPITest.conv2d_pw_prepack [ FAILED ] VulkanAPITest.conv2d_pw_prepack_bc 2 FAILED TESTS YOU HAVE 7 DISABLED TESTS ``` ## M1 MAC P859975219 ``` buck run //xplat/caffe2:pt_vulkan_api_test_binAppleMac\#macosx-arm64 --target-platforms ovr_config//platform/macos:arm64-fbsource -- --gtest_filter="*" Using additional configuration options from .buckconfig.local Building: finished in 0.2 sec (100%) 269/2875 jobs, 0/2875 updated Total time: 0.2 sec BUILD SUCCEEDED Running main() from third-party/googletest/1.11.0/googletest/googletest/src/gtest_main.cc [==========] Running 384 tests from 1 test suite. [----------] Global test environment set-up. [----------] 384 tests from VulkanAPITest [ RUN ] VulkanAPITest.zero_size_tensor [ OK ] VulkanAPITest.zero_size_tensor (40 ms) [ RUN ] VulkanAPITest.zero_dim_tensor_1 [ OK ] VulkanAPITest.zero_dim_tensor_1 (7 ms) [ RUN ] VulkanAPITest.zero_dim_tensor_2 [ OK ] VulkanAPITest.zero_dim_tensor_2 (1 ms) [ RUN ] VulkanAPITest.local_scalar_dense [ OK ] VulkanAPITest.local_scalar_dense (0 ms) [ RUN ] VulkanAPITest.copy_to_texture [ OK ] VulkanAPITest.copy_to_texture (45 ms) ... [ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log (0 ms) [----------] 384 tests from VulkanAPITest (5127 ms total) [----------] Global test environment tear-down [==========] 384 tests from 1 test suite ran. 
(5127 ms total) [ PASSED ] 382 tests. [ SKIPPED ] 1 test, listed below: [ SKIPPED ] VulkanAPITest.querypool_flushed_shader_log [ FAILED ] 1 test, listed below: [ FAILED ] VulkanAPITest.normal_large 1 FAILED TEST YOU HAVE 5 DISABLED TESTS ``` Differential Revision: D50347338
2
59
111,679
[Release/2.1.1] [Test][ShardedTensor] Add test for corner case for chunk sharding spec #109626
topic: not user facing
Cherry pick https://github.com/pytorch/pytorch/pull/109626 into release/2.1.1 This adds a test case to cover the corner case of empty shards when creating ShardedTensor. Original fix contributed by a user. https://github.com/pytorch/pytorch/pull/108915 Cherry-pick PR for the fix above: https://github.com/pytorch/pytorch/pull/108915
1
60
111,678
AOT Inductor Does not Work with minifier
ciflow/inductor
### 🐛 Describe the bug Because AOT Inductor attaches parameters to the GraphModule, it does not currently work with the minifier. > File "/opt/dlami/nvme/eellison/work/pytorch/torch/_dynamo/repro/after_aot.py", line 444, in repro_common assert not any(mod.named_parameters()) ### Versions master
0
61
111,677
[2.1.1] Update NCCL to 2.18.6 for upstream bugfix
open source, topic: not user facing
This updates NCCL in PyTorch 2.1 with one tiny bugfix from this commit: https://github.com/NVIDIA/nccl/commit/4365458757e4107ecbf629b2fd6e0e19a5d237c2 It's a minor bugfix release; otherwise everything is exactly the same as the release currently in PyTorch. We already updated to 2.19 upstream.
1
62
111,676
[export] self.buffer += 1 raises error
triaged, module: export
``` import torch class Mod(torch.nn.Module): def __init__(self): super().__init__() self.register_buffer("foo", torch.ones(2, 3)) def forward(self, x: torch.Tensor) -> torch.Tensor: self.foo += x return self.foo torch.export(Mod(), (torch.ones(2, 3),)) ``` produces ``` Mutating module attribute foo during export. from user code: File "/tmp/ipykernel_578241/3307013751.py", line 9, in forward self.foo += x Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information ``` Changing `self.foo += x` to the equivalent `self.foo.add_(x)` works as expected. The motivation behind disallowing attribute mutation makes sense. However, buffers should be mutable, and dynamo should be smart enough to recognize that `+=` desugars to `add_` when done on a tensor. cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
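For reference, a minimal sketch of the working variant mentioned above, spelling the mutation as the in-place op (the export call mirrors the snippet in this report; in newer releases it may be `torch.export.export`):

```python
import torch

class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("foo", torch.ones(2, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.foo.add_(x)  # equivalent to `self.foo += x`, but exports without the mutation error
        return self.foo

torch.export(Mod(), (torch.ones(2, 3),))
```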
0
63
111,674
Dynamo Compile samples should record file/line that raised exception
null
### 🐛 Describe the bug @voznesenskym and I were looking at https://fburl.com/scuba/dynamo_compile/7pzz3bi1 and we noticed that "Fail reason" doesn't include the file/line that raised the exception, which would be useful. cc @yanboliang ### Versions main
0
64
111,673
[quant][bc-breaking] Remove deprecated QConfigDynamic
release notes: quantization
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111673 Summary: QConfigDynamic was deprecated in PyTorch 1.12. It has continued to cause confusion to users who wish to use dynamic quantization. This commit removes this deprecated API and requires users to use QConfig instead. BC-breaking before: ``` qconfig = QConfigDynamic( activation=default_dynamic_quant_observer, weight=default_weight_observer, ) ``` BC-breaking after: ``` qconfig = QConfig( activation=default_dynamic_quant_observer, weight=default_weight_observer, ) ``` Test Plan: python test/test_quantization.py Reviewers: jerryzh168 Subscribers: jerryzh168, supriyar
3
65
111,669
Buffer overflow not prevented on MPS devices
null
### 🐛 Describe the bug When indexing using an indexing tensor (or list), it is possible to read or write outside the valid range of the tensor. Minimal example: ``` import torch x = torch.arange(4, device=torch.device("mps")) y = x[:2] y[torch.tensor([3])] = -1 x[3] ``` This code should raise an IndexError and leave x unchanged, but it instead gives -1. In this example, the overflow reaches a known memory location, but perhaps in general it can reach arbitrary memory on the GPU. ### Versions Collecting environment information... PyTorch version: 2.1.0 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.5.2 (arm64) GCC version: Could not collect Clang version: 15.0.0 (clang-1500.0.40.1) CMake version: version 3.27.7 Libc version: N/A Python version: 3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ] (64-bit runtime) Python platform: macOS-13.5.2-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M2 Pro Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.1 [pip3] torch==2.1.0 [conda] numpy 1.26.1 pypi_0 pypi [conda] torch 2.1.0 pypi_0 pypi
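For contrast, a minimal sketch of the expected behavior described above, run on CPU, where the out-of-range index is rejected and `x` stays unchanged (assuming the usual CPU bounds check raises `IndexError`):

```python
import torch

x = torch.arange(4)   # CPU tensor
y = x[:2]
try:
    y[torch.tensor([3])] = -1
except IndexError as e:
    print("rejected as expected:", e)
print(x)  # still tensor([0, 1, 2, 3])
```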
0
66
111,667
[Release/2.1] Introduce is_big_gpu condition for test_max_autotune
open source, topic: not user facing, module: inductor
Fixes https://github.com/pytorch/pytorch/issues/111527 Other test files that rely on max_autotune mode being enabled already conditionalise the UT suite on this condition (e.g. test_select_algorithm). Proposing to add this condition for test_max_autotune. Currently we are observing failures on these UTs on the ROCm runners, but using MI200+ these tests will pass again. Context: https://github.com/pytorch/pytorch/pull/111381#issuecomment-1768048732 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
1
67
111,666
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::binary_cross_entropy' to ONNX opset version 14 is not supported.
module: onnx
### 🚀 The feature, motivation and pitch Unable to export an ONNX model from https://github.com/xue-pai/FuxiCTR/tree/main/model_zoo/AFM. While exporting to ONNX, it throws `torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::binary_cross_entropy' to ONNX opset version 14 is not supported.` ### Alternatives _No response_ ### Additional context _No response_
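A commonly suggested workaround (a sketch only, not verified against this particular model) is to compute the loss from logits with `binary_cross_entropy_with_logits`, which does have an ONNX symbolic, instead of `sigmoid` followed by `binary_cross_entropy`:

```python
import torch
import torch.nn.functional as F

# Assumption: the model currently ends with sigmoid + F.binary_cross_entropy.
# Replacing that pair with F.binary_cross_entropy_with_logits is numerically
# equivalent and avoids the unsupported aten::binary_cross_entropy op at export time.
logits = torch.randn(4, 1)
target = torch.rand(4, 1)

loss_a = F.binary_cross_entropy(torch.sigmoid(logits), target)
loss_b = F.binary_cross_entropy_with_logits(logits, target)
print(torch.allclose(loss_a, loss_b, atol=1e-6))  # True, up to floating point
```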
1
68
111,665
[dynamo] Fix guard for ndarray calling `torch.as_tensor(None)`
open source, topic: not user facing, module: dynamo, ciflow/inductor
Fixes https://github.com/pytorch/pytorch/issues/111662 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @lezcano
1
69
111,663
[dynamo] Tracking: object identity
null
### 🚀 The feature, motivation and pitch This covers many things: 1. Tensor Identity 2. User objects identity - usual objects, enums, builtins?? Use cases - [ ] https://github.com/pytorch/pytorch/issues/111550 - [ ] https://github.com/pytorch/pytorch/issues/111556 Tensor Aliasing Methods and Obstacles - [x] https://github.com/pytorch/pytorch/issues/111585 - [x] https://github.com/pytorch/pytorch/issues/111649 - [ ] https://github.com/pytorch/pytorch/issues/111544 Overall Obstacles and Discussion - [x] https://github.com/pytorch/pytorch/issues/111542 - [ ] https://github.com/pytorch/pytorch/issues/111562 - not sure if we want to implement non-aliasing for general objects
0
70
111,662
torch.dynamo (caching?) issues with `Optional[np.ndarray]` arguments
module: numpy, module: dynamo
### 🐛 Describe the bug ``` $ cat nonz.py import torch import numpy as np def fn(x=None): if x is None: x = np.ones(3) return x**2 opt_fn = torch.compile(fn) x = np.zeros((2, 2)) print(opt_fn(x)) print(opt_fn()) ``` fails with ``` $ python nonz.py [[0. 0.] [0. 0.]] ERROR RUNNING GUARDS fn nonz.py:9 lambda L, **___kwargs_ignored: ___guarded_code.valid and ___check_global_state() and hasattr(__as_tensor(L['x']), '_dynamo_dynamic_indices') == False and utils_device.CURRENT_DEVICE == None and ___skip_backend_check() or ___current_backend() == ___lookup_backend(139895339872512) and ___check_tensors(__as_tensor(L['x']), tensor_check_names=tensor_check_names) Traceback (most recent call last): File "nonz.py", line 20, in <module> print(opt_fn()) File "/home/ev-br/repos/pytorch/torch/_dynamo/eval_frame.py", line 410, in _fn return fn(*args, **kwargs) File "<string>", line 7, in guard RuntimeError: Could not infer dtype of NoneType ``` Curiously, exchanging the order of calls, i.e. making it ``` print(opt_fn()) print(opt_fn(x)) ``` works fine and produces the correct result. Removing numpy from the equation (changing all `np.` to `torch.`) also works fine and produces the correct result. ### Versions main cc @mruberry @rgommers @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
71
111,661
Higher-level custom op API, V3
null
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111661 * #111660 * #111310 * #111659 * #111380 This PR introduces: - a FunctionalBaseOp class. - To define a new custom op, a user subclasses FunctionalBaseOp, adds their device-specific implementations (by adding static methods to their subclass), and adds an abstract impl by overriding the `abstract` staticmethod. - Under the hood, we take the class and all its methods and call the corresponding torch.library.{define, impl} APIs to register it. The class-based approach more closely resembles PyTorch C++ codegen; we get all of the information about the operator up front in the class and we can do things with this information, like add an "autograd not implemented" kernel if the user didn't specify an autograd kernel (NB: this functionality is not in this PR). Please see the docstrings for example usages.
1
72
111,660
torch.library: Create helper function `is_functional_schema`
null
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #111661 * __->__ #111660 * #111310 * #111659 * #111380 I will need this again soon. Test Plan: - existing tests
1
73
111,659
Change torch.library.impl to accept a device string
null
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #111661 * #111660 * #111310 * __->__ #111659 * #111380 torch.library.impl now accepts a device string (e.g. "cpu", "cuda"). It still accepts DispatchKey strings, but we no longer document this, because using arbitrary DispatchKeys is more for the power users. We map the device string to a DispatchKey and then register the impl for said DispatchKey. A user may also specify multiple device strings at once or specify "types=default" to get a CompositeExplicitAutograd registration. Test Plan: - new tests
1
74
111,657
[aotinductor] Update test utility to use AOTIModelRunner
module: inductor, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111657 Summary: Use AOTIModelRunner provided by libtorch instead of the custom-written RAIIModelContainer for testing. This change also makes running AOTInductor benchmarks on CPU possible. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
1
75
111,656
WIP Adding 512 to xblock size config
open source, module: inductor, ciflow/inductor
Trying to see perf improvements by adding 512 to the xblock sizes: inductor-A100-perf-nightly: - https://github.com/pytorch/pytorch/actions/runs/6589467661 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
2
76
111,654
Static Linking C++, Op not available at runtime
null
### 🐛 Describe the bug When linking with static libtorch and torchvision libraries, I am able to build, but at runtime, I get an error about an `Unknown builtin op: aten::mul`. I have found references indicating that including <torchvision/vision.h> should cause the operators to be registered so they are linked in, but that doesn't seem to do the trick. I've also found references indicating that forcing the linker to link the "whole archive" for libtorch_cpu.a should force it to include all the operators in the linked executable. I have done this, and it does overcome the problem - however, this feels a bit like a workaround, and we aren't able to use that as a long-term solution. When I link in the whole archive, the executable jumps from 87MB to 339MB. I've also found some references suggesting calling `c10::RegisterOps`, or `torch::RegisterOps`, neither of which seem to exist. I found both `c10::RegisterOperators` and `torch::RegisterOperators`, but calling them doesn't seem to have any effect - admittedly, I might be using them incorrectly, all I did was add a call to `torch::RegisterOperators();` which didn't cause any build errors, but did not overcome the runtime "Unknown builtin op: aten::mul" error. I tried to make a minimal example: ```c++ // According to: https://github.com/pytorch/vision/#c-api // and https://github.com/pytorch/vision/issues/2915 // In order to get the torchvision operators registered with // torch (eg. for the JIT), all you need to do is to ensure // that you #include <torchvision/vision.h> in your project. #include <vision.h> #include <ATen/core/ivalue.h> #include <fstream> #include <torch/script.h> #include <torch/torch.h> #include <vector> using namespace std; int main(int argc, char* argv[]) { torch::NoGradGuard noGradGuard; // Load a trained model that has been converted to torchscript ifstream modelFile("torchscriptModel.pt"); torch::jit::script::Module model; model = torch::jit::load(modelFile); modelFile.close(); // Set model to eval mode model.eval(); // Generate a random inference image float* imgPix = new float[100*100]; // Normally set image pixels here, left uninitialized for minimal example // Convert image pixels to format required by forward at::Tensor imgTensor = torch::from_blob(imgPix, {100, 100, 1}); at::Tensor imgTensorPermuted = imgTensor.permute({2, 0, 1}); imgTensorPermuted.unsqueeze_(0); vector< at::Tensor > imageTensorVec; imageTensorVec.push_back(imgTensorPermuted); vector< torch::jit::IValue > inputToModel; inputToModel.push_back(torch::cat(imageTensorVec)); at::Tensor forwardResult = model.forward(inputToModel).toTensor(); delete [] imgPix; return 0; } ``` To build this, I use the following command: ``` g++ minimalExample.cpp \ -D_GLIBCXX_USE_CXX11_ABI=1 \ -I /usr/src/vision/torchvision/csrc/ \ -I /usr/src/pytorch/build/lib.linux-x86_64-3.8/torch/include/torch/csrc/api/include/ \ -I /usr/src/pytorch/build/lib.linux-x86_64-3.8/torch/include/ \ -Wl,--start-group \ /usr/src/vision/build/libtorchvision.a \ /usr/src/pytorch/build/lib/libc10.a \ /usr/src/pytorch/build/lib/libtorch_cpu.a \ -Wl,--end-group \ /usr/src/pytorch/build/lib/libprotobuf.a \ /usr/src/pytorch/build/lib/libfbgemm.a \ /usr/src/pytorch/build/sleef/lib/libsleef.a \ /usr/src/pytorch/build/lib/libasmjit.a \ /usr/src/pytorch/build/lib/libonnx.a \ /usr/src/pytorch/build/lib/libonnx_proto.a \ /usr/src/pytorch/build/lib/libcpuinfo.a \ /usr/src/pytorch/build/lib/libclog.a \ /usr/src/pytorch/build/lib/libkineto.a \ /usr/src/pytorch/build/lib/libnnpack.a \ 
/usr/src/pytorch/build/lib/libpytorch_qnnpack.a \ /usr/src/pytorch/build/lib/libXNNPACK.a \ /usr/src/pytorch/build/lib/libpthreadpool.a \ -Wl,--start-group \ /opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_tbb_thread.a \ /opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_core.a \ /opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_blacs_openmpi_lp64.a \ /opt/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_intel_lp64.a \ /usr/src/onetbb_installed/lib64/libtbb.a \ -Wl,--end-group \ /usr/local/lib/libompitrace.a \ /usr/local/lib/libmpi.a \ /usr/local/lib/libopen-rte.a \ /usr/local/lib/libopen-pal.a \ /usr/local/lib/libz.a \ /usr/lib64/libc_nonshared.a \ -lrt \ -ldl \ -fopenmp \ -pthread \ -o minimalExample.exe ``` As I said, this will build successfully, but it does give a warning when building: ``` /usr/src/vision/torchvision/csrc/vision.h:10:40: warning: ‘_register_ops’ initialized and declared ‘extern’ extern "C" VISION_INLINE_VARIABLE auto _register_ops = &cuda_version; ^~~~~~~~~~~~~ ``` When I run the executable, though, I get the following error: ``` $ ./minimalExample.exe terminate called after throwing an instance of 'torch::jit::ErrorReport' what(): Unknown builtin op: aten::mul. Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript. : File "<string>", line 3 def mul(a : float, b : Tensor) -> Tensor: return b * a ~~~~~ <--- HERE def add(a : float, b : Tensor) -> Tensor: return b + a 'mul' is being compiled since it was called from 'full_out_0_4' File "<string>", line 3 def full_out_0_4(size:List[int], fill_value:number, *, out:Tensor) -> Tensor: return torch.full(size, fill_value, out=out) ~~~ <--- HERE Abort (core dumped) ``` The minimal example runs as expected, without error, if I link the `libtorch_cpu.a` whole archive, by changing the corresponding line in the build command to: ``` -Wl,--start-group \ /usr/src/vision/build/libtorchvision.a \ /usr/src/pytorch/build/lib/libc10.a \ -Wl,--whole-archive /usr/src/pytorch/build/lib/libtorch_cpu.a -Wl,--no-whole-arhive \ -Wl,--end-group \ ``` but as I said, the size of the executable jumps way higher, and seems like overkill. I wasn't sure if this should be a forum post or an issue report, but given that I thought the include of <vision.h> was supposed to manage this, it felt more like an issue report to me. ### Versions I'm not sure this is especially valuable in this situation. The example is running on an old OS with CPU-only support. The conversion to torchscript was done on a more modern machine with python and pytorch installed, but the machine I am running on is a severely stripped-down machine without python at all. If I run the minimalExample.exe on the modern machine, it performs the same way though (i.e. errors at runtime without the whole-archive stuff, but runs successfully with the whole-archive stuff). So, here's the env for that machine in case its helpful: ``` Collecting environment information... 
PyTorch version: 1.13.0a0+git7c98e70 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.4.0-1024-fips-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA RTX 6000 Ada Generation GPU 1: NVIDIA RTX 6000 Ada Generation GPU 2: NVIDIA RTX 6000 Ada Generation Nvidia driver version: 535.98 cuDNN version: Probably one of the following: /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 72 On-line CPU(s) list: 0-71 Thread(s) per core: 2 Core(s) per socket: 18 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Gold 6354 CPU @ 3.00GHz Stepping: 6 Frequency boost: enabled CPU MHz: 804.039 CPU max MHz: 3600.0000 CPU min MHz: 800.0000 BogoMIPS: 6000.00 Virtualization: VT-x L1d cache: 1.7 MiB L1i cache: 1.1 MiB L2 cache: 45 MiB L3 cache: 78 MiB NUMA node0 CPU(s): 0-17,36-53 NUMA node1 CPU(s): 18-35,54-71 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.17.4 [pip3] torch==1.13.0a0+git7c98e70 [pip3] torchvision==0.14.0a0+5ce4506 
[conda] Could not collect ```
0
77
111,653
s390x vectorization: implement atanh for complex vectorized data
module: cpu, open source
s390x vectorization: implement atanh for complex vectorized data cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
2
78
111,651
DISABLED test_meta_outplace_fft_ifft_cpu_int64 (__main__.TestMetaCPU)
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_int64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17893852781). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_meta_outplace_fft_ifft_cpu_int64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_meta.py` cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
1
79
111,650
FSDP CPU Offload + fp16 + sharded grad scaler crash / hang
oncall: distributed, triaged, module: fsdp
### 🐛 Describe the bug I get the following when running the above combination: ``` ERROR:aiplatform.error_reporting.error_reporting:Exception Found: Could not run 'aten::_amp_foreach_non_finite_check_and_unscale_' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_amp_foreach_non_finite_check_and_unscale_' is only available for these backends: [CUDA, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher]. ``` in my case, the error seems to be reported but the job doesn't crash and just hangs, which is interesting and might be related to other collectives going on? ### Versions main cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
1
80
111,649
[dynamo] higher-order ops do not preserve `FakeTensor` for in-place ops
triaged, module: fakeTensor, module: functorch, module: dynamo
### 🐛 Describe the bug ```python def fn(z): x = z.clone() y = torch.vmap(torch.Tensor.acos_)(x) # y's fake tensor is not x's fake tensor in terms of pyobjects return y is x fn_opt = torch.compile(backend="eager", fullgraph=True, dynamic=True)(fn) z = torch.ones(4, 1) self.assertEqual(fn(z), fn_opt(z)) ``` ### Solution Not sure if this is a bug. But ideally, we should not expect x's fake tensor to be different from y's, as then it can be a method to preserve object identity throughout recursive calls to FX. One possible way to do this might be to reuse the FakeTensor from the inputs when doing fx tracing for higher order ops. However, if it is unavoidable (e.g. as a result of fx tracing requirements), another solution is simply to propagate the storage of the original fake tensor to the higher order op fake tensor. ### Versions main cc @eellison @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
1
81
111,646
torchrun: elastic training not restarted on missing keep-alive heartbeat/scale-down event
null
### 🐛 Describe the bug When running elastic training with C10d backend and multiple nodes, the workers need to be restarted in case of a down-scale event. If this does not happen, as right now, the remaining workers get stuck in NCCL operations and wait by default 30 minutes until they finish with a timeout error. *Expected Behaviour* If a worker misses its heartbeat/leaves the rendevous, a new rendevous should happen. *Minimal Example* There are two scripts for the master worker and a faulty worker **master** ```python import torch.distributed.run import logging logging.getLogger("torch.distributed.elastic.rendezvous.dynamic_rendezvous").level=10 torch.distributed.run.main(["--nnodes=1:4","--rdzv-backend=c10d", "--rdzv-endpoint=localhost:9999", "--rdzv-conf=last_call_timeout=10", "--no-python", "bash","-c",f"echo start; sleep 600; echo done"]) ``` **faulty** ```python import torch.distributed.run import logging logging.getLogger("torch.distributed.elastic.rendezvous.dynamic_rendezvous").level=10 torch.distributed.run.main(["--nnodes=1:4","--rdzv-backend=c10d", "--local-addr=faulty", "--rdzv-endpoint=localhost:9999", "--rdzv-conf=last_call_timeout=10", "--no-python", "bash","-c",f"echo start; sleep 10; kill -9 $$PPID"]) ``` The minimal example enables debug logging to highlight the problematic area. It does the following: **master** runs normally, while **faulty** simulates a forceful termination after 10 seconds. Both ranks print `start` when a new generation is started. Run first master and then faulty script. The output ``` [2023-10-20 11:55:30,978] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified. [2023-10-20 11:55:30,983] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [INFO] The node 'georg-AERO-15-YC_945987_0' attempts to join the next round of the rendezvous 'none'. [2023-10-20 11:55:31,071] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' added itself to the participants of round 0 of the rendezvous 'none'. Pending sync. [2023-10-20 11:55:31,077] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:55:31,080] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:55:36,088] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:55:36,093] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:55:41,102] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' marked round 0 of the rendezvous 'none' as complete. Pending sync. [2023-10-20 11:55:41,107] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:55:41,110] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [INFO] The node 'georg-AERO-15-YC_945987_0' has joined round 0 of the rendezvous 'none' as rank 0 in a world of size 2. 
start [2023-10-20 11:55:46,112] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:55:46,115] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:55:46,117] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has sent a keep-alive heartbeat to the rendezvous 'none'. [2023-10-20 11:55:51,121] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:55:51,124] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:55:51,126] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has sent a keep-alive heartbeat to the rendezvous 'none'. [2023-10-20 11:55:56,129] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:55:56,132] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:55:56,134] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has sent a keep-alive heartbeat to the rendezvous 'none'. [2023-10-20 11:56:01,138] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:56:01,140] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:56:01,141] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has sent a keep-alive heartbeat to the rendezvous 'none'. [2023-10-20 11:56:06,144] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:56:06,147] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. [2023-10-20 11:56:06,150] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has sent a keep-alive heartbeat to the rendezvous 'none'. [2023-10-20 11:56:11,153] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' updated its keep-alive heartbeat time for the rendezvous 'none'. Pending sync. [2023-10-20 11:56:11,156] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] As part of the sync operation the node(s) 'faulty_945990_0' have been removed from the rendezvous 'none' since they had no heartbeat. 
[2023-10-20 11:56:11,156] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [DEBUG] The node 'georg-AERO-15-YC_945987_0' has successfully synced its local changes with other nodes in the rendezvous 'none'. ``` Expected would now be that after the missing heartbeat and removal of faulty the rendezvous process should be started again, as is also documented in [rdzv_doc](https://github.com/pytorch/pytorch/blob/619ae87a1d1ae086f59a64d3b71dbfe4af8b804a/docs/source/elastic/etcd_rdzv_diagram.png) also the [__init__.py](https://github.com/pytorch/pytorch/blob/619ae87a1d1ae086f59a64d3b71dbfe4af8b804a/torch/distributed/elastic/rendezvous/__init__.py#L71) mention a restart in such a case. ### Versions ``` Collecting environment information... PyTorch version: 2.1.0 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.35 Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU Nvidia driver version: 535.104.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i9-10980HK CPU @ 2.40GHz CPU family: 6 Model: 165 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 2 CPU max MHz: 5300,0000 CPU min MHz: 800,0000 BogoMIPS: 6199.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 256 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 2 MiB (8 instances) L3 cache: 16 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-15 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW 
sequence Vulnerability Srbds: Mitigation; Microcode Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy==1.6.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.0 [pip3] optree==0.9.1 [pip3] torch==2.1.0 [pip3] torchaudio==2.1.0 [pip3] torchvision==0.16.0 [pip3] triton==2.1.0 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch [conda] mkl 2023.1.0 h213fc3f_46343 [conda] mkl-include 2023.1.0 h06a4308_46343 [conda] mkl-service 2.4.0 py311h5eee18b_1 [conda] mkl_fft 1.3.8 py311h5eee18b_0 [conda] mkl_random 1.2.4 py311hdb19cb5_0 [conda] numpy 1.26.0 py311h08b1b3b_0 [conda] numpy-base 1.26.0 py311hf175353_0 [conda] optree 0.9.1 pypi_0 pypi [conda] pytorch 2.1.0 py3.11_cuda11.8_cudnn8.7.0_0 pytorch [conda] pytorch-cuda 11.8 h7e8668a_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.1.0 py311_cu118 pytorch [conda] torchtriton 2.1.0 py311 pytorch [conda] torchvision 0.16.0 py311_cu118 pytorch ```
0
82
111,645
ValueError: Using a target size (torch.Size([491])) that is different to the input size (torch.Size([1, 491])) is deprecated. Please ensure they have the same size.
null
### 🐛 Describe the bug Hi, I have a script that i am trying to run, but it gives the following error - ``` `` batch_size_16,learning_rate_0.001,epoch_times_45 selected_9606_protein_scores.csv selected_9606_protein_scores.csv Traceback (most recent call last): File "/home/bvsbic/Downloads/MLtest/Validation1.py", line 1084, in <module> Terms = ['BP', 'MF', 'CC'] File "/home/bvsbic/Downloads/MLtest/Validation1.py", line 1065, in validation test_set = benchmark[test_index].tolist() File "/home/bvsbic/Downloads/MLtest/Validation1.py", line 903, in Main domain_train_out, domain_test_out, domain_t = Domain_train(0.001, 32, train_benchmark, test_benchmark, 45, File "/home/bvsbic/Downloads/MLtest/Validation1.py", line 639, in Domain_train loss = loss_function(out, GO_annotiations) File "/home/bvsbic/Downloads/MLtest/okenv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/bvsbic/Downloads/MLtest/okenv/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 530, in forward return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) File "/home/bvsbic/Downloads/MLtest/okenv/lib/python3.9/site-packages/torch/nn/functional.py", line 2518, in binary_cross_entropy raise ValueError("Using a target size ({}) that is different to the input size ({}) is deprecated. " ***ValueError: Using a target size (torch.Size([491])) that is different to the input size (torch.Size([1, 491])) is deprecated. Please ensure they have the same size.*** ``` ``` Actually, I am new here, and I don't know how to solve it; plz help me. Here is whole code given below- ### Versions import os # Disable HIP and set CUDA device os.environ["CUDA_VISIBLE_DEVICES"] = "0" # Use NVIDIA GPU 0 os.environ["ROCBLAS_LAYER"] = "0" # Disable ROCm/HIP os.environ["ROCFFT_LAYER"] = "0" # Disable ROCm/HIP import torch import my_Utils # Set GPU memory fraction (adjust the fraction as needed) #torch.cuda.set_per_process_memory_fraction(0.5) torch.backends.cuda.max_split_size_mb = 256 # Adjust this value as needed from torch.utils.data import Dataset from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np from sklearn import metrics from sklearn.metrics import roc_auc_score, roc_curve, auc, precision_score, recall_score, f1_score, average_precision_score from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from torch.nn import BCEWithLogitsLoss import os import time import random # seqSet = 'seqSet.csv' # domainSet = 'domainSet.csv' # Benchmark_list = open('data.human.benchmark.list', 'r') torch.manual_seed(100) os.environ["CUDA_VISIBLE_DEVICES"] = "0" # You can adjust this value as needed #os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "0" time_start=time.time() # GO_IDs = [] CFG = { 'cfg00': [16, 'M', 16, 'M'], 'cfg01': [16, 'M', 32, 'M'], 'cfg02': [32, 'M'], 'cfg03': [64, 'M'], 'cfg04': [16, 'M', 16, 'M',32, 'M'], 'cfg05': [64, 'M', 32, 'M',16, 'M'], 'cfg06': [64, 'M', 32, 'M',32, 'M'], 'cfg07': [128, 'M', 64, 'M2'], 'cfg08': [512, 'M', 128, 'M2',32, 'M2'], } OUT_nodes = { 'BP': 491, 'MF': 321, 'CC': 240, } Thresholds = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.38, 0.39, 0.4, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.5, 0.51, 0.52, 
0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8, 0.81, 0.82, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, 0.99] file_path = '/home/bvsbic/Downloads/MLtest/protVec_dict.npy' ProtVec = np.load(file_path, allow_pickle=True).item() #ProtVec = np.load('protVec_dict.npy', allow_pickle=True).item() # ProtDict = np.load('data/prot_kmerWord_dict.npy').item() Seqfile_name = 'seqSet.csv' # Domainfile_name = 'data/domainSet.csv' Domainfile_name = 'NewdomainSet.csv' GOfile_name = 'humanProteinGO.csv' class Dataload(Dataset): def __init__(self, benchmark_list, seqfile_name, domainfile_name, GOfile_name, func='MF', transform=None): self.benchmark_list = benchmark_list self.sequeces = {} self.max_seq_len = 1500 # 序列长度小于5000的序列中的最大值wei 4981 大于1000的有1827条 self.doamins = {} self.max_domains_len = 357 # 蛋白质所包含的domain数量的第二大值(最大值为1242,舍去该蛋白质) self.ppiVecs = {} self.GO_annotiations = {} # self.max_GOnums_len = 0 #含有GO标注最多的蛋白质的GO数量 with open(seqfile_name, 'r') as f: #seqfile_name = 'seqSet.csv' for line in f: items = line.strip().split(',') prot, seq = items[0], items[1] self.sequeces[prot] = seq self.protDict = ProtVec with open(domainfile_name, 'r') as f: #domainfile_name = 'domainSet.csv' for line in f: items = line.strip().split(',') prot, domains = items[0], items[1:] domains = [int(x) for x in domains] self.doamins[prot] = domains # ppi_file = 'PPI_data/selected_uniprot_protein_scores.csv' # ppi_file = 'PPI_data/selected_uniprot_protein_links.csv' ppi_file = 'selected_9606_protein_scores.csv' print(ppi_file) with open(ppi_file, 'r') as f: num = 1 for line in f: if num == 1: num = 2 continue items = line.strip().split(',') prot, vector =items[0], items[1:] self.ppiVecs[prot] = vector with open(GOfile_name, 'r') as f: #GOfile_name = 'humanProteinGO.csv' num = 1 for line in f: if num == 1: num = 2 # items = line.strip().split(',') # GO_IDs = items continue items = line.strip().split(',') if func == 'BP': prot, GO_annotiation = items[0], items[1:492] elif func == 'MF': prot, GO_annotiation = items[0], items[492:813] elif func == 'CC': prot, GO_annotiation = items[0], items[813:] # prot, GO_annotiation = items[0], items[1:] self.GO_annotiations[prot] = GO_annotiation def __getitem__(self, idx): iprID = self.benchmark_list[idx] # # 获取seq的输入向量 # seq = self.sequeces[iprID] # if len(seq) >= self.max_seq_len: # seq = seq[0:self.max_seq_len] # seqSentence = my_Utils.mer_k_Sentence(seq, self.protDict, 3) # seqSentence = np.array(seqSentence, dtype=int) # seqSentence = np.pad(seqSentence, (0, self.max_seq_len - len(seqSentence)), 'constant', constant_values=0) # seqSentence = torch.from_numpy(seqSentence).type(torch.LongTensor).cuda() # 获取seq的输入矩阵 seq = self.sequeces[iprID] if len(seq) > self.max_seq_len: seq = seq[0:self.max_seq_len] seqMatrix = my_Utils.mer_k(seq, self.protDict, 3) seqMatrix = np.array(seqMatrix, dtype=float) if (seqMatrix.shape[0]) < self.max_seq_len: seqMatrix = np.pad(seqMatrix, ((0, self.max_seq_len - (seqMatrix.shape[0])), (0, 0)), 'constant', constant_values=0) seqMatrix = seqMatrix.T seqMatrix = torch.from_numpy(seqMatrix).type(torch.FloatTensor).cuda() #获取domain的输入向量 domain_s = self.doamins[iprID] if len(domain_s) >= self.max_domains_len: domain_s = np.array(domain_s[0:self.max_domains_len], dtype=int) # if len(domain_s) < self.max_domains_len: else: domain_s = np.array(domain_s, dtype=int) domain_s = np.pad(domain_s, ((0, 
self.max_domains_len-len(domain_s))), 'constant', constant_values=0) domainSentence = torch.from_numpy(domain_s).type(torch.LongTensor).cuda() # 获取PPI的输入向量 if iprID not in self.ppiVecs: ppiVect = np.zeros((18901), dtype=float).tolist() else: ppiVect = self.ppiVecs[iprID] ppiVect = [float(x) for x in ppiVect] ppiVect = torch.Tensor(ppiVect).cuda() ppiVect = ppiVect.type(torch.FloatTensor) #获取蛋白质的GO标注向量 GO_annotiations = self.GO_annotiations[iprID] GO_annotiations = [int(x) for x in GO_annotiations] GO_annotiations = torch.Tensor(GO_annotiations).cuda() # GO_annotiations = GO_annotiations.type(torch.LongTensor) GO_annotiations = GO_annotiations.type(torch.FloatTensor) #GO_annotiations = one_hot_encode_target(GO_annotiations) return seqMatrix, domainSentence, ppiVect, GO_annotiations def __len__(self): return len(self.benchmark_list) #返回蛋白质的数量 class weight_Dataload(Dataset): def __init__(self, benchmark_list, seqdict, domaindict, ppidict, GOfile_name, func = 'MF', transform=None): self.benchmark = benchmark_list self.weghtdict = {} self.GO_annotiations = {} for i in range(len(benchmark_list)): prot = benchmark_list[i] # temp = torch.cat((seqdict[prot], domaindict[prot]), 0) # temp = torch.cat((temp, ppidict[prot]), 0) temp = [seqdict[prot], domaindict[prot], ppidict[prot]] temp = np.array(temp) self.weghtdict[benchmark_list[i]] = temp.flatten().tolist() assert len(seqdict[prot]) == len(domaindict[prot]) == len(ppidict[prot]) == OUT_nodes[func] with open(GOfile_name, 'r') as f: #GOfile_name = 'data/humanProteinGO.csv' num = 1 for line in f: if num == 1: num = 2 # items = line.strip().split(',') # GO_IDs = items continue items = line.strip().split(',') if func == 'BP': prot, GO_annotiation = items[0], items[1:492] elif func == 'MF': prot, GO_annotiation = items[0], items[492:813] elif func == 'CC': prot, GO_annotiation = items[0], items[813:] # prot, GO_annotiation = items[0], items[1:] self.GO_annotiations[prot] = GO_annotiation def __getitem__(self, idx): prot = self.benchmark[idx] #获取weight_classifier的输入向量 weight_features = self.weghtdict[prot] weight_features = [float(x) for x in weight_features] weight_features = torch.Tensor(weight_features).cuda() weight_features = weight_features.type(torch.FloatTensor) # 获取蛋白质的GO标注向量 GO_annotiations = self.GO_annotiations[prot] GO_annotiations = [int(x) for x in GO_annotiations] GO_annotiations = torch.Tensor(GO_annotiations).cuda() # GO_annotiations = GO_annotiations.type(torch.LongTensor) GO_annotiations = GO_annotiations.type(torch.FloatTensor) return weight_features, GO_annotiations def __len__(self): return len(self.benchmark) class Seq_Module(nn.Module): def __init__(self, func): super(Seq_Module, self).__init__() # self.seq_emblayer = nn.Embedding(8001, 128, padding_idx=0) self.seq_CNN = self.SeqConv1d(CFG['cfg05']).cuda() self.seq_FClayer = nn.Linear(3008, 1024).cuda() #self.seq_outlayer = nn.Linear(1024, 491).cuda() self.seq_outlayer = nn.Linear(1024, OUT_nodes[func]).cuda() #self.seq_outlayer = nn.Linear(1024, OUT_nodes[func]).cuda() # Adjust num_classes as needed #self.seq_outlayer = nn.Linear(1024, num_classes) def forward(self, seqMatrix): # seqMatrix = self.seq_emblayer(seqSentence) seq_out = self.seq_CNN(seqMatrix) seq_out = seq_out.view(seq_out.size(0), -1) # 展平多维的卷积图 # print(seq_out) seq_out = F.dropout(self.seq_FClayer(seq_out), p=0.3, training=self.training) seq_out = F.relu(seq_out) #seq_out = self.seq_outlayer(seq_out) seq_out = F.sigmoid(self.seq_outlayer(seq_out)) #seq_out = F.sigmoid(seq_out) return seq_out # sequence的1D_CNN模型 
def SeqConv1d(self, cfg): layers = [] in_channels = 100 for x in cfg: if x == 'M': layers += [nn.MaxPool1d(kernel_size=2)] elif x == 'M2': layers += [nn.MaxPool1d(kernel_size=2, stride=1)] else: layers += [nn.Conv1d(in_channels, x, kernel_size=16, stride=1, padding=8), nn.ReLU(inplace=True)] in_channels = x return nn.Sequential(*layers) class Domain_Module(nn.Module): def __init__(self, func): super(Domain_Module, self).__init__() self.dom_emblayer = nn.Embedding(14243, 128, padding_idx=0).cuda() self.dom_CNN = self.DomainConv1d(CFG['cfg07']).cuda() self.dom_FClayer = nn.Linear(1088, 512).cuda() self.dom_outlayer = nn.Linear(512, OUT_nodes[func]).cuda() #self.dom_outlayer = nn.Linear(512, OUT_nodes[func]).cuda() def forward(self, domainSentence): #seq 4981*100 ,domain domain_matrix = self.dom_emblayer(domainSentence) domain_out = self.dom_CNN(domain_matrix) domain_out = domain_out.view(domain_out.size(0), -1) # 展平多维的卷积图 # print(domain_out) domain_out = F.dropout(self.dom_FClayer(domain_out), p=0.3, training=self.training) domain_out = F.relu(domain_out) domain_out = self.dom_outlayer(domain_out) domain_out = F.sigmoid(domain_out) return domain_out # domain的1D_CNN模型 def DomainConv1d(self, cfg): layers = [] # in_channels = 128 in_channels = 357 for x in cfg: if x == 'M': layers += [nn.MaxPool1d(kernel_size=2, stride=2)] elif x == 'M2': layers += [nn.MaxPool1d(kernel_size=2, stride=1)] else: layers += [nn.Conv1d(in_channels, x, kernel_size=2, stride=2, padding=2), nn.ReLU(inplace=True)] in_channels = x return nn.Sequential(*layers) class PPI_Module(nn.Module): def __init__(self, func): super(PPI_Module, self).__init__() self.ppi_inputlayer = nn.Linear(18901, 4096).cuda() self.ppi_hiddenlayer = nn.Linear(4096, 1024).cuda() self.ppi_outlayer = nn.Linear(1024, OUT_nodes[func]).cuda() #self.ppi_outlayer = nn.Linear(1024, OUT_nodes[func]).cuda() def forward(self, ppiVec): ppi_out = F.dropout(self.ppi_inputlayer(ppiVec), p=0.00005, training=self.training) ppi_out = F.dropout(self.ppi_hiddenlayer(ppi_out), p=0.3, training=self.training) ppi_out = self.ppi_outlayer(ppi_out) ppi_out = F.sigmoid(ppi_out) return ppi_out class Weight_classifier(nn.Module): def __init__(self, func): super(Weight_classifier, self).__init__() # self.weight_layer = nn.Linear(OUT_nodes[func]*3, OUT_nodes[func]) self.weight_layer = MaskedLinear(OUT_nodes[func]*3, OUT_nodes[func], '{}_maskmatrix.csv'.format(func)).cuda() self.outlayer= nn.Linear(OUT_nodes[func], OUT_nodes[func]) def forward(self, weight_features): weight_out = self.weight_layer(weight_features) # weight_out = F.sigmoid(weight_out) weight_out = F.relu(weight_out) weight_out = F.sigmoid(self.outlayer(weight_out)) return weight_out class MaskedLinear(nn.Linear): def __init__(self, in_features, out_features, relation_file, bias=True): super(MaskedLinear, self).__init__(in_features, out_features, bias) mask = self.readRelationFromFile(relation_file) self.register_buffer('mask', mask) self.iter = 0 def forward(self, input): masked_weight = self.weight * self.mask return F.linear(input, masked_weight, self.bias) def readRelationFromFile(self, relation_file): mask = [] with open(relation_file, 'r') as f: for line in f: l = [int(x) for x in line.strip().split(',')] for item in l: assert item == 1 or item == 0 # relation 只能为0或者1 mask.append(l) return Variable(torch.Tensor(mask)) def benchmark_set_split(term_arg='MF'): benchmark_file = '/home/bvsbic/Downloads/MLtest/{}_benchmarkSet_2.csv'.format( term_arg) print(benchmark_file) trainset, testset = [], [] all_data = [] 
with open(benchmark_file, 'r') as f: for line in f: item = line.strip() all_data.append(item) idx_list = np.arange(len(all_data)).tolist() # nums = { # 'BP': 10000, # 'MF': 10000, # 'CC': 10600, # 'test': 10 # } nums = { 'BP': 491, #10700, 'MF': 321, #10500, 'CC': 240, #10000, 'test': 10 } # random_index = random.sample(idx_list, nums[term_arg]) #11000,在0--idx_list范围内随机产生nums[term_arg}个随机数 # random_index = [] with open('{}_random_index.csv'.format(term_arg), 'r') as f: for line in f: item = line.strip().split(',') random_index.append(int(item[0])) for i in range(len(all_data)): if i in random_index: trainset.append(all_data[i]) else: testset.append(all_data[i]) assert len(trainset) + len(testset) == len(all_data) return trainset, testset def calculate_performance(actual, pred_prob, threshold=0.4, average='micro'): pred_lable = [] for l in range(len(pred_prob)): eachline = (np.array(pred_prob[l]) > threshold).astype(int) eachline = eachline.tolist() pred_lable.append(eachline) f_score = f1_score(np.array(actual), np.array(pred_lable), average=average) recall = recall_score(np.array(actual), np.array(pred_lable), average=average) precision = precision_score(np.array(actual), np.array(pred_lable), average=average) return f_score, recall, precision def cacul_aupr(lables, pred): precision, recall, _thresholds = metrics.precision_recall_curve(lables, pred) aupr = metrics.auc(recall, precision) return aupr def Seq_train(learningrate, batchsize, train_benchmark, test_benchmark, epochtime, func='MF'): print('{} seqmodel start'.format(func)) seq_model = Seq_Module(func).cuda() batch_size = 16 # You can reduce the batch size to a smaller value #batch_size = batchsize learning_rate = learningrate epoch_times = epochtime print(seq_model) print('batch_size_{},learning_rate_{},epoch_times_{}'.format(batch_size, learning_rate, epoch_times)) #loss_function = nn.BCEWithLogitsLoss() loss_function = nn.BCEWithLogitsLoss(reduction='mean') # Add reduction argument #loss_function = nn.BCELoss() optimizer = optim.Adam(seq_model.parameters(), lr=learning_rate, weight_decay = 0.00001) train_dataset = Dataload(train_benchmark, Seqfile_name, Domainfile_name, GOfile_name, func=func) train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) test_dataset = Dataload(test_benchmark, Seqfile_name, Domainfile_name, GOfile_name, func=func) test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=False) seq_model.train() best_fscore = 0 for epoch in range(epoch_times): _loss = 0 batch_num = 0 torch.cuda.empty_cache() # Add this line to release GPU memory for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(train_data_loader): seqMatrix = Variable(seqMatrix).cuda() GO_annotiations = torch.squeeze(GO_annotiations) GO_annotiations = Variable(GO_annotiations.unsqueeze(1)).cuda() #GO_annotiations = Variable(GO_annotiations).cuda() GO_annotiations = torch.squeeze(GO_annotiations, dim=0) # Remove the first dimension #GO_annotiations = GO_annotiations.unsqueeze(0) # Add a batch dimension #GO_annotiations = GO_annotiations.unsqueeze(-1) out = seq_model(seqMatrix) print("Output Shape:", out.shape) print("Target Shape:", GO_annotiations.shape) # Reshape the model's output to match the shape of GO_annotiations out = out.view(GO_annotiations.shape) optimizer.zero_grad() loss = loss_function(out, GO_annotiations) batch_num += 1 loss.backward() optimizer.step() _loss += loss.item() # Before training loop torch.cuda.empty_cache() epoch_loss = 
"{}".format(_loss / batch_num) t_loss = 0 test_batch_num = 0 pred = [] actual = [] for idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(test_data_loader): seqMatrix = Variable(seqMatrix).cuda() #GO_annotiations = Variable(GO_annotiations).cuda() GO_annotiations = Variable(GO_annotiations.unsqueeze(1)).cuda() GO_annotiations = torch.squeeze(GO_annotiations, dim=0) # Remove the first dimension #GO_annotiations = GO_annotiations.unsqueeze(0) # Add a batch dimension out = seq_model(seqMatrix) test_batch_num = test_batch_num + 1 pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) one_loss = loss_function(out, GO_annotiations) t_loss += one_loss.item() # t_loss += one_loss.data[0] test_loss = "{}".format(t_loss / test_batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) score_dict = {} each_best_fcore = 0 each_best_scores = [] for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score >= each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores if each_best_fcore >= best_fscore: best_fscore = each_best_fcore best_scores = each_best_scores best_score_dict = score_dict torch.save(seq_model, 'savedpkl/Seq1DVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)) t, f_score, recall = each_best_scores[0], each_best_scores[1], each_best_scores[2] precision, auc_score = each_best_scores[3], each_best_scores[4] print('epoch{},loss{},testloss:{},t{},f_score{}, auc{}, recall{}, precision{}'.format( epoch, epoch_loss, test_loss, t, f_score, auc_score, recall, precision)) bestthreshold, f_max, recall_max = best_scores[0], best_scores[1], best_scores[2] prec_max, bestauc_score = best_scores[3], best_scores[4] print('lr:{},batch:{},epoch{},f_max:{}\nauc{},recall_max{},prec_max{},threshold:{}'.format( learning_rate, batch_size, epoch_times, f_max, bestauc_score, recall_max, prec_max, bestthreshold)) test_Seqmodel = torch.load('savedpkl/Seq1DVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)).cuda() t_loss = 0 seq_test_outs = {} # seq_test_outs = [] pred = [] actual = [] score_dict = {} batch_num = 0 for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(test_data_loader): seqMatrix = Variable(seqMatrix).cuda() GO_annotiations = Variable(GO_annotiations.unsqueeze(1)).cuda() #GO_annotiations = Variable(GO_annotiations).cuda() GO_annotiations = torch.squeeze(GO_annotiations, dim=0) # Remove the first dimension #GO_annotiations = GO_annotiations.unsqueeze(0) # Add a batch dimension out = test_Seqmodel(seqMatrix) batch_num += 1 seq_test_outs[test_benchmark[batch_idx]] = out.data[0].cpu().tolist() pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) loss = loss_function(out, GO_annotiations) t_loss += one_loss.item() #t_loss += loss.data[0] test_loss = "{}".format(t_loss / batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) each_best_fcore = 0 for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score > each_best_fcore: each_best_fcore = f_score each_best_scores = 
[Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores bestthreshold, f_max, recall_max = each_best_scores[0], each_best_scores[1], each_best_scores[2] prec_max, bestauc_score = each_best_scores[3], each_best_scores[4] print('test_loss:{},lr:{},batch:{},epoch{},f_max:{}\nauc_score{},recall_max{},prec_max{},threshold:{}'.format( test_loss, learning_rate, batch_size, epoch_times, f_max, auc_score ,recall_max, prec_max, bestthreshold)) output_dir = 'out/weight_out/' if not os.path.exists(output_dir): os.makedirs(output_dir) with open('out/weight_out/Seqout{}_lr{}_bat{}_epo{}.csv'.format( func, learning_rate, batch_size, epoch_times), 'w') as f: f.write('lr:{},batchsize:{},epochtimes:{}\n'.format(learning_rate, batch_size, epoch_times)) f.write('f_max:{},recall_max{},prec_max{},auc_score:{}\n'.format( f_max,recall_max, prec_max, auc_score)) f.write('threshold,f_score,recall,precision, roc_auc,auc\n') for i in range(len(Thresholds)): f.write('{},'.format(str(Thresholds[i]))) f.write('{}\n'.format(','.join(str(x) for x in score_dict[Thresholds[i]]))) for key, var in seq_test_outs.items(): f.write('{},'.format(str(key))) f.write('{}\n'.format(','.join(str(x) for x in var))) #获取再最优模型下的训练集的输出 train_out_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=False) seq_train_outs = {} for batch_idx, (seqMatrix, domainsMatrix, ppiVect, GO_annotiations) in enumerate(train_out_loader): seqMatrix = Variable(seqMatrix).cuda() # GO_annotiations = Variable(GO_annotiations).cuda() out = test_Seqmodel(seqMatrix) seq_train_outs[train_benchmark[batch_idx]] = out.data[0].cpu().tolist() return seq_train_outs, seq_test_outs,bestthreshold #返回再最优的Seq模型下的训练集的输出和测试集的输出,用于训练weight_classifier def Domain_train(learningrate, batchsize, train_benchmark, test_benchmark, epochtime, func='MF'): print('{} domainmodel start'.format(func)) domain_model = Domain_Module(func).cuda() batch_size = 16 # You can reduce the batch size to a smaller value #batch_size = batchsize learning_rate = learningrate epoch_times = epochtime print(domain_model) print('batch_size_{},learning_rate_{},epoch_times_{}'.format(batch_size, learning_rate, epoch_times)) loss_function = nn.BCELoss() optimizer = optim.Adam(domain_model.parameters(), lr=learning_rate, weight_decay=0.00001) train_dataset = Dataload(train_benchmark, Seqfile_name, Domainfile_name, GOfile_name, func=func) train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) test_dataset = Dataload(test_benchmark, Seqfile_name, Domainfile_name, GOfile_name, func=func) test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=False) domain_model.train() best_fscore = 0 for epoch in range(epoch_times): _loss = 0 batch_num = 0 torch.cuda.empty_cache() # Add this line to release GPU memory for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(train_data_loader): domainStence = Variable(domainStence).cuda() GO_annotiations = torch.squeeze(GO_annotiations) GO_annotiations = Variable(GO_annotiations).cuda() out = domain_model(domainStence) optimizer.zero_grad() loss = loss_function(out, GO_annotiations) batch_num += 1 loss.backward() optimizer.step() _loss += loss.item() # _loss += loss.data[0] epoch_loss = "{}".format(_loss / batch_num) t_loss = 0 test_batch_num = 0 pred = [] actual = [] for idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(test_data_loader): domainStence = 
Variable(domainStence).cuda() GO_annotiations = Variable(GO_annotiations).cuda() out = domain_model(domainStence) test_batch_num = test_batch_num + 1 pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) one_loss = loss_function(out, GO_annotiations) t_loss += one_loss.item() # Use .item() to get the scalar value #t_loss += one_loss.data[0] test_loss = "{}".format(t_loss / test_batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) score_dict = {} each_best_fcore = 0 each_best_scores = [] for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score >= each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores if each_best_fcore >= best_fscore: best_fscore = each_best_fcore best_scores = each_best_scores best_score_dict = score_dict torch.save(domain_model, 'savedpkl/Doamin1DVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)) t, f_score, recall = each_best_scores[0], each_best_scores[1], each_best_scores[2] precision, auc_score = each_best_scores[3], each_best_scores[4] print('epoch{},loss{},testloss:{},t{},f_score{}, auc{}, recall{}, precision{}'.format( epoch, epoch_loss, test_loss, t, f_score, auc_score, recall, precision)) bestthreshold, f_max, recall_max = best_scores[0], best_scores[1], best_scores[2] prec_max, bestauc_score = best_scores[3], best_scores[4] print('lr:{},batch:{},epoch{},f_max:{}\nauc{},recall_max{},prec_max{},threshold:{}'.format( learning_rate, batch_size, epoch_times, f_max, bestauc_score, recall_max, prec_max, bestthreshold)) test_Domainmodel = torch.load( 'savedpkl/Doamin1DVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)).cuda() t_loss = 0 doamin_test_outs = {} pred = [] actual = [] score_dict = {} batch_num = 0 for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(test_data_loader): domainStence = Variable(domainStence).cuda() GO_annotiations = Variable(GO_annotiations).cuda() out = test_Domainmodel(domainStence) batch_num += 1 doamin_test_outs[test_benchmark[batch_idx]] = out.data[0].cpu().tolist() pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) loss = loss_function(out, GO_annotiations) t_loss += loss.item() #t_loss += loss.data[0] test_loss = "{}".format(t_loss / batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) each_best_fcore = 0 for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score > each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores bestthreshold, f_max, recall_max = each_best_scores[0], each_best_scores[1], each_best_scores[2] prec_max, bestauc_score = each_best_scores[3], each_best_scores[4] print('test_loss:{},lr:{},batch:{},epoch{},f_max:{}\nauc_score{},recall_max{},prec_max{},threshold:{}'.format( test_loss, learning_rate, batch_size, epoch_times, f_max, auc_score, recall_max, prec_max, bestthreshold)) with open('out/weight_out/Domainout{}_lr{}_bat{}_epo{}.csv'.format( 
func, learning_rate, batch_size, epoch_times), 'w') as f: f.write('lr:{},batchsize:{},epochtimes:{}\n'.format(learning_rate, batch_size, epoch_times)) f.write('f_max:{},recall_max{},prec_max{},auc_score:{}\n'.format( f_max, recall_max, prec_max, auc_score)) f.write('threshold,f_score,recall,precision, roc_auc,auc\n') for i in range(len(Thresholds)): f.write('{},'.format(str(Thresholds[i]))) f.write('{}\n'.format(','.join(str(x) for x in score_dict[Thresholds[i]]))) for key, var in doamin_test_outs.items(): f.write('{},'.format(str(key))) f.write('{}\n'.format(','.join(str(x) for x in var))) # 获取再最优模型下的训练集的输出 train_out_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=False) domain_train_outs = {} for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(train_out_loader): domainStence = Variable(domainStence).cuda() # GO_annotiations = Variable(GO_annotiations).cuda() out = test_Domainmodel(domainStence) domain_train_outs[train_benchmark[batch_idx]] = out.data[0].cpu().tolist() return domain_train_outs, doamin_test_outs, bestthreshold # 返回再最优的Domain模型下的训练集的输出和测试集的输出,用于训练weight_classifier def PPI_train(learningrate, batchsize, train_benchmark, test_benchmark, epochtime, func='MF'): print('{} PPImodel start'.format(func)) ppi_model = PPI_Module(func).cuda() batch_size = 16 # You can reduce the batch size to a smaller value #batch_size = batchsize learning_rate = learningrate epoch_times = epochtime print(ppi_model) print('batch_size_{},learning_rate_{},epoch_times_{}'.format(batch_size, learning_rate, epoch_times)) loss_function = nn.BCELoss() optimizer = optim.Adam(ppi_model.parameters(), lr=learning_rate, weight_decay=0.00001) train_dataset = Dataload(train_benchmark, Seqfile_name, Domainfile_name, GOfile_name, func=func) train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) test_dataset = Dataload(test_benchmark, Seqfile_name, Domainfile_name, GOfile_name, func=func) test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=False) ppi_model.train() best_fscore = 0 for epoch in range(epoch_times): _loss = 0 batch_num = 0 torch.cuda.empty_cache() # Add this line to release GPU memory for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(train_data_loader): ppiVect = Variable(ppiVect).cuda() GO_annotiations = torch.squeeze(GO_annotiations) GO_annotiations = Variable(GO_annotiations.unsqueeze(1)).cuda() GO_annotiations = torch.squeeze(GO_annotiations, dim=0) #GO_annotiations = Variable(GO_annotiations).cuda() out = ppi_model(ppiVect) # Reshape the model's output to match the shape of GO_annotiations out = out.view(GO_annotiations.shape) optimizer.zero_grad() loss = loss_function(out, GO_annotiations) batch_num += 1 loss.backward() optimizer.step() _loss += loss.item() #_loss += loss.data[0] epoch_loss = "{}".format(_loss / batch_num) t_loss = 0 test_batch_num = 0 pred = [] actual = [] for idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(test_data_loader): ppiVect = Variable(ppiVect).cuda() GO_annotiations = Variable(GO_annotiations.unsqueeze(1)).cuda() GO_annotiations = torch.squeeze(GO_annotiations, dim=0) #GO_annotiations = Variable(GO_annotiations).cuda() out = ppi_model(ppiVect) test_batch_num = test_batch_num + 1 pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) one_loss = loss_function(out, GO_annotiations) t_loss += one_loss.item() # Use .item() to get the scalar value #t_loss += 
one_loss.data[0] test_loss = "{}".format(t_loss / test_batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) score_dict = {} each_best_fcore = 0 each_best_scores = [] for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score >= each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores if each_best_fcore >= best_fscore: best_fscore = each_best_fcore best_scores = each_best_scores best_score_dict = score_dict torch.save(ppi_model, 'savedpkl/PPIVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)) t, f_score, recall = each_best_scores[0], each_best_scores[1], each_best_scores[2] precision, auc_score = each_best_scores[3], each_best_scores[4] print('epoch{},loss{},testloss:{},t{},f_score{}, auc{}, recall{}, precision{}'.format( epoch, epoch_loss, test_loss, t, f_score, auc_score, recall, precision)) bestthreshold, f_max, recall_max = best_scores[0], best_scores[1], best_scores[2] prec_max, bestauc_score = best_scores[3], best_scores[4] print('lr:{},batch:{},epoch{},f_max:{}\nauc{},recall_max{},prec_max{},threshold:{}'.format( learning_rate, batch_size, epoch_times, f_max, bestauc_score, recall_max, prec_max, bestthreshold)) test_PPImodel = torch.load( 'savedpkl/PPIVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)).cuda() t_loss = 0 ppi_test_outs = {} pred = [] actual = [] score_dict = {} batch_num = 0 for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(test_data_loader): ppiVect = Variable(ppiVect).cuda() GO_annotiations = Variable(GO_annotiations.unsqueeze(1)).cuda() GO_annotiations = torch.squeeze(GO_annotiations, dim=0) #GO_annotiations = Variable(GO_annotiations).cuda() out = test_PPImodel(ppiVect) batch_num += 1 ppi_test_outs[test_benchmark[batch_idx]] = out.data[0].cpu().tolist() pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) loss = loss_function(out, GO_annotiations) t_loss += loss.item() #t_loss += loss.data[0] test_loss = "{}".format(t_loss / batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) each_best_fcore = 0 for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score > each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores bestthreshold, f_max, recall_max = each_best_scores[0], each_best_scores[1], each_best_scores[2] prec_max, bestauc_score = each_best_scores[3], each_best_scores[4] print('test_loss:{},lr:{},batch:{},epoch{},f_max:{}\nauc_score{},recall_max{},prec_max{},threshold:{}'.format( test_loss, learning_rate, batch_size, epoch_times, f_max, auc_score, recall_max, prec_max, bestthreshold)) with open('out/weight_out/PPIout{}_lr{}_bat{}_epo{}.csv'.format( func, learning_rate, batch_size, epoch_times), 'w') as f: f.write('lr:{},batchsize:{},epochtimes:{}\n'.format(learning_rate, batch_size, epoch_times)) f.write('f_max:{},recall_max{},prec_max{},auc_score:{}\n'.format( f_max, recall_max, prec_max, auc_score)) 
f.write('threshold,f_score,recall,precision, roc_auc,auc\n') for i in range(len(Thresholds)): f.write('{},'.format(str(Thresholds[i]))) f.write('{}\n'.format(','.join(str(x) for x in score_dict[Thresholds[i]]))) for key, var in ppi_test_outs.items(): f.write('{},'.format(str(key))) f.write('{}\n'.format(','.join(str(x) for x in var))) # 获取再最优模型下的训练集的输出 train_out_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=False) ppi_train_outs = {} for batch_idx, (seqMatrix, domainStence, ppiVect, GO_annotiations) in enumerate(train_out_loader): ppiVect = Variable(ppiVect).cuda() # GO_annotiations = Variable(GO_annotiations).cuda() out = test_PPImodel(ppiVect) ppi_train_outs[train_benchmark[batch_idx]] = out.data[0].cpu().tolist() return ppi_train_outs, ppi_test_outs, bestthreshold # 返回再最优的PPI模型下的训练集的输出和测试集的输出,用于训练weight_classifier def Main(train_benchmark, test_benchmark, func='MF'): if func == 'BP': seq_train_out, seq_test_out, seq_t = Seq_train(0.0001, 16, train_benchmark, test_benchmark, 30, func) # 15 domain_train_out, domain_test_out, domain_t = Domain_train(0.001, 32, train_benchmark, test_benchmark, 45, func) # 40 else: seq_train_out, seq_test_out, seq_t = Seq_train(0.001, 8, train_benchmark, test_benchmark, 17, func) #15 domain_train_out, domain_test_out, domain_t = Domain_train(0.001, 16, train_benchmark, test_benchmark, 38, func) #40 ppi_train_out, ppi_test_out, ppi_t = PPI_train(0.0001, 8, train_benchmark, test_benchmark, 38, func) #40 print('{} Weight_model start'.format(func)) learning_rate = 0.001 batch_size = 32 epoch_times = 40 weight_model = Weight_classifier(func).cuda() print(weight_model) print('batch_size_{},learning_rate_{},epoch_times_{}'.format(batch_size, learning_rate, epoch_times)) loss_function = nn.BCELoss() optimizer = optim.Adam(weight_model.parameters(), lr=learning_rate, weight_decay=0.00001) train_dataset = weight_Dataload(train_benchmark, seq_train_out, domain_train_out, ppi_train_out, GOfile_name, func=func) train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) test_dataset = weight_Dataload(test_benchmark, seq_test_out, domain_test_out, ppi_test_out, GOfile_name, func=func) test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=False) weight_model.train() best_fscore = 0 for epoch in range(epoch_times): _loss = 0 batch_num = 0 for batch_idx, (weight_features, GO_annotiations) in enumerate(train_data_loader): weight_features = Variable(weight_features).cuda() # print(weight_features) #GO_annotiations = torch.squeeze(GO_annotiations) GO_annotiations = GO_annotiations.view(1, -1) # Reshape target tensor GO_annotiations = Variable(GO_annotiations.view(1, -1)).cuda() # Reshape and apply to the model #GO_annotiations = Variable(GO_annotiations).cuda() out = torch.sigmoid(weight_model(weight_features)) #out = weight_model(weight_features) optimizer.zero_grad() loss = loss_function(out, GO_annotiations) batch_num += 1 loss.backward() optimizer.step() _loss += loss.item() #_loss += loss.data[0] epoch_loss = "{}".format(_loss / batch_num) t_loss = 0 test_batch_num = 0 pred = [] actual = [] for idx, (weight_features, GO_annotiations) in enumerate(test_data_loader): weight_features = Variable(weight_features).cuda() GO_annotiations = Variable(GO_annotiations.view(1, -1)).cuda() # Reshape and apply to the model #GO_annotiations = Variable(GO_annotiations).cuda() out = weight_model(weight_features) test_batch_num = test_batch_num + 1 pred.append(out.data[0].cpu().tolist()) 
actual.append(GO_annotiations.data[0].cpu().tolist()) one_loss = loss_function(out, GO_annotiations) t_loss += one_loss.item() #t_loss += one_loss.data[0] test_loss = "{}".format(t_loss / test_batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) score_dict = {} each_best_fcore = 0 each_best_scores = [] for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score >= each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores if each_best_fcore >= best_fscore: best_fscore = each_best_fcore best_scores = each_best_scores best_score_dict = score_dict torch.save(weight_model, 'savedpkl/WeightVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)) t, f_score, recall = each_best_scores[0], each_best_scores[1], each_best_scores[2] precision, auc_score = each_best_scores[3], each_best_scores[4] print('epoch{},loss{},testloss:{},t{},f_score{}, auc{}, recall{}, precision{}'.format( epoch, epoch_loss, test_loss, t, f_score, auc_score, recall, precision)) bestthreshold, f_max, recall_max = best_scores[0], best_scores[1], best_scores[2] prec_max, bestauc_score = best_scores[3], best_scores[4] print('lr:{},batch:{},epoch{},f_max:{}\nauc{},recall_max{},prec_max{},threshold:{}'.format( learning_rate, batch_size, epoch_times, f_max, bestauc_score, recall_max, prec_max, bestthreshold)) # return best_scores test_weight_model = torch.load( 'savedpkl/WeightVal_{}_{}_{}_{}.pkl'.format(func, batch_size, learning_rate, epoch_times)).cuda() t_loss = 0 weight_test_outs = {} pred = [] actual = [] score_dict = {} batch_num = 0 for batch_idx, (weight_features, GO_annotiations) in enumerate(test_data_loader): weight_features = Variable(weight_features).cuda() GO_annotiations = Variable(GO_annotiations.view(1, -1)).cuda() # Reshape and apply to the model #GO_annotiations = Variable(GO_annotiations).cuda() out = test_weight_model(weight_features) batch_num += 1 weight_test_outs[test_benchmark[batch_idx]] = out.data[0].cpu().tolist() pred.append(out.data[0].cpu().tolist()) actual.append(GO_annotiations.data[0].cpu().tolist()) loss = loss_function(out, GO_annotiations) t_loss += loss.item() #t_loss += loss.data[0] test_loss = "{}".format(t_loss / batch_num) fpr, tpr, th = roc_curve(np.array(actual).flatten(), np.array(pred).flatten(), pos_label=1) auc_score = auc(fpr, tpr) aupr = cacul_aupr(np.array(actual).flatten(), np.array(pred).flatten()) each_best_fcore = 0 for i in range(len(Thresholds)): f_score, recall, precision = calculate_performance( actual, pred, threshold=Thresholds[i], average='micro') if f_score > each_best_fcore: each_best_fcore = f_score each_best_scores = [Thresholds[i], f_score, recall, precision, auc_score, aupr] scores = [f_score, recall, precision, auc_score] score_dict[Thresholds[i]] = scores bestthreshold, f_max, recall_max = each_best_scores[0], each_best_scores[1], each_best_scores[2] prec_max, bestauc_score, aupr_score = each_best_scores[3], each_best_scores[4], each_best_scores[5] print('test_loss:{},lr:{},batch:{},epoch{},f_max:{}\nauc_score{},recall_max{},prec_max{},threshold:{}'.format( test_loss, learning_rate, batch_size, epoch_times, f_max, auc_score, recall_max, prec_max, bestthreshold)) # with open('out/weight_out/Weight_out{}_lr{}_bat{}_epo{}.csv'.format( # func, 
learning_rate, batch_size, epoch_times), 'w') as f: # f.write('lr:{},batchsize:{},epochtimes:{}\n'.format(learning_rate, batch_size, epoch_times)) # f.write('f_max:{},recall_max{},prec_max{},auc_score:{}\n'.format( # f_max, recall_max, prec_max, auc_score)) # f.write('threshold,f_score,recall,precision, roc_auc,auc\n') # for i in range(len(Thresholds)): # f.write('{},'.format(str(Thresholds[i]))) # f.write('{}\n'.format(','.join(str(x) for x in score_dict[Thresholds[i]]))) # for key, var in weight_test_outs.items(): # f.write('{},'.format(str(key))) # f.write('{}\n'.format(','.join(str(x) for x in var))) return each_best_scores def read_benchmark(term_arg='MF'): benchmark_file = '{}_benchmarkSet_2.csv'.format(term_arg) print(benchmark_file) all_data = [] with open(benchmark_file, 'r') as f: for line in f: item = line.strip() all_data.append(item) return all_data def validation(func='MF', k_fold=5): kf = KFold(n_splits=k_fold) benchmark = np.array(read_benchmark(func)) scores = [] for train_index, test_index in kf.split(benchmark): train_set = benchmark[train_index].tolist() test_set = benchmark[test_index].tolist() each_fold_scores = Main(train_set, test_set, func=func) scores.append(each_fold_scores) f_maxs, pre_maxs, rec_maxs, auc_s, aupr_s = [], [], [], [], [] for i in range(len(scores)): f_maxs.append(scores[i][1]) rec_maxs.append(scores[i][2]) pre_maxs.append(scores[i][3]) auc_s.append(scores[i][4]) aupr_s.append(scores[i][5]) f_mean = np.mean(np.array(f_maxs)) rec_mean = np.mean(np.array(rec_maxs)) pre_mean = np.mean(np.array(pre_maxs)) auc_mean = np.mean(np.array(auc_s)) aupr_mean = np.mean(np.array(aupr_s)) print('{}:f_mean{},rec_mean{},pre_mean{},auc_mean{}, aupr_mean{}'.format( func, f_mean, rec_mean, pre_mean, auc_mean, aupr_mean)) if __name__ == '__main__': Terms = ['BP', 'MF', 'CC'] validation(Terms[0], 5) # run(func=Terms[1]) # learning_rates = [0.001] # # learning_rates = [0.001, 0.0001, 0.01, 0.00001] # # batchsizes = [8, 16, 32, 64] # batchsizes = [32] # is_train = True # for i in range(len(learning_rates)): # for j in range(len(batchsizes)): # if is_train: # Main(learning_rates[i], batchsizes[j], 40, func=Terms[0], is_train=True) # is_first = False # else: # Main(learning_rates[i], batchsizes[j], 40, func=Terms[0], is_train=False) time_end = time.time() print('time cost', time_end - time_start,'s') Thank you!
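For context, `F.binary_cross_entropy` (which `nn.BCELoss` calls) requires the prediction and the target to have exactly the same shape; the traceback above shows an input of `torch.Size([1, 491])` against a target of `torch.Size([491])`. A minimal, self-contained sketch (with made-up values, not taken from the script above) of the mismatch and one way to align the shapes:

```python
import torch
import torch.nn.functional as F

# Model output keeps a leading batch dimension of 1; the target does not.
out = torch.rand(1, 491)                      # BCE input, shape [1, 491]
target = torch.randint(0, 2, (491,)).float()  # target, shape [491]

# This call would raise the ValueError quoted in the title, because
# binary_cross_entropy requires input and target shapes to be identical:
# loss = F.binary_cross_entropy(out, target)

# One way to fix it: make the shapes match explicitly before the loss call.
loss = F.binary_cross_entropy(out.squeeze(0), target)             # both [491]
# Equivalently: F.binary_cross_entropy(out, target.unsqueeze(0))  # both [1, 491]
print(loss.item())
```

In the script above, the analogous change would be squeezing/reshaping `out` (or keeping `GO_annotiations` batched) so that both tensors passed to the loss function have the same shape.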
0
83
111,644
DISABLED test_nested_tensor_chunk_cpu_float16 (__main__.TestNestedTensorDeviceTypeCPU)
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nested_tensor_chunk_cpu_float16&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17886253401). Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 15 failures and 5 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_nested_tensor_chunk_cpu_float16` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_nestedtensor.py` cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
1
84
111,641
Can't export a .pth model to ONNX (RuntimeError: Couldn't lower all tuples)
module: onnx
### 🐛 Describe the bug Hello everyone, I am trying to convert a PyTorch model to ONNX using the ONNX exporter in torch (`torch.onnx`). The model architecture and weights are in the repository https://github.com/Sense-X/Co-DETR ; I am using the CO-Dino model with a Swin-L backbone, however when I launch the model export I encounter this error ```RuntimeError: Couldn't lower all tuples```. ![image](https://github.com/pytorch/pytorch/assets/59453891/62666dfd-e2ff-4ef1-a674-54e8f9ef70ea) PS: Sorry, I can't copy-paste a simple Python snippet to reproduce the problem. ### Versions [pip3] numpy==1.24.3 [pip3] onnx==1.14.1 [pip3] onnxruntime==1.8.1 [pip3] onnxruntime-gpu==1.8.1 [pip3] torch==1.11.0 [pip3] torchaudio==0.11.0 [pip3] torchvision==0.12.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.24.3 py38h14f4228_0 [conda] numpy-base 1.24.3 py38h31eccc5_0 [conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 0.11.0 py38_cu113 pytorch [conda] torchvision 0.12.0 py38_cu113 pytorch
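For what it's worth, the export call itself usually looks like the sketch below. This is a toy, self-contained stand-in (the detector, input size, and output names are invented, not Co-DETR's real ones); returning plain tensors from `forward()` instead of nested dicts/tuples, as the wrapper does here, is a commonly suggested workaround when the exporter fails to lower tuple structures:

```python
import torch

# Toy stand-in for a detector whose forward() returns a dict of tensors;
# the real Co-DETR model would be loaded from its own repository instead.
class ToyDetector(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2)

    def forward(self, images):
        feat = self.backbone(images)
        return {"boxes": feat.flatten(1)[:, :4], "scores": feat.mean(dim=(1, 2, 3))}

# Wrapper that flattens the dict output into plain tensors, which the ONNX
# tracer handles more reliably than nested tuples/dicts.
class ExportWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, images):
        out = self.model(images)
        return out["boxes"], out["scores"]

wrapped = ExportWrapper(ToyDetector()).eval()
dummy = torch.randn(1, 3, 64, 64)  # placeholder input resolution

torch.onnx.export(
    wrapped, dummy, "model.onnx",
    opset_version=13,
    input_names=["images"],
    output_names=["boxes", "scores"],
    dynamic_axes={"images": {0: "batch"}},
)
```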
0
85
111,640
[RFC] Enable Int8-Mixed-BF16 PT2E PTQ Quantization with Inductor
oncall: quantization
### 🚀 The feature, motivation and pitch [Intel-Extension-for-PyTorch](https://github.com/intel/intel-extension-for-pytorch) (IPEX) offers an advanced int8-mixed-bf16 quantization path, which transforms the output of quantized Conv/GEMM operations into the BF16 data type if there is no subsequent quantized operator. This enhancement significantly improves the inference performance of models such as Bert/DistilBert, as the pointwise operators following GEMM will operate with BF16 data instead of FP32. * Here is the [example code](https://gist.github.com/leslie-fang-intel/acb568f1416c7395b18cd64774aa5a64) showing how to use this feature with IPEX. * Please note that this feature may result in accuracy loss for certain models. With IPEX, we have verified its accuracy in models such as Bert, DistilBert, stable diffusion, and some other LLM models. However, we have also observed accuracy issues in models like vision transformers. * Similarly, we recently received a feature request in https://github.com/pytorch/pytorch/issues/111487. ### Alternatives We typically have two options to enable this feature. ### Option 1: Use Autocast Autocast is naturally employed for BF16 optimization in Inductor. Similarly, we can harness it for PT2E int8-mixed-bf16 features to generate a pattern like `q -> dq -> float32_to_bfloat16 -> conv -> bfloat16_to_fp32 -> q -> dq`. * A `to_bfloat16` node before conv should be inserted when Autocast and `torch.compile` are used together, since conv is in Autocast's whitelist. * As for inserting a `bfloat16_to_fp32` node after the conv node, we need to extend the implementation of https://github.com/pytorch/pytorch/blob/93a9b1314b4bc88ccddc0aa438d4d332955027a8/torch/ao/quantization/fx/_decomposed.py#L36-L64 by adding these lines at the beginning of this function ``` if input.dtype == torch.bfloat16: input = input.to(torch.float32) ``` Here's an example code snippet: ``` exported_model = capture_pre_autograd_graph( model, example_inputs ) # Create X86InductorQuantizer quantizer = X86InductorQuantizer() quantizer.set_global(xiq.get_default_x86_inductor_quantization_config()) # PT2E Quantization flow prepared_model = prepare_pt2e(exported_model, quantizer) # Calibration converted_model = convert_pt2e(prepared_model) torch.ao.quantization.move_exported_model_to_eval(converted_model) with torch.autocast(device_type="cpu", dtype=torch.bfloat16, enabled=enable_int8_mixed_bf16), torch.no_grad(): optimized_model = torch.compile(converted_model) # Int8-Mixed-BF16 Inference quant_output = optimized_model(images) ``` * Pros: * Utilize the existing int8-mixed-fp32 quantizer and PT2E flow implementation. * Make use of the existing Autocast operator list and mechanism. * Cons: * The Autocast mechanism will convert each input, including the convolution's bias, to BF16. However, for X86InductorQuantizer and the associated Inductor optimization, we anticipate that using float32 for the bias input may yield better accuracy. ### Option 2: Add BFloat16 as a quantization type in PT2E Flow (in QuantizationSpec) Alternatively, we can introduce BFloat16 as a quantization type in PT2E Flow (within QuantizationSpec). * We may need to extend the Observer implementation to annotate its use for int8-mixed-bf16, depending on the quantization recipe. * During the convert phase, we will examine the observer information to determine if it has been annotated with int8-mixed-bf16. * If the input of a quantization node is in the BFloat16 data type, an additional `to_float` node will be inserted before the quantization node.
* Following the dequantization node, an additional `to_bf16` node will be inserted. ``` exported_model = capture_pre_autograd_graph( model, example_inputs ) # Create X86InductorQuantizer quantizer = X86InductorQuantizer() quantizer.set_global(xiq.get_default_x86_inductor_quantization_config(dtype=BFloat16)) # PT2E Quantization flow prepared_model = prepare_pt2e(exported_model, quantizer) # Calibration converted_model = convert_pt2e(prepared_model) torch.ao.quantization.move_exported_model_to_eval(converted_model) with torch.no_grad(): optimized_model = torch.compile(converted_model) # Int8-Mixed-BF16 Inference quant_output = optimized_model(images) ``` * Pros: * We can achieve more flexibility with a customized implementation for int8-mixed-bf16 quantization, allowing us to overcome certain limitations in Autocast, such as bias conversion. * Cons: * Non-trivial changes may be needed in the QuantizationSpec, Observer, Quantizer and PT2E flow convert implementation. We prefer option 1 as it requires fewer changes in the PT2E quantization flow and is clear and straightforward. ### Additional context ### Optimization Inside Inductor * Conv/GEMM Here is the pattern we expect to see in Inductor after the quantization flow. ``` q -> dq -> float32_to_bfloat16 -> conv -> bfloat16_to_fp32 -> q -> dq ``` * Step 1: In the weight prepack phase, `dq -> float32_to_bfloat16 -> conv` will be matched first to generate a `qconv_bf16_output` node with `int8` input dtype and `bfloat16` output dtype. * Step 2: Furthermore, we will check whether a `bfloat16_to_fp32 -> q` pattern exists after this `qconv_bf16_output` node. If so, we will further merge `qconv_bf16_output -> bfloat16_to_fp32 -> q` into a `qconv` node with `int8` input dtype and `int8` output dtype. * Non-Conv/GEMM. * Non-Conv/GEMM patterns will be lowered in the Inductor CPP backend for code generation. ### Enabling Plans Here are some plans to follow up on option 1: * Make all `onednn.qconv1d_pointwise/linear_pointwise` operators support BF16 output. * Remove the annotation of output at conv/linear in `X86InductorQuantizer`. * Extend the decomposed quant op to support bf16 input. * Extend the weight prepack pattern matcher for `dequant -> to_bf16 -> conv/linear`. * Extend the QConv/Linear int8-mixed-bf16 output pattern matcher for `dequant -> to_bf16 -> conv/linear -> to_fp32 -> quant`. * Extend the post-op pass pattern matching for Conv/Linear ReLU/Add/Add_ReLU fusion with FP32/BF16 output. cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen
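To make the "extend the decomposed quant op to support bf16 input" plan above concrete, here is a rough sketch; the function body is an approximation of the per-tensor quantize op rather than a verbatim copy of `_decomposed.py`:

```python
import torch

# Sketch of the Option 1 change: accept a BF16 activation by upcasting it to
# FP32 before the usual per-tensor quantization math.
def quantize_per_tensor(
    input: torch.Tensor,
    scale: float,
    zero_point: int,
    quant_min: int,
    quant_max: int,
    dtype: torch.dtype,
) -> torch.Tensor:
    if input.dtype == torch.bfloat16:  # the two lines proposed above
        input = input.to(torch.float32)
    assert input.dtype == torch.float32
    inv_scale = 1.0 / scale
    return torch.clamp(
        torch.round(input * inv_scale) + zero_point, quant_min, quant_max
    ).to(dtype)

# A BF16 activation now quantizes without a separate upcast at the call site.
x = torch.randn(4, 4, dtype=torch.bfloat16)
q = quantize_per_tensor(x, scale=0.02, zero_point=0, quant_min=-128, quant_max=127, dtype=torch.int8)
```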
2
86
111,639
[Kineto][NCCL][1/n] Add the world size info in NCCL metadata
fb-exported, release notes: distributed (c10d)
Summary: This diff adds the world size info to the NCCL metadata, as the information is needed to calculate the algorithmic bandwidth and bus bandwidth. Test Plan: **Tested together with stacked diffs**: - Trace: https://fburl.com/perfdoctor/ke1d4g2z - The information is shown as `Group size` in the trace: {F1125504721} Differential Revision: D50439185
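For reference, the two quantities mentioned in the summary are usually related as in the sketch below (this follows the nccl-tests convention for all-reduce; the numbers are made up, and other collectives use a different correction factor):

```python
# Hypothetical example: a 1 GiB all-reduce across 8 ranks taking 25 ms.
world_size = 8
msg_bytes = 1 * 1024**3
time_s = 0.025

alg_bw = msg_bytes / time_s                          # algorithmic bandwidth, bytes/s
bus_bw = alg_bw * 2 * (world_size - 1) / world_size  # all-reduce bus-bandwidth factor

print(f"algbw = {alg_bw / 1e9:.1f} GB/s, busbw = {bus_bw / 1e9:.1f} GB/s")
```

Without the group size recorded in the trace metadata, the `2 * (n - 1) / n` factor cannot be derived from the trace alone, which is what this diff addresses.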
2
87
111,638
DISABLED test_meta_outplace_fft_ifft_cpu_float64 (__main__.TestMetaCPU)
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_float64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17883819575). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_meta_outplace_fft_ifft_cpu_float64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_meta.py` cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
2
88
111,637
Support fp8 in AOTInductor
module: inductor, ciflow/inductor
Current error: https://gist.github.com/ipiszy/7f8cb8bff482444d299e23735422a236 Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111637 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
1
89
111,636
torch2.1.0 DDP+compile+dynamic_shape causes error
null
### 🐛 Describe the bug Excuse me! When I use **torch2.1.0** DDP+compile to wrapper huggingface transformers gpt2 model, it will cause error only for the last batch data (shape is smaller than batch_size). Detailed logs as follows: ```bash [default0]:10/20/2023 13:45:50 - INFO - __main__ - local_rank=0, rank=0, world_size=2 [default1]:10/20/2023 13:45:50 - INFO - __main__ - local_rank=1, rank=1, world_size=2 [default0]:10/20/2023 13:45:53 - INFO - __main__ - GPT2LMHeadModel( [default0]: (transformer): GPT2Model( [default0]: (wte): Embedding(50257, 768) [default0]: (wpe): Embedding(1024, 768) [default0]: (drop): Dropout(p=0.1, inplace=False) [default0]: (h): ModuleList( [default0]: (0-11): 12 x GPT2Block( [default0]: (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True) [default0]: (attn): GPT2Attention( [default0]: (c_attn): Conv1D() [default0]: (c_proj): Conv1D() [default0]: (attn_dropout): Dropout(p=0.1, inplace=False) [default0]: (resid_dropout): Dropout(p=0.1, inplace=False) [default0]: ) [default0]: (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True) [default0]: (mlp): GPT2MLP( [default0]: (c_fc): Conv1D() [default0]: (c_proj): Conv1D() [default0]: (act): NewGELUActivation() [default0]: (dropout): Dropout(p=0.1, inplace=False) [default0]: ) [default0]: ) [default0]: ) [default0]: (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True) [default0]: ) [default0]: (lm_head): Linear(in_features=768, out_features=50257, bias=False) [default0]:) [default0]:10/20/2023 13:35:36 - INFO - __main__ - [rank 0][0/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:[rank0]:[2023-10-20 13:35:36,584] [0/0] torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored [default0]:10/20/2023 13:35:41 - INFO - __main__ - [rank 0][0/9]: workers well [default0]:10/20/2023 13:35:41 - INFO - __main__ - [rank 0][1/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:42 - INFO - __main__ - [rank 0][1/9]: workers well [default0]:10/20/2023 13:35:42 - INFO - __main__ - [rank 0][2/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:42 - INFO - __main__ - [rank 0][2/9]: workers well [default0]:10/20/2023 13:35:42 - INFO - __main__ - [rank 0][3/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:42 - INFO - __main__ - [rank 0][3/9]: workers well [default1]:10/20/2023 13:35:42 - INFO - __main__ - [rank 1][0/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:[rank1]:[2023-10-20 13:35:43,025] [0/0] torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored [default0]:10/20/2023 13:35:42 - INFO - __main__ - [rank 0][4/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:43 - INFO - __main__ - [rank 0][4/9]: workers well [default0]:10/20/2023 13:35:43 - INFO - __main__ - [rank 0][5/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:43 - INFO - __main__ - [rank 0][5/9]: workers well [default0]:10/20/2023 13:35:43 - INFO - __main__ - [rank 0][6/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:44 - INFO - __main__ - [rank 0][6/9]: workers well [default0]:10/20/2023 13:35:44 - INFO - __main__ - [rank 0][7/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:35:44 - INFO - __main__ - [rank 0][7/9]: workers well 
[default0]:10/20/2023 13:35:44 - INFO - __main__ - [rank 0][8/9]: batch_data['input_ids'].shape=torch.Size([7, 1024]) [default0]:[rank0]:[2023-10-20 13:35:44,536] [0/1] torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored [default1]:10/20/2023 13:35:48 - INFO - __main__ - [rank 1][0/9]: workers well [default1]:10/20/2023 13:35:48 - INFO - __main__ - [rank 1][1/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:48 - INFO - __main__ - [rank 1][1/9]: workers well [default1]:10/20/2023 13:35:48 - INFO - __main__ - [rank 1][2/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:48 - INFO - __main__ - [rank 1][2/9]: workers well [default1]:10/20/2023 13:35:49 - INFO - __main__ - [rank 1][3/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:49 - INFO - __main__ - [rank 1][3/9]: workers well [default1]:10/20/2023 13:35:49 - INFO - __main__ - [rank 1][4/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:49 - INFO - __main__ - [rank 1][4/9]: workers well [default1]:10/20/2023 13:35:49 - INFO - __main__ - [rank 1][5/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:50 - INFO - __main__ - [rank 1][5/9]: workers well [default1]:10/20/2023 13:35:50 - INFO - __main__ - [rank 1][6/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:50 - INFO - __main__ - [rank 1][6/9]: workers well [default1]:10/20/2023 13:35:50 - INFO - __main__ - [rank 1][7/9]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default1]:10/20/2023 13:35:50 - INFO - __main__ - [rank 1][7/9]: workers well [default1]:10/20/2023 13:35:50 - INFO - __main__ - [rank 1][8/9]: batch_data['input_ids'].shape=torch.Size([7, 1024]) [default0]:Traceback (most recent call last): [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py", line 274, in __call__ [default0]: return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl [default0]: return self._call_impl(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl [default0]: return forward_call(*args, **kwargs) [default0]: File "<eval_with_key>.63", line 24, in forward [default0]: matmul = torch.matmul(permute, transpose); permute = transpose = None [default0]:RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides [default0]: [default0]:Call using an FX-traced Module, line 24 of the traced Module's generated forward function: [default0]: transpose = permute_1.transpose(-1, -2) [default0]: matmul = torch.matmul(permute, transpose); permute = transpose = None [default0]: [default0]:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE [default0]: full = torch.full([], 8.0, dtype = torch.float32, device = device(type='cuda', index=0)) [default0]: [default0]: truediv = matmul / full; matmul = full = None [default0]:Traceback (most recent call last): [default0]: File "/home/yuzhe.wu/grace-compute-dl-benchmark/nlp/gpt/gpt.py", line 294, in <module> [default0]: main() [default0]: File "/home/yuzhe.wu/grace-compute-dl-benchmark/nlp/gpt/gpt.py", line 276, in main [default0]: validate(args, 0, eval_dataloader, model, world_size, device, 
summary_writer) [default0]: File "/home/yuzhe.wu/grace-compute-dl-benchmark/nlp/gpt/gpt.py", line 114, in validate [default0]: outputs = model(**batch_data) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl [default0]: return self._call_impl(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl [default0]: return forward_call(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn [default0]: return fn(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 17, in inner [default0]: return fn(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl [default0]: return self._call_impl(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl [default0]: return forward_call(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1519, in forward [default0]: else self._run_ddp_forward(*inputs, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1355, in _run_ddp_forward [default0]: return self.module(*inputs, **kwargs) # type: ignore[index] [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl [default0]: return self._call_impl(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl [default0]: return forward_call(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 487, in catch_errors [default0]: return hijacked_callback(frame, cache_entry, hooks, frame_state) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame [default0]: result = inner_convert(frame, cache_size, hooks, frame_state) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn [default0]: return fn(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert [default0]: return _compile( [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile [default0]: guarded_code = compile_inner(code, one_graph, hooks, transform) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper [default0]: r = func(*args, **kwargs) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner [default0]: out_code = transform_code_object(code, transform) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object [default0]: transformations(instructions, code_options) [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform [default0]: tracer.run() [default0]: File "/root/anaconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run 
[default1]:Traceback (most recent call last): [default1]: File "/root/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py", line 274, in __call__ [default1]: return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] [default1]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl [default1]: return self._call_impl(*args, **kwargs) [default1]: File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl [default1]: return forward_call(*args, **kwargs) [default1]: File "<eval_with_key>.63", line 24, in forward [default1]: matmul = torch.matmul(permute, transpose); permute = transpose = None [default1]:RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides ``` However, the single gpu (without DDP wrapper) workers well: ```bash [default0]:10/20/2023 12:00:20 - INFO - __main__ - local_rank=0, rank=0, world_size=1 [default0]:10/20/2023 12:00:24 - INFO - __main__ - GPT2LMHeadModel( [default0]: (transformer): GPT2Model( [default0]: (wte): Embedding(50257, 768) [default0]: (wpe): Embedding(1024, 768) [default0]: (drop): Dropout(p=0.1, inplace=False) [default0]: (h): ModuleList( [default0]: (0-11): 12 x GPT2Block( [default0]: (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True) [default0]: (attn): GPT2Attention( [default0]: (c_attn): Conv1D() [default0]: (c_proj): Conv1D() [default0]: (attn_dropout): Dropout(p=0.1, inplace=False) [default0]: (resid_dropout): Dropout(p=0.1, inplace=False) [default0]: ) [default0]: (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True) [default0]: (mlp): GPT2MLP( [default0]: (c_fc): Conv1D() [default0]: (c_proj): Conv1D() [default0]: (act): NewGELUActivation() [default0]: (dropout): Dropout(p=0.1, inplace=False) [default0]: ) [default0]: ) [default0]: ) [default0]: (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True) [default0]: ) [default0]: (lm_head): Linear(in_features=768, out_features=50257, bias=False) [default0]:) [default0]:10/20/2023 13:27:16 - INFO - __main__ - [rank 0][0/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:21 - INFO - __main__ - [rank 0][0/17]: workers well [default0]:10/20/2023 13:27:21 - INFO - __main__ - [rank 0][1/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:21 - INFO - __main__ - [rank 0][1/17]: workers well [default0]:10/20/2023 13:27:21 - INFO - __main__ - [rank 0][2/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:22 - INFO - __main__ - [rank 0][2/17]: workers well [default0]:10/20/2023 13:27:22 - INFO - __main__ - [rank 0][3/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:22 - INFO - __main__ - [rank 0][3/17]: workers well [default0]:10/20/2023 13:27:22 - INFO - __main__ - [rank 0][4/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:23 - INFO - __main__ - [rank 0][4/17]: workers well [default0]:10/20/2023 13:27:23 - INFO - __main__ - [rank 0][5/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:23 - INFO - __main__ - [rank 0][5/17]: workers well [default0]:10/20/2023 13:27:23 - INFO - __main__ - [rank 0][6/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:23 - INFO - __main__ - [rank 0][6/17]: workers well [default0]:10/20/2023 13:27:23 - INFO - __main__ - [rank 0][7/17]: 
batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:24 - INFO - __main__ - [rank 0][7/17]: workers well [default0]:10/20/2023 13:27:24 - INFO - __main__ - [rank 0][8/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:24 - INFO - __main__ - [rank 0][8/17]: workers well [default0]:10/20/2023 13:27:24 - INFO - __main__ - [rank 0][9/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:24 - INFO - __main__ - [rank 0][9/17]: workers well [default0]:10/20/2023 13:27:24 - INFO - __main__ - [rank 0][10/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:25 - INFO - __main__ - [rank 0][10/17]: workers well [default0]:10/20/2023 13:27:25 - INFO - __main__ - [rank 0][11/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:25 - INFO - __main__ - [rank 0][11/17]: workers well [default0]:10/20/2023 13:27:25 - INFO - __main__ - [rank 0][12/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:26 - INFO - __main__ - [rank 0][12/17]: workers well [default0]:10/20/2023 13:27:26 - INFO - __main__ - [rank 0][13/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:26 - INFO - __main__ - [rank 0][13/17]: workers well [default0]:10/20/2023 13:27:26 - INFO - __main__ - [rank 0][14/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:26 - INFO - __main__ - [rank 0][14/17]: workers well [default0]:10/20/2023 13:27:26 - INFO - __main__ - [rank 0][15/17]: batch_data['input_ids'].shape=torch.Size([16, 1024]) [default0]:10/20/2023 13:27:27 - INFO - __main__ - [rank 0][15/17]: workers well [default0]:10/20/2023 13:27:27 - INFO - __main__ - [rank 0][16/17]: batch_data['input_ids'].shape=torch.Size([13, 1024]) [default0]:10/20/2023 13:27:32 - INFO - __main__ - [rank 0][16/17]: workers well ``` The following is complete code in `demo.py`: ```python import os import sys import math import logging import torch import torch.distributed as dist from typing import List from itertools import chain from torch.nn.parallel import DistributedDataParallel as DDP from torch.utils.data import DataLoader from torch.utils.data.distributed import DistributedSampler from datasets import load_dataset from transformers import ( default_data_collator, CONFIG_MAPPING, AutoConfig, AutoModelForCausalLM, AutoTokenizer, ) logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger = logging.getLogger(__name__) def print_rank_0(message): """If distributed is initialized, print only on rank 0.""" if dist.is_initialized(): if dist.get_rank() == 0: logger.info(message) else: logger.info(message) def build_datasets(tokenizer, phase): dataset_name = "wikitext" dataset_config_name="wikitext-103-raw-v1" cache_dir="./.cache/wikitext/wikitext103" validation_split_percentage = 5 raw_datasets = load_dataset( dataset_name, dataset_config_name, cache_dir=cache_dir ) if "validation" not in raw_datasets.keys(): raw_datasets["validation"] = load_dataset( dataset_name, dataset_config_name, split=f"train[:{validation_split_percentage}%]", ) raw_datasets["train"] = load_dataset( dataset_name, dataset_config_name, split=f"train[{validation_split_percentage}%:]", ) # Preprocessing the datasets. # First we tokenize all the texts. 
column_names = raw_datasets["train"].column_names text_column_name = "text" if "text" in column_names else column_names[0] def tokenize_function(examples): return tokenizer(examples[text_column_name]) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=16, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on dataset", ) block_size = tokenizer.model_max_length if block_size > 1024: logger.warning( "The chosen tokenizer supports a `model_max_length` that is longer than the default" " `block_size` value of 1024. If you would like to use a longer `block_size` up to" " `tokenizer.model_max_length` you can override this with `--block_size xxx`." ) block_size = 1024 # Main data processing function that will concatenate all texts from our dataset and generate # chunks of block_size. def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it # instead of this drop, you can # customize this part to your needs. if total_length >= block_size: total_length = (total_length // block_size) * block_size # Split by chunks of max_len. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result # Note that with `batched=True`, this map processes 1,000 texts together, # so group_texts throws away a remainde for each of those groups of 1,000 texts. # You can adjust that batch_siz here but a higher value might be slowe to preprocess. # # To speed up this part, we use multiprocessing. 
# See the documentation of the map method for more information: # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map lm_datasets = tokenized_datasets.map( group_texts, batched=True, num_proc=16, load_from_cache_file=True, desc=f"Grouping texts in chunks of {block_size}", ) return lm_datasets[phase] if __name__ == "__main__": local_rank = int(os.environ["LOCAL_RANK"]) dist.init_process_group(backend="nccl") rank = dist.get_rank() world_size = dist.get_world_size() logger.info(f"local_rank={local_rank}, rank={rank}, world_size={world_size}") device = torch.device("cuda", local_rank) seed = 1234 torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.set_device(local_rank) ### build model and tokenizer ### model_name_or_path = "gpt2" config = AutoConfig.from_pretrained(model_name_or_path) tokenizer = AutoTokenizer.from_pretrained( model_name_or_path, use_fast=True ) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, from_tf=bool(".ckpt" in model_name_or_path), config=config, low_cpu_mem_usage=False ) embedding_size = model.get_input_embeddings().weight.shape[0] if len(tokenizer) > embedding_size: model.resize_token_embeddings(len(tokenizer)) model.tie_weights() print_rank_0(model) model = model.to(device) # wrapper DDP if world_size > 1: model = DDP(model, device_ids=[device], output_device=device) # wrapper torch compile # torch._logging.set_logs(dynamo=logging.INFO) # torch._dynamo.config.verbose = True def comstom_backend( gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor] ): # gm.graph.print_tabular() return gm.forward torch._dynamo.reset() model = torch.compile(model, backend=comstom_backend) ### build data loader ### # Downloading and loading a dataset from the hub. train_dataset = build_datasets(tokenizer, "train") eval_dataset = build_datasets(tokenizer, "test") train_dataloader = DataLoader( train_dataset, collate_fn=default_data_collator, shuffle=False, batch_size=16, sampler=DistributedSampler(train_dataset), ) eval_dataloader = DataLoader( eval_dataset, collate_fn=default_data_collator, shuffle=False, batch_size=16, sampler=DistributedSampler(eval_dataset), ) ### start eval ### model.eval() with torch.no_grad(): for idx, batch_data in enumerate(eval_dataloader): # H2D batch_data["input_ids"] = batch_data["input_ids"].to(device) batch_data["attention_mask"] = batch_data["attention_mask"].to(device) batch_data["labels"] = batch_data["labels"].to(device) logger.info(f"[rank {rank}][{idx}/{len(eval_dataloader)}]: batch_data['input_ids'].shape={batch_data['input_ids'].shape}") # forward outputs = model(**batch_data) logger.info(f"[rank {rank}][{idx}/{len(eval_dataloader)}]: workers well") ``` The launch script `run.sh` is: ```bash #!/bin/bash set -e NUM_GPUS=$1 python -u -m torch.distributed.run --nproc_per_node $NUM_GPUS --nnodes 1 --node_rank 0 --master_port 6777 --master_addr localhost --max_restarts 0 --tee 3 demo.py ``` The execute command is `bash run.sh 1 (or 2)`. ### Versions Environments: ```bash Collecting environment information... 
PyTorch version: 2.1.0+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 10.5.0-1ubuntu1~20.04) 10.5.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.25.2 Libc version: glibc-2.31 Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe GPU 1: NVIDIA A100 80GB PCIe GPU 2: NVIDIA A100 80GB PCIe GPU 3: NVIDIA A100 80GB PCIe GPU 4: NVIDIA A100 80GB PCIe GPU 5: NVIDIA A100 80GB PCIe GPU 6: NVIDIA A100 80GB PCIe GPU 7: NVIDIA A100 80GB PCIe Nvidia driver version: 525.85.12 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz Stepping: 6 Frequency boost: enabled CPU MHz: 799.979 CPU max MHz: 3600.0000 CPU min MHz: 800.0000 BogoMIPS: 6200.00 Virtualization: VT-x L1d cache: 1.5 MiB L1i cache: 1 MiB L2 cache: 40 MiB L3 cache: 72 MiB NUMA node0 CPU(s): 0-15,32-47 NUMA node1 CPU(s): 16-31,48-63 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities Versions of relevant libraries: [pip3] flake8==4.0.1 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.21.5 [pip3] numpydoc==1.4.0 [pip3] 
torch==2.1.0+cu118 [pip3] torchvision==0.16.0 [pip3] triton==2.1.0 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py39h7f8727e_0 [conda] mkl_fft 1.3.1 py39hd3c417c_0 [conda] mkl_random 1.2.2 py39h51133e4_0 [conda] numpy 1.21.5 py39h6c91a56_3 [conda] numpy-base 1.21.5 py39ha15fc14_3 [conda] numpydoc 1.4.0 py39h06a4308_0 [conda] torch 2.1.0+cu118 pypi_0 pypi [conda] torchvision 0.16.0 pypi_0 pypi [conda] triton 2.1.0 pypi_0 pypi ``` Looking forward to your reply, thanks a lot!
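A condensed, hedged sketch of the failure mode described above: a toy `nn.Linear` stands in for GPT-2, so this is an assumption rather than a verified repro of the exact error, but it keeps the same pass-through custom backend, the DDP + `torch.compile` wrapping, and the smaller final batch, and it assumes a `torchrun` launch as in the script above.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def custom_backend(gm, example_inputs):
    # Same pass-through backend as in the full script above.
    return gm.forward

dist.init_process_group(backend="nccl")
device = torch.device("cuda", int(os.environ["LOCAL_RANK"]))
torch.cuda.set_device(device)

model = DDP(torch.nn.Linear(1024, 1024).to(device), device_ids=[device])
model = torch.compile(model, backend=custom_backend)

with torch.no_grad():
    model(torch.randn(16, 1024, device=device))  # full-size batch compiles and runs
    model(torch.randn(7, 1024, device=device))   # smaller final batch forces a recompile
```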
0
90
111,634
Batched matmul gives incorrect result on MPS devices
null
### 🐛 Describe the bug When the dimensions are large enough, batched matmul gives the wrong answer on MPS devices. Minimal example: ``` import torch zeros = torch.zeros(911, 9, 1, device=torch.device("mps")) ones = torch.ones(1, 32769, device=torch.device("mps")) zeros @ ones ``` This should give a tensor of 0s, but it instead gives a tensor in which 50,505,735 of the entries are 1. If the operation is performed a second time with the same tensors, the number of 1s changes to 182,632,455. The dimensions in this example are minimal, i.e. the code runs correctly if any of them is made any smaller. ### Versions PyTorch version: 2.1.0 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.5.2 (arm64) GCC version: Could not collect Clang version: 15.0.0 (clang-1500.0.40.1) CMake version: version 3.27.7 Libc version: N/A Python version: 3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ] (64-bit runtime) Python platform: macOS-13.5.2-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M2 Pro Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.1 [pip3] torch==2.1.0 [conda] numpy 1.26.1 pypi_0 pypi [conda] torch 2.1.0 pypi_0 pypi
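A possible way to quantify the mismatch, as a hedged extension of the minimal example above, is to compare the MPS result against a CPU reference and count the incorrect entries:

```python
import torch

zeros = torch.zeros(911, 9, 1)
ones = torch.ones(1, 32769)
ref = zeros @ ones  # CPU reference: all zeros

out = (zeros.to("mps") @ ones.to("mps")).cpu()
print("incorrect entries:", (out != ref).sum().item())
```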
0
91
111,633
Status Tracker And Summary of Support Needed: Make Dynamo Generated Artifacts Debuggable
null
Currently, dynamically translated bytecode from Dynamo can only be run as a black box. With the help of recent tools like https://github.com/thuml/depyf , it is possible to decompile bytecode into source code, and to enable debugging tools to debug those generated bytecodes. This issue (status tracker) documents the progress and the support needed from the PyTorch side to fulfill that goal. # General idea: PyTorch generates bytecode (and many other artifacts) dynamically. We can decompile and dump the source code into files, let users set breakpoints, and re-run the program to hit those breakpoints. # Main difficulty: In essence, artifacts (transformed bytecode, compiled functions, etc.) are generated **dynamically**. We need a stable naming for each artifact so that users can recognize them and reuse them across runs. # Some minor concerns: Some function names in Dynamo are not valid (notoriously, the resume functions have `<resume in xxx>` names). We need to assign valid names to them. # Example usage: The main usage comes from the `depyf` side. The rough idea is to set up a src directory and dump the generated src there for debugging. ```diff + import depyf + # set up a hook to dump src of dynamically generated artifacts + depyf.enable_torch_compile_debugging(dump_dir="/path/to/dumped/src") def toy_example(a, b): x = a / (torch.abs(a) + 1) if b.sum() < 0: b = b * -1 return x * b for _ in range(100): toy_example(torch.randn(10), torch.randn(10)) ``` The ideal result looks like this: ![image](https://github.com/pytorch/pytorch/assets/23236638/8f586170-7745-4cf6-9950-2d16fe831a46) ```[tasklist] ### Tasks - [x] make code name of resume functions valid, finished in https://github.com/pytorch/pytorch/pull/111635 . ```
1
92
111,632
[dynamo][profiler] console spew of ..."torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored" for pages...
null
### 🐛 Describe the bug when compiling either NanoGPT or T5 using FSDP, get pages of console spew of this warning: `[rank1]:[2023-10-20 04:13:04,946] [6/21] torch._dynamo.variables.torch: [WARNING] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored` Request that this be modified to a warn once type of warning - it is warning the same thing over and over, and then on every rank, spamming up pages of console view. for example: <img width="1434" alt="console_prof_function_warning" src="https://github.com/pytorch/pytorch/assets/46302957/bc3451ed-5770-4814-ae5d-b8d47f0d0fa6"> you can repro using latest nightly and run this for T5 example (nanogpt one is worse in terms of total spew): https://github.com/lessw2020/transformer_framework/blob/main/run_training.sh (adjust to 8 gpus if needed) *you may need to edit config/t5_config and ensure use_torch_compile = True ### Versions PyTorch version: 2.2.0.dev20231019+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.31 Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.15.0-1036-aws-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.7.99 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A10G GPU 1: NVIDIA A10G GPU 2: NVIDIA A10G GPU 3: NVIDIA A10G Nvidia driver version: 525.85.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 48 On-line CPU(s) list: 0-47 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7R32 Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 768 KiB L1i cache: 768 KiB L2 cache: 12 MiB L3 cache: 96 MiB NUMA node0 CPU(s): 0-47 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid Versions of relevant libraries: [pip3] flake8==4.0.1 [pip3] 
flake8-bugbear==22.4.25 [pip3] flake8-polyfill==1.0.2 [pip3] memory-efficient-attention-pytorch==0.1.6 [pip3] mypy==1.0.1 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.3 [pip3] pytorch-triton==2.1.0+6e4932cda8 [pip3] torch==2.2.0.dev20231019+cu121 [pip3] torch-model-archiver==0.5.3b20220226 [pip3] torch-workflow-archiver==0.2.8b20230512 [pip3] torchaudio==2.2.0.dev20231019+cu121 [pip3] torchmultimodal==0.1.0b0 [pip3] torchserve==0.6.0b20220513 [pip3] torchtext==0.14.1 [pip3] torchvision==0.17.0.dev20231019+cu121 [pip3] triton==2.0.0.dev20221202 [pip3] triton-nightly==2.1.0.dev20231012235740 [pip3] vit-pytorch==1.2.2 [conda] blas 1.0 mkl conda-forge [conda] captum 0.5.0 0 pytorch [conda] cudatoolkit 11.7.0 hd8887f6_11 conda-forge [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] libblas 3.9.0 16_linux64_mkl conda-forge [conda] libcblas 3.9.0 16_linux64_mkl conda-forge [conda] liblapack 3.9.0 16_linux64_mkl conda-forge [conda] magma-cuda117 2.6.1 1 pytorch [conda] mkl 2022.2.1 h84fe81f_16997 conda-forge [conda] numpy 1.24.3 py39h6183b62_0 conda-forge [conda] pytorch-cuda 11.7 h778d358_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] pytorch-triton 2.1.0+6e4932cda8 pypi_0 pypi [conda] torch 2.2.0.dev20231019+cu121 pypi_0 pypi [conda] torch-model-archiver 0.5.3 py39_0 pytorch [conda] torch-workflow-archiver 0.2.8 py39_0 pytorch [conda] torchaudio 2.2.0.dev20231019+cu121 pypi_0 pypi [conda] torchmultimodal 0.1.0b0 pypi_0 pypi [conda] torchserve 0.6.0 py39_0 pytorch [conda] torchtext 0.14.1 py39 pytorch [conda] torchvision 0.17.0.dev20231019+cu121 pypi_0 pypi [conda] triton 2.0.0.dev20221202 pypi_0 pypi [conda] triton-nightly 2.1.0.dev20231012235740 pypi_0 pypi [conda] vit-pytorch 1.2.2 pypi_0 pypi
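As a hedged user-side stopgap (an assumption, not the requested fix of making the warning warn-once), the logger whose name appears in the quoted warning line can be silenced directly:

```python
import logging

# The logger name comes from the warning line quoted above.
logging.getLogger("torch._dynamo.variables.torch").setLevel(logging.ERROR)
```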
0
93
111,631
int_mm microbenchmark experiments
module: inductor, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111631 * #111413 Summary: block pointers, new configs and group_m=4 possible ---------------------------------------ALLOW GROUP_M=4--------------------------------------- test GPU time | shape | torch int_mm | triton matmul | inductor aten int_mm | inductor triton int_mm | inductor triton bp int_mm | |--------------------------|------|------|------|------|------| |[128, 9216] x [9216, 4096]|0.0692|0.0653|0.0696|0.0743|0.0743| |[128, 4096] x [4096, 4096]|0.0335|0.0331|0.0339|0.0377|0.0381| |[128, 4096] x [4096, 1000]|0.0406|0.0261|0.0406|0.0356|0.0352| |[2048, 768] x [768, 768]|0.0210|0.0150|0.0210|0.0161|0.0166| |[2048, 768] x [768, 3072]|0.0448|0.0341|0.0449|0.0400|0.0404| |[2048, 3072] x [3072, 768]|0.0450|0.0315|0.0446|0.0317|0.0317| |[1024, 768] x [768, 768]|0.0154|0.0116|0.0151|0.0146|0.0142| |[1024, 768] x [768, 3072]|0.0264|0.0206|0.0264|0.0239|0.0239| |[1024, 3072] x [3072, 768]|0.0322|0.0242|0.0324|0.0290|0.0294| |[1024, 768] x [768, 2304]|0.0255|0.0199|0.0252|0.0212|0.0208| ---------------------------------------WITHOUT GROUP_M=4--------------------------------------- test GPU time | shape | torch int_mm | triton matmul | inductor aten int_mm | inductor triton int_mm | inductor triton bp int_mm | |--------------------------|------|------|------|------|------| |[128, 9216] x [9216, 4096]|0.0694|0.0661|0.0693|0.0744|0.0744| |[128, 4096] x [4096, 4096]|0.0336|0.0334|0.0337|0.0378|0.0378| |[128, 4096] x [4096, 1000]|0.0409|0.0261|0.0409|0.0356|0.0352| |[2048, 768] x [768, 768]|0.0211|0.0150|0.0210|0.0164|0.0160| |[2048, 768] x [768, 3072]|0.0443|0.0341|0.0448|0.0405|0.0405| |[2048, 3072] x [3072, 768]|0.0441|0.0310|0.0440|0.0316|0.0312| |[1024, 768] x [768, 768]|0.0151|0.0118|0.0155|0.0147|0.0143| |[1024, 768] x [768, 3072]|0.0264|0.0206|0.0264|0.0236|0.0237| |[1024, 3072] x [3072, 768]|0.0326|0.0242|0.0322|0.0292|0.0292| |[1024, 768] x [768, 2304]|0.0250|0.0198|0.0250|0.0208|0.0212| ---------------------------------------WITHOUT GROUP_M=4 and without bp kernel branches--------------------------------------- test GPU time | shape | torch int_mm | triton matmul | inductor aten int_mm | inductor triton int_mm | inductor triton bp int_mm | |--------------------------|------|------|------|------|------| |[128, 9216] x [9216, 4096]|0.0694|0.0664|0.0694|0.0748|0.0747| |[128, 4096] x [4096, 4096]|0.0335|0.0330|0.0337|0.0381|0.0377| |[128, 4096] x [4096, 1000]|0.0405|0.0264|0.0408|0.0355|0.0355| |[2048, 768] x [768, 768]|0.0211|0.0153|0.0216|0.0164|0.0160| |[2048, 768] x [768, 3072]|0.0441|0.0341|0.0441|0.0403|0.0403| |[2048, 3072] x [3072, 768]|0.0446|0.0319|0.0445|0.0312|0.0316| |[1024, 768] x [768, 768]|0.0154|0.0152|0.0154|0.0147|0.0147| |[1024, 768] x [768, 3072]|0.0261|0.0203|0.0265|0.0240|0.0236| |[1024, 3072] x [3072, 768]|0.0323|0.0245|0.0323|0.0296|0.0296| |[1024, 768] x [768, 2304]|0.0255|0.0198|0.0255|0.0212|0.0213| ---------------------------------------WITHOUT GROUP_M=4 and without new configs--------------------------------------- test GPU time | shape | torch int_mm | triton matmul | inductor aten int_mm | inductor triton int_mm | inductor triton bp int_mm | |--------------------------|------|------|------|------|------| |[128, 9216] x [9216, 4096]|0.0693|0.0660|0.0697|0.0743|0.0743| |[128, 4096] x [4096, 4096]|0.0341|0.0335|0.0340|0.0381|0.0377| |[128, 4096] x [4096, 1000]|0.0408|0.0260|0.0404|0.0353|0.0352| |[2048, 768] x [768, 768]|0.0210|0.0154|0.0210|0.0212|0.0208| |[2048, 768] x [768, 
3072]|0.0443|0.0345|0.0443|0.0452|0.0448| |[2048, 3072] x [3072, 768]|0.0441|0.0313|0.0446|0.0499|0.0503| |[1024, 768] x [768, 768]|0.0154|0.0117|0.0150|0.0142|0.0142| |[1024, 768] x [768, 3072]|0.0265|0.0205|0.0261|0.0263|0.0263| |[1024, 3072] x [3072, 768]|0.0321|0.0246|0.0321|0.0325|0.0321| |[1024, 768] x [768, 2304]|0.0250|0.0195|0.0250|0.0238|0.0239| ---------------------------------------ALLOW GROUP_M=4 but without new configs--------------------------------------- test GPU time | shape | torch int_mm | triton matmul | inductor aten int_mm | inductor triton int_mm | inductor triton bp int_mm | |--------------------------|------|------|------|------|------| |[128, 9216] x [9216, 4096]|0.0696|0.0666|0.0697|0.0747|0.0747| |[128, 4096] x [4096, 4096]|0.0337|0.0331|0.0341|0.0381|0.0377| |[128, 4096] x [4096, 1000]|0.0409|0.0264|0.0408|0.0356|0.0356| |[2048, 768] x [768, 768]|0.0211|0.0154|0.0210|0.0214|0.0210| |[2048, 768] x [768, 3072]|0.0445|0.0345|0.0447|0.0450|0.0446| |[2048, 3072] x [3072, 768]|0.0442|0.0310|0.0442|0.0500|0.0499| |[1024, 768] x [768, 768]|0.0154|0.0120|0.0154|0.0142|0.0146| |[1024, 768] x [768, 3072]|0.0265|0.0202|0.0265|0.0256|0.0256| |[1024, 3072] x [3072, 768]|0.0321|0.0244|0.0321|0.0325|0.0321| |[1024, 768] x [768, 2304]|0.0254|0.0200|0.0250|0.0234|0.0238| Test Plan: Reviewers: Subscribers: Tasks: Tags: cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
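For context, a minimal hedged usage sketch of the int8 matmul being benchmarked, using one of the shapes from the tables above; `torch._int_mm` is a private op requiring a recent CUDA GPU, and this harness is an assumption, not the benchmark script from the PR.

```python
import torch

a = torch.randint(-128, 128, (128, 4096), dtype=torch.int8, device="cuda")
b = torch.randint(-128, 128, (4096, 4096), dtype=torch.int8, device="cuda")
c = torch._int_mm(a, b)  # int32 output
print(c.dtype, c.shape)
```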
1
94
111,629
DISABLED test_narrow_cpu_float64 (__main__.TestNestedTensorDeviceTypeCPU)
triaged, module: flaky-tests, module: nestedtensor, skipped, oncall: pt2
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_narrow_cpu_float64&suite=TestNestedTensorDeviceTypeCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17880663159). Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_narrow_cpu_float64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_nestedtensor.py` cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
1
95
111,628
Use Dr.CI GitHub checkrun summary when querying its API fails
topic: not user facing, test-config/default
This will allow the internal SandCastle job to access Dr.CI classification results via the GitHub checkrun summary and correctly ignore unrelated failures. ### Testing Adding `TestBypassFailuresOnSandCastle`, which covers the case where the Dr.CI API returns nothing.
4
96
111,627
[inductor] Implement clone removal for user defined triton kernel via reinplace_scatters
ciflow/trunk, module: inductor, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #111627 * #111434 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
1
97
111,626
DISABLED test_meta_outplace_fft_hfft_cpu_uint8 (__main__.TestMetaCPU)
triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_hfft_cpu_uint8&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17879045178). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_meta_outplace_fft_hfft_cpu_uint8` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_meta.py` cc @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519
2
98
111,623
Missing `ignored_param` when calling wrapper_cls (FSDP) recursively
oncall: distributed, triaged
https://github.com/pytorch/pytorch/blob/935f6977542affc0d16c66333a13d60dae6aa5fa/torch/distributed/fsdp/wrap.py#L561 When calling the FSDP class recursively, `ignored_param` (which is supposed to be passed via `ignored_states`) seems to be missing. When I pass parameters to ignore to FSDP, this information is lost at this interface when the FSDP class is called recursively. I think `ignored_states` should be re-defined here and should be part of `kwargs`. cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
5
99
111,622
[5/N] Make torch context manager a TorchCtxManagerClassVariable
topic: not user facing, module: dynamo, ciflow/inductor
The major change in this PR is to make the torch context manager classes a separate ```TorchCtxManagerClassVariable```, since we have a dynamo implementation for these ctx managers. I was thinking of wrapping them as ```UserDefinedClassVariable``` and doing the dispatch at ```USCVariable.call_function```, but it seems to be almost the same amount of work and this way is clearer. This is on the way to moving ```TorchVariable``` to ```TorchFunctionVariable```, which will only handle the functions that would be allowed in graph (e.g., ```torch.sin```) and constant folded (e.g., ```torch.is_floating_point```). All other torch functions would go through skip/inline rules, and would be wrapped as ```UserFunctionVariable``` (for inlined) or ```SkipFilesVariable``` (for skipped). The next steps: * Wrap torch modules, classes, and objects as regular ```PythonModuleVariable```, ```UserDefinedClassVariable``` and ```UserDefinedObjectVariable```. * Generate the allow-in-graph torch functions list and wrap them as ```TorchFunctionVariable```. * Finally, merge ```skipfiles.check``` and ```is_allowed``` into one function ```allow_skip.check(fn)``` which would return an Enum of allow, skip and inline. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
3
100
111,621
maximum Python version supported is not indicated
module: docs, triaged
### 📚 The doc issue https://pytorch.org/get-started/locally/ Python 3.12 has been released, but it is not supported by the PyTorch wheels. This is not indicated on the page; only the minimum Python version is indicated. ### Suggest a potential alternative/fix Indicate both the minimum and maximum Python versions supported when installing via pip. cc @svekars @carljparker
0