Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---|
701 | 110,032 |
Race condition on shutdown involving PThreadPool and autograd
|
triaged, module: multithreading, module: sanitizers
|
### 🐛 Describe the bug
I don't have an OSS repro. The internal test that (only very rarely) triggers the race is https://www.internalfb.com/intern/test/281475084563302?ref_report_id=0
Here is the TSAN error (I verified all the backtraces are benign):
```
WARNING: ThreadSanitizer: data race (pid=11205)
Write of size 8 at 0x00000340fef0 by main thread:
#0 std::unique_ptr<caffe2::PThreadPool, std::default_delete<caffe2::PThreadPool> >::~unique_ptr() fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/unique_ptr.h:362 (lite_trainer_test+0x2b9a1d1)
#1 cxa_at_exit_wrapper(void*) <null> (lite_trainer_test+0x3219eb0)
Previous read of size 8 at 0x00000340fef0 by thread T1:
#0 std::__uniq_ptr_impl<caffe2::PThreadPool, std::default_delete<caffe2::PThreadPool> >::_M_ptr() const fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/unique_ptr.h:173 (lite_trainer_test+0x2b9ba61)
#1 std::unique_ptr<caffe2::PThreadPool, std::default_delete<caffe2::PThreadPool> >::get() const fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/unique_ptr.h:422 (lite_trainer_test+0x2b9a385)
#2 caffe2::pthreadpool() xplat/caffe2/caffe2/utils/threadpool/pthreadpool-cpp.cc:105 (lite_trainer_test+0x2b99fe2)
#3 at::init_num_threads() xplat/caffe2/aten/src/ATen/ParallelNative.cpp:207 (lite_trainer_test+0x8961b7)
#4 torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) xplat/caffe2/torch/csrc/autograd/engine.cpp:353 (lite_trainer_test+0x2c5ac25)
#5 __invoke_impl<void, void (torch::autograd::Engine::*)(int, const std::shared_ptr<torch::autograd::ReadyQueue> &, bool), torch::autograd::Engine *, signed char, std::shared_ptr<torch::autograd::ReadyQueue>, bool> fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/invoke.h:74 (lite_trainer_test+0x2c6c629)
#6 __invoke<void (torch::autograd::Engine::*)(int, const std::shared_ptr<torch::autograd::ReadyQueue> &, bool), torch::autograd::Engine *, signed char, std::shared_ptr<torch::autograd::ReadyQueue>, bool> fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/invoke.h:96 (lite_trainer_test+0x2c6c629)
#7 void std::thread::_Invoker<std::tuple<void (torch::autograd::Engine::*)(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool), torch::autograd::Engine*, signed char, std::shared_ptr<torch::autograd::ReadyQueue>, bool> >::_M_invoke<0ul, 1ul, 2ul, 3ul, 4ul>(std::_Index_tuple<0ul, 1ul, 2ul, 3ul, 4ul>) fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:253 (lite_trainer_test+0x2c6c629)
#8 operator() fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:260 (lite_trainer_test+0x2c6c441)
#9 std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (torch::autograd::Engine::*)(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool), torch::autograd::Engine*, signed char, std::shared_ptr<torch::autograd::ReadyQueue>, bool> > >::_M_run() fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:211 (lite_trainer_test+0x2c6c441)
#10 execute_native_thread_routine /home/engshare/third-party2/libgcc/11.x/src/gcc-11.x/x86_64-facebook-linux/libstdc++-v3/src/c++11/../../../.././libstdc++-v3/src/c++11/thread.cc:82:18 (libstdc++.so.6+0xdf4e4)
Location is global 'caffe2::pthreadpool()::threadpool' of size 8 at 0x00000340fef0 (lite_trainer_test+0x00000340fef0)
Thread T1 (tid=11291, running) created by main thread at:
#0 pthread_create <null> (lite_trainer_test+0x3258e8d)
#1 __gthread_create /home/engshare/third-party2/libgcc/11.x/src/gcc-11.x/x86_64-facebook-linux/libstdc++-v3/include/x86_64-facebook-linux/bits/gthr-default.h:663:35 (libstdc++.so.6+0xdf80e)
#2 std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)()) /home/engshare/third-party2/libgcc/11.x/src/gcc-11.x/x86_64-facebook-linux/libstdc++-v3/src/c++11/../../../.././libstdc++-v3/src/c++11/thread.cc:147:37 (libstdc++.so.6+0xdf80e)
#3 torch::autograd::Engine::start_device_threads() xplat/caffe2/torch/csrc/autograd/engine.cpp:1483 (lite_trainer_test+0x2c60efe)
#4 __invoke_impl<void, void (torch::autograd::Engine::*const &)(), torch::autograd::Engine *> fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/invoke.h:74 (lite_trainer_test+0x2c6b8d6)
#5 __invoke<void (torch::autograd::Engine::*const &)(), torch::autograd::Engine *> fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/invoke.h:96 (lite_trainer_test+0x2c6b8d6)
#6 operator()<torch::autograd::Engine *> fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/functional:131 (lite_trainer_test+0x2c6b8d6)
#7 invoke<void (torch::autograd::Engine::*&)(), torch::autograd::Engine *> fbsource/c10/util/C++17.h:195 (lite_trainer_test+0x2c6b8d6)
#8 void c10::once_flag::call_once_slow<void (torch::autograd::Engine::*)(), torch::autograd::Engine*>(void (torch::autograd::Engine::*&&)(), torch::autograd::Engine*&&) fbsource/c10/util/CallOnce.h:50 (lite_trainer_test+0x2c6b8d6)
#9 call_once<c10::once_flag, void (torch::autograd::Engine::*)(), torch::autograd::Engine *> fbsource/c10/util/CallOnce.h:20 (lite_trainer_test+0x2c60b96)
#10 torch::autograd::Engine::initialize_device_threads_pool() xplat/caffe2/torch/csrc/autograd/engine.cpp:1277 (lite_trainer_test+0x2c60b96)
#11 torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>, torch::autograd::InputBuffer&&) xplat/caffe2/torch/csrc/autograd/engine.cpp:1285 (lite_trainer_test+0x2c61011)
#12 torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) xplat/caffe2/torch/csrc/autograd/engine.cpp:1257 (lite_trainer_test+0x2c63515)
#13 torch::autograd::run_backward(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool) xplat/caffe2/torch/csrc/autograd/autograd.cpp:139 (lite_trainer_test+0x2c48a46)
#14 torch::autograd::backward(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, c10::optional<bool>, bool, std::vector<at::Tensor, std::allocator<at::Tensor> > const&) xplat/caffe2/torch/csrc/autograd/autograd.cpp:171 (lite_trainer_test+0x2c47f62)
#15 torch::autograd::VariableHooks::_backward(at::Tensor const&, c10::ArrayRef<at::Tensor>, c10::optional<at::Tensor> const&, c10::optional<bool>, bool) const xplat/caffe2/torch/csrc/autograd/variable.cpp:564 (lite_trainer_test+0x2c88c02)
#16 at::Tensor::_backward(c10::ArrayRef<at::Tensor>, c10::optional<at::Tensor> const&, c10::optional<bool>, bool) const xplat/caffe2/aten/src/ATen/core/Tensor.cpp:140 (lite_trainer_test+0x8ba331)
#17 at::Tensor::backward(at::Tensor const&, c10::optional<bool>, bool, c10::optional<c10::ArrayRef<at::Tensor> >) const fbsource/ATen/core/TensorBody.h:445 (lite_trainer_test+0x721f3c)
#18 torch::jit::LiteTrainer_LinearRegression_Test::TestBody() xplat/caffe2/fb/lite_trainer/lite_trainer_test.cpp:71 (lite_trainer_test+0x714c95)
#19 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:2665 (lite_trainer_test+0x31b3eed)
#20 testing::Test::Run() third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:2682 (lite_trainer_test+0x31b3b13)
#21 testing::TestInfo::Run() third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:2861 (lite_trainer_test+0x31b50b6)
#22 testing::TestSuite::Run() third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:3015 (lite_trainer_test+0x31b6c42)
#23 testing::internal::UnitTestImpl::RunAllTests() third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:5917 (lite_trainer_test+0x31cc067)
#24 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:2665 (lite_trainer_test+0x31cb8f4)
#25 testing::UnitTest::Run() third-party/googletest/1.11.0/googletest/googletest/src/gtest.cc:5500 (lite_trainer_test+0x31cb34e)
#26 RUN_ALL_TESTS() fbsource/gtest/gtest.h:2490 (lite_trainer_test+0x31a7a0a)
#27 main third-party/googletest/1.11.0/googletest/googletest/src/gtest_main.cc:52 (lite_trainer_test+0x31a79b2)
```
### Versions
main
| 1 |
702 | 110,029 |
Dataloader resetting with num_workers=1 and persistent_workers=True
|
module: dataloader, triaged
|
### 🐛 Describe the bug
I hit the following bug when initializing a pytorch Dataloader with `num_workers=1` and `persistent_workers=True`. It seems the Dataloader gets reset:
`UserWarning: Length of IterableDataset <ConstantLengthDataset object at 0x7f4f3d028790> was reported to be 55936 (when accessing len(dataloader)), but 69087 samples have been fetched. For multiprocessing data-loading, this could be caused by not properly configuring the IterableDataset replica at each worker.`
I also tried using the `worker_init_fn` as suggested, where `dataset.start` and `.end` are the start and end indices. It still hits the above warning:
```
def worker_init_fn(worker_id):
    worker_info = torch.utils.data.get_worker_info()
    dataset = worker_info.dataset  # the dataset copy in this worker process
    overall_start = dataset.start
    overall_end = dataset.end
    # configure the dataset to only process the split workload
    per_worker = int(math.ceil((overall_end - overall_start) / float(worker_info.num_workers)))
    worker_id = worker_info.id
    dataset.start = overall_start + worker_id * per_worker
    dataset.end = min(dataset.start + per_worker, overall_end)
    logger.info(f"Initializing worker with split of data from {dataset.start} to {dataset.end}")
```
From the training perspective, I hit this error about every 2.46 epochs; the epoch counter resets to 1 and training then continues from there, but I suspect it is skipping part of the data. I worry this might have affected training.
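For reference, a minimal sketch of how such a loader is typically constructed; the batch size, the epoch loop, and the `dataset` object here are placeholders (assumptions, not values from this report), while `worker_init_fn` is the function shown above:
```python
from torch.utils.data import DataLoader

# `dataset` is assumed to be the IterableDataset (e.g. ConstantLengthDataset)
# exposing .start and .end attributes
loader = DataLoader(
    dataset,
    batch_size=8,                  # illustrative value
    num_workers=1,
    persistent_workers=True,
    worker_init_fn=worker_init_fn,
)

for epoch in range(3):             # illustrative epoch loop
    for batch in loader:
        ...                        # training step
```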
### Versions
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==2.0.8
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.0.1+cu118
[pip3] torch-optimizer==0.3.0
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2+cpu
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[pip3] triton-pre-mlir==2.0.0
[conda] Could not collect
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 0 |
703 | 110,027 |
[ignore] placeholder PR
|
topic: not user facing
|
Probably going to delete
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #105236
* #104974
* #104483
* __->__ #110027
| 1 |
704 | 110,026 |
DISABLED test_raises_mesh_dim_less_than_2 (__main__.TestDeviceMeshGetItem)
|
oncall: distributed, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_raises_mesh_dim_less_than_2&suite=TestDeviceMeshGetItem) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17104082187).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_raises_mesh_dim_less_than_2`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/_tensor/test_device_mesh.py`
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 1 |
705 | 110,021 |
SummaryWriter.add_figure: add type hints
|
triaged, module: tensorboard, open source, oncall: visualization
|
Discovered a bug in our code that could have been prevented by type hints, so I added them.
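For context, a rough sketch of the kind of annotations such a change adds; the parameter names follow the existing `SummaryWriter.add_figure` signature, but the exact hints in this PR may differ:
```python
from typing import List, Optional, Union

from matplotlib.figure import Figure

def add_figure(
    self,
    tag: str,
    figure: Union[Figure, List[Figure]],
    global_step: Optional[int] = None,
    close: bool = True,
    walltime: Optional[float] = None,
) -> None:
    ...
```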
| 4 |
706 | 110,020 |
[aotinductor] Pass TorchIR to AOTInductor
|
fb-exported, ciflow/trunk, module: inductor, module: dynamo, ciflow/inductor, release notes: inductor, module: export
|
Updates `_export.aot_compile` to pass a Torch IR graph to inductor, allowing inductor to now run the pre_grad_passes and reuse more of inductor's code.
Also updates the API to only return the `so_path`, rather than also returning the exported program. The pytree call spec is now serialized and placed inside of the generated model code. When calling the model, because there is no C++ pytree implementation linked yet, we can access the call specs through `get_call_spec()` and call pytree flatten/unflatten in Python.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan
| 9 |
707 | 110,014 |
tan/tanh discrepancies with complex due to jiterator
|
module: cuda, triaged, module: jiterator
|
### 🐛 Describe the bug
There is perhaps a large discrepancy (requiring atol to be increased to 3e-4) between the before and after #102427 implementations of tan/tanh on CUDA with complex values.
Repro:
```
import torch
x = torch.tensor(-7.8167-0.0451j, device='cuda:0')
torch.set_printoptions(precision=10)
print(torch.tan(x))
print(torch._foreach_tan([x])[0])
print(torch._foreach_tan([x.to(device="cpu")])[0])
```
Before: (screenshot of the printed outputs)
After: (screenshot of the printed outputs)
I happened to observe this when debugging why some tan tests were failing when I added new sample inputs to the foreach tests in https://github.com/pytorch/pytorch/pull/109402#discussion_r1333562975.
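One way to quantify the discrepancy is to compare each CUDA result against a complex128 CPU reference; a small sketch, assuming the double-precision CPU result is an acceptable ground truth:
```python
import torch

x = torch.tensor(-7.8167 - 0.0451j, device='cuda:0')
ref = torch.tan(x.to(device='cpu', dtype=torch.complex128))  # high-precision reference

for candidate in (torch.tan(x), torch._foreach_tan([x])[0]):
    candidate = candidate.to(device='cpu', dtype=torch.complex128)
    # per the report above, atol has to be raised to about 3e-4 before these agree
    print((candidate - ref).abs().item(), torch.allclose(candidate, ref, atol=3e-4))
```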
### Versions
trunk
cc @ptrblck @mruberry @parth-desai @peterbell10
| 2 |
708 | 110,012 |
DISABLED test_tags_module (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_module&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17093971077).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_module`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 6 |
709 | 110,006 |
Change signature of CompilerFn for register_backend decorator
|
triaged, open source, topic: not user facing, module: dynamo, ciflow/inductor
|
## Description
Add `...` to show that the CompilerFn for a custom backend can take additional options
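As a concrete illustration (a sketch, not code from this PR): a custom backend is just a callable that Dynamo invokes with the captured graph, and it can carry extra options of its own, e.g. bound via `functools.partial`:
```python
import functools
import torch

def my_backend(gm: torch.fx.GraphModule, example_inputs, *, verbose: bool = False):
    # extra keyword options beyond (gm, example_inputs) are the backend's own business
    if verbose:
        print(gm.graph)
    return gm.forward  # simply run the captured graph eagerly

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model, backend=functools.partial(my_backend, verbose=True))
compiled(torch.randn(2, 4))
```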
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 3 |
710 | 110,004 |
Please offer packages with local version `torch==2.1.0+cpu` for macOS
|
module: build, oncall: releng, triaged
|
### 🚀 The feature, motivation and pitch
https://download.pytorch.org/whl/test/torch/ offers CPU builds of version 2.1.0 RC. For Intel Linux and Windows both packages are available with local version `2.1.0+cpu`. For macOS there is only version `2.1.0` without an appended local version.
This currently makes it very hard to add PyTorch as a dependency to a project managed with poetry if compatibility to both Linux and macOS is desired. In poetry it's currently not possible to prefer `2.1.0` over `2.1.0+cpu` (https://github.com/python-poetry/poetry/issues/7256) and there's no working way to configure packages from this same source for different systems with different local version numbers.
### Alternatives
Users could host the packages themselves (with controlled version numbers), but that is not possible in all situations.
### Additional context
Poetry is a very popular solution to manage dependencies of complex Python projects. It would be great if PyTorch worked well with it.
cc @malfet @seemethere
| 0 |
711 | 110,003 |
[aotinductor] support _scaled_dot_product_flash_attention fallback
|
module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #110003
This PR supports the `_scaled_dot_product_flash_attention` fallback kernel. Note that in the abi_compatible mode, we retrieve outputs by passing output argument pointers rather than relying on `std::get`.
It also fixes an issue related to dynamic shapes, where we wrongly queried undefined dynamic symbols.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @kadeng @muchulee8 @aakhundov
| 9 |
712 | 110,000 |
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED
|
module: cuda, triaged
|
### 🐛 Describe the bug
It throws `RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED` when I call `torch.cuda.current_device()`.
```python
import torch
torch.cuda.current_device()
```
stacktrace:
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 260, in _lazy_init
queued_call()
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 145, in _check_capability
capability = get_device_capability(d)
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 381, in get_device_capability
prop = get_device_properties(device)
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 399, in get_device_properties
return _get_device_properties(device) # type: ignore[name-defined]
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1682343997789/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 674, in current_device
_lazy_init()
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 264, in _lazy_init
raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1682343997789/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch.
CUDA call was originally invoked at:
[' File "<stdin>", line 1, in <module>\n', ' File "<frozen importlib._bootstrap>", line 1007, in _find_and_load\n', ' File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 680, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 850,
in exec_module\n', ' File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed\n', ' File "/opt/conda/lib/python3.9/site-packages/torch/__init__.py", line 1146, in <module>\n _C._initExtension(manager_path())\n', ' File "<frozen importlib._bootstrap>", line 1007, in _find_and_load\n', ' File "<frozen imp
ortlib._bootstrap>", line 986, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 680, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 850, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed\n', ' File "/opt/conda/lib/python3.9/
site-packages/torch/cuda/__init__.py", line 197, in <module>\n _lazy_call(_check_capability)\n', ' File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 195, in _lazy_call\n _queued_calls.append((callable, traceback.format_stack()))\n']
[' File "<stdin>", line 1, in <module>\n', ' File "<frozen importlib._bootstrap>", line 1007, in _find_and_load\n', ' File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 680, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 8
50, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed\n', ' File "/opt/conda/lib/python3.9/site-packages/torch/__init__.py", line 1146, in <module>\n _C._initExtension(manager_path())\n', ' File "<frozen importlib._bootstrap>", line 1007, in _find_and_load\n', ' File "<froz
en importlib._bootstrap>", line 986, in _find_and_load_unlocked\n', ' File "<frozen importlib._bootstrap>", line 680, in _load_unlocked\n', ' File "<frozen importlib._bootstrap_external>", line 850, in exec_module\n', ' File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed\n', ' File "/opt/conda/lib/p
ython3.9/site-packages/torch/cuda/__init__.py", line 197, in <module>\n _lazy_call(_check_capability)\n', ' File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 195, in _lazy_call\n _queued_calls.append((callable, traceback.format_stack()))\n']
```
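Since the assert fires while enumerating devices, a small check of what PyTorch can actually see may help narrow this down; a hedged diagnostic, where the MIG UUID is a placeholder for one listed by `nvidia-smi -L`:
```python
import os

# MIG slices are not addressable by plain indices unless exposed explicitly;
# this must be set before CUDA is initialized in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder

import torch

print(torch.cuda.is_available())
print(torch.cuda.device_count())    # how many devices PyTorch enumerates
print(torch.cuda.current_device())  # the call that currently asserts
```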
### Versions
Python 3.9.17
Torch version: 2.0.1
GPU: 4 * A100, MIG enabled
Can't run the script because it throws the same exception.
cc @ptrblck
| 1 |
713 | 109,997 |
DISABLED test_mandelbrot_numpy (__main__.MiscTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mandelbrot_numpy&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17086622044).
Over the past 3 hours, it has been determined flaky in 37 workflow(s) with 111 failures and 37 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mandelbrot_numpy`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 6 |
714 | 109,996 |
DISABLED test_tags_function_with_kwargs (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_function_with_kwargs&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17085019023).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 24 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_function_with_kwargs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 7 |
715 | 109,995 |
DISABLED test_mandelbrot_numpy_dynamic_shapes (__main__.DynamicShapesMiscTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mandelbrot_numpy_dynamic_shapes&suite=DynamicShapesMiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17085074563).
Over the past 3 hours, it has been determined flaky in 32 workflow(s) with 96 failures and 32 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mandelbrot_numpy_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 8 |
716 | 109,994 |
add test cases for gradscaler on CPU
|
open source, ciflow/trunk, topic: not user facing, ciflow/periodic, ciflow/inductor, ciflow/slow
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109994
* #109993
* #109281
| 1 |
717 | 109,993 |
add gradscaler on CPU
|
module: cpu, open source, module: amp (automated mixed precision), ciflow/trunk, release notes: distributed (sharded), ciflow/periodic, ciflow/inductor, ciflow/slow
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #109994
* __->__ #109993
* #109281
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel
| 1 |
718 | 109,991 |
Implement kthvalue for bfloat16 on CUDA
|
module: cuda, triaged, module: bfloat16
|
### 🚀 The feature, motivation and pitch
[`torch.kthvalue`](https://pytorch.org/docs/stable/generated/torch.kthvalue.html) isn't currently supported on CUDA for `bfloat16`:
```python
>>> import torch
>>> torch.tensor([1., 2., 3.], dtype=torch.bfloat16, device="cuda:0").kthvalue(1)
RuntimeError: "kthvalue_cuda" not implemented for 'BFloat16'
```
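Until a native kernel exists, a possible interim workaround is to upcast (a sketch; note the rounding will differ slightly from a true bfloat16 kernel):
```python
import torch

x = torch.tensor([1., 2., 3.], dtype=torch.bfloat16, device="cuda:0")
values, indices = x.float().kthvalue(1)  # compute in float32
values = values.to(torch.bfloat16)       # cast the values back
print(values, indices)
```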
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck
| 0 |
719 | 109,987 |
Static quantization for Transformer block: AttributeError 'function' object has no attribute 'is_cuda'
|
triaged, oncall: transformer/mha
|
### 🐛 Describe the bug
I'm trying to apply static quantization to a model using a `nn.TransformerEncoderLayer`.
But when running the model, I get the following error:
```
File "/envs/transfo/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 556, in <genexpr>
elif not all((x.is_cuda or 'cpu' in str(x.device)) for x in tensor_args):
AttributeError: 'function' object has no attribute 'is_cuda'
```
---
Here is a Colab notebook reproducing the issue : https://colab.research.google.com/drive/14jJBQk6DSn6DJxfOa8gx5zhpol9vwBMp?usp=sharing
Here is the script reproducing the issue:
```python
import torch
from torch import nn
from torch.ao.quantization import qconfig


class Quantformer(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, dropout_rate, max_seq_len):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.embedding_dim = embedding_dim
        self.quant = torch.ao.quantization.QuantStub()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.pos_embedding = nn.Embedding(max_seq_len, embedding_dim)
        self.transformer = nn.TransformerEncoderLayer(
            d_model=embedding_dim,
            nhead=8,
            dim_feedforward=hidden_dim,
            dropout=dropout_rate,
            batch_first=True,
        )
        self.dropout = nn.Dropout(dropout_rate)
        self.fc = nn.Linear(embedding_dim, vocab_size)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, src):
        seq_len = src.size(1)
        batch_size = src.size(0)
        pos_ids = torch.arange(seq_len, dtype=src.dtype, device=src.device).unsqueeze(0).repeat(batch_size, 1)
        embeds = self.dropout(self.embedding(src)) + self.pos_embedding(pos_ids)
        embeds = self.quant(embeds)
        mask = nn.Transformer.generate_square_subsequent_mask(embeds.size(1), device=embeds.device)
        out = self.transformer(embeds, src_mask=mask)
        lm_logits = self.dropout(self.fc(out))
        lm_logits = self.dequant(lm_logits)
        return lm_logits


device = torch.device("cpu")
sq_model = Quantformer(
    vocab_size=10000,
    embedding_dim=128,
    hidden_dim=512,
    dropout_rate=0,
    max_seq_len=10,
).to(device)
sq_model.eval()
sq_model.qconfig = torch.ao.quantization.get_default_qconfig("qnnpack")
sq_model.embedding.qconfig = qconfig.float_qparams_weight_only_qconfig
sq_model.pos_embedding.qconfig = qconfig.float_qparams_weight_only_qconfig
sq_model_prepared = torch.ao.quantization.prepare(sq_model)
x = torch.randint(3, 10000, (1, 10))
sq_model_prepared(x)
squant_model = torch.ao.quantization.convert(sq_model_prepared)
yy = squant_model(x)
```
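One common way to sidestep modules that don't survive static quantization is to exclude them via a `None` qconfig so they stay in float; a sketch under the assumption that the failure comes from quantizing the encoder layer itself (untested against this exact model):
```python
# keep the TransformerEncoderLayer in float; only the embeddings and linear
# layers outside it get prepared/converted
sq_model.transformer.qconfig = None

sq_model_prepared = torch.ao.quantization.prepare(sq_model)
sq_model_prepared(x)
squant_model = torch.ao.quantization.convert(sq_model_prepared)
yy = squant_model(x)
```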
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.1+cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 3 |
720 | 109,983 |
DISABLED test_tags_function_via_global_checkpoint (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_tags_function_via_global_checkpoint) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17056185819).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_tags_function_via_global_checkpoint`
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 5 |
721 | 109,982 |
DISABLED test_cat_addmm (__main__.TestMaxAutotune)
|
module: rocm, triaged, module: flaky-tests, skipped, module: inductor
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_cat_addmm) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17058274254).
Over the past 72 hours, it has flakily failed in 9 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_cat_addmm`
Test file path: `inductor/test_max_autotune.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
722 | 109,979 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
723 | 109,970 |
torch-<version>.dist-info WHEEL file contains incorrect metadata for M1/M2 macOS platform
|
oncall: binaries, oncall: releng, triaged, module: m1
|
### 🐛 Describe the bug
The `pip install` explicitly downloads `torch-<version>-cp310-none-macosx_11_0_arm64.whl`; however, the `WHEEL` metadata file has the tag `Tag: cp310-cp310-macosx_11_0_x86_64`. Since the `arm64` wheel is downloaded, the `WHEEL` file should contain an `arm64` tag instead of an `x86_64` tag.
### System
```bash
Chip: Apple M2 Max
macOS: 13.5.1
```
### Steps to reproduce
```bash
python3 --version
# Python 3.10.12
python3 -m venv .venv
source .venv/bin/activate
pip install --no-cache-dir torch==2.0.1
# Collecting torch==2.0.1
# Downloading torch-2.0.1-cp310-none-macosx_11_0_arm64.whl (55.8 MB)
# ...
cat .venv/lib/python3.10/site-packages/torch-2.0.1.dist-info/WHEEL
# Wheel-Version: 1.0
# Generator: bdist_wheel (0.38.4)
# Root-Is-Purelib: false
# Tag: cp310-cp310-macosx_11_0_x86_64
```
The same behaviour is seen with `torch==1.13.1`. I haven't tested more versions; the issue could be present in them as well.
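To make the mismatch visible, one can also list the tags the interpreter actually supports; a sketch using the third-party `packaging` library (`pip install packaging` if it isn't already present):
```python
# run inside the same virtualenv on the Apple Silicon machine
from packaging.tags import sys_tags

for tag in list(sys_tags())[:5]:
    print(tag)  # expected: cp310-*-macosx_*_arm64 tags, not x86_64
```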
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.12 (main, Aug 23 2023, 13:33:00) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.5.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] torch==2.0.1
[conda] Could not collect
Process finished with exit code 0
cc @seemethere @malfet
| 0 |
724 | 109,968 |
Dtype hard-coded in DataLoader (for python floats).
|
triaged, module: data
|
### 🐛 Describe the bug
The default dtype is not respected in `DataLoader` when Dataset `__getitem__()` returns a float. `DataLoader` uses `collate_float_fn` ([link to the source](https://github.com/pytorch/pytorch/blob/a683bc54fdc130537c1d3d81d7290ac9c49df3b2/torch/utils/data/_utils/collate.py#L179)) which has `torch.float64` hardcoded as follows:
```python
def collate_float_fn(batch, *, collate_fn_map: Optional[Dict[Union[Type, Tuple[Type, ...]], Callable]] = None):
    return torch.tensor(batch, dtype=torch.float64)
```
Is there any reason to force it to `float64`? This seems inconsistent with other types, which appear to respect `torch.set_default_dtype()`.
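As a stop-gap, a custom `collate_fn` can respect the configured default dtype; a minimal sketch (the dataset here is purely illustrative):
```python
import torch
from torch.utils.data import DataLoader, Dataset

class FloatDataset(Dataset):
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return 0.5  # a plain Python float

def collate_respecting_default_dtype(batch):
    # use the global default dtype instead of the hard-coded float64
    return torch.tensor(batch, dtype=torch.get_default_dtype())

loader = DataLoader(FloatDataset(), batch_size=2,
                    collate_fn=collate_respecting_default_dtype)
print(next(iter(loader)).dtype)  # torch.float32 unless the default was changed
```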
### Versions
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro N
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 535.98
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3500
DeviceID=CPU0
Family=205
L2CacheSize=20480
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3500
Name=13th Gen Intel(R) Core(TM) i5-13600K
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.1
[conda] No relevant packages
cc @VitalyFedyunin @ejguan @dzhulgakov
| 1 |
725 | 109,966 |
[WIP] Make ONNX OpSchema function matcher more robust
|
module: onnx, open source, release notes: onnx
|
Make the algorithm for "finding the perfect/nearest matched OnnxFunction for the given FX node, arguments, and keyword arguments" more robust in the case when keyword arguments differ, but are set to the default value.
For example:
- fx node is: "split" (tensor,dim,drop_remainder=False (default value))
- ONNX op is: "split" (tensor,dim)
(see PR #107484 which adds an optional drop_remainder=False to torch.split). The [corresponding torch op in ONNX script](https://github.com/microsoft/onnxscript/blob/b6c5e3bb9729f939e979a41d8d724009df9f3b58/onnxscript/function_libs/torch_lib/ops/core.py#L6918) doesn't yet have the new attribute "drop_remainder", which causes the following error during ONNX dynamo_export:
`torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Cannot find any perfect/nearest match of symbolic function for aten::split.Tensor,which should be registered under aten.split.Tensor.`
(because of the attribute list mismatch between ONNX script torch_op and the Pytorch fx node).
This PR improves the lookup of any perfect/nearest match of a symbolic function by ignoring keyword attributes that are set to a default value (which should be backward compatible).
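Schematically, the idea looks like the following (an illustrative sketch, not the PR's actual code; the helper name is made up):
```python
def kwargs_for_matching(node_kwargs: dict, schema_defaults: dict) -> dict:
    """Drop keyword arguments that are explicitly set to their default value,
    so a missing attribute on the ONNX side no longer breaks the match."""
    return {
        name: value
        for name, value in node_kwargs.items()
        if name not in schema_defaults or value != schema_defaults[name]
    }

# e.g. split(tensor, dim, drop_remainder=False) can match the ONNX "split" that
# only knows (tensor, dim), because drop_remainder equals its default:
print(kwargs_for_matching({"drop_remainder": False}, {"drop_remainder": False}))  # {}
```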
Fixes https://github.com/pytorch/pytorch/issues/110131
| 22 |
726 | 109,964 |
[2/N] Cleanup header inclusions in torch_cpu by iwyu
|
open source, NNC, ciflow/binaries, ciflow/trunk, release notes: jit, ciflow/periodic
|
Further cleaning up of torch_cpu header inclusions.
cc @EikanWang @jgong5 @ezyang
| 9 |
727 | 109,963 |
WelfordReduction seems to have invalid/dead code when reduction_numel <= 1
|
triaged, module: inductor
|
I uncovered this while trying to add types to `ir.py`. I'm not sure if this ever worked, but I'm pretty sure it doesn't successfully execute now, because it references an undefined `dst_dtype` local:
https://github.com/pytorch/pytorch/blob/b7a95f4fdb8a79dc459cc757dafcdbd0953b1a62/torch/_inductor/ir.py#L1179
https://github.com/pytorch/pytorch/blob/b7a95f4fdb8a79dc459cc757dafcdbd0953b1a62/torch/_inductor/ir.py#L1203
and also calls `const` with 1 instead of the required 2 arguments:
https://github.com/pytorch/pytorch/blob/b7a95f4fdb8a79dc459cc757dafcdbd0953b1a62/torch/_inductor/ir.py#L1189
Moreover, I wasn't able to find a test that triggered those code paths.
It seems like the code was added in 18b1c2907d672759a2b9c909004636b460fd9a5c. @peterbell10, would you mind taking a look to see if it's actually needed?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 0 |
728 | 109,958 |
How to compile torch 2.0.1 version from source?
|
oncall: releng, triaged
|
### 🐛 Describe the bug
While using 'git clone --branch v2.0.1 https://github.com/pytorch/pytorch.git & python setup.py develop', the build reported 'Building wheel torch-1.14.0a0+410ce96'.
### Versions
I also checked version.txt; it shows '2.0.0a0', which should be the version in the v2.0.1 tag branch.
So how should I compile torch 2.0.1 from source? Thanks!
| 1 |
729 | 109,957 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
730 | 109,948 |
Simple script segfaulting when grad is enabled
|
module: autograd, triaged, needs design, module: edge cases
|
### 🐛 Describe the bug
I was just toying around a bit with `torch` and came across this strange bug:
```python
import torch

print(torch.__version__)

FEATURE_SIZE = 768
BATCH_SIZE = 512
NUM_ITERS = 171

has_grad = True

with torch.set_grad_enabled(has_grad):
    torch.random.manual_seed(0)
    linear_layer = torch.nn.Linear(FEATURE_SIZE, FEATURE_SIZE, bias=True)
    input_tensor = torch.randn(BATCH_SIZE, FEATURE_SIZE)
    output = torch.zeros_like(input_tensor)

    for iter_num in range(NUM_ITERS):
        print(f"\rForward pass {iter_num}", end='')
        for batch_idx in range(BATCH_SIZE):
            output[batch_idx] = linear_layer(input_tensor[batch_idx])
```
This results in the following output on `torch==2.0.1`:
```
2.0.1
Forward pass 170[1] 59045 segmentation fault (core dumped) python3 test.py
```
and also on recent nightly builds
```
2.2.0.dev20230912+cu121
Forward pass 170[1] 59424 segmentation fault (core dumped) python3 test.py
```
In both cases, the script runs until after the last forward pass and then segfaults. On my setup, this happens for `NUM_ITERS>=171`. With `NUM_ITERS=170`, everything works fine. Similarly, setting `has_grad=False` also prevents the segfault.
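One additional data point that may help narrow this down (a hedged check, assuming the ever-growing autograd graph built by the repeated in-place writes into `output` is involved): keep grad mode enabled but detach each result, so no graph accumulates across iterations.
```python
# same loop as above, but with .detach() so `output` never records CopySlices nodes
for iter_num in range(NUM_ITERS):
    for batch_idx in range(BATCH_SIZE):
        output[batch_idx] = linear_layer(input_tensor[batch_idx]).detach()
```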
### Versions
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 2 |
731 | 109,946 |
Indexed batch matrix multiplication to support MoEs and FFFs
|
module: sparse, triaged
|
### 🚀 The feature, motivation and pitch
Large models are becoming increasingly modularised. Recent research on [mixture-of-expert](https://arxiv.org/abs/1701.06538) networks (MoEs) and [fast feedforward](https://arxiv.org/abs/2308.14711) networks (FFFs) proposes to handle different inputs by different sets of parameters according to their nature, where the subset of parameters to be used is determined by "gating" or "tree" routing networks. Both MoEs and FFFs have been shown to deliver significant acceleration (50-220x) of feedforward layer inference.
## Problem: PyTorch does not provide any tools to implement MoEs / FFFs on scale
Let `w1s,w2s,b1s,b2s` contain the weights and biases of the modularised feedforward layer, let `x` be the batched inputs, and suppose `modules` contains the per-sample decisions of the gating/tree network on which experts/leaves are to use which sub-network of the MoE/FFF layer.
Naively, one could do the following
```
w1s = ... # (n_modules, input_width, hidden_width)
b1s = ... # (n_modules, 1, hidden_width)
w2s = ... # (n_modules, hidden_width, output_width)
b2s = ... # (n_modules, 1, output_width)
x = ... # (batch_size, input_width)
modules = ... # (batch_size, n_modules)
# execute the MoE / FFF layer
hidden = torch.matmul(x.unsqueeze(-2), w1s[modules]) # (batch_size, 1, hidden_width)
hidden += b1s[modules]
activations = activation(hidden)
outputs = torch.matmul(activations, w2s[modules])
outputs += b2s[modules]
```
The problem is that every indexing by `modules` makes `batch_size`-many copies of the size of one module/expert/leaf before feeding them into the matrix multiplication. It may not sound like such an issue, but just to give you an idea: if we split BERT-base layers into 4 experts, this means a memory overhead of `batch_size * 768 ^2` floats before multiplying and reducing to the output of `batch_size * 768`. That's an overhead quadratic in the hidden dimension of the module/expert/leaf -- in practice, instead of doing BERT inference with batch size 512 on a single 80GB A100, you have to reduce the batch size down to a mere dozen. It's fast but the GPU memory becomes a bottleneck.
## Proposed solution #1: Indexed BMM, i.e. `ibmm`
In the spirit of the `Tensor.index_add` function which selects source elements and performs operations on the fly to produce the summation output, I propose "indexed batch matrix multiplication".
To avoid unnecessary copying before performing the matrix multiplication, one could hope for an indexed variant of batched matrix multiplication `bmm` to multiply an input `k x m` matrix with an `m x n` matrix selected by an index tensor from a larger database of `n_modules`-many `m x n` tensors.
**Note that this is only needed for inference -- no autograd support is needed.**
Proposed signature and behaviour:
```
def ibmm(a: torch.Tensor, b: torch.Tensor, indices: torch.Tensor):
    # a has shape (batch_size, k, m)
    # b has shape (n_modules, m, n)
    # indices is a (batch_size,) long tensor of numbers between 0 and n_modules - 1
    # output has shape (batch_size, k, n)
    batch_size = a.shape[0]
    output = a.new_empty(batch_size, a.shape[1], b.shape[2])
    # parallelized batch for-loop
    for i in range(batch_size):
        output[i] = torch.mm(a[i], b[indices[i]])
    return output
```
## Proposed solution #2: Finish the implementation of COO sparse `matmul`
If a hybrid `2+2` COO sparse matrix could be multiplied with a dense matrix, the above `ibmm` would not be needed at all. Currently and to the best of my knowledge, no backend supports the `matmul` of hybrid (`len(shape) > 2`) COO sparse matrices with dense matrices. If that were possible, a whole new world would open for modularisation of networks.
In a world where a hybrid COO sparse matrix can be batch-multiplied with a dense matrix, the following would do the job:
```
x_coordinates = torch.arange(batch_size, dtype=torch.long) # (batch_size,)
y_coordinates = modules
indices = torch.stack((x_coordinates, y_coordinates), dim=0) # (2, batch_size)
x_sparse = torch.sparse_coo_tensor(
indices=indices,
values=x.unsqueeze(-2),
size=(batch_size, n_modules, 1, input_width)
) # (batch_size, n_modules, 1, input_width)
hidden = torch.matmul(x_sparse, w1s).sum(dim=1) # (batch_size, 1, hidden_width)
hidden = hidden.squeeze(dim=1) # (batch_size, hidden_width)
hidden += b1s[modules] # (batch_size, hidden_width)
activations = activation(hidden) # (batch_size, hidden_width)
activations_sparse = torch.sparse_coo_tensor(
indices=indices,
values=activations.unsqueeze(-1),
size=(batch_size, n_modules, 1, hidden_width)
) # (batch_size, n_modules, 1, hidden_width)
output = torch.matmul(activations_sparse , w2s).squeeze(-2)
output += b2s[modules] # (batch_size, output_width)
```
### Alternatives
## Workaround: Nasty but does the job for small enough networks
While `matmul` for hybrid sparse matrices is not implemented, the `mm` is. So if one flattens and indexes their data properly, they will get a working solution (with a linear overhead but hey, it's better than nothing).
```
x_coordinates = torch.arange(batch_size, dtype=torch.long, device=x.device) # (batch_size,)
y_coordinate_bases = (leaves.long() * input_width).unsqueeze(-1) # (batch_size, 1)
y_coordinate_offsets = torch.arange(input_width, dtype=torch.long, device=x.device).unsqueeze(0) # (1, input_width)
y_coordinates = y_coordinate_bases + y_coordinate_offsets # (batch_size, input_width)
indices = torch.stack((x_coordinates.repeat(input_width), y_coordinates.flatten()), dim=0) # (2, batch_size * input_width)
x_sparse = torch.sparse_coo_tensor(
indices=indices,
values=x.flatten(),
size=(batch_size, n_modules * input_width),
device=x.device
) # (batch_size, n_modules * input_width)
logits = torch.mm(x_sparse, w1s.flatten(0, 1)) # (batch_size, hidden_width)
logits += b1s[leaves] # (batch_size, hidden_width)
activations = activation(logits) # (batch_size, hidden_width)
y_coordinate_bases = (leaves.long() * hidden_width).unsqueeze(-1) # (batch_size, 1)
y_coordinate_offsets = torch.arange(hidden_width, dtype=torch.long, device=x.device).unsqueeze(0) # (1, hidden_width)
y_coordinates = y_coordinate_bases + y_coordinate_offsets # (batch_size, hidden_width)
second_indices = torch.stack((x_coordinates.repeat(hidden_width), y_coordinates.flatten()), dim=0) # (2, batch_size * hidden_width)
activations_sparse = torch.sparse_coo_tensor(
indices=second_indices,
values=activations.flatten(),
size=(batch_size, n_modules * hidden_width),
device=x.device
) # (batch_size, n_modules, 1, hidden_width)
new_logits = torch.mm(activations_sparse, w2s.flatten(0, 1)) # (batch_size, output_width)
new_logits += b2s[leaves] # (batch_size, output_width)
```
### Additional context
Ultimately, the most general solution that could be implemented to support this kind of data manipulation is introducing indexed views -- i.e. views of tensors that go beyond slicing and reshaping. Somewhat ironically, in pure C++, any of this could usually be done without copying, simply by dereferencing a pointer with `*`...
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 1 |
732 | 109,943 |
Problems when loading PT files under Linux - CUDA which are created under Mac Apple Silicon MPS
|
triaged, module: mps
|
### 🐛 Describe the bug
We trained a model under Apple Silicon and are not able to reload it under Linux.
```
Traceback (most recent call last):
File "/home/automatum/automatum-data-v3/scripts/playPeter.py", line 116, in <module>
model = pipeUrbanClassificator.UrbanClassifierNet.init_from_pt_file('/home/automatum/automatum-data-v3/src/recording_pipeline/urban_classifier_weights.pt', torch.device('cpu'))
File "/home/automatum/automatum-data-v3/src/recording_pipeline/pipeUrbanClassificator.py", line 51, in init_from_pt_file
model.load_state_dict(torch.load(path_to_pt_file))
File "/home/automatum/automatum-data-v3/env/lib/python3.10/site-packages/torch/serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/automatum/automatum-data-v3/env/lib/python3.10/site-packages/torch/serialization.py", line 1172, in _load
result = unpickler.load()
File "/home/automatum/automatum-data-v3/env/lib/python3.10/site-packages/torch/_utils.py", line 266, in _rebuild_device_tensor_from_numpy
tensor = torch.from_numpy(data).to(dtype=dtype, device=device)
NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'MPS' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:43986 [kernel]
Meta: registered at aten/src/ATen/RegisterMeta.cpp:26824 [kernel]
QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:929 [kernel]
QuantizedCUDA: registered at aten/src/ATen/RegisterQuantizedCUDA.cpp:459 [kernel]
BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:726 [kernel]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:21 [kernel]
Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:23 [kernel]
ZeroTensor: fallthrough registered at ../aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_2.cpp:16726 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
```
Implementation of the Model:
```
class UrbanClassifierNet(nn.Module):
    """Urban Classifier Net:
    Input: 224x224x3 image
    Output: Class label as Number for each class. """

    def __init__(self):
        super(UrbanClassifierNet, self).__init__()
        self.get_number_of_classes_from_pipe_config()
        self.model = resnet50(weights='ResNet50_Weights.DEFAULT')
        self.model.fc = nn.Linear(self.model.fc.in_features, self.model.fc.out_features)
        self.relu = nn.ReLU()
        self.fc_x = nn.Linear(self.model.fc.out_features, NUM_CLASSES)

    def forward(self, x):
        x_model = self.model(x)
        x = self.relu(x_model)
        x = self.fc_x(x)
        return x

    @staticmethod
    def init_from_pt_file(path_to_pt_file, device):
        model = UrbanClassifierNet()
        model.load_state_dict(torch.load(path_to_pt_file))
        model.to(device)
        model.eval()
        return model

    @staticmethod
    def get_number_of_classes_from_pipe_config():
        global NUM_CLASSES
        obj_class_dict = cfgMgmt('recording_pipline.cfg').get('object_class_labels')
        NUM_CLASSES = len(obj_class_dict)
```
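A hedged workaround sketch, not a confirmed fix: one common trigger for this dispatcher error is loading a checkpoint that was saved from an MPS device on a build without MPS support, in which case `torch.load` tries to materialize MPS tensors. If that is the cause here, remapping storages at load time avoids it. `path_to_pt_file` is a placeholder, and `UrbanClassifierNet` (with its config dependency) is the class from the snippet above.
```python
import torch

# Hypothetical placeholder path; the real checkpoint path comes from the pipeline config.
path_to_pt_file = "urban_classifier.pt"

# Remap all storages to CPU at load time so no MPS tensors are materialized.
state_dict = torch.load(path_to_pt_file, map_location="cpu")

model = UrbanClassifierNet()          # class defined in the snippet above
model.load_state_dict(state_dict)
model.to("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
```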
### Versions
```
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
--2023-09-23 17:48:50-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21709 (21K) [text/plain]
Saving to 'collect_env.py'
collect_env.py 100%[===================================================>] 21.20K --.-KB/s in 0.004s
2023-09-23 17:48:50 (5.37 MB/s) - 'collect_env.py' saved [21709/21709]
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2687W v2 @ 3.40GHz
CPU family: 6
Model: 62
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
Stepping: 4
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6783.49
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 4 MiB (16 instances)
L3 cache: 50 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.1
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==1.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 1 |
733 | 109,942 |
pytorch XLA document error
|
module: docs, triaged, module: xla
|
### π The doc issue
https://pytorch.org/xla/master/
https://pytorch.org/xla/master/assets/spmd_mode.png not found
### Suggest a potential alternative/fix
_No response_
cc @svekars @carljparker @bdhirsh
| 0 |
734 | 109,941 |
Need latest NCCL support to reduce GPU HBM consumption
|
oncall: distributed, module: nccl
|
### π The feature, motivation and pitch
NCCL 2.18.1 introduced a new API, "ncclCommSplit", which allows communicators to share GPU HBM on the same GPU.
We would like this feature supported so that we can run larger models on the same GPU clusters.
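For context, a rough sketch of the usage pattern that motivates this, run under `torchrun` with at least two GPUs (group layout and names are assumptions, not taken from the request above): each extra process group currently ends up with its own NCCL communicator, and therefore its own buffers in GPU HBM, once it is used; `ncclCommSplit` would let split communicators share the parent's resources.
```python
import torch
import torch.distributed as dist

# Run with: torchrun --nproc_per_node=<num_gpus> this_script.py
dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank % torch.cuda.device_count())

# Two overlapping groups over the same ranks; today each one gets its own
# NCCL communicator (and HBM buffers) the first time it is used.
group_a = dist.new_group(ranks=list(range(dist.get_world_size())))
group_b = dist.new_group(ranks=list(range(dist.get_world_size())))

x = torch.ones(1, device="cuda")
dist.all_reduce(x, group=group_a)
dist.all_reduce(x.clone(), group=group_b)

dist.destroy_process_group()
```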
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
735 | 109,938 |
Batching for is_in
|
triaged, module: functorch
|
### π The feature, motivation and pitch
I was trying to use torch.vmap to run `isin` on a batch of vectors and got this:
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::isin.Tensor_Tensor. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343997789/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)
So I am filing an issue, because it essentially told me to.
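For reference, a minimal sketch of the pattern that triggers the warning (shapes and values are made up):
```python
import torch

queries = torch.randint(0, 100, (8, 16))         # batch of 8 query vectors
allowed = torch.tensor([1, 2, 3, 5, 8, 13, 21])  # shared set of test elements

# Without a batching rule for aten::isin, this falls back to a per-sample loop.
mask = torch.vmap(lambda q: torch.isin(q, allowed))(queries)
print(mask.shape)  # torch.Size([8, 16])
```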
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
736 | 109,937 |
Fix S367052 to unblock ICVR MC3
|
fb-exported, release notes: fx
|
Summary: Somehow "getitem" started to receive a Tensor starting from ads_ranking:996, which broke the SDD pipelining FX transformer. We need to skip the Tensor node during annotation.
Test Plan:
N4326037
with ads_ranking kernel
# Before
ads_ranking:v996
{F1100009226}
# With this diff
{F1100009310}
Differential Revision: D49567615
| 7 |
737 | 109,934 |
test test_2d_fsdp_integration_fsdp_nested_param_groups failed
|
oncall: distributed, triaged
|
### π Describe the bug
This test in `test_fsdp_2d_parallel.py` is trying to test 2D parallelism (TP + FSDP) with nested wrapping and multiple param groups. Somehow, if I disable nested wrapping or comment out the second param group, all tests pass.
We need to investigate why this happens; filing this issue to track it.
### Versions
PyTorch nightly build
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 0 |
738 | 109,930 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
739 | 109,929 |
Memory access fault with AMD Rocm
|
needs reproduction, module: rocm, triaged
|
### π Describe the bug
When using PyTorch with ROCm and trying to train or run inference with an upscaling model, I get this error:
```
Memory access fault by GPU node-1 (Agent handle: 0x55eb9b596570) on address 0x7f66960b2000. Reason: Page not present or supervisor privilege.
Abandon (core dumped)
```
### Versions
PyTorch: 2.2.0.dev20230920+rocm5.6
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| 3 |
740 | 109,927 |
(pytorch) add List[float] type to get_lr
|
fb-exported, release notes: optim
|
Summary: Added `List[float]` as additional return type to `def get_lr(self)`
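For illustration, a sketch of what the annotated signature looks like on a scheduler subclass (the example scheduler itself is made up; only the `List[float]` return annotation is the point):
```python
from typing import List

import torch
from torch.optim.lr_scheduler import LRScheduler


class DecayScheduler(LRScheduler):
    """Toy scheduler that decays every param group's lr by 10% per step."""

    def get_lr(self) -> List[float]:
        return [group["lr"] * 0.9 for group in self.optimizer.param_groups]
```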
Test Plan: sandcastle
Differential Revision: D49563705
| 9 |
741 | 109,926 |
cm3leon_generate failing compilation
|
triaged, oncall: pt2
|
### π Describe the bug
```python
import torch
import importlib
import sys
sys.path.append("benchmark")
c = "torchbenchmark.models.cm3leon_generate"
module = importlib.import_module(c)
device = "cuda"
benchmark_cls = getattr(module, "Model", None)
benchmark = benchmark_cls(test="eval", device = device)
model, example = benchmark.get_module()
eager = model(*example)
print("eager works")
print("be patient compilation takes a while on this model")
compiled = torch.compile(model)
compiled_out = compiled(*example)
```
### Error logs
Surprisingly, this model works both in eager mode and when compiled, but it shows up as failing on the PT2 dashboard; something else is going on.
### Minified repro
_No response_
### Versions
n/a
cc @ezyang @wconstab @bdhirsh @anijain2305
| 8 |
742 | 109,923 |
Import order issue with torch and pybind11 Library Statically Linked to libstdc++
|
module: abi, triaged, module: static linking
|
### π Describe the bug
I've encountered an issue when importing torch alongside another pybind11 library. The problem surfaces when torch is imported after the sample library, when that library statically links to libstdc++ and includes iostream (there may be other includes that cause this as well). To rule out external factors, I've created a minimal pybind11 shared object (.so). This library essentially does nothing but include iostream, link to libstdc++, and expose an object to Python.
The following sequence will lead to a failure:
```python
import testlib # This is the noop library that merely includes <iostream> and is static linked to libstdc++
import torch
```
However, swapping the order works without any issues:
```python
import torch
import testlib # This is the noop library that merely includes <iostream> and is static linked to libstdc++
```
Interestingly, if I remove the <iostream> inclusion from testlib, the order of imports doesn't matter and no errors arise. Additionally, if I link to libstdc++ dynamically, I encounter no issues, even with <iostream> included.
The call stack suggests a crash during some string conversion, potentially from an exception handler. Unfortunately, there aren't any debug messages sent to stderr or stdout. Still, I suspect this crash might be hiding a Thread-Local Storage (TLS) issue, as documented here: https://github.com/pytorch/pytorch/issues/2575.
SEGFAULT BT:
```
Program received signal SIGSEGV, Segmentation fault.
0x00007fffc24d242b in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) bt
#0 0x00007fffc24d242b in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#1 0x00007fffc24d42cf in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#2 0x00007fffc24d4434 in std::codecvt<char16_t, char, __mbstate_t>::do_in(__mbstate_t&, char const*, char const*, char const*&, char16_t*, char16_t*, char16_t*&) const ()
from /lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007fffc253d3ca in std::ostream& std::ostream::_M_insert<unsigned long>(unsigned long) () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007fffa7f0687b in c10::detail::_str_wrapper<char const*, char const* const&, char const*, unsigned int const&>::call(char const* const&, char const* const&, char const* const&, unsigned int const&) () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#5 0x00007fffa7eff41a in torch::(anonymous namespace)::debugString(std::string, char const*, unsigned int) () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#6 0x00007fffa7f04755 in torch::Library::_fallback(torch::CppFunction&&) & () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#7 0x00007fffa7c550ef in at::native::TORCH_LIBRARY_IMPL_init___Conjugate_2(torch::Library&) () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#8 0x00007fffa7c58fd2 in torch::detail::TorchLibraryInit::TorchLibraryInit(torch::Library::Kind, void (*)(torch::Library&), char const*, c10::optional<c10::DispatchKey>, char const*, unsigned int) () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#9 0x00007fffa7b5773c in _GLOBAL__sub_I_ConjugateFallback.cpp () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#10 0x00007ffff7fc947e in call_init (l=<optimized out>, argc=argc@entry=2, argv=argv@entry=0x7fffffffcea8, env=env@entry=0x7fffffffcec0) at ./elf/dl-init.c:70
#11 0x00007ffff7fc9568 in call_init (env=0x7fffffffcec0, argv=0x7fffffffcea8, argc=2, l=<optimized out>) at ./elf/dl-init.c:33
#12 _dl_init (main_map=0x555555fc39f0, argc=2, argv=0x7fffffffcea8, env=0x7fffffffcec0) at ./elf/dl-init.c:117
#13 0x00007ffff7d74c85 in __GI__dl_catch_exception (exception=<optimized out>, operate=<optimized out>, args=<optimized out>) at ./elf/dl-error-skeleton.c:182
#14 0x00007ffff7fd0ff6 in dl_open_worker (a=0x7fffffffa660) at ./elf/dl-open.c:808
#15 dl_open_worker (a=a@entry=0x7fffffffa660) at ./elf/dl-open.c:771
#16 0x00007ffff7d74c28 in __GI__dl_catch_exception (exception=<optimized out>, operate=<optimized out>, args=<optimized out>) at ./elf/dl-error-skeleton.c:208
#17 0x00007ffff7fd134e in _dl_open (file=<optimized out>, mode=-2147483646, caller_dlopen=0x5555557b960b, nsid=-2, argc=2, argv=<optimized out>, env=0x7fffffffcec0)
at ./elf/dl-open.c:883
#18 0x00007ffff7c906bc in dlopen_doit (a=a@entry=0x7fffffffa8d0) at ./dlfcn/dlopen.c:56
#19 0x00007ffff7d74c28 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffa830, operate=<optimized out>, args=<optimized out>) at ./elf/dl-error-skeleton.c:208
#20 0x00007ffff7d74cf3 in __GI__dl_catch_error (objname=0x7fffffffa888, errstring=0x7fffffffa890, mallocedp=0x7fffffffa887, operate=<optimized out>, args=<optimized out>)
--Type <RET> for more, q to quit, c to continue without paging--
at ./elf/dl-error-skeleton.c:227
#21 0x00007ffff7c901ae in _dlerror_run (operate=operate@entry=0x7ffff7c90660 <dlopen_doit>, args=args@entry=0x7fffffffa8d0) at ./dlfcn/dlerror.c:138
#22 0x00007ffff7c90748 in dlopen_implementation (dl_caller=<optimized out>, mode=<optimized out>, file=<optimized out>) at ./dlfcn/dlopen.c:71
#23 ___dlopen (file=<optimized out>, mode=<optimized out>) at ./dlfcn/dlopen.c:81
#24 0x00005555557b960b in ?? ()
#25 0x00005555557b80f7 in ?? ()
#26 0x00005555556b4969 in ?? ()
#27 0x000055555569f2c1 in _PyEval_EvalFrameDefault ()
#28 0x00005555556b470c in _PyFunction_Vectorcall ()
#29 0x00005555556a28a2 in _PyEval_EvalFrameDefault ()
#30 0x00005555556b470c in _PyFunction_Vectorcall ()
#31 0x000055555569cf52 in _PyEval_EvalFrameDefault ()
#32 0x00005555556b470c in _PyFunction_Vectorcall ()
#33 0x000055555569ce0d in _PyEval_EvalFrameDefault ()
#34 0x00005555556b470c in _PyFunction_Vectorcall ()
#35 0x000055555569ce0d in _PyEval_EvalFrameDefault ()
#36 0x00005555556b470c in _PyFunction_Vectorcall ()
#37 0x000055555569ce0d in _PyEval_EvalFrameDefault ()
#38 0x00005555556b470c in _PyFunction_Vectorcall ()
#39 0x00005555556b3b24 in ?? ()
#40 0x00005555557934af in _PyObject_CallMethodIdObjArgs ()
#41 0x00005555556c80ca in PyImport_ImportModuleLevelObject ()
#42 0x000055555569f9e5 in _PyEval_EvalFrameDefault ()
#43 0x000055555578de56 in ?? ()
#44 0x000055555578dcf6 in PyEval_EvalCode ()
```
C++ Test lib code, just create a pybind module so we can import in Python.
```
#include <iostream> //including this causes segfault on torch import, if torch is the next import
#include <pybind11/pybind11.h>
namespace py = pybind11;
PYBIND11_MODULE(nvcv, m)
{
//noop
}
```
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-31-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A3000 Laptop GPU
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family: 6
Model: 141
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4800.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2+cu118
[pip3] torchnvjpeg==0.1.0
[pip3] triton==2.0.0
[conda] Could not collect
| 0 |
743 | 109,921 |
[torch] Defer resolution of allowed/disallowed decorators
|
fb-exported, topic: not user facing, module: dynamo, ciflow/inductor
|
Summary:
To prevent triggering circular imports while decorating functions using
FunctionIdSet, we defer the resolution of the lazy initialization until the
time the object is used.
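A minimal sketch of the deferral pattern being described (this is an illustration of the idea, not the actual `FunctionIdSet` code): instead of resolving the set of allowed/disallowed function ids at decoration/import time, resolution happens on first use.
```python
class LazyFunctionIdSet:
    """Illustrative only: defer building the id set until the object is used."""

    def __init__(self, populate):
        self._populate = populate  # callable that may import heavy modules
        self._ids = None

    def _resolve(self):
        if self._ids is None:
            # Deferred to first use, so decorating functions at import time
            # never triggers the (potentially circular) imports in populate().
            self._ids = set(self._populate())
        return self._ids

    def __contains__(self, function_id):
        return function_id in self._resolve()
```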
Test Plan: Existing tests
Differential Revision: D49350894
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
744 | 109,916 |
[dynamo] fix reconstruct of ConvertSymintSource.
|
ciflow/trunk, release notes: fx, module: dynamo, ciflow/inductor, release notes: dynamo
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #109734
* __->__ #109916
**Motivation:**
dynamo produces optimized Python bytecode that calls the optimized graphs. Previously, the reconstruct method of ConvertIntSource used the wrapped SymBool's reconstruct method, which essentially accesses the SymBool from the inputs directly. However, the optimized graph that dynamo extracted expects the input to be a SymInt. This creates an inconsistency between what the optimized bytecode passes into the optimized graph and what the optimized graph expects.
**Implementation:**
This PR fixes the reconstruct method of ConvertSymintSource by inserting the cast_symbool_to_symint_guardless call into the optimized Python bytecode, so that the SymBool has already been converted to a SymInt before the optimized graph is invoked. The implication is that when the optimized Python bytecode is executed, a cast_symbool_to_symint_guardless call is inserted right before calling the graph that dynamo traced.
After this PR, for the following code :
```python
def true_fn(x):
    return x - x.cos()

def false_fn(x):
    return x + x.sin()

def foo(x):
    return cond(x.shape[0] == 4, true_fn, false_fn, [x])

gm = make_fx(foo, tracing_mode='symbolic')(torch.ones(3, 4))
```
In cond, we call torch.compile and dynamo extracts the following graph for cond:
```python
===== __compiled_fn_0 =====
<eval_with_key>.2 class GraphModule(torch.nn.Module):
def forward(self, L_args_0_ : torch.SymInt, s1 : torch.SymInt, s2 : torch.SymInt, L_args_3_0_ : torch._subclasses.fake_tensor.FakeTensor):
l_args_0_ = L_args_0_
l_args_3_0_ = L_args_3_0_
# File: /home/yidi/local/pytorch/torch/_dynamo/external_utils.py:17, code: return fn(*args, **kwargs)
cond_true_0 = self.cond_true_0
cond_false_0 = self.cond_false_0
cond = torch.ops.higher_order.cond(l_args_0_, cond_true_0, cond_false_0, [l_args_3_0_]); l_args_0_ = cond_true_0 = cond_false_0 = l_args_3_0_ = None
return (cond,)
class GraphModule(torch.nn.Module):
def forward(self, l_args_3_0_):
# File: /home/yidi/local/pytorch/test/functorch/test_control_flow.py:1538, code: return x - x.cos()
cos = l_args_3_0_.cos()
sub = l_args_3_0_ - cos; l_args_3_0_ = cos = None
return sub
class GraphModule(torch.nn.Module):
def forward(self, l_args_3_0_):
# File: /home/yidi/local/pytorch/test/functorch/test_control_flow.py:1541, code: return x + x.sin()
sin = l_args_3_0_.sin()
add = l_args_3_0_ + sin; l_args_3_0_ = sin = None
return add
```
make_fx then runs the optimized code and produce a graph module like below:
```python
class foo(torch.nn.Module):
def forward(self, x_1: f32[s0, s1]):
# No stacktrace found for following nodes
sym_size: Sym(s0) = torch.ops.aten.sym_size(x_1, 0)
eq: Sym(Eq(s0, 4)) = sym_size == 4; sym_size = None
cast_symbool_to_symint_guardless: Sym(Piecewise((1, Eq(s0, 4)), (0, True))) = torch.ops.higher_order.cast_symbool_to_symint_guardless(eq); eq = None
true_graph_0 = self.true_graph_0
false_graph_0 = self.false_graph_0
conditional: f32[s0, s1] = torch.ops.higher_order.cond(cast_symbool_to_symint_guardless, true_graph_0, false_graph_0, [x_1]); cast_symbool_to_symint_guardless = true_graph_0 = false_graph_0 = x_1 = None
return conditional
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: f32[s0, s1]):
# No stacktrace found for following nodes
cos: f32[s0, s1] = torch.ops.aten.cos.default(arg0_1)
sub: f32[s0, s1] = torch.ops.aten.sub.Tensor(arg0_1, cos); arg0_1 = cos = None
return sub
class <lambda>(torch.nn.Module):
def forward(self, arg0_1: f32[s0, s1]):
# No stacktrace found for following nodes
sin: f32[s0, s1] = torch.ops.aten.sin.default(arg0_1)
add: f32[s0, s1] = torch.ops.aten.add.Tensor(arg0_1, sin); arg0_1 = sin = None
return add
```
Before this PR, the optimized Python bytecode fed the Eq SymBool directly into __compiled_fn_0. With this PR, we additionally call cast_symbool_to_symint_guardless to convert the SymBool to a SymInt and feed the SymInt to __compiled_fn_0.
Additionally, we need to track this newly created SymInt in the current ProxyTensorDispatchMode.sym_mode during make_fx so that this conversion is recorded in the graph. Otherwise, we won't be able to find the proxy of the SymInt in the tracer, and an error is thrown when creating args in cond's ProxyTorchMode handling logic. We implemented a higher order op for this purpose.
**Test plan:**
See modified tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
745 | 109,914 |
Decompose to native_dropout in eval mode as well
|
ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109914
Summary: https://github.com/pytorch/pytorch/pull/106274 added a
decomp for dropout such that for train mode we get
`aten.native_dropout`, while for eval mode we get `aten.clone`.
This commit makes it such that we get `aten.native_dropout` for
both train and eval modes. For eval mode, we let the cloning
happen in the aten op itself.
The main motivation behind this change is QAT, which needs to
swap between `aten.native_dropout(train=True)` and
`aten.native_dropout(train=False)` in the graph. This was
previously difficult to do since there was no dropout op to
match and replace in eval mode.
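A quick illustration of the eval-mode semantics this relies on (mask dtype noted as an assumption): with `train=False`, the aten op returns a copy of the input, so routing eval mode through `aten.native_dropout` does not change numerics.
```python
import torch

x = torch.randn(4, 4)
out, mask = torch.ops.aten.native_dropout(x, 0.5, False)  # p=0.5, train=False
print(torch.equal(out, x))  # True: eval mode behaves like a clone
print(mask.dtype)           # expected to be torch.bool (assumed)
```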
Test Plan: python test/test_ops.py
Reviewers: SherlockNoMad, bdhirsh
Subscribers: SherlockNoMad, bdhirsh, supriyar
| 1 |
746 | 109,913 |
[inductor] Avoid bool being upcast to int
|
open source, ciflow/trunk, topic: not user facing, module: inductor, ciflow/inductor
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109913
* #110388
Currently the inductor code for `x.any(-1)` does a this strange dance:
```python
tmp0 = tl.load(in_ptr0 + (r1 + (128*x0)), rmask & xmask)
tmp1 = tmp0.to(tl.int64)
tmp2 = (tmp1 != 0)
```
This happens because `register_lowering` is doing type promotion with the
dimension argument, and so promotes to `int64` which we then cast back to bool.
A better fix would be to fix `register_lowering` but for now I just remove
the unnecessary type promotion from `aten.any`.
In the current code we also see:
```python
tmp5 = tl.where(rmask & xmask, tmp3, 0)
```
which promotes the boolean value to int since `0` is an int32 in triton.
This fixes it to generate a boolean constant instead.
Finally there is also a triton bug where the `tl.load` itself upcasts to
`tl.int8`. I fix this by adding an explicit cast to `tl.int1`. The final
kernel code looks like:
```python
tmp0 = tl.load(in_ptr0 + (r1 + (128*x0)), rmask & xmask).to(tl.int1)
tmp1 = tl.broadcast_to(tmp0, [XBLOCK, RBLOCK])
tmp3 = tl.full([1, 1], 0, tl.int1)
tmp4 = tl.where(rmask & xmask, tmp1, tmp3)
tmp5 = triton_helpers.any(tmp4, 1)[:, None]
```
| 4 |
747 | 109,910 |
Dynamo error for autograd function
|
module: autograd, triaged, oncall: pt2, module: dynamo
|
# Summary
Encountering a Dynamo error when attempting to compile an autograd function that returns a dtype.
### Repro
``` Python
import torch
class dtype_test(torch.autograd.Function):
    @staticmethod
    def forward(
        ctx,
        tensor: torch.Tensor,
        dtype=torch.dtype,
    ):
        orig_precision = tensor.dtype
        ctx.orig_precision = orig_precision
        return tensor.to(dtype), dtype

    @staticmethod
    def backward(ctx, dOut):
        return dOut.to(ctx.orig_precision), None, None


def main():
    x = torch.randn(16, 16, device="cpu", dtype=torch.float32)
    out = dtype_test.apply(x, torch.bfloat16)

    def test_func(x, dtype):
        return dtype_test.apply(x, dtype)

    compiled_func = torch.compile(test_func, fullgraph=True)
    y = compiled_func(x, torch.bfloat16)


if __name__ == "__main__":
    main()
```
### Output
``` Shell
File "/home/drisspg/miniconda3/envs/nightly/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 329, in call_method
raise unimplemented(f"call_method {self} {name} {args} {kwargs}")
File "/home/drisspg/miniconda3/envs/nightly/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 176, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_method TupleVariable() to [ConstantVariable(dtype)] {}
from user code:
File "/home/drisspg/meta/float8_experimental/../scripts/compile/autograd_dtype_float.py", line 21, in test_func
return dtype_test.apply(x, dtype)
File "/home/drisspg/meta/float8_experimental/../scripts/compile/autograd_dtype_float.py", line 12, in forward
return tensor.to(dtype), dtype
```
#### Note
This is a proxy issue I am creating for https://github.com/pytorch-labs/float8_experimental/issues/108
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 7 |
748 | 109,909 |
Large Discrepancies between PyTorch and ONNXRuntime Inference
|
module: onnx, triaged
|
### π Describe the bug
I am getting large discrepancies between PyTorch inference and ONNXRuntime inference for a number of models. I am unsure why this is the case. See below:
| model | greatest abs diff. |
|:--------------------|---------------------:|
| model_78_3459169476 | 9.22337e+18 |
| model_68_72227799 | 164704 |
| model_6_419668759 | 111273 |
| model_96_728966504 | 26586.2 |
| model_30_213242519 | 4178 |
| model_15_2262524614 | 2369.29 |
| model_29_103575771 | 1258.29 |
| model_78_2707514266 | 481.639 |
| model_60_674754574 | 66.582 |
| model_91_3302636491 | 41 |
| model_50_1243263724 | 29 |
| model_74_3620935663 | 12 |
| model_6_3867095439 | 8.49595 |
| model_25_1123596368 | 5.14588 |
| model_23_3959668477 | 5 |
I generated these models using the [NNSmith Fuzzing Tool](https://github.com/ise-uiuc/nnsmith) and found the discrepancies using the `find_mismatch` function. Due to the way NNSmith creates models, I am unable to provide a minimal reproduction, but linked are pickled objects that are loaded in the reproduction script ([here](https://drive.google.com/drive/folders/1sMWNhIZ4PutSlZVaVsehnzvBp59pmTaA?usp=sharing)).
# Reproduction:
## Dependencies:
- [NNSmith](https://github.com/ise-uiuc/nnsmith) : `pip install "git+https://github.com/ise-uiuc/nnsmith@main#egg=nnsmith[torch,onnx]" --upgrade`
- Model Link: ([Original Model and Exported Repro](https://drive.google.com/drive/folders/1sMWNhIZ4PutSlZVaVsehnzvBp59pmTaA?usp=sharing))
## Script:
```python
import pickle
from pathlib import Path

import torch
from nnsmith.materialize import Model, Oracle
from torch.onnx.verification import find_mismatch

path = Path("./model_78_3459169476")

# Get the paths for pickles and weights
gir_path: Path = path / "gir.pkl"
oracle_path: Path = path / "oracle.pkl"
weights_path: Path = path / "model.pth"

# Load the model from pickle
with gir_path.open("rb") as f:
    gir = pickle.load(f)
model_type = Model.init("torch", "cpu")
model = model_type.from_gir(gir)

# Load weights from weight path.
model.torch_model.load_state_dict(torch.load(weights_path), strict=False)

# Load oracle
oracle = Oracle.load(oracle_path)
model_args = tuple([torch.from_numpy(val) for key, val in oracle.input.items()])

print(f"Testing: {str(path)}")
graph_info = find_mismatch(
    model.torch_model,
    model_args,
    opset_version=16,
    keep_initializers_as_inputs=False,
)
repro_path = path / "model_repro"
graph_info.export_repro(repro_path)
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.2 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.5
Libc version: N/A
Python version: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.5.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.0
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.0
[pip3] torchviz==0.0.2
[conda] numpy 1.25.2 py310h3b2db8e_0
[conda] numpy-base 1.25.2 py310ha9811e2_0
[conda] pytorch-lightning 2.0.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.0.0 py310_cpu pytorch
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.0 py310_cpu pytorch
[conda] torchviz 0.0.2 pypi_0 pypi
| 0 |
749 | 109,907 |
Add `endpoint` argument in `linspace` to match numpy behavior
|
triaged, open source
|
Fixes #70919
Basically I copied the line from `numpy`
https://github.com/numpy/numpy/blob/d35cd07ea997f033b2d89d349734c61f5de54b0d/numpy/core/function_base.py#L125
There is also a slight change to @lezcano's implementation, using `torch.where` to improve performance by 10% when `steps` is an odd number.
I updated the docs as well; I am not sure if this is needed. @mruberry
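For reference, a small example of the NumPy semantics being matched versus the current `torch.linspace` behavior:
```python
import numpy as np
import torch

print(np.linspace(0, 1, 5))                  # [0.   0.25 0.5  0.75 1.  ]
print(np.linspace(0, 1, 5, endpoint=False))  # [0.  0.2 0.4 0.6 0.8]

# torch.linspace currently always includes the endpoint:
print(torch.linspace(0, 1, 5))               # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
```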
| 12 |
750 | 109,905 |
[core IR] Add decompositions for _assert_async to no-op
|
module: inductor, ciflow/inductor, release notes: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109905
## Context
`aten._assert_async` and `aten._assert_async.msg` were introduced based on https://github.com/pytorch/pytorch/issues/36853 to allow asynchronous assert behaviour when using CUDA tensors.
> It's common to use assertion to check pre-conditions or unexpected inputs in code, as a form of defensive programming. For example, sometimes we want to assert that all elements of a tensor is all positive, or finite.
>
> However, doing assert in python has the following problems in pytorch:
>
> * when dealing with cuda tensors, using python's assert (t>0).all() or assert torch.isfinite(t).all() will wait for results of the cuda kernel and most if not all its preceding kernel launches, thus potentially cause significantly slow down.
> * It would be good to have a way to execute an assert as an async call.
> * It does not work nicely with tracing or FX, because it is considered as control logic
However, when considering ops to preserve during export, these ops are a strange case:
* They are quite narrow in scope; only being relevant for graphs that expect to work with CUDA inputs
* i.e. these operators are not generic; in the context of the core ATen decomposition table, it would then make sense for it to remove these ops
* Was speaking to @SherlockNoMad and he expected these to be removed by functionalization anyway.
* As outlined in the original comment, assertions were considered "control logic" and thus does not play well with tracing/FX. Implementing assertions as a torch op is a way to circumvent that issue but the original point still stands imo
* Personally I would think that input checking should be the responsibility of the implementation of individual operators
## Motivation
The premise of these changes is that when exporting a graph that uses `aten._assert_async`, export should remove these ops by default mainly due to the below point
> * As outlined in the original comment, assertions were considered "control logic" and thus does not play well with tracing/FX. Implementing assertions as a torch op is a way to circumvent that issue but the original point still stands imo
> * Personally I would think that input checking should be the responsibility of the implementation of individual operators
I'm not certain if this premise is correct. Please let me know if you disagree.
## Changes
This PR adds decompositions of `aten._assert_async` and `aten._functional_assert_async.msg` to no-op and adds the decomp to the core ATen decomposition table. The assumption is that most export workflows will not want to preserve these operators due to the stated premise.
In fact Inductor already added these decomps so I am essentially moving them to the more general `decompositions.py` location, and removing the existing Inductor decomps as they are now superfluous.
Please let me know if there is a better approach to remove these operators.
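For concreteness, a hedged sketch of the kind of no-op decomposition being proposed, registered into a private table so it does not clash with anything already registered globally (the registration pattern follows `torch._decomp.register_decomposition`; the exact overloads and argument names are assumptions based on the op names above):
```python
import torch
from torch._decomp import register_decomposition

aten = torch.ops.aten
my_decomp_table = {}  # illustrative private registry


@register_decomposition(aten._assert_async.default, registry=my_decomp_table)
def _assert_async_noop(self):
    return  # drop the runtime assertion during export


@register_decomposition(aten._assert_async.msg, registry=my_decomp_table)
def _assert_async_msg_noop(self, assert_msg):
    return  # likewise for the message overload
```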
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 8 |
751 | 109,903 |
Error using torch.onnx.dynamo_export
|
module: onnx, triaged
|
### π Describe the bug
I am using the dynamo exporter from PyTorch to convert a TorchScript model to ONNX. The TorchScript model uses `aten::stft`, which cannot be exported using torch.onnx.export. I get the following errors when doing the conversion.
Error:
```
/home/divyansh/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/onnx/_internal/exporter.py:130: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
Traceback (most recent call last):
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/inspect.py", line 2601, in _signature_from_callable
sig = _get_signature_of(call)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/inspect.py", line 2521, in _signature_from_callable
return _signature_from_builtin(sigcls, obj,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/inspect.py", line 2328, in _signature_from_builtin
raise ValueError("no signature found for builtin {!r}".format(func))
ValueError: no signature found for builtin <instancemethod __call__ at 0x7fc1dd182a70>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/onnx/_internal/exporter.py", line 1195, in dynamo_export
).export()
^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/onnx/_internal/exporter.py", line 941, in export
graph_module = self.options.fx_tracer.generate_fx(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 199, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1117, in inner
original_signature = inspect.signature(call_to_inspect)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/inspect.py", line 3280, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/inspect.py", line 3028, in from_callable
return _signature_from_callable(obj, sigcls=cls,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/inspect.py", line 2604, in _signature_from_callable
raise ValueError(msg) from ex
ValueError: no signature found for <torch.ScriptMethod object at 0x7fc17c2b3dd0>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/divyansh/MoonPy/torchscriptToONNX.py", line 22, in <module>
torch.onnx.dynamo_export(
File "/home/divyansh/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/onnx/_internal/exporter.py", line 1206, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
```
Code to reproduce:
```
import torch
import torchaudio
import torch.onnx
jit_preprocessor = torch.jit.load("tmp.pt")
wav_file = "/home/divyansh/AAT3_16khz.wav"
signal, sample_rate = torchaudio.load(wav_file)
length = torch.tensor(signal.size(1)).unsqueeze(0)
input_names = ["input_signal", "length"]
output_names = ["output_features", "output_lengths"]
onnx_file = "NeMoPreprocessor.onnx"
jit_preprocessor.eval()
torch.onnx.dynamo_export(
    jit_preprocessor,
    (signal, length),
    onnx_file,
    verbose=True,
    opset_version=18
)
```
Required files and SARIF report: [dynamo_export_bug_files.zip](https://github.com/pytorch/pytorch/files/12703871/dynamo_export_bug_files.zip)
### Versions
```
PyTorch version: 2.2.0.dev20230922+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35
Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-33-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.9.4
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.4
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.4
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.4
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.4
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.4
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU max MHz: 5000.0000
CPU min MHz: 400.0000
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0.dev20230922+cu118
[pip3] torchaudio==2.2.0.dev20230922+cu118
[pip3] torchvision==0.17.0.dev20230922+cu118
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpy-base 1.25.2 py311hf175353_0
[conda] pytorch-triton 2.1.0+6e4932cda8 pypi_0 pypi
[conda] torch 2.2.0.dev20230922+cu118 pypi_0 pypi
[conda] torchaudio 2.2.0.dev20230922+cu118 pypi_0 pypi
[conda] torchvision 0.17.0.dev20230922+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
| 6 |
752 | 109,901 |
DISABLED test_tags_function (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_function&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17044930976).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_function`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 6 |
753 | 109,895 |
moco: torch._dynamo.exc.Unsupported: hasattr: TensorVariable()
|
triaged, oncall: pt2
|
Repro:
```
python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only moco
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
754 | 109,891 |
Revert D49433268: Multisect successfully blamed "D49433268: [pytorch][PR] [Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation" for test or build failures
|
fb-exported, module: inductor, ciflow/inductor
|
Summary:
This diff is reverting D49433268
D49433268: [pytorch][PR] [Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation by yanboliang has been identified to be causing the following test or build failures:
Tests affected:
- [caffe2/torch/fb/model_transform/experimental/benchmark/test/aotinductor:test_inductor_benchmark - test_inductor_benchmark_cmf30x (caffe2.torch.fb.model_transform.experimental.benchmark.test.aotinductor.test_inductor_benchmark.InductorBenchmark)](https://www.internalfb.com/intern/test/562950063524812/)
Here's the Multisect link:
https://www.internalfb.com/multisect/3104208
Here are the tasks that are relevant to this breakage:
We're generating a revert to back out the changes in this diff, please note the backout may land if someone accepts it.
If you believe this diff has been generated in error you may Commandeer and Abandon it.
Test Plan: NA
Reviewed By: hl475
Differential Revision: D49536556
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 6 |
755 | 109,889 |
Support torch.export.export through torch.onnx.dynamo_export
|
module: onnx, triaged, enhancement, onnx-triaged, release notes: onnx
|
### π Describe the bug
Currently, `torch.onnx.dynamo_export` leverages `torch._dynamo.export` to obtain a `GraphModule` graph. By leveraging `torch.export`, we may have a more stable and complete `GraphModule` graph, including:
* Functionalization.
* Lowering from torch ops to aten ops (aten ops are a lot more normalized than torch ops)
* Decomposition of Aten ops to a smaller opset (core aten ops)
@SherlockNoMad FYI
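A minimal sketch of what consuming `torch.export.export` would look like (attribute names as on current pytorch main; treated as an assumption here):
```python
import torch


class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1


ep = torch.export.export(M(), (torch.randn(3),))
# The resulting ExportedProgram carries a functionalized, aten-level graph
# that the ONNX exporter could consume instead of torch._dynamo.export output.
print(ep.graph_module.graph)
```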
### Versions
pytorch main
```[tasklist]
### Tasks
- [ ] https://github.com/microsoft/onnxscript/issues/1077
- [ ] https://github.com/pytorch/pytorch/issues/110100
```
| 0 |
756 | 109,886 |
Fix CPU bitwise shifts for out-of-limit values in VSX-vec
|
module: cpu, triaged, open source
|
Similar to #96659, this implements the conditionals handling the out-of-limit values in the shift amounts (rhs) for the vectorized VSX code, using the same logic as the scalar code.
Fixes #109777
@quickwritereader @cdeepali Can you double-check?
A quick test from my side looks good.
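A small sanity check in the same spirit (test values are made up): out-of-limit shift amounts should give the same results from the vectorized VSX path as from the scalar kernels.
```python
import torch

x = torch.arange(1, 9, dtype=torch.int32)
s = torch.tensor([0, 1, 31, 32, 33, 40, 63, 64], dtype=torch.int32)

# Both results should match the scalar (non-vectorized) reference on VSX.
print(torch.bitwise_left_shift(x, s))
print(torch.bitwise_right_shift(x, s))
```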
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| 2 |
757 | 109,885 |
DALLE2_pytorch: "torch._dynamo.exc.Unsupported: call_method NNModuleVariable() eval [] {}"
|
triaged, oncall: pt2, module: inductor
|
Repro:
```
python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only DALLE2_pytorch
```
Error:
```
Traceback (most recent call last):
File "/home/binbao/pytorch/benchmarks/dynamo/common.py", line 2107, in check_accuracy
optimized_model_iter_fn = optimize_ctx(self.run_n_iterations)
File "/home/binbao/pytorch/benchmarks/dynamo/common.py", line 1159, in export_aot_inductor
module, exported, output_spec = AOTInductorModelCache.load(
File "/home/binbao/pytorch/benchmarks/dynamo/common.py", line 1132, in load
so_path, exported = torch._export.aot_compile(
File "/home/binbao/pytorch/torch/_export/__init__.py", line 759, in aot_compile
ep = export(f, args, kwargs, constraints)
File "/home/binbao/pytorch/torch/_export/__init__.py", line 431, in export
gm_torch_level, _ = torch._dynamo.export(
File "/home/binbao/pytorch/torch/_dynamo/eval_frame.py", line 1216, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/binbao/pytorch/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/binbao/pytorch/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/binbao/pytorch/torch/_dynamo/eval_frame.py", line 406, in _fn
return fn(*args, **kwargs)
File "/home/binbao/pytorch/torch/_dynamo/eval_frame.py", line 554, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 140, in _fn
return fn(*args, **kwargs)
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 380, in _convert_frame_assert
return _compile(
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 559, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/binbao/pytorch/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 481, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/binbao/pytorch/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/binbao/pytorch/torch/_dynamo/convert_frame.py", line 451, in transform
tracer.run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2094, in run
super().run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 739, in run
and self.step()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 702, in step
getattr(self, inst.opname)(inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
return inner_fn(self, inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1175, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 576, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/binbao/pytorch/torch/_dynamo/variables/nn_module.py", line 338, in call_function
return tx.inline_user_function_return(
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 612, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2221, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2343, in inline_call_
tracer.run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 739, in run
and self.step()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 702, in step
getattr(self, inst.opname)(inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
return inner_fn(self, inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1175, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 576, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/binbao/pytorch/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 612, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2221, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2343, in inline_call_
tracer.run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 739, in run
and self.step()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 702, in step
getattr(self, inst.opname)(inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
return inner_fn(self, inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1175, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 576, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/binbao/pytorch/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 612, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2221, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 2343, in inline_call_
tracer.run()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 739, in run
and self.step()
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 702, in step
getattr(self, inst.opname)(inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 403, in wrapper
return inner_fn(self, inst)
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/binbao/pytorch/torch/_dynamo/symbolic_convert.py", line 576, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/binbao/pytorch/torch/_dynamo/variables/functions.py", line 304, in call_function
return self.obj.call_method(
File "/home/binbao/pytorch/torch/_dynamo/variables/nn_module.py", line 638, in call_method
return super().call_method(tx, name, args, kwargs)
File "/home/binbao/pytorch/torch/_dynamo/variables/base.py", line 329, in call_method
raise unimplemented(f"call_method {self} {name} {args} {kwargs}")
File "/home/binbao/pytorch/torch/_dynamo/exc.py", line 176, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_method NNModuleVariable() eval [] {}
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 0 |
758 | 109,884 |
basic_gnn_gcn: ERROR:common:TypeError: object of type 'GreaterThan' has no len()
|
triaged, ezyang's list, oncall: pt2, module: dynamic shapes, module: inductor
|
Repro:
```
python benchmarks/dynamo/torchbench.py --bfloat16 --accuracy --inference --device cuda --export-aot-inductor --only basic_gnn_gcn
```
Error:
```
ERROR:common:TypeError: object of type 'GreaterThan' has no len()
target: aten.scalar_tensor.default
args[0]: i18 >= 0
Traceback (most recent call last):
File "/home/binbao/pytorch/benchmarks/dynamo/common.py", line 2107, in check_accuracy
optimized_model_iter_fn = optimize_ctx(self.run_n_iterations)
File "/home/binbao/pytorch/benchmarks/dynamo/common.py", line 1159, in export_aot_inductor
module, exported, output_spec = AOTInductorModelCache.load(
File "/home/binbao/pytorch/benchmarks/dynamo/common.py", line 1132, in load
so_path, exported = torch._export.aot_compile(
File "/home/binbao/pytorch/torch/_export/__init__.py", line 775, in aot_compile
so_path = torch._inductor.aot_compile(unlifted_module, flat_example_inputs, options) # type: ignore[arg-type]
File "/home/binbao/pytorch/torch/_inductor/__init__.py", line 48, in aot_compile
result = compile_fx_aot(
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 850, in compile_fx_aot
return compile_fx(
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 951, in compile_fx
return compile_fx(
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 980, in compile_fx
return compile_fx(
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 1154, in compile_fx
return inference_compiler(model_, example_inputs_)
File "/home/binbao/pytorch/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 1096, in fw_compiler_base
return inner_compile(
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 211, in wrapper
compiled = inner_compile(
File "/home/binbao/local/miniconda3/envs/pytorch-3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/binbao/pytorch/torch/_dynamo/repro/after_aot.py", line 80, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/binbao/pytorch/torch/_inductor/debug.py", line 297, in inner
return fn(*args, **kwargs)
File "/home/binbao/local/miniconda3/envs/pytorch-3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 340, in compile_fx_inner
compiled_graph: CompiledFxGraph = fx_codegen_and_compile(
File "/home/binbao/pytorch/torch/_inductor/compile_fx.py", line 535, in fx_codegen_and_compile
graph.run(*example_inputs)
File "/home/binbao/pytorch/torch/_dynamo/utils.py", line 190, in time_wrapper
r = func(*args, **kwargs)
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 464, in run
return super().run(*args)
File "/home/binbao/pytorch/torch/fx/interpreter.py", line 138, in run
self.env[node] = self.run_node(node)
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 735, in run_node
result = super().run_node(n)
File "/home/binbao/pytorch/torch/fx/interpreter.py", line 195, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 625, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/binbao/pytorch/torch/_inductor/graph.py", line 622, in call_function
out = lowerings[target](*args, **kwargs)
File "/home/binbao/pytorch/torch/_inductor/lowering.py", line 278, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/binbao/pytorch/torch/_inductor/lowering.py", line 2219, in tensor
elif len(data) == 0 or isinstance(data[0], (float, int)) and len(data) <= 8:
torch._inductor.exc.LoweringException: TypeError: object of type 'GreaterThan' has no len()
target: aten.scalar_tensor.default
args[0]: i18 >= 0
```
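For context, the crash is the `len(data)` call shown in the traceback: `aten.scalar_tensor` receives a SymPy relational (`i18 >= 0`) rather than a Python number or list. A minimal check for that situation, purely illustrative and not the actual Inductor fix:
```python
import sympy

def is_symbolic_scalar(data):
    # The failing branch assumes `data` is a number or (nested) list; a SymPy
    # relational such as `i18 >= 0` has no len(), so detect it up front.
    return isinstance(data, sympy.Basic)

print(is_symbolic_scalar(sympy.Symbol("i18") >= 0))  # True
print(is_symbolic_scalar([0.0, 1.0]))                # False
```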
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @tugsbayasgalan
| 4 |
759 | 109,881 |
Move at::{Refcounted,}MapAllocator to c10
|
open source, Merged, Reverted, ciflow/trunk, topic: not user facing, ciflow/periodic
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109881
`libshm.so` depends on the torch library exclusively for `at::RefcountedMapAllocator`,
so it makes sense to move it to c10 along with the other memory allocators.
This means `libshm.so` only depends on `c10` and we don't need to relink
`libshm.so` for every ATen change.
| 28 |
760 | 109,880 |
[FSDP ]How to convert sharded_state_dict files into full_state_dict offline without distributed process
|
oncall: distributed, triaged, module: fsdp, module: distributed_checkpoint
|
### π The feature, motivation and pitch
Currently, if I use FSDP with 128 gpus and save checkpoints with sharded_state_dict to avoid gathering the full_state_dict on rank0 for saving, there is no way to obtain the full_state_dict ckpt offline.
The only way to obtain the full_state_dict is to launch the exact same 128-GPU distributed process with FSDP to load that sharded_state_dict model, then switch to the full_state_dict config and save the ckpt to files, which is the original problem we wanted to avoid.
I cannot read a sharded_state_dict file (with `torch.load()`) individually either, unless I launch a 128-GPU distributed process to read it. The file contains `ShardedTensor`s, which require the same world_size=128 to load.
I would like to have an offline script that reads each sharded file and writes iteratively to pytorch_model_0.bin, pytorch_model_1.bin, pytorch_model_2.bin...
And then we can load the model with `AutoModelForCausalLM.from_pretrained(...)` by loading each `.bin`
Thanks!
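For concreteness, a rough sketch of the offline tool being requested (everything here is hypothetical: `load_local_shards` and `assemble_full_tensor` do not exist in PyTorch, they only name the two missing pieces — reading one rank's file without spinning up a 128-rank process group, and stitching the shards back into full tensors):
```python
import torch

def consolidate(shard_paths, out_path):
    # Hypothetical step 1: read each rank's file without requiring world_size=128.
    collected = {}
    for path in shard_paths:
        for name, shards in load_local_shards(path).items():   # hypothetical helper
            collected.setdefault(name, []).extend(shards)
    # Hypothetical step 2: copy each shard to its recorded offsets in a full tensor.
    full_state = {name: assemble_full_tensor(shards) for name, shards in collected.items()}
    # The result is a plain state_dict, loadable with torch.load / from_pretrained.
    torch.save(full_state, out_path)
```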
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin
| 2 |
761 | 109,874 |
[inductor][cpu] performance regression
|
triaged, oncall: pt2, module: inductor, module: cpu inductor
|
<p>new_perf_regression: 2023-09-20 nightly release vs 2023-09-17 nightly release</p>
<p>Note: multi-threaded scenario for models above the first *, single-threaded for models between the two *</p>
<p>new_perf_regression</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
</tr>
<tr>
<td>llama</td>
<td>1</td>
<td>0.077857</td>
<td>0.416423326</td>
<td>0.032421470892381996</td>
<td>39.62569</td>
<td>1</td>
<td>0.318457</td>
<td>0.102005337</td>
<td>0.032484313605009</td>
<td>38.0415</td>
<td>0.24</td>
<td>1.0</td>
<td>0.24</td>
<td>0.96</td>
</tr>
<tr>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
<td>*</td>
</tr>
</tbody>
</table>
<p>SW info</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>SW</th>
<th>Nightly commit</th>
<th>Main commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pytorch</td>
<td>00ae5fa</td>
<td>1b3e5b5</td>
</tr>
<tr>
<td>Torchbench</td>
<td>/</td>
<td>ffbbebb9</td>
</tr>
<tr>
<td>torchaudio</td>
<td>475b6ae</td>
<td>ede4309</td>
</tr>
<tr>
<td>torchtext</td>
<td>142d029</td>
<td>45e4b8c</td>
</tr>
<tr>
<td>torchvision</td>
<td>8636bf3</td>
<td>4ac707a</td>
</tr>
<tr>
<td>torchdata</td>
<td>eb9bf61</td>
<td>d76d92c</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>0200b11</td>
<td>/</td>
</tr>
</tbody>
</table>
<p>Reference SW info</p>
<table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>SW</th>
<th>Nightly commit</th>
<th>Main commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pytorch</td>
<td>0de2555</td>
<td></td>
</tr>
<tr>
<td>Torchbench</td>
<td>/</td>
<td>ffbbebb9</td>
</tr>
<tr>
<td>torchaudio</td>
<td>475b6ae</td>
<td></td>
</tr>
<tr>
<td>torchtext</td>
<td>142d029</td>
<td></td>
</tr>
<tr>
<td>torchvision</td>
<td>8636bf3</td>
<td></td>
</tr>
<tr>
<td>torchdata</td>
<td>eb9bf61</td>
<td></td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>0200b11</td>
<td>/</td>
</tr>
</tbody>
</table>
<p>Repro</p>
<a href=https://github.com/chuanqi129/inductor-tools/blob/yudong/aws_auto/scripts/modelbench/inductor_single_run.sh>
inductor_single_run.sh </a>
<code>
bash inductor_single_run.sh multiple/single inference performance torchbench/huggingface/timm_models model float32 first static default 0
</code>
<p><a href=https://github.com/pytorch/pytorch/commit/1a361e4e9ff1a8dadd72c7301441cfb83d687f24#diff-cf6ca00beddc32a2a6a2933fb9913b6a2b925ffc3b745488967210e4343134ac>
Suspected guilty commit </a></p>
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
| 3 |
762 | 109,873 |
Allow try except check for numpy bfloat16 representation
|
triaged, module: numpy
|
### π The feature, motivation and pitch
If you run this code:
```
import ml_dtypes
import torch
torch.tensor([1,2,3],dtype=torch.bfloat16).numpy()
```
It will throw this error :
```TypeError: Got unsupported ScalarType BFloat16```
even though numpy can support `bfloat16` in this case (via `ml_dtypes`). It would be great if you could add a try/except check for this.
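In the meantime, a possible user-side workaround (assuming `ml_dtypes` is installed; its `bfloat16` uses the standard bfloat16 bit layout, the same one `torch.bfloat16` uses):
```python
import ml_dtypes
import torch

t = torch.tensor([1, 2, 3], dtype=torch.bfloat16)

# Route 1: upcast to float32, convert, then narrow on the numpy side.
a = t.float().numpy().astype(ml_dtypes.bfloat16)

# Route 2: reinterpret the raw 16-bit payload directly (no value conversion).
b = t.view(torch.int16).numpy().view(ml_dtypes.bfloat16)

print(a, b)
```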
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @rgommers
| 1 |
763 | 109,872 |
DISABLED test_tags_dropout (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_dropout&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17031141549).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_dropout`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 5 |
764 | 109,870 |
Wrongly returns nan for vectorized complex numbers division on PPC/ZArch
|
triaged, module: POWER
|
### π Describe the bug
Copied from #92043 which still applies for vectorized PPC and likely to ZArch while it was fixed for x86 by #93277
Pytorch wrongly returns nan for complex numbers division where the results are still within the range of the datatype.
For example: (0.0 + 0.0j) / (1e-36 + 0.0j) in complex64. In this case, both operands and the result (which is 0) are within the defined range of complex64. However, it returns (nan + nanj).
Here's the reproducing code:
```
import torch

a = torch.tensor([0.0 + 0.0j]*8)
b = torch.tensor([1e-36 + 0.0j]*8)
print(a / b) # it's (nan + nanj),...
```
Or just run `test_complex_div_underflow_overflow`.
The reason is that [`abs_2_`](https://github.com/pytorch/pytorch/blob/d7c05bb2e8de24386664c01e887357ff50a09842/aten/src/ATen/cpu/vec/vec256/vsx/vec256_complex_double_vsx.h#L434) already overflows in the tested range (e.g. `finfo.min / 2` in `test_complex_div_underflow_overflow`) and the `elwise_mult(vr)` does too.
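A scalar illustration of the small-denominator case above (float32 stands in for the real/imaginary parts of complex64): squaring the denominator's magnitude underflows to zero, and the subsequent 0/0 yields NaN.
```python
import numpy as np

br, bi = np.float32(1e-36), np.float32(0.0)
abs2 = br * br + bi * bi            # 1e-72 underflows to 0.0 in float32
ar, ai = np.float32(0.0), np.float32(0.0)
real = (ar * br + ai * bi) / abs2   # 0.0 / 0.0 -> nan
imag = (ai * br - ar * bi) / abs2   # 0.0 / 0.0 -> nan
print(abs2, real, imag)             # 0.0 nan nan
```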
### Versions
PyTorch 2.1.0
| 0 |
765 | 109,864 |
DISABLED test_tags_decomps (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tags_decomps&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17027062892).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tags_decomps`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 6 |
766 | 109,863 |
[BUG?] Why Allocator use stream to manage Block?
|
module: cuda, triaged
|
### π Describe the bug
- When I read the code of DeviceCachingAllocator, I find the following:
```c++
struct Block {
int device; // gpu
cudaStream_t stream; // allocation stream
stream_set stream_uses; // streams on which the block was used
size_t size; // block size in bytes
BlockPool* pool; // owning memory pool
void* ptr; // memory address
bool allocated; // in-use flag
Block* prev; // prev block if split from a larger allocation
Block* next; // next block if split from a larger allocation
int event_count; // number of outstanding CUDA events
int gc_count; // counter for prioritizing older / less useful blocks for
// garbage collection
std::unique_ptr<HistoryChain> history;
HistoryChain* history_last;
...
}
static bool BlockComparator(const Block* a, const Block* b) {
if (a->stream != b->stream) {
return (uintptr_t)a->stream < (uintptr_t)b->stream;
}
if (a->size != b->size) {
return a->size < b->size;
}
return (uintptr_t)a->ptr < (uintptr_t)b->ptr;
}
Block* malloc(int device, size_t orig_size, cudaStream_t stream) {
// done outside the lock because we don't know what locks the recorder needs
// to have...
CreateContextFn context_recorder = context_recorder_.load();
std::shared_ptr<Context> context =
context_recorder ? context_recorder() : nullptr;
std::unique_lock<std::recursive_mutex> lock(mutex);
if (C10_LIKELY(captures_underway == 0)) {
// Processes end-of-life events for outstanding allocations used on
// multiple streams (checks if their GPU-side uses are complete and
// recycles their memory if so)
//
// Q. Why skip process_events if a capture might be underway?
// A. process_events involves cudaEventQueries, illegal during CUDA graph
// capture.
// Dumb simple solution: defer reclaiming these allocations until after
// capture. Cross-stream memory use is uncommon, so the deferral's
// effect on memory use during capture should be small.
process_events();
}
size_t size = round_size(orig_size);
auto& pool = get_pool(size, stream);
const size_t alloc_size = get_allocation_size(size);
AllocParams params(device, size, stream, &pool, alloc_size, stats);
params.stat_types[static_cast<size_t>(StatType::AGGREGATE)] = true;
params.stat_types[static_cast<size_t>(get_stat_type_for_pool(pool))] = true;
// First, try to get a block from the existing pool.
bool block_found =
// Search pool
get_free_block(params)
// Trigger callbacks and retry search
|| (trigger_free_memory_callbacks(params) && get_free_block(params));
...
}
bool get_free_block(AllocParams& p) {
BlockPool& pool = *p.pool;
if (C10_UNLIKELY(
set_fraction &&
CachingAllocatorConfig::garbage_collection_threshold() > 0.0)) {
// Track block reuse interval only when garbage collection is enabled.
for (auto& b : pool.blocks) {
++b->gc_count;
}
}
auto it = pool.blocks.lower_bound(&p.search_key);
if (it == pool.blocks.end() || (*it)->stream != p.stream())
return false;
}
```
- My problem, which I observed while debugging the allocator, is the following:
As the code above shows, blocks are sorted first by stream and then by size. Computation uses the default stream, while the mccl process group uses a non-default stream from the stream pool. So when mccl requests CUDA memory from the block pool, it cannot find any free block on its own stream, even though stream 0 has cached blocks that it is not allowed to reuse. When no suitable block exists at all, this triggers GC to release the stream-0 blocks. My question is why blocks should be managed per stream at all; I know the stream-use counts are needed to implement the record_stream API, but are there other reasons?
- Here is an example of this strategy (a small Python model of the lookup follows below):
The block set: 512 (stream 0) -> 1024 (stream 0) -> 2048 (stream 0) -> 1024 (stream 1) -> 512 (stream 3).
When I request a size of 2048 on stream 2, the lower_bound lookup lands on 512 (stream 3), which cannot be used, so no block is found and GC is triggered to release the blocks from stream 0.
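A small Python model of that lookup (a stand-in for the C++ `std::set` ordered by `BlockComparator`'s (stream, size, ptr) key), matching the example above:
```python
import bisect

blocks = sorted([(0, 512), (0, 1024), (0, 2048), (1, 1024), (3, 512)])  # (stream, size)

def find_free_block(stream, size):
    i = bisect.bisect_left(blocks, (stream, size))
    # Mirrors `it == pool.blocks.end() || (*it)->stream != p.stream()` -> miss.
    if i == len(blocks) or blocks[i][0] != stream:
        return None
    return blocks[i]

print(find_free_block(2, 2048))  # None: stream-0 blocks are never reused, GC kicks in
```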
### Versions
with out
cc @ptrblck
| 0 |
767 | 109,862 |
DISABLED test_symints_location (__main__.ActivationCheckpointingViaTagsTests)
|
module: rocm, triaged, module: flaky-tests, skipped, module: unknown, oncall: pt2
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_symints_location&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17017086249).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 33 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_symints_location`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/dynamo/test_activation_checkpointing.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
768 | 109,861 |
Cannot use constrain_as_size from fake tensor implementations: RuntimeError: tried to get Int out of SymInt
|
triaged, oncall: pt2, module: dynamic shapes, module: export
|
### π Describe the bug
Patch in:
```
diff --git a/torch/_subclasses/fake_tensor.py b/torch/_subclasses/fake_tensor.py
index 34408a9b3f4..00cfce23df0 100644
--- a/torch/_subclasses/fake_tensor.py
+++ b/torch/_subclasses/fake_tensor.py
@@ -585,7 +585,8 @@ def nonzero(fake_mode, func, arg):
if arg.numel() >= 2:
maxval = int(arg.numel())
- _constrain_range_for_size(nnz, max=maxval)
+ from torch.export import constrain_as_size
+ constrain_as_size(nnz)
arg._nonzero_memo = nnz
arg._nonzero_memo_vc = arg._version
```
there may be fuzz depending on if you patched in https://github.com/pytorch/pytorch/pull/109857 just replace the constrain call with this.
This fails with
```
File "/data/users/ezyang/c/pytorch/torch/fx/experimental/proxy_tensor.py", line 574, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/c/pytorch/torch/fx/experimental/proxy_tensor.py", line 609, in inner_torch_dispatch
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File "/data/users/ezyang/c/pytorch/torch/fx/experimental/proxy_tensor.py", line 354, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/c/pytorch/torch/_ops.py", line 498, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/c/pytorch/torch/utils/_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "/data/users/ezyang/c/pytorch/torch/_subclasses/fake_tensor.py", line 1300, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/ezyang/c/pytorch/torch/_subclasses/fake_tensor.py", line 1543, in dispatch
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/data/users/ezyang/c/pytorch/torch/_subclasses/fake_tensor.py", line 587, in nonzero
torch.sym_constrain_range_for_size(nnz, min=None, max=maxval)
RuntimeError: tried to get Int out of SymInt
```
when you run `python test/test_proxy_tensor.py -k test_make_fx_fake_exhaustive_nonzero_cpu_float32`
I suspect what is going on is you're going to the C++ kernel implementation.
Somehow a simplified repro doesn't trigger though:
```
import torch
from torch.fx.experimental.proxy_tensor import make_fx
def f(x):
y = x.item()
print(y)
torch.sym_constrain_range_for_size(y, min=None, max=None)
return torch.empty(y)
make_fx(f, tracing_mode="symbolic")(torch.tensor([3]))
```
maybe pydispatcher related?
### Versions
main
cc @msaroufim @wconstab @bdhirsh @anijain2305 @tugsbayasgalan
| 0 |
769 | 109,860 |
Move InputDim to torch.export instead of defining in a pass
|
module: export
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109860
* #109859
| 1 |
770 | 109,856 |
Severe performance regression on deterministic algorithm in torch 2.0
|
module: performance, triaged, module: cublas, module: determinism
|
### π Describe the bug
I've noticed a significant performance slowdown in torch 2.0 when enabling determinism.
Here is a simple example using the diffusers library:
```python
def set_deterministic(mode=True):
import torch
import os
torch.backends.cudnn.benchmark = not mode
torch.backends.cudnn.deterministic = mode
torch.use_deterministic_algorithms(mode, warn_only=True)
if mode:
os.putenv("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
else:
os.unsetenv("CUBLAS_WORKSPACE_CONFIG")
print(f"Deterministic: {mode}")
def go():
from datetime import timedelta
import time
import torch
from diffusers import UNet2DModel
import torch
torch.backends.cuda.matmul.allow_tf32 = True
scaler = torch.cuda.amp.GradScaler()
batch_size = 8
channels = 3
sample_size = 64
n = 20
device = torch.device("cuda")
model = UNet2DModel(
sample_size=sample_size,
in_channels=channels,
out_channels=channels,
layers_per_block=2,
block_out_channels=(128, 128, 256, 256, 512, 512),
norm_num_groups=32,
down_block_types=("DownBlock2D", "DownBlock2D", "DownBlock2D", "DownBlock2D", "AttnDownBlock2D", "DownBlock2D"),
up_block_types=("UpBlock2D", "AttnUpBlock2D", "UpBlock2D", "UpBlock2D", "UpBlock2D", "UpBlock2D"),
)
model = model.to(device=device)
model.train()
torch.cuda.synchronize(device)
start = time.time()
rng = torch.Generator(device="cuda").manual_seed(0)
for step in range(n):
input = torch.randn((batch_size, channels, sample_size, sample_size), device=device)
target = torch.randn((batch_size, channels, sample_size, sample_size), device=device)
bs = input.shape[0]
timestep = torch.randint(0, 1000, (bs,), generator=rng, device=device)
with torch.autocast(device_type="cuda", dtype=torch.float16):
output = model(input, timestep=timestep)
loss = torch.nn.functional.mse_loss(output.sample, target, reduction="none").mean()
scaler.scale(loss).backward()
torch.cuda.synchronize(device)
duration = timedelta(seconds=time.time() - start)
print(f"Train duration {duration} ({n/duration.total_seconds():.02f} it/s)")
model = model.to(dtype=torch.float16)
model.eval()
torch.cuda.synchronize(device)
start = time.time()
with torch.no_grad():
for i in range(n):
input = torch.randn(
(batch_size, channels, sample_size, sample_size), device=model.device, dtype=model.dtype
)
timestep = torch.randint(0, 1000, (batch_size,), device=model.device, dtype=model.dtype)
output = model(input, timestep=timestep)
torch.cuda.synchronize(device)
duration = timedelta(seconds=time.time() - start)
print(f"Eval duration {duration} ({n/duration.total_seconds():.02f} it/s)")
def main(mode):
import torch
print(f"Torch version: {torch.__version__}")
set_deterministic(mode)
go()
if __name__ == "__main__":
import sys
main(bool(int(sys.argv[1])))
```
With pytorch-1.13, performance is roughly equal whether determinism is enabled or not:
```txt
Torch version: 1.13.0a0+git49444c3
Deterministic: False
Train duration 0:00:02.284633 (8.75 it/s)
Eval duration 0:00:00.496868 (40.25 it/s)
Torch version: 1.13.0a0+git49444c3
Deterministic: True
Train duration 0:00:02.295383 (8.71 it/s)
Eval duration 0:00:00.490550 (40.77 it/s)
```
But with pytorch-2.0, performance degrades by 2-4x (or even worse on more complex cases):
```txt
Torch version: 2.0.0a0+gite9ebda2
Deterministic: False
Train duration 0:00:02.245685 (8.91 it/s)
Eval duration 0:00:00.487197 (41.05 it/s)
Torch version: 2.0.0a0+gite9ebda2
Deterministic: True
Train duration 0:00:05.965989 (3.35 it/s)
Eval duration 0:00:01.810603 (11.05 it/s)
```
The difference also happens without using mixed precision, but it is especially visible when using it. GPU usage goes from 100% in non-deterministic mode to <50% in deterministic mode, making me think some operations might be running on the CPU.
Given that enabling determinism did not degrade performance in 1.13, and that 2.0 is presented as "same as 1.x, but faster if you use compilation", I would expect similar results in 2.0. Did something change in 2.0 to explain this result? Does determinism need to be enabled differently?
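Not an explanation of the regression, but one configuration detail worth ruling out when comparing the two builds: `os.putenv` does not update `os.environ`, and cuBLAS reads `CUBLAS_WORKSPACE_CONFIG` when it initializes, so the safer pattern is to set it via `os.environ` before any CUDA work (or before launching the process):
```python
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # must be set before the first cuBLAS call

import torch
torch.use_deterministic_algorithms(True, warn_only=True)
```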
### Versions
torch versions 1.13.0a0+git49444c3 vs 2.0.0a0+gite9ebda2
Running on Linux version 4.15.0-213-generic (buildd@lcy02-amd64-079) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023
Hardware is a Quadro RTX 8000, driver version 460.84, using cuda-11.2.2 and libcudnn-8.9.4 in both cases.
cc @csarofeen @ptrblck @xwang233 @mruberry @kurtamohler
| 6 |
771 | 109,854 |
Directly support assert on Scalar, instead of forcing Tensor
|
triaged, oncall: pt2, module: export
|
### π Describe the bug
Repro:
```
import torch
import torch._dynamo.comptime
def f(x):
y = x.item()
torch.export.constrain_as_size(y)
return torch.zeros(y)
print(torch.export.export(f, (torch.tensor([3]),)))
```
This prints
```
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: i64[1]):
# File: /data/users/ezyang/c/pytorch/wz.py:4, code: y = x.item()
_local_scalar_dense: Sym(i4) = torch.ops.aten._local_scalar_dense.default(arg0_1); arg0_1 = None
# File: /data/users/ezyang/c/pytorch/torch/_export/pass_base.py:45, code: return NodeMetadata({"stack_trace": "".join(traceback.format_stack(limit=1))})
ge: Sym(i4 >= 0) = _local_scalar_dense >= 0
scalar_tensor: f32[] = torch.ops.aten.scalar_tensor.default(ge); ge = None
_assert_async = torch.ops.aten._assert_async.msg(scalar_tensor, '_local_scalar_dense is outside of inline constraint [0, inf].'); scalar_tensor = None
# File: /data/users/ezyang/c/pytorch/wz.py:5, code: torch.export.constrain_as_size(y)
sym_constrain_range_for_size = torch.ops.aten.sym_constrain_range_for_size.default(_local_scalar_dense, min = None, max = None)
# File: /data/users/ezyang/c/pytorch/wz.py:6, code: return torch.zeros(y)
full: f32[i4] = torch.ops.aten.full.default([_local_scalar_dense], 0, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'), pin_memory = False); _local_scalar_dense = None
# File: /data/users/ezyang/c/pytorch/torch/_export/pass_base.py:45, code: return NodeMetadata({"stack_trace": "".join(traceback.format_stack(limit=1))})
sym_size: Sym(i4) = torch.ops.aten.sym_size.int(full, 0)
ge_1: Sym(i4 >= 0) = sym_size >= 0; sym_size = None
scalar_tensor_1: f32[] = torch.ops.aten.scalar_tensor.default(ge_1); ge_1 = None
_assert_async_1 = torch.ops.aten._assert_async.msg(scalar_tensor_1, 'full.shape[0] is outside of inline constraint [0, inf].'); scalar_tensor_1 = None
return (full,)
```
Notice that the asserts are actually very odd:
```
sym_size: Sym(i4) = torch.ops.aten.sym_size.int(full, 0)
ge_1: Sym(i4 >= 0) = sym_size >= 0; sym_size = None
scalar_tensor_1: f32[] = torch.ops.aten.scalar_tensor.default(ge_1); ge_1 = None
_assert_async_1 = torch.ops.aten._assert_async.msg(scalar_tensor_1, 'full.shape[0] is outside of inline constraint [0, inf].'); scalar_tensor_1 = None
```
This is effectively creating a tensor *just* so that it can be sent to assert async. This is wasteful. We should just have a variant of assert that takes a scalar directly.
But I also question the premise that these asserts have to live in the graph. For example, in Inductor, it wouldn't be necessary to query `sym_size` on full to extract the integer; Inductor always has all shape env symbols in scope, and can just generate code to test any needed invariants on these symbols. IMO, there's a decent case to be made that these should be managed extralingually. Maybe this is the most convenient format for arbitrary export, but for Inductor I think we prefer something else.
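For illustration, this is roughly the "assert directly on the scalar" shape from the user side; whether the existing `torch._check` helper is the right vehicle, or a new scalar overload of `_assert_async` is needed, is exactly the open question, so treat this as a sketch rather than the proposed design:
```python
import torch

def f(x):
    y = x.item()
    # Assert on the (Sym)Int itself; no scalar_tensor / _assert_async round trip.
    torch._check(y >= 0, lambda: "y is outside of inline constraint [0, inf].")
    return torch.zeros(y)

print(f(torch.tensor([3])))
```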
cc @msaroufim @wconstab @bdhirsh @anijain2305 @aakhundov @tugsbayasgalan @gmagogsfm
### Versions
main
| 2 |
772 | 109,853 |
Fix S367052 to unblock ICVR MC3
|
fb-exported, Merged, ciflow/trunk, release notes: fx
|
Summary: Somehow "getitem" started to get Tensor starting from ads_ranking:996 and broke SDD pipelining FX-transformer. We need to skip the Tensor node in annotation.
Test Plan:
N4326037
# Before
{F1099052907}
# With this diff
{F1099052270}
Differential Revision: D49528046
| 20 |
773 | 109,850 |
torch._export has no logging
|
module: logging, triaged, oncall: pt2, module: export
|
### π Describe the bug
torch._export has a fairly sophisticated pass infrastructure. But there is no logging.
```
$ git grep log torch/_export/
torch/_export/serde/serialize.py:import logging
torch/_export/serde/serialize.py:log = logging.getLogger(__name__)
torch/_export/serde/serialize.py: log.warning(f"Symbol {k} did not appear in the graph that was deserialized") # noqa: G004
torch/_export/serde/serialize.py: The logic of this method:
torch/_export/serde/serialize.py: log.warning("Compiler doesn't have a version table for op namespace: {ns}. ", extra={"ns": namespace})
torch/_export/serde/upgrade.py:import logging
torch/_export/serde/upgrade.py:log = logging.getLogger(__name__)
torch/_export/serde/upgrade.py: log.warning("Missing an upgrader to upgrade to version {ver}.", extra={"ver": ver})
```
At minimum, I would expect there to be a way to log out each IR after each pass. Probably more things to log too.
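A hypothetical wrapper (not existing `torch._export` API) showing the kind of per-pass IR logging meant here, assuming each pass follows the `torch.fx` `PassResult` convention:
```python
import logging
from torch.fx import GraphModule

log = logging.getLogger("torch._export.passes")

def run_passes_with_logging(gm: GraphModule, passes):
    for p in passes:
        result = p(gm)              # assumes the pass returns a PassResult
        gm = result.graph_module
        log.debug(
            "IR after %s:\n%s",
            getattr(p, "__name__", type(p).__name__),
            gm.print_readable(print_output=False),
        )
    return gm
```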
cc @msaroufim @wconstab @bdhirsh @anijain2305 @gmagogsfm @tugsbayasgalan
### Versions
main
| 0 |
774 | 109,848 |
[dynamo][stream] Stream runtime operation in FX graph is ignored by remaining compiler
|
oncall: distributed, triaged, ezyang's list, oncall: pt2, module: aotdispatch, module: dynamo
|
### π Describe the bug
Hi,
Dynamo can capture the stream and its runtime API with this PR https://github.com/pytorch/pytorch/pull/93808.
Stream APIs can be traced into the FX graph and no graph break happens, but I found that the remaining compilers (AOTAutograd and Inductor) ignore the associated stream operations, for example `torch.cuda.set_stream`.
Here is a simple reproducer for the issue:
```
import torch
@torch.compile(backend='inductor')
def test(x):
x = x + 1.0
stream = torch.cuda.Stream()
torch.cuda.set_stream(stream)
x = x * 2.0
return x
input = torch.randn(16, 16, device='cuda')
for _ in range(10):
output = test(input)
```
After dynamo and FX, the whole graph is shown as below. No graph break and runtime APIs are traced into the FX graph correctly, which is expected.
```
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] ===== __compiled_fn_0 =====
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] def forward(self, L_x_ : torch.Tensor):
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] l_x_ = L_x_
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /home/pt-gpu/zejun/issue.py:5, code: x = x + 1.0
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] x = l_x_ + 1.0; l_x_ = None
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /home/pt-gpu/zejun/issue.py:6, code: stream = torch.cuda.Stream()
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] stream = torch.cuda.streams.Stream()
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /home/pt-gpu/zejun/issue.py:7, code: torch.cuda.set_stream(stream)
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] set_stream = torch.cuda.set_stream(stream); stream = None
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /home/pt-gpu/zejun/issue.py:8, code: x = x * 2.0
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] x_1 = x * 2.0; x = None
[2023-09-22 10:22:45,730] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] return (x_1,)
```
Then the graph after AOTAutograd is:
```
def forward(self, arg0_1):
add = torch.ops.aten.add.Tensor(arg0_1, 1.0); arg0_1 = None
mul = torch.ops.aten.mul.Tensor(add, 2.0); add = None
return (mul,)
```
The backend compiler, like Inductor, cannot know that the torch.add and the torch.mul need to execute on different CUDA streams. The information about stream switching is ignored, so the compiled Triton kernel fuses the add and mul into `triton_poi_fused_add_mul_0.run`:
```
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] def call(args):
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] arg0_1, = args
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] args.clear()
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] assert_size_stride(arg0_1, (16, 16), (16, 1))
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] with torch.cuda._DeviceGuard(0):
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] torch.cuda.set_device(0) # no-op to ensure context
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] buf0 = empty_strided((16, 16), (16, 1), device='cuda', dtype=torch.float32)
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] # Source Nodes: [x, x_1], Original ATen: [aten.add, aten.mul]
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] stream0 = get_cuda_stream(0)
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] triton_poi_fused_add_mul_0.run(arg0_1, buf0, 256, grid=grid(256), stream=stream0)
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] del arg0_1
[2023-09-22 10:22:46,665] [0/0] torch._inductor.graph.__output_code: [DEBUG] return (buf0, )
```
Add and mul should not be fused together because they execute on different streams. I launch the compiled `test` function many times, while `torch.cuda.set_stream` is only called on the 1st iteration.
Solution: Maybe there could be an annotation or a mechanism (a weak graph break) to tell the backend compilers (AOTAutograd or Inductor) about the operators' execution info, for example that some torch ops are launched on different streams and therefore cannot be fused. I will try to give a solution candidate; a sketch of one possible annotation follows below.
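Purely illustrative (none of this is existing Dynamo/Inductor API): one way such an annotation could look is to attach the stream that was active when each op was traced to the FX node's metadata, so a later fusion pass could refuse to fuse nodes whose streams differ.
```python
def annotate_streams(gm, stream_of_node):
    # `stream_of_node` is a hypothetical callback that reports which stream was
    # current when the node was traced.
    for node in gm.graph.nodes:
        if node.op == "call_function":
            node.meta["assigned_stream"] = stream_of_node(node)
    return gm

def can_fuse(node_a, node_b):
    return node_a.meta.get("assigned_stream") == node_b.meta.get("assigned_stream")
```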
Thank you.
I found this issue while working on device-agnostic stream capture and tracing the stream call methods into the FX graph: https://github.com/pytorch/pytorch/pull/108312
### Versions
Collecting environment information...
PyTorch version: 2.2.0a0+git772e104
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~20.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.31
Python version: 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.4.152
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 470.82.01
cuDNN version: Probably one of the following:
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7713 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 3720.7029
CPU min MHz: 1500.0000
BogoMIPS: 4000.04
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.2.0a0+git772e104
[pip3] torchvision==0.15.2a0+fa99a53
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.2.0a0+git772e104 dev_0 <develop>
[conda] torchvision 0.15.2a0+fa99a53 pypi_0 pypi
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
775 | 109,843 |
[Reland2] Update NVTX to NVTX3
|
triaged, open source, ciflow/binaries, topic: not user facing, ciflow/periodic
|
Another attempt to update NVTX to NVTX3. We now avoid changing NVTX header inclusion of existing code.
cc @izaitsevfb
| 20 |
776 | 109,838 |
[MPS] add support for heaviside
|
triaged, open source, release notes: mps, ciflow/mps
|
Fixes #[ISSUE_NUMBER](https://github.com/pytorch/pytorch/issues/77764#issuecomment-1710749840)
Implements torch.heaviside for mps backend
| 1 |
777 | 109,837 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
778 | 109,833 |
Implement Copy-on-write (COW) tensors
|
module: internals, triaged, module: viewing and reshaping
|
### π The feature, motivation and pitch
Copy-on-write (COW) tensor is a new kind of tensor that copies its underlying data to a new location if an operation tries to mutate the data.
COW tensors were proposed in [this document](https://docs.google.com/document/d/1kWIRUeixlgnOk1eXJSsbxOAKUKQFpmz0yzwLZUjWotg/edit?usp=sharing), which provides the full context of the idea and the problems it solves. Briefly, here's my summary of the problems, as far as I understand them at the moment:
* Currently, there are operators like `torch.reshape` which conditionally return either a copy or a view of a tensor, depending on striding. This semantic difference is an issue for the compiler, and it would be better to change these operators to always semantically return a copy. However, even if we semantically do a copy, we can use COW tensors to avoid actually doing the copy until it's absolutely necessary. (Today's conditional copy-vs-view behavior is illustrated in the sketch after this list.)
* When a user saves a tensor for backward and then an inplace operator mutates a view of that same tensor, the saved tensor no longer has the value that the user wanted to have saved. People have learned to avoid this by simply avoiding inplace operators in this case, but it would be better if users could use inplace operators without shooting themselves in the foot like this. If the saved tensor is a COW tensor, then "views" of it will actually just be copies, and the saved tensor's data will not change if the copies are mutated
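A minimal illustration of the first bullet's conditional copy-vs-view behavior of `torch.reshape` as it exists today (this is current eager semantics, not the proposed COW machinery):
```python
import torch

base = torch.arange(6)
t = base.reshape(2, 3).t()   # non-contiguous view of `base`
flat = t.reshape(6)          # striding forces a real copy here
flat[0] = 100
print(base)                  # unchanged: reshape copied

v = base.reshape(2, 3)       # contiguous case: reshape returns a view
v[0, 0] = 100
print(base)                  # base[0] is now 100: reshape aliased
```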
Some machinery already exists in [`c10/core/impl/cow/`](https://github.com/pytorch/pytorch/tree/c789ed6e62cdee9917ae08a432ab9b6be8d59e64/c10/core/impl/cow)
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 3 |
779 | 109,829 |
DISABLED test_kwargs (__main__.ActivationCheckpointingViaTagsTests)
|
triaged, module: flaky-tests, skipped, module: dynamo
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_kwargs&suite=ActivationCheckpointingViaTagsTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17013126575).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 24 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_kwargs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_activation_checkpointing.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 4 |
780 | 109,827 |
PIN disabled tests for the release
|
oncall: binaries, oncall: releng, topic: binaries
|
Code reference:
https://github.com/pytorch/pytorch/blob/main/tools/stats/import_test_stats.py#L78
| 0 |
781 | 109,819 |
ValueError: only one element tensors can be converted to Python scalars
|
needs reproduction, triaged, module: regression
|
### π Describe the bug
This used to work. Now it doesn't.
```python
import torch
from matplotlib.pyplot import contourf, axis
mesh = torch.arange(-1, 1.1, 0.1)
x, y = torch.meshgrid(mesh, mesh, indexing='xy')
z = torch.randn_like(x)
# Breaking line
contourf(x, y, z)
axis('square')
```
`math.isfinite(val)` says `ValueError: only one element tensors can be converted to Python scalars`
To fix it, one can simply run
```python
contourf(x.numpy(), y.numpy(), z)
axis('square')
```
### Versions
Latest.
| 8 |
782 | 109,810 |
[Inductor CUTLASS backend] Epilogue fusion codegen prototype
|
topic: not user facing, module: inductor, ciflow/inductor
|
Draft pull request to discuss the changes while development is still in progress
@aakhundov @ipiszy
This is unfinished, please don't review yet.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @muchulee8 @aakhundov @ColinPeppler
| 3 |
783 | 109,806 |
Incompatible dimensions error for FusedMatMul
|
module: onnx, triaged
|
### π Describe the bug
I am getting a `[ShapeInferenceError] Incompatible dimensions for matrix multiplication` error when attempting to load an ONNX model.
```
Traceback (most recent call last):
File "/Users/jajal/research/exporter_testing/bugs/torch/bug_1/repro_jit.py", line 11, in <module>
ort.InferenceSession("./model.onnx", providers=["CPUExecutionProvider"])
File "/Users/jajal/anaconda3/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/Users/jajal/anaconda3/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 471, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (/MatMul_FusedMatMulAndScale) Op (FusedMatMul) [ShapeInferenceError] Incompatible dimensions for matrix multiplication
```
This appears similar to this ONNXRuntime issue [microsoft/onnxruntime#7098](https://github.com/microsoft/onnxruntime/issues/7098) and PyTorch #55597 .
I found this using the [NNSmith Fuzzing Tool](https://github.com/ise-uiuc/nnsmith) on the `torch.onnx` converter. Due to the way NNSmith creates models I am unable to provide a minimal reproduction but I was able to trace the model and have the TorchScript version of it ([here](https://drive.google.com/file/d/1m5ouLOA5ZbIOEmF-eg55uXJ6x6g05t_9/view?usp=sharing)).
The reproduction requires the download of: [model](https://drive.google.com/file/d/1m5ouLOA5ZbIOEmF-eg55uXJ6x6g05t_9/view?usp=drive_link) and [inputs](https://drive.google.com/file/d/1cygSJOsJJdP6IXpKyJArMeexDZPXOhi-/view?usp=drive_link). After the download execute the following:
```python
import onnxruntime as ort
import torch
model_args = torch.load("model_args.pt")
loaded = torch.jit.load("traced_model.pt")
torch.onnx.export(loaded, model_args, "./model.onnx", opset_version=16)
ort.InferenceSession("./model.onnx", providers=["CPUExecutionProvider"])
```
### Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.5.2 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: version 3.27.5
Libc version: N/A
Python version: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.5.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.0
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.0
[pip3] torchviz==0.0.2
[conda] numpy 1.25.2 py310h3b2db8e_0
[conda] numpy-base 1.25.2 py310ha9811e2_0
[conda] pytorch-lightning 2.0.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torchaudio 2.0.0 py310_cpu pytorch
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.0 py310_cpu pytorch
[conda] torchviz 0.0.2 pypi_0 pypi
| 0 |
784 | 109,804 |
[Not for Land] Add verbose all-gather info
|
release notes: distributed (fsdp)
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #109804
Differential Revision: [D49507887](https://our.internmc.facebook.com/intern/diff/D49507887)
| 3 |
785 | 109,802 |
Bits types cannot be used under deterministic mode
|
triaged, module: determinism
|
#104995 made it impossible to use bits types under deterministic mode:
```
import torch
torch.set_deterministic_debug_mode("warn")
x=torch.empty(4, dtype=torch.bits8)
```
produces `RuntimeError: "fill_empty_deterministic_" not implemented for 'Bits8'`
While I'm sympathetic to the goals of #104995, I think that a blanket approach that forces filling all empty tensors is too strict. In many cases, determinism mode is set in production runs (because if only deterministic ops are used it imposes very little overhead and provides nice guarantees), and with valid code the empty calls are not a problem. The approach taken in #104995 requires paying the filling penalty unconditionally, and also requires implementing the fill op for all bits types, which PyTorch currently doesn't do.
Deterministic empty tensors are a valuable option to have, but in my opinion they should not be mixed with the existing deterministic mode.
cc @mruberry @kurtamohler
| 18 |
786 | 109,795 |
Surround num-destroyed-communicators with spaces
|
fb-exported, release notes: distributed (c10d)
|
Summary:
As title says
Lines look like this
{F1095151529}
Created from CodeHub with https://fburl.com/edit-in-codehub
Test Plan:
NOP change
waitforsandcastle
Sandcastle run
Reviewed By: shnavid
Differential Revision: D49378701
| 9 |
787 | 109,792 |
Fix tensor unpickling
|
open source, release notes: jit
|
Fixes #109791
Validates that a tensor has sizes and strides that are in bounds of its storage.
The PyTorch method `set_sizes_and_strides` is used during tensor unpickling from raw binary data, but there are no checks for the consistency of the tensor metadata.
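A Python sketch of the invariant the fix needs to enforce (illustrative only; the real check lives in C++ and must also handle negative strides): the largest element offset reachable through the sizes/strides must fit inside the allocated storage.
```python
def sizes_and_strides_in_bounds(sizes, strides, storage_numel, storage_offset=0):
    if any(s == 0 for s in sizes):
        return True  # an empty tensor touches no memory
    max_offset = storage_offset + sum((sz - 1) * st for sz, st in zip(sizes, strides))
    return 0 <= max_offset < storage_numel

print(sizes_and_strides_in_bounds([2, 3], [3, 1], storage_numel=6))  # True
print(sizes_and_strides_in_bounds([2, 3], [3, 1], storage_numel=4))  # False: out-of-bounds read
```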
| 7 |
788 | 109,791 |
Heap-buffer-overflow during tensor unpickling
|
module: serialization, triaged
|
### π Describe the bug
Hi! We've been fuzzing PyTorch project with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz).
We've found a heap-buffer-overflow error at `conv_serialization.h:179` in the jit module.
The out-of-bounds read occurs during traversal of the tensor's input data. The tensor accessor computes indexes into its inner data according to the tensor's `sizes` and `strides` values. `sizes` and `strides` are set for a tensor during its deserialization from raw binary data. The comment on the corresponding PyTorch API function (`set_sizes_and_strides`) says that this function does not check whether the requested sizes/strides are in bounds for the allocated storage, and that this is the responsibility of the caller.
PyTorch version: cdf7f3e78032a17600f701e9153e9bb49fad8ce7
OS: Ubuntu 20.04
How to reproduce
1. Build docker from [here](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch) and run the container:
sudo docker build -t oss-sydr-fuzz-pytorch .
sudo docker run --privileged --rm -v `pwd`:/fuzz -it oss-sydr-fuzz-pytorch /bin/bash
2. Run the target on this input: [crash.txt](https://github.com/pytorch/pytorch/files/12686267/crash.txt)
/load_afl crash.txt
3. You will see the following output:
```
==1003314==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x609000000158 at pc 0x000002b37962 bp 0x7fffffff7a10 sp 0x7fffffff7a08
READ of size 2 at 0x609000000158 thread T0
#0 0x2b37961 in void __gnu_cxx::new_allocator<long>::construct<long, short&>(long*, short&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/ext/new_allocator.h:150:27
#1 0x2b37961 in void std::allocator_traits<std::allocator<long> >::construct<long, short&>(std::allocator<long>&, long*, short&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/alloc_traits.h:512:8
#2 0x2b368db in long& std::vector<long, std::allocator<long> >::emplace_back<short&>(short&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/vector.tcc:115:6
#3 0x2b32249 in std::tuple<long, std::vector<long, std::allocator<long> >, std::vector<c10::optional<at::Tensor>, std::allocator<c10::optional<at::Tensor> > > > parse_conv_serialized_state<2u>(c10::IValue) /pytorch/aten/src/ATen/native/quantized/cpu/conv_serialization.h:179:19
#4 0x2b30276 in int register_conv_params<2>()::'lambda'(c10::IValue)::operator()(c10::IValue) const /pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp:410:49
#5 0x2b30014 in std::enable_if<!(std::is_member_pointer<std::decay<int register_conv_params<2>()::'lambda'(c10::IValue) const&>::type>::value), std::invoke_result<int register_conv_params<2>()::'lambda'(c10::IValue) const&, c10::IValue>::type>::type c10::guts::invoke<int register_conv_params<2>()::'lambda'(c10::IValue) const&, c10::IValue>(int register_conv_params<2>()::'lambda'(c10::IValue) const&, c10::IValue&&) /pytorch/c10/util/C++17.h:203:10
#6 0x2b2f7e7 in torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&)::operator()(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&) const /pytorch/torch/custom_class.h:328:11
#7 0x2b2f570 in c10::guts::infer_function_traits<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)>::type::return_type torch::detail::call_torchbind_method_from_stack<torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&), false, 0ul, 1ul>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&, std::vector<c10::IValue, std::allocator<c10::IValue> >&, std::integer_sequence<unsigned long, 0ul, 1ul>) /pytorch/torch/custom_class_detail.h:139:10
#8 0x2b2f408 in c10::guts::infer_function_traits<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)>::type::return_type torch::detail::call_torchbind_method_from_stack<torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&), false>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&, std::vector<c10::IValue, std::allocator<c10::IValue> >&) /pytorch/torch/custom_class_detail.h:153:10
#9 0x2b2f408 in torch::detail::BoxedProxy<void, torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&, torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&)&) /pytorch/torch/custom_class_detail.h:174:5
#10 0x2b2f38d in torch::jit::Function* torch::class_<ConvPackedParamsBase<2> >::defineMethod<torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&)>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::initializer_list<torch::arg>)::'lambda'(std::vector<c10::IValue, std::allocator<c10::IValue> >&)::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /pytorch/torch/custom_class.h:407:7
#11 0x2b2f38d in int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&) std::__invoke_impl<void, torch::jit::Function* torch::class_<ConvPackedParamsBase<2> >::defineMethod<torch::class_<ConvPackedParamsBase<2> >& torch::class_<ConvPackedParamsBase<2> >::def_pickle<int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), int register_conv_params<2>()::'lambda'(c10::IValue)>(int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&)&&, int register_conv_params<2>()::'lambda'(c10::IValue)&&)::'lambda'(c10::tagged_capsule<ConvPackedParamsBase<2> >, c10::IValue&&)>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int register_conv_params<2>()::'lambda'(c10::intrusive_ptr<ConvPackedParamsBase<2>, c10::detail::intrusive_target_default_null_type<ConvPackedParamsBase<2> > > const&), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::initializer_list<torch::arg>)::'lambda'(std::vector<c10::IValue, std::allocator<c10::IValue> >&)&, std::vector<c10::IValue, std::allocator<c10::IValue> >&>(std::__invoke_other, int register_conv_params<2>()::'lambda'(c10::IValue)&&, std::vector<c10::IValue, std::allocator<c10::IValue> >&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
#12 0x125654e in torch::jit::Function::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) /pytorch/aten/src/ATen/core/function.h:62:5
#13 0xec2c1c6 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1::operator()(c10::StrongTypePtr const&, c10::IValue) const /pytorch/torch/csrc/jit/serialization/import.cpp:172:7
#14 0xec2c1c6 in c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > std::__invoke_impl<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> >, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1&, c10::StrongTypePtr, c10::IValue>(std::__invoke_other, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1&, c10::StrongTypePtr&&, c10::IValue&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
#15 0xec2b9a0 in std::enable_if<is_invocable_r_v<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> >, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1&, c10::StrongTypePtr, c10::IValue>, c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > >::type std::__invoke_r<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> >, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1&, c10::StrongTypePtr, c10::IValue>(torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1&, c10::StrongTypePtr&&, c10::IValue&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:113:9
#16 0xec2b8ae in std::_Function_handler<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue), torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_1>::_M_invoke(std::_Any_data const&, c10::StrongTypePtr&&, c10::IValue&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:291:9
#17 0xeda0c63 in std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)>::operator()(c10::StrongTypePtr, c10::IValue) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:622:14
#18 0xed8062d in torch::jit::Unpickler::readGlobal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_9::operator()() const /pytorch/torch/csrc/jit/serialization/unpickler.cpp:863:20
#19 0xed8062d in void std::__invoke_impl<void, torch::jit::Unpickler::readGlobal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_9&>(std::__invoke_other, torch::jit::Unpickler::readGlobal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_9&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14
#20 0xed877c6 in torch::jit::Unpickler::readInstruction() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:545:7
#21 0xed85b27 in torch::jit::Unpickler::run() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:253:27
#22 0xed85781 in torch::jit::Unpickler::parse_ivalue() /pytorch/torch/csrc/jit/serialization/unpickler.cpp:206:3
#23 0xec9c7be in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) /pytorch/torch/csrc/jit/serialization/import_read.cpp:53:20
#24 0xec2b168 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /pytorch/torch/csrc/jit/serialization/import.cpp:184:10
#25 0xec27235 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize(c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:287:19
#26 0xec25644 in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::istream&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&, bool, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:389:25
#27 0xec2dcbe in torch::jit::import_ir_module(std::shared_ptr<torch::jit::CompilationUnit>, std::istream&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:325:10
#28 0xec30659 in torch::jit::load(std::istream&, c10::optional<c10::Device>, bool) /pytorch/torch/csrc/jit/serialization/import.cpp:485:10
#29 0x8d8636 in LLVMFuzzerTestOneInput /load.cc:42:14
#30 0x8d835d in ExecuteFilesOnyByOne /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255:7
#31 0x8d8168 in LLVMFuzzerRunDriver /AFLplusplus/utils/aflpp_driver/aflpp_driver.c
#32 0x8d7d28 in main /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300:10
#33 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
#34 0x817add in _start (/load_afl+0x817add)
0x609000000158 is located 2 bytes to the right of 22-byte region [0x609000000140,0x609000000156)
allocated by thread T0 here:
#0 0x89b534 in __interceptor_posix_memalign /llvm-project-llvmorg-14.0.6/compiler-rt/lib/asan/asan_malloc_linux.cpp:145:3
SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/ext/new_allocator.h:150:27 in void __gnu_cxx::new_allocator<long>::construct<long, short&>(long*, short&)
Shadow bytes around the buggy address:
0x0c127fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c127fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c127fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c127fff8000: fa fa fa fa fa fa fa fa fd fa fa fa fa fa fa fa
0x0c127fff8010: fa fa fa fa fa fa fa fa fd fa fa fa fa fa fa fa
=>0x0c127fff8020: fa fa fa fa fa fa fa fa 00 00 06[fa]fa fa fa fa
0x0c127fff8030: fa fa fa fa fa fa fa fa fd fa fa fa fa fa fa fa
0x0c127fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c127fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==1003314==ABORTING
```
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 14.0.6
CMake version: version 3.27.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7702 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1495.812
CPU max MHz: 2000.0000
CPU min MHz: 1500.0000
BogoMIPS: 3992.16
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.2.0a0+git761308a
[conda] Could not collect
cc @mruberry @mikaylagawarecki
| 0 |
789 | 109,784 |
[inductor] Add lowering for aten.take
|
open source, ciflow/trunk, module: inductor, ciflow/inductor
|
Adds lowering for `aten.take`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @lezcano
| 5 |
790 | 109,782 |
test/test_static_runtime.py: test_fork_wait_4 sometimes deadlocks
|
module: tests, triaged, module: multithreading
|
### π Describe the bug
Test `test_fork_wait_4` from `test/test_static_runtime.py` sometimes deadlocks.
The test spawns child threads, which in turn spawn grandchild threads, and the child threads wait for their grandchildren before exiting.
If there are more child threads than available threads in the threadpool, scheduling can end up with every threadpool thread occupied by a child waiting on its grandchildren, so no grandchild can ever start because there is no free thread left in the pool.
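The same exhaustion pattern can be shown with a plain `ThreadPoolExecutor`, independent of Static Runtime (a standalone sketch, not the failing test itself):
```
from concurrent.futures import ThreadPoolExecutor

POOL = ThreadPoolExecutor(max_workers=4)

def child():
    return 1

def parent():
    # Each parent occupies a worker while it waits on a child in the same pool.
    return POOL.submit(child).result()

# Two parents plus their two children fit into four workers, so this finishes.
print([f.result() for f in [POOL.submit(parent) for _ in range(2)]])
# Submitting more parents than workers can leave every worker blocked inside
# parent(), with no worker free to run any child: the pool-exhaustion deadlock.
```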
If it doesn't reproduce for you because your threadpool has more than 10 threads available, increasing the child and grandchild thread counts may help trigger the issue, like this:
<details>
<summary>New test diff</summary>
```
diff --git a/test/test_static_runtime.py b/test/test_static_runtime.py
index 032e6776407..16d628d4a8f 100644
--- a/test/test_static_runtime.py
+++ b/test/test_static_runtime.py
@@ -323,6 +323,36 @@ class TestStaticModule(TestCase):
output_test.wait()
torch.testing.assert_close(output_test.value(), output_ref)
+ """
+ Test Case: To test fork/wait operation in a graph on
+ multiple nested fork/wait operations with a lot of threads
+ """
+ def test_fork_wait_5(self):
+ input = torch.ones(3, 3)
+ num_forks = 100
+ num_child_forks = 100
+ torch_graph = torch.jit.script(fork_wait_graph4)
+ static_runtime_module = StaticModule(torch_graph)
+ output_ref = torch_graph(input, num_forks, num_child_forks)
+ output_test = static_runtime_module(input, num_forks, num_child_forks)
+ torch.testing.assert_close(output_test, output_ref)
+
+ """
+ Test Case: To test fork/wait operation in a graph with multiple
+ nested fork/wait operations on runAsync API returning future
+ """
+ def test_fork_wait_5_async(self):
+ input = torch.ones(3, 3)
+ num_forks = 100
+ num_child_forks = 100
+ torch_graph = torch.jit.script(fork_wait_graph4)
+ static_runtime_module = StaticModule(torch_graph)
+ output_ref = torch_graph(input, num_forks, num_child_forks)
+ output_test = static_runtime_module.runAsync(
+ (input, num_forks, num_child_forks), {})
+ output_test.wait()
+ torch.testing.assert_close(output_test.value(), output_ref)
+
"""
Test Case: To test exception handling in fork/wait
operation. Add.Tensor op is called for tensors with
```
</details>
Reproduces on current main branch.
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitefc7c36
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Gentoo Linux (x86_64)
GCC version: (Gentoo 12.3.1_p20230526 p2) 12.3.1 20230526
Clang version: 16.0.6
CMake version: version 3.26.5
Libc version: glibc-2.37
Python version: 3.11.4 (main, Aug 17 2023, 22:32:09) [GCC 11.3.1 20230427] (64-bit runtime)
Python platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-11th_Gen_Intel-R-_Core-TM-_i7-1165G7_@_2.80GHz-with-glibc2.37
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 54%
CPU max MHz: 4700.0000
CPU min MHz: 400.0000
BogoMIPS: 5606.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] torch==2.1.0a0+gitefc7c36
[conda] Could not collect
cc @VitalyFedyunin @mruberry
| 1 |
791 | 109,781 |
`torch.embedding`, `weight[indices]`, `torch.index_select` returns random data with indices on meta device
|
module: bc-breaking, triaged, module: embedding, ezyang's list, module: meta tensors, topic: bc breaking
|
### π Describe the bug
```python
import torch
weight = torch.ones(2,3) # CPU
indices = torch.zeros(4,5, device="meta", dtype=torch.int32)
y1 = torch.embedding(weight, indices)
y1a = torch.embedding(weight, indices)
y2 = weight[indices]
y2a = weight[indices]
y3 = torch.index_select(weight, dim=0, index=indices.flatten()).reshape(indices.shape + weight.shape[1:])
y3a = torch.index_select(weight, dim=0, index=indices.flatten()).reshape(indices.shape + weight.shape[1:])
```
Now `y1`, `y1a`, `y2`, `y2a`, `y3`, and `y3a` contain random data, and all of the `y*` values differ from each other, so the result is non-deterministic.
Actually, I also wondered why this does not throw an exception so that it only works when both inputs are on the same device.
<s>Or maybe `torch.embedding` actually supports mixing devices, e.g. `indices` on CPU?</s>
(Slightly off-topic question: What is really the difference between `torch.index_select(weight, dim=0, index=indices.flatten())` + `reshape`, `weight[indices]` and `torch.embedding(weight, indices)`? I don't really know when to use what.)
I'm not really sure what output to expect. Maybe also some tensor on the "meta" device?
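Until the devices are checked by PyTorch itself, a defensive wrapper is an easy workaround (a sketch of my own; `checked_embedding` is a hypothetical helper, not a PyTorch API):
```
import torch

def checked_embedding(weight: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    # Fail loudly on the silent mixed-device case shown above.
    if weight.device != indices.device:
        raise RuntimeError(
            f"embedding weight is on {weight.device} but indices are on {indices.device}"
        )
    return torch.embedding(weight, indices)
```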
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.6.3 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.202)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:24:02) [Clang 11.1.0 ] (64-bit runtime)
Python platform: macOS-12.6.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] evotorch==0.1.0
[pip3] lovely-numpy==0.2.8
[pip3] mypy==0.991
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.2
[pip3] torch==2.0.1
[pip3] torch-tb-profiler==0.4.1
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchvision==0.15.2
[conda] Could not collect
cc @ezyang @gchanan @eellison @bdhirsh
| 1 |
792 | 109,778 |
an issue occurs while `loss.backward()`: You are trying to call the hook of a dead module
|
needs reproduction, module: autograd, module: nn, triaged
|
### π Describe the bug
Can be reproduced in:
https://github.com/Ethan-Chen-plus/dp_with_llm/blob/master/main-csv-error3-new.ipynb
As shown in the notebook, `trainer.training_step` works fine, and so does `loss.backward()`.
However, if we use `trainer.train()`, it fails with "You are trying to call the hook of a dead module" (screenshots of the traceback were attached to the original issue).
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-6ubuntu2) 7.5.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
Stepping: 7
CPU MHz: 3600.062
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 32 MiB
L3 cache: 44 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] functorch==0.2.1
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2a0+cxx11.abi
[pip3] triton==2.0.0
[conda] blas 1.0 mkl conda-forge
[conda] cudatoolkit 11.6.0 hd754f66_10 conda-forge
[conda] functorch 0.2.1 pypi_0 pypi
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.2.1 intel_16993 conda-forge
[conda] numpy 1.25.2 py39h6183b62_0 conda-forge
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 py39_cu118 pytorch
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision 0.15.2 py39_0 pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 5 |
793 | 109,777 |
Wrong vector shift results on PowerPC
|
triaged, module: vectorization, module: POWER
|
### π Describe the bug
This is basically https://github.com/pytorch/pytorch/issues/70904 which is still present for PPC after https://github.com/pytorch/pytorch/pull/98511 added vector operations.
Basically: shifting a vector by out-of-range amounts results in undefined behavior and sporadic wrong results.
For example, test_non_contig_bitwise_right_shift_cpu_int32 fails because of right shifts by too-large amounts, and test_shift_limits_cpu_int16 fails due to left shifts by negative amounts. The latter is very much understandable given that #98511 uses a `reinterpret_cast` to the unsigned vector type. A minimal scalar sketch of the clamping this requires is shown after the test list below.
Reproduced with `python run_test.py -i test_binary_ufuncs`, which fails:
- test_contig_vs_every_other_bitwise_right_shift_cpu_int16
- test_contig_vs_every_other_bitwise_right_shift_cpu_int32
- test_contig_vs_every_other_bitwise_right_shift_cpu_int64
- test_non_contig_bitwise_right_shift_cpu_int16
- test_non_contig_bitwise_right_shift_cpu_int32
- test_non_contig_bitwise_right_shift_cpu_int64
- test_shift_limits_cpu_int16
- test_shift_limits_cpu_int32
- test_shift_limits_cpu_int64
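For illustration only, a scalar Python sketch of the clamping a portable vector shift needs. The helper names and the exact saturation convention (right shift by >= bit width behaves like shifting in sign bits, out-of-range left shift yields 0) are assumptions for the example, not necessarily PyTorch's exact semantics; the point is that out-of-range amounts must be handled explicitly rather than passed to the hardware shift.
```
def clamped_right_shift(value: int, amount: int, bits: int = 16) -> int:
    # Out-of-range amounts saturate to bits-1 instead of hitting UB.
    if amount < 0 or amount >= bits:
        amount = bits - 1
    return value >> amount  # Python's >> is an arithmetic shift

def clamped_left_shift(value: int, amount: int, bits: int = 16) -> int:
    # Out-of-range left shifts are defined as 0 here (an assumed convention).
    if amount < 0 or amount >= bits:
        return 0
    mask = (1 << bits) - 1
    result = (value << amount) & mask
    # Reinterpret the masked result as a signed fixed-width integer.
    return result - (1 << bits) if result >= (1 << (bits - 1)) else result

print(clamped_right_shift(-4, 100))  # -1, not garbage
print(clamped_left_shift(3, -5))     # 0, instead of undefined behavior
```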
### Versions
Using PyTorch 2.0.1 with the above PRs applied as patches.
| 2 |
794 | 109,774 |
[DDP + Dynamo] Tracing DDP AllReduce
|
oncall: distributed, triaged, module: ddp, oncall: pt2, module: dynamo
|
### π The feature, motivation and pitch
**Background**
DistributedDataParallel (DDP) uses `Reducer` to bucket and issue `allreduce` calls. The main entry point of `Reducer` is the gradient hook of `AccumulateGrad`. However, neither `Reducer` nor the gradient hook is traceable by Dynamo. As a result, when using `torch.compile` to trace a DDP module, all the `allreduce` buckets are only called at the end of the backward graph, resulting in zero overlap between communication and backward computation. The current solution is to use `DDPOptimizer` to analyze the bucketing process of `Reducer` and break the forward graph into multiple graphs so that each corresponding backward graph issues one `allreduce`.
With the recent development of CompiledAutograd, we would like to revisit tracing the whole of DDP into a graph and capturing all the `allreduce` calls in the backward graph.
**Approach**
The initial concept involves utilizing CompiledAutograd to trace the gradient hook registered by `Reducer` and properly record the `allreduce` calls. We don't need to trace the entire `Reducer`, only the bucketing process and the `allreduce` calls. While CompiledAutograd can trace the gradient hook as a node, the gradient hook function must be traceable by Dynamo to be inlined into the graph. The challenge is that the gradient hook of `Reducer` is written in C++ and is not a torch op, so Dynamo cannot trace it. While it is possible to address this issue, the solutions can be difficult to maintain, and handling rebucketing and graph breaks remains uncertain.
As an alternative approach, we propose completely deactivating `Reducer` and registering gradient hooks at the Python layer during DDP compilation. Unlike the gradient hook within `Reducer`, which has to manage bucketing, the Python gradient hook solely invokes the functional `allreduce` for the relevant gradient, so it is simple to maintain. This tracing process will result in very poor communication performance, but the `allreduce` calls are completely captured, allowing subsequent optimization. We will implement the `allreduce` bucketing in Inductor.
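A minimal sketch of what such Python-level hooks could look like (my own illustration of the idea, not the actual implementation; it assumes `torch.distributed` is already initialized and deliberately performs one unbucketed `allreduce` per gradient):
```
import torch
import torch.distributed as dist

def install_naive_allreduce_hooks(module: torch.nn.Module, process_group=None):
    world_size = dist.get_world_size(process_group)

    def make_hook(param: torch.nn.Parameter):
        def hook(*_):
            # One allreduce per parameter gradient, intentionally unbucketed;
            # bucketing/fusion is left to the compiler (Inductor) later.
            dist.all_reduce(param.grad, group=process_group)
            param.grad.div_(world_size)
        return hook

    for p in module.parameters():
        if p.requires_grad:
            # register_post_accumulate_grad_hook exists in recent PyTorch;
            # older versions would need p.register_hook with extra care.
            p.register_post_accumulate_grad_hook(make_hook(p))
```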
**Pros/Cons**
A key advantage of the proposed solution is that it inherently supports graph breaks. Since the graph does not contain any `allreduce` buckets, a graph break won't disturb the allreduce process, and the bucketing function in Inductor can continue functioning smoothly. Additionally, rebucketing is seamlessly managed by Dynamo/CompiledAutograd: when the set of used parameters changes, Dynamo and CompiledAutograd will automatically recompile the model and trigger the rebucketing process.
A potential concern with the proposed solution is compilation speed, which can be slow for large models in particular. The first iteration has to invoke unbucketed `allreduce` calls, which can be quite slow.
**Milestones**
1. Trace the gradient hook registered in the Python layer with `CompiledAutograd`
2. Trace DDP communication hook (the basic ones, like bf16_compress).
3. Disable `Reducer` when compiling.
4. Implement bucketing in Inductor.
We have validated milestones 1 and 2 and are working on milestones 3 and 4. After all four milestones are done, we will have an MVP of fully traceable DDP.
cc @wconstab, @jansel, @yf225, @voznesenskym, @ezyang, @animesh
### Alternatives
The proposed approach may also work with https://github.com/pytorch/pytorch/pull/109537/. We will verify with the PR after it is landed.
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
| 2 |
795 | 109,770 |
Slow performance when running torch.jit traced model with Flash Attention using libtorch on Windows
|
module: windows, triaged, oncall: transformer/mha
|
### π Describe the bug
I have encountered a performance problem when executing a torch.jit-traced model that uses Flash Attention with C++ libtorch on Windows. The inference speed on Windows is 2 to 3 times slower than on Linux, leading me to question whether Flash Attention is genuinely being used.
While there are no warnings with the stable version (2.0.1) of PyTorch, when I use the nightly version (2.2.0.dev20230920+cu118) I receive the following warning:
```bash
[W sdp_utils.cpp:234] Warning: Torch was not compiled with flash attention. (function use_flash_attention)
```
I'll provide more detailed steps to describe the bug:
1. My model has a module that employs Flash Attention, as shown in the code below:
```python
# Use flash attention
self.backend_map = {
"enable_math": False,
"enable_flash": True,
"enable_mem_efficient": False,
}
# Run scaled dot product attention
with sdp_kernel(**self.backend_map):
a = F.scaled_dot_product_attention(q.half(), k.half(), v.half()).transpose(2, 3)
```
2. I then convert it to a torch.jit traced model, as illustrated below:
```python
# Trace model
with torch.no_grad():
traced_script_module = torch.jit.trace(
model,
input,
strict=True
)
# Save model
traced_script_module.save("traced_model.pt")
```
3. I download libtorch for Windows.
```
https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.0.1%2Bcu118.zip
```
4. I compile and then execute using libtorch
```cpp
The C compiler identification is MSVC 19.37.32822.0
torch::jit::script::Module module;
module = torch::jit::load(model_path);
at::Tensor outputTensor= module(inputs).toTensor().squeeze(0);
```
5. While it runs without any errors or warnings, the speed is more than twice as slow on Windows compared to its performance on Linux.
6. Upon switching to the nightly version (2.2.0.dev20230920+cu118) of libtorch and recompiling, the execution time remains consistent with the previous result. However, I now see the warning message mentioned above.
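One way to check whether the flash kernel is actually being picked on a given build is to time `scaled_dot_product_attention` with only one backend enabled at a time (a diagnostic sketch of my own, in Python for brevity; shapes and iteration counts are arbitrary, and if flash is not compiled in, the flash-only run will raise instead of silently falling back):
```python
import torch
from torch.backends.cuda import sdp_kernel

def time_sdpa(enable_flash: bool, iters: int = 50) -> float:
    q = torch.randn(8, 16, 1024, 64, device="cuda", dtype=torch.half)
    k, v = q.clone(), q.clone()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with sdp_kernel(enable_flash=enable_flash,
                    enable_math=not enable_flash,
                    enable_mem_efficient=False):
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            torch.nn.functional.scaled_dot_product_attention(q, k, v)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per call

print("flash:", time_sdpa(True), "ms  math:", time_sdpa(False), "ms")
```
If the two timings are close, or the flash-only path errors out, the traced module is most likely not hitting the flash kernel on that build.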
Any assistance on this matter would be greatly appreciated!
### Versions
#### The machine used to torch.jit.trace the model (Ubuntu)
Collecting environment information...
PyTorch version: 2.1.0a0+b5021ba
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7513 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5190.25
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+b5021ba
[pip3] torch-tensorrt==1.5.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0
[conda] Could not collect
#### The machine used to run traced model with libtorch (Windows)
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: N/A
Python version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: N/A
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 537.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture=9
CurrentClockSpeed=3801
DeviceID=CPU0
Family=107
L2CacheSize=4096
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3801
Name=AMD Ryzen 7 5800X 8-Core Processor
ProcessorType=3
Revision=8448
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
| 1 |
796 | 109,768 |
LLaMA-2 70b model convert from PyTorch to ONNX format
|
module: onnx, triaged
|
### π Describe the bug
An error occurs when converting the PyTorch LLaMA-2 70b model to ONNX format:
```python
from optimum.onnxruntime import ORTModelForCausalLM
name = "meta-llama/Llama-2-70b-hf"
model = ORTModelForCausalLM.from_pretrained(
name,
export=True,
use_auth_token=True,
)
model.save_pretrained(name.split("/")[-1] + "-onnx")
```
This produces the output below:
```
Loading the tokenizer from the `special_tokens_map.json` and the `added_tokens.json` will be removed in `transformers 5`, it is kept for forward compatibility, but it is recommended to update your `tokenizer_config.json` by uploading it again. You will see the new `added_tokens_decoder` attribute that will store the relevant information.
Using framework PyTorch: 2.0.0+cu117
Overriding 1 configuration item(s)
- use_cache -> True
/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:599: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:123: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if seq_len > self.max_seq_len_cached:
/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:352: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:359: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:369: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
Saving external data to one file...
Using framework PyTorch: 2.0.0+cu117
Overriding 1 configuration item(s)
- use_cache -> True
Asked a sequence length of 16, but a sequence length of 1 will be used with use_past == True for `input_ids`.
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Traceback (most recent call last):
File "/workdir/hf-optimum/model_convert.py", line 4, in <module>
model = ORTModelForCausalLM.from_pretrained(
File "/workdir/hf-optimum/optimum/onnxruntime/modeling_ort.py", line 652, in from_pretrained
return super().from_pretrained(
File "/workdir/hf-optimum/optimum/modeling_base.py", line 372, in from_pretrained
return from_pretrained_method(
File "/workdir/hf-optimum/optimum/onnxruntime/modeling_decoder.py", line 567, in _from_transformers
main_export(
File "/workdir/hf-optimum/optimum/exporters/onnx/__main__.py", line 446, in main_export
_, onnx_outputs = export_models(
File "/workdir/hf-optimum/optimum/exporters/onnx/convert.py", line 760, in export_models
export(
File "/workdir/hf-optimum/optimum/exporters/onnx/convert.py", line 863, in export
export_output = export_pytorch(
File "/workdir/hf-optimum/optimum/exporters/onnx/convert.py", line 580, in export_pytorch
onnx_export(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py", line 1548, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/jit/_trace.py", line 1268, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "/workdir/hf-optimum/optimum/exporters/onnx/model_patcher.py", line 102, in patched_forward
outputs = self.orig_forward(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 824, in forward
outputs = self.model(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 712, in forward
layer_outputs = decoder_layer(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 428, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 341, in forward
key_states = torch.cat([past_key_value[0], key_states], dim=2)
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 64 but got size 8 for tensor number 1 in the list.
```
### Versions
/home/onnxruntimedev/miniconda3/lib/python3.9/site-packages/torch/cuda/__init__.py:173: UserWarning:
NVIDIA H100 80GB HBM3 with CUDA capability sm_90 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75 sm_80 sm_86.
If you want to use the NVIDIA H100 80GB HBM3 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.9.2 (default, Mar 3 2021, 20:02:32) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8468
Stepping: 8
CPU MHz: 2100.000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.0.0
[pip3] torch-ort==1.16.0
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect
| 1 |
797 | 109,763 |
[ONNX] Remove the deprecated function `_export`
|
module: onnx, triaged, open source, Merged, Reverted, ciflow/trunk, release notes: onnx
|
The `_export` API was deprecated and should be removed after 2.0.
See: https://github.com/pytorch/pytorch/pull/107208
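For anyone still calling the private helper, the supported entry point is the public `torch.onnx.export`; a minimal sketch with a placeholder model and output path (neither is taken from this issue):
```python
import torch

# Placeholder model and input, purely for illustration.
model = torch.nn.Linear(4, 2)
dummy_input = torch.randn(1, 4)

# Public API that replaces direct calls into the private torch.onnx.utils._export.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)
```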
| 27 |
798 | 109,762 |
DTensor: summon full tensor API?
|
oncall: distributed
|
### 🚀 The feature, motivation and pitch
Wondering if there's an API, or plans for one, to support materializing the full DTensor from its shards. I couldn't quite find anything in https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/api.py, but I might be missing something.
This could be useful for debugging / iterating on issues related to DTensor. Currently, the following seems like a workaround:
```python
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh

# redistribute to single world size
one_ws = DeviceMesh(dist.get_rank())
redist = dtensor.redistribute(one_ws)
whole_tensor = redist.to_local()
```
but not sure.
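Another sketch, assuming the current `torch.distributed._tensor` API (untested here, so treat it as a guess rather than a confirmed recipe): redistribute to a `Replicate()` placement on the tensor's existing mesh, which should give every rank the full tensor via `to_local()`:
```python
from torch.distributed._tensor import Replicate

# Assumes `dtensor` is an existing DTensor and `mesh` is the DeviceMesh it was
# sharded over; replicating materializes the full tensor on every rank.
replicated = dtensor.redistribute(device_mesh=mesh, placements=[Replicate()])
whole_tensor = replicated.to_local()
```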
### Alternatives
_No response_
### Additional context
_No response_
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu
| 3 |
799 | 109,756 |
[vision hash update] update the pinned vision hash
|
open source, ciflow/trunk, topic: not user facing, ciflow/inductor
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/_update-commit-hash.yml).
Update the pinned vision hash.
| 4 |
800 | 109,753 |
fp16 parity issue with traced code on GPU
|
oncall: jit, triaged, module: half, module: export
|
### 🐛 Describe the bug
I have more details in [this colab notebook](https://colab.research.google.com/drive/1lf8RjX2p5F4E3MzN_blbRICughqHkSJP?usp=sharing), but the issue comes down to a specific sequence of operations that produces different results in the regular and traced models when running on a Tesla T4 GPU. The colab notebook was run with a T4 GPU runtime and `torch.__version__=="2.2.0.dev20230920+cu118"`.
Here is the code that fails. It does not fail on CPU, or if the operations in `CoordTransform` are simplified.
```python
import torch
print(torch.__version__)
gen = torch.random.manual_seed(123456789)
model_input = torch.rand(512, 512, 2, generator=gen, dtype=torch.float16).to(torch.device("cuda"))
class CoordTransform(torch.nn.Module):
def forward(self, x_i: torch.Tensor) -> torch.Tensor:
x = torch.arange(0, 512).to(x_i)
x = x + x_i[0, :, 0]
x = x * 0.4
return x
model = CoordTransform()
ts_model = torch.jit.trace(model, model_input)
res = model(model_input)
res_ts = ts_model(model_input)
torch.testing.assert_close(res, res_ts, rtol=1e-4, atol=1e-1)
```
Output:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-20-931a9fd6043a> in <cell line: 6>()
4 res = model(model_input)
5 res_ts = ts_model(model_input)
----> 6 torch.testing.assert_close(res, res_ts, rtol=1e-4, atol=1e-1)
/usr/local/lib/python3.10/dist-packages/torch/testing/_comparison.py in assert_close(actual, expected, allow_subclasses, rtol, atol, equal_nan, check_device, check_dtype, check_layout, check_stride, msg)
1518 if error_metas:
1519 # TODO: compose all metas into one AssertionError
-> 1520 raise error_metas[0].to_error(msg)
1521
1522
AssertionError: Tensor-likes are not close!
Mismatched elements: 58 / 512 (11.3%)
Greatest absolute difference: 0.125 at index (282,) (up to 0.1 allowed)
Greatest relative difference: 0.00110626220703125 at index (282,) (up to 0.0001 allowed)
```
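One hypothetical way to narrow this down (not part of the original report) is to compare both fp16 results against a float32 reference, which would show whether the traced graph is merely accumulating in a different order at half precision:
```python
# Hypothetical diagnostic: run the same module in float32 and measure how far
# each fp16 result (eager vs. traced) drifts from that reference.
ref = model(model_input.float())
print((res.float() - ref).abs().max(), (res_ts.float() - ref).abs().max())
```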
### Versions
Collecting environment information...
PyTorch version: 2.2.0.dev20230920+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.120+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.33
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0.dev20230920+cu118
[pip3] torchaudio==2.2.0.dev20230920+cu118
[pip3] torchdata==0.6.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.17.0.dev20230920+cu118
[pip3] triton==2.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 10 |