Serial Number | Issue Number | Title | Labels | Body | Comments
---|---|---|---|---|---|
5,501 | 79,118 |
About the source code of
|
triaged, topic: docs
|
### 📚 The doc issue
I am actually trying to find the source implementation of a very simple function, cosine similarity, because of some concerns about its computation scheme.
However, when I looked at [the website](https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_similarity.html), I could not locate the source code. I also failed to find the function in the original GitHub repository (or I am being too lazy to dig through everything; I stopped after several jumps using F12 in VS Code...).
I am aware that such "torch.nn.functional" functions are not implemented purely in Python but are wrapped with Cython (correct me if I'm wrong). But I would still like to see the inner working mechanism. Could you please kindly point me to where the actual implementation is located in the repo? Cython or even C/C++ code would be fine. Thanks!
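For concreteness, a hedged reference sketch of what the documented formula computes (this is only the docs formula written out, not the actual ATen implementation the issue asks to locate):
```python
import torch
import torch.nn.functional as F

def cosine_similarity_ref(x1, x2, dim=1, eps=1e-8):
    # docs formula: x1·x2 / max(||x1|| * ||x2||, eps), reduced along `dim`
    dot = (x1 * x2).sum(dim)
    return dot / (x1.norm(dim=dim) * x2.norm(dim=dim)).clamp_min(eps)

a, b = torch.randn(4, 8), torch.randn(4, 8)
print(cosine_similarity_ref(a, b))
print(F.cosine_similarity(a, b, dim=1))  # should agree to float tolerance for typical inputs
```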
### Suggest a potential alternative/fix
I would like to suggest at least adding a "source location" link for the functions that can be browsed on the website. It does not have to be Python code, or even code contained in the repo, but at least something that points people to where they can open ideas or discussions.
| 1 |
5,502 | 79,091 |
Error with Named Tensors and multiple threads
|
triaged, module: named tensor
|
### 🐛 Describe the bug
When using Named Tensors I got an exception when the number of operations was too large.
Steps to reproduce:
```python
import torch

torch.randn((50000, 2), names=('K', 'M')).mean('M').mean('K')
```
We get the exception:
```
RuntimeError: aten::unsqueeze is not yet supported with named tensors. Please drop names via `tensor = tensor.rename(None)`, call the op with an unnamed tensor, and set names on the result of the operation.
```
However, if we instead run
```python
torch.randn((500, 2), names=('K', 'M')).logsumexp('M').mean('K')
```
we get no errors.
Moreover, if we run the original code but without multithreading, there is no error either:
```python
torch.set_num_threads(1)
torch.randn((50000, 2), names=('K', 'M')).mean('M').mean('K')
```
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.11.1
Libc version: N/A
Python version: 3.9.7 (default, Sep 3 2021, 12:45:31) [Clang 12.0.0 (clang-1200.0.32.29)] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] Could not collect
```
cc @zou3519
| 1 |
5,503 | 79,089 |
Improve PrimTorch testing for view consistency.
|
module: tests, triaged, module: viewing and reshaping, module: primTorch
|
### 🚀 The feature, motivation and pitch
Currently, we check view consistency by checking if `a._is_view` and `b._is_view` are both correct: https://github.com/pytorch/pytorch/blob/master/test/test_ops.py#L394
However, this is not particularly correct. As @ezyang notes (https://github.com/pytorch/pytorch/pull/78994#discussion_r891721598), view is an autograd concept. Thus, even though the ATen version of stack returns a view (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorShape.cpp#L2213), `a._is_view` returns False for the ATen implementation.
This is problematic for a variety of ops, including `rot90`: https://github.com/pytorch/pytorch/pull/78080#discussion_r885104613
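As a hedged illustration of the gap (not the PrimTorch test itself), one can compare `_is_view()` with a plain storage-aliasing check; the helper below is hypothetical:
```python
import torch

def shares_storage(a: torch.Tensor, b: torch.Tensor) -> bool:
    # aliasing check based on storage identity rather than autograd's view tracking
    return a.storage().data_ptr() == b.storage().data_ptr()

x = torch.arange(6.0)
y = x.view(2, 3)         # a true view: aliases x's storage and reports _is_view() == True
z = x.reshape(2, 3) + 0  # fresh storage: not a view by either measure
print(shares_storage(x, y), y._is_view())  # True True
print(shares_storage(x, z), z._is_view())  # False False
```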
cc: @mruberry @ngimel @ezyang
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry @ezyang @ngimel
| 3 |
5,504 | 79,083 |
CI workflow creates too many tags in RSS feed
|
module: ci, triaged
|
Hi, I use GitHub's RSS feed to track the releases of many repos. However, it seems that the PyTorch RSS feed is flooded by CI workflow tags (see https://github.com/pytorch/pytorch/releases.atom and https://github.com/pytorch/pytorch/tags).
Is it possible to stop tagging CI workflow runs so that I receive only updates for important builds in the RSS feed?
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 4 |
5,505 | 79,073 |
Multi-node, Multi-GPU set up tutorial for Slurm cluster
|
oncall: distributed, triaged
|
### 🚀 The feature, motivation and pitch
We need a tutorial/doc about setting up a multi-node, multi-GPU environment for a Slurm cluster. https://github.com/aivanou/disttraining/tree/main/slurm is a good starting point, but we need to polish it, make sure it works, and publish a tutorial/doc on it.
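As a placeholder until such a doc exists, a minimal hedged sketch of the process-group setup under Slurm (assuming `srun` launches one task per GPU and that `MASTER_ADDR`/`MASTER_PORT` are exported in the sbatch script):
```python
import os
import torch
import torch.distributed as dist

def init_distributed_from_slurm():
    # Slurm exports these for every task launched by srun
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])
    dist.init_process_group(
        backend="nccl",
        init_method=f"tcp://{os.environ['MASTER_ADDR']}:{os.environ['MASTER_PORT']}",
        rank=rank,
        world_size=world_size,
    )
    torch.cuda.set_device(local_rank)
    return rank, local_rank, world_size
```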
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,506 | 93,760 |
inspect.signature.bind is not supported
|
triaged, oncall: pt2, module: dynamo
|
This is used in `elementwise_type_promotion_wrapper` in PrimTorch, which means I cannot run nopython mode through the PrimTorch refs.
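For context, a hedged sketch of the pattern involved (a hypothetical wrapper, not the actual `elementwise_type_promotion_wrapper` code) showing the `inspect.signature(...).bind(...)` call dynamo would have to trace through:
```python
import inspect

def promote_args(fn):
    sig = inspect.signature(fn)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)  # the unsupported call
        bound.apply_defaults()
        # a real wrapper would inspect bound.arguments to compute a promoted dtype here
        return fn(*bound.args, **bound.kwargs)

    return wrapper

@promote_args
def add(a, b, alpha=1):
    return a + alpha * b
```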
cc @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @mruberry maybe we just shouldn't use this API in PrimTorch
| 4 |
5,507 | 79,053 |
[MetaIssue] Propagating SymInts through Autograd
|
triaged, lazy
|
* a scalar grad check relies on numel
* support implicit reductions
* sum_to
* is_expandable_to
* add SymInt overload to zeros
* codegen to use sym_sizes for ops w/ symint overloads in derivative formulas
* InputMetadata to use SymInt
| 1 |
5,508 | 79,052 |
Mirror and implement `SymbolicIntNode` API for `SymInt` so we can trace in C++
|
triaged, lazy
| null | 0 |
5,509 | 79,050 |
[MetaIssue] Investigate if we should be reusing primtorch formulas for `is_dynamic`
|
triaged, lazy
|
1. Investigate how hard it would be to integrate https://github.com/pytorch/pytorch/blob/1c1c68b1e31351030843c773e73fab736cbfaf1b/torch/_meta_registrations.py formulas to compute `is_dynamic` in LTC
2. Cross reference how many MaskRCNN ops that are not covered with existing SSA shape functions already have shape functions in https://github.com/pytorch/pytorch/blob/1c1c68b1e31351030843c773e73fab736cbfaf1b/torch/_meta_registrations.py and decomps in https://github.com/pytorch/pytorch/blob/master/torch/_decomp/decompositions.py
3. Sync with @ezyang about his timeline; maybe it would be possible to prioritize some ops for MaskRCNN?
4. Benchmark JIT SSA vs meta shape propagations.
5. Communicate to Google if we decide to transition to meta functions
6. If we decide to transition to meta functions: E2E PoC. Hopefully this will build enough infrastructure.
7. If we decide to transition to meta functions: Migration plan
This is based on @Gamrix 's doc: https://docs.google.com/document/d/1DhNxC9mtQdSXOktlSIoAKUCgN95Dx998FHcoPhJdJU8/edit#heading=h.l6gc8x2z7bv5
| 1 |
5,510 | 79,049 |
DDP Freezes w/ No Output for PyTorch Geometric GNN Multi-GPU Node Classification
|
oncall: distributed, triaged, module: ddp
|
### 🐛 Describe the bug
For some mysterious reason, `self.reducer._rebuild_buckets()` from within the DDP forward call hangs on the second iteration of calling the DDP module's forward pass. It only happens for node property prediction with PyTorch Geometric; link prediction works with no issues. Originally the issue was found when predicting missing information in the Microsoft Academic Graph, but it has been isolated down to be reproducible with node classification on synthetic data. [Repro](https://github.com/puririshi98/debug_MAG_pyg/blob/main/synthetic_repro.py)
The lack of output when it freezes makes debugging impossible for @rusty1s as well.
Hacky fix:
```
try:
bool2 = self.reducer._rebuild_buckets()
except:
bool2 = False
if torch.is_grad_enabled() and bool2:
logger.info("Reducer buckets have been rebuilt in this iteration.")
self._has_rebuilt_buckets = True
```
as opposed to:
```
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
logger.info("Reducer buckets have been rebuilt in this iteration.")
self._has_rebuilt_buckets = True
```
Not sure of the cause or what the correct long-term solution is.
### Versions
latest pytorch and pytorch-geometric installs should reproduce the issue
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 10 |
5,511 | 79,039 |
Add Autograd Support for Nested Tensor
|
triaged, module: nestedtensor
|
## Summary
This is the base issue with sub issues stored and tracked here.
Currently NestedTensors do not support autograd. The goal of this issue is to change that. There are a few parts to making this work and the sub components will be tracked in related issues referenced below.
#### Linked Issues:
- #79040
- #79447
- #79044
- #83773
Nice to have but there is a work around:
- #79046
- #79048
cc @cpuhrsch
| 0 |
5,512 | 79,032 |
Add a test that shows that lazy_ir reuse breaks SizeNodes
|
triaged, lazy
|
I suspect Bin's lazy_ir reuse breaks `SizeNode`, as these eventually point to DeviceData leaves and are updated after each mark_step, whereas `SizeNode`s persist across multiple mark_steps.
| 2 |
5,513 | 79,031 |
Implement SymbolicIntNode interface for lazy (i.e. lazy::SymbolicIntNode)
|
triaged, lazy
|
These need to be implemented for lazy/xla tensors:
https://github.com/pytorch/pytorch/blob/eca474f93cfaa19786d665655378411edc370bbe/c10/core/SymbolicIntNode.h#L17-L63
by constructing IR nodes similar to https://github.com/pytorch/pytorch/blob/master/torch/csrc/lazy/core/tensor_impl.cpp#L91-L92
The IR is defined in https://github.com/pytorch/pytorch/blob/master/torch/csrc/lazy/ts_backend/dynamic_ir.h
| 0 |
5,514 | 79,030 |
Devirtualize `sym_sizes`. It still has to work for python tensor subclasses and LTC/Xla
|
triaged, lazy
|
The issue is as follows:
1. THPVariable_size now returns `sym_sizes`
2. Python tensors set `CustomSizes`
1. Python tensors needs to run the default implementation of `sym_sizes`
2. `CustomSizes` makes us use `sym_sizes_custom` which throws
3. LTC sets `CustomSizes`
1. LTC needs to *reimplement* `sym_sizes`
4. Nested Tensors sets `CustomSizes`
1. Nested Tensors needs to *reimplement* `sym_sizes`
I hacked around the issue by removing a check for the size policy in `sym_sizes`, making it virtual, and making it always call the default implementation. This inhibits compiler optimizations.
| 1 |
5,515 | 79,021 |
Building PyTorch from Source with BUILD_LAZY_TS_BACKEND_ON
|
module: build, triaged
|
### 🐛 Describe the bug
BUILD_LAZY_TS_BACKEND ON
```
FAILED: bin/test_lazy
: && /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Xpreprocessor -fopenmp -I/usr/local/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-range-loop-analysis -Wno-pass-failed -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -DUSE_MPS -fno-objc-arc -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=10.9 -Wl,-search_paths_first -Wl,-headerpad_max_install_names -rdynamic test_lazy/CMakeFiles/test_lazy.dir/__/common/main.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_backend_device.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_cache.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_ir.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_ir_util.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_misc.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_permutation_util.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_shape.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_symbolic_shape.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_trie_cache.cpp.o test_lazy/CMakeFiles/test_lazy.dir/test_util.cpp.o -o bin/test_lazy -Wl,-rpath,/Users/davidlaxer/pytorch/build/lib -Wl,-rpath,/Users/davidlaxer/anaconda3/envs/AI-feynman/lib lib/libtorch.dylib lib/libgtest.a lib/libtorch_cpu.dylib /Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib lib/libc10.dylib /Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libmkl_intel_lp64.dylib /Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libmkl_intel_thread.dylib /Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libmkl_core.dylib /Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libiomp5.dylib /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk/usr/lib/libpthread.tbd /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk/usr/lib/libm.tbd && :
ld: warning: dylib (/Users/davidlaxer/protobuf/src/.libs/libprotobuf.dylib) was built for newer macOS version (12.0) than being linked (10.9)
ld: warning: dylib (/Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libmkl_intel_lp64.dylib) was built for newer macOS version (10.14) than being linked (10.9)
ld: warning: dylib (/Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libmkl_intel_thread.dylib) was built for newer macOS version (10.14) than being linked (10.9)
ld: warning: dylib (/Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libmkl_core.dylib) was built for newer macOS version (10.14) than being linked (10.9)
ld: warning: dylib (/Users/davidlaxer/anaconda3/envs/AI-feynman/lib/libiomp5.dylib) was built for newer macOS version (10.11) than being linked (10.9)
Undefined symbols for architecture x86_64:
"torch::lazy::GetTSBackendImpl()", referenced from:
__GLOBAL__sub_I_test_symbolic_shape.cpp in test_symbolic_shape.cpp.o
"torch::lazy::TsNode::TsNode(torch::lazy::OpKind, torch::lazy::Shape, unsigned long, torch::lazy::hash_t)", referenced from:
std::__1::shared_ptr<torch::lazy::Node> torch::lazy::MakeNode<torch::lazy::TsNode, torch::lazy::OpKind, torch::lazy::Shape, int, torch::lazy::hash_t const&>(torch::lazy::OpKind&&, torch::lazy::Shape&&, int&&, torch::lazy::hash_t const&) in test_ir.cpp.o
"torch::lazy::TsNode::TsNode(torch::lazy::OpKind, torch::lazy::Shape, unsigned long, torch::lazy::hash_t)", referenced from:
torch::lazy::CacheNodeWithShape::CacheNodeWithShape(torch::lazy::Shape const&) in test_cache.cpp.o
"torch::lazy::SizeAdd::SizeAdd(torch::lazy::Value, torch::lazy::Value)", referenced from:
std::__1::shared_ptr<torch::lazy::Node> torch::lazy::MakeNode<torch::lazy::SizeAdd, torch::lazy::Value, torch::lazy::Value>(torch::lazy::Value&&, torch::lazy::Value&&) in test_ir.cpp.o
"torch::lazy::SizeMul::SizeMul(torch::lazy::Value, torch::lazy::Value)", referenced from:
std::__1::shared_ptr<torch::lazy::Node> torch::lazy::MakeNode<torch::lazy::SizeMul, torch::lazy::Value, torch::lazy::Value>(torch::lazy::Value&&, torch::lazy::Value&&) in test_ir.cpp.o
"torch::lazy::SizeNode::SizeNode(torch::lazy::Value, unsigned long)", referenced from:
std::__1::shared_ptr<torch::lazy::Node> torch::lazy::MakeNode<torch::lazy::SizeNode, torch::lazy::Value, int>(torch::lazy::Value&&, int&&) in test_ir.cpp.o
"torch::lazy::TsNode::hash() const", referenced from:
vtable for torch::lazy::CacheNodeWithShape in test_cache.cpp.o
"torch::lazy::TsNode::Lower(std::__1::shared_ptr<torch::jit::GraphFunction>, torch::lazy::TSLoweringContext*) const", referenced from:
vtable for torch::lazy::CacheNodeWithShape in test_cache.cpp.o
"torch::lazy::TsNode::shapeHash() const", referenced from:
vtable for torch::lazy::CacheNodeWithShape in test_cache.cpp.o
"typeinfo for torch::lazy::TsNode", referenced from:
typeinfo for torch::lazy::CacheNodeWithShape in test_cache.cpp.o
torch::lazy::IrTest_TsNodeTest_Test::TestBody() in test_ir.cpp.o
"typeinfo for torch::lazy::SizeAdd", referenced from:
torch::lazy::IrTest_DimensionNodeTest_Test::TestBody() in test_ir.cpp.o
"typeinfo for torch::lazy::SizeMul", referenced from:
torch::lazy::IrTest_DimensionNodeTest_Test::TestBody() in test_ir.cpp.o
"typeinfo for torch::lazy::SizeNode", referenced from:
torch::lazy::IrTest_DimensionNodeTest_Test::TestBody() in test_ir.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
### Versions
```
% python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.22.4
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:16:26) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] numpydoc==1.2
[pip3] pytorch-lightning==1.6.3
[pip3] pytorch-transformers==1.1.0
[pip3] torch==1.13.0a0+gitcbdb694
[pip3] torchmetrics==0.8.2
[pip3] torchtext==0.12.0
[pip3] torchvision==0.13.0.dev20220517
[conda] blas 1.0 mkl anaconda
[conda] libblas 3.9.0 12_osx64_mkl conda-forge
[conda] libcblas 3.9.0 12_osx64_mkl conda-forge
[conda] liblapack 3.9.0 12_osx64_mkl conda-forge
[conda] liblapacke 3.9.0 12_osx64_mkl conda-forge
[conda] mkl 2021.4.0 hecd8cb5_637 anaconda
[conda] mkl-include 2022.0.0 hecd8cb5_105 anaconda
[conda] mkl-service 2.4.0 py38h9ed2024_0 anaconda
[conda] mkl_fft 1.3.1 py38h4ab4a9b_0 anaconda
[conda] mkl_random 1.2.2 py38hb2f4e1b_0 anaconda
[conda] numpy 1.22.3 pypi_0 pypi
[conda] numpy-base 1.21.5 py38h3b1a694_2 anaconda
[conda] numpydoc 1.2 pyhd3eb1b0_0 anaconda
[conda] pytorch 1.11.0 py3.8_0 pytorch
[conda] pytorch-lightning 1.6.3 pypi_0 pypi
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] torch 1.12.0.dev20220517 pypi_0 pypi
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchtext 0.12.0 py38 pytorch
[conda] torchvision 0.13.0.dev20220517 pypi_0 pypi
```
cc @malfet @seemethere
| 0 |
5,516 | 79,020 |
When setting sizes and strides on a tensor subclass in `THPVariable_make_wrapper_subclass`, also make offset symbolic
|
triaged, lazy, module: lazy
|
More discussion here.
https://github.com/pytorch/pytorch/blob/eca474f93cfaa19786d665655378411edc370bbe/torch/csrc/autograd/python_variable.cpp#L673-L679
| 1 |
5,517 | 79,018 |
Random functions should infer device from user-specified Generator
|
triaged, enhancement, module: random
|
### 🚀 The feature, motivation and pitch
As of PyTorch 1.11.0, PyTorch's functions for drawing random numbers will not infer the appropriate device from a user-provided RNG:
```python
gen = tr.Generator(device="cuda:0")
tr.rand(1, generator=gen) # RuntimeError: Expected a 'cpu' device type for generator but found 'cuda'
tr.rand(1, generator=gen, device=gen.device) # OK
```
I recommend that functions like `tr.rand` infer their device from the RNG when called with `device=None`:
```python
# pitch: these should all be OK
gen = tr.Generator(device="cuda:0")
tr.rand(1, generator=gen)
tr.rand(1, generator=gen, device=None)
tr.rand(1, generator=gen, device=gen.device)
```
### Alternatives
_No response_
### Additional context
I have found that the current behavior makes it all too easy to write buggy code, since writing `tr.rand(1, generator=gen)` "feels" like it should always work.
Furthermore, it is challenging to catch this potential bug in tests, unless you already anticipate this exact issue (it is not very likely that people would think to test cases where the generator is placed on various devices).
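Until/unless such inference is added, a hedged workaround sketch (the helper below is hypothetical, not a PyTorch API) that always forwards the generator's device:
```python
import torch as tr

def rand_with_gen(*size, generator: tr.Generator, **kwargs):
    # forward the generator's device so CPU and CUDA generators behave the same
    return tr.rand(*size, generator=generator, device=generator.device, **kwargs)

gen = tr.Generator(device="cpu")  # or "cuda:0" when available
x = rand_with_gen(3, generator=gen)
```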
cc @pbelevich
| 0 |
5,518 | 79,016 |
Corner cases of ShardedTensor checkpoint when using TorchRec
|
oncall: distributed, module: checkpoint, triaged, sharded_tensor, release notes: distributed (sharded)
|
### 🐛 Describe the bug
Our team is trying torchrec in our business. It seems that torchrec does not have its own model save/load API that supports resharding, e.g. training with multiple GPUs while doing inference with one. I've tried to use the recently introduced distributed checkpoint for ShardedTensor (https://github.com/pytorch/pytorch/pull/76123) following the suggestion here: https://github.com/pytorch/torchrec/issues/346. It's really nice to have a unified way to store tensors and sharded tensors across machines, but I encountered some corner cases that are not working at the moment, for example:
1. In torchrec, some embeddings (sharded tensors) only live on part of the GPUs, but we only store the metadata of rank 0 now, which triggers an error when loading.
2. Some small embeddings are planned as data-parallel when using multiple cards, and are stored as a Tensor for each rank, but they are planned as row-wise when using 1 GPU and stored as a ShardedTensor, which causes a type mismatch when loading.
3. We hope not to use the DMP (distributed model parallel) wrapper when doing inference so that JIT tracing will work. However, then all tensors in the model will be plain Tensors. Users of FSDP may also encounter this problem.
The first bug here can be solved separately and I will try to make a PR for it :) However, I think the last two are related to design choices in pytorch and torchrec, for instance: shall we allow loading a sharded tensor into a single tensor if the total size matches, and shall all tensors in torchrec be labeled as data-parallel when training with only one GPU? I wonder if you could share some of your ideas on these problems. It would be great if all of these have been solved in the 'snapshot API' mentioned here: https://github.com/pytorch/torchrec/issues/346#issuecomment-1145203954. And if that's the case, could you share some of its progress?
Our team loves torchrec, as it brings us the great flexibility of pytorch and the speed of its GPU-native design! And of course we love pytorch! We therefore hope to be able to use torchrec in practice soon :) I'd love to help if there is anything I can do~
Thank you for your time on this long issue; looking forward to your ideas!
gently ping @kumpera @dstaay-fb @colin2328
### Versions
nightly
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 6 |
5,519 | 79,014 |
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
|
module: crash, module: cudnn, module: cuda, triaged, module: ddp
|
### 🐛 Describe the bug
I ran train.py in the YOLOX project and got the error message in the title.
When I run the script with `python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 0 -b 8 --fp16 -o -c yolox_s.pth`, I hit the problem below.
### To Reproduce
My sample code is at this link: https://github.com/Megvii-BaseDetection/YOLOX/blob/main/tools/train.py
The error message is shown below:
### Error message:
```
2022-06-06 16:24:43 | ERROR | yolox.core.launch:98 - An error has been caught in function 'launch', process 'MainProcess' (7800), thread 'MainThread' (7924):
Traceback (most recent call last):
File "tools\train.py", line 140, in
args=(exp, args),
β β Namespace(batch_size=4, cache=False, ckpt='yolox_s.pth', devices=0, dist_backend='nccl', dist_url=None, exp_file='exps/exampl...
β βββββββββββββββββββββ€ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ...
File "d:\e\spore\yolox\yolox\core\launch.py", line 98, in launch
main_func(*args)
β β (βββββββββββββββββββββ€βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ...
β <function main at 0x0000027189606378>
File "tools\train.py", line 117, in main
trainer.train()
β β <function Trainer.train at 0x0000027189A686A8>
β <yolox.core.trainer.Trainer object at 0x0000027189A74828>
File "d:\e\spore\yolox\yolox\core\trainer.py", line 76, in train
self.train_in_epoch()
β β <function Trainer.train_in_epoch at 0x0000027189A68D90>
β <yolox.core.trainer.Trainer object at 0x0000027189A74828>
File "d:\e\spore\yolox\yolox\core\trainer.py", line 85, in train_in_epoch
self.train_in_iter()
β β <function Trainer.train_in_iter at 0x0000027189A68E18>
β <yolox.core.trainer.Trainer object at 0x0000027189A74828>
File "d:\e\spore\yolox\yolox\core\trainer.py", line 91, in train_in_iter
self.train_one_iter()
β β <function Trainer.train_one_iter at 0x0000027189A68EA0>
β <yolox.core.trainer.Trainer object at 0x0000027189A74828>
File "d:\e\spore\yolox\yolox\core\trainer.py", line 110, in train_one_iter
self.scaler.scale(loss).backward()
β β β β tensor(10.1536, device='cuda:0', grad_fn=)
β β β <function GradScaler.scale at 0x00000271FE5FED90>
β β <torch.cuda.amp.grad_scaler.GradScaler object at 0x0000027189A74860>
β <yolox.core.trainer.Trainer object at 0x0000027189A74828>
File "D:\F\Anaconda3\envs\yolox\lib\site-packages\torch_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
β β β β β β β β None
β β β β β β β False
β β β β β β None
β β β β β None
β β β β tensor([665425.], device='cuda:0', grad_fn=)
β β β <function backward at 0x00000271FEACE1E0>
β β <module 'torch.autograd' from 'D:\F\Anaconda3\envs\yolox\lib\site-packages\torch\autograd\init.py'>
β <module 'torch' from 'D:\F\Anaconda3\envs\yolox\lib\site-packages\torch\init.py'>
File "D:\F\Anaconda3\envs\yolox\lib\site-packages\torch\autograd_init_.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([4, 128, 20, 20], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(128, 1, kernel_size=[1, 1], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
ConvolutionParams
data_type = CUDNN_DATA_HALF
padding = [0, 0, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 00000274B9F17FB0
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 4, 128, 20, 20,
strideA = 51200, 400, 20, 1,
output: TensorDescriptor 00000274B9F16F80
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 4, 1, 20, 20,
strideA = 400, 400, 20, 1,
weight: FilterDescriptor 000002747033C530
type = CUDNN_DATA_HALF
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 1, 128, 1, 1,
Pointer addresses:
input: 00000009550AC000
output: 0000000716DFB000
weight: 00000007167FFA00
```
### My Action:
I have already run the code snippet below, which was provided by the error message. It runs successfully with no error.
```python
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([4, 128, 20, 20], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(128, 1, kernel_size=[1, 1], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
```
Looking for your reply.
Thanks a lot.
### Versions
Collecting environment information...
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.20.3
Libc version: N/A
Python version: 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 11.1.74
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 472.39
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.10.0
[pip3] torchaudio==0.10.0
[pip3] torchfile==0.1.0
[pip3] torchinfo==1.6.3
[pip3] torchnet==0.0.4
[pip3] torchvision==0.11.1
[conda] blas 1.0 mkl https://repo.anaconda.com/pkgs/main
[conda] cudatoolkit 11.3.1 h59b6b97_2 https://repo.anaconda.com/pkgs/main
[conda] mkl 2021.4.0 haa95532_640 https://repo.anaconda.com/pkgs/main
[conda] mkl-service 2.4.0 py37h2bbff1b_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_fft 1.3.1 py37h277e83a_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_random 1.2.2 py37hf11a4ad_0 https://repo.anaconda.com/pkgs/main
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch 1.10.0 py3.7_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.0 py37_cu113 pytorch
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchinfo 1.6.3 pypi_0 pypi
[conda] torchnet 0.0.4 pypi_0 pypi
[conda] torchvision 0.11.1 py37_cu113 pytorch
(yolo)
yolo is my virtual environment.
**My computer's environment is:**
Collecting environment information...
PyTorch version: 1.7.1
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.20.3
Libc version: N/A
Python version: 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 11.1.74
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 472.39
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] numpy==1.16.5
[pip3] numpydoc==0.9.1
[pip3] torch==1.7.1
[conda] blas 1.0 mkl https://repo.anaconda.com/pkgs/main
[conda] cudatoolkit 11.0.221 h74a9793_0 https://repo.anaconda.com/pkgs/main
[conda] mkl 2019.4 245 https://repo.anaconda.com/pkgs/main
[conda] mkl-service 2.3.0 py37hb782905_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_fft 1.0.14 py37h14836fe_0 https://repo.anaconda.com/pkgs/main
[conda] mkl_random 1.1.0 py37h675688f_0 https://repo.anaconda.com/pkgs/main
[conda] numpy 1.16.5 py37h19fb1c0_0 https://repo.anaconda.com/pkgs/main
[conda] numpy-base 1.16.5 py37hc3f5095_0 https://repo.anaconda.com/pkgs/main
[conda] numpydoc 0.9.1 py_0 https://repo.anaconda.com/pkgs/main
[conda] pytorch 1.7.1 py3.7_cuda110_cudnn8_0 pytorch
I'm not sure whether the virtual environment's CUDA version and the computer's CUDA version affect each other.
cc @csarofeen @ptrblck @xwang233 @ngimel
| 0 |
5,520 | 79,013 |
Multi30k can't be downloaded the destination domain can't be reached
|
triaged
|
### 🐛 Describe the bug
Could not get the file at http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz.
This website can't be reached.
Is there a substitute?
### Versions
torch 1.11.0
torchaudio 0.11.0
torchdata 0.3.0
torchtext 0.12.0
torchvision 0.12.0
| 1 |
5,521 | 79,008 |
torchscript jit trace support custom op without specific csrc and .so
|
oncall: jit
|
### 🚀 The feature, motivation and pitch
Instead of writing a custom op the csrc way, sometimes users just want to wrap a block of lines into a single function and use it as a single op, rather than tracing through it line by line.
For example, I have a block of calculation like this:
```
a = torch.sum(a*b, dim=1)
c = torch.einsum("ij,jk->ik", a, b)
```
I wanna using a single func as a Op like:
```
def my_dead_awesome_op(a, b):
    a = torch.sum(a*b, dim=1)
    c = torch.einsum("ij,jk->ik", a, b)
    return c
```
and I want to use JIT to trace this `my_dead_awesome_op` out as a single op. Is there a way to do so?
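One thing worth trying (a hedged sketch of the documented "mixing scripting and tracing" pattern, with the body simplified so it actually runs): script the helper so that the tracer records a call to it rather than tracing through it line by line.
```python
import torch

@torch.jit.script
def my_dead_awesome_op(a, b):
    # simplified, shape-consistent body just for this sketch
    s = torch.sum(a * b, dim=1)
    return torch.matmul(a, b) + s.unsqueeze(1)

class Wrapper(torch.nn.Module):
    def forward(self, a, b):
        return my_dead_awesome_op(a, b)

traced = torch.jit.trace(Wrapper(), (torch.randn(4, 4), torch.randn(4, 4)))
print(traced.graph)  # the scripted helper should show up as a call rather than being traced op-by-op
```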
### Alternatives
_No response_
### Additional context
_No response_
| 0 |
5,522 | 79,004 |
Doc on index of CPU Device seems wrong
|
module: cpp, triaged
|
### 🐛 Describe the bug
The comment on this line says "When the device type is CPU, the device index must be zero":
https://github.com/pytorch/pytorch/blob/bf4b6d0dce/c10/core/Device.h#L29
However, when I run PyTorch, I find that a CPU tensor's device has index = -1. Maybe the comment is wrong.
### Versions
```
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 13.0.0
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.10.0-13-amd64-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.4.120
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.11.0
[conda] Could not collect
```
cc @jbschlosser
| 0 |
5,523 | 79,003 |
Libtorch C++ mobile build linking error
|
module: build, oncall: mobile
|
### 🐛 Describe the bug
I am implementing a model training program on a mobile platform using the Libtorch C++ API. I ran `script/build_mobile.sh` with `CMAKE_ARGS+=("-DBUILD_SHARED_LIBS=ON")` to build shared libraries. In order to support training features, I also changed the root CMakeLists.txt as follows:
```
option(BUILD_MOBILE_AUTOGRAD "Build autograd function in mobile build (in development)" ON)
set(NO_API OFF)
```
After successfully building Libtorch, the libraries and headers are generated under `pytorch/build_mobile/install`, and I use the following CMakeLists.txt to build my own source code:
```
cmake_minimum_required(VERSION 3.1)
project(pytorch_android)
set(CMAKE_CXX_STANDARD 14)
add_executable(pytorch_android main.cpp)
target_include_directories(pytorch_android PRIVATE
${CMAKE_SOURCE_DIR}/../pytorch/build_mobile/install/include
${CMAKE_SOURCE_DIR}/../pytorch/build_mobile/install/include/torch/csrc/api/include
${CMAKE_SOURCE_DIR}/../pytorch/aten/src
)
target_link_libraries(pytorch_android PRIVATE
${CMAKE_SOURCE_DIR}/../pytorch/build_mobile/install/lib/libc10.so
${CMAKE_SOURCE_DIR}/../pytorch/build_mobile/install/lib/libtorch_global_deps.so
${CMAKE_SOURCE_DIR}/../pytorch/build_mobile/install/lib/libtorch.so
${CMAKE_SOURCE_DIR}/../pytorch/build_mobile/install/lib/libtorch_cpu.so)
```
My source code simply includes the header file:
```
#include <iostream>
#include <torch/torch.h>
int main() {
return 0;
}
```
And the following error occurred after running cmake:
```
-- The C compiler identification is GNU 11.1.0
-- The CXX compiler identification is GNU 11.1.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ultraz/Project/pytorch_android/build_x86_linux
Scanning dependencies of target pytorch_android
[ 50%] Building CXX object CMakeFiles/pytorch_android.dir/main.cpp.o
[100%] Linking CXX executable pytorch_android
/usr/bin/ld: ../../pytorch/build_mobile/install/lib/libtorch_cpu.so: undefined reference to `torch::jit::ExportModule(torch::jit::Module const&, std::function<unsigned long (void const*, unsigned long)> const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, bool, bool, bool)'
/usr/bin/ld: ../../pytorch/build_mobile/install/lib/libtorch_cpu.so: undefined reference to `torch::jit::ExportModule(torch::jit::Module const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, bool, bool, bool)'
/usr/bin/ld: ../../pytorch/build_mobile/install/lib/libtorch_cpu.so: undefined reference to `torch::jit::ExportModule(torch::jit::Module const&, std::ostream&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, bool, bool, bool)'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/pytorch_android.dir/build.make:88: pytorch_android] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/pytorch_android.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
```
Do you know how to fix this error?
Is there any issue with my pytorch/CMakeLists.txt options? If so, how can I enable training features (torch::nn, torch::optim, torch::data) on a mobile platform?
Another potential problem could be my own CMakeLists.txt. Is this the right way to link the Libtorch libraries?
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.1.0-1ubuntu1~20.04) 11.1.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.10
Python version: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid
Is CUDA available: N/A
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: NVIDIA RTX A4000
GPU 1: NVIDIA RTX A4000
GPU 2: NVIDIA RTX A4000
GPU 3: NVIDIA RTX A4000
GPU 4: NVIDIA RTX A4000
GPU 5: NVIDIA RTX A4000
GPU 6: NVIDIA RTX A4000
GPU 7: NVIDIA RTX A4000
Nvidia driver version: 470.63.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py37h7f8727e_0
[conda] mkl_fft 1.3.1 py37hd3c417c_0
[conda] mkl_random 1.2.2 py37h51133e4_0
[conda] numpy 1.20.1 pypi_0 pypi
[conda] numpy-base 1.21.2 py37h79a1101_0
[conda] pytorch 1.11.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py37_cu113 pytorch
[conda] torchvision 0.12.0 py37_cu113 pytorch
```
cc @malfet @seemethere
| 3 |
5,524 | 93,759 |
TorchInductor failing inference models tracker
|
triaged, oncall: pt2, module: inductor
|
This is a meta-issue to track multiple issues with TorchInductor inference models in TorchBench.
Note that all the models tracked by pytorch/torchdynamo#331 also don't work with TorchInductor.
Reproduce with:
`./torchbench.py -dcuda --inductor --float16 --no-skip -k <name>`
Updated list as of 8/26/2022:
- [x] pytorch/pytorch#93788
- Accuracy Error:
- [x] pytorch/torchdynamo#622
- [x] pytorch/torchdynamo#838
- [x] pytorch/torchdynamo#634
- [x] pytorch/torchdynamo#630
- [x] pytorch/torchdynamo#1039
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @mlazos @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
| 2 |
5,525 | 78,987 |
DataLoader leaking resources?
|
module: dataloader, triaged
|
### 🐛 Describe the bug
There seems to be some sort of resource leak happening when using `DataLoader` with `num_workers` > 0. I discovered this with LMDB; I'm not sure whether it applies to other similar resources.
Here is a minimal repro:
```python
import lmdb
import torch
from torch.utils.data import DataLoader
from itertools import count
from pathlib import Path
path = Path("~/data/mp_repro").expanduser().absolute()
class Dataset():
def __init__(self):
self._cache = None
def __getitem__(self, idx):
if self._cache is None:
self._cache = lmdb.open(
path.as_posix(),
readonly=False,
# lock=False,
readahead=False,
meminit=False,
map_size=int(1e10), # 10gb
)
with self._cache.begin() as tx:
results = tx.get(f"{idx}".encode('utf-8'))
return results
def __len__(self):
return 1000
cache = lmdb.open(
path.as_posix(),
readonly=False,
# lock=False,
readahead=False,
meminit=False,
map_size=int(1e10), # 10gb
)
for idx in range(1000):
encoded = f"{idx}".encode('utf-8')
with cache.begin(write=True) as tx:
tx.put(encoded, encoded)
del cache
if __name__=="__main__":
loader = DataLoader(
Dataset(),
batch_size=100,
num_workers = 8
)
print(f"Running with 8 workers")
for i in count():
print(f"Running epoch {i}")
list(loader)
```
Running this I get
```
Running with 8 workers
Running epoch 0
Running epoch 1
Running epoch 2
Running epoch 3
Running epoch 4
Running epoch 5
Running epoch 6
Running epoch 7
Running epoch 8
Running epoch 9
Running epoch 10
Running epoch 11
Running epoch 12
Running epoch 13
Running epoch 14
Running epoch 15
Traceback (most recent call last):
File "mp_repro_torch.py", line 66, in <module>
list(loader)
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
data = self._next_data()
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
return self._process_data(data)
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
data.reraise()
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/_utils.py", line 457, in reraise
raise exception
lmdb.ReadersFullError: Caught ReadersFullError in DataLoader worker process 2.
Original Traceback (most recent call last):
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/oliver/miniconda3/envs/nvflare/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "mp_repro_torch.py", line 19, in __getitem__
self._cache = lmdb.open(
lmdb.ReadersFullError: mdb_txn_begin: MDB_READERS_FULL: Environment maxreaders limit reached
```
At first I believed this to be an LMDB issue, but I was unable to reproduce it without pytorch. Here is the pytorch-less code that does **not** exhibit this behavior:
```python
import lmdb
import torch
from itertools import count, repeat
from pathlib import Path
import multiprocessing
path = Path("~/data/mp_repro").expanduser().absolute()
class Dataset():
def __init__(self):
self._cache = None
def __getitem__(self, idx):
if self._cache is None:
self._cache = lmdb.open(
path.as_posix(),
readonly=False,
# lock=False,
readahead=False,
meminit=False,
map_size=int(1e10), # 10gb
)
with self._cache.begin() as tx:
results = tx.get(f"{idx}".encode('utf-8'))
return results
def __len__(self):
return 1000
cache = lmdb.open(
path.as_posix(),
readonly=False,
# lock=False,
readahead=False,
meminit=False,
map_size=int(1e10), # 10gb
)
for idx in range(1000):
encoded = f"{idx}".encode('utf-8')
with cache.begin(write=True) as tx:
tx.put(encoded, encoded)
del cache
def batch(dataset, start, batch_size=100):
return [dataset[i] for i in range(start, start+batch_size)]
def batch_dataset(dataset, batch_size=100):
pool = multiprocessing.Pool(8)
for i in pool.starmap(batch, zip(repeat(dataset, 10), range(0, len(dataset), batch_size))):
pass
if __name__=="__main__":
dataset = Dataset()
print(f"Running with {torch.multiprocessing.cpu_count()} workers")
for i in count():
print(f"Running epoch {i}")
batch_dataset(dataset)
```
I let this run to >epoch 1000 with no issues
### Versions
```
[pip3] mypy==0.950
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.0
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.11.0+cpu
[pip3] torchaudio==0.11.0+cpu
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.12.0+cpu
[conda] numpy 1.20.0 pypi_0 pypi
[conda] pytorch-lightning 1.5.10 pypi_0 pypi
[conda] torch 1.11.0+cpu pypi_0 pypi
[conda] torchaudio 0.11.0+cpu pypi_0 pypi
[conda] torchmetrics 0.7.3 pypi_0 pypi
[conda] torchvision 0.12.0+cpu pypi_0 pypi
```
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 9 |
5,526 | 78,961 |
[forwardAD] torch.no_grad has no effect under forward_ad
|
triaged, module: forward ad
|
### 🐛 Describe the bug
```python
import torch
import torch.autograd.forward_ad as fwAD
primal = torch.ones(10, 10)
tangent = torch.ones(10, 10)
def fn(x):
return x ** 3
with fwAD.dual_level():
dual_input = fwAD.make_dual(primal, tangent)
with torch.no_grad():
dual_output = fn(dual_input)
jvp = fwAD.unpack_dual(dual_output).tangent
torch.testing.assert_close(jvp, torch.ones(10, 10)) # Error
```
cc: @soulitzer @zou3519
### Versions
master
| 1 |
5,527 | 78,954 |
Can we have Additive Attention?
|
triaged, oncall: transformer/mha
|
### 🚀 The feature, motivation and pitch
Like MultiheadAttention, which was proposed in the **Transformer** paper, can we similarly have the additive attention
proposed in **Neural Machine Translation by Jointly Learning to Align and Translate**?
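For reference, a minimal hedged sketch of additive (Bahdanau-style) attention (single head, no masking or dropout; the class name and signature are illustrative, not an existing torch API):
```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, q_dim: int, k_dim: int, hidden_dim: int):
        super().__init__()
        self.w_q = nn.Linear(q_dim, hidden_dim, bias=False)
        self.w_k = nn.Linear(k_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, query, keys, values):
        # query: (B, Tq, q_dim); keys: (B, Tk, k_dim); values: (B, Tk, v_dim)
        scores = self.v(torch.tanh(
            self.w_q(query).unsqueeze(2) + self.w_k(keys).unsqueeze(1)
        )).squeeze(-1)                           # (B, Tq, Tk)
        weights = torch.softmax(scores, dim=-1)  # attention over the keys
        return weights @ values, weights         # (B, Tq, v_dim), (B, Tq, Tk)

attn = AdditiveAttention(16, 16, 32)
out, w = attn(torch.randn(2, 5, 16), torch.randn(2, 7, 16), torch.randn(2, 7, 16))
```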
### Alternatives
`nn.MultiheadAttention`
### Additional context
_No response_
cc @jbschlosser @bhosmer @cpuhrsch
| 2 |
5,528 | 78,938 |
library libshm.dylib is missing
|
high priority, triage review, oncall: binaries, triaged, module: macos
|
I have basically the same problem, but the library libshm.dylib is missing.
```
dlopen(/usr/local/lib/python3.9/site-packages/torch/_C.cpython-39-darwin.so, 2): Library not loaded: @loader_path/libshm.dylib
Referenced from: /usr/local/lib/python3.9/site-packages/torch/lib/libtorch_python.dylib
Reason: image not found
```
All the answers I found to fix that say to [install libomp](https://github.com/pytorch/pytorch/issues/20030), which I reinstalled, but it didn't fix the issue, though for others [that seemed to work in previous years](https://discuss.pytorch.org/t/importerror-on-mac-with-pytorch-1-1/44305).
I can't find the libshm library in any lib folder on my machine. I'm wondering if it's included somewhere in libomp and how to point pytorch to it, or if it can be downloaded from somewhere else?
I have the current torch version 1.11.0 installed with Python 3.9 on macOS Big Sur.
I found the [libshm library](https://github.com/afeldman/libshm) required for shared-memory allocation on linux machines.
Does PyTorch require a bugfix to include this dylib? Or is there any other workaround? @malfet
_Originally posted by @jskye in https://github.com/pytorch/pytorch/issues/36941#issuecomment-1146030734_
cc @ezyang @gchanan @zou3519 @seemethere @malfet @albanD
| 8 |
5,529 | 78,929 |
Add type() support for mps backend
|
triaged, actionable, module: mps
|
As reported in https://discuss.pytorch.org/t/m1-macos-12-3-torchvision-ops-nms-error/152887/3?u=alband
Calling `Tensor.type()` on an mps Tensor returns `torch.mps.FloatTensor`, which the parser code does not handle (as it is special-cased for cpu/cuda).
We could update the parsing logic to handle mps at https://github.com/pytorch/pytorch/blob/129d9dbb1581ead52061f140cba200662373574c/torch/csrc/utils/tensor_types.cpp#L48
But isn't type() deprecated? Do we want to add more to it? cc @kulinseth @albanD @ezyang
| 1 |
5,530 | 78,924 |
If large enough tensor is being cloned, parallel dataloading hangs on M1 Mac
|
high priority, module: dataloader, triaged, module: macos, module: deadlock, module: arm
|
## Issue description
This is a really strange issue. I drilled it down to the smallest reproducible example possible.
The dataloader hangs if:
- num_workers > 0
- large enough batches are being produced from a dataloader (see code)
- a large enough tensor is cloned at some point during execution of the program (smaller tensors like (100, 100) work ok); interestingly it may be any tensor, even one not related to data loading (see code)
- the program is being run on M1 Mac (it works ok on x86 machines)
## Code example
```
import torch
import itertools
def _samples_to_XY(samples):
X = [x for x, _ in samples]
X = torch.tensor(X)
Y = torch.tensor([y for _, y in samples])
return X, Y
def _gen(tensor_size):
while True:
yield (list(range(tensor_size)), 1)
def _gen_batch(tensor_size, batch_size):
gen = _gen(tensor_size)
while True:
xys = list(itertools.islice(gen, batch_size))
if not xys:
break
yield _samples_to_XY(xys)
class TheDataset(torch.utils.data.IterableDataset):
def __init__(self, tensor_size, batch_size):
self.batch_size = batch_size
self.tensor_size = tensor_size
def __iter__(self):
return _gen_batch(self.tensor_size, self.batch_size)
def into_dataloader(self, num_workers):
mproc_config = {'multiprocessing_context': 'fork'}
mproc_config = mproc_config if num_workers > 0 else {}
return torch.utils.data.DataLoader(self,
num_workers=num_workers,
**mproc_config)
if __name__ == '__main__':
some_tensor = torch.rand(10000, 50)
# if this statement is commented, none of the code below hangs
some_tensor = some_tensor.clone()
# works ok
if next(iter(TheDataset(tensor_size=64, batch_size=512).into_dataloader(num_workers=1))):
print('ok')
# works ok
if next(iter(TheDataset(tensor_size=128, batch_size=256).into_dataloader(num_workers=1))):
print('ok')
# hangs
if next(iter(TheDataset(tensor_size=128, batch_size=512).into_dataloader(num_workers=1))):
print('ok')
```
Console output:
```
[W ParallelNative.cpp:229] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
ok
[W ParallelNative.cpp:229] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
ok
[W ParallelNative.cpp:229] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
```
Console output with clone statement commented out:
```
ok
ok
ok
```
## System Info
```
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.12 (main, Jun 1 2022, 06:34:44) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @SsnL @VitalyFedyunin @ejguan @NivekT @malfet @albanD @kulinseth
| 10 |
5,531 | 78,921 |
Do we really need sampler for IterableDataset?
|
module: performance, module: dataloader, triaged
|
### 🚀 The feature, motivation and pitch
I have noticed that `_InfiniteConstantSampler` may be unnecessary for `IterableDataset`. Removing `_InfiniteConstantSampler` and the related logic in `_IterableDatasetFetcher` could give a 27% speedup.
In detail, a map-style dataset uses `RandomSampler` for the shuffle operation, but `IterableDataset` is mutually exclusive with the shuffle operation. This means we do not need this sampler in the `IterableDataset` case. Otherwise, is there any other purpose of `_InfiniteConstantSampler` for `IterableDataset`?
**The original code:**
```python
class _IterableDatasetFetcher(_BaseDatasetFetcher):
def __init__(self, dataset, auto_collation, collate_fn, drop_last):
super(_IterableDatasetFetcher, self).__init__(dataset, auto_collation, collate_fn, drop_last)
self.dataset_iter = iter(dataset)
self.ended = False
def fetch(self, possibly_batched_index):
if self.ended:
raise StopIteration
if self.auto_collation:
data = []
try:
data = [next(self.dataset_iter) for _ in range(self.batch_size)]
except StopIteration:
self.ended = True
if len(data) == 0 or (self.drop_last and len(data) < self.batch_size):
raise StopIteration
else:
data = next(self.dataset_iter)
return self.collate_fn(data)
```
**New code:**
```python
class _IterableDatasetFetcher(_BaseDatasetFetcher):
def __init__(self, dataset, auto_collation, collate_fn, drop_last, batch_size):
super(_IterableDatasetFetcher, self).__init__(dataset, auto_collation, collate_fn, drop_last)
self.dataset_iter = iter(dataset)
self.batch_size = batch_size # I set 64 in the experiment
def fetch(self, possibly_batched_index):
if self.auto_collation:
data = [next(self.dataset_iter) for _ in range(self.batch_size)]
if len(data) == 0 or (self.drop_last and len(data) < self.batch_size):
raise StopIteration
else:
data = next(self.dataset_iter)
return self.collate_fn(data)
```
### Alternatives
_No response_
### Additional context
**My env:**
**os:** Centos
**cpu:** Intel(R) Xeon(R) CPU E5-2682 v4 @ 2.50GHz
**python:** 3.6.8
**PyTorch version:** 1.11
**Test result:**
I tested on 5,000,000 samples with a single-process dataloader.
Experiment Name | Duration
-- | --
Original code | 13.02s
New code | 9.63s (-27%)
**My Test Script:**
```python
# coding: utf-8
import gc
import time
import torch
from torch.utils.data import IterableDataset
from dataloader_utils import MyDataLoader as DataLoader
gc.collect()
torch.cuda.empty_cache()
class RandomIterableDataset(IterableDataset):
def __init__(self, num=10) -> None:
self.num = num
self.sample = {'id': '1',
'url': 'https://xxx.com'
}
def get_sample(self):
return self.sample
def __iter__(self):
return self
def __next__(self):
while self.num > 0:
self.num -= 1
return self.get_sample()
raise StopIteration
if __name__ == "__main__":
batch_size = 64
avg_times = []
for _ in range(5):
cnt = 0
start_time = time.time()
input_dataset = RandomIterableDataset(num=5000000)
input_dataloader = DataLoader(
dataset=input_dataset,
shuffle=False,
batch_size=batch_size,
drop_last=False,
num_workers=0,
)
for data in input_dataloader:
cnt += 1
if cnt % 1000 == 0:
pass
end_time = time.time()
duration = (end_time - start_time)
print(f'Finish task. Use {duration} secs with batch size {batch_size}')
avg_times.append(duration)
print(f'The average time spend is {round(sum(avg_times) / len(avg_times), 2)}s')
```
cc @VitalyFedyunin @ngimel @SsnL @ejguan @NivekT
| 1 |
5,532 | 78,920 |
Strange tracing result with torchscript
|
oncall: jit
|
### π Describe the bug
I have a model whose weights are loaded one by one from standalone bin files, so I am using nn.Parameter to load them.
But when I trace it to TorchScript, it gives me a weird result:

This is not expected, since these are not inputs; in ONNX, it works normally:

How should I make the TorchScript traced model more user friendly?
### Versions
1.11
| 0 |
5,533 | 78,917 |
LambdaLR changes the learning rate in an undesired way
|
triaged, module: LrScheduler
|
### π Describe the bug
I found that the **LambdaLR** scheduler changes the learning rate to 1e-6 rather than keeping my preset value when it wraps the optimizer, as the following example shows:
```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LambdaLR
model = nn.Sequential(
nn.Linear(1, 16,bias=True),
nn.ReLU(),
nn.Linear(16, 16, bias=True),
nn.ReLU()
)
lr = 0.001
lamda = 0.0001
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=lamda)
print(optimizer)
# Before we apply the LambdaLR, the lr is 0.001, just as what I set before.
#Adam (
#Parameter Group 0
# amsgrad: False
# betas: (0.9, 0.999)
# eps: 1e-08
# lr: 0.001
# maximize: False
# weight_decay: 0.0001
#)
update_func = lambda epoch: lr # as the simplest example, we don't change lr with epoch.
scheduler = LambdaLR(optimizer, lr_lambda=update_func)
print(optimizer)
# After wrapped by the scheduler, the lr changes to 1e-6, and the initial_lr stores the user preset lr.
#Adam (
#Parameter Group 0
# amsgrad: False
# betas: (0.9, 0.999)
# eps: 1e-08
# initial_lr: 0.001
# lr: 1e-06
# maximize: False
# weight_decay: 0.0001
#)
print(scheduler.get_lr())
print(scheduler.get_last_lr())
# Using scheduler function to print learning rate scheduled by the scheduler, it shows 1e-6 rather than 1e-3 as we set.
# UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
# warnings.warn("To get the last learning rate computed by the scheduler,
#[1e-06]
#[1e-06]
```
More importantly, I have tried to train my network, and I found it converged just as if lr=1e-6 had been set without the scheduler. So I wonder whether there is something wrong in the LambdaLR function.
### Versions
I found the issue on a Linux system with torch-1.10 and a GPU, and this simple test script was run on my Mac with the latest version of pytorch: torch-1.11.0. I have compared the source code of ``torch.optim.lr_scheduler.py`` on the current main branch with my local file, and they are basically the same. So I believe this problem may be an undiscovered bug.
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (x86_64)
GCC version: Could not collect
Clang version: 11.0.0 (clang-1100.0.33.16)
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.7.7 (default, Mar 26 2020, 10:32:53) [Clang 4.0.1 (tags/RELEASE_401/final)] (64-bit runtime)
Python platform: Darwin-21.5.0-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.11.0
[pip3] torch-cluster==1.5.9
[pip3] torch-geometric==2.0.3
[pip3] torch-scatter==2.0.9
[pip3] torch-sparse==0.6.12
[pip3] torch-spline-conv==1.2.1
[pip3] torch-summary==1.4.3
[pip3] torchfile==0.1.0
[pip3] torchvision==0.12.0
[pip3] torchviz==0.0.2
[conda] numpy 1.19.5 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torch-cluster 1.5.9 pypi_0 pypi
[conda] torch-geometric 2.0.3 pypi_0 pypi
[conda] torch-scatter 2.0.9 pypi_0 pypi
[conda] torch-sparse 0.6.12 pypi_0 pypi
[conda] torch-spline-conv 1.2.1 pypi_0 pypi
[conda] torch-summary 1.4.3 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
| 2 |
5,534 | 93,757 |
TorchInductor missing ops tracker
|
triaged, oncall: pt2
|
The following ops are using `ir.FallbackKernel` via `make_fallback()` in [lowering.py](https://github.com/pytorch/torchdynamo/blob/main/torchinductor/lowering.py#L894) and appear in benchmarks. We should rewrite them to use decomps or lowerings.
- [x] pytorch/pytorch#93650
- [ ] aten.grid_sampler_2d_backward (higher priority)
- [ ] aten.upsample_bilinear2d_backward (higher priority)
- [ ] aten._adaptive_avg_pool2d_backward
- [ ] aten.upsample_bicubic2d_backward
- [ ] aten._fused_moving_avg_obs_fq_helper
- [x] aten.upsample_nearest3d (needed for FAIR model)
- [ ] aten.avg_pool3d (needed for FAIR model)
- [x] aten.bucketize (needed for internal model) - not targeting for codegen, do a fallback
- [x] aten.prod (needed for research model) - not targeting for codegen, do a fallback
Might not be possible (in a performant way), but these currently use fallbacks:
- [ ] aten.convolution_backward (might need to hold off on this if perf doesn't match)
- [ ] aten._cudnn_rnn (might need to hold off on this if perf doesn't match) - not targeting for codegen, do a fallback
- [ ] aten._cudnn_rnn_backward (might need to hold off on this if perf doesn't match) - not targeting for codegen, do a fallback
- [ ] aten._embedding_bag (may have a template internally) (Attempted with https://github.com/pytorch/pytorch/pull/84235, but it's very hard to make it performant and inductor-friendly)
- [ ] pytorch/pytorch#93631 not targeting for codegen, do a fallback
- [ ] torchvision.roi_align (need to sort of decomps for domain libs)
Done:
- [x] aten._adaptive_avg_pool2d
- [x] aten.nll_loss_forward ([PR](https://github.com/pytorch/pytorch/pull/78491))
- [x] aten.grid_sampler_2d (could use the version [from here](https://github.com/pytorch/pytorch/issues/34704))
- [x] aten.upsample_bilinear2d https://github.com/pytorch/pytorch/pull/80964
- [x] aten.reflection_pad2d_backward (higher priority)
- [x] aten.native_batch_norm_backward (https://github.com/pytorch/pytorch/pull/81522)
- [x] aten.avg_pool2d_backward
- [x] aten.expand_as
- [x] aten.glu_backward (https://github.com/pytorch/pytorch/pull/78919)
- [x] aten.masked_fill_
- [x] aten.max_pool2d_with_indices_backward
- [x] aten.select_scatter
- [x] aten.upsample_nearest2d_backward
- [x] aten.baddbmm
- [x] aten.log1p
- [x] aten.bernoulli_
- [x] aten.conj_physical
- [x] pytorch/torchdynamo#885
- [x] aten.im2col_backward
- [x] aten.native_group_norm_backward (https://github.com/pytorch/pytorch/pull/84037)
- [x] aten.im2col (https://github.com/pytorch/pytorch/pull/84303)
- [x] aten.binary_cross_entropy_with_logits (needed for internal model)
- [x] aten.upsample_bicubic2d (#1349)
- [x] aten.unfold https://github.com/pytorch/pytorch/pull/85629
- [x] aten.unfold_backward https://github.com/pytorch/pytorch/pull/85629
- [x] aten.col2im https://github.com/pytorch/pytorch/pull/85541
- [x] aten.mse_loss_backward (higher priority)
- [x] aten.softplus (needed for research model)
- [x] aten.softplus_backward (needed for research model)
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh
| 50 |
5,535 | 78,892 |
torch.fx deepcopy does not copy attributes added to GraphModule or Nodes
|
triaged, module: fx
|
### π Describe the bug
Hi !
Calling deepcopy on GraphModules does not copy added attributes.
Here is a MWE :
```Python
#imports
import torch
import torch.nn
import torch.fx
import copy
#MWE
net=torch.nn.Linear(0,0)
gm=torch.fx.symbolic_trace(net)
gm.myattr="registered"
gm2=copy.deepcopy(gm)
print(gm2.myattr)
```
which returns `AttributeError: 'GraphModule' object has no attribute 'myattr'`, showing that attribute `myattr` has not been copied.
However, what led me to this is that I wanted to mark nodes for transformations, and copy the GraphModule before applying the different transformations, in order to be able to train transformed copies separately.
Here is an example with the nodes :
```Python
#imports
import torch
import torch.nn
import torch.fx
import copy
#MWE on nodes
net=torch.nn.Linear(0,0)
gm=torch.fx.symbolic_trace(net)
for node in gm.graph.nodes:
node.is_marked=True
gm2=copy.deepcopy(gm)
for node in gm2.graph.nodes:
print(node.is_marked)
```
which returns `AttributeError: 'Node' object has no attribute 'is_marked'`.
I think the same holds with attributes added to `gm.graph` etc.
### Versions
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Famille
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.10 (tags/v3.9.10:f2f3f53, Jan 17 2022, 15:14:21) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.18363-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 511.65
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.11.0+cu113
[pip3] torchaudio==0.11.0+cu113
[pip3] torchvision==0.12.0+cu113
[conda] Could not collect
cc @ezyang @SherlockNoMad
| 0 |
5,536 | 78,885 |
[distributed_test.py] Improve `test_barrier`
|
oncall: distributed, module: tests, better-engineering
|
### π The feature, motivation and pitch
Right now, `test_barrier` tests in `distributed_test.py` work by having rank 0 sleep for a certain amount of time, then run barrier on all ranks, and verify all ranks waited for at least this amount of time. Asserts based on timing / sleep of a certain process have historically been very flaky and hard to get right, so we should refactor this test.
One suggestion, based on the goal of the test being to ensure that no rank passes the barrier until all ranks have called into it (a sketch follows the list):
- Maintain a count starting with 0 on all ranks
- All ranks increment the count on each rank by 1
- Barrier()
- Validate each rank count == world_size
- If some rank passed barrier before others called into it, then the validation at the end could be incorrect.
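A minimal sketch of that idea, assuming the ranks share a c10d Store (for example the TCPStore used for process-group init); the store handle and key names here are illustrative, not the existing test code:
```python
import torch.distributed as dist

def run_barrier_count_test(rank: int, world_size: int, store) -> None:
    # `store` is a shared c10d Store (e.g. the dist.TCPStore used for init) -- illustrative.
    # Every rank increments a per-rank counter on *all* ranks before calling barrier.
    for r in range(world_size):
        store.add(f"barrier_count_{r}", 1)
    dist.barrier()
    # If barrier is correct, no rank reaches this point before all increments happened,
    # so every rank's own counter must equal world_size.
    assert int(store.get(f"barrier_count_{rank}")) == world_size
```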
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @mruberry
| 0 |
5,537 | 78,884 |
Abnormal GPU memory usage when using CUDA tensors with multiprocessing
|
module: multiprocessing, module: cuda, module: memory usage, triaged, module: jiterator
|
### π Describe the bug
Hi! I wanted to do some inference with a network in multiple processes, so I followed this and used `torch.multiprocessing` and `share_memory`, hoping all of the processes could share the same network and avoid copying it for each process, which may cause OOM on the GPU.
However, I found that `share_memory` didn't work. I ran the script below and monitored the GPU memory usage, finding that it can be up to ~4GB with the number of processes equal to 4, which is abnormal for a quite simple linear layer with shape `(3, 5)`. If I increase the number of processes, the GPU memory usage grows linearly.
This issue happens for both the `spawn` and `forkserver` multiprocessing start methods, which are needed for passing CUDA tensors as arguments to multi-processed functions.
Code to reproduce the issue:
```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp
from functools import partial
def train(task, net):
print(f'task {task} : {list(net.parameters())[0].device}')
# x = torch.rand(1, 3).cuda(3)
# y = net(x)
# print(y.shape)
def main():
mp.set_start_method('spawn')
net = nn.Linear(3, 5).cuda(3)
net = net.share_memory()
p_train = partial(train, net=net)
with mp.Pool(
processes=8,
maxtasksperchild=1,
) as pool:
pool.map(p_train, list(range(16)), chunksize=1)
if __name__ == '__main__':
main()
```
I tried to make the network global and use the `fork` start method; then the GPU memory usage is normal (about ~1GB, equal to the same program with the same network but without any multiprocessing). However, I couldn't create CUDA tensors inside sub-processes with `fork`, which makes this method useless for most cases (after all, we need to compute something rather than just print it out).
```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp
from functools import partial
net: nn.Module
def train(task):
global net
print(f'task {task} : {net} {list(net.parameters())[0].device}')
x = torch.rand(1, 3).cuda(3) # not allowed
# RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
y = net(x)
print(y.shape)
def main():
mp.set_start_method('fork')
global net
net = nn.Linear(3, 5).cuda(3)
# net = net.share_memory()
p_train = partial(train)#, net=net)
with mp.Pool(
processes=8,
maxtasksperchild=1,
) as pool:
pool.map(p_train, list(range(16)), chunksize=1)
if __name__ == '__main__':
main()
```
### Versions
```
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1045-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0+cu113
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.11.0+cu113 pypi_0 pypi
```
cc @VitalyFedyunin @ngimel @mruberry
| 4 |
5,538 | 78,882 |
Cannot build master on AWS cluster: error: β__fatDeviceTextβ was not declared in this scope
|
module: build, module: cuda, triaged
|
### π Describe the bug
```
FAILED: third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o /fsx/users/ezyang/pytorch-tmp/build/
third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o
cd /fsx/users/ezyang/pytorch-tmp/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl && /fsx/users/ezyang/conda/bin/cmak
e -E make_directory /fsx/users/ezyang/pytorch-tmp/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/. && /fsx/users/ez
yang/conda/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=/fsx/users/ezyang/pyto
rch-tmp/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o -D generated_cubin_file:STRI
NG=/fsx/users/ezyang/pytorch-tmp/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o.cub
in.txt -P /fsx/users/ezyang/pytorch-tmp/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.
o.Release.cmake
In file included from /tmp/tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c:9:0,
from tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c:1:
/tmp/tmpxft_0000ff6b_00000000-2_nccl.fatbin.c:30:14: error: β__fatBinC_Wrapper_tβ does not name a type; did you mean β__nv_hdl
_wrapper_tβ?
static const __fatBinC_Wrapper_t __fatDeviceText __attribute__ ((aligned (8))) __attribute__ ((section (__CUDAFATBINSECTION))
)=
^~~~~~~~~~~~~~~~~~~
__nv_hdl_wrapper_t
In file included from /tmp/tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c:8:0,
from tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c:1:
/tmp/tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c: In function βvoid __sti____cudaRegisterAll()β:
/tmp/tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c:13:44: error: β__fatDeviceTextβ was not declared in this scope
static void __sti____cudaRegisterAll(void){__cudaRegisterBinary(__nv_cudaEntityRegisterCallback);}
^
/tmp/tmpxft_0000ff6b_00000000-5_nccl.cudafe1.stub.c:13:44: note: suggested alternative: βcuDeviceGetβ
CMake Error at gloo_cuda_generated_nccl.cu.o.Release.cmake:277 (message):
Error generating file
/fsx/users/ezyang/pytorch-tmp/build/third_party/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o
```
I used the standard instructions at https://www.internalfb.com/intern/wiki/PyTorch/PyTorchDev/Workflow/PyTorch_environment_setup/pytorch_aws_setup/
### Versions
master
cc @malfet @seemethere @ngimel
| 0 |
5,539 | 78,880 |
fatal_signal_asan_no_sig_test in current master hang.
|
module: cpp, triaged, module: deadlock, module: testing
|
### π Describe the bug
Our docker build with pytorch timed out recently and we found the fatal_signal_asan_no_sig_test always hangs.
It occupies ~700% CPU on a 36-thread system, so I guess something's spinning.
```
[root@15f31169863c build]# bin/fatal_signal_asan_no_sig_test
Running main() from ../third_party/googletest/googletest/src/gtest_main.cc
[==========] Running 7 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 7 tests from fatalSignalTest
[ RUN ] fatalSignalTest.SIGABRT8
```
This issue only exists on CentOS 7.
Debian 11 seems fine.
Older versions of pytorch are also fine.
The build is a CUDA build on a non-CUDA system.
### Versions
Current master of pytorch.
rh-python38 scl on CentOS 7.
cc @jbschlosser
| 3 |
5,540 | 78,878 |
Improving clarity in the docs of different losses
|
module: docs, module: nn, triaged, module: python frontend
|
### π The doc issue
Looking at the following losses, [BCELoss](https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html#torch.nn.functional.binary_cross_entropy) vs [CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html#torch.nn.functional.cross_entropy), it seems that there's no clear indication of the type of data expected for `input` and `target`.
For other arguments the docs clearly indicate whether they are `int`, `bool`, `float`, etc., but not for `input` and `target`.
We also see a discrepancy where `BCELoss` expects the `target` to be of type `float`, whereas `CrossEntropyLoss` expects it to be `int`.
Why does `BCELoss` not accept a `target` of type `int`, and expect `float` instead?
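For reference, a minimal sketch of the dtype discrepancy described above:
```python
import torch
import torch.nn.functional as F

# cross_entropy: target holds integer class indices
logits = torch.randn(4, 3)
ce_target = torch.tensor([0, 2, 1, 0])            # dtype: int64
ce_loss = F.cross_entropy(logits, ce_target)

# binary_cross_entropy: input and target are float probabilities in [0, 1]
probs = torch.sigmoid(torch.randn(4))
bce_target = torch.tensor([0.0, 1.0, 1.0, 0.0])   # dtype: float32
bce_loss = F.binary_cross_entropy(probs, bce_target)
```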
Another thing that really would allow more clarity is to have links from the doc directly to the source code of the forward and backward calls for each loss.
I know you might say that there are already links, for instance the following `SOURCE` link:
<img width="856" alt="image" src="https://user-images.githubusercontent.com/2902390/171992728-e4835a16-4075-4a09-98cd-dd8b5166697d.png">
But following the `SOURCE` link brings us to the following definition:
<img width="352" alt="image" src="https://user-images.githubusercontent.com/2902390/171992840-177cf94f-4ef8-4629-a6a4-e78a9681760c.png">
With the following dispatch code:
<img width="600" alt="image" src="https://user-images.githubusercontent.com/2902390/171992880-9fa7b750-1940-4f71-94a7-a73f758546b8.png">
As we can see, to find the actual implementation we'll have to go and find the definition of `torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)`.
This prevents folks from retrieving relevant information on how different parts work directly from the docs. One would have to spend a whole chunk of time searching through the GitHub code base until they reach what they are looking for.
### Suggest a potential alternative/fix
1. The types of data inputs should be clearly indicated for all arguments and there should be some consistency in terms of the types for the first two inputs `input` and `target`.
2. Improve navigation from docs to source code, linking each method directly to its actual forward/backward pass instead of to some dispatch function.
cc @svekars @holly1238 @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 1 |
5,541 | 78,876 |
Remove const from function return type if returning const value
|
module: cpp, triaged
|
### π The feature, motivation and pitch
It is better to avoid the `const` qualifier on function return type if the function returns by value. The `const` qualifier may prevent move semantics if the type is movable. https://quuxplusone.github.io/blog/2022/01/23/dont-const-all-the-things/#return-types-never-const
From [Scott Meyer](https://www.aristeia.com/BookErrata/ec++3e-errata.html):
> Declaring by-value function return values const will prevent their being bound to [rvalue references in C++0x](http://www.artima.com/cppsource/rvalue.html). Because rvalue references are designed to help improve the efficiency of C++ code, it's important to take into account the interaction of const return values and the initialization of rvalue references when specifying function signatures.
### Alternatives
Leave it as it is if the `const` qualifiers are there for some reasons.
### Additional context
I'd like to merge [this commit](https://github.com/nglee/pytorch/commit/10bdb79a0a4c1b0ae3e3dc1ba61b5929982994b4).
cc @jbschlosser
| 1 |
5,542 | 78,873 |
[Profiler] Capture more information about inputs
|
triaged, oncall: profiler
|
### π The feature, motivation and pitch
One of the most important options in the profiler is `record_shapes`. It records the shape and dtype of inputs. This information is essential for analyzing traces, as without it there is no way to assess whether an op has reasonable performance. However there are several things that we do not collect:
1) The identities of Tensors. Without this it is not possible to do data flow analysis.
2) The strides and layout of Tensors. Highly strided Tensors suffer from pathological memory/cache behavior and kernels are often not well optimized for these cases. Without strides, two ops may be identical to the profiler but have very different runtime characteristics.
3) Scalar values. We capture that an input is a Scalar, but not the value itself. These values often parameterize computation (`dim`, `groups`, etc.) so proper analysis requires these values. (And they're scalars, so they should be cheap to store.)
4) More exotic types like `TensorList` or `NestedTensor` are important performance abstractions, but not handled by the profiler.
For (1), we can access the `TensorImpl*` and `StorageImpl*` when we encounter a Tensor. For (2), we already have an efficient packed sizes and strides representation that we use inside TensorImpl so it's mostly a matter of interfacing that with the profiler. (3) should be quite straightforward.
The input capture machinery in profiler was designed for extensibility (https://github.com/pytorch/pytorch/blob/master/torch/csrc/profiler/collection.h#L172-L202) so hopefully it will just be a matter of working through each case. And of course if you think of other things that might be useful feel free to add them as well.
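For context, a minimal sketch of today's `record_shapes` capture that this proposal would extend; it reports shapes/dtypes but none of the extra information listed above:
```python
import torch
from torch.profiler import profile, ProfilerActivity

x = torch.randn(128, 64)
w = torch.randn(64, 32)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    torch.matmul(x, w)

# Shapes/dtypes are grouped and reported; strides, Tensor identities and scalar
# values (items 1-3 above) are not captured today.
print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total"))
```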
### Alternatives
_No response_
### Additional context
_No response_
cc @ilia-cher @robieta @chaekit @gdankel @bitfort @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git
| 0 |
5,543 | 78,871 |
[RecordFunction] Hold a durable schema reference
|
triaged, oncall: profiler
|
### π The feature, motivation and pitch
When an op is called through the dispatcher we hold a reference to the schema in the RecordFunction object for the duration of the call: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/record_function.h#L372 We would like to keep the schema until post processing and expose it to users as that would enable certain analyses like FLOP counting. However it is prohibitively expensive to copy the schema while profiling. We should be able to guarantee the lifetime of the FunctionSchema in the Dispatcher and then just store a pointer in RecordFunction, however work needs to be done to ensure that all the machinery is safe.
### Alternatives
_No response_
### Additional context
_No response_
cc @ilia-cher @robieta @chaekit @gdankel @bitfort @ngimel @nbcsm @guotuofeng @guyang3532 @gaoteng-git
| 0 |
5,544 | 78,848 |
MPS: Adding int64 tensor does not work on AMD GPU
|
triaged, module: correctness (silent), module: mps
|
### π Describe the bug
Consider following simple example:
```
python3 -c "import torch;x=torch.arange(6); print(x+(1<<32), x.to('mps')+(1<<32))"
```
It yields correct results on my M1 MacMini:
```
% python3 -c "import torch;x=torch.arange(6); print(x+(1<<32), x.to('mps')+(1<<32))"
tensor([4294967296, 4294967297, 4294967298, 4294967299, 4294967300, 4294967301]) tensor([4294967296, 4294967297, 4294967298, 4294967299, 4294967300, 4294967301],
device='mps:0')
```
But fails on x86 MacBook Pro:
```
% python3 -c "import torch;x=torch.arange(6); print(x+(1<<32), x.to('mps')+(1<<32))"
tensor([4294967296, 4294967297, 4294967298, 4294967299, 4294967300, 4294967301]) tensor([0, 1, 2, 3, 4, 5], device='mps:0')
```
### Versions
Nightly
cc @kulinseth @albanD
| 2 |
5,545 | 78,845 |
[Modes] no_dispatch is not the same as DisableTorchFunction, causing differences in modes
|
triaged, module: __torch_function__, module: __torch_dispatch__
|
### π Describe the bug
## Repro
The following passes
```
def test_disable_subclass_not_mode(self):
called = False
class A(TorchFunctionMode):
def __torch_function__(self, func, types, args=(), kwargs=None):
nonlocal called
if kwargs is None:
kwargs = {}
called = True
return func(*args, **kwargs)
class B(torch.Tensor):
pass
x = B(torch.randn(5))
with torch.overrides.push_torch_function_mode(A):
with torch._C.DisableTorchFunction():
self.assertNotIsInstance(torch.sum(x), B)
self.assertTrue(called)
```
while the following fails because `torch.sum(x)` is of type B (and, adjusting the test accordingly, `called` is not True)
```
def test_disable_subclass_not_mode(self):
called = False
class A(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
nonlocal called
if kwargs is None:
kwargs = {}
called = True
return func(*args, **kwargs)
class B(torch.Tensor):
pass
x = B(torch.randn(5))
with push_torch_dispatch_mode(A):
with no_dispatch():
self.assertNotIsInstance(torch.sum(x), B)
self.assertTrue(called)
```
## Problem
This suggests that the behaviors of `no_dispatch` and `DisableTorchFunction` don't match.
We should either make the behavior of these functions match or make a version of `DisableTorchDispatch` which matches `DisableTorchFunction`
cc @hameerabbasi @rgommers @peterbell10 @Chillee @ezyang @zou3519 @albanD @samdow @eellison
### Versions
master
| 4 |
5,546 | 78,842 |
Add TORCH_SHOW_CPP_STACKTRACES when TORCH_DISTRIBUTED_DEBUG = detail
|
oncall: distributed, module: bootcamp, pt_distributed_rampup, module: c10d
|
### π The feature, motivation and pitch
When TORCH_DISTRIBUTED_DEBUG=detail, we should also enable C++ stack traces so that if a collective is inconsistent, the user can get the entire C++ stack to help debug.
Arguably, we could also just do this for TDD=info. We could also discuss whether, if TDD=detail, we want to enable NCCL_DEBUG=warn or info as well, although one reason for not doing this is that NCCL is a separate library whose env vars should be explicitly set by the user.
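A minimal sketch of what a user has to set by hand today; the request is essentially for the first setting to imply the second (with the NCCL knob left as a discussion point):
```python
import os

# Set before initializing the process group:
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
os.environ["TORCH_SHOW_CPP_STACKTRACES"] = "1"   # the proposal: DETAIL should imply this
# os.environ["NCCL_DEBUG"] = "WARN"              # debatable, since NCCL owns this knob
```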
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,547 | 78,834 |
[ONNX] Re-design `torch.onnx.export`
|
module: onnx, triaged, needs design, onnx-triaged
|
### π The feature, motivation and pitch
## Motivation
* Reduce the complexity of this function.
* Make exporter more modularized for easier development and maintenance.
* Separate export process in modular responsibilities
* Export ONNX using different "backends"
* torch jit tracing
* torch jit scripting
* fx symbolic execution
* torch inductor
* whatever backend
* Validate/verify exported model
* Custom operators support
* User overloading existing operators for particular op_name/opset
* User adding new operators
* Shape inference / Dynamic axes support
* Optimization options
* Debug options
### Alternatives
* Create new exporter API and incrementally deprecate current torch.onnx.export API
### Additional context
1p users often complain about the difficulty in using this API and end up creating higher order APIs to hide away its complexity
| 3 |
5,548 | 78,831 |
Mac M1 Build Failure on DEBUG=1
|
needs reproduction, module: build, triaged, module: macos, module: arm
|
### π Describe the bug
DEBUG=1 python setup.py install
> ld: can't open file, errno=1 file 'caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/exit_transforms.cpp.o' for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
### Versions
PyTorch version: 1.13.0a0+git61a83fc
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.27.3)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.9.12 (main, Apr 5 2022, 01:52:34) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.13.0a0+git61a83fc
[conda] numpy 1.22.3 py39h25ab29e_0
[conda] numpy-base 1.22.3 py39h974a1f5_0
[conda] torch 1.13.0a0+git141238a pypi_0 pypi
cc @malfet @seemethere @albanD
| 1 |
5,549 | 78,829 |
Certain import order triggers segmentation fault
|
module: crash, triaged, has workaround
|
### π Describe the bug
This bug is holding up the latest updates for the library pysr: https://github.com/MilesCranmer/PySR/ which depends on pytorch. cc @tttc3.
Basically: I see a segmentation fault in PyTorch when importing another library which has a system interface. However, this depends on the import order:
- if I import torch, then the other library, I see a bug.
- if I import the other library, then torch, I don't see a bug.
Here is an MWE (the other library is pyjulia - install with `pip install julia`):
```python
def do_something_torch():
import torch
def do_something_julia():
from julia import Pkg
Pkg.activate("test", shared=True)
Pkg.update()
```
On my macOS machine, running `do_something_torch()` and then `do_something_julia()` triggers the following bug, which looks to come from `libtorch_cpu.dylib`. It seems like there is some interaction between the system calls of each library:
```
signal (11): Segmentation fault: 11
in expression starting at none:0
uv__io_start at /Users/mcranmer/venvs/main/lib/python3.8/site-packages/torch/lib/libtorch_cpu.dylib (unknown line)
uv__read_start at /Users/mcranmer/venvs/main/lib/python3.8/site-packages/torch/lib/libtorch_cpu.dylib (unknown line)
start_reading at ./stream.jl:793
wait_readnb at ./stream.jl:411
eof at ./stream.jl:106
jfptr_eof_42389 at /Applications/Julia-1.7.app/Contents/Resources/julia/lib/julia/sys.dylib (unknown line)
jl_apply_generic at /Applications/Julia-1.7.app/Contents/Resources/julia/lib/julia/libjulia-internal.1.7.dylib (unknown line)
eof at ./io.jl:416
#read_tarball#47 at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Tar/src/extract.jl:344
read_tarball##kw at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Tar/src/extract.jl:340 [inlined]
#11 at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Registry/registry_instance.jl:199 [inlined]
#open#700 at ./process.jl:395
open at ./process.jl:393 [inlined]
uncompress_registry at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Registry/registry_instance.jl:198
RegistryInstance at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Registry/registry_instance.jl:266
#reachable_registries#17 at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Registry/registry_instance.jl:373
reachable_registries at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Registry/registry_instance.jl:345 [inlined]
#download_default_registries#28 at /Users/sabae/src/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Registry/Registry.jl:101
signal (11): Segmentation fault: 11
in expression starting at none:0
```
I see a similar issue on my CI's test suite which runs on linux with a different environment: https://github.com/MilesCranmer/PySR/runs/6715107004?check_suite_focus=true#step:12:31.
### Versions
(Note that I also see the bug in GitHub actions with different versions)
PyTorch version: 1.13.0.dev20220528
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.0 (https://github.com/llvm/llvm-project.git ab3b89855c5318f0009e1f016ffe5b1483507fd0)
CMake version: version 3.22.3
Libc version: N/A
Python version: 3.8.9 (default, Apr 13 2022, 08:48:06) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-12.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.13.0.dev20220528
[pip3] torchaudio==0.11.0
[pip3] torchmetrics==0.7.2
[pip3] torchvision==0.14.0a0+f0f8a3c
[conda] Could not collect
| 11 |
5,550 | 78,812 |
TorchScript inference get intermediate result?
|
oncall: jit, feature
|
### π The doc issue
https://discuss.pytorch.org/t/how-to-get-the-intermediate-output-from-torchscript/152981
It seems many people are asking for a solution to this. Is there any documentation about it? Is it possible to do so? This is really needed!
### Suggest a potential alternative/fix
_No response_
| 1 |
5,551 | 78,809 |
Feature Request: Hessenberg and Schur decompositions
|
feature, triaged, module: linear algebra
|
There are a handful of eigenproblem decompositions that I would like to use but are not provided by PyTorch (e.g. Hessenberg and Schur). This is one area where PyTorch noticeably lags behind other applications and libraries.
- [ ] Schur Decomposition (`torch.linalg.schur`)
[Julia](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.schur)
[MATLAB](https://www.mathworks.com/help/matlab/ref/schur.html)
[SciPy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.schur.html)
- [ ] Hessenberg Decomposition (`torch.linalg.hessenberg`)
[Julia](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.hessenberg)
[MATLAB](https://www.mathworks.com/help/matlab/ref/hess.html)
[SciPy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.hessenberg.html)
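For context, a minimal sketch of the current workaround via SciPy (round-tripping through NumPy), which is what native `torch.linalg.schur` / `torch.linalg.hessenberg` functions would replace:
```python
import torch
from scipy.linalg import schur, hessenberg

a = torch.randn(5, 5, dtype=torch.float64)

# Schur: a = Z @ T @ Z.conj().T, with T (quasi-)upper triangular
T, Z = schur(a.numpy())

# Hessenberg: a = Q @ H @ Q.conj().T, with H upper Hessenberg
H, Q = hessenberg(a.numpy(), calc_q=True)

T, Z, H, Q = (torch.from_numpy(m) for m in (T, Z, H, Q))
```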
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 7 |
5,552 | 78,808 |
Feature request: Integer system decompositions
|
feature, triaged, module: linear algebra
|
I have started to work more and more with integer systems in PyTorch so it would be nice if `linalg` provided some common integer system decompositions (e.g., Hermite or Smith normal forms are obvious choices).
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
| 1 |
5,553 | 78,805 |
torch.jit.script segmentation fault (pytorch debayer module) 1.10, 1.11 and nightly
|
oncall: jit
|
### π Describe the bug
Offending module(s) can be found by installing pytorch-debayer:
`pip install git+https://github.com/cheind/pytorch-debayer`
```
import torch
import debayer
m = debayer.Debayer3x3()
torch.jit.script(m)
```
```
>>> import torch
to>>> torch.__version__
'1.13.0.dev20220603'
>>> import debayer
>>> m = debayer.Debayer3x3()
>>> torch.jit.script(m)
[1] 111518 segmentation fault (core dumped) python
```
### Versions
The segmentation fault occurs on at least 1.10, 1.11 and nightly.
```
Collecting environment information...
PyTorch version: 1.13.0.dev20220603
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.23.2
Libc version: glibc-2.31
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:38:57) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.55
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-debayer==1.4.1
[pip3] torch==1.13.0.dev20220603
[pip3] torchaudio==0.12.0.dev20220603
[pip3] torchvision==0.14.0.dev20220603
[conda] blas 2.114 mkl conda-forge
[conda] blas-devel 3.9.0 14_linux64_mkl conda-forge
[conda] cudatoolkit 11.3.1 h9edb442_10 conda-forge
[conda] libblas 3.9.0 14_linux64_mkl conda-forge
[conda] libcblas 3.9.0 14_linux64_mkl conda-forge
[conda] liblapack 3.9.0 14_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 14_linux64_mkl conda-forge
[conda] mkl 2022.0.1 h8d4b97c_803 conda-forge
[conda] mkl-devel 2022.0.1 ha770c72_804 conda-forge
[conda] mkl-include 2022.0.1 h8d4b97c_803 conda-forge
[conda] numpy 1.22.4 py310h4ef5377_0 conda-forge
[conda] pytorch 1.13.0.dev20220603 py3.10_cuda11.3_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-debayer 1.4.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 0.12.0.dev20220603 py310_cu113 pytorch-nightly
[conda] torchvision 0.14.0.dev20220603 py310_cu113 pytorch-nightly
```
| 0 |
5,554 | 78,800 |
Efficiency of unary operations on CPU for large tensors
|
module: performance, triaged
|
### π Describe the bug
It seems that doing
```python
out = x.clone()
out.op_()
```
is between 5% and 15% faster than doing `x.op()` on float tensors with 10^8 elements:
```
[-------------- Unary Ops --------------]
| out of place | in-place
4 threads: ------------------------------
abs | 58 | 52
sqrt | 69 | 54
tanh | 114 | 105
atan | 124 | 115
cos | 84 | 74
erf | 83 | 73
log | 74 | 63
log1p | 603 | 598
neg | 59 | 53
Times are in milliseconds (ms).
```
When the tensors are smaller (10^7 elements or less), on my machine the out-of-place version is comparable to or faster than the in-place version. I have not found where the exact cut-off for this behaviour is, but by the looks of it, it may have something to do with paging.
The script:
```python
from itertools import product
import torch
from torch.utils.benchmark import Compare, Timer
def get_timer(op, fn):
x = torch.rand((10000, 10000), device="cpu")
stm = fn(op)
print(stm)
timer = Timer(
stm,
globals={"x": x},
label="Unary Ops",
sub_label=op,
description="in-place" if "clone" in stm else "out of place",
num_threads=4)
return timer.blocked_autorange(min_run_time=5)
def get_params():
ops = ("abs", "sqrt", "tanh", "atan", "cos", "erf", "log", "log1p", "neg")
fns = (lambda fn: f"x.{fn}()", lambda fn: f"out = x.clone(); out.{fn}_()")
for op, fn in product(ops, fns):
yield op, fn
compare = Compare([get_timer(*params) for params in get_params()])
compare.trim_significant_figures()
compare.print()
```
### Versions
master
cc @VitalyFedyunin @ngimel
| 0 |
5,555 | 78,786 |
Deprecate hardtanh type promotion behavior.
|
module: nn, triaged, module: primTorch
|
### π The feature, motivation and pitch
There's no clear reason that `hardtanh` should exist - however, it currently has its own kernel and its own `hardtanh_backward`.
See discussion here: https://github.com/pytorch/pytorch/pull/78689/files/618a28c397fbc66433de7b16e5a86a413fd8bf2b#diff-93c7b95139f636278cc494028e322a2c3c3c9ba1e83b2adb35d54ccabed5b47a
cc: @mruberry @ngim
### Alternatives
N/A
### Additional context
N/A
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @ezyang @ngimel
| 2 |
5,556 | 78,774 |
[FSDP] Customizable gradient pre-divide for mixed precision training
|
oncall: distributed, triaged, module: fsdp
|
### π The feature, motivation and pitch
When training FSDP with mixed precision and fp16, we need to be aware of the reduced range and avoid over/underflow. If we sum all gradients for a large world size and then divide, this might overflow the fp16 range and cause issues, while if we divide first and then sum, we may risk underflow. One possible solution is to divide by sqrt(N), run allreduce, and then divide the result again by sqrt(N) which is equivalent to a N = world size divide. As a result, allreduce only increases the magnitude by sqrt(N=world size), which is less likely to cause overflow.
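A minimal sketch of the sqrt(N) pre/post-divide described above (a standalone illustration, not the actual FSDP communication hook):
```python
import math
import torch
import torch.distributed as dist

def allreduce_grad_fp16_safe(grad: torch.Tensor, world_size: int) -> torch.Tensor:
    factor = math.sqrt(world_size)
    grad.div_(factor)                            # pre-divide: shrink magnitudes before summing
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)  # the sum only grows magnitude by ~sqrt(N)
    grad.div_(factor)                            # post-divide: net scaling is 1 / world_size
    return grad
```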
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 1 |
5,557 | 78,761 |
Extend tag testing for aliases
|
triaged, module: testing
|
### π Describe the bug
Currently tags are only tested for the op whose OpInfo we are looking at. We should also look into the aliases db for the OpInfo and run the tests for aliases as well.
### Versions
master
| 0 |
5,558 | 78,759 |
Add `inplace_view` tag for `resize_`
|
triaged, module: viewing and reshaping
|
### π Describe the bug
as per the title
### Versions
master
| 4 |
5,559 | 78,754 |
Getting NotImplementedError when trying to implement E2E support for `prim::is_nested` Op in torch-mlir.
|
triaged, module: nestedtensor, oncall: transformer/mha
|
### π Describe the bug
Hello, I am trying to add support for the prim::is_nested op in torch-mlir. I have posted a similar issue in torch-mlir: https://github.com/llvm/torch-mlir/issues/880, and this PR https://github.com/llvm/torch-mlir/pull/881 points to the commit in question. It has the corresponding lowering code and E2E test code. When trying to run the nested test case I get the following error summary. Kindly advise on the best way to debug the error.
```
Unexpected outcome summary:
****** Failed tests - 1 tests
FAIL - "PrimIsNestedOpModule_nested"
Compilation error: Traceback (most recent call last):
File "/home/vidush/nodAI/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 282, in compile_and_run_test
golden_trace = generate_golden_trace(test)
File "/home/vidush/nodAI/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 276, in generate_golden_trace
test.program_invoker(tracer, TestUtils())
File "/home/vidush/nodAI/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/test_suite/basic.py", line 1922, in PrimIsNestedOpModule_nested
module.forward(nested_basic)
File "/home/vidush/nodAI/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 255, in __call__
inputs = [clone_torch_script_value(arg) for arg in args]
File "/home/vidush/nodAI/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 255, in <listcomp>
inputs = [clone_torch_script_value(arg) for arg in args]
File "/home/vidush/nodAI/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/torchscript/framework.py", line 60, in clone_torch_script_value
return v.clone()
NotImplementedError: Could not run 'aten::clone' with arguments from the 'NestedTensorCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clone' is only available for these backends: [Dense, FPGA, ORT, Vulkan, Metal, Meta, Quantized, CustomRNGKeyId, MkldnnCPU, Sparse, SparseCsrCPU, SparseCsrCUDA, NestedTensor, BackendSelect, Python, Fake, Named, Conjugate, Negative, ZeroTensor, FuncTorchDynamicLayerBackMode, ADInplaceOrView, AutogradOther, AutogradFunctionality, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, Autocast, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, Functionalize, DeferredInit, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, TESTING_ONLY_GenericWrapper, TESTING_ONLY_GenericMode, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, CPU, CUDA, HIP, XLA, MPS, IPU, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
Undefined: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
CPU: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
CUDA: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
HIP: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
XLA: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
MPS: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
IPU: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
XPU: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
HPU: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
VE: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
Lazy: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
PrivateUse1: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
PrivateUse2: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
PrivateUse3: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
FPGA: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
ORT: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
Vulkan: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
Metal: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
Meta: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1294 [kernel]
QuantizedCUDA: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
QuantizedXPU: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
CustomRNGKeyId: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
MkldnnCPU: registered at aten/src/ATen/RegisterMkldnnCPU.cpp:690 [kernel]
SparseCPU: registered at aten/src/ATen/RegisterSparseCPU.cpp:1858 [kernel]
SparseCUDA: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
SparseHIP: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
SparseXPU: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
SparseVE: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
SparseCsrCPU: registered at aten/src/ATen/RegisterSparseCsrCPU.cpp:1507 [kernel]
SparseCsrCUDA: registered at aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:29545 [default backend kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:133 [backend fallback]
Named: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:22 [kernel]
Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:22 [kernel]
ZeroTensor: fallthrough registered at ../aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:12167 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_1.cpp:12753 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:324 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1068 [kernel]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:137 [backend fallback]
```
### Versions
Collecting environment information...
PyTorch version: 1.13.0.dev20220523+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-5ubuntu1) 9.4.0
Clang version: 13.0.0
CMake version: version 3.22.4
Libc version: glibc-2.35
Python version: 3.9.0 (default, May 19 2022, 12:51:15) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.16.0-051600-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220523+cpu
[pip3] torchvision==0.13.0.dev20220523+cpu
[conda] Could not collect
cc @cpuhrsch @jbschlosser @bhosmer
| 3 |
5,560 | 78,744 |
Unable to programmatically update models using references from model.named_modules()...requires additional parsing
|
module: nn, triaged, enhancement
|
### π Describe the bug
Users may need to programmatically update models - for example, swapping out specific layers or dynamically adding activation checkpoint_wrapper modules.
While attempting to do this, I used a simple enumeration of a DeepViT model with
~~~
for index, (name, layer) in enumerate(model.named_modules()):
    if isinstance(layer, TargetLayer):  # placeholder for the layer type of interest, e.g. nn.Linear
        ...
~~~
This correctly isolated the layers of interest. However, when you go to update these layers using setattr with the name returned (e.g. transformer.layers.0.0.fn.fn.to_qkv), you will find that rather than replacing the layer, you have simply appended a new layer to the model. setattr doesn't return any response, so it's even harder to determine programmatically whether the updates worked or you just mangled your model with a bunch of new layers appended.
Fixing this and actually updating the model using the name returned from named_modules requires some additional parsing:
~~~
pieces = name.strip().split(".")
layer = model
for item in pieces[:-1]:
    if not item.isnumeric():
        layer = getattr(layer, item)
    else:
        layer = layer[int(item)]
~~~
This is because when items are in a list, we need to reference them via `[0]` instead of `.0`, which is what is returned by `named_modules`.
However, this is highly confusing and shouldn't require a user to figure this out and write their own parsing routine to offer programmatic updates.
Details on this are here:
https://discuss.pytorch.org/t/how-to-replace-a-layer-with-own-custom-variant/43586/11
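For illustration, a minimal sketch of the kind of helper this ends up requiring (the name `replace_module` and its exact behavior are hypothetical, not an existing PyTorch API):
```python
import torch.nn as nn

def replace_module(model: nn.Module, dotted_name: str, new_layer: nn.Module) -> None:
    # Hypothetical helper: resolve the parent from the dotted name returned by
    # named_modules(), then swap the child in place instead of appending a new one.
    *parent_path, child_name = dotted_name.strip().split(".")
    parent = model
    for item in parent_path:
        parent = parent[int(item)] if item.isnumeric() else getattr(parent, item)
    if child_name.isnumeric():
        parent[int(child_name)] = new_layer  # e.g. inside an nn.Sequential / nn.ModuleList
    else:
        setattr(parent, child_name, new_layer)
```
Having something like this built in (or at least documented) would avoid every user re-deriving the parsing above.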
### Versions
PyTorch version: 1.13.0.dev20220523+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.27
Python version: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-1072-aws-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.2.0a0+ae70048
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] numpydoc==1.1.0
[pip3] torch==1.13.0.dev20220523+cu113
[pip3] torch-model-archiver==0.5.0b20211117
[pip3] torch-workflow-archiver==0.2.0b20211118
[pip3] torchdynamo==0.2.0
[pip3] torchserve==0.5.0b20211117
[pip3] torchtext==0.11.0
[pip3] torchvision==0.13.0.dev20220523+cu113
[pip3] vit-pytorch==0.33.2
[conda] blas 1.0 mkl
[conda] captum 0.4.1 0 pytorch
[conda] cudatoolkit 11.1.1 h6406543_9 conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.21.2 py38h20f2e39_0
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] numpydoc 1.1.0 py_1 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 1.13.0.dev20220523+cu113 pypi_0 pypi
[conda] torch-model-archiver 0.5.0 py38_0 pytorch
[conda] torch-workflow-archiver 0.2.0 py38_0 pytorch
[conda] torchdynamo 0.2.0 dev_0 <develop>
[conda] torchserve 0.5.0 py38_0 pytorch
[conda] torchtext 0.11.0 py38 pytorch
[conda] torchvision 0.13.0.dev20220523+cu113 pypi_0 pypi
[conda] vit-pytorch 0.33.2 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 2 |
5,561 | 78,743 |
Expose docs from the yaml for each torch.Tag in Python
|
module: docs, triaged, module: dispatch, module: library
|
### π The doc issue
the docs for each tag exist in `tags.yaml`
### Suggest a potential alternative/fix
_No response_
cc @svekars @holly1238 @anjali411
| 0 |
5,562 | 78,742 |
Add a gallery of examples with sphinx-gallery
|
module: docs, triaged
|
### π The doc issue
A cornerstone of numeric and scientific Python libraries is a gallery of concise, straightforward examples that are longer than per-function examples but shorter than our current [tutorials](https://pytorch.org/tutorials/index.html). They usually have an entirely visual navigation (extremely useful for pattern matching). It would be nice if we could also provide this functionality.
Here are some examples:
[scikit-image](https://scikit-image.org/docs/stable/auto_examples/)
[scikit-learn](https://scikit-learn.org/stable/auto_examples/index.html)
[statsmodels](https://www.statsmodels.org/devel/examples/index.html)
[matplotlib](https://matplotlib.org/stable/gallery/index.html)
[seaborn](https://seaborn.pydata.org/examples/index.html)
cc @svekars @holly1238
| 1 |
5,563 | 78,741 |
Test approximation and numerical stability of numerical operators
|
module: numerical-stability, triaged, module: testing
|
### π The feature, motivation and pitch
It would be nice if there was an automated way for us to report the approximate error of our numerical operators and a way to annotate the OpInfo for the operator with the expected error bounds. It'd be even nicer if there was a way to annotate error bounds for specific intervals (e.g., -1.0 and 1.0).
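As a rough sketch of the kind of measurement such tooling might automate (the helper name, the use of a float64 evaluation as the reference, and the interval are all assumptions for illustration):
```python
import torch

def max_errors(fn, low, high, n=10001, dtype=torch.float32):
    # Compare a float32 evaluation against a float64 "reference" on an interval.
    x = torch.linspace(low, high, n, dtype=dtype)
    ref = fn(x.double())
    got = fn(x).double()
    abs_err = (got - ref).abs().max().item()
    rel_err = ((got - ref).abs() / ref.abs().clamp_min(1e-300)).max().item()
    return abs_err, rel_err

print(max_errors(torch.erf, -1.0, 1.0))
```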
### Alternatives
Weβve made it this far!
### Additional context
_No response_
| 0 |
5,564 | 78,738 |
[primTorch] Sensible Error Messages
|
triaged, module: primTorch
|
In PyTorch today we typically write error messages as follows:
```
torch.foo expected X, but saw Y!
```
When an operator is implemented as a composite of other operations this often causes a choice to be made:
- either the operator has to redundantly implement error checks to produce a reasonable error message
- it has to rely on its constituent operators to throw errors appropriately, and for users to understand that the error generated from `torch.bar` is happening because `torch.foo` calls `torch.bar`
Neither of these solutions is great, of course, and we should decide which approach we want to pursue and how to improve it.
**Option 1: Each operator implements its own error checks.**
This would allow most error messages to be high quality, but...
- we'd want a mechanism to avoid redundant error checks (because they could be expensive)
- internal assertions (which usually involve obscure derived computations that operators shouldn't expect to emulate) will still produce odd error messages
- this approach also requires operators using another operator update whenever the error checks for the used operator update, too. For example, if torch.foo uses torch.bar, and torch.bar changes the input it accepts, then torch.foo will have to be explicitly updated, too, which is a potential maintenance headache
**Option 2: Operators rely on their constituents to generate errors wherever possible.**
This avoids redundant error checks and allows operators to inherit behavior updates, but...
- we'd want a mechanism to improve error message quality
**Proposal**
The second option seems preferable to me, but it requires developing a mechanism to improve error message quality (likely some kind of error context), and we need to be sure that it doesn't have a performance impact.
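Purely as a sketch, an error-context mechanism could look something like the following (the names and the re-raising strategy are assumptions; a real implementation would need to keep the overhead negligible):
```python
from contextlib import contextmanager

@contextmanager
def _op_error_context(user_facing_op: str):
    # Wrap the composite implementation so errors raised by constituent ops
    # are re-raised with the name of the op the user actually called.
    try:
        yield
    except RuntimeError as e:
        raise RuntimeError(f"while executing {user_facing_op}: {e}") from e
```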
I can't imagine any mechanism that will correct all error message oddities, however. While it's straightforward to do things like cite the original operation the user called, messages with derived values seem very hard to improve. For example, let's say `torch.foo` has a parameter `height` that it passes to `torch.bar`'s `width` parameter as `width=(height/2)`. Then if that `width` is invalid the user will see an error about `width`, even though they specified `height`!
For those cases we can continue to add additional, redundant error messages.
cc @ezyang @mruberry @ngimel
| 5 |
5,565 | 78,737 |
New c10 constants
|
module: internals, triaged
|
### π The feature, motivation and pitch
It would be extremely helpful if c10 exposed constants beyond `c10::pi` for all the PyTorch dtypes. Minimally, the GNU predefined constants would suffice, but I would advocate for something _slightly_ more expansive like NumPy or Boost. It would also provide the benefit of consistent representations (avoiding, e.g., implementors using different representations of the same constant).
### Alternatives
Continue to inline constants.
### Additional context
I use a lot of constants when implementing PyTorch functions (e.g., special functions)!
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| 3 |
5,566 | 78,729 |
[Better Engineering] Make OpInfo-based test failures easy to reproduce
|
module: tests, triaged, better-engineering
|
Let's update our OpInfo-based tests to better support randomness and reproducibility by using hypothesis (or a hypothesis-like mechanism) and providing a mechanism to acquire the exact sample input being tested when a failure occurs.
This functionality may be inherently limited to acquiring the exact sample input only when using the same hardware (and software) as the failing test.
cc @mruberry
| 1 |
5,567 | 78,722 |
AlBert quantization
|
oncall: quantization, triaged
|
Dynamically quantized ALBERT model shows poor performance
When I quantized a fine-tuned albert-base-v2 model, the checkpoint size reduced from 43MB to almost 22MB, but the F1 score dropped to 0.05, whereas on the non-quantized version it was 0.8.
```python
quantized_model = torch.quantization.quantize_dynamic(
    albert_model, {nn.Linear}, dtype=torch.qint8
)
```
I assume that dynamic quantization adds quantize ops to each layer of the encoder, but because the ALBERT model has shared layers, it quantizes the same weights several times (to be precise, 12 times). I have tried DistilBERT and it worked fine.
I fine-tuned on a token-classification task, but I believe you can reproduce the issue by comparing the outputs of the quantized and original ALBERT base models.
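A minimal comparison along those lines might look like this sketch (the checkpoint name, the input sentence, and the use of `AutoModel` are assumptions for illustration):
```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModel.from_pretrained("albert-base-v2").eval()
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

inputs = tok("The market reacted positively to the earnings report.", return_tensors="pt")
with torch.no_grad():
    ref = model(**inputs).last_hidden_state
    out = qmodel(**inputs).last_hidden_state
print((ref - out).abs().max())  # a large gap here would support the shared-layer hypothesis
```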
Thanks in advance!
### Versions
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==1.4.9
[pip3] torch==1.11.0
[pip3] torchmetrics==0.7.2
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
| 4 |
5,568 | 78,721 |
[ONNX] Scripted `reshape` incorrect if shape is dynamically calculated
|
module: onnx, triaged, onnx-triaged, bug
|
### π Describe the bug
torch.onnx.export produces an incorrect export of a `reshape` call after scripting if the shape is calculated dynamically. It looks like one of the shape arguments is not converted to an integer and is a float instead.
```python
#sample code
import torch
import torch.nn as nn
import onnx
import onnxruntime
TEST_WINDOW_SIZE = 7
TEST_H = 28
TEST_W = 28
TEST_B = 1
TEST_NUM_WINDOWS = 16
TEST_C = 96
class Model(nn.Module):
    def __init__(self, window_size: int, H: int, W: int):
        super(Model, self).__init__()
        self.window_size = window_size
        self.H = H
        self.W = W

    def window_reverse(self, windows, window_size: int, H: int, W: int):
        """
        Args:
            windows: (num_windows*B, window_size, window_size, C)
            window_size (int): Window size
            H (int): Height of image
            W (int): Width of image
        Returns:
            x: (B, H, W, C)
        """
        B: int = int(windows.shape[0] / (H * W / window_size / window_size))
        x = windows.reshape(B, H // window_size, W // window_size, window_size, window_size, -1)
        x = x.permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(B, H, W, -1)
        return x

    def forward(self, windows):
        return self.window_reverse(windows, self.window_size, self.H, self.W)
model = Model(TEST_WINDOW_SIZE, TEST_H, TEST_W)
model.eval()
model.cpu()
windows = torch.randn(TEST_NUM_WINDOWS * TEST_B, TEST_WINDOW_SIZE, TEST_WINDOW_SIZE, TEST_C)
jit_model = torch.jit.script(model, example_inputs=[(windows,)])
jit_model.eval()
jit_model.cpu()
torch.testing.assert_allclose(model(windows), jit_model(windows))
torch.onnx.export(jit_model,
windows,
"bug.onnx",
export_params=False,
opset_version=11,
do_constant_folding=True,
input_names=["input"],
output_names=["output"],
dynamic_axes={
'input' : {0 : 'batch_size'},
'output' : {0 : 'batch_size'}
}, verbose=True)
onnx_model = onnx.load("bug.onnx")
onnx.checker.check_model(onnx_model)
ort_session = onnxruntime.InferenceSession("bug.onnx", verbose=True)
```
error
```
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from bug.onnx failed:Type Error: Type parameter (T) of Optype (Concat) bound to different types (tensor(float) and tensor(int64) in node (Concat_11).
```

### Versions
PyTorch version: 1.11.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 495.29.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] torch 1.11.0 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi
| 1 |
5,569 | 78,720 |
ValueError during `yaml.dump(dtype)`
|
module: serialization, triaged
|
### π Describe the bug
```python
import torch
import yaml
yaml.dump(torch.half)
```
```python
Traceback (most recent call last):
File "/home/carmocca/git/repro.py", line 4, in <module>
yaml.dump(torch.half)
File "/home/carmocca/git/py39/lib/python3.9/site-packages/yaml/__init__.py", line 253, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/home/carmocca/git/py39/lib/python3.9/site-packages/yaml/__init__.py", line 241, in dump_all
dumper.represent(data)
File "/home/carmocca/git/py39/lib/python3.9/site-packages/yaml/representer.py", line 27, in represent
node = self.represent_data(data)
File "/home/carmocca/git/py39/lib/python3.9/site-packages/yaml/representer.py", line 52, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File "/home/carmocca/git/py39/lib/python3.9/site-packages/yaml/representer.py", line 330, in represent_object
dictitems = dict(dictitems)
ValueError: dictionary update sequence element #0 has length 1; 2 is required
```
Reported in https://github.com/PyTorchLightning/pytorch-lightning/issues/13158
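A possible workaround (an assumption, not an official fix) is to register a PyYAML representer for `torch.dtype` so it is dumped as a string:
```python
import torch
import yaml

# Teach PyYAML how to represent torch.dtype objects before dumping.
yaml.add_representer(torch.dtype, lambda dumper, data: dumper.represent_str(str(data)))
print(yaml.dump(torch.half))  # dumps as the string "torch.float16"
```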
### Versions
```python
torch==1.11.0+cu113
yaml.__version__==6.0
Python 3.9.7
```
cc @mruberry
| 0 |
5,570 | 78,708 |
BuildExtension does not choose correct CUDA installation
|
module: cpp-extensions, module: cuda, triaged
|
### π Describe the bug
When trying to build CUDA extensions via the `BuildExtension` class (`torch.utils.cpp_extension.BuildExtension`), if multiple CUDA versions are installed, the build might fail. This is specifically due to [`_find_cuda_home()`](https://github.com/pytorch/pytorch/blob/619db911658e7f06c34e4198f7d0c9d4d629dd4a/torch/utils/cpp_extension.py#L81) choosing the first CUDA installation it can find, regardless of its compatibility with the current PyTorch version. This then causes the check at [`_check_cuda_version()`](https://github.com/pytorch/pytorch/blob/619db911658e7f06c34e4198f7d0c9d4d629dd4a/torch/utils/cpp_extension.py#L800) to error out, if the wrong path was selected automatically, regardless of whether a compatible CUDA version is available on the system.
In my specific case, there are installations of both CUDA 10.1 and various CUDA 11.x versions present on a Ubuntu machine. `nvcc` points to the 10.1 install, while the one at `/usr/local/cuda` is a compatible version. The logic of `_find_cuda_home()` checks the location of `nvcc` first and only uses `usr/local/cuda` as a fallback, giving me the following error:
```
Traceback (most recent call last):
File "setup.py", line 20, in <module>
setup(
File "[...]/env/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "[...]/env/lib/python3.8/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "[...]/env/lib/python3.8/site-packages/setuptools/command/install.py", line 74, in run
self.do_egg_install()
File "[...]/env/lib/python3.8/site-packages/setuptools/command/install.py", line 123, in do_egg_install
self.run_command('bdist_egg')
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "[...]/env/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 165, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "[...]/env/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 151, in call_command
self.run_command(cmdname)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "[...]/env/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "[...]/env/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "[...]/env/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "[...]/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 410, in build_extensions
self._check_cuda_version()
File "[...]/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 787, in _check_cuda_version
raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (10.1) mismatches the version that was used to compile
PyTorch (11.3). Please make sure to use the same CUDA versions.
```
This can of course be worked around by manually setting `CUDA_HOME` to the correct path, but in my mind, the fact that PyTorch detects & uses the correct CUDA installation for training & inference and only fails to use the correct one when building extensions does sound unintended to me.
Ideally in my mind, [`_find_cuda_home()`](https://github.com/pytorch/pytorch/blob/619db911658e7f06c34e4198f7d0c9d4d629dd4a/torch/utils/cpp_extension.py#L81) should be adjusted to detect the same CUDA version as used by PyTorch normally or even automatically verify version compatibility for each candidate install location instead of just choosing the first one and verifying compatibility later.
Quick addendum: this error occurs with every CUDA extension I try to build and also happens on multiple systems with different hardware & CUDA configurations, as long as there are multiple CUDA installations and the one used by `nvcc` is an incompatible one.
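As a rough sketch of what version-aware detection could look like (the search paths, regex, and overall logic are assumptions, not the actual `cpp_extension` code):
```python
import glob
import os
import re
import subprocess
from typing import Optional

import torch

def find_matching_cuda_home() -> Optional[str]:
    # Prefer the CUDA install whose nvcc major.minor matches torch.version.cuda.
    wanted = torch.version.cuda  # e.g. "11.3" for a cu113 build
    for home in sorted(set(glob.glob("/usr/local/cuda*"))):
        nvcc = os.path.join(home, "bin", "nvcc")
        if not os.path.isfile(nvcc):
            continue
        out = subprocess.run([nvcc, "--version"], capture_output=True, text=True).stdout
        match = re.search(r"release (\d+\.\d+)", out)
        if match and match.group(1) == wanted:
            return home
    return None
```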
### Versions
```
PyTorch version: 1.11.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.0
[pip3] pytorch-lightning==1.6.3
[pip3] torch==1.11.0+cu113
[pip3] torchaudio==0.11.0+cu113
[pip3] torchmetrics==0.8.2
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0+cu113
[conda] Could not collect
```
cc @malfet @zou3519 @ngimel
| 0 |
5,571 | 78,681 |
Unable to install Preview (Nightly) on M1 macOS: "Symbol not found"
|
oncall: binaries
|
### π Describe the bug
The conda installation command to install from `-c pytorch-nightly` does not recognize `torchaudio`.
```
(temp) speech-modes % conda install pytorch torchvision torchaudio -c pytorch-nightly
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- torchaudio
Current channels:
- https://conda.anaconda.org/pytorch-nightly/osx-arm64
- https://conda.anaconda.org/pytorch-nightly/noarch
- https://conda.anaconda.org/conda-forge/osx-arm64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
```
So I install with `pip`:
```
(temp) speech-modes % pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/nightly/cpu
Collecting torch
Downloading https://download.pytorch.org/whl/nightly/cpu/torch-1.13.0.dev20220601-cp39-none-macosx_11_0_arm64.whl (48.9 MB)
ββββββββββββββββββββββββββββββββββββββββ 48.9/48.9 MB 20.2 MB/s eta 0:00:00
Collecting torchvision
Downloading https://download.pytorch.org/whl/nightly/cpu/torchvision-0.14.0a0%2Bf9f721d-cp39-cp39-macosx_11_0_arm64.whl (804 kB)
ββββββββββββββββββββββββββββββββββββββββ 804.3/804.3 kB 15.0 MB/s eta 0:00:00
Collecting torchaudio
Downloading https://download.pytorch.org/whl/nightly/cpu/torchaudio-0.14.0.dev20220601-cp39-cp39-macosx_11_0_arm64.whl (2.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 2.7/2.7 MB 20.2 MB/s eta 0:00:00
Collecting typing-extensions
Using cached typing_extensions-4.2.0-py3-none-any.whl (24 kB)
Collecting numpy
Downloading numpy-1.23.0rc2-cp39-cp39-macosx_11_0_arm64.whl (13.3 MB)
ββββββββββββββββββββββββββββββββββββββββ 13.3/13.3 MB 22.5 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0
Using cached Pillow-9.1.1-cp39-cp39-macosx_11_0_arm64.whl (2.8 MB)
Collecting requests
Using cached requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting charset-normalizer~=2.0.0
Using cached charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.9-py2.py3-none-any.whl (138 kB)
Collecting idna<4,>=2.5
Using cached idna-3.3-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.5.18.1-py3-none-any.whl (155 kB)
Installing collected packages: urllib3, typing-extensions, pillow, numpy, idna, charset-normalizer, certifi, torch, requests, torchvision, torchaudio
Successfully installed certifi-2022.5.18.1 charset-normalizer-2.0.12 idna-3.3 numpy-1.23.0rc2 pillow-9.1.1 requests-2.27.1 torch-1.13.0.dev20220601 torchaudio-0.14.0.dev20220601 torchvision-0.14.0a0+f9f721d typing-extensions-4.2.0 urllib3-1.26.9
```
Running my script fails when importing `torchaudio`:
```
(temp) speech-modes % ./run.sh
Traceback (most recent call last):
File "/Users/redacted/speech-modes/speech_modes/main.py", line 8, in <module>
import torchaudio
File "/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torchaudio/__init__.py", line 1, in <module>
from torchaudio import ( # noqa: F401
File "/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torchaudio/_extension.py", line 101, in <module>
_init_extension()
File "/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torchaudio/_extension.py", line 86, in _init_extension
_load_lib("libtorchaudio")
File "/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torchaudio/_extension.py", line 51, in _load_lib
torch.ops.load_library(path)
File "/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torch/_ops.py", line 255, in load_library
ctypes.CDLL(path)
File "/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torchaudio/lib/libtorchaudio.so, 0x0006): Symbol not found: __ZN2at14RecordFunctionC1ENS_11RecordScopeEb
Referenced from: /opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torchaudio/lib/libtorchaudio.so
Expected in: /opt/homebrew/Caskroom/miniforge/base/envs/temp/lib/python3.9/site-packages/torch/lib/libtorch_cpu.dylib
```
### Versions
```
(temp) speech-modes % python collect_env.py
Collecting environment information...
PyTorch version: 1.13.0.dev20220601
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.4 (arm64)
GCC version: Could not collect
Clang version: 12.0.5 (clang-1205.0.22.9)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 17:01:00) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-12.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.0rc2
[pip3] torch==1.13.0.dev20220601
[pip3] torchaudio==0.14.0.dev20220601
[pip3] torchvision==0.14.0a0+f9f721d
[conda] numpy 1.23.0rc2 pypi_0 pypi
[conda] torch 1.13.0.dev20220601 pypi_0 pypi
[conda] torchaudio 0.14.0.dev20220601 pypi_0 pypi
[conda] torchvision 0.14.0a0+f9f721d pypi_0 pypi
```
cc @ezyang @seemethere @malfet
| 4 |
5,572 | 78,656 |
Allow batch_norm_backward_elemt and batch_norm_gather_stats_with_counts handle 0 counts
|
triaged, enhancement, module: norms and normalization
|
### π The feature, motivation and pitch
#36530 requested empty batch support for SyncBatchNorm, which was later added in #74944. However, the fix in #74944 broke CUDA graph capturing, as the new code introduced a sync on the CPU. See #78549.
For v1.12, the temporary fix is to guard the sync in new code with [is_current_stream_capturing](https://pytorch.org/docs/master/generated/torch.cuda.is_current_stream_capturing.html?highlight=is_current_stream_capturing#torch.cuda.is_current_stream_capturing). A better long-term fix would be updating the CUDA kernels for `batch_norm_backward_elemt` and `batch_norm_gather_stats_with_counts` to support zero counts, so that empty batch can work with CUDA graph capturing as well.
| 0 |
5,573 | 78,638 |
torch.distributed.init_process_group(backend="nccl") NCCL version error
|
oncall: distributed
|
### π Describe the bug
Initializing torch distributed with NCCL backend:
```
import torch
torch.distributed.init_process_group(backend="nccl")
```
Leads to the error of:
```
Traceback (most recent call last):
File "main_task_caption.py", line 24, in <module>
torch.distributed.init_process_group(backend="nccl")
File "/shared/nas/data/users/yifung2/envs/py_univl/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
barrier()
File "/shared/nas/data/users/yifung2/envs/py_univl/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1607370120218/work/torch/lib/c10d/ProcessGroupNCCL.cpp:748, internal error, NCCL version 2.7.8
```
How should I handle such an issue? Pointers greatly appreciated
### Versions
python=3.6.9
conda install pytorch==1.11.0 cudatoolkit=11.0 -c pytorch
NCCL version 2.7.8
NVIDIA-SMI 470.103.01 Driver Version: 470.103.01 CUDA Version: 11.0 (v11.0.221)
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 2 |
5,574 | 78,634 |
Debug job does not build in debug mode
|
module: ci, triaged
|
example job: https://github.com/pytorch/pytorch/runs/6689824615?check_suite_focus=true
Searching DCMAKE_BUILD_TYPE, you find `Release` instead of the expected `Debug`
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 2 |
5,575 | 78,624 |
linalg.pinv_singular tests are slow
|
module: autograd, module: tests, triaged, module: linear algebra
|
### π Describe the bug
More context in https://github.com/pytorch/pytorch/issues/78620
In https://github.com/pytorch/pytorch/pull/78623, we skipped the slowest one (was taking almost 2 hours to run!), but there are still a couple more tests that run for that op that are quite slow. Recently their runtime seemed to have doubled as well (maybe due to https://github.com/pytorch/pytorch/pull/74521).
linalg.pinv_singular tests (previously) combine to represent almost all of slow gradcheck CI runtime (for the first shard) - ~2.6 hours total
### Versions
main
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mruberry @jianyuh @walterddr @IvanYashchuk @xwang233
| 0 |
5,576 | 78,618 |
Module parameters/submodules can be shadowed by class attributes silently
|
module: nn, triaged, actionable
|
### π Describe the bug
```python
import torch
class Bar(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.attr1 = torch.nn.ReLU()

    def attr1(self):
        return "on the class"
b = Bar()
print(b.attr1())
# prints "on the class"
```
This happens because the class `__dict__` is checked before calling the custom `__getattr__` that `nn.Module` relies on to properly return Parameters and Modules.
We are able to detect such a case when the parameter/submodule is being registered, and we should raise an error in that case.
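For illustration, a minimal sketch of such a check (where exactly it would hook in - e.g. during registration in `__setattr__` or the `register_*` methods - is an assumption):
```python
def _check_not_shadowed(module, name: str) -> None:
    # Raise if a class attribute (e.g. a method) would shadow the
    # parameter/submodule that is about to be registered under `name`.
    if hasattr(type(module), name):
        raise AttributeError(
            f"'{type(module).__name__}' already defines '{name}' on the class; "
            f"a parameter/submodule registered under this name would be shadowed"
        )
```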
### Versions
master
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 1 |
5,577 | 78,606 |
[FSDP] Enhance sync_module_states for auto wrapping
|
oncall: distributed, better-engineering, module: fsdp
|
### π The feature, motivation and pitch
Currently `sync_module_states` syncs state one wrapped layer at a time. This makes sense for the `wrap()` API, where the root FSDP instance is not known at construction time.
However, when using `auto_wrap_policy`, we can do a single sync at the root as opposed to layer by layer, possibly saving some overhead.
### Alternatives
_No response_
### Additional context
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501
| 0 |
5,578 | 78,605 |
torch.svd_lowrank fails for complex matrices
|
triaged, module: complex, module: linear algebra
|
The reason is that the function get_floating_dtype from _linalg_utils returns torch.float32 on complex input instead of torch.cfloat or torch.cdouble as appropriate. This is in version 1.11.0.
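A minimal reproduction along these lines (shapes and rank are arbitrary):
```python
import torch

a = torch.randn(6, 4, dtype=torch.cfloat)
# Reported to fail on 1.11.0 because intermediates are created as float32
# instead of a complex dtype.
torch.svd_lowrank(a, q=2)
```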
cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @jianyuh @pearu @walterddr @IvanYashchuk @xwang233
| 1 |
5,579 | 78,581 |
RFC: Improve the performance and usability of linear algebra on CUDA devices
|
module: cuda, triaged, module: linear algebra, module: magma
|
### π The feature, motivation and pitch
Currently, the `torch.linalg` (https://pytorch.org/docs/stable/linalg.html) package provides linear algebra functionalities in pytorch. The CUDA backend is supported by cuSOLVER and MAGMA libraries.
For now, linear algebra operators in pytorch are implemented in either cuSOLVER or MAGMA, or both. Users can use
```python
torch.backends.cuda.preferred_linalg_library(backend='cusolver')
```
to prefer one of the two backends. Available options (python `str`) are `default` (using heuristics), `cusolver`, or `magma`. See doc for details https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.preferred_linalg_library.
However, libraries have limitations and heuristics can't be perfect on all devices, library versions, input batch sizes, and input shapes. We'd like to collect user feedbacks and feature requests on the performance and usability of pytorch linear algebra on CUDA devices. Please leave a comment if you have any suggestions. Thank you!
### Alternatives
_No response_
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @ptrblck @ngimel
| 2 |
5,580 | 78,559 |
[JIT] autodiff implementation of rand_like function is outdated
|
oncall: jit
|
### π Describe the bug
The autodiff implementation of `rand_like` in symbolic_script.cpp is outdated: it doesn't match the updated function signature in the config file.
The function signature in symbolic_script.cpp is currently:
`def rand_like(self, *, memory_format: Optional[int])`
which is incorrect.
### Versions
N/A
| 0 |
5,581 | 78,530 |
LibTorch cannot be used without nvcc
|
module: build, triaged
|
### π Describe the bug
When using LibTorch in CMake via `find_package(Torch REQUIRED)` it tries to find CUDA via `nvcc`. When only the runtime libraries are installed, e.g. `sudo apt install cuda-libraries-11-3` then it will fail with:
```
CUDA_TOOLKIT_ROOT_DIR not found or specified
-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
CMake Warning at [...]/cmake/Caffe2/public/cuda.cmake:31 (message):
Caffe2: CUDA cannot be found. Depending on whether you are building Caffe2
or a Caffe2 dependent library, the next warning / error will give you more
info.
```
Since LibTorch is already compiled, it should not require `nvcc` and other development libraries to be installed.
Looking at `Caffe2/public/cuda.cmake` and https://github.com/pytorch/pytorch/blob/v1.11.0/cmake/public/cuda.cmake#L29, it seems that the [deprecated `find_package(CUDA)`](https://cmake.org/cmake/help/latest/module/FindCUDA.html) is used. Instead, the new [`FindCUDAToolkit`](https://cmake.org/cmake/help/latest/module/FindCUDAToolkit.html#module:FindCUDAToolkit) should be used. This will work without `nvcc` installed.
### Versions
1.11 from https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.11.0%2Bcu113.zip
cc @malfet @seemethere
| 2 |
5,582 | 78,519 |
test_python_dispatch fails on DEBUG=1
|
triaged, module: dispatch
|
### π Describe the bug
`python test/test_python_dispatch.py`
```
ERROR: test_produce_real_type (__main__.TestPythonDispatch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test/test_python_dispatch.py", line 371, in test_produce_real_type
x[:, 1].contiguous(memory_format=torch.contiguous_format) # optional memory format
RuntimeError: self__storage_saved.value().is_alias_of(result.storage()) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_4.cpp":9776, please report a bug to PyTorch.
======================================================================
ERROR: test_extend_library_with_dispatch_key_arg (__main__.TestPythonRegistration)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test/test_python_dispatch.py", line 219, in test_extend_library_with_dispatch_key_arg
self.assertEqual(torch.sum(x), x)
RuntimeError: result.storage().use_count() == 1 INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_2.cpp":13157, please report a bug to PyTorch. function: sum
======================================================================
ERROR: test_override_cpu_sum (__main__.TestPythonRegistration)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test/test_python_dispatch.py", line 79, in test_override_cpu_sum
self.assertEqual(torch.sum(x), x)
RuntimeError: result.storage().use_count() == 1 INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_2.cpp":13157, please report a bug to PyTorch. function: sum
======================================================================
FAIL: test_none_wrapping (__main__.TestPythonDispatch)
----------------------------------------------------------------------
RuntimeError: self__storage_saved.value().is_alias_of(result.storage()) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_3.cpp":5047, please report a bug to PyTorch.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test/test_python_dispatch.py", line 1072, in test_none_wrapping
out.backward()
AssertionError: "but got None" does not match "self__storage_saved.value().is_alias_of(result.storage()) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_3.cpp":5047, please report a bug to PyTorch. "
======================================================================
FAIL: test_subclass_creation (__main__.TestPythonDispatch)
----------------------------------------------------------------------
RuntimeError: self__storage_saved.value().is_alias_of(result.storage()) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_1.cpp":2542, please report a bug to PyTorch.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test/test_python_dispatch.py", line 563, in test_subclass_creation
b = LoggingTensor(torch.rand(2)).as_subclass(Foo)
AssertionError: "subclass Foo but.*already associated to a python object of type LoggingTensor" does not match "self__storage_saved.value().is_alias_of(result.storage()) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/generated/VariableType_1.cpp":2542, please report a bug to PyTorch. "
```
### Versions
master
| 7 |
5,583 | 78,518 |
Exponentiating floating number with cuda tensor is slow
|
module: cuda, triaged, topic: performance
|
### π Describe the bug
Exponentiating a floating-point number by a CUDA tensor is slower than doing the same operation on CPU (even with NumPy).
The dtype of the tensor does not seem to impact the results significantly.
Perhaps this is expected, since it isn't an operation one would expect to benefit from running on CUDA, but the difference in performance is still quite significant.
Strangely, changing `a ** b` to `(a.log() * b).exp()` seems to bring quite a speedup.
```python
import torch
import timeit
import math
if __name__ == "__main__":
    steps = torch.randint(5, (512,))
    gamma = 0.99
    print("numpy")
    steps_numpy = steps.numpy()
    print(timeit.timeit("gamma ** steps_numpy", globals={"steps_numpy": steps_numpy, "gamma": gamma}))
    print("torch, cpu")
    print(timeit.timeit("gamma ** steps", globals={"steps": steps, "gamma": gamma}))
    print("torch, cpu, float")
    steps_float = steps.float()
    print(timeit.timeit("gamma ** steps_float", globals={"steps_float": steps_float, "gamma": gamma}))
    print("torch, cpu, int8")
    steps_int8 = steps.to(torch.int8)
    print(timeit.timeit("gamma ** steps_int8", globals={"steps_int8": steps_int8, "gamma": gamma}))
    print("torch, cuda")
    steps_cuda = steps.cuda()
    print(timeit.timeit("gamma ** steps_cuda", globals={"steps_cuda": steps_cuda, "gamma": gamma}))
    print("torch, cuda, float")
    steps_cuda_float = steps.cuda().float()
    print(timeit.timeit("gamma ** steps_cuda_float", globals={"steps_cuda_float": steps_cuda_float, "gamma": gamma}))
    print("torch, cuda, int8")
    steps_cuda_int8 = steps.cuda().to(torch.int8)
    print(timeit.timeit("gamma ** steps_cuda_int8", globals={"steps_cuda_int8": steps_cuda_int8, "gamma": gamma}))
    print("torch, cuda, log")
    print(timeit.timeit("(math.log(gamma) * steps_cuda).exp()", globals={"steps_cuda": steps_cuda, "gamma": gamma, "torch": torch, "math": math}))
    print("torch, cuda, log, int8")
    print(timeit.timeit("(math.log(gamma) * steps_cuda_int8).exp()", globals={"steps_cuda_int8": steps_cuda_int8, "gamma": gamma, "torch": torch, "math": math}))
```
Heres' the log on a cluster with recent nvidia gpus:
```
numpy
15.628631395753473
torch, cpu
12.830785953439772
torch, cpu, float
9.56215474428609
torch, cpu, int8
12.628445492126048
torch, cuda
31.448247560299933
torch, cuda, float
30.8037648932077
torch, cuda, int8
31.459348308388144
torch, cuda, log
13.928261326625943
torch, cuda, log, int8
13.960974802728742
```
### Versions
```
PyTorch version: 1.13.0.dev20220526
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.22.3
Libc version: glibc-2.27
Python version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-1069-aws-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.3.0a0+9fa0265
[pip3] numpy==1.22.4
[pip3] torch==1.13.0.dev20220526
[pip3] torchaudio==0.12.0.dev20220529
[pip3] torchrl==0.1
[pip3] torchvision==0.14.0.dev20220529
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.3.0a0+9fa0265 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
[conda] mkl_fft 1.3.1 py39h0c7bc48_1 conda-forge
[conda] mkl_random 1.2.2 py39hde0f152_0 conda-forge
[conda] numpy 1.22.4 pypi_0 pypi
[conda] numpy-base 1.22.3 py39hf524024_0
[conda] pytorch 1.13.0.dev20220526 py3.9_cuda11.3_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] tensorflow 2.4.1 mkl_py39h4683426_0
[conda] tensorflow-base 2.4.1 mkl_py39h43e0292_0
[conda] torchaudio 0.12.0.dev20220529 py39_cu113 pytorch-nightly
[conda] torchrl 0.1 dev_0 <develop>
[conda] torchvision 0.14.0.dev20220529 py39_cu113 pytorch-nightly
```
cc @ngimel
| 0 |
5,584 | 78,513 |
clear input shape declaration on pytorch model inputs and outputs
|
triaged, module: shape checking, module: python frontend
|
### π The feature, motivation and pitch
It is hard to read a new PyTorch project compared to Keras-like projects. Each deep-learning model can be seen as an implementation of a certain function, with well-defined inputs and outputs, but there is no clear declaration of the shape and dtype of a model's inputs and outputs in PyTorch projects.
### Alternatives
Alternatively, provide some convenient tools to inspect the input and output shapes and dtypes of any PyTorch model - before or after training. A sketch of such a tool is shown below.
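As a sketch of what such a tool could look like today using forward hooks (the helper name is hypothetical, not an existing API):
```python
import torch
import torch.nn as nn

def print_io_shapes(model: nn.Module, example_input: torch.Tensor) -> None:
    # Print the input/output shapes of every leaf module for one forward pass.
    hooks = []

    def hook(mod, inputs, output):
        in_shapes = [tuple(t.shape) for t in inputs if isinstance(t, torch.Tensor)]
        out_shape = tuple(output.shape) if isinstance(output, torch.Tensor) else "?"
        print(f"{mod.__class__.__name__}: {in_shapes} -> {out_shape}")

    for m in model.modules():
        if len(list(m.children())) == 0:  # leaf modules only
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(example_input)
    for h in hooks:
        h.remove()

print_io_shapes(nn.Sequential(nn.Linear(8, 4), nn.ReLU()), torch.randn(2, 8))
```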
### Additional context
_No response_
| 0 |
5,585 | 78,507 |
Parallel execution of multiple unrelated statements written sequentially
|
triaged, enhancement
|
### π The feature, motivation and pitch
Often, I write statements in a sequential manner that do not relate to each other - the order of their execution does not matter. Such statements can be parallelized to achieve better performance. One example: resize three different images using nearest, bilinear, and bicubic:
Old method:
```python
img1 = F.interpolate(img1, mode="nearest", ...)
img2 = F.interpolate(img2, mode="bilinear", ...)
img3 = F.interpolate(img3, mode="bicubic", ...)
```
New method:
```python
img1, img2, img3 = parallel_api_fn([
    {
        "function": F.interpolate,
        "args": {"inputs": img1, "mode": "nearest"}
    },
    {
        "function": F.interpolate,
        "args": {"inputs": img2, "mode": "bilinear"}
    },
    {
        "function": F.interpolate,
        "args": {"inputs": img3, "mode": "bicubic"}
    }
])
Of course, the implementation mentioned above is just an example. The core idea is to take a list of functions (same or different), and their respective arguments. These functions are then executed in parallel using the new API function.
### Alternatives
Currently, the functionality can be implemented using vmap or python multiprocessing. However, each of these requires a lot of additional code and has a lot of added complexity. If we assume that every line to be executed in parallel is independent and all arguments are immutable, I believe the lines can easily be run in parallel on GPU.
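For reference, the interpolation example above can already be overlapped to some extent with CUDA streams; a minimal sketch (assumes CUDA tensors and truly independent operations):
```python
import torch
import torch.nn.functional as F

# Issue the three independent interpolations on separate CUDA streams so the
# kernels have a chance to overlap on the GPU.
imgs = [torch.randn(1, 3, 64, 64, device="cuda") for _ in range(3)]
modes = ["nearest", "bilinear", "bicubic"]
streams = [torch.cuda.Stream() for _ in range(3)]
outs = [None] * 3

torch.cuda.synchronize()
for i, (img, mode, stream) in enumerate(zip(imgs, modes, streams)):
    with torch.cuda.stream(stream):
        outs[i] = F.interpolate(img, scale_factor=2, mode=mode)
torch.cuda.synchronize()
```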
### Additional context
_No response_
cc @ngimel @bdhirsh
| 1 |
5,586 | 78,489 |
[1.9.1] [collect_env] collect_env does not collect actual runtime-loaded cudnn version
|
module: cudnn, module: collect_env.py, triaged, enhancement
|
### π Describe the bug
In contrast, `torch.backends.cudnn.version()` does collect it correctly.
Originally reported by @grazder in a comment: https://github.com/pytorch/pytorch/issues/78475#issuecomment-1140516737
This occurred on old-ish version of 1.9.1, not sure if it still happens on 1.11.0
### Versions
1.9.1
cc @csarofeen @ptrblck @xwang233
| 6 |
5,587 | 99,719 |
New feature requested: vmap for torch.histc
|
high priority, triaged, module: functorch
|
Hi,
I hit the following warning
```
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::histc. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /__w/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)
h = torch.histc(x, bins=n_buckets, min=min_dist, max=max_dist)
```
Version
```
>>> functorch.__version__
'0.1.1'
```
Thanks
cc @ezyang @gchanan @zou3519 @Chillee @samdow @kshitij12345 @janeyx99 @soumith
| 10 |
5,588 | 78,487 |
torch.fx: symbolic_trace: ones() received an invalid combination of arguments
|
triaged, module: fx
|
### π Describe the bug
Run the following standalone python file.
```
import torch
from torch.fx import symbolic_trace
from transformers import AutoModelForSequenceClassification
class OnlyLogitsHuggingFaceModel(torch.nn.Module):
    """Wrapper that returns only the logits from a HuggingFace model."""

    def __init__(self):
        super().__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(
            "ProsusAI/finbert",  # The pretrained model name.
            num_labels=3,
            output_attentions=False,
            output_hidden_states=False,
            torchscript=True,
        )
        self.model.eval()

    def forward(self, input):
        # Return only the logits.
        return self.model(input)[0]
traced = symbolic_trace(OnlyLogitsHuggingFaceModel())
```
I get the error:
```
TypeError: ones() received an invalid combination of arguments - got (tuple, device=Attribute), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```
Full error log: https://gist.github.com/silvasean/a11f5219e8d931014ae4046a1fafcef7
### Versions
PyTorch version: 1.13.0.dev20220530+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 11.2.0-19) 11.2.0
Clang version: 13.0.1-3+build2
CMake version: version 3.22.4
Libc version: glibc-2.33
Python version: 3.9.12 (main, Mar 24 2022, 13:02:21) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.17.6-1rodete1-amd64-x86_64-with-glibc2.33
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] iree-torch==0.0.1
[pip3] numpy==1.23.0rc1
[pip3] torch==1.13.0.dev20220530+cpu
[pip3] torch-mlir==20220530.482
[pip3] torchvision==0.14.0.dev20220530+cpu
[conda] Could not collect
cc @ezyang @SherlockNoMad
| 2 |
5,589 | 78,486 |
Exception in torch.jit.script doesn't indicate where in the code the problem lies.
|
oncall: jit
|
### π Describe the bug
In the following exception caused by
```
RuntimeError: Attempted to use Dict without contained types. Please add contained type, e.g. Dict[int, int]
```
there's no indication of where in the model/code the problem lies, only that there's a Dict without contained types *somewhere*:
```python
In [6]: scripted_model = run_trace(S,args)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 scripted_model = run_trace(S,args)
File ~/repos/Autosensor/NN/deploy/trace_model.py:35, in run_trace(S, args)
33 with torch.inference_mode():
34 if args.script:
---> 35 traced_module = torch.jit.script(S.model)
36 else:
37 example = torch.rand(1, 3, *S.model.input_size).to(S.device)
File ~/.local/lib/python3.10/site-packages/torch/jit/_script.py:1265, in script(obj, optimize, _frames_up, _rcb, example_inputs)
1263 if isinstance(obj, torch.nn.Module):
1264 obj = call_prepare_scriptable_func(obj)
-> 1265 return torch.jit._recursive.create_script_module(
1266 obj, torch.jit._recursive.infer_methods_to_compile
1267 )
1269 if isinstance(obj, dict):
1270 return create_script_dict(obj)
File ~/.local/lib/python3.10/site-packages/torch/jit/_recursive.py:454, in create_script_module(nn_module, stubs_fn, share_types, is_tracing)
452 if not is_tracing:
453 AttributeTypeIsSupportedChecker().check(nn_module)
--> 454 return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File ~/.local/lib/python3.10/site-packages/torch/jit/_recursive.py:520, in create_script_module_impl(nn_module, concrete_type, stubs_fn)
518 # Compile methods if necessary
519 if concrete_type not in concrete_type_store.methods_compiled:
--> 520 create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
521 # Create hooks after methods to ensure no name collisions between hooks and methods.
522 # If done before, hooks can overshadow methods that aren't exported.
523 create_hooks_from_stubs(concrete_type, hook_stubs, pre_hook_stubs)
File ~/.local/lib/python3.10/site-packages/torch/jit/_recursive.py:371, in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
368 property_defs = [p.def_ for p in property_stubs]
369 property_rcbs = [p.resolution_callback for p in property_stubs]
--> 371 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File ~/.local/lib/python3.10/site-packages/torch/jit/annotations.py:350, in try_ann_to_type(ann, loc)
348 if a is None:
349 inner.append(NoneType.get())
--> 350 maybe_type = try_ann_to_type(a, loc)
351 msg = "Unsupported annotation {} could not be resolved because {} could not be resolved."
352 assert maybe_type, msg.format(repr(ann), repr(maybe_type))
File ~/.local/lib/python3.10/site-packages/torch/jit/annotations.py:321, in try_ann_to_type(ann, loc)
319 if elem_type:
320 return ListType(elem_type)
--> 321 if is_dict(ann):
322 key = try_ann_to_type(ann.__args__[0], loc)
323 value = try_ann_to_type(ann.__args__[1], loc)
File ~/.local/lib/python3.10/site-packages/torch/_jit_internal.py:880, in is_dict(ann)
878 def is_dict(ann) -> bool:
879 if ann is Dict:
--> 880 raise_error_container_parameter_missing("Dict")
882 if not hasattr(ann, '__module__'):
883 return False
File ~/.local/lib/python3.10/site-packages/torch/_jit_internal.py:1101, in raise_error_container_parameter_missing(target_type)
1099 def raise_error_container_parameter_missing(target_type) -> None:
1100 if target_type == 'Dict':
-> 1101 raise RuntimeError(
1102 "Attempted to use Dict without "
1103 "contained types. Please add contained type, e.g. "
1104 "Dict[int, int]"
1105 )
1106 raise RuntimeError(
1107 f"Attempted to use {target_type} without a "
1108 "contained type. Please add a contained type, e.g. "
1109 f"{target_type}[int]"
1110 )
RuntimeError: Attempted to use Dict without contained types. Please add contained type, e.g. Dict[int, int]
```
It would be really helpful if the exception could indicate the location of the problematic dictionary.
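For anyone hitting the same wall, here is a minimal sketch (a hypothetical module, not taken from the model above) of the kind of annotation that lands in this code path, together with the fix of spelling out the contained types:
```python
from typing import Dict

import torch

class Broken(torch.nn.Module):
    # A bare `Dict` return annotation is enough to trigger the error above,
    # and the traceback gives no hint that this is the offending line.
    def forward(self, x: torch.Tensor) -> Dict:
        return {"out": x}

class Fixed(torch.nn.Module):
    # Spelling out the contained types makes scripting succeed.
    def forward(self, x: torch.Tensor) -> Dict[str, torch.Tensor]:
        return {"out": x}

torch.jit.script(Fixed())    # works
# torch.jit.script(Broken()) # raises the RuntimeError shown above, with no location
```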
### Versions
```
Collecting environment information...
PyTorch version: 1.11.0a0+gitbc2c6ed
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 515.43.04
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.6
[pip3] numpy-quaternion==2022.4.1
[pip3] torch==1.11.0
[pip3] torch-tensorrt==1.2.0a0+e9e824c0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.12.0
[conda] Could not collect
```
| 2 |
5,590 | 78,484 |
torch.lerp: discrepancy between CUDA and CPU (with extremal inputs)
|
triaged, module: NaNs and Infs
|
### π Describe the bug
```python
import torch
st = torch.tensor(0.2345)
en = torch.tensor(float("inf"))
weight = torch.tensor(-0.9890)
print(torch.lerp(st, en, weight)) # -inf
print(torch.lerp(st.cuda(), en.cuda(), weight.cuda())) # nan
```
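One hedged observation (an assumption about the kernels, not a statement of what either backend actually does): two algebraically equivalent lerp formulations stop agreeing once `end` is infinite, which is the kind of difference that could explain a CPU/CUDA mismatch here:
```python
# Plain-Python illustration of how equivalent lerp formulas diverge at infinity.
start, end, weight = 0.2345, float("inf"), -0.9890
print(start + weight * (end - start))        # -inf
print(end - (1.0 - weight) * (end - start))  # inf - inf -> nan
```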
### Versions
master
| 3 |
5,591 | 78,483 |
Is the issue resolved? pytorch_jni not in path on Windows
|
oncall: java
|
### π Describe the bug
I have the same question as https://github.com/pytorch/pytorch/issues/44363. Has it been resolved? How can I get the new version from Java, either as source code or as a Maven pom.xml dependency?
I have two ideas:
1. Like DJL, download the native library automatically when the service starts; or
2. Publish the full set of related packages so I can download them myself.
| 1 |
5,592 | 78,482 |
RuntimeError: Event device type CUDA does not match blocking stream's device type CPU
|
module: autograd, module: cuda, module: tests, triaged
|
### π Describe the bug
When I run the unit tests in test_autograd.py on PyTorch 1.11.0, it throws a RuntimeError: Event device type CUDA does not match blocking stream's device type CPU. I want to know what causes this error.
```
Traceback (most recent call last):
File "/root/.local/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1754, in wrapper
method(*args, **kwargs)
File "/root/.local/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1754, in wrapper
method(*args, **kwargs)
File "/root/.local/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 389, in instantiated_test
raise rte
File "/root/.local/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
result = test(self, **param_kwargs)
File "/root/.local/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 939, in multi_fn
return fn(slf, devices, *args, **kwargs)
File "pytorch/test/test_autograd.py", line 8644, in test_backward_device
Identity.apply(v).backward()
File "/root/.local/lib/python3.7/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/root/.local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: Event device type CUDA does not match blocking stream's device type CPU.
```
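For readers without the test file at hand, here is a rough, hedged reconstruction of what `test_backward_device` boils down to, based on the traceback above (the real test also asserts the device inside `backward`; this sketch only reproduces the call pattern):
```python
import torch

class Identity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

v = torch.randn(3, device="cuda", requires_grad=True)
Identity.apply(v).backward()  # this is the call that hits the event/stream mismatch
```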
### Versions
PyTorch version: 1.11.0a0
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 4.3.22211
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
Clang version: 14.0.0
CMake version: version 2.8.12.2
Is CUDA available: True
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @ngimel @mruberry
| 5 |
5,593 | 78,481 |
[onnx] RuntimeError: Attribute 'axes' is expected to have field 'ints'
|
module: onnx, triaged, onnx-needs-info
|
### π Describe the bug
RuntimeError: Attribute 'axes' is expected to have field 'ints'
When I try to export a Transformer model containing the proj_adaptive_softmax layer to the ONNX format, I get an error: RuntimeError: Attribute 'axes' is expected to have field 'ints', ==> Context: Bad node spec: input: "567" output: " 584" name: "Unsqueeze_445" op_type: "Unsqueeze" attribute { name: "axes" type: INTS }
This suggests that `axes` must be a constant list of ints. I traced it to the following line of code:
```
nll.index_copy_(0, indices_i, -logprob_i)
```
This implies `indices_i` must be an integer tensor, but `indices_i` is already an int64 tensor. However, when I replace it with a constant tensor equal to `indices_i`, no error is reported:
```
# nll.index_copy_(0, indices_i, -logprob_i)
a = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
                  18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35])
nll.index_copy_(0, a, -logprob_i)
print(indices_i == a)
# prints a tensor of all True values
```
Why is this?
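One plausible lead (hedged, not a confirmed diagnosis): in opsets below 13, ONNX `Unsqueeze` takes `axes` as an `ints` attribute, so the axes must be constant at export time; from opset 13 onward `axes` is a tensor input, which tolerates values that only become constant after folding. If your torch/onnx versions support it (opset 13 export may require a torch release newer than 1.7.1), re-exporting with a newer opset may sidestep the checker complaint. `model`, `example_inputs`, and the output path below are placeholders:
```python
# Hedged sketch: re-export with opset 13, where Unsqueeze's axes is an input.
torch.onnx.export(
    model,              # the transformer with the adaptive softmax head
    example_inputs,     # placeholder for your real example inputs
    "transformer_xl.onnx",
    opset_version=13,
)
```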
### Versions
Pytorch Version == 1.7.1+cu101
OS == Linux
onnx == 1.10.2
CUDA Version == 10.1
| 4 |
5,594 | 78,475 |
`with torch.backends.cudnn.flags(deterministic=True)` doesn't give an exception for ctc_loss backward on CUDA
|
module: cudnn, triaged, module: determinism
|
### π Describe the bug
Hi! I noticed some strange behavior of `with torch.backends.cudnn.flags(deterministic=True)` for CTC loss backward on CUDA.
The main problem is that `with torch.backends.cudnn.flags(deterministic=True)` doesn't raise an exception while `torch.use_deterministic_algorithms(True)` does.
## Reproduction
I made two scripts to reproduce the bug.
### torch.use_deterministic_algorithms(True) - Exception
```
import torch
from torch import nn
import torch.nn.functional as F
torch.use_deterministic_algorithms(True)
device = torch.device('cuda')
T = 50 # Input sequence length
C = 20 # Number of classes (including blank)
N = 16 # Batch size
S = 30 # Target sequence length of longest target in batch (padding length)
S_min = 10 # Minimum target length, for demonstration purposes
# Initialize random batch of input vectors, for *size = (T,N,C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_().to(device)
# Initialize random batch of targets (0 = blank, 1:C = classes)
target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long).to(device)
input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long).to(device)
target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long).to(device)
loss = F.ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()
```
And I get the following exception:
```
RuntimeError: ctc_loss_backward_gpu does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
```
### with torch.backends.cudnn.flags(deterministic=True) - No exception
```
import torch
import torch.nn.functional as F
device = torch.device('cuda')
T = 50 # Input sequence length
C = 20 # Number of classes (including blank)
N = 16 # Batch size
S = 30 # Target sequence length of longest target in batch (padding length)
S_min = 10 # Minimum target length, for demonstration purposes
# Initialize random batch of input vectors, for *size = (T,N,C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_().to(device)
# Initialize random batch of targets (0 = blank, 1:C = classes)
target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long).to(device)
input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long).to(device)
target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long).to(device)
with torch.backends.cudnn.flags(deterministic=True):
loss = F.ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()
```
And this script runs without exceptions, which I find strange because I am doing the same thing. I also tried it on Colab and got two exceptions.
Example of usage - https://github.com/espnet/espnet/blob/5fa6dcc4e649dc66397c629d0030d09ecef36b80/espnet/nets/pytorch_backend/ctc.py#L65
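A hedged note on why the two switches may legitimately differ (an editorial assumption, to be confirmed by the devs): `torch.backends.cudnn.flags(deterministic=True)` only constrains algorithm selection inside cuDNN, while `torch.use_deterministic_algorithms(True)` installs a global check that also covers native CUDA kernels such as `ctc_loss_backward_gpu`, which is the kernel named in the exception above:
```python
# The two knobs are not equivalent:
torch.backends.cudnn.deterministic = True   # cuDNN-only; silent for non-cuDNN kernels
torch.use_deterministic_algorithms(True)    # global check; raises for ctc_loss backward on CUDA
```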
### Versions
```
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.1 (default, Dec 11 2020, 14:32:07) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 470.82.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.2
[pip3] pytorch-lightning==1.5.4
[pip3] torch==1.9.0
[pip3] torch-stft==0.1.4
[pip3] torchaudio==0.9.0a0+33b2469
[pip3] torchmetrics==0.6.0
[pip3] torchvision==0.10.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py39h27cfd23_1
[conda] mkl_fft 1.3.0 py39h42c9631_2
[conda] mkl_random 1.2.1 py39ha9443f7_2
[conda] numpy 1.20.2 py39h2d18471_0
[conda] numpy-base 1.20.2 py39hfae3a4d_0
[conda] pytorch 1.9.0 py3.9_cuda11.1_cudnn8.0.5_0 pytorch
[conda] pytorch-lightning 1.5.4 pypi_0 pypi
[conda] torch-stft 0.1.4 pypi_0 pypi
[conda] torchaudio 0.9.0 py39 pytorch
[conda] torchmetrics 0.6.0 pypi_0 pypi
[conda] torchvision 0.10.0 py39_cu111 pytorch
```
`torch.backends.cudnn.version()` - `8005`
cc @csarofeen @ptrblck @xwang233 @mruberry @kurtamohler
| 21 |
5,595 | 78,450 |
Softmax, LogSoftmax are over parameterized
|
module: nn, triaged
|
### π The feature, motivation and pitch
The current implementation of Softmax and LogSoftmax for a K-class output takes an [N x K] input tensor. This is over-parameterized and leads to identifiability issues. It would be worthwhile to include an implementation that only requires an input tensor of [N x (K-1)] dimensionality.
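Until something like this exists upstream, here is a rough user-space sketch of the requested behaviour (an interpretation of the proposal, not an existing PyTorch API): pin the K-th class logit to 0 so only K-1 free parameters per row are needed.
```python
import torch
import torch.nn.functional as F

def softmax_km1(logits_km1: torch.Tensor) -> torch.Tensor:
    """Softmax over K classes from an [N, K-1] tensor, with class K's logit fixed to 0."""
    zeros = logits_km1.new_zeros(logits_km1.shape[:-1] + (1,))
    return F.softmax(torch.cat([logits_km1, zeros], dim=-1), dim=-1)

x = torch.randn(4, 19)        # N=4, K-1=19
print(softmax_km1(x).shape)   # torch.Size([4, 20]) -- probabilities over K=20 classes
```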
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
| 1 |
5,596 | 78,444 |
`layer_norm` triggers INTERNAL ASSERT with input requiring grad + zero-size int tensor
|
module: autograd, triaged, actionable
|
### π Describe the bug
`layer_norm` triggers INTERNAL ASSERT with input requiring grad + zero-size int tensor
```python
import torch
input = torch.randint(0, 8, [0, 0, 1024], dtype=torch.int64)
normalized_shape = [1024]
eps = 1e-05
weight = torch.rand([1024], dtype=torch.float64, requires_grad=True)
bias = torch.rand([1024], dtype=torch.float64, requires_grad=True)
torch.nn.functional.layer_norm(input, normalized_shape, weight=weight, bias=bias, eps=eps, )
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch
```
`layer_norm` appears to skip the dtype check when `input` is zero-size, as in this example. If `input` is not zero-size, it raises `RuntimeError: "LayerNormKernelImpl" not implemented for 'Long'` instead.
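For contrast, the non-empty case described above (reusing `weight`, `bias`, and `eps` from the snippet) takes the expected error path instead of the internal assert:
```python
torch.nn.functional.layer_norm(
    torch.randint(0, 8, [2, 3, 1024], dtype=torch.int64),
    [1024], weight=weight, bias=bias, eps=eps)
# RuntimeError: "LayerNormKernelImpl" not implemented for 'Long'
```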
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,597 | 78,443 |
`index_fill` will trigger INTERNAL ASSERT when float tensor requiring grad + int tensor
|
module: autograd, triaged, actionable
|
### π Describe the bug
`index_fill` will trigger INTERNAL ASSERT when float tensor requiring grad + int tensor
```python
import torch
value = torch.rand([], dtype=torch.float64, requires_grad=True)
input = torch.randint(-2, 8, [], dtype=torch.int64)
index = torch.randint(-1, 1, [], dtype=torch.int64)
dim = 0
input.index_fill(dim, index, value)
# RuntimeError: isDifferentiableType(variable.scalar_type())INTERNAL ASSERT FAILED at "/Users/distiller/project/pytorch/torch/csrc/autograd/functions/utils.h":65, please report a bug to PyTorch.
```
### Versions
pytorch: 1.11.0
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 0 |
5,598 | 78,435 |
fx.Tracer with param_shapes_constant=True not working for RobertaForMaskedLM
|
triaged, module: fx
|
### π Describe the bug
Setting param_shapes_constant=True does not bypass the issue mentioned by @jamesr66a in
https://github.com/pytorch/pytorch/issues/61733
```python
from transformers import RobertaForMaskedLM
from transformers import RobertaConfig
import torch.fx as fx
import inspect
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=12,
type_vocab_size=1,
)
model = RobertaForMaskedLM(config)
input_names = ["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask",'token_type_ids']
sig = inspect.signature(model.forward)
concrete_args = {p.name: None for p in sig.parameters.values() if p.name not in input_names}
tracer = fx.Tracer(param_shapes_constant=True)
graph = tracer.trace(model, concrete_args)
```
Error Message:
```
---------------------------------------------------------------------------
TraceError Traceback (most recent call last)
Input In [20], in <cell line: 2>()
1 tracer = fx.Tracer(param_shapes_constant=True)
----> 2 graph = tracer.trace(model, concrete_args)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py:615, in Tracer.trace(self, root, concrete_args)
613 for module in self._autowrap_search:
614 _autowrap_check(patcher, module.__dict__, self._autowrap_function_ids)
--> 615 self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
616 type_expr=fn.__annotations__.get('return', None))
618 self.submodule_paths = None
620 return self.graph
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:1098, in RobertaForMaskedLM.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, output_attentions, output_hidden_states, return_dict)
1088 r"""
1089 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1090 Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
(...)
1094 Used to hide legacy arguments that have been deprecated.
1095 """
1096 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1098 outputs = self.roberta(
1099 input_ids,
1100 attention_mask=attention_mask,
1101 token_type_ids=token_type_ids,
1102 position_ids=position_ids,
1103 head_mask=head_mask,
1104 inputs_embeds=inputs_embeds,
1105 encoder_hidden_states=encoder_hidden_states,
1106 encoder_attention_mask=encoder_attention_mask,
1107 output_attentions=output_attentions,
1108 output_hidden_states=output_hidden_states,
1109 return_dict=return_dict,
1110 )
1111 sequence_output = outputs[0]
1112 prediction_scores = self.lm_head(sequence_output)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py:604, in Tracer.trace.<locals>.module_call_wrapper(mod, *args, **kwargs)
600 return _orig_module_call(mod, *args, **kwargs)
602 _autowrap_check(patcher, getattr(getattr(mod, "forward", mod), "__globals__", {}),
603 self._autowrap_function_ids)
--> 604 return self.call_module(mod, forward, args, kwargs)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py:422, in Tracer.call_module(self, m, forward, args, kwargs)
420 module_qualified_name = self.path_of_module(m)
421 if not self.is_leaf_module(m, module_qualified_name):
--> 422 return forward(*args, **kwargs)
423 return self.create_proxy('call_module', module_qualified_name, args, kwargs)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py:600, in Tracer.trace.<locals>.module_call_wrapper.<locals>.forward(*args, **kwargs)
599 def forward(*args, **kwargs):
--> 600 return _orig_module_call(mod, *args, **kwargs)
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:824, in RobertaModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
820 token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
822 # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
823 # ourselves in which case we just need to make it broadcastable to all heads.
--> 824 extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
826 # If a 2D or 3D attention mask is provided for the cross-attention
827 # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
828 if self.config.is_decoder and encoder_hidden_states is not None:
File ~/anaconda3/lib/python3.9/site-packages/transformers/modeling_utils.py:559, in ModuleUtilsMixin.get_extended_attention_mask(self, attention_mask, input_shape, device)
543 """
544 Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
545
(...)
555 `torch.Tensor` The extended attention mask, with a the same dtype as `attention_mask.dtype`.
556 """
557 # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
558 # ourselves in which case we just need to make it broadcastable to all heads.
--> 559 if attention_mask.dim() == 3:
560 extended_attention_mask = attention_mask[:, None, :, :]
561 elif attention_mask.dim() == 2:
562 # Provided a padding mask of dimensions [batch_size, seq_length]
563 # - if the model is a decoder, apply a causal mask in addition to the padding mask
564 # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/proxy.py:251, in Proxy.__bool__(self)
250 def __bool__(self) -> bool:
--> 251 return self.tracer.to_bool(self)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/proxy.py:152, in TracerBase.to_bool(self, obj)
145 @compatibility(is_backward_compatible=True)
146 def to_bool(self, obj: 'Proxy') -> bool:
147 """Called when a proxy object is being converted to a boolean, such as
148 when used in control flow. Normally we don't know what to do because
149 we don't know the value of the proxy, but a custom tracer can attach more
150 information to the graph node using create_node and can choose to return a value.
151 """
--> 152 raise TraceError('symbolically traced variables cannot be used as inputs to control flow')
TraceError: symbolically traced variables cannot be used as inputs to control flow
```
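A possible workaround while this is open (hedged; the API lives in the `transformers` library and its exact signature varies between releases): use the tracer that Hugging Face ships for these models, which special-cases the `attention_mask.dim()` control flow that a stock `fx.Tracer` chokes on. Note that `param_shapes_constant=True` only makes *parameter* shape accesses constant, so it cannot help with a branch on an input tensor like the attention mask.
```python
# Hedged sketch using transformers' own FX tracer instead of torch.fx.Tracer.
from transformers.utils.fx import symbolic_trace

traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(traced.graph)
```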
### Versions
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA T500
Nvidia driver version: 472.91
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] torch==1.10.2
[pip3] torchvision==0.11.3
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] pytorch 1.10.2 cpu_py39hfa7516b_0
[conda] torchvision 0.11.3 py39_cu113 pytorch
cc @ezyang @SherlockNoMad
| 0 |
5,599 | 78,422 |
Permutation of Sparse Tensor
|
module: sparse, triaged
|
### π The feature, motivation and pitch
I'm working with COO sparse tensors and would like to get a permutation of any COO sparse tensor.
In the current version, `torch.permute` throws the following error:
`RuntimeError: sparse tensors do not have strides`
Also, in-place modification of indices is not possible.
### Alternatives
I have considered the following solution (it wouldn't be the same as `torch.permute` since this one returns a view of the original tensor):
`torch.permute_sparse(input, dims) β Tensor`
Returns a copy of the original tensor input with its dimensions permuted.
- **input** (_Tensor_) β the input tensor.
- **dims** (_tuple of python:ints_) β The desired ordering of dimensions
The source code:
```
def permute_sparse(input, dims):
    dims = torch.LongTensor(dims)
    return torch.sparse_coo_tensor(indices=input._indices()[dims], values=input._values(),
                                   size=torch.Size(torch.tensor(input.size())[dims]))
```
An example:
```
>>> indices = torch.tensor([[1, 1],
...                         [2, 1],
...                         [1, 0],
...                         [3, 3]])
>>> values = torch.tensor([ 1, -1])
>>> s = torch.sparse_coo_tensor(indices=indices, values=values, size=(2, 3, 4, 5))
>>> s.size()
torch.Size([2, 3, 4, 5])
>>> permute_sparse(s, (3, 2, 0, 1)).size()
torch.Size([5, 4, 2, 3])
```
### Additional context
_No response_
cc @nikitaved @pearu @cpuhrsch @amjames
| 5 |
5,600 | 78,414 |
.lldbinit for the lldb debugger
|
feature, triaged, module: macos, actionable
|
### π The feature, motivation and pitch
I found a `.gdbinit` in the repository, but no `.lldbinit`. If more lldb features (like pretty-printing for tensors) were supported, debugging with lldb would be a much better experience.
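As a starting point, here is a heavily hedged sketch of what this could look like (the file name `pytorch_lldb.py` and the reuse of the `torch::gdb::tensor_repr` helper that the existing `.gdbinit` relies on are assumptions, not an existing PyTorch facility), loaded from `~/.lldbinit` with `command script import /path/to/pytorch_lldb.py`:
```python
# pytorch_lldb.py -- hypothetical lldb counterpart of .gdbinit's torch-tensor-repr.
import lldb

def tensor_repr(debugger, command, result, internal_dict):
    # Evaluate the same C++ helper the .gdbinit uses, in the selected frame.
    frame = (debugger.GetSelectedTarget()
                     .GetProcess()
                     .GetSelectedThread()
                     .GetSelectedFrame())
    val = frame.EvaluateExpression("torch::gdb::tensor_repr(%s)" % command)
    result.PutCString(val.GetSummary() or val.GetValue() or str(val))

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand(
        "command script add -f pytorch_lldb.tensor_repr torch-tensor-repr")
```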
### Alternatives
Nop
### Additional context
Nop
cc @malfet @albanD
| 1 |