Serial Number: int64 (1 to 6k)
Issue Number: int64 (75.6k to 112k)
Title: string (lengths 3 to 357)
Labels: string (lengths 3 to 241)
Body: string (lengths 9 to 74.5k)
Comments: int64 (0 to 867)
2,201
104,081
Distributing HSDP checkpoint writing for load balancing
oncall: distributed, module: fsdp
### 🐛 Describe the bug In HSDP, the ranks within a replication group are equivalent in terms of their model and optimizer shards. In other words, any rank in a replication group can be selected to write the checkpoint. This creates the possibility of distributing the checkpoint writing to perform better load balancing among the pods. Illustrative example: Consider a set up with 2 pods each having 8 gpus, 2 shard groups [0, 1, .., 7] and [8, 9, .., 15], and 8 replication groups [0, 8], [1, 9], ..., [7, 15]. On ranks 0-7, the shard_group is [0, 1, .., 7] and shard_group.rank() = i where i in [0, 1,..7] is the gpu rank. However, on ranks 8-15, the shard_group is [8, 9, .., 15] but the shard_group.rank() continues to be in [0, 1, .., 7]. Issue: Checkpoint writing on ranks [0, 1, .., 7] works perfectly well. However, in this case all the ranks are on pod 0. To better distribute the load among other pods, we would want to write checkpoints on ranks [8, 9, ..., 15] as well. However, trying to write the checkpoint on any of the ranks 8-15, throws an error that the ```global rank is not part of the shard group``` Pseudo code snippet with logic to distribute checkpoint writing among ranks is provided below: ``` import torch from torch.distributed.fsdp import FullyShardedDataParallel as FSDP from torch.distributed.fsdp import MixedPrecision, ShardingStrategy, from torch.distributed.fsdp.wrap import enable_wrap, transformer_auto_wrap_policy, wrap import os rank = int(os.getenv("RANK")) local_rank = int(os.getenv("LOCAL_RANK")) world_size = int(os.getenv("WORLD_SIZE")) from torch.distributed._shard.checkpoint import ( FileSystemWriter, SavePlan, save_state_dict, ) # save on ranks such that every replicate group is covered def do_save(self, rank, local_rank, shard_group, replicate_group, is_dist=False): if not is_dist: return (rank == local_rank) else: a = rank % shard_group.size() b = rank // shard_group.size() return True if a % replicate_group.size() == b else False def write_dcp(self, state_dict, process_group, save_name, rank): os.makedirs(save_name, exist_ok=True) writer = FileSystemWriter(save_name, single_file_per_rank=True) if state_dict is not None: print(f'Writing state dict on rank={rank}') save_state_dict(state_dict=state_dict, storage_writer=writer, process_group=process_group, planner=DefaultSavePlanner()) print(f'Finished writing state dict on rank={rank}') def save_dcp(self, step, model_state, optimizer_state, process_group=None, dist_hsdp_ckp_save=False, replicate_group=None, **kwargs): if self.do_save(rank, local_rank, shard_group=process_group, replicate_group=replicate_group, is_dist=dist_hsdp_ckp_save): state_dict = { 'model_state': model_state, 'optimizer_state': optimizer_state, } self.write_dcp(state_dict, process_group, save_name, rank) model_sharding_strategy = ShardingStrategy.HYBRID_SHARD mp_policy = None wrapping_policy = None class TinyModel(torch.nn.Module): def __init__(self): super(TinyModel, self).__init__() self.linear1 = torch.nn.Linear(100, 200) self.activation = torch.nn.ReLU() self.linear2 = torch.nn.Linear(200, 10) self.softmax = torch.nn.Softmax() def forward(self, x): x = self.linear1(x) x = self.activation(x) x = self.linear2(x) x = self.softmax(x) return x model = TinyModel() model = FSDP( model, auto_wrap_policy=wrapping_policy, mixed_precision=mp_policy, sharding_strategy=model_sharding_strategy, device_id=local_rank, limit_all_gathers=True, use_orig_params=True, ) optimizer = torch.optim.AdamW(model.parameters(), weight_decay=0.1, lr=0.001, betas=(0.9, 
0.999)) shard_group = model.process_group replicate_group = model._inter_node_state.process_group with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT): model_state = model.state_dict() optim_state = FSDP.sharded_optim_state_dict(model, optimizer, group=shard_group) save_dcp( step + 1, model_state, optim_state, process_group=shard_group, dist_hsdp_ckp_save=True, replicate_group=replicate_group, ) ``` ### Versions Pytorch nightlies. cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
3
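A standalone sketch of the rank-selection arithmetic from the pseudo code in the issue above (#104081). Plain integers stand in for `shard_group.size()` and `replicate_group.size()`, so this only illustrates the intended load balancing, not the reporter's actual checkpoint code.

```python
# Hedged sketch: reproduces only the rank-selection arithmetic from the
# pseudo code above, using plain integers instead of real process groups.
# shard_size and replicate_size are assumptions standing in for
# shard_group.size() and replicate_group.size().

def should_save(rank: int, shard_size: int, replicate_size: int) -> bool:
    pos_in_shard = rank % shard_size    # position of this rank inside its shard group
    shard_index = rank // shard_size    # which shard group (pod) this rank belongs to
    # Spread writers across pods: a rank writes when its position modulo the
    # replication size matches its shard-group index.
    return pos_in_shard % replicate_size == shard_index

if __name__ == "__main__":
    # 2 pods x 8 GPUs: shard groups [0..7] and [8..15], replication size 2.
    writers = [r for r in range(16) if should_save(r, shard_size=8, replicate_size=2)]
    print(writers)  # [0, 2, 4, 6, 9, 11, 13, 15] -- four writers per pod
```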
2,202
104,072
[WIP] Add multiple CUDA streams support to TorchInductor
topic: not user facing, module: inductor, module: dynamo, ciflow/inductor
This is still a very early-stage PR. For now, it can successfully run some models with `--disable-cudagraphs`. This PR is intended to enable TorchInductor to generate output_code.py that uses multiple CUDA streams (a minimal manual multi-stream sketch follows this row). TODO: - [ ] ~merge the stream context lines when the adjacent lines share the same stream.~ - [ ] use a better stream_pool_pop algorithm. - [x] correct the wrapper code generation. - [x] process 'nop' nodes. - [x] check the correctness. - [x] add config.multiple_stream checks. - [ ] remove debug lines. - [ ] resolve all other TODOs in the code. - [x] finish the cpp wrapper version. - [x] fix most accuracy issues. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @anijain2305
1
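For reference, a hedged sketch of manual multi-stream scheduling, the kind of code this PR aims to have TorchInductor emit automatically in output_code.py; the kernels and tensor shapes here are placeholders, not taken from the PR.

```python
import torch

# Hedged sketch, not from the PR: run two independent ops on separate CUDA
# streams and synchronize before consuming their results.
if torch.cuda.is_available():
    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    x = torch.randn(1024, 1024, device="cuda")
    with torch.cuda.stream(s1):
        a = torch.relu(x)
    with torch.cuda.stream(s2):
        b = torch.sigmoid(x)
    # Make the default stream wait for both side streams before using a and b.
    torch.cuda.current_stream().wait_stream(s1)
    torch.cuda.current_stream().wait_stream(s2)
    out = a + b
```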
2,203
104,053
Tracking issue for optimizer graph not being an inference graph
triaged, tracker, oncall: pt2, module: aotdispatch
### 🐛 Describe the bug The optimizer is incorrectly classified as a training graph in aot. The main reason this occurs is that some parameters have `requires_grad=True` *and* torchdynamo calls set_grad_enabled(True|False) while simultaneously putting a call_function node in the graph that performs the global state change. As a result, when dynamo traces a no_grad context manager, it traces the *exit* of the context manager *before* calling the user compiler, so by the time we reach aot, grad is enabled and the graph is treated as a training graph (an illustrative sketch follows this row). Why this matters: functionalization appends an epilogue that aggregates all mutation updates of tensors into a single code block consisting of a bunch of copies. In inference mode these copies are visible to inductor, while in training mode they are not. If the copies are not visible to inductor, inductor will allocate tensors for every single parameter in the graph in order for them to get sent to the epilogue, which is atrocious for perf. Solutions: 1. graph break before grad is re-enabled 2. in functionalization, make the epilogue visible in training mode 3. with compiled backwards, treat the entire graph as an inference graph by default 1 is two lines of code, so that approach is the easiest. 3 is currently being implemented, so once it's ready we can remove the graph breaks. ### Error logs N/A ### Minified repro _No response_ ### Versions N/A cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
2
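An illustrative sketch of the tracing order described in the issue above, using a custom `torch.compile` backend just to observe grad state at compile time; it is not a verified repro of the mis-classification, and the function body is an assumption.

```python
import torch

# Hedged illustration: the function body runs under no_grad, but dynamo traces
# the exit of the context manager before handing the graph to the backend, so
# the backend observes grad enabled again (the situation described above).
def inspect_backend(gm, example_inputs):
    print("grad enabled when backend runs:", torch.is_grad_enabled())
    return gm.forward  # run the captured graph as-is

@torch.compile(backend=inspect_backend)
def fn(x):
    with torch.no_grad():
        y = x * 2
    return y + 1

fn(torch.randn(4, requires_grad=True))
```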
2,204
104,047
[torch.compile] torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method normal of type object at ***: got an unexpected keyword argument 'device'
triaged, module: random, module: primTorch, oncall: pt2
### 🐛 Describe the bug My repro ``` from torch import _dynamo as dyna def func(): return torch.normal(1, 1, (8,8)) op = dyna.optimize('eager')(func) a = func() # this runs fine a = op() # this fails ``` Stacktrace: ``` [2023-06-22 10:17:33,958] torch._subclasses.fake_tensor: [ERROR] fake tensor raised TypeError Traceback (most recent call last): File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1161, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1381, in dispatch op_impl_out = op_impl(self, func, *args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 392, in constructors r = func(*args, **new_kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_ops.py", line 429, in __call__ return self._op(*args, **kwargs or {}) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_prims_common/wrappers.py", line 227, in _fn result = fn(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_prims_common/wrappers.py", line 110, in _fn bound = sig.bind(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/inspect.py", line 3037, in bind return self._bind(args, kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/inspect.py", line 3026, in _bind raise TypeError( TypeError: got an unexpected keyword argument 'device' Traceback (most recent call last): File "examples/wip.py", line 14, in <module> a = op() File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn return fn(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 448, in catch_errors return callback(frame, cache_size, hooks, frame_state) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 526, in _convert_frame result = inner_convert(frame, cache_size, hooks, frame_state) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 127, in _fn return fn(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert return _compile( File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper r = func(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 430, in _compile out_code = transform_code_object(code, transform) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object transformations(instructions, code_options) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 415, in transform tracer.run() File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 2026, in run super().run() File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 708, in run and self.step() File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 668, in step getattr(self, inst.opname)(inst) File 
"/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 390, in wrapper return inner_fn(self, inst) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1100, in CALL_FUNCTION self.call_function(fn, args, {}) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 559, in call_function self.push(fn.call_function(self, args, kwargs)) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/variables/torch.py", line 610, in call_function tensor_variable = wrap_fx_proxy( File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 1115, in wrap_fx_proxy return wrap_fx_proxy_cls( File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 1151, in wrap_fx_proxy_cls example_value = get_fake_value(proxy.node, tx) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1313, in get_fake_value raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1281, in get_fake_value return wrap_fake_exception( File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 871, in wrap_fake_exception return fn() File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1282, in <lambda> lambda: run_node(tx.output, node, args, kwargs, nnmodule) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1347, in run_node raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 1334, in run_node return node.target(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/utils/_stats.py", line 20, in wrapper return fn(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1161, in __torch_dispatch__ return self.dispatch(func, types, args, kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1381, in dispatch op_impl_out = op_impl(self, func, *args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 392, in constructors r = func(*args, **new_kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_ops.py", line 429, in __call__ return self._op(*args, **kwargs or {}) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_prims_common/wrappers.py", line 227, in _fn result = fn(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/site-packages/torch/_prims_common/wrappers.py", line 110, in _fn bound = sig.bind(*args, **kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/inspect.py", line 3037, in bind return self._bind(args, kwargs) File "/home/yshi/miniconda3/envs/dyn/lib/python3.8/inspect.py", line 3026, in _bind raise TypeError( torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method normal of type object at 0x7f4ea1a5a100>(*(1, 1, (8, 8)), **{}): got an unexpected keyword argument 'device' ``` Looks like a fake tensor mode issue with this specific op that some keyword args are not 
recognized correctly. ### Versions PyTorch version: 2.1.0.dev20230621+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 10.4.0-4ubuntu1~22.04) 10.4.0 Clang version: 14.0.0-1ubuntu1 CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.8.16 (default, Jun 12 2023, 18:09:05) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Nvidia driver version: 520.61.05 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Vendor ID: AuthenticAMD Model name: AMD Ryzen 9 5900X 12-Core Processor CPU family: 25 Model: 33 Thread(s) per core: 2 Core(s) per socket: 12 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 4950.1948 CPU min MHz: 2200.0000 BogoMIPS: 7399.95 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm Virtualization: AMD-V L1d cache: 384 KiB (12 instances) L1i cache: 384 KiB (12 instances) L2 cache: 6 MiB (12 instances) L3 cache: 64 MiB (2 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-23 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] flake8==6.0.0 [pip3] numpy==1.24.3 [pip3] pytorch-triton==2.1.0+440fd1bf20 [pip3] torch==2.1.0.dev20230621+cu118 [pip3] torchaudio==2.1.0.dev20230621+cu118 [pip3] 
torchvision==0.16.0.dev20230621+cu118 [conda] numpy 1.24.3 pypi_0 pypi [conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi [conda] torch 2.1.0.dev20230621+cu118 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230621+cu118 pypi_0 pypi [conda] torchvision 0.16.0.dev20230621+cu118 pypi_0 pypi cc @pbelevich @ezyang @mruberry @Lezcano @peterbell10 @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ipiszy
1
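For convenience, an import-complete version of the repro from the issue above (the original snippet omits `import torch`):

```python
import torch
import torch._dynamo as dynamo

def func():
    return torch.normal(1, 1, (8, 8))

op = dynamo.optimize("eager")(func)
a = func()  # eager: runs fine
a = op()    # compiled: raises the TypeError shown in the stack trace above
```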
2,205
104,045
"TorchDynamo Deeper Dive" doc is missing some part
triaged, topic: docs
In https://pytorch.org/docs/main/compile/deep-dive.html#torchdynamo-deeper-dive, the first sentence is > TorchDynamo operates just-in-time and specializes graphs based on dynamic properties. For example, **the first graph above** has the following guards: but there's no graph above, so it's not clear what the rest is referring to. CC @jansel as you seem to be the author
1
2,206
104,044
Add check for malformed tensor in load.
triaged, open source, Stale, release notes: cpp
Hi! We've been fuzzing torchvision project with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz). We've found the input for the `torch::load` function, which makes `data_ptr` field of input Tensor `NULL` after loading from file. This can lead to the null pointer dereference later, the example with reproducing is shown below. To prevent the error we suggest to add the check for the `data_ptr` field. torchvision version: 9d0a93eee90bf7c401b74ebf9c8be80346254f15 pytorch version: 0f1621df1a0a73956c7ce4e2f72f069e610e0137 OS: Ubuntu 20.04 How to reproduce 1. Build docker from [here](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/torchvision) and run the container: sudo docker build -t oss-sydr-fuzz-torchvision . sudo docker run --privileged --rm -v `pwd`:/fuzz -it oss-sydr-fuzz-torchvision /bin/bash 2. Run the target on this input: [null-ptr-deref-vec.txt](https://github.com/pytorch/pytorch/files/11835087/null-ptr-deref-vec.txt) /encode_png_fuzz null-ptr-deref-vec.txt 3. You will see the following output: ================================================================= AddressSanitizer:DEADLYSIGNAL ==936==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000011582f19 bp 0x7ffc439944b0 sp 0x7ffc439943c0 T0) ==936==The signal is caused by a READ memory access. ==936==Hint: address points to the zero page. #0 0x11582f19 in at::vec::AVX2::Vectorized8<unsigned char>::loadu(void const*) /pytorch/aten/src/ATen/cpu/vec/vec256/vec256_int.h:681:12 #1 0x11582f19 in function_traits<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&>::ArgsTuple at::native::AVX2::dereference_vec_impl<function_traits<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&>, 0ul>(char* restrict*, function_traits<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&>::result_type const&, unsigned long, long, std::integer_sequence<unsigned long, 0ul>) /pytorch/aten/src/ATen/native/cpu/Loops.h:73:7 #2 0x11581524 in function_traits<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&>::ArgsTuple at::native::AVX2::dereference_vec<function_traits<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&> >(char* restrict*, function_traits<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&>::result_type const&, unsigned long, long) /pytorch/aten/src/ATen/native/cpu/Loops.h:80:10 #3 0x11581524 in void at::native::AVX2::vectorized_loop<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(unsigned char)&, at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&>(char**, long, long, at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(unsigned 
char)&, at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&) /pytorch/aten/src/ATen/native/cpu/Loops.h:213:18 #4 0x11580a03 in at::native::AVX2::VectorizedLoop2d<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(unsigned char), at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)>::operator()(char**, long const*, long, long) /pytorch/aten/src/ATen/native/cpu/Loops.h:275:9 #5 0x11580a03 in void c10::function_ref<void (char**, long const*, long, long)>::callback_fn<at::native::AVX2::VectorizedLoop2d<at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(unsigned char), at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)> >(long, char**, long const*, long, long) /pytorch/c10/util/FunctionRef.h:43:12 #6 0x8c1eb6 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /pytorch/aten/src/ATen/TensorIteratorInternal.h:65:7 #7 0x8bff21 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /pytorch/aten/src/ATen/TensorIterator.cpp:777:3 #8 0x83d498 in std::function<void (long, long)>::operator()(long, long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:622:14 #9 0x83ce7a in at::internal::invoke_parallel(long, long, long, std::function<void (long, long)> const&)::$_0::operator()(int, unsigned long) const /pytorch/aten/src/ATen/ParallelNative.cpp:168:9 #10 0x83ce7a in void std::__invoke_impl<void, at::internal::invoke_parallel(long, long, long, std::function<void (long, long)> const&)::$_0&, int, unsigned long>(std::__invoke_other, at::internal::invoke_parallel(long, long, long, std::function<void (long, long)> const&)::$_0&, int&&, unsigned long&&) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/invoke.h:60:14 #11 0x8246f2 in std::function<void (int, unsigned long)>::operator()(int, unsigned long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/std_function.h:622:14 #12 0x822039 in at::(anonymous namespace)::_run_with_pool(std::function<void (int, unsigned long)> const&, unsigned long) /pytorch/aten/src/ATen/ParallelNative.cpp:96:3 #13 0x820e00 in at::internal::invoke_parallel(long, long, long, std::function<void (long, long)> const&) /pytorch/aten/src/ATen/ParallelNative.cpp:183:3 #14 0x8c0703 in void at::parallel_for<at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long)::$_5>(long, long, long, at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long)::$_5 const&) /pytorch/aten/src/ATen/Parallel-inl.h:31:3 #15 0x8bf680 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /pytorch/aten/src/ATen/TensorIterator.cpp:751:5 #16 0x115802cc in void at::native::AVX2::cpu_kernel_vec<true, at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() 
const::'lambda'(unsigned char), at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)>(at::TensorIteratorBase&, at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(unsigned char)&&, at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const::'lambda'(at::vec::AVX2::Vectorized<unsigned char>)&&, long) /pytorch/aten/src/ATen/native/cpu/Loops.h:352:8 #17 0x1156ff6f in at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const::'lambda'()::operator()() const /pytorch/aten/src/ATen/native/cpu/CopyKernel.cpp:186:5 #18 0x1156ff6f in at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&)::$_7::operator()() const /pytorch/aten/src/ATen/native/cpu/CopyKernel.cpp:186:5 #19 0x1156ff6f in at::native::AVX2::direct_copy_kernel(at::TensorIteratorBase&) /pytorch/aten/src/ATen/native/cpu/CopyKernel.cpp:186:5 #20 0x11571742 in at::native::AVX2::copy_kernel(at::TensorIterator&, bool) /pytorch/aten/src/ATen/native/cpu/CopyKernel.cpp:233:5 #21 0x133613e in at::native::copy_impl(at::Tensor&, at::Tensor const&, bool) /pytorch/aten/src/ATen/native/Copy.cpp:278:3 #22 0x133509e in at::native::copy_(at::Tensor&, at::Tensor const&, bool) /pytorch/aten/src/ATen/native/Copy.cpp:301:5 #23 0x3a6f2f5 in at::Tensor& c10::KernelFunction::call<at::Tensor&, at::Tensor&, at::Tensor const&, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool) const /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:102:16 #24 0x3a6f2f5 in at::Tensor& c10::Dispatcher::call<at::Tensor&, at::Tensor&, at::Tensor const&, bool>(c10::TypedOperatorHandle<at::Tensor& (at::Tensor&, at::Tensor const&, bool)> const&, at::Tensor&, at::Tensor const&, bool) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:644:26 #25 0x3a6f2f5 in c10::TypedOperatorHandle<at::Tensor& (at::Tensor&, at::Tensor const&, bool)>::call(at::Tensor&, at::Tensor const&, bool) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492:41 #26 0x3a6f2f5 in at::_ops::copy_::call(at::Tensor&, at::Tensor const&, bool) /pytorch/build/aten/src/ATen/Operators_3.cpp:2152:15 #27 0x1ea317d in at::Tensor::copy_(at::Tensor const&, bool) const /pytorch/build/aten/src/ATen/core/TensorBody.h:2103:12 #28 0x1ea317d in at::native::clone(at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/aten/src/ATen/native/TensorFactories.cpp:1611:10 #29 0x4add399 in at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeExplicitAutograd__clone(at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:4160:10 #30 0x4add399 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::optional<c10::MemoryFormat>), &(at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeExplicitAutograd__clone(at::Tensor const&, c10::optional<c10::MemoryFormat>))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::optional<c10::MemoryFormat> > >::operator()(at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13:16 #31 0x4add399 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, 
c10::optional<c10::MemoryFormat>), &(at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeExplicitAutograd__clone(at::Tensor const&, c10::optional<c10::MemoryFormat>))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::optional<c10::MemoryFormat> > >, at::Tensor (at::Tensor const&, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463:14 #32 0x33bb614 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>&&) /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50:12 #33 0x33bb614 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) const /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:102:16 #34 0x33bb614 in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:661:26 #35 0x3170e8e in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:497:41 #36 0x3170e8e in at::_ops::clone::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/build/aten/src/ATen/Operators_1.cpp:5921:15 #37 0xaa5285f in at::redispatch::clone(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/build/aten/src/ATen/RedispatchFunctions.h:7947:16 #38 0xaa5285f in torch::autograd::VariableType::(anonymous namespace)::clone(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>)::$_55::operator()() const /pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:5240:12 #39 0xaa5285f in torch::autograd::VariableType::(anonymous namespace)::clone(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:5238:15 #40 0xaa548ae in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>), &(torch::autograd::VariableType::(anonymous namespace)::clone(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13:16 #41 0xaa548ae in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>), &(torch::autograd::VariableType::(anonymous namespace)::clone(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, 
at::Tensor const&, c10::optional<c10::MemoryFormat> > >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480:14 #42 0x3170723 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>&&) /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50:12 #43 0x3170723 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::MemoryFormat>) const /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:102:16 #44 0x3170723 in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::optional<c10::MemoryFormat>) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:644:26 #45 0x3170723 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::optional<c10::MemoryFormat>) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492:41 #46 0x3170723 in at::_ops::clone::call(at::Tensor const&, c10::optional<c10::MemoryFormat>) /pytorch/build/aten/src/ATen/Operators_1.cpp:5914:15 #47 0x1ec7c66 in at::Tensor::clone(c10::optional<c10::MemoryFormat>) const /pytorch/build/aten/src/ATen/core/TensorBody.h:3873:12 #48 0x1ec7c66 in at::native::contiguous(at::Tensor const&, c10::MemoryFormat) /pytorch/aten/src/ATen/native/TensorProperties.cpp:101:15 #49 0x50965da in at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__contiguous(at::Tensor const&, c10::MemoryFormat) /pytorch/build/aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:1889:10 #50 0x50965da in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::MemoryFormat), &(at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__contiguous(at::Tensor const&, c10::MemoryFormat))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::MemoryFormat> >::operator()(at::Tensor const&, c10::MemoryFormat) /pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13:16 #51 0x50965da in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::MemoryFormat), &(at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__contiguous(at::Tensor const&, c10::MemoryFormat))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::MemoryFormat> >, at::Tensor (at::Tensor const&, c10::MemoryFormat)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::MemoryFormat) /pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463:14 #52 0x3fc4b45 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::MemoryFormat>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::MemoryFormat&&) /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50:12 #53 0x3df7b26 in at::Tensor c10::KernelFunction::call<at::Tensor, 
at::Tensor const&, c10::MemoryFormat>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::MemoryFormat) const /pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:102:16 #54 0x3df7b26 in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::MemoryFormat>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::MemoryFormat)> const&, at::Tensor const&, c10::MemoryFormat) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:644:26 #55 0x3df7b26 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::MemoryFormat)>::call(at::Tensor const&, c10::MemoryFormat) const /pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492:41 #56 0x3df7b26 in at::_ops::contiguous::call(at::Tensor const&, c10::MemoryFormat) /pytorch/build/aten/src/ATen/Operators_4.cpp:1663:15 #57 0xe0a185 in at::TensorBase::__dispatch_contiguous(c10::MemoryFormat) const /pytorch/aten/src/ATen/core/Tensor.cpp:26:10 #58 0x628ce6 in at::TensorBase::contiguous(c10::MemoryFormat) const /pytorch/torch/include/ATen/core/TensorBase.h:128:14 #59 0x622567 in at::Tensor::contiguous(c10::MemoryFormat) const /pytorch/torch/include/ATen/core/TensorBody.h:122:24 #60 0x61f106 in vision::image::encode_png(at::Tensor const&, long) /vision/torchvision/csrc/io/image/cpu/encode_png.cpp:115:40 #61 0x604619 in LLVMFuzzerTestOneInput /vision/encode_png.cc:64:32 #62 0x66b041 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15 #63 0x6544cc in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6 #64 0x65a61b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9 #65 0x654222 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 #66 0x7fdbe688e082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee) #67 0x542cdd in _start (/encode_png_fuzz+0x542cdd) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /pytorch/aten/src/ATen/cpu/vec/vec256/vec256_int.h:681:12 in at::vec::AVX2::Vectorized8<unsigned char>::loadu(void const*) ==936==ABORTING
6
2,207
104,043
Torch.compile tutorial shows incorrect triton kernel
triaged, topic: docs
This tutorial https://pytorch.org/docs/main/compile/get-started.html claims that `cos(x) + sin(y)` in ```py def fn(x, y): a = torch.cos(x).cuda() b = torch.sin(y).cuda() return a + b ``` gets compiled and fused into a triton kernel that computes `sin(sin(x))`: ```py tmp0 = tl.load(in_ptr0 + (x0), xmask) tmp1 = tl.sin(tmp0) tmp2 = tl.sin(tmp1) tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask) ``` I assume the triton kernel in the tutorial is wrong or outdated?
1
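A hedged sketch for checking the tutorial's claim locally by dumping the kernel Inductor actually generates; the `TORCH_LOGS="output_code"` artifact name is an assumption about the current logging setup, not something stated in the issue.

```python
# Run with: TORCH_LOGS="output_code" python check_tutorial_kernel.py
# (hypothetical script name) to print the generated Triton kernel.
import torch

@torch.compile
def fn(x, y):
    a = torch.cos(x).cuda()
    b = torch.sin(y).cuda()
    return a + b

if torch.cuda.is_available():
    fn(torch.randn(1024), torch.randn(1024))
```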
2,208
104,041
Scripted model is loaded on GPU but the inference seems to utilize the CPU with zero GPU utilization
oncall: jit
### 🐛 Describe the bug The model scripted with `torch.jit.script` and serialized with `torch.jit.save` is put on GPU before inference but the calculations seem to be performed on CPU: the GPU memory is allocated but utilization is 0% throughout the 3 minute inference. All the model parameters and the model output are located on GPU. No evidence of CPU utilization is present. The model that is not scripted and serialized with `torch.save` is put on GPU before inference and finishes inference in 4 seconds as expected. ### Versions PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 GPU 1: NVIDIA GeForce RTX 3090 Nvidia driver version: 520.61.05 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 48 On-line CPU(s) list: 0-47 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 1 NUMA node(s): 4 Vendor ID: AuthenticAMD CPU family: 23 Model: 8 Model name: AMD Ryzen Threadripper 2970WX 24-Core Processor Stepping: 2 Frequency boost: enabled CPU MHz: 2114.061 CPU max MHz: 3000.0000 CPU min MHz: 2200.0000 BogoMIPS: 5987.95 Virtualization: AMD-V L1d cache: 768 KiB L1i cache: 1.5 MiB L2 cache: 12 MiB L3 cache: 64 MiB NUMA node0 CPU(s): 0-5,24-29 NUMA node1 CPU(s): 12-17,36-41 NUMA node2 CPU(s): 6-11,30-35 NUMA node3 CPU(s): 18-23,42-47 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed 
adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.5 [pip3] pytorch3d==0.7.4 [pip3] torch==2.0.1 [pip3] torchio==0.18.91 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 [conda] Could not collect cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
1
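A hedged repro skeleton for the report above; `TinyNet`, the file name, and the shapes are placeholders rather than the reporter's model, so the slowdown may or may not reproduce with them.

```python
import time
import torch

# Assumed stand-in for the reporter's model.
class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x @ x.transpose(-1, -2))

model = TinyNet().eval()
scripted = torch.jit.script(model)
torch.jit.save(scripted, "model_scripted.pt")

loaded = torch.jit.load("model_scripted.pt", map_location="cuda").eval()
x = torch.randn(64, 512, 512, device="cuda")

with torch.no_grad():
    loaded(x)                     # warm-up / profiling run
    torch.cuda.synchronize()
    t0 = time.time()
    loaded(x)
    torch.cuda.synchronize()      # needed for a meaningful GPU timing
    print(f"scripted forward: {time.time() - t0:.3f}s")
```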
2,209
104,040
Improved error checking for custom Function when saving intermediates
module: double backwards, module: autograd, triaged, actionable
Currently, if users create a custom Function, save intermediate tensors (tensors that are neither inputs nor outputs of the custom Function) for backward, and use them in the backward formula to compute gradients, the gradients produced in double backward will be silently incorrect (a minimal sketch of this pattern follows this row). We should produce an error in this case instead. One way to do this is to attach an ErrorNode to all non-input/non-output tensors being saved. https://github.com/pytorch/pytorch/issues/103726#issuecomment-1599015774 cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @Lezcano @Varal7
0
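A minimal sketch of the pattern described in the issue above: an intermediate tensor is saved for backward and used in the gradient formula, so the first-order gradient is correct but double backward silently treats the intermediate as a constant. The function and values are illustrative, not taken from the linked discussion.

```python
import torch

class XExpX(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        inter = x.exp()                # intermediate: neither an input nor an output
        ctx.save_for_backward(x, inter)
        return x * inter

    @staticmethod
    def backward(ctx, grad_out):
        x, inter = ctx.saved_tensors
        # d(x * e^x)/dx = e^x + x * e^x, written in terms of the saved intermediate
        return grad_out * (inter + x * inter)

x = torch.tensor(1.0, requires_grad=True)
y = XExpX.apply(x)
(g,) = torch.autograd.grad(y, x, create_graph=True)
print(g)    # correct first-order gradient: 2e ≈ 5.4366
(gg,) = torch.autograd.grad(g, x)
print(gg)   # silently wrong: `inter` is treated as a constant, giving e ≈ 2.7183
            # instead of the true second derivative 3e ≈ 8.1548
```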
2,210
104,033
Reproducibility documentation should be updated
module: docs, triaged, module: numerical-reproducibility
### 📚 The doc issue When loading the weights, the model and the optimizer are loaded. Besides, the sampled data in dataloader are guaranteed to be the same. It should be considered in https://pytorch.org/docs/stable/notes/randomness.html. ### Suggest a potential alternative/fix ```python import torch import numpy import random print(torch.__version__) # 1.13.0+cu116 def set_random_seed(seed, deterministic=True): """Set random seed. Args: seed (int): Seed to be used. deterministic (bool): Whether to set the deterministic option for CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` to True and `torch.backends.cudnn.benchmark` to False. Default: False. """ random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) if deterministic: torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False if __name__ == '__main__': n = 100 set_random_seed(1) seed = 1#int(torch.empty((), dtype=torch.int64).random_().item()) generator = torch.Generator() # Function 'torch.Generator()' is not limited by the global seed # because there is another seed offset inside it to ensure that each epoch is different generator.manual_seed(seed) def simple_generator(): # When the epoch is increasing, torch.randperm also generates a seed offset inside, # which can be controlled by the generator. # Note that it cannot be set the same for each epoch # When loading the weights, the model and the optimizer are loaded. # Besides, the sampled data in dataloader are guaranteed to be the same. # See Line: 36-39 yield torch.randperm(n, generator=generator).tolist() state = generator.get_state() for i in range(3): my_gen = simple_generator() # It will be make sure each epoch is the same, but don't do this, # only when restoring the model to ensure that the sampled data are the same. # It should be considered in https://pytorch.org/docs/stable/notes/randomness.html print(generator.set_state(state)) print(generator.get_state().sum()) # print(torch.get_rng_state().sum()) print(next(my_gen)) # It is related to the instance, not the global. print(torch.Generator().get_state().sum()) ``` cc @svekars @carljparker
0
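A cleaned-up, hedged version of the seed helper and generator-state handling from the snippet above (the original does `import numpy` but then uses `np.*`):

```python
import random
import numpy as np
import torch

def set_random_seed(seed: int, deterministic: bool = True) -> None:
    """Seed Python, NumPy, and PyTorch RNGs; optionally force deterministic cuDNN."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)               # also seeds CUDA RNGs in recent releases
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

set_random_seed(1)
gen = torch.Generator().manual_seed(1)
state = gen.get_state()                   # save alongside a checkpoint ...
gen.set_state(state)                      # ... and restore it when resuming, so the
print(torch.randperm(100, generator=gen)[:5])  # sampler replays the same permutation
```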
2,211
104,029
SummaryWriter.add_embedding not working for RGBA images
triaged, enhancement, module: tensorboard
### 🐛 Describe the bug The SummaryWriter.add_embedding does not work for images with transparency (RGBA), although tensorboard's embedding projector does support this. ``` import numpy as np import torch from torch.utils.tensorboard import SummaryWriter writer = SummaryWriter('embedding_logs') N_samples = 6 N_features = 24 N_channels = 3 dim_img =(128,128) embeddings = np.random.random((N_samples,N_features)) imgs1 = np.random.random(size=(N_samples,N_channels,dim_img[0],dim_img[1])) print(imgs1.shape) # (6, 3, 128, 128) # Working writer.add_embedding( embeddings, label_img = torch.tensor(imgs1), global_step='RGB' ) mask = np.random.randint(low=0, high=1, size=(N_samples,1,dim_img[0],dim_img[1])) imgs2 = np.concatenate((imgs1, mask,), axis=1) print(imgs2.shape) # (6, 4, 128, 128) # Not Working writer.add_embedding( embeddings, label_img = torch.tensor(imgs2), global_step='RGBA' ) ``` Error Message: ``` Traceback (most recent call last): File "/Users/alex5753/Documents/Python/pytorch/test_embedding_projector.py", line 28, in <module> writer.add_embedding( File "/Users/alex5753/Documents/Python/pytorch/.env_torch/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py", line 949, in add_embedding make_sprite(label_img, save_path) File "/Users/alex5753/Documents/Python/pytorch/.env_torch/lib/python3.10/site-packages/torch/utils/tensorboard/_embedding.py", line 45, in make_sprite arranged_img_CHW = make_grid(make_np(label_img), ncols=nrow) File "/Users/alex5753/Documents/Python/pytorch/.env_torch/lib/python3.10/site-packages/torch/utils/tensorboard/_utils.py", line 74, in make_grid assert I.ndim == 4 and I.shape[1] == 3 AssertionError ``` The error occur because the functionality in `torch/utils/tensorboard/_embedding.py, make_sprite()` and `torch/utils/tensorboard/_utils.py, make_grid` have been hard coded to only support three image channels (RGB). Can probably be fixed by reading the amount of channels from the image directly. ### Versions Collecting environment information... PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.4 (x86_64) GCC version: Could not collect Clang version: 14.0.3 (clang-1403.0.22.14.1) CMake version: Could not collect Libc version: N/A Python version: 3.10.8 (v3.10.8:aaaf517424, Oct 11 2022, 10:14:40) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime) Python platform: macOS-13.4-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz Versions of relevant libraries: [pip3] numpy==1.25.0 [pip3] torch==2.0.1 [pip3] torchvision==0.15.2 [conda] Could not collect
0
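Until the channel count is read from the image, a hedged user-side workaround is to composite the alpha channel away before calling `add_embedding`; the helper below is an assumption about how one might do that, not part of the reported API.

```python
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter

def rgba_to_rgb(imgs: np.ndarray, background: float = 1.0) -> np.ndarray:
    # Hypothetical helper: alpha-composite NxCxHxW RGBA images onto a solid
    # background so label_img has the 3 channels make_sprite currently expects.
    rgb, alpha = imgs[:, :3], imgs[:, 3:4]
    return alpha * rgb + (1.0 - alpha) * background

writer = SummaryWriter("embedding_logs")
embeddings = np.random.random((6, 24))
imgs_rgba = np.random.random((6, 4, 128, 128))
writer.add_embedding(embeddings,
                     label_img=torch.tensor(rgba_to_rgb(imgs_rgba)),
                     global_step="RGBA_flattened")
writer.close()
```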
2,212
104,026
Is memory-efficient FSDP initialization intended to be possible with torch.device('meta')?
triaged, module: fsdp
### 🐛 Describe the bug Looking at the FSDP API, I was expecting that we could use the torch meta device in conjunction with the `param_init_fn` and `sync_module_states` arguments to `FSDP` to instantiate a large model in a memory-efficient manner. Specifically, I was imagining a workflow like: - Only loading model parameters on the rank 0 device (load on the meta device on the others) - Use a param_init_fn like module.to_empty(device=f'cuda:{rank}') when creating the FSDP module on non-rank 0 devices - Use sync_module_states=True when creating the FSDP instance to sync parameter values with rank 0 For example: import contextlib import transformers import functools import torch from torch.distributed.fsdp import FullyShardedDataParallel as FSDP init_context = contextlib.nullcontext() if local_rank == 0 else torch.device('meta') with init_context: model = transformers.AutoModelForCausalLM.from_pretrained( 'EleutherAI/pythia-160m', low_cpu_mem_usage=True) if local_rank == 0: print(model) model_auto_wrap_policy = functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={model.gpt_neox.layers[0].__class__},) model = FSDP(model, auto_wrap_policy=model_auto_wrap_policy, sharding_strategy=ShardingStrategy.FULL_SHARD, cpu_offload=CPUOffload(offload_params=False), backward_prefetch=BackwardPrefetch.BACKWARD_PRE, device_id=local_rank, sync_module_states=True, param_init_fn=lambda mod: mod.to_empty(device=f'cuda:{local_rank}')) But this code fails on the `FSDP()` line. It seems the [model sharding](https://github.com/pytorch/pytorch/blob/b689128db3564ca46c2e6f2f272653f3801a9b37/torch/distributed/fsdp/fully_sharded_data_parallel.py#L436) happens before modules on the meta device [are re-instantiated](https://github.com/pytorch/pytorch/blob/b689128db3564ca46c2e6f2f272653f3801a9b37/torch/distributed/fsdp/fully_sharded_data_parallel.py#L453). Is this behavior intentional? ### Versions Collecting environment information... 
PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: Could not collect CMake version: version 3.26.3 Libc version: glibc-2.31 Python version: 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB Nvidia driver version: 515.43.04 cuDNN version: Probably one of the following: /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.1.1 /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1 /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1 /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1 /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1 /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1 /scr/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1 /scr/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1 /scr/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.4.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1 /usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.4.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.4.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 256 On-line CPU(s) list: 0-255 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 2 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7662 64-Core Processor Stepping: 0 Frequency 
boost: enabled CPU MHz: 3296.299 CPU max MHz: 2154.2959 CPU min MHz: 1500.0000 BogoMIPS: 3999.91 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 64 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-63,128-191 NUMA node1 CPU(s): 64-127,192-255 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] triton==2.0.0 [conda] Could not collect cc @zhaojuanmao @mrshenli @rohan-varma @awgu
14
2,213
104,022
[PT2.0][compile] torch._dynamo.config.log_level does not exist
module: logging, triaged, oncall: pt2
### 🐛 Describe the bug Setting torch._dynamo.config.log_level=logging.DEBUG causes an error in nightly build 20230622, while this link mentions that we can set it: https://pytorch.org/docs/master/compile/troubleshooting.html ### Error logs torch_executor.py:21: in <module> torch._dynamo.config.log_level=logging.DEBUG /home/jthakur/.pt_2_0/lib/python3.8/site-packages/torch/_dynamo/config_utils.py:71: in __setattr__ raise AttributeError(f"{self.__name__}.{name} does not exist") E AttributeError: torch._dynamo.config.log_level does not exist ### Minified repro _No response_ ### Versions Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] torch==2.1.0.dev20230621+cpu [pip3] torchaudio==2.1.0.dev20230621+cpu [pip3] torchvision==0.16.0.dev20230621+cpu [pip3] triton==2.0.0 [conda] Could not collect cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
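For anyone hitting this on recent nightlies, here is a minimal sketch of what appears to be the replacement mechanism, assuming the `torch._logging.set_logs` API (and the `TORCH_LOGS` environment variable) is available in your build; the old `torch._dynamo.config.log_level` knob seems to have been removed in favor of it.

```python
# Minimal sketch, assuming torch._logging.set_logs exists in this nightly;
# it replaces the removed torch._dynamo.config.log_level knob.
# An env-var alternative is assumed to be: TORCH_LOGS="+dynamo" python script.py
import logging
import torch
import torch._logging

torch._logging.set_logs(dynamo=logging.DEBUG)

@torch.compile
def f(x):
    return x + 1

f(torch.randn(4))
```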
0
2,214
104,020
[torch.compile] `permute_linear_fusion` ignores the inplace operation for the tensor
high priority, triaged, module: correctness (silent), oncall: pt2, module: aotdispatch, module: inductor
### 🐛 Describe the bug The `permute_linear_fusion` in `torch.compile` ignores the inplace operation for the tensor. For example, ```py import torch torch.manual_seed(420) class Model(torch.nn.Module): def __init__(self): super().__init__() self.linear = torch.nn.Linear(2, 2) def forward(self, x1): v1 = x1.permute(0, 2, 1) v1.resize_(1, 1, 2) v2 = torch.nn.functional.linear(v1, self.linear.weight, self.linear.bias) return v2 func = Model().cuda() x = torch.randn(1, 2, 2).cuda() with torch.no_grad(): func.train(False) jit_func = torch.compile(func) res1 = func(x) # without jit print(res1) res2 = jit_func(x) print(res2) torch.testing.assert_close(res1, res2, rtol=1e-3, atol=1e-3) ``` If we run it with `TORCHINDUCTOR_PERMUTE_FUSION=1`, the output would be: ``` tensor([[[-0.2002, -0.4580]]], device='cuda:0') tensor([[[0.2537, 0.1411], [0.7882, 0.8429]]], device='cuda:0') AssertionError: The values for attribute 'shape' do not match: torch.Size([1, 1, 2]) != torch.Size([1, 2, 2]). ``` The optimized output have the different shape from the one without optimization. ### Versions ``` PyTorch version: 2.1.0.dev20230621+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0 Clang version: 14.0.0-1ubuntu1 CMake version: version 3.26.4 Libc version: glibc-2.35 Python version: 3.9.16 (main, May 15 2023, 23:46:34) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.19.0-40-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.6.124 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 GPU 1: NVIDIA GeForce RTX 3090 GPU 2: NVIDIA GeForce RTX 3090 Nvidia driver version: 525.105.17 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Vendor ID: AuthenticAMD Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores CPU family: 25 Model: 8 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 1 Stepping: 2 Frequency boost: enabled CPU max MHz: 7006.6401 CPU min MHz: 1800.0000 BogoMIPS: 7186.21 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean 
flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm Virtualization: AMD-V L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 16 MiB (32 instances) L3 cache: 128 MiB (4 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-63 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.23.5 [pip3] pytorch-triton==2.1.0+440fd1bf20 [pip3] torch==2.1.0.dev20230621+cu118 [pip3] torchaudio==2.1.0.dev20230621+cu118 [pip3] torchvision==0.16.0.dev20230621+cu118 [pip3] triton==2.0.0 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py39h7f8727e_0 [conda] mkl_fft 1.3.1 py39hd3c417c_0 [conda] mkl_random 1.2.2 py39h51133e4_0 [conda] numpy 1.23.5 py39h14f4228_0 [conda] numpy-base 1.23.5 py39h31eccc5_0 [conda] pytorch-cuda 11.7 h778d358_3 pytorch-nightly [conda] pytorch-mutex 1.0 cuda pytorch-nightly [conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi [conda] torch 2.1.0.dev20230621+cu118 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230621+cu118 pypi_0 pypi [conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly [conda] torchvision 0.16.0.dev20230621+cu118 pypi_0 pypi ``` cc @ezyang @gchanan @zou3519 @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
3
2,215
104,011
DISABLED test_backward_ddp_outside_uneven_inputs (__main__.TensorPipeDdpUnderDistAutogradTest)
oncall: distributed, module: flaky-tests, skipped
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_backward_ddp_outside_uneven_inputs&suite=TensorPipeDdpUnderDistAutogradTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14449736644). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_backward_ddp_outside_uneven_inputs` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `distributed/rpc/test_tensorpipe_agent.py` cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
1
2,216
104,010
Several Torchbench models don't run with float16 or bfloat16 in the inference eager mode
triaged, module: bfloat16, module: half, module: benchmark, oncall: pt2
Repro: Apply the following patch, ``` diff --git a/benchmarks/dynamo/torchbench.py b/benchmarks/dynamo/torchbench.py index 30406d5345d..6161f02690c 100755 --- a/benchmarks/dynamo/torchbench.py +++ b/benchmarks/dynamo/torchbench.py @@ -213,7 +213,7 @@ MAX_BATCH_SIZE_FOR_ACCURACY_CHECK = { FORCE_AMP_FOR_FP16_BF16_MODELS = { "doctr_det_predictor", "doctr_reco_predictor", - "Super_SloMo", + #"Super_SloMo", "tts_angular", } ``` and then run ``` benchmarks/dynamo/torchbench.py --inductor --bfloat16 --accuracy --inference --device cuda --only Super_SloMo ``` Error: ``` Eager model failed to run Traceback (most recent call last): File "/scratch/binbao/work/pytorch/benchmarks/dynamo/common.py", line 1320, in validate_model self.model_iter_fn(model, example_inputs) File "/scratch/binbao/work/pytorch/benchmarks/dynamo/torchbench.py", line 436, in forward_pass return mod(*inputs) File "/scratch/binbao/work/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/scratch/binbao/work/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/scratch/binbao/work/torchbench/torchbenchmark/models/Super_SloMo/model_wrapper.py", line 41, in forward g_I0_F_t_0 = self.trainFlowBackWarp(I0, F_t_0) File "/scratch/binbao/work/pytorch/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/scratch/binbao/work/pytorch/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/scratch/binbao/work/torchbench/torchbenchmark/models/Super_SloMo/slomo_model.py", line 286, in forward imgOut = torch.nn.functional.grid_sample(img, grid) File "/scratch/binbao/work/pytorch/torch/nn/functional.py", line 4295, in grid_sample return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners) RuntimeError: "grid_sampler_2d_cuda" not implemented for 'BFloat16' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/scratch/binbao/work/pytorch/benchmarks/dynamo/common.py", line 2672, in run ) = runner.load_model( File "/scratch/binbao/work/pytorch/benchmarks/dynamo/torchbench.py", line 384, in load_model self.validate_model(model, example_inputs) File "/scratch/binbao/work/pytorch/benchmarks/dynamo/common.py", line 1322, in validate_model raise NotImplementedError("Eager model failed to run") from e NotImplementedError: Eager model failed to run WARNING:root:Super_SloMo failed to load ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
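Not part of the report, but a small illustration of the dtype limitation the traceback points at: a hedged sketch of the usual eager-mode workaround of running the unsupported op in float32 and casting back (whether the benchmark harness should do this instead of forcing AMP is a separate question; shapes below are illustrative, not Super_SloMo's).

```python
# Hedged sketch: grid_sample has no bfloat16 CUDA kernel here, so run just that op
# in float32 and cast the result back to the input dtype.
import torch
import torch.nn.functional as F

def grid_sample_fp32(img, grid):
    out = F.grid_sample(img.float(), grid.float(), align_corners=False)
    return out.to(img.dtype)

img = torch.randn(1, 3, 16, 16, device="cuda", dtype=torch.bfloat16)
grid = torch.rand(1, 16, 16, 2, device="cuda", dtype=torch.bfloat16) * 2 - 1
print(grid_sample_fp32(img, grid).dtype)  # torch.bfloat16
```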
3
2,217
103,990
[Inductor] Freezing Add support for Caching Parameter Conversions
triaged, oncall: pt2, module: inductor
### 🚀 The feature, motivation and pitch Freezing is a WIP feature within inductor that, similar to JIT freezing, will attempt to inline weights as constants in optimization and run constant folding and other optimizations on them. After freezing, weights can no longer be updated. We may run transformations on parameters like converting to channels last and padding for mms. Because these transformations would naively duplicate parameter memory use, we have an option to discard the original parameters, [freezing_discards_parameters](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L213). As a result, currently, there is no way to both avoid duplication of memory of parameters and recompile. ``` import torch import torchvision import torch._inductor import torch._inductor.config as config config.freezing = True config.freezing_discard_parameters = True resnet = torchvision.models.resnet18().eval().cuda() @torch.compile(mode="reduce-overhead") def run_model(resnet, inp): return resnet(inp) with torch.no_grad(): run_model(resnet, torch.rand([8, 3, 255, 255], device="cuda")) with torch.no_grad(): run_model(resnet, torch.rand([1, 3, 255, 255], device="cuda")) # RuntimeError: Trying to Run Pytorch Eager Module After Dynamo Freezing. The original parameters have been discarded ``` This isn't a fundamental limitation. The constant-folded conversions we're doing to the parameters are the same regardless of the input shape (at least for vast majority of models). If we express all the parameter conversions as transformations in the fx graph, we can cache their final result. The process would look something like : - On first compilation, record the mapping from real parameters to their [ErasedTensor](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/freezing.py#L186) replacement. When we run constant folding on parameters, record the operators we run, their inputs, and their outputs. Intermediary outputs in a chain of conversions should get mapped to an ErasedTensor just as we did with real parameters. - On subsequent recompilation, when we would attempt to run an operator in constant folding, we'll check in the cache if we already have a cached run of that operator with those particular inputs. This feature is also important because it would enable us to do real-time parameter updates. For all of the original parameters, we will have a mapping of the series of conversions we need to run in order to replace the final constant that inductor uses in compilation. There are a few things that still need investigation, such as getting dynamo to place nicely with the ErasedTensor on recompilation. Additionally, it's possible that some parameters may have different series of transformations from run to run. This could possibly be resolved with asking users to run with dynamic shapes first, or by having an api to allow users to not constant fold certain parameters. ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
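To make the caching idea above concrete, here is a minimal sketch under stated assumptions: the cache, its keying by (operator name, input identities), and all names used are hypothetical illustrations, not inductor APIs.

```python
# Hypothetical sketch of caching constant-folded parameter conversions so a
# recompilation can reuse results instead of re-running them on the originals.
import torch

class ConversionCache:
    def __init__(self):
        # (op name, ids of input tensors) -> converted tensor
        self._cache = {}

    def run(self, op, *inputs):
        key = (op.__name__, tuple(id(t) for t in inputs))
        if key not in self._cache:
            self._cache[key] = op(*inputs)
        return self._cache[key]

def to_channels_last(w):
    return w.to(memory_format=torch.channels_last)

cache = ConversionCache()
weight = torch.randn(64, 3, 7, 7)
first = cache.run(to_channels_last, weight)   # first compilation: converts and caches
second = cache.run(to_channels_last, weight)  # recompilation: cache hit, same tensor
assert first is second
```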
7
2,218
103,987
[dynamo] Update Unsupported to raise from fake tensor exceptions
Stale, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #104360 * #103676 * __->__ #103987 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy
2
2,219
103,973
gfx906 ROCm prints black images in all AI apps with torch 2.0.1+rocm5.4.2/rocm5.5; only works with torch 1.13.0+rocm5.2
needs reproduction, module: rocm, triaged
### 🐛 Describe the bug I am not going to type everything here; I will summarize and then give links to the issues and discussions I opened on GitHub. I tried my gfx906 Radeon VII card with webui and InvokeAI: it works with torch==1.13.0+rocm5.2, but with torch==2.0.1+rocm5.4.2 I just get black renders. It works for lots of people, but I couldn't get it to work in my case. I think this could be related: normally this prints in the terminal with torch==1.13.0+rocm5.2: MIOpen(HIP): Warning [SQLiteBase] Missing system database file: gfx906_60.kdb Performance may degrade. Please follow instructions to install: https://github.com/ROCmSoftwarePlatform/MIOpen#installing-miopen-kernels-package But with the new torch it doesn't. Here is my history of this: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/9206 https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10873 At this point I am stuck with an outdated PyTorch. Also, my card is still in the ROCm support list: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html The installed ROCm version is 5.5.1 on my Ubuntu 22.04.2 LTS. ### Versions python: 3.10.6 working version: pip install torch==1.13.0+rocm5.2 torchvision==0.14.0+rocm5.2 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/rocm5.2 Not working: pip install torch==2.0.1+rocm5.4.2 torchvision==0.15.2+rocm5.4.2 --index-url https://download.pytorch.org/whl/rocm5.4.2 pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.5 cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
81
2,220
103,966
'MPS' Issue Running HuggingFace Transformer Pix2Struct Model
triaged, module: mps
### 🐛 Describe the bug I am running the transformer model Pix2StructForConditionalGeneration, Pix2StructProcessor on MacOS 13.4 on an iMac 27" 2020 with an AMD Radeon Pro 5700XT. The code to run the Transformer is here: https://huggingface.co/google/pix2struct-ai2d-base The code runs with 'CPU' but with 'MPS' gets: ``` ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ <ipython-input-5-7c03e79f5f42>:18 in <module> │ │ │ │ /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/tra │ │ nsformers/models/pix2struct/processing_pix2struct.py:156 in decode │ │ │ │ 153 │ │ This method forwards all its arguments to Pix2StructTokenizerF │ │ 154 │ │ refer to the docstring of this method for more information. │ │ 155 │ │ """ │ │ ❱ 156 │ │ return self.tokenizer.decode(*args, **kwargs) │ │ 157 │ │ │ 158 │ @property │ │ 159 │ def model_input_names(self): │ │ │ │ /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/tra │ │ nsformers/tokenization_utils_base.py:3485 in decode │ │ │ │ 3482 │ │ # Convert inputs to python lists │ │ 3483 │ │ token_ids = to_py_obj(token_ids) │ │ 3484 │ │ │ │ ❱ 3485 │ │ return self._decode( │ │ 3486 │ │ │ token_ids=token_ids, │ │ 3487 │ │ │ skip_special_tokens=skip_special_tokens, │ │ 3488 │ │ │ clean_up_tokenization_spaces=clean_up_tokenization_spaces │ │ │ │ /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/tra │ │ nsformers/tokenization_utils_fast.py:549 in _decode │ │ │ │ 546 │ │ │ │ 547 │ │ if isinstance(token_ids, int): │ │ 548 │ │ │ token_ids = [token_ids] │ │ ❱ 549 │ │ text = self._tokenizer.decode(token_ids, skip_special_tokens=s │ │ 550 │ │ │ │ 551 │ │ clean_up_tokenization_spaces = ( │ │ 552 │ │ │ clean_up_tokenization_spaces │ ╰──────────────────────────────────────────────────────────────────────────────╯ OverflowError: out of range integral type conversion attempted ``` <img width="1401" alt="Screenshot 2023-06-21 at 8 24 46 AM" src="https://github.com/pytorch/pytorch/assets/3105499/6d40d72e-7f90-49c5-805d-b071f191d707"> Here's the code: ``` import requests from PIL import Image from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(image_url, stream=True).raw) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base").to("mps") processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base") question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud" inputs = processor(images=image, text=question, return_tensors="pt").to("mps") predictions = model.generate(**inputs) print(processor.decode(predictions[0], skip_special_tokens=True)) ``` ### Versions ``` % python collect_env.py Collecting environment information... 
PyTorch version: 2.1.0.dev20230428 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.4 (x86_64) GCC version: Could not collect Clang version: 14.0.6 CMake version: version 3.22.1 Libc version: N/A Python version: 3.10.11 (main, Apr 20 2023, 13:59:00) [Clang 14.0.6 ] (64-bit runtime) Python platform: macOS-10.16-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz Versions of relevant libraries: [pip3] audiolm-pytorch==0.0.1 [pip3] configmypy==0.1.0 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.24.3 [pip3] pytorch-transformers==1.1.0 [pip3] tensorly-torch==0.4.0 [pip3] torch==1.14.0a0+git1c8b077 [pip3] torch-struct==0.5 [pip3] torch-summary==1.4.5 [pip3] torch-utils==0.1.2 [pip3] torchaudio==2.1.0.dev20230428 [pip3] torchtraining-nightly==1604016577 [pip3] torchvision==0.16.0.dev20230428 [pip3] vector-quantize-pytorch==0.9.2 [conda] nomkl 3.0 0 [conda] numpy 1.24.3 py310he50c29a_0 [conda] numpy-base 1.24.3 py310h992e150_0 [conda] pytorch-transformers 1.1.0 pypi_0 pypi [conda] tensorly-torch 0.4.0 pypi_0 pypi [conda] torch 2.0.0.dev20230211 pypi_0 pypi [conda] torch-struct 0.5 pypi_0 pypi [conda] torch-summary 1.4.5 pypi_0 pypi [conda] torch-utils 0.1.2 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230428 pypi_0 pypi [conda] torchtraining-nightly 1604016577 pypi_0 pypi [conda] torchvision 0.16.0.dev20230428 pypi_0 pypi [conda] vector-quantize-pytorch 0.9.2 pypi_0 pypi ``` cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
1
2,221
103,965
[ONNX] Isolate TorchScript-based code-base from Dynamo-based ONNX exporter for easier deprecation
module: onnx, triaged, enhancement
### 🐛 Describe the bug Currently, many files and helpers are shared between the TorchScript-based and Dynamo-based ONNX exporters. With the future deprecation of the TorchScript-based exporter and the rise of the Dynamo-based exporter in mind, we could start separating the two exporters into different folders/files to make deprecation easier. That should also make the code base more organized and compact. ### Versions pytorch main branch ```[tasklist] ### Tasks - [ ] https://github.com/pytorch/pytorch/pull/103942/ ```
0
2,222
103,962
How to unwrap after auto_wrap in FSDP?
oncall: distributed, triaged, module: fsdp
I am currently fine-tuning an LLM (LLaMA) and would like to retrieve the gradients of each weight (parameter) after every gradient update. However, I notice that weights are (auto-)wrapped into names like “_fsdp_wrapped_module._flat_param” during training. I need to map these wrapped weights back to the original LLaMA architecture, such as “self_attn.v_proj”. Any code examples? I guess “summon_full_params()” might be the function I am looking for, but I am not sure if that is correct, and I also have difficulty using it. Thanks a lot for any help! cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
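A hedged sketch of how `summon_full_params()` can be used for this; it assumes a recent FSDP where `summon_full_params` accepts `with_grads=True` (which in turn may require `use_orig_params=True`), so treat the exact arguments as assumptions rather than a definitive recipe.

```python
# Sketch: gather the unsharded parameters so they can be read under their
# original module names, then look at each parameter's gradient.
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def log_gradients(fsdp_model):
    # with_grads=True is assumed to be available in this PyTorch version
    with FSDP.summon_full_params(fsdp_model, with_grads=True):
        for name, param in fsdp_model.named_parameters():
            if param.grad is not None:
                # names are expected to look like "...self_attn.v_proj.weight"
                print(name, param.grad.norm().item())

# Typical placement in the training loop:
# loss.backward()
# log_gradients(model)
# optimizer.step()
```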
1
2,223
103,960
CODEOWNERS file has errors due to non existent people being referred to
module: docs, triaged
If you have a question or would like help and support, please ask at our [forums](https://discuss.pytorch.org/). If you are submitting a feature request, please preface the title with [feature request]. If you are submitting a bug report, please fill in the following details. ## Issue description https://github.com/pytorch/pytorch/blob/main/CODEOWNERS displays with errors ``` Unknown owner on line 25: make sure @z-a-f exists and has write access to the repository …ten/src/ATen/native/ao_sparse @z-a-f @salilsdesai @kimishpatel @digantdesai @jianyuh Unknown owner on line 26: make sure @z-a-f exists and has write access to the repository …/native/quantized @jerryzh168 @z-a-f @salilsdesai @kimishpatel @digantdesai @jianyuh Unknown owner on line 27: make sure @z-a-f exists and has write access to the repository …ive/quantized/cpu @jerryzh168 @z-a-f @salilsdesai @kimishpatel @digantdesai @jianyuh Unknown owner on line 31: make sure @z-a-f exists and has write access to the repository /test/ao/ @jerryzh168 @z-a-f @hdcharles Unknown owner on line 32: make sure @z-a-f exists and has write access to the repository …est/quantization/ @jerryzh168 @z-a-f Unknown owner on line 34: make sure @z-a-f exists and has write access to the repository ao/sparisty/ @z-a-f @hdcharles Unknown owner on line 38: make sure @z-a-f exists and has write access to the repository nn/quantizable/ @jerryzh168 @z-a-f Unknown owner on line 114: make sure @robieta exists and has write access to the repository torch/csrc/autograd/profiler* @robieta Unknown owner on line 115: make sure @robieta exists and has write access to the repository torch/autograd/profiler* @robieta Unknown owner on line 116: make sure @robieta exists and has write access to the repository torch/csrc/profiler/ @robieta Unknown owner on line 117: make sure @robieta exists and has write access to the repository torch/profiler/ @robieta Unknown owner on line 123: make sure @NivekT exists and has write access to the repository torch/utils/data/ @NivekT @ejguan ``` This should be reviewed to either remove users or ensure that they exist. I can submit a change to remove the reference to users in this file, if that is the right approach for all of them.. cc @svekars @carljparker
3
2,224
103,959
Need the full "Release Compatibility Matrix" of torch
oncall: binaries, oncall: releng, triaged
### 📚 The doc issue Hi, we often try out old projects, which may require installing an old version of PyTorch. However, there is no full compatibility matrix for PyTorch describing the compatibility limitations between the Python version, PyTorch version, and CUDA version. NOT THIS: https://github.com/pytorch/pytorch/wiki/PyTorch-Versions BUT SOMETHING LIKE THIS: https://github.com/pytorch/pytorch/blob/main/RELEASE.md ### Suggest a potential alternative/fix _No response_ cc @seemethere @malfet
3
2,225
103,958
How to modify gradients of an FSDP model?
oncall: distributed, module: fsdp
### 📚 The doc issue I've initially posted the question on [forum](https://discuss.pytorch.org/t/modify-gradients-of-an-fsdp-model/182159) 7 days ago, but crossposting here as well for better visibility since I couldn't get any answers there. Hi everyone, I have an FSDP model which has zeros in some of the `torch.nn.Linear.weight` parameters. During the training I would like to keep those parameters fixed to zeros, and to zero-out their gradients during backward as well. The specific use-case is: I am loading a pruned model and I want to fine-tune it with FSDP while keeping the pruning mask fixed. To achieve this I need to do two things: 1) multiply parameters with the mask before the forward pass (so that all pruned weights remain pruned), 2) multiply gradients of pruned parameters after the backward pass (so that gradients of pruned weights are zeros) In the standard DDP training I would achieve this by: 1) registering forward pre-hook on `torch.nn.Linear` modules and multiplying weights with the mask before each forward pass, 2) registering a hook on the parameter `torch.nn.Linear.weight` and multiplying its gradient with the mask. For example: ```python def keep_param_pruned(mask, module, input): with torch.no_grad(): module.weight.data.mul_(mask.to(module.weight.device)) def keep_grad_pruned(mask, grad): return grad.mul_(mask.to(grad.device)) for n, m in model.named_modules(): if isinstance(m, torch.nn.Linear): mask = m.weight > threshold m.register_forward_pre_hook(partial(keep_param_pruned, mask)) m.weight.register_hook(partial(keep_grad_pruned, mask)) ``` However, I am struggling to modify this idea to work with FSDP. Any suggestions/ideas on what I am doing wrong or if there is a simpler way to achieve this without playing with hooks? ### Suggest a potential alternative/fix _No response_ cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
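Not an FSDP-specific answer, but a minimal sketch of the general masked fine-tuning pattern the question describes, applied between `backward()` and `optimizer.step()`; whether this interacts cleanly with FSDP's flattened parameters (e.g. with `use_orig_params=True`) is an assumption here.

```python
# Sketch: re-apply the pruning masks to both weights and gradients each step.
import torch

def apply_masks(model, masks):
    # masks: dict mapping parameter name -> 0/1 tensor of the same shape
    with torch.no_grad():
        for name, param in model.named_parameters():
            mask = masks.get(name)
            if mask is None:
                continue
            mask = mask.to(param.device)
            param.mul_(mask)                 # keep pruned weights at zero
            if param.grad is not None:
                param.grad.mul_(mask)        # zero out their gradients too

# Training step:
# loss.backward()
# apply_masks(model, masks)
# optimizer.step()
```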
3
2,226
103,955
Error when training with multiple GPUs
oncall: distributed
### 🐛 Describe the bug train command: export NGPUS=2 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9531 ./train.py but after training for a while, it raises the following errors. What do they mean? WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 25236 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 1 (pid: 25237) of binary: /anaconda3/envs/paddle_env/bin/python Traceback (most recent call last): File "/anaconda3/envs/paddle_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/anaconda3/envs/paddle_env/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/anaconda3/envs/paddle_env/lib/python3.8/site-packages/torch/distributed/launch.py", line 195, in <module> main() File "/anaconda3/envs/paddle_env/lib/python3.8/site-packages/torch/distributed/launch.py", line 191, in main launch(args) File "/anaconda3/envs/paddle_env/lib/python3.8/site-packages/torch/distributed/launch.py", line 176, in launch run(args) File "/anaconda3/envs/paddle_env/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/anaconda3/envs/paddle_env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/anaconda3/envs/paddle_env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ### Versions pytorch 13.1 cuda 116 cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
3
2,227
103,949
torch.save() fails if path contains multibyte characters
module: serialization, triaged
### 🐛 Describe the bug After upgrading PyTorch from 1.8.1 (LTS) to 2.0.1. When saving a model using torch.save(), if the save destination path contains multi-byte characters, saving the model will fail. Below is the sample code and error message. Code to reproduce: ```python import torch model = torch.load("mnist_cnn.pt") save_path = model_save.pt torch.save(model, save_path) # OK save_path_multibyte = "C:\\Users\\ユーザー名\\source\\model_save_multibyte.pt" torch.save(model, save_path_multibyte) # NG ``` The error message: ``` Traceback (most recent call last): File ".\sample.py", line 147, in <module> main() File ".\sample.py", line 142, in main torch.save(model, save_path_multibyte) File "C:\Users\ユーザー名\source\.venv\lib\site-packages\torch\serialization.py", line 440, in save with _open_zipfile_writer(f) as opened_zipfile: File "C:\Users\ユーザー名\source\.venv\lib\site-packages\torch\serialization.py", line 315, in _open_zipfile_writer return container(name_or_buffer) File "C:\Users\ユーザー名\source\.venv\lib\site-packages\torch\serialization.py", line 288, in __init__ super().__init__(torch._C.PyTorchFileWriter(str(name))) RuntimeError: Parent directory C:\Users\ユーザー名\source\ does not exist. ``` ### Versions PyTorch version: 2.0.1+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 Pro GCC version: Could not collect Clang version: 11.0.1 CMake version: version 3.19.2 Libc version: N/A Python version: 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.19041-SP0 Is CUDA available: True CUDA runtime version: 11.0.221 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060 with Max-Q Design Nvidia driver version: 512.78 cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\cudnn_ops_train64_8.dll HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture=9 CurrentClockSpeed=3000 DeviceID=CPU0 Family=107 L2CacheSize=4096 L2CacheSpeed= Manufacturer=AuthenticAMD MaxClockSpeed=3000 Name=AMD Ryzen 9 4900HS with Radeon Graphics ProcessorType=3 Revision=24577 Versions of relevant libraries: [pip3] numpy==1.19.3 [pip3] torch==2.0.1+cu118 [pip3] torchaudio==2.0.2+cu118 [pip3] torchvision==0.15.2+cu118 [conda] Could not collect cc @mruberry @mikaylagawarecki
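One workaround worth trying, sketched below: open the file from Python and hand the file object to `torch.save`/`torch.load` instead of the path string, so the non-ASCII path is handled by Python rather than by `PyTorchFileWriter`; whether this sidesteps the error on this exact setup is an assumption.

```python
# Hedged workaround sketch for saving to a path with multibyte characters.
import torch

model = torch.nn.Linear(4, 2)  # stand-in for the MNIST model in the report
save_path_multibyte = "C:\\Users\\ユーザー名\\source\\model_save_multibyte.pt"

with open(save_path_multibyte, "wb") as f:
    torch.save(model, f)

with open(save_path_multibyte, "rb") as f:
    restored = torch.load(f)
```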
0
2,228
103,947
[torch.fx] Deserialization Error - TypeError: ones() received an invalid combination of arguments - got (tuple, device=Attribute)
high priority, triage review, module: fx, oncall: fx
### 🐛 Describe the bug Dear Pytorch Community, Recently, I'm working on my research project and trying to serialize and deserialize the `torch.fx GraphModule object` of `bloom-560m `model. I used pickle as the package for serialization and deserialization. Initially, I successfully serialized the GraphModule object using `pickle.dumps()`. However, when I try to deserialize the pickle object using `pickle.loads()`, it shows the error shown below: ``` Traceback (most recent call last): File "/Users/junchenzhao/Dist-CPU-Learn/test_inference.py", line 133, in <module> deserialized_modules = [pickle.loads(i) for i in serialized_modules] File "/Users/junchenzhao/Dist-CPU-Learn/test_inference.py", line 133, in <listcomp> deserialized_modules = [pickle.loads(i) for i in serialized_modules] File "/Users/junchenzhao/anaconda3/envs/onnx-training/lib/python3.10/site-packages/torch/fx/graph_module.py", line 109, in reduce_graph_module return _deserialize_graph_module(forward, body) File "/Users/junchenzhao/anaconda3/envs/onnx-training/lib/python3.10/site-packages/torch/fx/graph_module.py", line 168, in _deserialize_graph_module graph = KeepModules().trace(com, **tracer_extras) File "/Users/junchenzhao/anaconda3/envs/onnx-training/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace (self.create_arg(fn(*args)),), File "<eval_with_key>.7", line 18, in forward TypeError: ones() received an invalid combination of arguments - got (tuple, device=Attribute), but expected one of: * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) ``` Part of the GraphModule forward code to be serialized can be seen below, where error happens at the `ones`: ``` def forward(self, input_ids : torch.Tensor): 1 size = input_ids.size() 2 getitem = size[0] 3 getitem_1 = size[1]; size = None 4 transformer_word_embeddings = self.transformer.word_embeddings(input_ids) 5 transformer_word_embeddings_layernorm = self.transformer.word_embeddings_layernorm(transformer_word_embeddings); transformer_word_embeddings = None 6 getattr_1 = transformer_word_embeddings_layernorm.device ----> 7 ones = torch.ones((getitem, getitem_1), device = getattr_1); getattr_1 = None 8 size_1 = ones.size() 9 getitem_2 = size_1[0] 10 getitem_3 = size_1[1]; size_1 = None 11 getattr_2 = ones.device 12 tensor = torch.tensor(0.7071067811865476, device = getattr_2, dtype = torch.float32); getattr_2 = None 13 getattr_3 = ones.device 14 arange = torch.arange(1, 17, device = getattr_3, dtype = torch.int32); getattr_3 = None 15 pow_1 = torch.pow(tensor, arange); tensor = arange = None 16 cumsum = ones.cumsum(dim = -1) 17 sub = cumsum - 1; cumsum = None 18 mul = sub * ones; sub = None .... 
``` ### Reproduce A similar issue can also be reproduced using the code shown below: ``` import torch from torch.fx import symbolic_trace def test(input_ids): size = input_ids.size() getitem = size[0] getitem_1 = size[1]; getattr_1 = input_ids.device ones = torch.ones((getitem, getitem_1), device=getattr_1) return ones traced = symbolic_trace(test) ``` Same Error Message as you can see below by running the reproduce coding shown above: ``` 7 getitem_1 = size[1]; 8 getattr_1 = input_ids.device ----> 9 ones = torch.ones((getitem, getitem_1), device=getattr_1) 10 return ones 11 TypeError: ones() received an invalid combination of arguments - got (tuple, device=Attribute), but expected one of: * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) ``` If it's possible, could anyone help me with this issue? Thanks! ### Urgency This is urgent and hope to be resolved as soon as possible. ### Versions PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.3.1 (x86_64) GCC version: Could not collect Clang version: 14.0.3 (clang-1403.0.22.14.1) CMake version: version 3.26.3 Libc version: N/A Python version: 3.10.11 (main, Apr 20 2023, 13:59:00) [Clang 14.0.6 ] (64-bit runtime) Python platform: macOS-10.16-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 h0a44026_0 pytorch [conda] mkl 2023.1.0 h59209a4_43558 [conda] mkl-service 2.4.0 py310h6c40b1e_1 [conda] mkl_fft 1.3.6 py310h3ea8b11_1 [conda] mkl_random 1.2.2 py310h3ea8b11_1 [conda] numpy 1.24.3 py310h827a554_1 [conda] numpy-base 1.24.3 py310ha186be2_1 [conda] pytorch 2.0.1 py3.10_0 pytorch [conda] torchaudio 2.0.2 py310_cpu pytorch [conda] torchvision 0.15.2 py310_cpu pytorch cc @ezyang @gchanan @zou3519 @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @msaroufim @wconstab @bdhirsh @anijain2305
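As a point of comparison, here is a small baseline (my own sketch, not from the report) showing that `GraphModule` pickling round-trips when the traced forward does not feed a captured `.device` attribute into a factory call, which suggests the failure is specific to that pattern rather than to pickling in general.

```python
# Baseline sketch: pickle round-trip of a GraphModule without the device-attribute pattern.
import pickle
import torch
from torch.fx import symbolic_trace

class Plain(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 3)

    def forward(self, x):
        return torch.relu(self.linear(x))

gm = symbolic_trace(Plain())
restored = pickle.loads(pickle.dumps(gm))

x = torch.randn(2, 3)
print(torch.allclose(gm(x), restored(x)))  # expected: True
```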
2
2,229
103,935
DISABLED test_gather_state_dict_dtensor (__main__.TestShardUtilsDistributedDTensor)
oncall: distributed, triaged, skipped
Platforms: linux This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/distributed%2Ffsdp%2Ftest_shard_utils.py%3A%3ATestShardUtilsDistributedDTensor%3A%3Atest_gather_state_dict_dtensor)). This looks related to https://github.com/pytorch/pytorch/issues/103863 and it fails on trunk multigpu test https://hud.pytorch.org/pytorch/pytorch/commit/4bd14d97f81f10373cf5ce2d272e7e9035cc8f83: ``` 2023-06-20T23:41:02.6306643Z FAILED [2.9096s] distributed/fsdp/test_shard_utils.py::TestShardUtilsDistributedDTensor::test_gather_state_dict_dtensor - RuntimeError: Process 0 exited with error code 10 and exception: 2023-06-20T23:41:02.6307151Z Traceback (most recent call last): 2023-06-20T23:41:02.6307715Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657, in run_test 2023-06-20T23:41:02.6308130Z getattr(self, test_name)() 2023-06-20T23:41:02.6308678Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543, in wrapper 2023-06-20T23:41:02.6309048Z fn() 2023-06-20T23:41:02.6309592Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 174, in wrapper 2023-06-20T23:41:02.6310029Z func(self) # type: ignore[misc] 2023-06-20T23:41:02.6310623Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 174, in wrapper 2023-06-20T23:41:02.6311054Z return func(*args, **kwargs) 2023-06-20T23:41:02.6311814Z File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_shard_utils.py", line 81, in test_gather_state_dict_dtensor 2023-06-20T23:41:02.6312241Z device_mesh = self.build_device_mesh() 2023-06-20T23:41:02.6312855Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 101, in build_device_mesh 2023-06-20T23:41:02.6313360Z return DeviceMesh(DEVICE_TYPE, list(range(NUM_DEVICES))) 2023-06-20T23:41:02.6313940Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_tensor/device_mesh.py", line 124, in __init__ 2023-06-20T23:41:02.6314358Z self._get_or_create_default_group() 2023-06-20T23:41:02.6314932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_tensor/device_mesh.py", line 135, in _get_or_create_default_group 2023-06-20T23:41:02.6315449Z raise RuntimeError( 2023-06-20T23:41:02.6315822Z RuntimeError: Mesh should not be bigger than default world size, but found 4 ranks! ``` cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
1
2,230
103,913
[dynamo] functools.wraps : graph-break when wrapping nested functions.
triaged, oncall: pt2, module: dynamo
### 🐛 Describe the bug Repro: ```python import torch from torch._dynamo.utils import counters import json from functools import wraps counters.clear() def foo(x, y): def add(x, y): return x + y @wraps(add) def wrapped_call(x, y): return add(x, y) return wrapped_call(x, y) x = torch.ones(1,) y = torch.ones(1,) o = torch.compile(foo, fullgraph=False, backend='aot_eager')(x, y) torch.testing.assert_close(o, 2 * x) assert counters["graph_break"] == { "call_function wraps in skip_files /home/kshiteej/.conda/envs/pytorch-cuda-dev/lib/python3.9/functools.py": 1, "call_method UserDefinedObjectVariable(partial) __call__ [NestedUserFunctionVariable()] {}": 1 } # print(json.dumps(counters, indent=2)) ``` Note, that it doesn't graph-break for non-nested functions ```python import torch from torch._dynamo.utils import counters import json from functools import wraps counters.clear() def add(x, y): return x + y def foo(x, y): @wraps(add) def wrapped_call(x, y): return add(x, y) return wrapped_call(x, y) x = torch.ones(1,) y = torch.ones(1,) o = torch.compile(foo, fullgraph=False, backend='aot_eager')(x, y) torch.testing.assert_close(o, 2 * x) assert counters["graph_break"] == {} # print(json.dumps(counters, indent=2)) ``` Found in https://github.com/pytorch/pytorch/pull/102264 (this leads to graph-breaks when using nested functions with `torch.func.grad`) ### Versions master cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78
3
2,231
103,903
Remove setting eval mode in observed custom LSTM
fb-exported, release notes: AO frontend
Summary: Same as title. Test Plan: CI Differential Revision: D46831286
8
2,232
103,895
Issue with loading similar checkpoints in a distributed fashion
module: serialization, triaged
### 🐛 Describe the bug I am trying to load a model that is partitioned into 32 files on 128 GPUs, where each group of 4 GPUs loads the same checkpoint and takes a portion of the parameters. However, I am running into ` PytorchStreamReader failed reading zip archive: failed finding central directory` on just some ranks. I have also verified that I can load the same checkpoint that gives this error in a single process on its own. Here is part of the stack trace: ``` node-12: File "/.../deepspeed/runtime/engine.py", line 2765, in _load_zero_checkpoint node-12: zero_sd_list = self._get_all_zero_checkpoints(load_dir, tag, self.loaded_checkpoint_dp_world_size) node-12: File "/.../deepspeed/runtime/engine.py", line 2861, in _get_all_zero_checkpoints node-12: return self._get_all_zero_checkpoint_state_dicts(zero_ckpt_names, ckp_dp_size) node-12: File "/.../deepspeed/runtime/engine.py", line 2840, in _get_all_zero_checkpoint_state_dicts node-12: _state = self.checkpoint_engine.load( node-12: File "/.../deepspeed/runtime/checkpoint_engine/torch_checkpoint_engine.py", line 28, in load node-12: partition = torch.load(path, map_location=map_location) node-12: File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/serialization.py", line 777, in load node-12: with _open_zipfile_reader(opened_file) as opened_zipfile: node-12: File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/serialization.py", line 282, in __init__ node-12: super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) node-12: RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory node-12: [2023-06-20 09:11:11,157] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /checkpoint_path/iter_0000340/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... ``` Strangely enough, if I load the same files on 64 GPUs and split them 2-way, I don't run into the same problem!! My suspicion is that this has to do with CPU memory and how the file is loaded into memory in the distributed case. Here is the part of the code on the DeepSpeed side that triggers this issue: https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/engine.py#L2815 I am happy to help resolve the issue if it is on the DeepSpeed side; however, the logic over there is very simple! Can anyone help me narrow down the source of this problem? Thanks, Reza ### Versions Collecting environment information...
PyTorch version: 1.13.0 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.23.0 Libc version: glibc-2.31 Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.4.0-1101-azure-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.7.99 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB GPU 1: NVIDIA A100-SXM4-80GB GPU 2: NVIDIA A100-SXM4-80GB GPU 3: NVIDIA A100-SXM4-80GB GPU 4: NVIDIA A100-SXM4-80GB GPU 5: NVIDIA A100-SXM4-80GB GPU 6: NVIDIA A100-SXM4-80GB GPU 7: NVIDIA A100-SXM4-80GB Nvidia driver version: 510.85.02 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 1 Core(s) per socket: 48 Socket(s): 2 NUMA node(s): 4 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7V12 64-Core Processor Stepping: 0 CPU MHz: 3296.887 BogoMIPS: 4890.89 Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 3 MiB L1i cache: 3 MiB L2 cache: 48 MiB L3 cache: 384 MiB NUMA node0 CPU(s): 0-23 NUMA node1 CPU(s): 24-47 NUMA node2 CPU(s): 48-71 NUMA node3 CPU(s): 72-95 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip rdpid Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.22.2 [pip3] pytorch-lightning==1.9.3 [pip3] torch==1.13.0 [pip3] torch-nebula==0.16.2 [pip3] torch-ort==1.14.0 [pip3] torch-tb-profiler==0.4.1 [pip3] torchmetrics==0.11.3 [pip3] torchsnapshot==0.1.0 [pip3] torchvision==0.14.0+cu117 [conda] magma-cuda117 2.6.1 1 pytorch [conda] mkl 2021.4.0 pypi_0 pypi [conda] mkl-include 2021.4.0 pypi_0 pypi [conda] numpy 1.22.2 pypi_0 pypi [conda] 
pytorch-lightning 1.9.3 pypi_0 pypi [conda] torch 1.13.0 pypi_0 pypi [conda] torch-nebula 0.16.2 pypi_0 pypi [conda] torch-ort 1.14.0 pypi_0 pypi [conda] torch-tb-profiler 0.4.1 pypi_0 pypi [conda] torchmetrics 0.11.3 pypi_0 pypi [conda] torchsnapshot 0.1.0 pypi_0 pypi [conda] torchvision 0.14.0+cu117 pypi_0 pypi cc @mruberry @mikaylagawarecki
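A small debugging sketch (not from the report) that may help narrow this down: since zip-format `.pt` files typically fail with exactly this "failed finding central directory" message when the archive is truncated or partially written, have every rank log the file size and whether the file is a complete zip before calling `torch.load`. The path and the process-group check are illustrative assumptions.

```python
# Hedged sketch: per-rank sanity check before torch.load.
import os
import zipfile
import torch
import torch.distributed as dist

def checked_load(path, map_location="cpu"):
    rank = dist.get_rank() if dist.is_initialized() else 0
    size = os.path.getsize(path)
    is_zip = zipfile.is_zipfile(path)  # False for truncated/partially written files
    print(f"[rank {rank}] {path}: {size} bytes, valid zip: {is_zip}", flush=True)
    return torch.load(path, map_location=map_location)

# Example (path taken from the log above):
# state = checked_load(
#     "/checkpoint_path/iter_0000340/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt"
# )
```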
2
2,233
103,891
Docker images: faster linker for `torch.compile`
triaged, module: docker, oncall: pt2
### 🚀 The feature, motivation and pitch Since `torch.compile` is going to use the linker, can we switch the official PyTorch image to a faster linker by default (Gold, LLVM LLD, or [Mold](https://github.com/rui314/mold))? See also: https://github.com/pytorch/pytorch/issues/103417 ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
4
2,234
103,883
Runtime Error outerNode->outputs().size() == node->inputs().size() INTERNAL ASSERT FAILED when exporting custom operator
module: onnx, triaged
### 🐛 Describe the bug Dear community, I need to export to ONNX a method that is highly state-sensitive and difficult to make stateless, I've found a trick to that with giving the custom_op function the results to return directly, when doing so, it works very well as long as the method to deploy is doing branching only, but when computation is added, this fails with the error. `RuntimeError: outerNode->outputs().size() == node->inputs().size() INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/dead_code_elimination.cpp":140, please report a bug to PyTorch.` I'm having a hard time debugging that and any help would be greatly appreciated ! Here is a minimal working example that allows to reproduce the issue: In this example if you uncomment `# compute_to_use = fake_compute # This work` the export works fine. ```python from typing import Tuple, Union import torch class CustomClassOp_Func(torch.autograd.Function): @staticmethod @torch.no_grad() def forward( g: torch.onnx._internal.jit_utils.GraphContext, result: Union[torch.Value, Tuple[torch.Value]], *args: torch.Value, ) -> torch.Value: return result @staticmethod def symbolic( g: torch.onnx._internal.jit_utils.GraphContext, result: Union[torch.Value, Tuple[torch.Value]], *args: torch.Value, ) -> torch.Value: return g.op( f"CustomDomain::custom_op", *args, outputs=len(result), ) class Custom(torch.nn.Module): def real_compute(self, input1, input2): a = input1 + input2 b = 2 * input1 c = 2 * input2 return a,b,c def fake_compute(self, input1, input2): a = input1 b = input1 c = input2 return a,b,c # Change me compute_to_use = real_compute # This doesn't work # compute_to_use = fake_compute # This work @torch.no_grad() def custom_op_inference(self, *args): return CustomClassOp_Func.apply( self.compute_to_use(*args), *args ) def forward(self, input1, input2): if torch.onnx.is_in_onnx_export(): args = input1, input2 return self.custom_op_inference(*args) return self.compute_to_use(input1, input2) model = Custom() batch = (torch.FloatTensor(1, 3), torch.FloatTensor(1, 3)) torch.onnx.export(model, batch, "/tmp/model.onnx", opset_version=16, custom_opsets={"CustomDomain": 1}) ``` and the logtrace obtained ``` ============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 ============= verbose: False, log level: Level.ERROR ======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ======================== Traceback (most recent call last): File "/work/PSEE/metavision-computational-imaging/.vscode/.scratch/.model/colorizer_preprocessing.py", line 60, in <module> torch.onnx.export(model, batch, "/tmp/model.onnx", opset_version=16, custom_opsets={"CustomDomain": 1}) File "/home/edupuis/.cache/pypoetry/virtualenvs/cimaging-ml-qzr9xo67-py3.8/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export _export( File "/home/edupuis/.cache/pypoetry/virtualenvs/cimaging-ml-qzr9xo67-py3.8/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export graph, params_dict, torch_out = _model_to_graph( File "/home/edupuis/.cache/pypoetry/virtualenvs/cimaging-ml-qzr9xo67-py3.8/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph graph = _optimize_graph( File "/home/edupuis/.cache/pypoetry/virtualenvs/cimaging-ml-qzr9xo67-py3.8/lib/python3.8/site-packages/torch/onnx/utils.py", line 584, in _optimize_graph _C._jit_pass_lower_all_tuples(graph) RuntimeError: outerNode->outputs().size() == node->inputs().size() INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/dead_code_elimination.cpp":140, please report a bug to 
PyTorch. ``` ### Versions Collecting environment information... PyTorch version: 2.0.0+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.26.3 Libc version: glibc-2.31 Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA T500 Nvidia driver version: 530.30.02 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 39 bits physical, 48 bits virtual CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 140 Model name: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz Stepping: 1 CPU MHz: 4083.291 CPU max MHz: 4700.0000 CPU min MHz: 400.0000 BogoMIPS: 5606.40 Virtualization: VT-x L1d cache: 192 KiB L1i cache: 128 KiB L2 cache: 5 MiB L3 cache: 12 MiB NUMA node0 CPU(s): 0-7 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities Versions of relevant libraries: [pip3] mypy==1.3.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.5 [pip3] pytorch-lightning==2.0.2 [pip3] torch==2.0.0 [pip3] torch-tb-profiler==0.4.1 [pip3] torchmetrics==0.11.4 [pip3] 
torchvision==0.15.1 [pip3] triton==2.0.0 [conda] mkl 2022.1.0 hc2b9512_224 [conda] numpy 1.23.5 py38hf838250_0 [conda] numpy-base 1.23.5 py38h1e6e340_0 [conda] pytorch 1.12.1 cpu_py38h9dbd814_1 [conda] pytorch-mutex 1.0 cpu pytorch
2
2,235
103,879
Can ``torch.vmap`` add ``grad_fn``= SelectBackward when mapping over some dimension of the inputs?
triaged, actionable, module: functorch
### 🐛 Describe the bug When applying ``torch.vmap`` to calculate gradients, I run into the error ``element 0 of tensors does not require grad and does not have a grad_fn`` with the following code: ```python import torch from torch import vmap # Setup N = 5 def f(x): return torch.stack([x**2, x**3, x**4, x**5, x**6], dim=0) x = torch.randn(N, requires_grad=True) y = f(x) basis_vectors = torch.eye(N) def get_vjp(y, v): print(y) return torch.autograd.grad(y, x, v)[0] jacobian_vmap = vmap(get_vjp, in_dims=(0,0))(y, basis_vectors) ``` I checked the ``y`` received by ``get_vjp``, and the reason is that the mapping operation does not create a computational graph. ### Versions PyTorch version: 2.0.1 CUDA used to build PyTorch: 11.8 OS: Ubuntu 22.04.2 LTS (x86_64) cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
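A possible workaround (a sketch, not necessarily the maintainers' recommended fix) is to build the vector-Jacobian product with ``torch.func`` and let ``vmap`` batch the cotangents, so that no call to ``torch.autograd.grad`` runs inside the mapped function. The one-hot cotangent construction below is only an illustration on the same ``f`` as in the report:
```python
import torch
from torch.func import jacrev, vjp, vmap

N = 5

def f(x):
    # same function as in the report
    return torch.stack([x**2, x**3, x**4, x**5, x**6], dim=0)

x = torch.randn(N, requires_grad=True)

# Full Jacobian in one call; jacrev is built on vmap over vjp under the hood.
jacobian = jacrev(f)(x)
print(jacobian.shape)  # torch.Size([5, 5, 5]): d f[i, j] / d x[k]

# The same computation spelled out with vjp + vmap: one one-hot cotangent per
# output element, so torch.autograd.grad is never called inside the mapped function.
y, vjp_fn = vjp(f, x)
cotangents = torch.eye(y.numel()).reshape(-1, *y.shape)  # (25, 5, 5)
grads, = vmap(vjp_fn)(cotangents)                        # (25, 5)
jacobian_again = grads.reshape(*y.shape, *x.shape)       # (5, 5, 5)
```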
5
2,236
103,868
row.device().is_cpu() INTERNAL ASSERT FAILED at "csrc/cpu/diag_cpu.cpp":7
triaged, module: macos
### 🐛 Describe the bug [gist](https://gist.github.com/inoue0426/36baa6ec2f87d537e16b1029080654ed) I got the runtime error related to the M1 Mac, although this code works for Linux with GTX3080. (This may be related to PyTorch Geometric, not PyTorch.) ```python /Users/yoshitakainoue/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch_sparse/utils.py:20: UserWarning: MPS: no support for int64 min/max ops, casting it to int32 (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Sort.mm:39.) return inputs.sort() /Users/yoshitakainoue/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch_sparse/storage.py:69: UserWarning: MPS: no support for int64 min/max ops, casting it to int32 (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm:1271.) assert trust_data or int(row.max()) < M --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[16], line 16 13 model.train() 14 optimizer.zero_grad() ---> 16 outputs, all_attention, result = model(x, adj, train_drug, train_cell) 17 torch.cuda.empty_cache() 18 loss = criterion(outputs.squeeze(), train_labels) File ~/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] Cell In[12], line 17, in GAT.forward(self, x, adj, drug, cell) 16 def forward(self, x, adj, drug, cell): ---> 17 x, attention = self.gat1(x, adj.t(), return_attention_weights=True) 18 all_attention = get_attention_mat(attention) 19 del attention File ~/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch_geometric/nn/conv/gat_conv.py:244, in GATConv.forward(self, x, edge_index, edge_attr, size, return_attention_weights) 242 elif isinstance(edge_index, SparseTensor): 243 if self.edge_dim is None: --> 244 edge_index = torch_sparse.set_diag(edge_index) 245 else: 246 raise NotImplementedError( 247 "The usage of 'edge_attr' and 'add_self_loops' " 248 "simultaneously is currently not yet supported for " 249 "'edge_index' in a 'SparseTensor' form") File ~/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch_sparse/diag.py:41, in set_diag(src, values, k) 38 src = remove_diag(src, k=k) 39 row, col, value = src.coo() ---> 41 mask = torch.ops.torch_sparse.non_diag_mask(row, col, src.size(0), 42 src.size(1), k) 43 inv_mask = ~mask 45 start, num_diag = -k if k < 0 else 0, mask.numel() - row.numel() File ~/.pyenv/versions/miniforge3-4.10.1-5/lib/python3.9/site-packages/torch/_ops.py:502, in OpOverloadPacket.__call__(self, *args, **kwargs) 497 def __call__(self, *args, **kwargs): 498 # overloading __call__ to ensure torch.ops.foo.bar() 499 # is still callable from JIT 500 # We save the function ptr as the `op` attribute on 501 # OpOverloadPacket to access it here. --> 502 return self._op(*args, **kwargs or {}) RuntimeError: row.device().is_cpu() INTERNAL ASSERT FAILED at "csrc/cpu/diag_cpu.cpp":7, please report a bug to PyTorch. row must be CPU tensor ``` ### Versions ``` Collecting environment information... PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.2.1 (arm64) GCC version: Could not collect Clang version: 14.0.3 (clang-1403.0.22.14.1) CMake version: version 3.23.3 Libc version: N/A Python version: 3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:24:55) [Clang 11.1.0 ] (64-bit runtime) Python platform: macOS-13.2.1-arm64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M1 Versions of relevant libraries: [pip3] flake8==5.0.3 [pip3] numpy==1.23.1 [pip3] torch==2.0.1 [pip3] torch-geometric==2.3.1 [pip3] torch-scatter==2.1.1 [pip3] torch-sparse==0.6.17 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [conda] numpy 1.23.1 pypi_0 pypi [conda] torch 2.0.1 pypi_0 pypi [conda] torch-geometric 2.3.1 pypi_0 pypi [conda] torch-scatter 2.1.1 pypi_0 pypi [conda] torch-sparse 0.6.17 pypi_0 pypi [conda] torchaudio 2.0.2 pypi_0 pypi [conda] torchvision 0.15.2 pypi_0 pypi ``` cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
1
2,237
103,863
DISABLED test_create_chunk_dtensor (__main__.TestShardUtilsDistributedDTensor)
oncall: distributed, triaged, skipped
Platforms: linux This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/distributed%2Ffsdp%2Ftest_shard_utils.py%3A%3ATestShardUtilsDistributedDTensor%3A%3Atest_create_chunk_dtensor)). This test is failing on trunk with the following error https://hud.pytorch.org/pytorch/pytorch/commit/3e42854caa6bfd4cd2694b7da0b13ddd9695ca58: ``` 2023-06-19T19:45:30.2835537Z =========================== short test summary info ============================ 2023-06-19T19:45:30.2836236Z FAILED [2.8097s] distributed/fsdp/test_shard_utils.py::TestShardUtilsDistributedDTensor::test_create_chunk_dtensor - RuntimeError: Process 0 exited with error code 10 and exception: 2023-06-19T19:45:30.2836752Z Traceback (most recent call last): 2023-06-19T19:45:30.2837391Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657, in run_test 2023-06-19T19:45:30.2837814Z getattr(self, test_name)() 2023-06-19T19:45:30.2838344Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543, in wrapper 2023-06-19T19:45:30.2838742Z fn() 2023-06-19T19:45:30.2839347Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 174, in wrapper 2023-06-19T19:45:30.2839777Z func(self) # type: ignore[misc] 2023-06-19T19:45:30.2840340Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 174, in wrapper 2023-06-19T19:45:30.2840755Z return func(*args, **kwargs) 2023-06-19T19:45:30.2841180Z File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_shard_utils.py", line 64, in test_create_chunk_dtensor 2023-06-19T19:45:30.2841588Z device_mesh = self.build_device_mesh() 2023-06-19T19:45:30.2842211Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 101, in build_device_mesh 2023-06-19T19:45:30.2842710Z return DeviceMesh(DEVICE_TYPE, list(range(NUM_DEVICES))) 2023-06-19T19:45:30.2843264Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_tensor/device_mesh.py", line 124, in __init__ 2023-06-19T19:45:30.2843685Z self._get_or_create_default_group() 2023-06-19T19:45:30.2844267Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_tensor/device_mesh.py", line 135, in _get_or_create_default_group 2023-06-19T19:45:30.2844679Z raise RuntimeError( 2023-06-19T19:45:30.2845023Z RuntimeError: Mesh should not be bigger than default world size, but found 4 ranks! ``` cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
1
2,238
103,862
FSDP full precision `model.eval` silently failing
triaged, module: fsdp
### 🐛 Describe the bug I'm calling `model.eval()` on an FSDP wrapped model to evaluate performance in full precision. Some parameters are not getting cast properly, leading to incorrect results (but no errors). ### Fix that works: Traversing the `wrapped_model.named_modules()` and setting `module.train(False)` on the correct modules (prone to breakage) ### Fix that doesn't work but may be illuminating in debugging: Traversing the wrapped module and calling `module.train(False)` instead of calling `wrapped_model.eval()` makes the failure not silent: `F.scaled_dot_product_attention` throws an error that it got inputs of fp32 but expected inputs of bf16. Specifically, calling `module.train(False)` on the module that is the block for FSDP wrapping causes the break. ### Versions Collecting environment information... PyTorch version: 2.1.0.dev20230609+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.31 Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-1035-oracle-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 515.105.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 256 On-line CPU(s) list: 0-127 Off-line CPU(s) list: 128-255 Thread(s) per core: 1 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 25 Model: 1 Model name: AMD EPYC 7J13 64-Core Processor Stepping: 1 Frequency boost: enabled CPU MHz: 2550.000 CPU max MHz: 3673.0950 CPU min MHz: 1500.0000 BogoMIPS: 4899.83 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 64 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 NUMA node2 CPU(s): 32-47 NUMA node3 CPU(s): 48-63 NUMA node4 CPU(s): 64-79 NUMA node5 CPU(s): 80-95 NUMA node6 CPU(s): 96-111 NUMA node7 CPU(s): 112-127 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f1 6c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] pytorch-triton==2.1.0+9820899b38 [pip3] torch==2.1.0.dev20230609+cu118 [pip3] torchaudio==2.1.0.dev20230609+cu118 [pip3] torchvision==0.16.0.dev20230609+cu118 [pip3] triton==2.0.0 [conda] numpy 1.24.1 pypi_0 pypi [conda] pytorch-triton 2.1.0+9820899b38 pypi_0 pypi [conda] torch 2.1.0.dev20230609+cu118 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230609+cu118 pypi_0 pypi [conda] torchvision 0.16.0.dev20230609+cu118 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @zhaojuanmao @mrshenli @rohan-varma @awgu
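A minimal sketch of the "fix that works" described above; the report does not spell out which modules are the "correct" ones, so skipping the FSDP wrapper instances themselves (and setting the flag non-recursively instead of calling ``module.train(False)``) is an assumption here:
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def eval_without_wrapper_eval(wrapped_model: torch.nn.Module) -> torch.nn.Module:
    # Walk the wrapped module tree ourselves instead of calling wrapped_model.eval().
    for _, module in wrapped_model.named_modules():
        if isinstance(module, FSDP):
            # Leave the FSDP wrappers untouched; the report notes that toggling the
            # wrapped block directly is what surfaces the dtype mismatch.
            continue
        module.training = False  # non-recursive variant of module.train(False)
    return wrapped_model
```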
8
2,239
103,860
Long PR description leads to "Argument list too long" error from docker
module: ci, triaged, module: devx
### 🐛 Describe the bug Per title. Also, if your PR description contains an error message, it can trick the HUD into highlighting that line. An example: https://github.com/pytorch/pytorch/pull/103735 (I've since updated to link to the error message in a gist, see edit history for the original PR description) ### Versions main cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @huydhn @clee2000
7
2,240
103,857
[ONNX] FX exporter: replace `aten::copy_` with out-place version
module: onnx, triaged
In-place ops like `aten::copy_` should be replaced or removed before converting to ONNX. Standard cases are handled by the functionalization pass. This issue targets edge cases like [nn.InstanceNorm with track_running_stats=True](https://pytorch.org/docs/master/generated/torch.nn.InstanceNorm2d.html#torch.nn.InstanceNorm2d), which **updates model state** even in inference mode. Perhaps the solution is to write an `unsafe_remove_inplace_pass` that emits diagnostic warnings to detect and handle these in-place ops. Repro: ```python import torch class Model(torch.nn.Module): def __init__(self): super().__init__() self.instance_norm = torch.nn.InstanceNorm1d(3, affine=True, track_running_stats=True) def forward(self, x, y): out = x + y out = self.instance_norm(out) out = out + x return out model = Model() model.eval() x = torch.randn(1, 3, 3) y = torch.randn(1, 3, 3) torch.onnx.dynamo_export(model, x, y) ``` cc @BowenBao @thiagocrepaldi @wschin
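A minimal sketch of what such an `unsafe_remove_inplace_pass` could look like on an ATen-level `torch.fx.GraphModule`; the name and exact behavior come from the idea above rather than from an existing API, so treat this purely as an illustration:
```python
import logging
import torch
import torch.fx

logger = logging.getLogger(__name__)

def unsafe_remove_inplace_pass(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Drop aten.copy_ nodes and forward their source values (illustrative only).

    This silently discards the state update (e.g. InstanceNorm running stats),
    which is why the issue calls the pass 'unsafe'; a real pass would emit proper
    diagnostics instead of a log message.
    """
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target is torch.ops.aten.copy_.default:
            dst, src = node.args[0], node.args[1]
            logger.warning("Removing in-place %s; the update to %s is dropped", node, dst)
            # copy_(dst, src) returns dst holding src's values, so once the mutation
            # is removed, downstream users of the node can read src directly.
            node.replace_all_uses_with(src)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```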
10
2,241
103,852
Segmentation fault when tensorrt is imported before torch
needs reproduction, module: build, triaged, has workaround
### 🐛 Describe the bug In the following snippet, importing tensorrt before torch will segfault, but importing tensorrt after torch will work. ```python import tensorrt import torch with torch.inference_mode(): model = torch.nn.Conv2d(3, 3, 1).eval().cuda() x = torch.rand((1, 3, 4, 4)).cuda() print(model(x)) ``` ``` holywu@HolyWu:~$ python3 -X faulthandler test.py Fatal Python error: Segmentation fault Current thread 0x00007f03aaec3000 (most recent call first): File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 459 in _conv_forward File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 463 in forward File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511 in _call_impl File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1502 in _wrapped_call_impl File "/home/holywu/test.py", line 7 in <module> Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special (total: 20) Segmentation fault ``` ### Versions Collecting environment information... PyTorch version: 2.1.0.dev20230619+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.35 Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Nvidia driver version: 536.23 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i5-13400F CPU family: 6 Model: 191 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 2 BogoMIPS: 4991.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes 
vpclmulqdq rdpid movdiri movdir64b fsrm serialize flush_l1d arch_capabilities Virtualization: VT-x Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 384 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 10 MiB (8 instances) L3 cache: 20 MiB (1 instance) Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.25.0 [pip3] pytorch-triton==2.1.0+440fd1bf20 [pip3] torch==2.1.0.dev20230619+cu121 [pip3] torchaudio==2.1.0.dev20230619+cu121 [pip3] torchvision==0.16.0.dev20230619+cu121 [conda] Could not collect TensorRT version: TensorRT 8.6 GA for Ubuntu 22.04 and CUDA 12.0 and 12.1 (nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0_1.0-1_amd64.deb) cc @malfet @seemethere
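For completeness, the workaround already implied by the report (swap the import order so torch loads before tensorrt) looks like this; it only sidesteps the crash and does not address the underlying symbol clash:
```python
import torch      # importing torch first avoids the segfault described above
import tensorrt

with torch.inference_mode():
    model = torch.nn.Conv2d(3, 3, 1).eval().cuda()
    x = torch.rand((1, 3, 4, 4)).cuda()
    print(model(x))
```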
1
2,242
103,848
torch compile aten::floor_divide error
triaged, oncall: pt2
### 🐛 Describe the bug When compiling [segment_anything](https://github.com/facebookresearch/segment-anything) with torch_tensorrt, I got the error `We don't have an op for aten::floor_divide but it isn't a special case`. My reproducible code is: ``` from segment_anything import sam_model_registry, SamPredictor import torch_tensorrt sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth") input_signature = ([torch_tensorrt.Input(shape=frame.shape, dtype=torch.half)]) enabled_precisions = {torch.half,} sam.image_encoder = torch_tensorrt.compile(sam.image_encoder, input_signature=input_signature, enabled_precisions=enabled_precisions) ``` The error message is: ``` Traceback (most recent call last): File "realtime.py", line 144, in <module> main() File "realtime.py", line 116, in main sam = newSam(frame) File "realtime.py", line 86, in newSam sam.image_encoder = torch_tensorrt.compile(sam.image_encoder, input_signature=input_signature, enabled_precisions=enabled_precisions) File "/home/topunion/.local/lib/python3.8/site-packages/torch_tensorrt/_compile.py", line 133, in compile return torch_tensorrt.ts.compile( File "/home/topunion/.local/lib/python3.8/site-packages/torch_tensorrt/ts/_compiler.py", line 139, in compile compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec)) RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::floor_divide but it isn't a special case. Argument types: int, int, Candidates: aten::floor_divide(Tensor self, Tensor other) -> Tensor aten::floor_divide.Scalar(Tensor self, Scalar other) -> Tensor aten::floor_divide.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!) ``` ### Versions Collecting environment information... 
PyTorch version: 2.0.1+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.31 Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Nvidia driver version: 530.41.03 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 12 On-line CPU(s) list: 0-11 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 113 Model name: AMD Ryzen 5 3600 6-Core Processor Stepping: 0 Frequency boost: enabled CPU MHz: 2200.000 CPU max MHz: 3600.0000 CPU min MHz: 2200.0000 BogoMIPS: 7200.08 Virtualization: AMD-V L1d cache: 192 KiB L1i cache: 192 KiB L2 cache: 3 MiB L3 cache: 32 MiB NUMA node0 CPU(s): 0-11 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.1 [pip3] torch==2.0.1+cu118 [pip3] torch-tensorrt==1.4.0 [pip3] torchaudio==2.0.2+cu118 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] Could not collect cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
1
2,243
103,847
Some parameters are missing type descriptions
module: docs, triaged, actionable
### 📚 The doc issue Many parameters do not document what type of value they accept, so the allowed types should be clearly marked in the documentation. API | parameters missing a type description -- | -- torch.qr | input torch.combinations | input torch.logical_xor | input、other torch.nn.functional.rrelu | training torch.prod | input torch.log2 | input torch.log10 | input torch.bitwise_or | input、other torch.asin | input torch.numel | input torch.nn.functional.avg_pool2d | input torch.square | input torch.empty_like | input torch.asinh | input torch.nn.LPPool2d | norm_type torch.msort | input torch.acos | input torch.mul | input torch.cholesky_solve | input1、input2 torch.kthvalue | input torch.log | input torch.atanh | input torch.t | input torch.squeeze | input torch.min | input torch.count_nonzero | input torch.sum | input torch.diagonal | input torch.polygamma | input torch.sub | input torch.nn.functional.max_unpool1d | indices torch.fmod | input torch.argsort | input torch.nanquantile | input torch.median | input torch.cos | input torch.unsqueeze | input torch.lt | input torch.ger | input、vec2 torch.clamp | input torch.mv | input、vec torch.tanh | input torch.sort | input torch.isreal | input torch.stack | tensors torch.nn.BCEWithLogitsLoss | weight torch.trace | input torch.nn.functional.softmax | _stacklevel torch.logaddexp2 | input、other torch.cdist | x1、x2 torch.igamma | input、other torch.exp | input torch.nn.functional.pixel_shuffle | input torch.view_as_complex | input torch.nn.functional.pad | input torch.bitwise_and | input、other torch.nn.functional.l1_loss | target torch.ravel | input torch.equal | input、other torch.save | _use_new_zipfile_serialization torch.nn.functional.lp_pool1d | norm_type torch.nn.functional.lp_pool2d | norm_type ### Suggest a potential alternative/fix _No response_ cc @svekars @carljparker
0
2,244
103,846
The document style is inconsistent with other documents, and the parameter type is not clearly highlighted
module: docs, triaged, actionable
### 📚 The doc issue The document style is inconsistent with other documents, and the parameter type is not clearly highlighted in torch.nn.functional.conv_transpose3d、torch.nn.functional.conv_transpose2d、torch.nn.functional.bilinear、torch.nn.functional.conv1d、torch.nn.functional.max_unpool3d、torch.nn.functional.max_pool3d、torch.nn.functional.conv3d、torch.nn.MultiheadAttention、torch.nn.functional.avg_pool1d、torch.nn.functional.avg_pool3d、torch.nn.functional.max_pool2d、torch.nn.functional.linear、torch.nn.functional.max_pool1d. ![image](https://github.com/pytorch/pytorch/assets/45327670/f61a1fc2-d70c-4b6a-ad38-9a8579305cd6) ![image](https://github.com/pytorch/pytorch/assets/45327670/d04ebb60-305f-41de-89ec-50dc7e2ba42e) ### Suggest a potential alternative/fix _No response_ cc @svekars @carljparker
0
2,245
103,844
Missing examples in some API docs
module: docs, triaged, actionable
### 📚 The doc issue torch.nn.functional.mse_loss、torch.are_deterministic_algorithms_enabled、torch.nn.functional.batch_norm、torch.nn.functional.triplet_margin_loss、torch.nn.RNNBase are missing examples in their API docs. ### Suggest a potential alternative/fix _No response_ cc @svekars @carljparker
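As an illustration of the kind of snippet being requested, a doctest-style example for `torch.nn.functional.mse_loss` might look like the following (a sketch of the usual docs style, not text taken from the documentation):
```python
import torch
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss = F.mse_loss(input, target)  # mean reduction by default
loss.backward()
print(loss.item())
```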
2
2,246
103,841
[question] [docs] Short/mid/long-term status of TorchScript / JIT / torch.jit.trace / FX / symbolic tracing and its replacement by Dynamo
oncall: jit, module: docs, triaged
### 📚 The doc issue At https://github.com/pytorch/vision/pull/7624#discussion_r1233691474 I stumbled on what looks like a lack of official communication about the short-term / mid-term / long-term status of the TorchScript / JIT interpreter. In that case, the need to support TorchScript forced the use of inelegant indexing instead of the simpler `tensor[..., perm, :, :]`, which does not seem to be supported by TorchScript (by the way, is this construct supported by Dynamo?). Is there any official guidance on the need to support TorchScript in new code of the domain libraries / in general? Does TorchScript continue to receive development? Is there a deprecation calendar? I would propose that this information / overview (even if no decisions have been made yet) be made very clear directly on the main page of https://pytorch.org/docs (preferably at the top of that long page). This question comes from the fact that PyTorch has offered many tracing/scripting/compilation technologies over the years, their recipes/tutorials are all still available, and users need information about at least the mid-term plans and recommendations of the core team. My related question on deployment: https://discuss.pytorch.org/t/torch-compiles-deployment-story-to-non-python-host-processes/180943/4 - it would be good to have this explained very clearly as well cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @svekars @carljparker @ezyang @albanD
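For concreteness, the indexing construct in question and one scriptable way to express the same selection are shown below; the use of `index_select` here is an assumed equivalent (for a 1-D `perm` over the third-from-last dimension), not necessarily the exact rewrite used in the linked PR, and whether Dynamo handles the ellipsis form is exactly the open question above.
```python
import torch

perm = torch.tensor([2, 0, 1])
x = torch.randn(4, 3, 5, 5)

y_ellipsis = x[..., perm, :, :]          # the construct TorchScript rejects per the PR discussion
y_scriptable = x.index_select(-3, perm)  # equivalent selection that TorchScript accepts

assert torch.equal(y_ellipsis, y_scriptable)
```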
4
2,247
103,840
Gradient operations (zero_grad and gradient accumulations) as graphs
module: autograd, triaged, oncall: pt2
### 🚀 The feature, motivation and pitch Hi, We can already see in PT2.0 that operations which are not part of the main FWD/BWD/OPT blocks - especially **zero_grad** and **gradient accumulation** - can become a serious host bottleneck because they scale with topology size and are executed eagerly. Are there any plans to incorporate these operations into the graphs in PT2.1 or later? Thanks. ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @msaroufim @wconstab @bdhirsh @anijain2305
15
2,248
103,837
type conflict
module: cpp, triaged
### 🐛 Describe the bug this is a c++ code. when I compiler it, I got a compiler error. this error occurs when I add `typedef struct data data` before `#include <torch/torch.h>` The LibTorch I use is `libtorch-cxx11-abi-shared-with-deps-1.10.0+cu113.zip`, Ubuntu 22.04. ``` // main.cpp typedef struct data data; #include <torch/torch.h> #include <iostream> int main() { torch::Tensor tensor = torch::rand({2, 3}); std::cout << tensor << std::endl; } ``` compiler output ``` wsl@DESKTOP-ULB33J0:~/src/libtorch_test/build$ make Consolidate compiler generated dependencies of target example-app [ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o In file included from /home/wsl/libtorch/include/torch/csrc/api/include/torch/enum.h:7, from /home/wsl/libtorch/include/torch/csrc/api/include/torch/all.h:9, from /home/wsl/libtorch/include/torch/csrc/api/include/torch/torch.h:3, from /home/wsl/src/libtorch_test/example-app.cpp:2: /home/wsl/libtorch/include/c10/util/variant.h: In instantiation of ‘static constexpr auto&& c10::detail_::access::base::get_alt(V&&) [with long unsigned int I = 0; V = const c10::detail_::base<c10::detail_::Trait::TriviallyAvailable, torch::ExpandingArray<1, long int>, torch::enumtype::kValid, torch::enumtype::kSame>&]’: /home/wsl/libtorch/include/c10/util/variant.h:1205:31: required from ‘static constexpr decltype(auto) c10::detail_::visitation::alt::visit_alt(Visitor&&, Vs&& ...) [with Visitor = c10::detail_::visitation::variant::value_visitor<torch::nn::functional::detail::conv1d(const at::Tensor&, const at::Tensor&, const at::Tensor&, torch::ExpandingArray<1>, const padding_t&, torch::ExpandingArray<1>, int64_t)::<lambda(const auto:14&)> >; Vs = {const c10::detail_::impl<torch::ExpandingArray<1, long int>, torch::enumtype::kValid, torch::enumtype::kSame>&}]’ /home/wsl/libtorch/include/c10/util/variant.h:1721:13: required from ‘static constexpr decltype(auto) c10::detail_::visitation::variant::visit_alt(Visitor&&, Vs&& ...) [with Visitor = c10::detail_::visitation::variant::value_visitor<torch::nn::functional::detail::conv1d(const at::Tensor&, const at::Tensor&, const at::Tensor&, torch::ExpandingArray<1>, const padding_t&, torch::ExpandingArray<1>, int64_t)::<lambda(const auto:14&)> >; Vs = {const c10::variant<torch::ExpandingArray<1, long int>, torch::enumtype::kValid, torch::enumtype::kSame>&}]’ /home/wsl/libtorch/include/c10/util/variant.h:1736:13: required from ‘static constexpr decltype(auto) c10::detail_::visitation::variant::visit_value(Visitor&&, Vs&& ...) [with Visitor = torch::nn::functional::detail::conv1d(const at::Tensor&, const at::Tensor&, const at::Tensor&, torch::ExpandingArray<1>, const padding_t&, torch::ExpandingArray<1>, int64_t)::<lambda(const auto:14&)>; Vs = {const c10::variant<torch::ExpandingArray<1, long int>, torch::enumtype::kValid, torch::enumtype::kSame>&}]’ /home/wsl/libtorch/include/c10/util/variant.h:2866:51: required from ‘constexpr decltype(auto) c10::visit(Visitor&&, Vs&& ...) 
[with Visitor = torch::nn::functional::detail::conv1d(const at::Tensor&, const at::Tensor&, const at::Tensor&, torch::ExpandingArray<1>, const padding_t&, torch::ExpandingArray<1>, int64_t)::<lambda(const auto:14&)>; Vs = {const c10::variant<torch::ExpandingArray<1, long int>, torch::enumtype::kValid, torch::enumtype::kSame>&}]’ /home/wsl/libtorch/include/torch/csrc/api/include/torch/nn/functional/conv.h:35:20: required from here /home/wsl/libtorch/include/c10/util/variant.h:1181:7: error: invalid use of incomplete type ‘data’ {aka ‘struct data’} 1181 | AUTO_REFREF_RETURN(recursive_union::get_alt( | ^~~~~~~~~~~~~~~~~~ /home/wsl/src/libtorch_test/example-app.cpp:1:16: note: forward declaration of ‘data’ {aka ‘struct data’} 1 | typedef struct data data; | ^~~~ ``` Is there has a way to avoid this bug? thank you! ### Versions Collecting environment information... PyTorch version: 1.13.0+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.1 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.7.99 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU Nvidia driver version: 516.54 cuDNN version: Probably one of the following: /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.9.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: AuthenticAMD Model name: AMD Ryzen 7 5800H with Radeon Graphics CPU family: 25 Model: 80 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 0 BogoMIPS: 6387.82 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm Virtualization: AMD-V Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 256 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 4 MiB (8 instances) L3 cache: 16 MiB (1 instance) Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not 
affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccompVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==1.13.0+cu117 [pip3] torchaudio==0.13.0+cu117 [pip3] torchvision==0.14.0+cu117 [conda] Could not collect cc @jbschlosser
0
2,249
103,836
Please consider the SCFA/dynamic flash attention for your implementation of scaled dot product attention
module: sparse, triaged, oncall: transformer/mha, topic: new features
### 🚀 The feature, motivation and pitch Sparse Causal Flash Attention as implemented [here](https://github.com/epfml/dynamic-sparse-flash-attention) and described in [this paper](https://arxiv.org/abs/2306.01160) claims to > extend FlashAttention to accommodate a large class of attention sparsity patterns that, in particular, encompass key/query dropping and hashing-based attention. This leads to implementations with no computational complexity overhead and a multi-fold runtime speedup on top of FlashAttention. Even with relatively low degrees of sparsity, our method improves visibly upon FlashAttention as the sequence length increases. Without sacrificing perplexity, we increase the training speed of a transformer language model by 2.0× and 3.3× for sequences of respectively 8k and 16k tokens. Adding these implementations to mainline PyTorch might let Transformers use flash attention, PEFT/LoRA, and methods for handling long contexts together. ### Alternatives 1. Leaving everything as is and finalising the current implementation to allow handling attention masks (see issue #96099 ) 2. Relying on flash_attn by HazyResearch, which works great on its own but apparently does not work with LoRA ### Additional context I'm just not sure whether functions written in Triton are readily convertible into C++ functions. cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jbschlosser @erichan1 @drisspg
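For reference, the dense entry point this request would extend is `torch.nn.functional.scaled_dot_product_attention`; a minimal sketch of today's causal usage (which can already dispatch to FlashAttention when the inputs are eligible) is shown below, with the shapes chosen only for illustration:
```python
import torch
import torch.nn.functional as F

# (batch, heads, sequence length, head dim), half precision on GPU for the fused kernels
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# Dense causal attention; the sparsity patterns from the paper (key/query dropping,
# hashing-based bucketing) have no equivalent argument here today.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```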
1
2,250
103,832
Torch 1.13 for GPU breaks if libcublas is already present.
triaged, module: docker
### 🐛 Describe the bug When I install pytorch into the docker image `gcr.io/deeplearning-platform-release/base-cu113` from https://cloud.google.com/deep-learning-containers/docs/choosing-container or `nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04` with gpu, it also installs `nvidia-cublas-cu11` as a dependency, but that library is already present in these images, so importing torch errors out; see ```python >>> import torch Traceback (most recent call last): File "/root/.cache/pypoetry/virtualenvs/my-env-jEFYeG2G-py3.10/lib/python3.10/site-packages/torch/__init__.py", line 172, in _load_global_deps ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL) File "/opt/conda/lib/python3.10/ctypes/__init__.py", line 374, in __init__ self._handle = _dlopen(self._name, mode) OSError: /root/.cache/pypoetry/virtualenvs/my-env-jEFYeG2G-py3.10/lib/python3.10/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11 ``` see https://stackoverflow.com/a/75095447/5224881 where people were also struggling with it. The CUDA toolkit is automatically installed on GCP docker images, which makes this hard to work around. When I uninstall `nvidia-cublas-cu11`, it starts working, but I would like to be able to manage this purely through poetry. Of course I can manually uninstall the dependency, but an option to install or uninstall it using extras would be very welcome. ### Versions [pip3] flake8==3.9.2 [pip3] mypy==0.950 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.3 [pip3] torch==1.13.1 [conda] numpy 1.23.5 pypi_0 pypi
0
2,251
103,831
[dynamo] AssertionError for custom iterable nn.Module
triaged, oncall: pt2
### 🐛 Describe the bug Using a custom iterable nn.Module causes an AssertionError when dynamo is enabled. The use case here is creating an equivalent of nn.ParameterList, but for buffers, like in [this issue](https://github.com/pytorch/pytorch/issues/37386) or [this question](https://discuss.pytorch.org/t/why-no-nn-bufferlist-like-function-for-registered-buffer-tensor/18884/3) The code hits an assertion in __dynamo/variables/nn_module.py_ line 99 (in _unpack_var_sequence_), which seems to explicitly disallow custom iterable modules. Verified in nighly 2023.06.17 ### Error logs ``` AssertionError Traceback (most recent call last) [<ipython-input-2-79323aca98f5>](https://localhost:8080/#) in <cell line: 37>() 35 x = Containing(BufferList((torch.rand((5,5)), torch.rand((5,5))))) 36 x() ---> 37 torch.compile(x, backend="eager")() 21 frames [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs) 1500 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1501 else: -> 1502 return self._call_impl(*args, **kwargs) 1503 1504 def _call_impl(self, *args, **kwargs): [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs) 1509 or _global_backward_pre_hooks or _global_backward_hooks 1510 or _global_forward_hooks or _global_forward_pre_hooks): -> 1511 return forward_call(*args, **kwargs) 1512 # Do not call functions when jit is used 1513 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py](https://localhost:8080/#) in _fn(*args, **kwargs) 293 dynamic_ctx.__enter__() 294 try: --> 295 return fn(*args, **kwargs) 296 finally: 297 set_eval_frame(prior) [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs) 1500 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1501 else: -> 1502 return self._call_impl(*args, **kwargs) 1503 1504 def _call_impl(self, *args, **kwargs): [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs) 1509 or _global_backward_pre_hooks or _global_backward_hooks 1510 or _global_forward_hooks or _global_forward_pre_hooks): -> 1511 return forward_call(*args, **kwargs) 1512 # Do not call functions when jit is used 1513 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py](https://localhost:8080/#) in catch_errors(frame, cache_size, frame_state) 446 447 with compile_lock: --> 448 return callback(frame, cache_size, hooks, frame_state) 449 450 catch_errors._torchdynamo_orig_callable = callback # type: ignore[attr-defined] [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in _convert_frame(frame, cache_size, hooks, frame_state) 525 counters["frames"]["total"] += 1 526 try: --> 527 result = inner_convert(frame, cache_size, hooks, frame_state) 528 counters["frames"]["ok"] += 1 529 return result [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in _fn(*args, **kwargs) 125 cleanup = setup_compile_debug() 126 try: --> 127 return fn(*args, **kwargs) 128 finally: 129 cleanup.close() [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in _convert_frame_assert(frame, 
cache_size, hooks, frame_state) 358 ) 359 --> 360 return _compile( 361 frame.f_code, 362 frame.f_globals, [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py](https://localhost:8080/#) in time_wrapper(*args, **kwargs) 178 with torch.profiler.record_function(f"{key} (dynamo_timed)"): 179 t0 = time.time() --> 180 r = func(*args, **kwargs) 181 time_spent = time.time() - t0 182 # print(f"Dynamo timer: key={key}, latency={latency:.2f} sec") [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, frame, frame_state) 428 for attempt in itertools.count(): 429 try: --> 430 out_code = transform_code_object(code, transform) 431 orig_code_map[out_code] = code 432 break [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py](https://localhost:8080/#) in transform_code_object(code, transformations, safe) 998 propagate_line_nums(instructions) 999 -> 1000 transformations(instructions, code_options) 1001 return clean_and_assemble_instructions(instructions, keys, code_options)[1] 1002 [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in transform(instructions, code_options) 413 ) 414 with tracing(tracer.output.tracing_context): --> 415 tracer.run() 416 output = tracer.output 417 assert output is not None [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in run(self) 2023 2024 def run(self): -> 2025 super().run() 2026 2027 def match_nested_cell(self, name, cell): [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in run(self) 706 self.instruction_pointer is not None 707 and not self.output.should_exit --> 708 and self.step() 709 ): 710 pass [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in step(self) 666 self.f_code.co_filename, self.lineno, self.f_code.co_name 667 ) --> 668 getattr(self, inst.opname)(inst) 669 670 return inst.opname != "RETURN_VALUE" [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in GET_ITER(self, inst) 1092 1093 def GET_ITER(self, inst): -> 1094 self.call_function(BuiltinVariable(iter), [self.pop()], {}) 1095 1096 @break_graph_if_unsupported(push=1) [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in call_function(self, fn, args, kwargs) 557 ): 558 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}") --> 559 self.push(fn.call_function(self, args, kwargs)) 560 561 def update_locals_and_stack(self, oldvar: VariableTracker, newvar: VariableTracker): [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py](https://localhost:8080/#) in call_function(self, tx, args, kwargs) 575 if handler: 576 try: --> 577 result = handler(tx, *args, **kwargs) 578 if result is not None: 579 return result.add_options(options) [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builtin.py](https://localhost:8080/#) in _call_iter_tuple_list(self, tx, obj, *args, **kwargs) 754 mutable_local=MutableLocal(), 755 ) --> 756 elif obj.has_unpack_var_sequence(tx): 757 guards = set() 758 if obj.source and not is_constant_source(obj.source): [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/base.py](https://localhost:8080/#) in has_unpack_var_sequence(self, tx) 214 
def has_unpack_var_sequence(self, tx): 215 try: --> 216 self.unpack_var_sequence(tx) 217 return True 218 except NotImplementedError: [/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/nn_module.py](https://localhost:8080/#) in unpack_var_sequence(self, tx) 97 return result 98 ---> 99 assert isinstance( 100 base, (torch.nn.ModuleList, torch.nn.ParameterList, torch.nn.Sequential) 101 ), typestr(base) AssertionError: BufferList ``` ### Minified repro ``` import torch import torch.nn as nn class BufferList(nn.Module): """ Similar to nn.ParameterList, but for buffers """ def __init__(self, buffers=None): super(BufferList, self).__init__() if buffers is not None: self.extend(buffers) def extend(self, buffers): offset = len(self) for i, buffer in enumerate(buffers): self.register_buffer(str(offset + i), buffer) return self def __len__(self): return len(self._buffers) def __iter__(self): return iter(self._buffers.values()) class Containing(nn.Module): def __init__(self, buflist): super().__init__() self.buflist = buflist def forward(self): for x in self.buflist: print(x.shape) print(torch.__version__) #2.1.0.dev20230617+cu118 x = Containing(BufferList((torch.rand((5,5)), torch.rand((5,5))))) x() # works torch.compile(x, backend="eager")() # crashes ``` ### Versions Collecting environment information... PyTorch version: 2.1.0.dev20230617+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.25.2 Libc version: glibc-2.31 Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.107+-x86_64-with-glibc2.31 Is CUDA available: False CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU @ 2.20GHz Stepping: 0 CPU MHz: 2200.146 BogoMIPS: 4400.29 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB L1i cache: 32 KiB L2 cache: 256 KiB L3 cache: 55 MiB NUMA node0 CPU(s): 0,1 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Flags: fpu vme 
de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Versions of relevant libraries: [pip3] numpy==1.22.4 [pip3] pytorch-triton==2.1.0+440fd1bf20 [pip3] torch==2.1.0.dev20230617+cu118 [pip3] torchaudio==2.0.2+cu118 [pip3] torchdata==0.6.1 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.15.2 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] Could not collect cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
1
2,252
103,829
RPC framework support for custom backends
oncall: distributed, triaged
### 🚀 The feature, motivation and pitch Custom backends (PrivateUse1) also have requirements for the RPC framework, and we hope a registration method can be provided or common logic implemented. I'm currently reading the code of the RPC part; according to my analysis and research, the RPC module depends heavily on the third-party project tensorpipe, and the important channel module is also implemented there. As I understand it, for the current RPC module to support third-party devices there are several things to do: 1. Extend the corresponding device logic in the RPC module of the pytorch project to provide a processing branch for third-party devices. 2. Third-party devices need to inherit the `TensorpipeDeviceTypeConverter` class to implement their own converter class, complete the preparation for sending and receiving device tensors, and register it correctly in pytorch to ensure the object can be obtained correctly. 3. Third-party devices need to refer to the cuda_basic channel, implement their own basic channel, and register it. Our current requests: 1. For the pytorch project, we can improve and extend the corresponding device logic ourselves. 2. For the tensorpipe project, we hope more header files can be exposed, so that third-party devices can implement their own basic channel on top of tensorpipe. cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @ezyang. ### Alternatives _No response_ ### Additional context 1. The current tensorpipe project does not accept contributions or discussions, etc. _No response_
0
2,253
103,820
Upgrading SpGEMM algorithm to resolve Cusparse SpGEMM insufficient resources problem
module: sparse, module: cuda, triaged
### 🚀 The feature, motivation and pitch The SpGEMM algorithm in CUDA 11.x requires a high amount of memory for the sparse computation. In CUDA 12, two new SpGEMM algorithms have been introduced to resolve the problem. I really hope that the new algorithms can be integrated into pytorch (providing a way to opt into the new algorithms would also be exciting :) ). Thanks. Please see https://github.com/NVIDIA/CUDALibrarySamples/issues/38. ### Alternatives _No response_ ### Additional context _No response_ cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ptrblck
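For reference, below is a minimal sketch of the kind of sparse-sparse matmul that exercises cuSPARSE SpGEMM on CUDA builds, which is where the "insufficient resources" error can surface on CUDA 11.x. The shapes and density are arbitrary assumptions, a CUDA device is assumed to be available, and which SpGEMM algorithm is actually used depends on the toolkit PyTorch was built against.

```python
import torch

# Two ~1%-dense random sparse matrices on the GPU; sizes are illustrative.
a = (torch.rand(2048, 2048, device="cuda") < 0.01).float().to_sparse()
b = (torch.rand(2048, 2048, device="cuda") < 0.01).float().to_sparse()

# sparse @ sparse on CUDA tensors routes through cuSPARSE SpGEMM.
c = torch.sparse.mm(a, b)
print(c.shape, c._nnz())
```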
1
2,254
103,814
abnormal behavior in function "scatter"
triaged, module: python frontend
### 🐛 Describe the bug I have encountered a weird bug: when I use "scatter_max", "scatter_min", or "scatter_mul" on specific data, the function does not return the expected output. I have no idea whether I am using this function incorrectly or it has some defect. ```python import torch from torch_geometric.utils import scatter w = torch.load(r'test_tensor.pth') mu_ConnG = w["mu"].to("cuda") batch = w["index"].to("cuda") y = scatter(mu_ConnG, index=batch, dim=0, reduce="max") ``` The expected output is: ```python tensor([[[1.2849e-01, 9.5606e-01, 3.1864e-02], [2.5623e-02, 9.2980e-01, 1.5113e-01], [6.6616e-03, 7.0254e-01, 3.3186e-01], [3.6990e-01, 6.5781e-01, 5.2398e-03], [6.1730e-01, 4.0564e-01, 1.1939e-03]], [[7.1755e-01, 3.1940e-01, 6.3681e-04], [3.2621e-03, 5.7120e-01, 4.4799e-01], [2.2195e-01, 8.4002e-01, 1.4240e-02], [2.8383e-03, 5.4655e-01, 4.7142e-01], [3.1675e-02, 9.5541e-01, 1.2908e-01]], [[7.4156e-01, 2.9976e-01, 5.4273e-04], [1.6149e-04, 1.7957e-01, 8.9443e-01], [6.2777e-05, 1.1652e-01, 9.6872e-01], [7.6054e-02, 9.9845e-01, 5.8712e-02], [9.3553e-01, 1.4634e-01, 1.0254e-04]], ..., [[8.4811e-03, 7.4741e-01, 2.9502e-01], [6.3141e-01, 3.9305e-01, 1.0959e-03], [1.5108e-03, 4.4157e-01, 5.7807e-01], [1.4651e-01, 9.3534e-01, 2.6747e-02], [2.9298e-02, 9.4651e-01, 1.3697e-01]], [[8.4811e-03, 7.4741e-01, 2.9502e-01], [6.3141e-01, 3.9305e-01, 1.0959e-03], [1.5108e-03, 4.4157e-01, 5.7807e-01], [1.4651e-01, 9.3534e-01, 2.6747e-02], [2.9298e-02, 9.4651e-01, 1.3697e-01]], [[2.3792e-02, 9.1984e-01, 1.5929e-01], [3.7287e-01, 6.5439e-01, 5.1442e-03], [6.9620e-02, 9.9985e-01, 6.4318e-02], [8.0458e-05, 1.3087e-01, 9.5342e-01], [4.1389e-04, 2.6855e-01, 7.8049e-01]]], device='cuda:0', grad_fn=<CopySlices>) ``` However, I got: ```python tensor([[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], ..., [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]], device='cuda:0', grad_fn=<CppNode<class ScatterMax>>) ``` This problem also occurs with "scatter_min" and "scatter_mul". The "test_tensor.pth" file is at the link below (before you load it, you have to change the file suffix to ".pth"): [test_tensor.txt](https://github.com/pytorch/pytorch/files/11781987/test_tensor.txt) Here is a simple test case: ```python mu_ConnG = torch.Tensor([ [[1e-3, 2, 3], [1, 5, 6]], [[4e-4, 8, 9], [10, 11, 12]], [[13, 14, 15], [18, 17, 16]], [[100, 8, 9], [10, 19, 12]], [[100, 8, 9], [99, 19, 12]], [[4583, 456, 9], [10, 19, 1]], ]).double().to("cuda") batch = torch.LongTensor([0, 0, 1, 1, 2, 0]).to("cuda") y1 = scatter(mu_ConnG, index=batch, dim=0, reduce="max") y2 = scatter_max(mu_ConnG, index=batch, dim=0) ``` I have found that "scatter" and "scatter_max" return different results: *scatter()* ```python tensor([[[4583., 456., 9.], [ 10., 19., 12.]], [[ 100., 14., 15.], [ 18., 19., 16.]], [[ 100., 8., 9.], [ 99., 19., 12.]]], device='cuda:0', dtype=torch.float64) ``` *scatter_max()* ```python (tensor([[[0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.]]], device='cuda:0', dtype=torch.float64) ``` I am sure "scatter()" returns the correct result. I still do not know why this happens.
### Versions python 3.9 torch 2.0.1+cu118 torch-geometric 2.3.1 torch-scatter 2.1.1 torch-sparse 0.6.17 cc @albanD
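As a cross-check that is independent of torch_geometric/torch_scatter, the same grouped-max reduction can be expressed with PyTorch's own `scatter_reduce_`. A minimal CPU sketch (the shapes are illustrative, not the reporter's data):

```python
import torch

src = torch.tensor([[1.0, 5.0], [3.0, 2.0], [4.0, 0.0]])
index = torch.tensor([0, 0, 1])          # rows 0 and 1 form group 0, row 2 forms group 1

num_groups = int(index.max()) + 1
out = torch.zeros(num_groups, src.size(1))
# Reduce along dim 0, taking the maximum of all rows scattered to the same group.
out.scatter_reduce_(0, index.unsqueeze(-1).expand_as(src), src,
                    reduce="amax", include_self=False)
print(out)   # tensor([[3., 5.], [4., 0.]])
```

If this native path reproduces the expected maxima on the saved tensors while the extension's `scatter_max` returns zeros, that would suggest the problem lies in the torch_scatter build rather than in core PyTorch.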
1
2,255
103,810
Add alias support for sparse tensors.
open source, release notes: sparse, topic: bug fixes, topic: not user facing, ciflow/periodic
This PR addresses the sparse tensors part of https://github.com/pytorch/pytorch/issues/99655 The PR introduces the following utility functions: - `at::sparse_csr::alias_with_values(a_sparse_compressed_tensor, new_values)` - `at::sparse::alias_with_values(a_sparse_tensor, new_values)` These functions return a wrapper of a sparse tensor with new specified values that allow introducing alias support for sparse tensors and more (e.g. the most efficient way to resolve https://github.com/pytorch/pytorch/pull/99292#discussion_r1232201104 is to use `at::sparse_csr::alias_with_values(self, self.values().to(dtype))` as a replacement of `self.to(dtype)` to avoid the unnecessary copy of indices). Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #104145 * __->__ #103810 * #108512
6
2,256
103,805
Error when building with USE_TENSORRT=1
module: onnx, module: build, triaged
### 🐛 Describe the bug When I compile with USE_TENSORRT=1 I get the following error: onnx2trt_utils.hpp:30:10: fatal error: onnx/onnxifi.h: No such file or directory Note that removing USE_TENSORRT=1 from the build command, it compiles successfully without tensorrt. I'm using WSL2 from Windows 10. Trying to compile pytorch v1.13.1 with CUDA that is compatible with a 4090. I have tried the same thing with CUDA 11.8, and with CUDNN 8.9.0 & 8.6.0, it's the exact same error. The exact procedure to replicate this error is in the following text file... [Reproduce.txt](https://github.com/pytorch/pytorch/files/11779520/Reproduce.txt) The output log is in this text file... [log-tensorrt-fail.txt](https://github.com/pytorch/pytorch/files/11779524/log-tensorrt-fail.txt) ### Versions Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.35 Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 Is CUDA available: N/A CUDA runtime version: 12.1.105 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Nvidia driver version: 531.79 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i7-8086K CPU @ 4.00GHz CPU family: 6 Model: 158 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 Stepping: 10 BogoMIPS: 8016.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves flush_l1d arch_capabilities Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 192 KiB (6 instances) L1i cache: 192 KiB (6 instances) L2 cache: 1.5 MiB (6 instances) L3 cache: 12 MiB (1 instance) Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Retbleed: Mitigation; IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer 
sanitization Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Unknown: Dependent on hypervisor status Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Versions of relevant libraries: [pip3] numpy==1.24.3 [conda] magma-cuda121 2.6.1 1 pytorch [conda] mkl 2023.1.0 h6d00ec8_46342 [conda] mkl-include 2023.1.0 h06a4308_46342 [conda] numpy 1.24.3 pypi_0 pypi cc @malfet @seemethere
2
2,257
103,803
Support `Sequence` type in JIT
oncall: jit, module: typing, triaged, enhancement
### 🚀 The feature, motivation and pitch This is a sub-task of #103798. We have functions annotated to accept `List[int]` but actually get `torch.Size` passed in, e.g.: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2435 https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2481 However, static type checkers like mypy does not like it because `torch.Size`, which is a subtype of `Tuple[int, ...]`, is not compatible with `List[int]`. This conflicts with JIT, which does not understand `Tuple[int, ...]` without a fixed length. There are many cases of this problem, all involving functions taking in tensor sizes or similar (e.g., `Tensor.size` or `kernel_size`), where we only use them as `collections.abc.Sequence`. Therefore, I propose to annotate them as `Sequence[int]` and ask JIT to interpret `Sequence[int]` as `List[int]`. By annotating the sizes as `Sequence[int]`, we can i) make mypy happy because `List[int]` and `Tuple[int, ...]` (including `torch.Size`) are both `Sequence[int]`, and ii) borrow the JIT infrastructure for `ListType` which does not care about the length, bypassing the tuple length problem. ### Alternatives 1. _Adjust the implementation of `TupleType` in JIT to support `Tuple[int, ...]`_. This can be the best solution, but I don't know how to do it. Also, I assume it's not easy because otherwise it would already be implemented. 2. _Implement `SequenceType`, which differs from `ListType`_. This is also a good option, maybe second to the above, because it's safer than reusing the `ListType` -- some operations supported by `List` are not by `Sequence`, although JIT seems not to be using them. 3. _Add dozens of `type:ignore`s to suppress all related mypy warnings_. The good point is: the affected functions are for internal use (with `_` prefix) and will not affect public APIs (i.e., the users will not see any problem). However, this can leave the code error-prone. ### Additional context Since we mark them as `List[int]` for now, even with `Tuple`s/`Size`s, directly reusing `List` for `Sequence` will not cause any problems. We actually keep how everything works under the hood, just switching to a different name that can satisfy mypy. cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @malfet @rgommers @xuzhao9 @gramster
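A minimal sketch of the mypy behavior the proposal relies on (the function names are made up for illustration; both calls run fine at runtime):

```python
from typing import List, Sequence

import torch


def takes_list(sizes: List[int]) -> int:
    return len(sizes)


def takes_sequence(sizes: Sequence[int]) -> int:
    return len(sizes)


size = torch.empty(2, 3).size()   # torch.Size, a subtype of Tuple[int, ...]
takes_list(size)                  # mypy: incompatible type "Size"; expected "List[int]"
takes_sequence(size)              # accepted: Size is a Sequence[int]
```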
4
2,258
103,802
Eager PTDQ Performs Worse Than Non-Quantized Linear Layer on CPU (in Terms of Speed)
triaged
### 🐛 Describe the bug While Dynamically post-training quantized the following linear model: ```python class Net(nn.Module): def __init__(self): super().__init__() self.layer = nn.Linear(100, 20000*3) def forward(self, x): return self.layer(x) ``` It performed worse than the non-quantized model while inferring. Code To reproduce: ```python import torch import torch.nn as nn import time N = 200 class Net(nn.Module): def __init__(self): super().__init__() self.layer = nn.Linear(100, 20000*3) def forward(self, x): return self.layer(x) def time_model(model): start_time = time.time() x = torch.rand(1, N, 100) model(x) print(f"Model Running time {time.time() - start_time}") def main(): model = Net() print("regular model") print(model) time_model(model) model_int8 = torch.ao.quantization.quantize_dynamic( model, {torch.nn.Linear}, dtype=torch.qint8) print("int8 model") print(model_int8) time_model(model_int8) main() ``` If the variable N is changed to 1 the quantized model perform better, as expected. Output: ``` regular model Net( (layer): Linear(in_features=100, out_features=60000, bias=True) ) Model Running time 0.07307147979736328 int8 model Net( (layer): DynamicQuantizedLinear(in_features=100, out_features=60000, dtype=torch.qint8, qscheme=torch.per_tensor_affine) ) Model Running time 0.10798835754394531 ``` Profiler Output (On different run) # The non quantisized run: Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Source Location aten::linear 0.22% 72.000us 99.60% 32.807ms 32.807ms 1 <built-in function linear> torch/nn/modules/linear.py(113): forward nn.Module: Linear_0 ./test.py(11): forward nn.Module: Net_0 # The quantisized run: Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls Source Location quantized::linear_dynamic 99.59% 42.069ms 99.69% 42.113ms 42.113ms 1 <built-in method linear_dynamic of PyCapsule object at 0x7f1490949020> torch/_ops.py(497): __call__ torch/ao/nn/quantized/dynamic/modules/linear.py(47): forward nn.Module: Linear_0 ./test.py(11): forward ``` ### Versions PyTorch version: 2.0.1+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.25.2 Libc version: glibc-2.31 Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.107+-x86_64-with-glibc2.31 Is CUDA available: False CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU @ 2.20GHz Stepping: 0 CPU MHz: 2200.218 BogoMIPS: 4400.43 Hypervisor vendor: KVM 
Virtualization type: full L1d cache: 32 KiB L1i cache: 32 KiB L2 cache: 256 KiB L3 cache: 55 MiB NUMA node0 CPU(s): 0,1 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Versions of relevant libraries: [pip3] numpy==1.22.4 [pip3] torch==2.0.1+cu118 [pip3] torchaudio==2.0.2+cu118 [pip3] torchdata==0.6.1 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.15.2 [pip3] torchvision==0.15.2+cu118 [pip3] triton==2.0.0 [conda] Could not collect
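When checking reports like this, single-shot `time.time()` measurements are noisy and include one-time costs such as weight packing and thread-pool warm-up. Below is a hedged sketch of a steadier comparison using `torch.utils.benchmark`; the module and input shapes are copied from the reporter's script, and the run time budget is an arbitrary choice.

```python
import torch
from torch.utils import benchmark


class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(100, 20000 * 3)

    def forward(self, x):
        return self.layer(x)


def bench(model, x, label):
    # blocked_autorange repeats the statement and reports a robust timing summary.
    t = benchmark.Timer(stmt="model(x)", globals={"model": model, "x": x}, label=label)
    print(t.blocked_autorange(min_run_time=1.0))


x = torch.rand(1, 200, 100)
model = Net()
model_int8 = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

with torch.inference_mode():
    bench(model, x, "fp32 linear")
    bench(model_int8, x, "dynamic int8 linear")
```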
4
2,259
103,800
Mis-annotated return for `F._no_grad_embedding_renorm_` (also JIT related)
oncall: jit, module: typing, triaged, bug
### 🐛 Describe the bug This is a sub-task of #103761. This is not user-facing as this only involves functions prefixed with an underscore. `_no_grad_embedding_renorm_` has mismatched typing on the return value: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2123-L2124 1. This function does not return anything at all. This is modified on purpose in #18684. (PS: the return value is not used anyway.) 2. The return now got annotated as a `Tuple[Tensor, Tensor]` in #78560. I don't know why it's changed in this way. @samdow Can you please explain this as the author of that PR? --- Other occurrences of `_no_grad_embedding_renorm_`: (I'm not quite sure whether these are related, nor which is correct) https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/csrc/api/include/torch/nn/functional/embedding.h#L15-L22 https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/csrc/jit/runtime/register_special_ops.cpp#L330 https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/csrc/jit/runtime/serialized_shape_function_registry.cpp#L3196 https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/jit/_shape_functions.py#L1109 ### Versions master cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @malfet @rgommers @xuzhao9 @gramster
0
2,260
103,798
Type misalignments in `nn.functional` (also JIT related)
oncall: jit, module: typing, triaged
### 🐛 Describe the bug This is a sub-task of #103761. This is not user-facing as this only involves functions prefixed with an underscore. **A major problem is JIT cannot understand `torch.Size` or `Tuple[int, ...]`. This problem is relatively self-contained and tracked in a separate issue #103803.** --- The `_list_with_default` is used as this, where `input.size()` is `torch.Size`: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L1135 However, it's annotated to accept `List` only. It's even possible to work on `int`: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/modules/utils.py#L33-L35 --- Similar problem with `_verify_batch_size`: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2481 https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2435 And the same as above for `_verify_spatial_size`: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2527 https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L2488 --- **Only this is not JIT related** This `_in_projection_packed` is annotated to return a `List`: https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L4778-L4784 But actually it returns a 3-`tuple` in all cases (L4819, L4831, L4838): https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/torch/nn/functional.py#L4812-L4838 --- <details> <summary>Related error logs</summary> ```log torch/nn/functional.py:1135:51: error: Argument 2 to "_list_with_default" has incompatible type "Size"; expected "List[int]" [arg-type] torch/nn/functional.py:1179:51: error: Argument 2 to "_list_with_default" has incompatible type "Size"; expected "List[int]" [arg-type] torch/nn/functional.py:1231:52: error: Argument 2 to "_list_with_default" has incompatible type "Size"; expected "List[int]" [arg-type] torch/nn/functional.py:1248:52: error: Argument 2 to "_list_with_default" has incompatible type "Size"; expected "List[int]" [arg-type] torch/nn/functional.py:2481:28: error: Argument 1 to "_verify_batch_size" has incompatible type "Size"; expected "List[int]" [arg-type] torch/nn/functional.py:2527:30: error: Argument 1 to "_verify_spatial_size" has incompatible type "Size"; expected "List[int]" [arg-type] torch/nn/functional.py:4819:20: error: Incompatible return value type (got "Tuple[Any, Any, Any]", expected "List[Tensor]") [return-value] torch/nn/functional.py:4831:20: error: Incompatible return value type (got "Tuple[Tensor, Any, Any]", expected "List[Tensor]") [return-value] torch/nn/functional.py:4838:16: error: Incompatible return value type (got "Tuple[Tensor, Tensor, Tensor]", expected "List[Tensor]") [return-value] ``` </details> ### Versions master ```[tasklist] ### Tasks - [ ] https://github.com/pytorch/pytorch/issues/103803 ``` cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @malfet @rgommers @xuzhao9 @gramster
0
2,261
103,788
[Torch Mlir] avg_pool1d function padding init value should be (0,)
triage review, oncall: jit
### 🐛 Describe the bug Currently in JITOperatorRegistryDump.txt the padding in avg_pool1d has the value List[int] = (0), this vill cause an error that ask you provide a default value of type List[int] on parameter padding, which should be (0,) or something else shape_function_signature = def aten〇avg_pool1d〡shape(self: List[int], kernel_size: List[int], stride: List[int] = (), padding: List[int] = (0), ceil_mode: bool = False, count_include_pad: bool = True) -> List[int]: decomposition_function_signature = def aten〇avg_pool1d〡decomposition(self: Tensor, kernel_size: List[int], stride: List[int] = (), padding: List[int] = (0), ceil_mode: bool = False, count_include_pad: bool = True) -> Tensor: dtype_function_signature = def aten〇avg_pool1d〡dtype(self_rank_dtype: Tuple[int, int], kernel_size: List[int], stride: List[int] = (), padding: List[int] = (0), ceil_mode: bool = False, count_include_pad: bool = True) -> int: ### Error logs RuntimeError: Expected a default value of type List[int] on parameter "padding".: def aten〇avg_pool1d〡shape(self: List[int], kernel_size: List[int], stride: List[int] = (), padding: List[int] = (0), ceil_mode: bool = False, count_include_pad: bool = True) -> List[int]: ### Minified repro _No response_ ### Versions PyTorch version: 2.1.0.dev20230612+cpu Is debug build: False CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Debian GNU/Linux 10 (buster) (x86_64) GCC version: (GCC) 8.5.0 Clang version: 7.0.1-8+deb10u2 (tags/RELEASE_701/final) CMake version: version 3.26.3 Libc version: glibc-2.28 Python version: 3.9.16 (main, Feb 4 2023, 02:19:02) [GCC 8.5.0] (64-bit runtime) Python platform: Linux-5.4.56.bsk.9-amd64-x86_64-with-glibc2.28 Is CUDA available: False CUDA runtime version: 11.4.120 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB GPU 1: Tesla V100-SXM2-32GB GPU 2: Tesla V100-SXM2-32GB GPU 3: Tesla V100-SXM2-32GB GPU 4: Tesla V100-SXM2-32GB GPU 5: Tesla V100-SXM2-32GB GPU 6: Tesla V100-SXM2-32GB GPU 7: Tesla V100-SXM2-32GB Nvidia driver version: 470.129.06 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz Stepping: 7 CPU MHz: 3100.011 CPU max MHz: 3900.0000 CPU min MHz: 1000.0000 BogoMIPS: 4800.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 1024K L3 cache: 36608K NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.1.0.dev20230612+cpu [pip3] torchvision==0.16.0.dev20230523+cpu [conda] Could not collect cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
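The root cause the report points at is plain Python tuple syntax, independent of torch-mlir; a short illustration:

```python
# (0) is just the integer 0 wrapped in parentheses; only (0,) is a 1-element tuple.
assert (0) == 0 and isinstance((0), int)
assert (0,) == tuple([0]) and isinstance((0,), tuple)

# So a default rendered as "padding: List[int] = (0)" is an int default on a
# List[int] parameter, which is exactly what the "Expected a default value of
# type List[int]" error complains about; "(0,)" round-trips as a sequence.
```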
1
2,262
103,787
Generate complete annotations for `torch._C._nn`
module: typing, triaged, enhancement, release notes: devx
### 🚀 The feature, motivation and pitch This is a follow-up of #102918 and a sub-task of #103761. This is not user-facing and only involves changes in the type stub of `torch._C._nn` which is an internal module. #102918 has introduced annotations for some functions in `torch._C._nn` so that the end-users can see correct typing for public APIs exposed through `torch.nn.functional`. There's also a TODO left in that PR because there're many functions not covered by it which do not directly contribute to the publicly visible APIs. https://github.com/pytorch/pytorch/blob/918fe519a0f9bdb2dd213eaaffbcd420b7391280/tools/pyi/gen_pyi.py#L299 However, this is not enough for the goal in #103761. When we remove the pyi type stub, mypy will look into more details of `torch.nn.functional` and throw errors about undefined attributes in `torch._C._nn`, which are only internally used and not covered by #102918. <details> <summary>Related error logs</summary> ```log torch/nn/functional.py:766:12: error: Module has no attribute "max_pool2d_with_indices" [attr-defined] torch/nn/functional.py:852:12: error: Module has no attribute "max_pool3d_with_indices" [attr-defined] torch/nn/functional.py:957:12: error: Module has no attribute "max_unpool2d" [attr-defined] torch/nn/functional.py:989:12: error: Module has no attribute "max_unpool2d" [attr-defined] torch/nn/functional.py:1021:12: error: Module has no attribute "max_unpool3d" [attr-defined] torch/nn/functional.py:1232:12: error: Module has no attribute "adaptive_avg_pool2d"; maybe "adaptive_max_pool2d" or "adaptive_max_pool3d"? [attr-defined] torch/nn/functional.py:1249:12: error: Module has no attribute "adaptive_avg_pool3d"; maybe "adaptive_max_pool3d" or "adaptive_max_pool2d"? [attr-defined] torch/nn/functional.py:1511:12: error: Module has no attribute "glu"; maybe "gelu"? [attr-defined] torch/nn/functional.py:1550:18: error: Module has no attribute "relu6_"; maybe "elu_"? [attr-defined] torch/nn/functional.py:1552:18: error: Module has no attribute "relu6" [attr-defined] torch/nn/functional.py:1566:18: error: Module has no attribute "elu"; maybe "gelu" or "elu_"? [attr-defined] torch/nn/functional.py:2007:16: error: Module has no attribute "hardsigmoid_"; maybe "hardsigmoid"? 
[attr-defined] torch/nn/functional.py:2076:16: error: Module has no attribute "silu_" [attr-defined] torch/nn/functional.py:2077:12: error: Module has no attribute "silu" [attr-defined] torch/nn/functional.py:2095:16: error: Module has no attribute "mish_" [attr-defined] torch/nn/functional.py:2096:12: error: Module has no attribute "mish" [attr-defined] torch/nn/functional.py:2119:16: error: Module has no attribute "hardswish_" [attr-defined] torch/nn/functional.py:2120:12: error: Module has no attribute "hardswish" [attr-defined] torch/nn/functional.py:2737:12: error: Module has no attribute "nll_loss_nd" [attr-defined] torch/nn/functional.py:3061:12: error: Module has no attribute "cross_entropy_loss" [attr-defined] torch/nn/functional.py:3130:12: error: Module has no attribute "binary_cross_entropy" [attr-defined] torch/nn/functional.py:3243:16: error: Module has no attribute "l1_loss" [attr-defined] torch/nn/functional.py:3245:16: error: Module has no attribute "smooth_l1_loss" [attr-defined] torch/nn/functional.py:3275:12: error: Module has no attribute "huber_loss" [attr-defined] torch/nn/functional.py:3306:12: error: Module has no attribute "l1_loss" [attr-defined] torch/nn/functional.py:3337:12: error: Module has no attribute "mse_loss" [attr-defined] torch/nn/functional.py:3434:12: error: Module has no attribute "multilabel_margin_loss" [attr-defined] torch/nn/functional.py:3456:12: error: Module has no attribute "soft_margin_loss" [attr-defined] torch/nn/functional.py:3575:12: error: Module has no attribute "multi_margin_loss" [attr-defined] torch/nn/functional.py:4748:12: error: Module has no attribute "im2col" [attr-defined] torch/nn/functional.py:4770:12: error: Module has no attribute "col2im" [attr-defined] ``` </details> ### Alternatives Add a hundred `type:ignore`s in `torch/nn/functional.py` when working on #103761. (Obviously not good) ### Additional context There should be a better way to engineer it -- instead of manually annotating each function, it could be possible to generate the function signatures automatically. I'm currently looking into it. cc @ezyang @malfet @rgommers @xuzhao9 @gramster
0
2,263
103,785
[PT2] Return int32 indices in max_pool2d_with_indices
triaged, oncall: pt2
### 🚀 The feature, motivation and pitch Another case of https://github.com/pytorch/pytorch/issues/103475. If we guard on the tensor having numel < 2^32, we can return int32 indices. On one max_pool2d_with_indices_backward kernel I tested in the backward, using int32 instead of int64 sped it up 27%. Related: https://github.com/pytorch/pytorch/issues/103111 Because we sometimes fallback to eager in lowering max_pool2d_backward, we'd either have to figure out a way not to fallback eager or add eager support for int32 in max_pool2d_backward. ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
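A minimal sketch of the guard-and-downcast idea described above; the helper name and threshold handling are illustrative and are not Inductor's actual lowering.

```python
import torch

INT32_MAX = torch.iinfo(torch.int32).max


def narrow_indices_if_safe(indices: torch.Tensor, source_numel: int) -> torch.Tensor:
    # Indices index into the flattened input, so they fit in int32 whenever the
    # guarded numel does; otherwise keep the original int64 indices.
    if source_numel <= INT32_MAX:
        return indices.to(torch.int32)
    return indices


x = torch.randn(1, 3, 64, 64)
out, idx = torch.nn.functional.max_pool2d(x, 2, return_indices=True)
idx32 = narrow_indices_if_safe(idx, x.numel())
print(idx.dtype, "->", idx32.dtype)   # torch.int64 -> torch.int32
```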
2
2,264
103,764
[ONNX] Handle absence of `onnxscript` module in PyTorch requirements.txt
module: onnx, triaged, enhancement
### 🐛 Describe the bug Currently, the new Dynamo-based ONNX exporter heavily depends on ONNX Script (the `onnxscript` module). However, PyTorch does not officially take a dependency on it. To prevent a `ModuleNotFoundError` during `import torch`, the `torch.onnx` namespace defers `import onnxscript` to runtime, out of the initialization path. We need to find a better way to handle this. ### Versions main
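One common way to make such an optional dependency fail loudly only when the exporter is actually used is a small runtime-import helper. The sketch below is a generic illustration of that pattern, not the actual `torch.onnx` code.

```python
def _require_onnxscript():
    """Import onnxscript lazily so that `import torch` works without it installed."""
    try:
        import onnxscript
    except ImportError as exc:
        raise ImportError(
            "The dynamo-based ONNX exporter requires the 'onnxscript' package; "
            "please install it separately."
        ) from exc
    return onnxscript
```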
0
2,265
103,761
Merge type stubs for `torch.nn.functional`
module: typing, triaged
### 🚀 The feature, motivation and pitch From the discussion in https://github.com/pytorch/pytorch/issues/102918#issuecomment-1587576763 (cc @drisspg): Similar to #102428, we would like to merge the .pyi type stub in the .py file for `torch.nn.functional`. ### Alternatives Keep the .pyi file which is now generated by [`gen_pyi.py`](https://github.com/pytorch/pytorch/blob/main/tools/pyi/gen_pyi.py) from a .pyi.in template. ### Additional context The pyi for `torch.nn.functional` consists of three parts: - Functions imported from `torch` or `torch._C._nn` and passed through `_add_docstr`. - Functions in `torch` already have typing, #102918 has addressed the typing for `torch._C._nn`, and `_add_docstr` has been annotated to pass through the typing. Therefore, this part can be considered resolved. - Functions generated by `torch._jit_internal.boolean_dispatch`. Their annotations are currently generated by `gen_pyi`. - We can move the generated overloads into the .py file. They should work directly. - I assume these do not involve `torch._jit_internal._overload` because they're already `boolean_dispatch`, **but need to be confirmed**. - Functions directly defined in `nn/functional.py`. - They should already have annotation in the .py file, **but may need to be verified**. - By the following comment, these functions **may need special treatment to make both mypy and JIT work**, which I cannot imagine now (possibly blocks us from merging pyi): https://github.com/pytorch/pytorch/blob/5875a2fb3c4f21efec326dff6ba31d793a7194e1/torch/nn/functional.pyi.in#L30-L32 cc @ezyang @malfet @rgommers @xuzhao9 @gramster --- This issue can be decomposed into several relatively self-contained sub-tasks, corresponding to the logs recorded below: (I will open issues for each one and specify the sub-task in detail.) ```[tasklist] ### Tasks - [ ] mypy does not recognize `BroadcastingList*` - [ ] mypy does not recognize overloads decorated by `_jit_internal._overload` - [ ] https://github.com/pytorch/pytorch/issues/103787 - [ ] https://github.com/pytorch/pytorch/issues/103806 - [ ] https://github.com/pytorch/pytorch/issues/103798 - [ ] https://github.com/pytorch/pytorch/issues/103800 ```
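For the boolean_dispatch part, the stub contents that would move into the .py file are ordinary typing overloads keyed on `return_indices`. A hedged sketch of the shape they take; the function name, parameters, and body are simplified placeholders, not the real pooling signatures.

```python
from typing import Literal, Tuple, Union, overload

from torch import Tensor


@overload
def pool_like(x: Tensor, return_indices: Literal[False] = ...) -> Tensor: ...
@overload
def pool_like(x: Tensor, return_indices: Literal[True]) -> Tuple[Tensor, Tensor]: ...


def pool_like(x: Tensor, return_indices: bool = False) -> Union[Tensor, Tuple[Tensor, Tensor]]:
    out = x.relu()  # stand-in for the real pooling op
    return (out, out.long()) if return_indices else out
```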
3
2,266
103,760
dlrm and hf_T5_generate fails aot_eager with bfloat16+dynamic_shapes
triaged, oncall: pt2, module: dynamic shapes
Repro: ``` benchmarks/dynamo/torchbench.py --inductor --bfloat16 --accuracy --inference --device cuda --dynamic-shapes --dynamic-batch-only --only hf_T5_generate ``` Error: ``` 2023-06-16T13:53:37.3160247Z cuda eval hf_T5_generate 2023-06-16T13:54:07.7583460Z ERROR:common:Constraints violated! 2023-06-16T13:54:07.7584228Z 1. Could not validate constraint RelaxedUnspecConstraint(L['input_ids'].size()[0]) as L['input_ids'].size()[0] is actually a non-atomic symbolic expression 4. Did you really mean to mark this dimension as dynamic? 2023-06-16T13:54:07.7586656Z 2023-06-16T13:54:07.7587045Z 2023-06-16T13:54:07.7587387Z You can suppress this exception and fall back to eager by setting: 2023-06-16T13:54:07.7587758Z import torch._dynamo 2023-06-16T13:54:07.7589120Z torch._dynamo.config.suppress_errors = True 2023-06-16T13:54:07.7589393Z Traceback (most recent call last): 2023-06-16T13:54:07.7589768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1531, in check_accuracy 2023-06-16T13:54:07.7590502Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-06-16T13:54:07.7591023Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn 2023-06-16T13:54:07.7591317Z return fn(*args, **kwargs) 2023-06-16T13:54:07.7591712Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1356, in run_n_iterations 2023-06-16T13:54:07.7592145Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-06-16T13:54:07.7592558Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 436, in forward_pass 2023-06-16T13:54:07.7592841Z return mod(*inputs) 2023-06-16T13:54:07.7595175Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-06-16T13:54:07.7595553Z return self._call_impl(*args, **kwargs) 2023-06-16T13:54:07.7596028Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-06-16T13:54:07.7596419Z return forward_call(*args, **kwargs) 2023-06-16T13:54:07.7596873Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 204, in forward 2023-06-16T13:54:07.7597338Z return self.model.generate(inputs, self.generation_config) 2023-06-16T13:54:07.7597986Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-06-16T13:54:07.7598376Z return func(*args, **kwargs) 2023-06-16T13:54:07.7598969Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1192, in generate 2023-06-16T13:54:07.7599424Z self._validate_model_class() 2023-06-16T13:54:07.7599961Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1210, in <resume in generate> 2023-06-16T13:54:07.7600323Z generation_config = copy.deepcopy(generation_config) 2023-06-16T13:54:07.7600778Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1213, in <resume in generate> 2023-06-16T13:54:07.7601120Z self._validate_model_kwargs(model_kwargs.copy()) 2023-06-16T13:54:07.7601708Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1213, in <resume in generate> 2023-06-16T13:54:07.7602049Z self._validate_model_kwargs(model_kwargs.copy()) 2023-06-16T13:54:07.7602477Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1216, in <resume in generate> 2023-06-16T13:54:07.7602883Z logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() 2023-06-16T13:54:07.7603441Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1217, in <resume in generate> 2023-06-16T13:54:07.7603871Z stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() 2023-06-16T13:54:07.7604372Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1268, in <resume in generate> 2023-06-16T13:54:07.7604734Z model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( 2023-06-16T13:54:07.7605245Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 634, in _prepare_encoder_decoder_kwargs_for_generation 2023-06-16T13:54:07.7605640Z model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) 2023-06-16T13:54:07.7606168Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-06-16T13:54:07.7606496Z return self._call_impl(*args, **kwargs) 2023-06-16T13:54:07.7607029Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-06-16T13:54:07.7607433Z return forward_call(*args, **kwargs) 2023-06-16T13:54:07.7607935Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 448, in catch_errors 2023-06-16T13:54:07.7608276Z return callback(frame, cache_size, hooks, frame_state) 2023-06-16T13:54:07.7608708Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in _convert_frame 2023-06-16T13:54:07.7609070Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-06-16T13:54:07.7609494Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 127, in _fn 2023-06-16T13:54:07.7609790Z return fn(*args, **kwargs) 2023-06-16T13:54:07.7610196Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert 2023-06-16T13:54:07.7610500Z return _compile( 2023-06-16T13:54:07.7610880Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 180, in time_wrapper 2023-06-16T13:54:07.7611178Z r = func(*args, **kwargs) 2023-06-16T13:54:07.7611568Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 478, in _compile 2023-06-16T13:54:07.7611884Z check_fn = CheckFunctionManager( 2023-06-16T13:54:07.7612289Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 863, in __init__ 2023-06-16T13:54:07.7612603Z guard.create(local_builder, global_builder) 2023-06-16T13:54:07.7612993Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_guards.py", line 208, in create 2023-06-16T13:54:07.7613349Z return self.create_fn(self.source.select(local_builder, global_builder), self) 2023-06-16T13:54:07.7613787Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 540, in SHAPE_ENV 2023-06-16T13:54:07.7614122Z guards = output_graph.shape_env.produce_guards( 2023-06-16T13:54:07.7614575Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", 
line 2596, in produce_guards 2023-06-16T13:54:07.7614967Z raise ConstraintViolationError(f"Constraints violated!\n{err}") 2023-06-16T13:54:07.7615336Z torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated! 2023-06-16T13:54:07.7616030Z 1. Could not validate constraint RelaxedUnspecConstraint(L['input_ids'].size()[0]) as L['input_ids'].size()[0] is actually a non-atomic symbolic expression 4. Did you really mean to mark this dimension as dynamic? ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
4
2,267
103,756
libtorch > 1.9.1 produces segfault on Qt5 gui application exit
high priority, module: crash, module: cpp, triaged
### 🐛 Describe the bug We are developing a Qt5 based C++ application which is using the C++ API of libtorch to interact with pytorch accordingly. Since libtorch/pytorch version greater than 1.9.1 we noticed that our application exits with a segmentation fault on normal exit for no apparent reason related to the internals of that particular application. After some deeper investigation we created a very simple example Qt5 application which can be used to reproduce that issue accordingly. For the sake of reproducibility and debugging we will share this simple Qt5 application here. Here is the output that simple application `ptest` generates on command-line after creating a simple Qt5 based message box and then having closed it interactively: ```sh $ ./ptest 0.4288 0.0194 0.2141 0.1779 0.1772 0.2024 [ CPUFloatType{2,3} ] (... GUI opened, continuing after user closes requester ...) will SegFault afterwards with libtorch > 1.9.1... Segmentation fault ``` Here is the gdb output on that particular example application `ptest`: ```gdb Thread 1 "ptest" received signal SIGSEGV, Segmentation fault. 0x00007fffdf42447e in __GI___libc_free (mem=0x1) at ./malloc/malloc.c:3368 3368 ./malloc/malloc.c: No such file or directory. (gdb) bt #0 0x00007fffdf42447e in __GI___libc_free (mem=0x1) at ./malloc/malloc.c:3368 #1 0x00007fffe60197a7 in llvm::cl::Option::~Option() () from /usr/local/libtorch-2.1.0.dev20230616+cpu/lib/libtorch_cpu.so #2 0x00007fffdf3c4a56 in __cxa_finalize (d=0x7ffff7679000) at ./stdlib/cxa_finalize.c:83 #3 0x00007fffe182c703 in __do_global_dtors_aux () from /usr/local/libtorch-2.1.0.dev20230616+cpu/lib/libtorch_cpu.so #4 0x00007fffffffdc20 in ?? () #5 0x00007ffff7fc924e in _dl_fini () at ./elf/dl-fini.c:142 Backtrace stopped: frame did not save the PC ``` As one can see, the application seem to run into a segfault right when it is trying to cleanup memory which is/was probably allocated by libtorch during its initialization (`__do_global_dtors_aux`). In addition, we have tested that particular `ptest` app up to the latest 2.1.0.dev20230616 version available for public download from download.pytorch.org. In addition, we also walked down the road until version 1.9.0 and could verify that starting with version 1.11.0 of libtorch/pytorch (1.10.0 does not link correctly due to a bug in that particular version) it started to segfault with the same backtrace pointing to `__do_global_dtors_aux` and `llvm::cl::Option::~Option()` in `libtorch_cpu.so`. In addition, I also ran valgrind over the test program to check for anything suspicious and found the following references to libtorch: ``` will SegFault afterwards with libtorch > 1.9.1... ==332300== Invalid free() / delete / delete[] / realloc() ==332300== at 0x484B27F: free (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==332300== by 0x8A21C0E: llvm::cl::Option::~Option() (in /usr/local/libtorch-1.11.0+cpu/lib/libtorch_cpu.so) ==332300== by 0x1C96BA55: __cxa_finalize (cxa_finalize.c:83) ==332300== by 0x5893752: ??? 
(in /usr/local/libtorch-1.11.0+cpu/lib/libtorch_cpu.so) ==332300== by 0x400624D: _dl_fini (dl-fini.c:142) ==332300== by 0x1C96B494: __run_exit_handlers (exit.c:113) ==332300== by 0x1C96B60F: exit (exit.c:143) ==332300== by 0x1C94FD96: (below main) (libc_start_call_main.h:74) ==332300== Address 0x1 is not stack'd, malloc'd or (recently) free'd ==332300== ==332300== Invalid free() / delete / delete[] / realloc() ==332300== at 0x484B27F: free (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==332300== by 0x170F982D: llvm::cl::opt<llvm::FunctionSummary::ForceSummaryHotnessType, true, llvm::cl::parser<llvm::FunctionSummary::ForceSummaryHotnessType> >::~opt() (in /usr/local/libtorch-1.11.0+cpu/lib/libtorch_cpu.so) ==332300== by 0x1C96BA55: __cxa_finalize (cxa_finalize.c:83) ==332300== by 0x5893752: ??? (in /usr/local/libtorch-1.11.0+cpu/lib/libtorch_cpu.so) ==332300== by 0x400624D: _dl_fini (dl-fini.c:142) ==332300== by 0x1C96B494: __run_exit_handlers (exit.c:113) ==332300== by 0x1C96B60F: exit (exit.c:143) ==332300== by 0x1C94FD96: (below main) (libc_start_call_main.h:74) ==332300== Address 0x800000003 is not stack'd, malloc'd or (recently) free'd ``` Please find here the source code of that simple test application (`main.cpp`): ```c++ #include <torch/torch.h> #include <QApplication> #include <QMessageBox> // including torch headers after Qt headers (instead of before) leads // to compile error //#include <torch/torch.h> #include <iostream> int main(int argc, char *argv[]) { // required to link libtorch torch::Tensor tensor = torch::rand({2, 3}); std::cout << tensor << std::endl; QApplication app(argc, argv); QMessageBox::critical(NULL, "Error", "About to crash!"); // also crashes when used here //torch::Tensor tensor = torch::rand({2, 3}); //std::cout << tensor << std::endl; std::cout << "will SegFault afterwards with libtorch > 1.9.1..." << std::endl; return 0; } ``` as well as the corresponding Qt5 project file (`ptest.pro`): ```pro TEMPLATE = app QT += widgets TARGET = ptest DESTDIR = ./ QMAKE_CXXFLAGS *= -ggdb #TORCH_VERSION = 1.9.0+cpu # works #TORCH_VERSION = 1.9.1+cpu # works #TORCH_VERSION = 1.10.0+cpu # does not link (https://github.com/pytorch/pytorch/issues/72653) #TORCH_VERSION = 1.10.1+cpu # does not link (https://github.com/pytorch/pytorch/issues/72653) #TORCH_VERSION = 1.10.2+cpu # does not link (https://github.com/pytorch/pytorch/issues/72653) TORCH_VERSION = 1.11.0+cpu # crashes #TORCH_VERSION = 1.12.0+cpu # crashes #TORCH_VERSION = 1.12.1+cpu # crashes #TORCH_VERSION = 1.13.0+cpu # crashes #TORCH_VERSION = 1.13.1+cpu # crashes #TORCH_VERSION = 2.1.0.dev20230616+cpu # crashes QMAKE_RPATHDIR *= /usr/local/libtorch-$${TORCH_VERSION}/lib/ QMAKE_CXXFLAGS *= -I/usr/local/libtorch-$${TORCH_VERSION}/include/ QMAKE_CXXFLAGS *= -I/usr/local/libtorch-$${TORCH_VERSION}/include/torch/csrc/api/include/ LIBS *= -L/usr/local/libtorch-$${TORCH_VERSION}/lib -ltorch_cpu -lc10 SOURCES += main.cpp ``` or alternatively the corresponding `CMakeLists.txt` file for cmake-based compilation: ```cmake cmake_minimum_required(VERSION 3.18 FATAL_ERROR) project(ptest) set(CMAKE_AUTOMOC ON) find_package(Torch REQUIRED) find_package(Qt5 COMPONENTS Widgets REQUIRED) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") add_executable(ptest main.cpp) target_compile_features(ptest PUBLIC cxx_range_for) target_link_libraries(ptest PRIVATE "${TORCH_LIBRARIES}" Qt5::Widgets) set_property(TARGET ptest PROPERTY CXX_STANDARD 17) ``` ### Versions ``` Collecting environment information... 
PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0 Clang version: 14.0.0-1ubuntu1 CMake version: version 3.26.3 Libc version: glibc-2.35 Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.15.107-2-pve-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: 11.7.99 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.6.0 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.6.0 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.6.0 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.6.0 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.6.0 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.6.0 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.6.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-59 Off-line CPU(s) list: 60-63 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 Stepping: 4 BogoMIPS: 4200.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 32 MiB (32 instances) L3 cache: 44 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63 Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB 
filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable Versions of relevant libraries: [pip3] numpy==1.23.3 [pip3] numpydoc==1.2 [pip3] pytorch-lightning==2.0.2 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchmetrics==0.11.4 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 ``` cc @ezyang @gchanan @zou3519 @jbschlosser
9
2,268
103,752
Pytorch not calling to C code from a docker container
needs reproduction, triaged, module: docker, module: __torch_function__
### 🐛 Describe the bug I'm trying to run a StableDiffusionPipeline from a docker container by a Celery worker. When ran locally everything works fine, but when launched in a container execution just stops when python attempts to call `_has_torch_function_variadic` for come checkup. It doesn't produce any message, it's basically falling into the void. It seems that calls to C code happen on a tokenization of a prompt, when it attempts to make a forward call in a `CLIPTextModel`. I'm not sure if it's an issue of a docker settings or me screwing up some parameters, but it is certainly clear that the issue comes up when PyTorch is attempting to make a call to this exact function. Here's an example of a code i'm trying to run: ```python pipe = StableDiffusionPipeline.from_pretrained( "prompthero/openjourney-v4", torch_dtype=torch.float16, ) with torch.autocast('cuda'): image = self.pipe( # execution stops here data.question, num_inference_steps=150, num_images_per_prompt=4, ).images ``` Here's a Dockerfile and docker-compose.yml I'm usigs: ```Dockerfile FROM python:3.10 ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 WORKDIR /code/ COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY build.py /code/ COPY docker-build.sh /code/ RUN ./docker-build.sh COPY . /code/ ``` ```yaml version: '3.7' services: celery: restart: always build: context: . dockerfile: ./Dockerfile image: air_local_ml_celery container_name: air_local_ml_celery env_file: ./.env command: celery -A ml_local.celery_app worker -l INFO --concurrency 4 deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] ``` I'm using Celery version 5.3.0 and Torch 2.0.1 on Python 3.10 ### Versions PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 10.1.243 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Nvidia driver version: 530.30.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 12 On-line CPU(s) list: 0-11 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 25 Model: 80 Model name: AMD Ryzen 5 5600G with Radeon Graphics Stepping: 0 Frequency boost: enabled CPU MHz: 1700.000 CPU max MHz: 3900,0000 CPU min MHz: 1400,0000 BogoMIPS: 7785.19 Virtualization: AMD-V L1d cache: 192 KiB L1i cache: 192 KiB L2 cache: 3 MiB L3 cache: 16 MiB NUMA node0 CPU(s): 0-11 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, 
PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] triton==2.0.0 [conda] Could not collect cc @hameerabbasi @rgommers @peterbell10 @ezyang
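When a worker dies or hangs silently like this, the standard-library faulthandler can usually show where execution stopped without any PyTorch changes. A hedged debugging sketch (the timeout value is arbitrary):

```python
import faulthandler
import sys

# Print Python tracebacks for all threads on hard crashes (SIGSEGV, SIGABRT, ...).
faulthandler.enable(file=sys.stderr)

# Also dump all stacks if the process is still alive but stuck after 120 seconds.
faulthandler.dump_traceback_later(timeout=120, repeat=True, file=sys.stderr)

# ... run the StableDiffusionPipeline call here ...
```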
6
2,269
103,749
SDPA produces NaN with padding mask
triage review, triaged, oncall: transformer/mha
### 🐛 Describe the bug I'm implementing padding support directly on my LLM model. To do so, I add extra rows to the boolean attention mask with all `False` values. However, calling `torch.nn.functional.scaled_dot_product_attention` returns NaNs. An equivalent and naive implementation also does (see `naive_fsdpa_1`). But if I use `torch.finfo(att.dtype).min` as the masked fill value, the result is correct (see `naive_fsdpa_2`) Here's the repro: ```python import torch scale = 0.125 # 5 tokens, 1 padding seq_length = 5 max_seq_length = 10 q = torch.randn(1, 2, seq_length + 1, 8) k = torch.randn(1, 2, max_seq_length, 8) v = torch.randn(1, 2, max_seq_length, 8) mask = torch.tensor([[[ [ True, False, False, False, False, False, False, False, False, False], # regular mask [ True, True, False, False, False, False, False, False, False, False], [ True, True, True, False, False, False, False, False, False, False], [ True, True, True, True, False, False, False, False, False, False], [ True, True, True, True, True, False, False, False, False, False], [False, False, False, False, False, False, False, False, False, False]]]]) # padding mask def torch_sdpa(q, k, v, mask): return torch.nn.functional.scaled_dot_product_attention(q, k, v, mask, dropout_p=0.0, scale=scale) def naive_sdpa_1(q, k, v, mask): att = (q @ k.transpose(-2, -1)) * scale att = torch.masked_fill(att, ~mask, float("-inf")) att = torch.nn.functional.softmax(att, dim=-1) return att @ v def naive_sdpa_2(q, k, v, mask): att = (q @ k.transpose(-2, -1)) * scale att = torch.masked_fill(att, ~mask, torch.finfo(att.dtype).min) att = torch.nn.functional.softmax(att, dim=-1) return att @ v y = torch_sdpa(q, k, v, mask) print(torch.isnan(y).any()) y = naive_sdpa_1(q, k, v, mask) print(torch.isnan(y).any()) y = naive_sdpa_2(q, k, v, mask) print(torch.isnan(y).any()) ``` This seems to suggest that there's a precision issue. Is this expected or known? Could `scaled_dot_product_attention` be fixed? Side question: do I lose flash attn if I append the padding mask like this? Thank you for your help. Maybe linked: https://github.com/pytorch/pytorch/issues/101967 ### Versions Current master cc @ezyang @gchanan @zou3519 @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @msaroufim @wconstab @bdhirsh @anijain2305
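Not a fix for SDPA itself, but a hedged workaround sketch in the meantime, reusing `q`, `k`, `v`, `mask`, and `scale` from the repro above: query rows whose mask is entirely `False` softmax over all `-inf` and therefore produce NaN, so one option is to run SDPA as usual and then zero out the outputs of the fully-masked (padding) query rows, whose values are meaningless anyway.

```python
import torch

def sdpa_ignore_padded_rows(q, k, v, mask, scale):
    out = torch.nn.functional.scaled_dot_product_attention(
        q, k, v, mask, dropout_p=0.0, scale=scale
    )
    # query rows with no valid key position produce NaN; overwrite them with 0
    fully_masked = ~mask.any(dim=-1, keepdim=True)
    return out.masked_fill(fully_masked, 0.0)

print(torch.isnan(sdpa_ignore_padded_rows(q, k, v, mask, scale)).any())  # tensor(False)
```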
31
2,270
103,737
[FSDP] Training throughput slows down after loading the sharded optimizer state dict
triaged, module: fsdp
### 🐛 Describe the bug Training throughput slow down when loaded optimizer shard state dict. 1) train model and save model dict/optimizer dict with sharded state, the training throughput like: ![image](https://github.com/pytorch/pytorch/assets/42488903/9724fd6b-b288-4d02-a6b7-e44b8ba71e24) 2) load model dict/optimizer dict, continue the training ![image](https://github.com/pytorch/pytorch/assets/42488903/650221ca-9d87-4044-8039-f0af6d21dcad) The elapsed time per iteration slower than before. If we do not load optimizer(just load model), the speed is same as before. If we use full/local state dict(save and load), the throughput is also same as before. So only optimizer shard state dict has such problem. So I use tensor board to debug what happen, I found train from optimizer shard state dict cause a lot of CUDA free as bellow: ![image](https://github.com/pytorch/pytorch/assets/42488903/78e4e329-6f94-4bc2-9b91-6b0c9b6a708a) ![image](https://github.com/pytorch/pytorch/assets/42488903/8c4b494f-0ec2-45b7-bfc6-e3a679f5dec0) Why train from optimizer shard state dict has such problem? ### Versions PyTorch version: 2.1.0.dev20230522+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.1 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: Could not collect CMake version: version 3.19.6 Libc version: glibc-2.31 Python version: 3.8.8 (default, Feb 24 2021, 21:46:12) [GCC 7.3.0] (64-bit runtime) Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 Is CUDA available: True CUDA runtime version: 11.7.64 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 470.129.06 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 256 On-line CPU(s) list: 0-255 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7742 64-Core Processor Stepping: 0 Frequency boost: enabled CPU MHz: 3346.395 CPU max MHz: 2250.0000 CPU min MHz: 1500.0000 BogoMIPS: 4491.12 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 64 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-15,128-143 NUMA node1 CPU(s): 16-31,144-159 NUMA node2 CPU(s): 32-47,160-175 
NUMA node3 CPU(s): 48-63,176-191 NUMA node4 CPU(s): 64-79,192-207 NUMA node5 CPU(s): 80-95,208-223 NUMA node6 CPU(s): 96-111,224-239 NUMA node7 CPU(s): 112-127,240-255 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] flake8==3.7.9 [pip3] numpy==1.19.2 [pip3] nvidia-dlprof-pytorch-nvtx==1.0.0 [pip3] pytorch-lightning==1.6.5 [pip3] pytorch-quantization==2.1.0 [pip3] pytorch-transformers==1.1.0 [pip3] pytorch-triton==2.1.0+7d1a95b046 [pip3] torch==2.1.0.dev20230522+cu117 [pip3] torchaudio==2.1.0.dev20230522+cu117 [pip3] torchdata==0.7.0.dev20230522 [pip3] torchmetrics==0.11.0 [pip3] torchtext==0.16.0.dev20230522+cpu [pip3] torchvision==0.16.0.dev20230522+cu117 [conda] magma-cuda110 2.5.2 5 local [conda] mkl 2019.4 243 [conda] mkl-include 2019.4 243 [conda] nomkl 3.0 0 [conda] numpy 1.19.2 py38h6163131_0 [conda] numpy-base 1.19.2 py38h75fe3a5_0 [conda] nvidia-dlprof-pytorch-nvtx 1.0.0 pypi_0 pypi [conda] pytorch-lightning 1.6.5 pypi_0 pypi [conda] pytorch-quantization 2.1.0 pypi_0 pypi [conda] pytorch-transformers 1.1.0 pypi_0 pypi [conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi [conda] torch 2.1.0.dev20230522+cu117 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230522+cu117 pypi_0 pypi [conda] torchdata 0.7.0.dev20230522 pypi_0 pypi [conda] torchmetrics 0.11.0 pypi_0 pypi [conda] torchtext 0.16.0.dev20230522+cpu pypi_0 pypi [conda] torchvision 0.16.0.dev20230522+cu117 pypi_0 pypi cc @zhaojuanmao @mrshenli @rohan-varma @awgu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
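A small, hedged diagnostic sketch that may help quantify the extra allocator churn: `torch.cuda.memory_stats()` exposes cumulative counters, and a `segment.all.freed` count that keeps growing between iterations suggests the caching allocator is repeatedly returning segments to CUDA (the cudaFree calls visible in the trace). Comparing these counters for a run with and without loading the sharded optimizer state should show whether the slowdown tracks allocator activity. The key names are taken from the memory_stats documentation; adjust if they differ in your build.

```python
import torch

def log_allocator_stats(tag: str, device: int = 0) -> None:
    stats = torch.cuda.memory_stats(device)
    print(
        f"{tag}: "
        f"segments_freed={stats['segment.all.freed']} "
        f"alloc_retries={stats['num_alloc_retries']} "
        f"reserved={stats['reserved_bytes.all.current'] / 2**20:.0f} MiB"
    )

# call once before and once after a few training iterations, e.g.:
# log_allocator_stats("before step"); train_step(); log_allocator_stats("after step")
```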
5
2,271
103,730
[FSDP] Saving a model checkpoint with StateDictType.LOCAL_STATE_DICT and LocalStateDictConfig(offload_to_cpu=True) fails
triaged, module: fsdp
### 🐛 Describe the bug Enable cpu offload and save FSDP model dict with local state dictionary fail with below error: ``` Traceback (most recent call last): File "train_llama_fsdp_datasets.py", line 219, in <module> trainer.do_train( File "/home/elrond/code/flagai-internal/flagai/env_trainer_v1.py", line 783, in do_train save_checkpoint(self.iteration+1, File "/home/elrond/code/flagai-internal/flagai/utils.py", line 239, in save_checkpoint fsdp_sd = model.state_dict() File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1833, in state_dict module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1833, in state_dict module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1833, in state_dict module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1835, in state_dict hook_result = hook(self, destination, prefix, local_metadata) File "/opt/conda/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 674, in _post_state_dict_hook processed_state_dict = _post_state_dict_hook_fn[fsdp_state._state_dict_type]( File "/opt/conda/lib/python3.8/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 439, in _local_post_state_dict_hook sharded_tensor = sharded_tensor.cpu() File "/opt/conda/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 503, in cpu st_cpu = ShardedTensor._init_from_local_shards_and_global_metadata( File "/opt/conda/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 945, in _init_from_local_shards_and_global_metadata _raise_if_mismatch( File "/opt/conda/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 897, in _raise_if_mismatch raise ValueError( ValueError: Local shards' tensor requires_grad property is incompatible with tensor property on rank 3: tensor property requires_grad=True, local shard tensor requires_grad=False. 
``` Code like: ```python with FSDP.state_dict_type( model, StateDictType.LOCAL_STATE_DICT, LocalStateDictConfig(offload_to_cpu=True) ): fsdp_sd = model.state_dict() ``` local state dictionary work well when disable cpu offload, like ```python with FSDP.state_dict_type( model, StateDictType.LOCAL_STATE_DICT, LocalStateDictConfig(offload_to_cpu=False) ): fsdp_sd = model.state_dict() ``` ### Versions PyTorch version: 2.1.0.dev20230522+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.1 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: Could not collect CMake version: version 3.19.6 Libc version: glibc-2.31 Python version: 3.8.8 (default, Feb 24 2021, 21:46:12) [GCC 7.3.0] (64-bit runtime) Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 Is CUDA available: True CUDA runtime version: 11.7.64 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 470.129.06 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 /usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 256 On-line CPU(s) list: 0-255 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7742 64-Core Processor Stepping: 0 Frequency boost: enabled CPU MHz: 3346.395 CPU max MHz: 2250.0000 CPU min MHz: 1500.0000 BogoMIPS: 4491.12 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 64 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-15,128-143 NUMA node1 CPU(s): 16-31,144-159 NUMA node2 CPU(s): 32-47,160-175 NUMA node3 CPU(s): 48-63,176-191 NUMA node4 CPU(s): 64-79,192-207 NUMA node5 CPU(s): 80-95,208-223 NUMA node6 CPU(s): 96-111,224-239 NUMA node7 CPU(s): 112-127,240-255 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu 
vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] flake8==3.7.9 [pip3] numpy==1.19.2 [pip3] nvidia-dlprof-pytorch-nvtx==1.0.0 [pip3] pytorch-lightning==1.6.5 [pip3] pytorch-quantization==2.1.0 [pip3] pytorch-transformers==1.1.0 [pip3] pytorch-triton==2.1.0+7d1a95b046 [pip3] torch==2.1.0.dev20230522+cu117 [pip3] torchaudio==2.1.0.dev20230522+cu117 [pip3] torchdata==0.7.0.dev20230522 [pip3] torchmetrics==0.11.0 [pip3] torchtext==0.16.0.dev20230522+cpu [pip3] torchvision==0.16.0.dev20230522+cu117 [conda] magma-cuda110 2.5.2 5 local [conda] mkl 2019.4 243 [conda] mkl-include 2019.4 243 [conda] nomkl 3.0 0 [conda] numpy 1.19.2 py38h6163131_0 [conda] numpy-base 1.19.2 py38h75fe3a5_0 [conda] nvidia-dlprof-pytorch-nvtx 1.0.0 pypi_0 pypi [conda] pytorch-lightning 1.6.5 pypi_0 pypi [conda] pytorch-quantization 2.1.0 pypi_0 pypi [conda] pytorch-transformers 1.1.0 pypi_0 pypi [conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi [conda] torch 2.1.0.dev20230522+cu117 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230522+cu117 pypi_0 pypi [conda] torchdata 0.7.0.dev20230522 pypi_0 pypi [conda] torchmetrics 0.11.0 pypi_0 pypi [conda] torchtext 0.16.0.dev20230522+cpu pypi_0 pypi [conda] torchvision 0.16.0.dev20230522+cu117 pypi_0 pypi cc @zhaojuanmao @mrshenli @rohan-varma @awgu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
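As a hedged interim workaround (not a fix for the requires_grad mismatch itself), one can skip `offload_to_cpu=True` and move the local shards to CPU manually after grabbing the local state dict; detaching first sidesteps the requires_grad consistency check in `ShardedTensor._init_from_local_shards_and_global_metadata`. The helper below is hypothetical and only keeps the raw local shard tensors, so it is suitable for a rank-local backup rather than for re-loading through the ShardedTensor path.

```python
import torch
from torch.distributed._shard.sharded_tensor import ShardedTensor
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    LocalStateDictConfig,
    StateDictType,
)

def local_state_dict_on_cpu(model: FSDP) -> dict:
    with FSDP.state_dict_type(
        model, StateDictType.LOCAL_STATE_DICT, LocalStateDictConfig(offload_to_cpu=False)
    ):
        sd = model.state_dict()
    cpu_sd = {}
    for name, value in sd.items():
        if isinstance(value, ShardedTensor):
            # keep only this rank's shards, detached and moved to host memory
            cpu_sd[name] = [s.tensor.detach().cpu() for s in value.local_shards()]
        elif isinstance(value, torch.Tensor):
            cpu_sd[name] = value.detach().cpu()
        else:
            cpu_sd[name] = value
    return cpu_sd
```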
3
2,272
103,727
torch.compile() bug in AOTAutograd or Dynamo
triaged, oncall: pt2
### 🐛 Describe the bug Our training model used to run correctly. And we tired torch.compile() for our model. In epcoh 4, there is an error. Here is the detail outputs: 2%|▏ | 1/50 [10:17<8:24:40, 617.98s/it]model saved Epoch 1/50, Train Loss: 1.0756, Validation Loss: 1.0851, Duration: 0:10:17.977241, Best Val Epoch: 0 4%|▍ | 2/50 [19:54<7:54:47, 593.49s/it]model saved Epoch 2/50, Train Loss: 0.9921, Validation Loss: 0.9738, Duration: 0:09:36.346815, Best Val Epoch: 1 6%|▌ | 3/50 [29:30<7:38:43, 585.61s/it]model saved Epoch 3/50, Train Loss: 0.9042, Validation Loss: 0.9325, Duration: 0:09:36.220263, Best Val Epoch: 2 8%|▊ | 4/50 [39:06<7:29:46, 586.65s/it]model saved Epoch 4/50, Train Loss: 0.8477, Validation Loss: 0.9207, Duration: 0:09:36.039177, Best Val Epoch: 3 --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) [<ipython-input-3-86742d6aee9f>](https://localhost:8080/#) in <cell line: 387>() 389 model.load_state_dict(torch.load(model_save_path, weights_only=True)) 390 else: --> 391 train_losses, val_losses = batch_gd(model, criterion, optimizer, epochs) 392 393 plt.figure(figsize=(15,6)) 8 frames [/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py](https://localhost:8080/#) in debug_compiled_function(*args) 2396 assert not isinstance(a, Tensor) 2397 elif not can_require_grad: -> 2398 assert not a.requires_grad, format_guard_bug_msg( 2399 aot_config, 2400 f"{describe_input(i, aot_config)} would not require grad", AssertionError: At compilation time, graph 8 was compiled under the assumption that parameter/buffer 1 would not require grad, but at runtime this was not the case. This indicates a guard bug in AOTAutograd or Dynamo, please file a bug to PyTorch. The relevent code to the issue is we costomized some parameters during the training: ``` def batch_gd(model, criterion, optimizer, epochs): train_losses = np.zeros(epochs) test_losses = np.zeros(epochs) best_test_loss = np.inf best_test_epoch = 0 i = 0 cont = 0 for it in tqdm(range(epochs)): if (it == 4): model.axial_height_1.f_qr.requires_grad = True model.axial_height_1.f_kr.requires_grad = True model.axial_height_1.f_sve.requires_grad = True model.axial_height_1.f_sv.requires_grad = True model.axial_width_1.f_qr.requires_grad = True model.axial_width_1.f_kr.requires_grad = True model.axial_width_1.f_sve.requires_grad = True model.axial_width_1.f_sv.requires_grad = True model.axial_height_2.f_qr.requires_grad = True model.axial_height_2.f_kr.requires_grad = True model.axial_height_2.f_sve.requires_grad = True model.axial_height_2.f_sv.requires_grad = True model.axial_width_2.f_qr.requires_grad = True model.axial_width_2.f_kr.requires_grad = True model.axial_width_2.f_sve.requires_grad = True model.axial_width_2.f_sv.requires_grad = True model.train() ``` ### Versions We ran the code on google clobe. torch version: 2.0.1+cu118. cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
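A hedged workaround sketch, not a fix for the guard bug itself: since the compiled graphs were specialized on the original requires_grad flags, explicitly clearing Dynamo's caches right after flipping the flags forces a recompile under the new assumptions. The attribute names below are taken from the snippet above; the epoch-4 branch is otherwise unchanged.

```python
import torch

if it == 4:
    for block in (model.axial_height_1, model.axial_width_1,
                  model.axial_height_2, model.axial_width_2):
        for p in (block.f_qr, block.f_kr, block.f_sve, block.f_sv):
            p.requires_grad = True
    # throw away graphs compiled under the old requires_grad assumptions
    torch._dynamo.reset()
```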
2
2,273
103,719
DataParallel interfering with TorchDispatchMode
high priority, oncall: distributed, triaged, module: data parallel, module: __torch_dispatch__, oncall: pt2
### 🐛 Describe the bug TorchDispatchMode doesn't get all the ops that are used when used in conjunction with DDP. [Code Sample (Gist)](https://gist.github.com/jimwu6/5509839df956a0d495c02102c6760bf3) which is a slimmed down version of [official PyTorch Code for ResNet](https://github.com/pytorch/examples/blob/main/imagenet/main.py) with an additional `CustomDispatchMode` class being used. Expected behaviour can be found using `python custom_dispatch.py --gpu 0`, where ops like these will be printed: ``` torch._ops.aten.relu_.default torch._ops.aten.detach.default torch._ops.aten.convolution.default torch._ops.aten.add_.Tensor torch._ops.aten.cudnn_batch_norm.default torch._ops.aten.convolution.default torch._ops.aten.add_.Tensor torch._ops.aten.cudnn_batch_norm.default torch._ops.aten.add_.Tensor ``` The behaviour seen is that when running using `python custom_dispatch.py`, which uses DDP (if you have more than one GPU visible/on the system, which is needed to reproduce - I've seen that only a multiple GPU system with `CUDA_VISIBLE_DEVICES=0` will not trigger this error), the output doesn't contain convolution or add, but instead contains mostly ops such as ``` torch._ops.aten.slice.Tensor torch._ops.aten.view.default torch._ops.aten.detach.default ``` where we would expect the output to contain items from the first code block. I've been able to reproduce this on the latest PyTorch nightly (`2.1.0.dev20230615+cu121`) and a built-from-source version of `2.0.0`. ### Versions ``` Collecting environment information... PyTorch version: 2.1.0.dev20230615+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.5 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0] (64-bit runtime) Python platform: Linux-5.4.0-64-generic-x86_64-with-glibc2.10 Is CUDA available: True CUDA runtime version: 12.0.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM-80GB GPU 1: NVIDIA A100-SXM-80GB GPU 2: NVIDIA A100-SXM-80GB GPU 3: NVIDIA A100-SXM-80GB GPU 4: NVIDIA A100-SXM-80GB GPU 5: NVIDIA A100-SXM-80GB GPU 6: NVIDIA A100-SXM-80GB GPU 7: NVIDIA A100-SXM-80GB Nvidia driver version: 470.57.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 1 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 25 Model: 1 Model name: AMD EPYC 7763 64-Core Processor Stepping: 1 Frequency boost: enabled CPU MHz: 3250.104 CPU max MHz: 2450.0000 CPU min MHz: 1500.0000 BogoMIPS: 4900.16 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 64 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 NUMA node2 CPU(s): 32-47 NUMA node3 CPU(s): 48-63 NUMA node4 CPU(s): 64-79 NUMA node5 CPU(s): 80-95 NUMA node6 CPU(s): 96-111 NUMA node7 CPU(s): 112-127 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs 
barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] pytorch-triton==2.1.0+440fd1bf20 [pip3] torch==2.1.0.dev20230615+cu121 [pip3] torchaudio==2.1.0.dev20230615+cu121 [pip3] torchvision==0.16.0.dev20230615+cu121 [conda] numpy 1.24.3 pypi_0 pypi [conda] torchvision 0.16.0.dev20230615+cu121 pypi_0 pypi ``` cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @Chillee @albanD @samdow @msaroufim @wconstab @bdhirsh @anijain2305
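For reference, a minimal dispatch-mode sketch along the lines of what the linked gist presumably does (the exact `CustomDispatchMode` is not quoted here, so this is an assumption): it logs every aten op it intercepts, and it is the kind of mode that sees `aten.convolution.default` in the single-GPU run but not in the DDP run described above.

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LogOpsMode(TorchDispatchMode):
    """Hypothetical stand-in for the gist's CustomDispatchMode: print each aten op."""

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)  # e.g. torch._ops.aten.convolution.default
        return func(*args, **(kwargs or {}))

# usage: the mode must be entered in every worker process that runs the model
# with LogOpsMode():
#     output = model(images)
```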
4
2,274
103,716
Non-actionable perf hint: reduction over non-contiguous dims
triaged, enhancement, oncall: pt2, module: inductor
### 🐛 Describe the bug This is the exact same repro as https://github.com/pytorch/pytorch/issues/103715 but a different issue When using SAM with SDPA and torch.compile I get this perf hint coming from `torch/_inductor/codegen/triton.py` ``` reduction over non-contiguous dims reduction over non-contiguous dims ``` But it doesn't really tell me what I can change in my model code to fix this or which line of code in my program is causing this ## Setup ``` pip install git+https://github.com/cpuhrsch/saf.git@msaroufim/sdpa pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121 pip install nvtx wget https://gist.githubusercontent.com/msaroufim/1002c17469fc670fc5af27dda12fa7e4/raw/c22a381f359da08c7efb4c87959efbd00f688bac/benchmark_segment_anything.py ``` ## Command ``` python benchmark_segment_anything.py --model_type "vit_h" --checkpoint_path "/data/home/vasiliy/cluster/sam_dataset/sam_vit_h_4b8939.pth" --compiled 1 --profile 0 --max_batch_size 30 --block_only 0 --quantized 0 ``` ## Logs OOM issue was reported seperately ``` (base) marksaroufim@a100-st-p4d24xlarge-42:~$ python bench2.py --model_type "vit_h" --checkpoint_path "/data/home/vasiliy/cluster/sam_dataset/sam_vit_h_4b8939.pth" --compiled 1 --profile 0 --max_batch_size 30 --block_only 0 --quantized 0 /data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/utils/benchmark/utils/timer.py:16: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()' if torch.has_cuda and torch.cuda.is_available(): Namespace(model_type='vit_h', checkpoint_path='/data/home/vasiliy/cluster/sam_dataset/sam_vit_h_4b8939.pth', max_batch_size=30, quantized=0, compiled=1, block_only=0, profile=0) AUTOTUNE convolution(30x3x1024x1024, 1280x3x16x16) convolution 14.1885 ms 100.0% triton_convolution_0 35.4304 ms 40.0% triton_convolution_3 51.1130 ms 27.8% triton_convolution_1 53.7979 ms 26.4% triton_convolution_6 57.7577 ms 24.6% triton_convolution_5 62.8029 ms 22.6% triton_convolution_4 93.8199 ms 15.1% triton_convolution_2 104.7224 ms 13.5% SingleProcess AUTOTUNE takes 13.5646 seconds AUTOTUNE mm(147000x1280, 1280x3840) mm 6.2659 ms 100.0% triton_mm_8 7.3764 ms 84.9% triton_mm_9 7.5295 ms 83.2% triton_mm_10 8.7255 ms 71.8% triton_mm_11 8.7378 ms 71.7% triton_mm_7 9.6256 ms 65.1% triton_mm_15 10.3578 ms 60.5% triton_mm_14 11.8845 ms 52.7% triton_mm_17 14.8060 ms 42.3% triton_mm_16 19.0935 ms 32.8% SingleProcess AUTOTUNE takes 9.0483 seconds AUTOTUNE bmm(12000x196x80, 12000x80x196) triton_bmm_21 1.6814 ms 100.0% triton_bmm_20 1.6988 ms 99.0% triton_bmm_26 1.7674 ms 95.1% triton_bmm_19 1.8811 ms 89.4% triton_bmm_23 1.8862 ms 89.1% triton_bmm_22 1.9661 ms 85.5% triton_bmm_29 2.0142 ms 83.5% bmm 2.3460 ms 71.7% triton_bmm_30 2.4054 ms 69.9% triton_bmm_27 3.0884 ms 54.4% SingleProcess AUTOTUNE takes 6.8304 seconds AUTOTUNE bmm(12000x196x196, 12000x196x80) triton_bmm_32 1.4879 ms 100.0% triton_bmm_34 1.6292 ms 91.3% triton_bmm_33 1.6312 ms 91.2% triton_bmm_35 1.7828 ms 83.5% bmm 1.7992 ms 82.7% triton_bmm_41 2.0500 ms 72.6% triton_bmm_31 2.0634 ms 72.1% triton_bmm_39 2.4177 ms 61.5% triton_bmm_38 2.4668 ms 60.3% triton_bmm_42 2.4893 ms 59.8% SingleProcess AUTOTUNE takes 6.5458 seconds AUTOTUNE mm(122880x1280, 1280x5120) mm 7.3851 ms 100.0% triton_mm_44 8.4147 ms 87.8% triton_mm_45 8.5765 ms 86.1% triton_mm_46 9.8478 ms 75.0% triton_mm_47 9.9625 ms 74.1% triton_mm_43 11.2773 ms 65.5% triton_mm_51 11.6705 ms 63.3% triton_mm_50 13.6346 ms 54.2% triton_mm_53 16.8397 ms 43.9% triton_mm_52 
21.2070 ms 34.8% SingleProcess AUTOTUNE takes 7.3337 seconds AUTOTUNE mm(122880x5120, 5120x1280) triton_mm_57 8.2811 ms 100.0% triton_mm_56 8.2949 ms 99.8% triton_mm_58 9.3793 ms 88.3% triton_mm_59 9.6061 ms 86.2% mm 10.5482 ms 78.5% triton_mm_63 11.1032 ms 74.6% triton_mm_55 11.1974 ms 74.0% triton_mm_62 13.4717 ms 61.5% triton_mm_65 17.1244 ms 48.4% triton_mm_60 20.5896 ms 40.2% SingleProcess AUTOTUNE takes 7.2971 seconds AUTOTUNE mm(122880x1280, 1280x3840) mm 5.2255 ms 100.0% triton_mm_428 6.1379 ms 85.1% triton_mm_429 6.2868 ms 83.1% triton_mm_430 7.3503 ms 71.1% triton_mm_431 7.4537 ms 70.1% triton_mm_427 8.1193 ms 64.4% triton_mm_435 8.6641 ms 60.3% triton_mm_434 9.9072 ms 52.7% triton_mm_437 12.3535 ms 42.3% triton_mm_436 15.9247 ms 32.8% SingleProcess AUTOTUNE takes 7.0755 seconds AUTOTUNE convolution(30x1280x64x64, 256x1280x1x1) convolution 0.8335 ms 100.0% triton_convolution_1831 2.1627 ms 38.5% triton_convolution_1832 2.2426 ms 37.2% triton_convolution_1834 2.2477 ms 37.1% triton_convolution_1836 2.2958 ms 36.3% triton_convolution_1835 2.4504 ms 34.0% triton_convolution_1837 2.6307 ms 31.7% conv1x1_via_mm 11.3367 ms 7.4% triton_convolution_1833 17.4961 ms 4.8% SingleProcess AUTOTUNE takes 7.2176 seconds AUTOTUNE convolution(30x256x64x64, 256x256x3x3) convolution 1.3814 ms 100.0% triton_convolution_1839 3.9660 ms 34.8% triton_convolution_1844 4.0305 ms 34.3% triton_convolution_1841 4.9162 ms 28.1% triton_convolution_1842 7.7573 ms 17.8% triton_convolution_1843 7.8566 ms 17.6% triton_convolution_1838 8.1787 ms 16.9% triton_convolution_1840 33.1822 ms 4.2% SingleProcess AUTOTUNE takes 6.9797 seconds reduction over non-contiguous dims reduction over non-contiguous dims /data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py:85: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()' if torch.has_cuda: Traceback (most recent call last): File "/data/home/marksaroufim/bench2.py", line 144, in <module> main() File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/data/home/marksaroufim/bench2.py", line 123, in main model(image) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn return fn(*args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/segment_anything/modeling/image_encoder.py", line 106, in forward def forward(self, x: torch.Tensor) -> torch.Tensor: File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn return fn(*args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner return fn(*args, **kwargs) File 
"/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3761, in forward return compiled_fn(full_args) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1449, in g return f(*args) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2407, in runtime_wrapper all_outs = call_func_with_args( File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1473, in call_func_with_args out = normalize_as_list(f(args)) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1561, in rng_functionalization_wrapper return compiled_fw(args) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 457, in run return model(new_inputs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 499, in run return compiled_fn(new_inputs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 360, in deferred_cudagraphify fn, out = cudagraphify(model, inputs, static_input_idxs, *args, **kwargs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 384, in cudagraphify return manager.add_function( File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1856, in add_function return fn, fn(inputs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1676, in run out = self._run(new_inputs, function_id) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1717, in _run return self.run_eager(new_inputs, function_id) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1832, in run_eager return node.run(new_inputs) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 572, in run out = self.wrapped_function.model(new_inputs) File "/tmp/torchinductor_marksaroufim/va/cva4viinufgivaehrtjftdor756fg7lfvjjbvhjw525cii43a76w.py", line 3329, in call buf156 = empty_strided((480, 4096, 4096), (16777216, 4096, 1), device='cuda', dtype=torch.float16) File "/data/home/marksaroufim/miniconda/lib/python3.10/site-packages/torch/utils/_device.py", line 76, in __torch_function__ return func(*args, **kwargs) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 15.00 GiB. GPU 0 has a total capacty of 39.56 GiB of which 11.15 GiB is free. Including non-PyTorch memory, this process has 28.41 GiB memory in use. Of the allocated memory 25.28 GiB is allocated by PyTorch, and 1009.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` ### Error logs _No response_ ### Minified repro _No response_ ### Versions nightlies on aws cluster cc @ezyang @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
3
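For readers hitting the same hint: it refers to reductions whose reduced dimension is strided in memory, independent of this particular model. A generic illustration (not the SAM code path) follows; the hint is asking for the reduction to walk the fast-moving dimension, which typically means reordering the computation or materializing a contiguous layout first.

```python
import torch

x = torch.randn(4096, 4096, device="cuda")

# reduces over dim 0: each output element gathers values strided by 4096 floats
a = x.sum(dim=0)

# same result, but the reduction now runs over a contiguous (fast-moving) dim
b = x.t().contiguous().sum(dim=-1)

print(torch.allclose(a, b, atol=1e-3))  # True, up to float reordering differences
```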
2,275
103,713
[ONNX] Discuss improvements to Diagnostic public API
module: onnx, triaged
### 🐛 Describe the bug Currently, when we use the diagnostic system through `@diagnostics.diagnose_call`, we need to create an extra utility function that is usually used only once because it is function-signature-specific. To diagnose ```python def foo(a: int, b: float, c: bool): pass ``` we first need to create a helper: ```python def _foo_formatter(fx: Callable, diagnostic_context: diagnostics.DiagnosticContext, a: int, b: float, c: bool): return "string_consuming_a_or_b_or_c" ``` and then decorate the function: ```python @diagnostics.diagnose_call( diagnostics.rules.XXX, diagnostic_message_formatter=_foo_formatter, ) ``` This is error prone because * every time the function/method signature changes, the utility signature must also change, which creates unnecessary coupling; * the beartype error message is not trivial to understand; * the call site becomes cluttered with single-use utilities. A proposal is to replace the formatter callable with a plain string. We can leverage how decorators are implemented and automatically create a hidden lambda/utility function that mimics the callable's signature: `diagnose_call` receives the callable, inspects it to find the function/method signature, and then builds a temporary function with the inspected signature whose implementation is `return USER_FORMAT_STRING`. This would make the decorator simpler to use, because users would no longer need to change the utility signature every time the callable changes; they would only pass the string they want to print, much like Python format strings already work. ### Versions main
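A rough, hypothetical sketch of the proposal (the names and the emission mechanism are placeholders, not the real diagnostics API): the decorator accepts a plain format string, binds it against the wrapped callable's signature via `inspect`, and thereby removes the need for a per-function formatter.

```python
import functools
import inspect

def diagnose_call(rule, message: str):
    """Hypothetical decorator: format `message` from the wrapped callable's arguments."""
    def decorator(fn):
        sig = inspect.signature(fn)

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            # stand-in for emitting a diagnostic record through the real system
            print(f"[{rule}] {message.format(**bound.arguments)}")
            return fn(*args, **kwargs)

        return wrapper
    return decorator

@diagnose_call(rule="XXX", message="foo called with a={a}, b={b}, c={c}")
def foo(a: int, b: float, c: bool):
    pass

foo(1, 2.0, c=True)  # prints: [XXX] foo called with a=1, b=2.0, c=True
```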
6
2,276
103,706
TorchDynamo assertion with `try: return; finally`
triaged, oncall: pt2, release notes: dynamo
### 🐛 Describe the bug TorchDynamo seems to raise an error if I have something like this. See the error logs and repro instructions below: ``` def forward(self, x): try: return ... finally: do_something() ``` ### Error logs ``` 2.0.0a0+gite9ebda2 Traceback (most recent call last): File "test.py", line 24, in <module> out = net(torch.Tensor([1, 2, 3, 4])) File "/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs) File "/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn return fn(*args, **kwargs) File "/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors return callback(frame, cache_size, hooks) File "/conda/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame result = inner_convert(frame, cache_size, hooks) File "/conda/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn return fn(*args, **kwargs) File "/conda/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert return _compile( File "/conda/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper r = func(*args, **kwargs) File "/conda/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile out_code = transform_code_object(code, transform) File "/conda/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 446, in transform_code_object return clean_and_assemble_instructions(instructions, keys, code_options)[1] File "/conda/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 470, in clean_and_assemble_instructions code_options["co_stacksize"] = stacksize_analysis(instructions) File "/conda/python3.8/site-packages/torch/_dynamo/bytecode_analysis.py", line 185, in stacksize_analysis assert low >= 0 AssertionError: Set torch._dynamo.config.verbose=True for more information You can suppress this exception and fall back to eager by setting: torch._dynamo.config.suppress_errors = True ``` ### Minified repro ``` $ cat test.py import torch import torch.nn as nn print(torch.__version__) class Net(nn.Module): def __init__(self): super().__init__() self.layer = nn.Identity() self.fwd_count = 0 def increment(self): self.fwd_count += 1 def forward(self, x): try: return self.layer(x) finally: self.increment() net = Net() net = net.cuda() net = torch.compile(net) out = net(torch.Tensor([1, 2, 3, 4])) print(out) ``` ### Versions ``` Collecting environment information... PyTorch version: 2.0.0a0+gite9ebda2 Is debug build: False CUDA used to build PyTorch: 12.0 ``` cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
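A hedged workaround sketch while the stack-size analysis is being fixed: keeping the `return` outside the `try`/`finally` preserves the "increment even on exception" semantics while avoiding the `return`-inside-`try ... finally` bytecode pattern; whether it sidesteps the assertion on every version is untested here.

```python
def forward(self, x):
    try:
        out = self.layer(x)
    finally:
        self.increment()  # still runs on exceptions, as in the original
    return out
```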
1
2,277
103,686
[dtensor] introduce experimental dmap
release notes: distributed (dtensor)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #103686 This PR introduces `dmap`, which allows users to write functions that see the local shard view inside the function body, i.e. to write manual collectives. `dmap` can work together with DTensor operations: we can do some DTensor computation first, then call a dmap function to perform manual collectives, then use the results for additional DTensor computations. Given that we already have APIs to explicitly convert torch.Tensor to/from DTensor, this API just serves as a convenient tool for writing simple functions that work seamlessly with DTensor. Caveat: manual collectives always need to be coupled with autograd.Function so that they work in backward. Ideally we would just give our functional collectives gradient formulas, but that might be a bit tricky to implement.
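For context, a minimal sketch of the existing explicit to/from-local path that `dmap` is meant to wrap; it assumes the default process group is already initialized (e.g. via torchrun), and the "manual" section would need its own autograd.Function if gradients must flow through it.

```python
import torch
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh, DTensor, Shard, distribute_tensor

mesh = DeviceMesh("cuda", list(range(dist.get_world_size())))
# in real code the global tensor should be identical across ranks
dt = distribute_tensor(torch.randn(8, 4), mesh, [Shard(0)])

local = dt.to_local()                       # this rank's shard
local = local * 2                           # "manual" per-shard computation / collectives
dt2 = DTensor.from_local(local, mesh, [Shard(0)])
```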
3
2,278
103,683
fairseq distributed training dumps core with flash attention
triaged, oncall: transformer/mha
### 🐛 Describe the bug When doing distributed training with Fairseq for an NMT model, flash attention core dumps. (Adding here for issue tracking, internal discussion for details). The reported source location of the core dump is consistent across multiple repros: ``` Opening GPU coredump: /tmp/cudacoredump_6-gpu_A100.devgpu003.prn6.facebook.com.3918025.1686793663 CUDA Exception: Warp Illegal Address The exception was triggered at PC 0x7f3279226060 (utils.h:819 in _ZN4fmha11Ldg_functorI5uint4Li1EE4loadEib inlined from utils.h:120) [Current focus set to CUDA kernel 0, grid 200269, block (0,0,0), thread (0,0,0), device 0, sm 0, warp 1, lane 0] #0 fmha_bwd_dot_do_o_kernel<FMHA_kernel_traits<256, 64, 16, 1, 8, 256u, __half> ><<<(2,16,3),(256,1,1)>>> () at /re_cwd/./fbcode/third-party-buck/platform010/build/cuda/11.4.2/include/cuda_fp16.hpp:733 in _ZN54_INTERNAL_32_fmha_bwd_hdim64_cu_pic_o_cpp1_ii_a4e28c6c14__half22float2E7__half2 inlined from cuda_fp16.hpp:539 733 /re_cwd/./fbcode/third-party-buck/platform010/build/cuda/11.4.2/include/cuda_fp16.hpp: No such file or directory. ``` Repro internally by Wei Ho. cc: @drisspg ### Versions fbcode trunk head cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
0
2,279
103,682
(fsdp) Support for accessing unsharded parameters for methods other than `forward()`
triaged, module: fsdp
### 🚀 The feature, motivation and pitch Certain models require execution methods other than `forward()`. A good example family of models are generation models, for example Huggingface's `transformers.PreTrainedModel`. Here, `PreTrainedModel.forward()` is used for training, but during evaluation (which is often in the validation part of the training loop), users need to call `PreTrainedModel.generate()`. Once the model is FSDP-wrapped, however, the user cannot use methods other than `forward()` to access unsharded parameters. As reported in https://github.com/pytorch/pytorch/issues/82461 , for Encoder-Decoder models, calling `PreTrainedModel.generate()` with FSDP-wrapped model will result in an error due to a direct access to a module (`_fsdp_wrapped_module.encoder`) that is not FSDP-wrapped. The recommendation in this issue was to individually FSDP-wrap `encoder`, but this is not a feasible solution for the case of T5 models, because embedding parameters should be shared across Encoder and Decoder; this results in errors during training. For this reason, HuggingFace Transformer team recommends a hack https://github.com/huggingface/transformers/issues/21667 which is not to wrap `encoder`, but to run `PreTrainedModel.forward()` (with an arbitrary input) before `PreTrainedModel.generate()`, because `forward()`'s side effect seems to populate unsharded parameters. In order to support methods other than `forward()`, we could generalize `FullyShardedDataParallel.forward()` to support any function between `_pre_forward()` and `_post_forward()`, making a single-line change to https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/fully_sharded_data_parallel.py#L784-L806 : ```python class ExecutableFSDP(FullyShardedDataParallel): """ Adds ``execute()`` to FullyShardedDataParallel, which allows access to unsharded parameters. .. note:: This is mostly a copy-paste of https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/fully_sharded_data_parallel.py#L784-L806 """ # noinspection PyNoneFunctionAssignment def execute(self, fcn: Callable[..., Any], *args: Any, **kwargs: Any) -> Any: """ Allows user to run the given ``fcn``, which would access unsharded parameters of the model. .. note:: Mostly, copy & paste from ``FullyShardedDataParallel.forward()``. """ with torch.autograd.profiler.record_function( "FullyShardedDataParallel.forward" ): args, kwargs = _root_pre_forward(self, self, args, kwargs) unused = None unshard_fn = functools.partial(_pre_forward_unshard, self, self._handles) reshard_fn = functools.partial(_post_forward_reshard, self, self._handles) args, kwargs = _pre_forward( self, self._handles, unshard_fn, self._fsdp_wrapped_module, args, kwargs ) for handle in self._handles: p_assert( handle.flat_param.device == self.compute_device, "Expected `FlatParameter` to be on the compute device " f"{self.compute_device} but got {handle.flat_param.device}", ) # !!! this is the only line we change from ``FullyShardedDataParallel.forward()`` !!! output = fcn(*args, **kwargs) return _post_forward(self, self._handles, reshard_fn, self, unused, output) class GenerationFSDP(ExecutableFSDP): """ Adds support for ``PreTrainedModel.generate()`` for ``FullyShardedDataParallel`` models. """ def __init__(self, module: PreTrainedModel, *args: Any, **kwargs: Any): super().__init__(module, *args, **kwargs) def generate( self, *args: Any, **kwargs: Any ) -> Union[GenerateOutput, torch.LongTensor]: """ Runs ``PreTrainedModel.generate()`` with access to unsharded parameters. 
""" return self.execute( self._fsdp_wrapped_module.generate, *args, **kwargs, ) ``` Of course, such a copy-paste approach is bound to fail when the implementation of `FullyShardedDataParallel.forward()` changes. Hence I was hoping this feature could be directly supported in `FullyShardedDataParallel`. If pytorch team wouldn't plan for supporting such feature, would the team advise users for a different approach? ### Alternatives Continue users to use the hack to run `forward()` before `generate()`, mentioned in https://github.com/pytorch/pytorch/issues/82461#issuecomment-1203377083 and recommended in https://github.com/huggingface/transformers/issues/21667 ### Additional context _No response_ cc @zhaojuanmao @mrshenli @rohan-varma @awgu @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
5
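For completeness, an existing (if memory-hungrier) alternative to the subclassing approach above is to unshard temporarily around the call with `summon_full_params`; a hedged sketch, assuming a HuggingFace-style `generate` on the wrapped module and that each rank has enough memory for the full parameters:

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

with torch.no_grad(), FSDP.summon_full_params(model, writeback=False, recurse=True):
    output_ids = model.module.generate(input_ids, max_new_tokens=32)
```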
2,280
103,681
Exported model with dropout incorrectly applies dropout during eval
module: nn, triaged, module: correctness (silent), oncall: pt2, module: export
Normally calling `model.eval()` on a model with dropout changes the behavior of the dropout ops into a noop. However, this is not true if model is already exported, since the aten dropout pattern remains in the graph. Using the resulting model for inference led to incorrect computation, since we continue to apply dropout to the inputs during eval, which is unexpected. This affects QAT for the PT2 flow, where all transformations are done on an exported model. Simple repro: ``` import torch import torch._dynamo class MyModel(torch.nn.Module): def __init__(self): super().__init__() self.dropout = torch.nn.Dropout(p=0.5) def forward(self, x): return self.dropout(x) model = MyModel() model, _ = torch._dynamo.export(model, torch.randn(1), aten_graph=True) model.eval() print(model) ``` This prints: ``` def forward(self, x): arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) empty_like_default = torch.ops.aten.empty_like.default(arg0, memory_format = torch.contiguous_format) bernoulli__float = torch.ops.aten.bernoulli_.float(empty_like_default); empty_like_default = None div__scalar = torch.ops.aten.div_.Scalar(bernoulli__float, 0.5); bernoulli__float = None mul_tensor = torch.ops.aten.mul.Tensor(arg0, div__scalar); arg0 = div__scalar = None return pytree.tree_unflatten([mul_tensor], self._out_spec) ``` But it's supposed to print: ``` def forward(self, x): arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) return pytree.tree_unflatten([arg0], self._out_spec) ``` cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
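A hedged partial workaround for the pure-inference case, reusing `MyModel` from the repro above (it does not help the QAT flow, where the model must be exported in training mode and toggled later): switching to eval mode before exporting should let dropout trace as a no-op, so the exported graph is expected not to contain the bernoulli/div/mul pattern in the first place.

```python
model = MyModel().eval()  # eval *before* export, so dropout traces as a no-op
exported, _ = torch._dynamo.export(model, torch.randn(1), aten_graph=True)
print(exported)  # expected: input returned essentially unchanged, no bernoulli_/div_ ops
```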
14
2,281
103,676
[dynamo] Add config that turns on tracing through nn modules
Stale, module: dynamo, ciflow/inductor
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #104360 * __->__ #103676 * #103987 After: https://gist.github.com/soulitzer/c747d3e9cb1e241f6c7b9b57c0f84a9b Before: https://gist.github.com/soulitzer/4da95730e40d814aa0c64cdff5c48571 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @anijain2305 @ipiszy @msaroufim
2
2,282
103,672
detectron2_fcos_r_50_fpn and other models have enough graph breaks that we end up with multiple cache entries on module blocks
triaged, module: dynamic shapes
### 🐛 Describe the bug If you run detectron2_fcos_r_50_fpn with automatic_dynamic_shapes, you'll notice that there are dynamic shapes recompilations, which aren't really true dynamic shape recompilations: ``` [2023-06-15 05:43:07,747] torch.fx.experimental.symbolic_shapes: [INFO] 12.0: create_env [2023-06-15 05:43:08,412] torch.fx.experimental.symbolic_shapes: [INFO] 12.0: produce_guards [2023-06-15 05:43:08,423] torch.fx.experimental.symbolic_shapes: [INFO] 12.1: create_env [2023-06-15 05:43:08,425] torch._dynamo.variables.builder: [WARNING] automatic dynamic L['input'] size(1) 256 != 64 [2023-06-15 05:43:08,425] torch.fx.experimental.symbolic_shapes: [INFO] 12.1: create_symbol s0 = 256 for L['input'].size()[1] [2023-06-15 05:43:08,525] torch.fx.experimental.symbolic_shapes: [INFO] 12.1: eval Eq(s0, 256) [guard added] at home/ezyang/local/b/pytorch-env/lib/python3.10/site-packages/detectron2/layers/wrappers.py:106 in forward (_subclasses/fake_tensor.py:612 in conv) [2023-06-15 05:43:09,075] torch.fx.experimental.symbolic_shapes: [INFO] 12.1: produce_guards ``` Instead, they're due to the architecture of detectron2_fcos_r_50_fpn, which stacks a number of similar blocks together but with differing parameters. For Dynamo's intents and purposes, these are all using the same code block, and so each block with differing configuration all gets chucked in the same code cache. I think if our policy is that we want to specialize on every unique parameter size (this seems reasonable to me), it would be best if we could arrange for our compiled code to NOT live on all the same code object. @anijain2305 also mentioned to me that he noticed this could contribute to guard overhead, although ISTR that in the end it wasn't clear how much this actually mattered in the end. ### Versions main
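A tiny, hypothetical illustration of the cache-entry pile-up described above (not detectron2 itself): every `Block.forward` below shares one code object, so each distinct parameter size compiles another specialized graph that lands in that single code cache.

```python
import torch

class Block(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.lin(x))

blocks = [torch.compile(Block(d)) for d in (64, 128, 256, 512)]
# four graphs, each guarded on a different parameter size, all attached to the
# same Block.forward code object
outs = [b(torch.randn(2, d)) for b, d in zip(blocks, (64, 128, 256, 512))]
```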
1
2,283
103,671
"Y.getIntrusivePtr()->set_storage(X.getIntrusivePtr()->storage()); " in C++ is not supported
module: cpp, triaged
### 🚀 The feature, motivation and pitch There is no set_storage() support on the object returned by getIntrusivePtr(). I want to copy from X to Y where X and Y are on different devices (i.e. X.device() != Y.device()). However, the two devices share the same global memory address space, so I would prefer to simply move the storage pointer from X to Y to avoid an expensive data copy. The following C++ code doesn't work for me, as it fails with an undefined `set_storage()`: ```cpp Y.getIntrusivePtr()->set_storage(X.getIntrusivePtr()->storage()) ``` Performing a shallow copy instead: ```cpp Y.getIntrusivePtr()->shallow_copy_from(X.getIntrusivePtr()); ``` is also not what I want, because it changes Y's device to X's device as well, so subsequent computation on Y no longer runs on Y's originally expected device. Is there any alternative way to share the storage, or any plans to support this? ### Alternatives _No response_ ### Additional context _No response_ cc @jbschlosser
0
2,284
103,668
MultiheadAttention should split embed_dim into four parameters
triaged, oncall: transformer/mha
### 🚀 The feature, motivation and pitch The assertions around embed_dim in `nn.MultiheadAttention` and `F.multi_head_attention_forward` are too restrictive. embed_dim currently acts as a “catch-all” parameter, although multi-head attention fundamentally allows four different dimensions to coexist, all of which are currently lumped into `embed_dim`: 1) The input dimension of the query - doesn’t need to be divisible by num_heads, 2) The embed_dim for query and key - this should be divisible by num_heads, 3) The embed_dim for value as well as the input dimension of the out_projection - this should be divisible by num_heads, 4) The output dimension of out_proj - doesn’t need to be divisible by num_heads. ### Alternatives Split embed_dim into four parameters, i.e. 1) qdim, 2) qk_embed_dim, 3) vo_embed_dim, 4) out_dim, and loosen the head-divisibility checks for qdim and out_dim. This would allow much more fine-grained adjustment of the MHA module, e.g. in the context of structured pruning. ### Additional context _No response_ cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg
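A sketch of what the split would permit, written with explicit projections and SDPA (the dimension values are arbitrary examples, not anything mandated by the proposal): only the two internal embed dims need to be divisible by num_heads, while the query input dim and the output dim are free.

```python
import torch

# the four dimensions named above; only the *_embed_dim values must divide by num_heads
qdim, kdim, vdim = 48, 32, 32
qk_embed_dim, vo_embed_dim, out_dim = 64, 96, 40
num_heads = 8

q_proj = torch.nn.Linear(qdim, qk_embed_dim)
k_proj = torch.nn.Linear(kdim, qk_embed_dim)
v_proj = torch.nn.Linear(vdim, vo_embed_dim)
out_proj = torch.nn.Linear(vo_embed_dim, out_dim)

B, Lq, Lk = 2, 5, 7
xq, xk, xv = torch.randn(B, Lq, qdim), torch.randn(B, Lk, kdim), torch.randn(B, Lk, vdim)

hq = q_proj(xq).view(B, Lq, num_heads, -1).transpose(1, 2)
hk = k_proj(xk).view(B, Lk, num_heads, -1).transpose(1, 2)
hv = v_proj(xv).view(B, Lk, num_heads, -1).transpose(1, 2)
attn = torch.nn.functional.scaled_dot_product_attention(hq, hk, hv)
y = out_proj(attn.transpose(1, 2).reshape(B, Lq, -1))
print(y.shape)  # torch.Size([2, 5, 40])
```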
3
2,285
103,652
inductor: support horizontal reduction with vec_transpose to improve TIMM swin_base_patch4_window7_224 dynamic shape performance
triaged, oncall: pt2, module: inductor, module: cpu inductor
### 🚀 The feature, motivation and pitch For TIMM swin_base_patch4_window7_224 dynamic shape, there has a horizontal reduction with vec_transpose: ``` extern "C" void kernel(float* in_out_ptr0, const float* in_ptr0, const float* in_ptr1, const long ks0) { auto out_ptr0 = in_out_ptr0; #pragma omp parallel num_threads(20) { { #pragma omp for for(long i0=static_cast<long>(0L); i0<static_cast<long>(ks0); i0+=static_cast<long>(1L)) { #pragma GCC ivdep for(long i1=static_cast<long>(0L); i1<static_cast<long>(3136L); i1+=static_cast<long>(16L)) { { #pragma omp declare reduction(+:at::vec::Vectorized<float>:omp_out = omp_out + omp_in) initializer(omp_priv={{0}}) float tmp_acc0 = 0; auto tmp_acc0_vec = at::vec::Vectorized<float>(tmp_acc0); for(long i2=static_cast<long>(0L); i2<static_cast<long>(128L); i2+=static_cast<long>(16L)) { for (long i2_inner = 0; i2_inner < 16; i2_inner++) { auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<long>(i1 + (3136L*i2) + (3136L*i2_inner) + (401408L*i0))); auto tmp1 = ([&]() { __at_align__ float tmpbuf[16]; for (long i1_inner = 0; i1_inner < 16; i1_inner++) tmpbuf[i1_inner] = in_ptr1[static_cast<long>(i2 + i2_inner + (128L*(static_cast<long>((static_cast<long>((i1 + i1_inner)) % static_cast<long>(56L))) % static_cast<long>(7L))) + (896L*(static_cast<long>(at::native::div_floor_integer((i1 + i1_inner), 56L)) % static_cast<long>(7L))) + (6272L*(at::native::div_floor_integer((static_cast<long>((i1 + i1_inner)) % static_cast<long>(56L)), 7L))) + (50176L*(at::native::div_floor_integer((i1 + i1_inner), 392L))) + (401408L*i0))]; return at::vec::Vectorized<float>::loadu(tmpbuf); })(); auto tmp2 = tmp0 + tmp1; tmp_acc0_vec = tmp_acc0_vec + tmp2; } } tmp_acc0_vec.store(out_ptr0 + static_cast<long>(i1 + (3136L*i0))); } } } } #pragma omp single { { for(long i0=static_cast<long>(0L); i0<static_cast<long>(3136L*ks0); i0+=static_cast<long>(16L)) { auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<long>(i0)); auto tmp1 = at::vec::Vectorized<float>(static_cast<float>(128.0)); auto tmp2 = tmp0 / tmp1; tmp2.store(in_out_ptr0 + static_cast<long>(i0)); } } } } } ``` https://github.com/pytorch/pytorch/pull/103651 fix the accuracy issue, but the performance can be further improved. ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78
0
2,286
103,650
AOT autograd: Avoid dependency on strides for manual regeneration of outputs that are aliased to inputs
triaged, oncall: pt2, module: aotdispatch
### 🐛 Describe the bug AOT autograd's manual regeneration of outputs assumes strided output tensor support. But backends like Habana (https://docs.habana.ai/en/latest/index.html) produce contiguous tensor outputs. As a result, examples like the one below result in accuracy issues ``` def fn(a): b = a.t() b.mul_(1.0) return b x = torch.arange(6).reshape([2, 3]).to('hpu') print("x ", x.cpu()) compiled_fn = torch.compile(fn, backend="aot_hpu_training_backend") y = compiled_fn(x) print("y ", y.cpu()) ``` **Console prints:** x tensor([[0, 1, 2], [3, 4, 5]]) y tensor([[0, 1], [2, 3], [4, 5]]) Tensor y has incorrect values. The expected output is y tensor([[0, 3], [1, 4], [2, 5]]) Our analysis suggests that the issue occurs in step 4 of runtime_wrapper(), inside gen_alias_from_base(). This function assumes a strided tensor for mul's output. Related bug for the TPU backend: https://github.com/pytorch/xla/issues/5179 **Request:** Please avoid depending on tensor strides for output regeneration. Instead, the information about the original alias operation (transpose in the above example) should be used. ### Versions Collecting environment information... PyTorch version: 2.0.1a0+git7f346ef Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 14.0.5 (ssh://gerrit:29418/tpc_llvm10 38f3bfdbf5b8581125aa7ac5a2605c9674624566) CMake version: version 3.26.3 Libc version: glibc-2.31 Python version: 3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
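To make the stride dependency concrete, here is a small eager-mode sketch (my own illustration, not taken from AOT autograd's source) contrasting stride-based regeneration of an aliased output with regeneration via the original view op, which is what the request asks for:

```python
import torch

base = torch.arange(6.0).reshape(2, 3)
view = base.t()  # the user-visible output aliased to the input

# Stride-based regeneration: relies on the backend honoring arbitrary strides.
regen_strided = torch.as_strided(base, view.size(), view.stride())

# View-op-based regeneration: replays the original aliasing op (transpose here)
# and does not care how the backend lays out memory.
regen_view = base.t()

assert torch.equal(regen_strided, regen_view)
```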
13
2,287
103,644
Insert nvtx markers into generated triton kernels
triaged, open source, module: inductor, ciflow/inductor, topic: devs
Fixes #103552 Adds nvtx markers around the generated triton kernel call functions. Provides descriptive information about the original operators that were fused by the kernel. Useful for performance analysis and general understanding of how the fusion algorithm works. Information about the originating operators is appended to the RecordFunctionFast profiling message in the CachingAutotuner. The OriginOp message is a list consisting of op_type, module_name, torch_op, seq_nr. Each operator in the list is one of the ops that was included in the fused triton kernel. The OriginOp message is enabled by default in the same nvtx marker as the kernel_name string. There is an additional logic path for the case when the kernel is called from the extern_kernel() function. In this case, an nvtx_range push/pop pair is inserted around the kernel launch in the generated inductor code. The nvtx ranges are only inserted when **torch._inductor.config.profiler_mark_wrapper_call** is enabled. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov
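For readers unfamiliar with the mechanism, a rough sketch of what an nvtx push/pop pair around a kernel launch looks like (an illustration of the general pattern, assuming a CUDA device is available; this is not the exact code the PR emits):

```python
import torch

# torch.cuda.nvtx ranges show up in Nsight Systems timelines; the PR wraps the
# generated kernel launch in a similar pair, with fused-op info in the message.
x = torch.randn(1024, 1024, device="cuda")
torch.cuda.nvtx.range_push("triton_kernel: aten.add + aten.relu (example label)")
y = (x + 1).relu()
torch.cuda.nvtx.range_pop()
torch.cuda.synchronize()
```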
8
2,288
103,643
"addmm_out_sparse_csr_impl_mkl" not implemented for 'Byte'
module: sparse, triaged
### 🐛 Describe the bug ``` import torch a = torch.zeros(128, 12288*4, dtype=torch.uint8) a = a.to_sparse_csr() # b = torch.rand(12288*4, 4096, dtype=torch.int8) b = torch.randint(high=255, size=(12288*4, 12288), dtype=torch.uint8) torch.sparse.mm(a, b) ``` ### Versions PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: CentOS Linux 7 (Core) (x86_64) GCC version: (GCC) 9.3.0 Clang version: Could not collect CMake version: version 3.26.3 Libc version: glibc-2.17 Versions of relevant libraries: [pip3] intel-extension-for-pytorch==2.0.100 [pip3] numpy==1.24.1 [pip3] torch==2.0.1 [pip3] torch-scatter==2.1.1 [pip3] torch-sparse==0.6.17 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 [conda] intel-extension-for-pytorch 2.0.100 pypi_0 pypi [conda] numpy 1.24.1 pypi_0 pypi [conda] torch 2.0.1 pypi_0 pypi [conda] torch-scatter 2.1.1 pypi_0 pypi [conda] torch-sparse 0.6.17 pypi_0 pypi [conda] torchaudio 2.0.2 pypi_0 pypi [conda] torchvision 0.15.2 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
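A possible interim workaround (my suggestion, not from the report, and assuming dtype conversion on CSR tensors is available in your build): cast both operands to a dtype the MKL-backed sparse CSR addmm does support, e.g. float32, and cast back afterwards if needed.

```python
import torch

# Smaller shapes than the report, just to keep the sketch light.
a = torch.zeros(128, 256, dtype=torch.uint8).to_sparse_csr()
b = torch.randint(high=255, size=(256, 64), dtype=torch.uint8)

# float32 sparse CSR @ dense is implemented; uint8 is not.
out = torch.sparse.mm(a.to(torch.float32), b.to(torch.float32))
print(out.shape, out.dtype)  # torch.Size([128, 64]) torch.float32
```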
3
2,289
103,640
Disclose C++ ATen ops type promotion rules under OpOverload in Python
module: docs, triaged
### 🚀 The feature, motivation and pitch As the title says: is this possible, and would it be a good idea? The motivation is to enable fx aten -> fx aten transformations with type-promotion casts explicitly inserted. ### Alternatives _No response_ ### Additional context _No response_ cc @svekars @carljparker
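As a point of comparison (my own sketch, not an existing per-OpOverload API), the promotion helpers exposed today only answer the question for dtype pairs, not per ATen op:

```python
import torch

# What is available today: generic pairwise promotion rules.
print(torch.promote_types(torch.int32, torch.float32))               # torch.float32
print(torch.result_type(torch.tensor([1], dtype=torch.int64), 2.0))  # torch.float32

# What the request asks for (hypothetical, does not exist today): something like
#   torch.ops.aten.add.Tensor.type_promotion_kind
# so an fx pass could insert explicit casts without re-deriving the rules itself.
```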
1
2,290
103,626
2D model checkpointing hangs on a ViT model
oncall: distributed
### 🐛 Describe the bug ### Issue Running a ViT model to verify the 2D checkpointing, it hangs for long time without throwing any error. This has been tested on Nighlies. ### Repro step clone this branch, https://github.com/lessw2020/transformer_framework/tree/2D-checkpoint ```bash sh run_training4.sh ``` ### Versions Collecting environment information... PyTorch version: 2.1.0.dev20230613+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.5 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.26.3 Libc version: glibc-2.31 Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.2.152 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB Nvidia driver version: 525.85.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz Stepping: 7 CPU MHz: 2999.998 BogoMIPS: 5999.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 1.5 MiB L1i cache: 1.5 MiB L2 cache: 48 MiB L3 cache: 71.5 MiB NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.5 [pip3] pytorch-triton==2.1.0+440fd1bf20 [pip3] torch==2.1.0.dev20230613+cu118 [pip3] torch-model-archiver==0.8.0 [pip3] torch-tb-profiler==0.4.1 [pip3] torch-workflow-archiver==0.2.8 [pip3] torchaudio==2.1.0.dev20230613+cu118 [pip3] torchpippy==0.1.1+3edf3ab [pip3] torchserve==0.8.0 [pip3] torchvision==0.16.0.dev20230613+cu118 [pip3] triton==2.0.0 [pip3] vit-pytorch==1.2.2 [conda] numpy 1.23.5 pypi_0 pypi [conda] pytorch-triton 
2.1.0+440fd1bf20 pypi_0 pypi [conda] torch 2.1.0.dev20230613+cu118 pypi_0 pypi [conda] torch-model-archiver 0.8.0 pypi_0 pypi [conda] torch-tb-profiler 0.4.1 pypi_0 pypi [conda] torch-workflow-archiver 0.2.8 pypi_0 pypi [conda] torchaudio 2.1.0.dev20230613+cu118 pypi_0 pypi [conda] torchpippy 0.1.1+3edf3ab pypi_0 pypi [conda] torchserve 0.8.0 pypi_0 pypi [conda] torchvision 0.16.0.dev20230613+cu118 pypi_0 pypi [conda] triton 2.0.0 pypi_0 pypi [conda] vit-pytorch 1.2.2 pypi_0 pypi cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
0
2,291
103,625
DISABLED test_backward_ddp_outside (__main__.TensorPipeDdpUnderDistAutogradTest)
oncall: distributed, module: flaky-tests, skipped
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_backward_ddp_outside&suite=TensorPipeDdpUnderDistAutogradTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14264987221). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 3 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_backward_ddp_outside` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `distributed/rpc/test_tensorpipe_agent.py` cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
1
2,292
103,622
ARM based GPU support for Distributed Data Parallelism Module
oncall: distributed
### 🚀 The feature, motivation and pitch I'm working on building a self-driving model for electric race cars and would love to use the full power of my ARM-based GPUs for training. I know this is being worked on, but I can't find anything about distributed training across multiple ARM-based GPUs. This would be huge for a lot of developers, since ARM-based chips are becoming more and more popular because they are cost-effective and efficient. I also recognize that there is already support for ARM-based GPUs, just not in a distributed fashion. If I am incorrect, please let me know and provide an example if you can. ### Alternatives An alternative would be to provide more documentation on the PyTorch DistributedDataParallel module regarding ARM-based chips, making it abundantly clear if this module is developed solely for use with NVIDIA GPUs. This would prevent futile efforts to make it work. ### Additional context _No response_ cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
0
2,293
103,619
torch._dynamo.exc.InternalTorchDynamoError: SymNodeVariable() is not a constant on DynamicShapesMiscTests.test_slice_input
triaged, oncall: pt2, module: dynamic shapes
### 🐛 Describe the bug ``` __________________________________________ DynamicShapesMiscTests.test_slice_input __________________________________________ Traceback (most recent call last): File "/data/users/ezyang/b/pytorch/test/dynamo/test_misc.py", line 1999, in test_slice_input res2 = opt_getitem(layers, slice(3, 8, 2)) File "/data/users/ezyang/b/pytorch/torch/_dynamo/eval_frame.py", line 295, in _fn return fn(*args, **kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/eval_frame.py", line 448, in catch_errors return callback(frame, cache_size, hooks, frame_state) File "/data/users/ezyang/b/pytorch/torch/_dynamo/convert_frame.py", line 127, in _fn return fn(*args, **kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/convert_frame.py", line 360, in _convert_frame_assert return _compile( File "/data/users/ezyang/b/pytorch/torch/_dynamo/utils.py", line 180, in time_wrapper r = func(*args, **kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/convert_frame.py", line 515, in _compile raise InternalTorchDynamoError(str(e)).with_traceback(e.__traceback__) from None File "/data/users/ezyang/b/pytorch/torch/_dynamo/convert_frame.py", line 430, in _compile out_code = transform_code_object(code, transform) File "/data/users/ezyang/b/pytorch/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object transformations(instructions, code_options) File "/data/users/ezyang/b/pytorch/torch/_dynamo/convert_frame.py", line 415, in transform tracer.run() File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 2025, in run super().run() File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 708, in run and self.step() File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 668, in step getattr(self, inst.opname)(inst) File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 390, in wrapper return inner_fn(self, inst) File "/data/users/ezyang/b/pytorch/torch/_dynamo/symbolic_convert.py", line 161, in impl self.push(fn_var.call_function(self, self.popn(nargs), {})) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/builtin.py", line 577, in call_function result = handler(tx, *args, **kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/builtin.py", line 883, in call_getitem return args[0].call_method(tx, "__getitem__", args[1:], kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/lists.py", line 346, in call_method return super().call_method(tx, name, args, kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/lists.py", line 307, in call_method return super().call_method(tx, name, args, kwargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/lists.py", line 95, in call_method return self.getitem_const(args[0]) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/lists.py", line 64, in getitem_const index = arg.as_python_constant() File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/lists.py", line 607, in as_python_constant return slice(*[x.as_python_constant() for x in self.items]) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/lists.py", line 607, in <listcomp> return slice(*[x.as_python_constant() for x in self.items]) File "/data/users/ezyang/b/pytorch/torch/_dynamo/variables/base.py", line 150, in as_python_constant raise NotImplementedError(f"{self} is not a constant") torch._dynamo.exc.InternalTorchDynamoError: SymNodeVariable() is not a constant from user code: File 
"/data/users/ezyang/b/pytorch/test/dynamo/test_misc.py", line 1984, in getitem a[idx] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True ``` when run with `automatic_dynamic_shapes=True` ### Versions main cc @msaroufim @wconstab @bdhirsh @anijain2305
1
2,294
103,615
Can't call allow_in_graph inside of a function being torch.compile'd
triaged, oncall: pt2
### 🐛 Describe the bug We are unable to call allow_in_graph inside a function being torch.compile'd with fullgraph=True, and the error message is confusing. We should either allow this or improve the error message. ```py import torch import torch._dynamo @torch.compile(backend='aot_eager', fullgraph=True) def f(x): @torch._dynamo.allow_in_graph def apply_func(x): return x.clone() return apply_func(x) x = torch.randn([], requires_grad=True) f(x) ``` produces: > Unsupported: call_function allow_in_graph in skip_files /raid/rzou/pt/debug-cpu4/torch/_dynamo/__init__.py Discovered with @drisspg ### Error logs _No response_ ### Minified repro _No response_ ### Versions main cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
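For context, a hedged sketch of the pattern that does work today (my own workaround sketch, not an official recommendation): apply `allow_in_graph` outside the compiled region and only call the registered function inside it.

```python
import torch
import torch._dynamo

# Registering outside the compiled function avoids tracing the decorator call.
@torch._dynamo.allow_in_graph
def apply_func(x):
    return x.clone()

@torch.compile(backend="aot_eager", fullgraph=True)
def f(x):
    return apply_func(x)

x = torch.randn([], requires_grad=True)
print(f(x))
```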
1
2,295
103,608
Installing Torch on AMD Platform Leads to Huge Docker Image
module: rocm, triaged
### 🐛 Describe the bug I am not sure this is the best place to raise this issue and I am not sure "bug" is the best characterization, but I found some unexpected behavior when installing pytorch in docker images built for different platforms. In particular, when I install torch in an image built for the `linux/arm64` platform I get an image that is 628MB, but when I install torch in an image built for the `linux/amd64` platform I get an image that is 6.87GB. Ideally the image would be a similar size for both platforms because the large images are slow to build/push/pull. Here is my `Dockerfile`: ``` FROM python:3.11.2-slim-bullseye ENV DEBIAN_FRONTEND=noninteractive RUN pip3 install torch==2.0.0 ``` Here are the docker build commands I ran: ``` docker build --platform linux/amd64 -t amd_build -f Dockerfile . docker build --platform linux/arm64 -t arm_build -f Dockerfile . ``` And here you can see the resulting image sizes: ``` REPOSITORY TAG IMAGE ID CREATED SIZE amd_build latest 6a5b409f32bf 27 minutes ago 6.87GB arm_build latest 1fc9604a61f3 35 minutes ago 628MB ``` ### Versions Note: Some of this information may not be relevant because the issue I am raising happens in docker. Collecting environment information... PyTorch version: 2.0.0 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.4 (arm64) GCC version: Could not collect Clang version: 14.0.3 (clang-1403.0.22.14.1) CMake version: Could not collect Libc version: N/A Python version: 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime) Python platform: macOS-13.4-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M1 Pro Versions of relevant libraries: [pip3] numpy==1.24.2 [pip3] torch==2.0.0 [conda] Could not collect cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo
2
2,296
103,602
test_fstrings2 fails with dynamic
good first issue, triaged, module: dynamic shapes
### 🐛 Describe the bug ``` @make_test def test_fstrings2(x): tmp = f"{x.shape[0]} bar" if tmp.startswith("10"): return x + 1 ``` fails with ``` stats [('calls_captured', 1), ('unique_graphs', 1)] .[2023-06-14 10:10:13,223] torch._dynamo.utils: [ERROR] Accuracy failed (<class 'NoneType'>): None != tensor([[ 0.8883, 0.5034, 1.1631, 0.1183, 1.0539, 1.6684, 0.9403, 0.5325, 0.7847, 1.8840], [ 0.2416, 0.6311, 0.6576, -0.4020, 1.3206, -0.0219, 1.7988, 0.9077, 0.2951, -0.6024], [ 1.2891, 1.4899, 0.6147, 0.2880, 0.8294, -0.4594, 1.2207, 1.2463, -0.3248, 1.6970], [ 0.3369, 2.2158, -0.4949, 1.8810, -0.1786, 0.0660, 0.4325, 0.7228, -1.1834, 1.3668], [ 1.9380, 1.0078, 0.6861, -0.1567, 2.8409, -0.0174, 2.2192, 1.1601, 2.5985, 0.9531], [-0.5270, -1.0143, -0.5173, 1.3877, -0.1849, 1.6897, 2.3232, 2.8169, 1.6808, 1.7244], [ 1.0323, -0.6593, -0.8773, 1.7372, 1.9257, 1.9247, 1.1825, 0.9263, 1.3147, -0.0369], [ 1.2100, 1.6144, 1.0628, 0.6703, -0.7970, 1.8728, 1.7670, 0.8862, 0.0572, 1.7540], [ 1.1407, 0.3063, 0.3841, 0.2705, 2.3204, 2.5997, -0.0792, 0.6604, -0.4538, -1.6740], [ 2.5984, 1.8021, 1.5722, 1.0653, 0.9765, 1.8876, 2.4689, 2.2647, 0.7247, 0.8675]]) stats [('calls_captured', 1), ('unique_graphs', 1)] F ====================================================================== FAIL: test_fstrings2_dynamic_shapes (torch._dynamo.testing.DynamicShapesFunctionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/users/ezyang/b/pytorch/torch/_dynamo/testing.py", line 305, in _fn return fn(*args, **kwargs) File "/data/users/ezyang/b/pytorch/test/dynamo/test_functions.py", line 42, in test_fn return torch._dynamo.testing.standard_test(self, fn=fn, nargs=nargs) File "/data/users/ezyang/b/pytorch/torch/_dynamo/testing.py", line 225, in standard_test self.assertTrue(same(val1a, correct1)) AssertionError: False is not true ---------------------------------------------------------------------- Ran 2 tests in 2.427s ``` My guess is that SymNodeVariable isn't implementing printing correctly. The obvious thing to do is specialize and guard in this situation. However, it's a little subtle: if someone creates a string and then ends up not using it, we would prefer to NOT have specialized. This can matter for poorly written error checking code. ### Versions main
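To make the "used vs. unused string" subtlety concrete, here is a plain-eager sketch (my own illustration, not part of the test suite): the first function genuinely branches on the stringified size, so specializing and guarding on `shape[0]` would be justified; the second never consumes the string, so specializing there would add a pointless guard.

```python
import torch

def uses_string(x):
    tmp = f"{x.shape[0]} bar"
    # The branch depends on the string, so the size value really matters here.
    return x + 1 if tmp.startswith("10") else x - 1

def ignores_string(x):
    _unused = f"shape={x.shape[0]} bar"  # built but never consumed
    return x + 1

print(uses_string(torch.zeros(10, 3))[0, 0].item())    # 1.0 (branch taken: "10 bar")
print(ignores_string(torch.zeros(7, 3))[0, 0].item())  # 1.0
```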
10
2,297
103,589
`interpolate` with `antialias=True` on CUDA doesn't work if the difference of spatial size is large
module: cuda, triaged, module: interpolation
### 🐛 Describe the bug Take the following example, I want to resize a tensor from `(941, 941)` to `(10, 10)` with `antialias=True`. ```python import torch from torchvision.transforms import InterpolationMode from torchvision.transforms.functional import resize test_tensor = torch.ones((1, 941, 941), dtype=torch.float, device="cuda") resize( test_tensor, size=(10, 10), interpolation=InterpolationMode.BILINEAR, antialias=True, ) ``` This will break with the following error: ``` Traceback (most recent call last): File "/home/yuhuang/.cache/junkfile/2023/06/2023-06-14-144608.py", line 6, in <module> resize( File "/home/yuhuang/miniconda3/envs/new-torchvision-bug/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 492, in resize return F_t.resize(img, size=output_size, interpolation=interpolation.value, antialias=antialias) File "/home/yuhuang/miniconda3/envs/new-torchvision-bug/lib/python3.10/site-packages/torchvision/transforms/_functional_tensor.py", line 467, in resize img = interpolate(img, size=size, mode=interpolation, align_corners=align_corners, antialias=antialias) File "/home/yuhuang/miniconda3/envs/new-torchvision-bug/lib/python3.10/site-packages/torch/nn/functional.py", line 3958, in interpolate return torch._C._nn._upsample_bilinear2d_aa(input, output_size, align_corners, scale_factors) RuntimeError: Provided interpolation parameters can not be handled with current algorithm implementation. Please reduce the scale factor. Too much shared memory required: 49660 vs 49152 ``` We also tried the following combination ``` (940, 940) -> (10, 10): works (941, 941) -> (10, 11): breaks (941, 941) -> (11, 10): breaks (9401, 9401) -> (100, 100): breaks (941, 941) -> (10, 10) with antialias=False: works ``` I use PyTorch 2.0 to demonstrate the bug, the same bug exists in Pytorch 1.13 as well. 94 seems like a magic number, I found this out by trying many combinations. 
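A possible workaround until the kernel handles large scale factors (my own sketch, not from the report, and not guaranteed to be numerically identical to a single-stage antialiased resize): split the downscale into two stages so each stage's scale factor stays well below the shared-memory limit mentioned in the error.

```python
import torch
import torch.nn.functional as F

x = torch.ones((1, 1, 941, 941), device="cuda")

# Stage 1: 941 -> 100 (scale ~9.4); Stage 2: 100 -> 10 (scale 10). Both stay far
# below the ~94x factor that trips the shared-memory check in the report.
mid = F.interpolate(x, size=(100, 100), mode="bilinear", antialias=True, align_corners=False)
out = F.interpolate(mid, size=(10, 10), mode="bilinear", antialias=True, align_corners=False)
print(out.shape)  # torch.Size([1, 1, 10, 10])
```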
### Versions ``` PyTorch version: 2.0.1 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.19.0-43-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 Nvidia driver version: 510.108.03 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Vendor ID: GenuineIntel Model name: 12th Gen Intel(R) Core(TM) i9-12900K CPU family: 6 Model: 151 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 1 Stepping: 2 CPU max MHz: 5200.0000 CPU min MHz: 800.0000 BogoMIPS: 6374.40 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 640 KiB (16 instances) L1i cache: 768 KiB (16 instances) L2 cache: 14 MiB (10 instances) L3 cache: 30 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-23 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [pip3] triton==2.0.0 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2023.1.0 h6d00ec8_46342 [conda] mkl-service 2.4.0 py310h5eee18b_1 [conda] mkl_fft 1.3.6 py310h1128e8f_1 [conda] mkl_random 1.2.2 py310h1128e8f_1 [conda] numpy 1.24.3 py310h5f9d8c6_1 [conda] numpy-base 1.24.3 py310hb5e798b_1 [conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch [conda] pytorch-cuda 11.7 h778d358_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.0.2 py310_cu117 pytorch [conda] torchtriton 
2.0.0 py310 pytorch [conda] torchvision 0.15.2 py310_cu117 pytorch ``` cc @ezyang @gchanan @zou3519 @ptrblck
7
2,298
103,588
LSTM/RNN operation agnostic
module: nn, triaged, needs research
### 🚀 The feature, motivation and pitch Create a solution for recurrent neural networks (RNN, GRU, LSTM) that is agnostic to the type of operation and is performed for the X->hidden_size and hidden_size->hidden_size. These can currently be implemented using Pytorch, but they are not as efficient as LSTM since there is no efficient execution for multiple timesteps and layers. The motivation is to create an implementation agnostic to the operation (currently nn.Linear) that would allow to implement ConvLSTM, Transformer+LSTM efficiently and future new operations. I know that actually `nn.LSTM` calls directly for example a cudnn kernel to make the LSTM efficient in timesteps and number of layers: [cudnn reference](https://github.com/pytorch/pytorch/blob/e9674d146ce424d3ea44f8b2ffd9e9f92dfa15f7/aten/src/ATen/native/cudnn/RNN.cpp#LL1074C20-L1074C43). But I don't know if there would be a way for a similar function (without cudnn) to be generalized and receive two callbacks indicating the X->hidden_size and hidden_size->hidden_size. ### Alternatives Currently you can write the cell in pytorch, but to apply it to a sequence you must iteratively call timesteps vs num_layers times to GenericLSTMCell. Pytorch "Pseudo-code": ```python import torch from torch import Size from torch import nn from typing import Optional, Union, Iterable, Callable from functools import partial class GenericLSTMCell(nn.Module): def init_h_or_c(self, input: torch.Tensor) -> torch.Tensor: assert self.h0_shape_fn is not None, "h0_shape_fn can be None only when hx is provided" hcx = torch.zeros(*self.h0_shape_fn(input.shape), device=input.device, dtype=input.dtype) return hcx def __init__( self, h0_shape_fn: Optional[Callable[[Size], Iterable[int]]], layer_xh: nn.Module, layer_hh: nn.Module, dim: int = 1, ): super().__init__() self.h0_shape_fn = h0_shape_fn self.layer_xh = layer_xh self.layer_hh = layer_hh self.dim = dim def forward( self, input: torch.Tensor, hx: Optional[tuple[torch.Tensor, torch.Tensor]] = None ) -> tuple[torch.Tensor, torch.Tensor]: """_summary_ Args: input (torch.Tensor): input vector hx (Optional[Union[torch.Tensor, tuple[torch.Tensor, torch.Tensor]]], optional): hidden vector. Defaults to None. 
Returns: tuple[torch.Tensor, torch.Tensor]: hidden and context vectors """ if hx is None: h_t = self.init_h_or_c(input) c_t = self.init_h_or_c(input) else: h_t, c_t = hx gates = self.layer_xh(input) + self.layer_hh(h_t) # Compute the gates (i_t, f_t, g_t, o_t) input_gate, forget_gate, cell_gate, output_gate = gates.chunk(4, dim=self.dim) i_t = torch.sigmoid(input_gate) f_t = torch.sigmoid(forget_gate) g_t = torch.tanh(cell_gate) o_t = torch.sigmoid(output_gate) cy = c_t * f_t + i_t * g_t hy = o_t * torch.tanh(cy) return hy, cy class LinearLSTMCell(GenericLSTMCell): def h0_shape_fn(hidden_size: int, input_shape: Size) -> Iterable[int]: # B x C return input_shape[0], hidden_size def __init__(self, input_size: int, hidden_size: int): super().__init__( partial(LinearLSTMCell.h0_shape_fn, hidden_size), nn.Linear(input_size, hidden_size * 4, bias=True), nn.Linear(hidden_size, hidden_size * 4, bias=True), dim=1, ) class Conv2dLSTMCell(GenericLSTMCell): def h0_shape_fn(hidden_size: int, input_shape: Size) -> Iterable[int]: # B x C x H x W return (input_shape[0], hidden_size, input_shape[2], input_shape[3]) def __init__(self, input_size: int, hidden_size: int, kernel_size: int): super().__init__( partial(Conv2dLSTMCell.h0_shape_fn, hidden_size), nn.Conv2d(input_size, hidden_size * 4, bias=True, kernel_size=kernel_size), nn.Conv2d(hidden_size, hidden_size * 4, bias=True, kernel_size=kernel_size), dim=1, ) class GenericLSTM(nn.Module): def init_h_or_c(self, input: torch.Tensor) -> torch.Tensor: # B x hidden_size x ... cell_h0_size = self.h0_shape_fn(tuple(x for i, x in enumerate(input.shape) if i != self.timestep_dim)) # num_layers x B x hidden_size x ... hcx = torch.zeros((self.num_layers,) + cell_h0_size, device=input.device, dtype=input.dtype) return hcx def __init__( self, h0_shape_fn: Callable[[Size], Iterable[int]], layer_xh: nn.Module, layer_hh: nn.Module, num_layers: int = 2, timestep_dim: int = 1, channel_dim: int = 2, ): super().__init__() self.num_layers = num_layers self.h0_shape_fn = h0_shape_fn self.timestep_dim = timestep_dim self.layers: list[GenericLSTMCell] = [] self.layers.append( GenericLSTMCell( None, layer_xh, layer_hh, dim=channel_dim if channel_dim < timestep_dim else channel_dim - 1 ) ) for _ in range(1, num_layers): self.layers.append( GenericLSTMCell( None, layer_hh, layer_hh, dim=channel_dim if channel_dim < timestep_dim else channel_dim - 1 ) ) def forward( self, input: torch.Tensor, hx: Optional[tuple[torch.Tensor, torch.Tensor]] = None, ) -> tuple[torch.Tensor, torch.Tensor]: if hx is None: h_0 = self.init_h_or_c(input) c_0 = self.init_h_or_c(input) else: h_0, c_0 = hx h_x = h_0.clone() c_x = c_0.clone() outputs: list[torch.Tensor] = [] for timestep in range(input.shape[self.timestep_dim]): input_layer = input.index_select(dim=self.timestep_dim, index=torch.tensor([timestep])).squeeze( self.timestep_dim ) for layer_idx, layer in enumerate(self.layers): # obtain features from cell. # B x hidden_size hx_layer = h_x[layer_idx] cx_layer = c_x[layer_idx] hx_cell, cx_cell = layer(input_layer, (hx_layer, cx_layer)) # set hx as the new input for the next layer cell input_layer = hx_cell # change the hx for the current hx cx h_x[layer_idx] = hx_cell c_x[layer_idx] = cx_cell output_timestep = input_layer outputs.append(output_timestep) # stack all the timesteps # B x seq.length x hidden_size x ... 
outputs = torch.stack(outputs, dim=1) return outputs class LinearLSTM(GenericLSTM): def __init__( self, input_size: int, hidden_size: int, num_layers: int = 2, timestep_dim: int = 1, channel_dim: int = 2 ): super().__init__( h0_shape_fn=partial(LinearLSTMCell.h0_shape_fn, hidden_size), layer_xh=nn.Linear(input_size, hidden_size * 4, bias=True), layer_hh=nn.Linear(hidden_size, hidden_size * 4, bias=True), num_layers=num_layers, timestep_dim=timestep_dim, channel_dim=channel_dim, ) class Conv2dLSTM(GenericLSTM): def __init__( self, input_size: int, hidden_size: int, kernel_size: int, num_layers: int = 2, timestep_dim: int = 1, channel_dim: int = 2, ): super().__init__( h0_shape_fn=partial(Conv2dLSTMCell.h0_shape_fn, hidden_size), layer_xh=nn.Conv2d(input_size, hidden_size * 4, bias=True, kernel_size=kernel_size, padding=1), layer_hh=nn.Conv2d(hidden_size, hidden_size * 4, bias=True, kernel_size=kernel_size, padding=1), num_layers=num_layers, timestep_dim=timestep_dim, channel_dim=channel_dim, ) if __name__ == '__main__': m = Conv2dLSTM(3, 32, 3, num_layers=2) # Conv # B x seq.length x C x H x W out = m(torch.zeros(4, 15, 3, 92, 92)) print(out.shape) # Linear # B x seq.length x C m = LinearLSTM(3, 32, num_layers=2) out = m(torch.zeros(4, 15, 3)) print(out.shape) ``` ### Additional context _No response_ cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
2
2,299
103,584
torch.cuda.mem_get_info to return 0 if CUDA context isn't initialized
module: cuda, triaged, actionable
### 🐛 Describe the bug It appears that mem_get_info itself triggers CUDA context initialization, which can be a bit surprising. Overall this issue is minor, so perhaps just being explicit about this in the docs would resolve the surprise. ### Versions N/A cc @ptrblck
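A small sketch of the surprise being described (assuming a machine with at least one CUDA device):

```python
import torch

print(torch.cuda.is_initialized())        # False in a fresh process
free, total = torch.cuda.mem_get_info(0)  # this query initializes the CUDA context
print(torch.cuda.is_initialized())        # True afterwards
print(f"free={free} total={total}")
```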
3
2,300
103,582
[Inductor] Constant folding support for FX module captured by Dynamo Export
oncall: quantization, triaged, oncall: pt2, module: inductor
### 🐛 Describe the bug As discussed here https://github.com/pytorch/pytorch/pull/101164#discussion_r1195953417, the Quantization 2.0 with Inductor backend needs the support of constant folding pass in order to do weight prepack for convolution and linear operations. The general use case in Quantization 2.0 with Inductor backend is as: https://github.com/pytorch/pytorch/blob/040d2cc96953290bfe49e32dbb250fa9df6f51ce/test/quantization/pt2e/test_quantize_pt2e_fx.py#L325-L446 - Step1: `torchdynamo.export` to capture the fx graph. - Step2: Pass the quantization 2.0 flow to generate reference quantized module. - Step3: Use `compile_fx` lower into Inductor with the reference quantized module. When I try to use constant passes inside inductor with the example in this [gist](https://gist.github.com/leslie-fang-intel/c1e1fe49ae9478425d39e14775fde7ea), I got the error of ``` File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_functorch/aot_autograd.py", line 2273, in aot_wrapper_synthetic_base return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_functorch/aot_autograd.py", line 1535, in aot_dispatch_base compiled_fw = compiler(fw_module, flat_args) File "/home/lesliefang/pytorch_1_7_1/inductor_quant/pytorch/torch/_inductor/compile_fx.py", line 710, in inference_compiler fw_metadata=torch._guards.TracingContext.get().fw_metadata, AttributeError: 'NoneType' object has no attribute 'fw_metadata' ``` ### Versions Collecting environment information... PyTorch version: 2.1.0a0+git69a2611 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: CentOS Linux 7 (Core) (x86_64) GCC version: (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1) Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.17 Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-4.19.5-1.el7.elrepo.x86_64-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 56 On-line CPU(s) list: 0-55 Thread(s) per core: 1 Core(s) per socket: 28 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel Genuine CPU Stepping: 10 CPU MHz: 1199.980 CPU max MHz: 2900.0000 CPU min MHz: 1200.0000 BogoMIPS: 5800.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 1024K L3 cache: 39424K NUMA node0 CPU(s): 0-27 NUMA node1 CPU(s): 28-55 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves 
cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni flush_l1d arch_capabilities Versions of relevant libraries: [pip3] mypy==0.960 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.23.1 [pip3] torch==2.1.0a0+git193d841 [pip3] torchvision==0.15.0a0+5850f37 [conda] mkl 2023.0.0 intel_25398 intel [conda] mkl-include 2023.0.0 intel_25398 intel [conda] mkl-include 2023.0.0 <pip> [conda] mkl-service 2.4.0 py38h3605609_14 intel [conda] mkl-static 2023.0.0 <pip> [conda] mkl_fft 1.3.1 py38hcab1719_22 intel [conda] mkl_random 1.2.2 py38hbf47bc3_22 intel [conda] mkl_umath 0.1.1 py38hf66a691_32 intel [conda] numpy 1.23.1 <pip> [conda] numpy 1.22.3 py38hf0956d0_5 intel [conda] numpy-base 1.22.3 py38h45c9ace_5 intel cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @eellison
1