Serial Number | Issue Number | Title | Labels | Body | Comments |
---|---|---|---|---|---|
2,001 | 105,211 |
autocast + torch.no_grad inference cause backward graph nodes to be lost
|
module: autograd, triaged
|
### 🐛 Describe the bug
Under autocast, after running inference with torch.no_grad, I ran the network's forward and backward passes and found that the gradients of some parameters were not computed. This issue comes from a real-world scenario in BEVFormer's AMP training:
https://github.com/fundamentalvision/BEVFormer/blob/master/projects/mmdet3d_plugin/bevformer/detectors/bevformer.py#L158
```python
import torch
import torchvision

device = 'cuda'
net = torchvision.models.resnet18().to(device)
input = torch.randn(1, 3, 224, 224).to(device)

with torch.cuda.amp.autocast(True):
    net.eval()
    with torch.no_grad():
        _ = net(input)
    net.train()
    #torch.clear_autocast_cache()
    out = net(input)
    grad = torch.randn_like(out.data)
    out.backward(grad)

for name, param in net.named_parameters():
    if param.grad != None:
        print(f"{name} parameter's grad is not None")
    else:
        print(f"{name} parameter's grad is None")
```
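The commented-out `torch.clear_autocast_cache()` line above hints at one possible workaround: clear autocast's weight-cast cache after the no_grad inference so the training forward re-casts the weights with autograd enabled. A minimal sketch under that assumption (not a confirmed fix):

```python
import torch
import torchvision

device = 'cuda'
net = torchvision.models.resnet18().to(device)
x = torch.randn(1, 3, 224, 224, device=device)

with torch.cuda.amp.autocast(True):
    net.eval()
    with torch.no_grad():
        _ = net(x)                    # inference pass caches casted, grad-less weights
    torch.clear_autocast_cache()      # assumption: dropping the cache forces re-casting with autograd enabled
    net.train()
    out = net(x)
    out.backward(torch.randn_like(out))

# With the cache cleared, all parameter grads are expected to be populated.
print(all(p.grad is not None for p in net.parameters()))
```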
### Versions
Collecting environment information...
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.10
Python version: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-4.4.0-186-generic-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 10.2.89
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla V100S-PCIE-32GB
GPU 1: Tesla V100S-PCIE-32GB
GPU 2: Tesla V100S-PCIE-32GB
GPU 3: Tesla V100S-PCIE-32GB
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
Stepping: 7
CPU MHz: 3199.968
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 4799.99
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.20.2
[pip3] torch==1.9.0
[pip3] torchelastic==0.2.0
[pip3] torchtext==0.10.0
[pip3] torchvision==0.10.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py37h27cfd23_1
[conda] mkl_fft 1.3.0 py37h42c9631_2
[conda] mkl_random 1.2.1 py37ha9443f7_2
[conda] numpy 1.20.2 py37h2d18471_0
[conda] numpy-base 1.20.2 py37hfae3a4d_0
[conda] pytorch 1.9.0 py3.7_cuda10.2_cudnn7.6.5_0 pytorch
[conda] torch 1.9.0a0+git574ab98 pypi_0 pypi
[conda] torchelastic 0.2.0 pypi_0 pypi
[conda] torchtext 0.10.0 py37 pytorch
[conda] torchvision 0.10.0 py37_cu102 pytorch
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
| 2 |
2,002 | 105,203 |
Pytorch dataloader not loading first-available data with multiple workers
|
module: dataloader, triaged
|
### 🐛 Describe the bug
When using a dataloader with num_workers > 1, the batches are constructed in parallel to speed up data loading. I would expect the dataloader to return the first-available data (a FIFO queue) to make sure that it runs as fast as possible. However, it seems that each worker process takes turns returning data, which slows down data loading quite significantly.
Below is a minimal code example:
```python
import torch
import math
import time

class MyIterableDataset(torch.utils.data.IterableDataset):
    def __init__(self, start, end):
        super(MyIterableDataset).__init__()
        assert end > start, "this example code only works with end >= start"
        self.start = start
        self.end = end

    def give_data(self, start, end):
        for i in range(start, end):
            if i > 10:
                time.sleep(2)
            yield i

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:  # single-process data loading, return the full iterator
            iter_start = self.start
            iter_end = self.end
        else:  # in a worker process
            # split workload
            per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))
            worker_id = worker_info.id
            iter_start = self.start + worker_id * per_worker
            iter_end = min(iter_start + per_worker, self.end)
        return self.give_data(iter_start, iter_end)

if __name__ == "__main__":
    ds = MyIterableDataset(start=0, end=20)
    # Multi-process loading with two worker processes
    for item in torch.utils.data.DataLoader(ds, num_workers=2, batch_size=2):
        print(item)
```
The result of this script is:
```python
tensor([0, 1]) # Loaded fast
tensor([10, 11]) # Loaded slowly
tensor([2, 3]) # Loaded fast
tensor([12, 13]) # Loaded slowly
tensor([4, 5]) # Loaded fast
tensor([14, 15]) # Loaded slowly
tensor([6, 7]) # Loaded fast
tensor([16, 17]) # Loaded slowly
tensor([8, 9]) # Loaded fast
tensor([18, 19]) # Loaded slowly
```
However, I would expect something like this to be the result:
```python
tensor([0, 1]) # Loaded fast
tensor([2, 3]) # Loaded fast
tensor([4, 5]) # Loaded fast
tensor([6, 7]) # Loaded fast
tensor([10, 11]) # Loaded slowly
tensor([8, 9]) # Loaded fast
tensor([12, 13]) # Loaded slowly
tensor([14, 15]) # Loaded slowly
tensor([16, 17]) # Loaded slowly
tensor([18, 19]) # Loaded slowly
```
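For comparison, here is a minimal sketch (plain multiprocessing, not `DataLoader`) of the first-available, FIFO-style consumption described above; `produce` and the index ranges are hypothetical stand-ins for the per-worker iterators:

```python
import multiprocessing as mp
import time

def produce(start, end, q):
    for i in range(start, end):
        if i > 10:
            time.sleep(2)  # simulate the slow samples from the example
        q.put(i)
    q.put(None)  # sentinel: this worker is done

if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=produce, args=(r * 10, (r + 1) * 10, q)) for r in range(2)]
    for w in workers:
        w.start()
    done = 0
    while done < len(workers):
        item = q.get()          # returns whichever worker produced an item first
        if item is None:
            done += 1
        else:
            print(item)
    for w in workers:
        w.join()
```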
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.24.1
Libc version: N/A
Python version: 3.11.0 (main, Nov 30 2022, 13:48:51) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-13.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.3.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.0.1
[pip3] torchmetrics==0.11.4
[conda] Could not collect
cc @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 0 |
2,003 | 105,196 |
Error loading TorchScript model with torchvision::nms operation in libtorch
|
oncall: jit
|
### 🐛 Describe the bug
When using libtorch 1.11.0 (debug version) to load a TorchScript model (MaskRCNN) that includes the torchvision::nms operation, an error occurs during the model loading process, indicating that the torchvision::nms operation is unknown or not supported in libtorch.
**Error Message:**
```
Error loading the model:
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "/home/fanuc/fanuc/anaconda3/envs/pytorch_env/lib/python3.10/site-packages/torchvision/ops/boxes.py", line 40
_log_api_usage_once(nms)
_assert_has_ops()
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 154
_64 = __torch__.torchvision.extension._assert_has_ops
_65 = _64()
_66 = ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~ <--- HERE
return _66
'nms' is being compiled since it was called from '_batched_nms_vanilla'
File "/home/fanuc/fanuc/anaconda3/envs/pytorch_env/lib/python3.10/site-packages/torchvision/ops/boxes.py", line 108
for class_id in torch.unique(idxs):
curr_indices = torch.where(idxs == class_id)[0]
curr_keep_indices = nms(boxes[curr_indices], scores[curr_indices], iou_threshold)
~~~ <--- HERE
keep_mask[curr_indices[curr_keep_indices]] = True
keep_indices = torch.where(keep_mask)[0]
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 83
_31 = torch.index(boxes, _30)
_32 = annotate(List[Optional[Tensor]], [curr_indices])
curr_keep_indices = __torch__.torchvision.ops.boxes.nms(_31, torch.index(scores, _32), iou_threshold, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_33 = annotate(List[Optional[Tensor]], [curr_keep_indices])
_34 = torch.index(curr_indices, _33)
'_batched_nms_vanilla' is being compiled since it was called from 'batched_nms'
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 35
idxs: Tensor,
iou_threshold: float) -> Tensor:
_9 = __torch__.torchvision.ops.boxes._batched_nms_vanilla
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_10 = __torch__.torchvision.ops.boxes._batched_nms_coordinate_trick
_11 = torch.numel(boxes)
'batched_nms' is being compiled since it was called from 'RegionProposalNetwork.filter_proposals'
Serialized File "code/__torch__/torchvision/models/detection/rpn.py", line 72
_11 = __torch__.torchvision.ops.boxes.clip_boxes_to_image
_12 = __torch__.torchvision.ops.boxes.remove_small_boxes
_13 = __torch__.torchvision.ops.boxes.batched_nms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
num_images = (torch.size(proposals))[0]
device = ops.prim.device(proposals)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
File "/home/fanuc/fanuc/anaconda3/envs/pytorch_env/lib/python3.10/site-packages/torchvision/models/detection/rpn.py", line 353
proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
proposals = proposals.view(num_images, -1, 4)
boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
losses = {}
Serialized File "code/__torch__/torchvision/models/detection/rpn.py", line 43
proposals0 = torch.view(proposals, [num_images, -1, 4])
image_sizes = images.image_sizes
_8 = (self).filter_proposals(proposals0, objectness0, image_sizes, num_anchors_per_level, )
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
boxes, scores, = _8
losses = annotate(Dict[str, Tensor], {})
```
**Running Code**
```cpp
#include <torch/script.h> // One-stop header.
#include <torch/torch.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <memory>

std::string model_path = "../../models/script_model.pt";
std::string image_path = "../../images/1_color.png";

int main() {
    torch::jit::script::Module module;
    try {
        // Deserialize the ScriptModule from a file using torch::jit::load().
        torch::Device device(torch::kCPU);
        module = torch::jit::load(model_path, device);
    }
    catch (const torch::jit::ErrorReport& e) {
        std::cerr << "Error loading the model:" << e.what() << std::endl;
        return -1;
    }
    return 0;
}
```
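A quick diagnostic (a hedged sketch; the model path is assumed from the C++ snippet above): load the same file from Python, where importing `torchvision` registers `torchvision::nms` with the dispatcher. If this succeeds, the libtorch failure points at the op registration missing in the C++ application (e.g. libtorchvision not being linked/loaded) rather than at a corrupt model file:

```python
import torch
import torchvision  # importing torchvision registers torchvision::nms and other custom ops

m = torch.jit.load("../../models/script_model.pt", map_location="cpu")
print(type(m))  # loads fine once the torchvision ops are registered
```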
**Model and convert script**
```python
import torch
import torchvision
from PIL import Image
from torchvision import transforms
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def get_model_instance_segmentation(num_classes, load_pretrain_weights=True):
    # get maskrcnn model from torchvision.models
    # load an instance segmentation model pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=load_pretrain_weights)
    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256
    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                       hidden_layer,
                                                       num_classes)
    return model


def generate_torch_script(pytorch_model, device, img_file, save_path):
    # input of the model (from pil image to tensor, do not normalize image)
    original_img = Image.open(img_file).convert('RGB')  # load image
    data_transform = transforms.Compose([transforms.ToTensor()])
    img = data_transform(original_img)
    img = torch.unsqueeze(img, dim=0).to(device)  # expand batch dimension to device
    # export the model
    pytorch_model.eval()
    if device.type == 'cpu':
        pytorch_model = pytorch_model.cpu()
    traced_script_module = torch.jit.script(pytorch_model, img)
    # traced_script_module = torch.jit.trace(pytorch_model, img)
    traced_script_module.save(save_path)
```
### Versions
**Environment**
* PyTorch version: Pytorch 1.11.0
* TorchVision version: 0.12.0
* LibTorch version: libtorch cpu debug version 1.11.0
* Operating system: Windows 10, compiler MSVC
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,004 | 105,192 |
Repro str could be displayed with slightly wrong env vars
|
module: docs, module: cuda, triaged, actionable
|
## Issue description
While reproducing https://github.com/pytorch/pytorch/pull/102409#issuecomment-1615254393, I got
```console
...
To execute this test, run the following from the base repo dir:
TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_foreach.py -k test_outplace_forward_mode_AD__foreach_expm1_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
but I speculate the correct way to enable `CUDA_MEM_LEAK_CHECK` is to set `PYTORCH_TEST_CUDA_MEM_LEAK_CHECK`, not `TEST_CUDA_MEM_LEAK_CHECK` as follows
```
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_foreach.py -k test_outplace_forward_mode_AD__foreach_expm1_cuda_float64
```
## Code example
N/A
## System Info
- PyTorch or Caffe2: pytorch of https://github.com/crcrpar/pytorch/commit/1cc2ec0fe83af238e7880b1db9e5cda4a53bb145
- How you installed PyTorch (conda, pip, source): source
cc @svekars @carljparker @ptrblck
| 1 |
2,005 | 105,182 |
[DO NOT MERGE][NCCL][CUDA][CUDA Graphs] Set watchdog runtime capture mode to thread local to handle cleaning straggling work
|
module: cuda, triaged, module: nccl, open source, module: cuda graphs, ciflow/trunk, topic: not user facing, ciflow/periodic, ciflow/inductor
|
A more minimal alternative to #104487 and #104555 that should not unwittingly suppress errors during graph captures.
I've tested this locally and it appears to fix the repro used to test #104487 and #104555, while checking that the watchdog is attempting to call `cudaEventQuery` (on an event recorded before the capture) in the middle of a capture on another thread.
X-posting summary from #104555 here:
1. The issue is a crash caused by the watchdog thread calling `cudaEventQuery` on work that was enqueued before a graph capture.
2. The initial proposal was to change the capture mode of the thread performing the capture to the "thread local" mode, so that other threads querying events (the watchdog) don't crash as the events in question were created before the actual capture and therefore shouldn't interfere with the capture.
3. Many such as @eellison and @albanD were concerned with how we changed the capture mode of the capturing thread in this PR (by globally changing the constructor), as we could potentially miss errors/failures caused by always using this more lenient mode.
4. @albanD pointed out that it is really just the watchdog thread that we are concerned with issuing a disallowed `cudaEventQuery` call, so another potential solution is to just have the watchdog thread operate in a more lenient mode. This seems to be a fairly minimal change that shouldn't have unexpected effects on the correctness or debuggability of code involving graph captures. To clarify, while the watchdog thread itself is not capturing anything, since it never calls `cudaThreadExchangeStreamCaptureMode`, it operates in the "global" mode while another thread is in the middle of a capture, and hence can crash the capture because `cudaEventQuery` is disallowed in this scenario.
4a. Our current understanding is the following (assuming the `cudaEventQuery` is for an event outside of a capture).
capturing thread mode | watchdog thread mode | watchdog event query allowed |
--------------------------|---------------------------|-----------------------------------|
global (default) | global (default) | no |
global (default) | thread local | yes (this PR) |
thread local | global (default) | yes (#104555) |
thread local | thread local | yes |
CC @kwen2501 @eellison @albanD @ptrblck @Aidyn-A
cc @ptrblck @mcarilli @ezyang
| 28 |
2,006 | 105,181 |
torch.compile leaks memory after compiled object is deleted, no apparent way to clean
|
needs reproduction, module: memory usage, triaged, oncall: pt2, module: dynamo
|
### 🐛 Describe the bug
Consider a program like this:
```python
pipe = StableDiffusionXLPipeline.from_pretrained(base_model_path, torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
#...
for x in pipe.__module__:
    del x
del pipe
gc.collect()
torch.cuda.empty_cache()
torch.cuda.synchronize()
```
After deleting `pipe`, there is still a tremendous amount of memory residing on the GPU as a result of torch.compile. If torch.compile is omitted from this program, there is only a small amount of residual memory left behind by torch. However, more than 10 GB is leaked as a result of compile, with no obvious way of being cleaned. Is there a way to clean up compiled graph memory?
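One thing worth trying in addition to the cleanup above is clearing Dynamo's own caches with `torch._dynamo.reset()`. A hedged sketch; whether this releases all of the memory retained by the compiled graphs is an assumption to verify, not a guarantee:

```python
import gc
import torch
import torch._dynamo

# del pipe  # as in the repro above: drop the last Python reference to the compiled module
torch._dynamo.reset()            # clear compiled graphs, guards and other Dynamo caches
gc.collect()                     # collect reference cycles that may still hold CUDA tensors
if torch.cuda.is_available():
    torch.cuda.empty_cache()     # return cached allocator blocks to the driver
    torch.cuda.synchronize()
    print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())
```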
### Error logs
_No response_
### Minified repro
```python
from diffusers import DiffusionPipeline
import torch, gc

# Any model will suffice for demonstration, this is from huggingface diffusers
pipe = DiffusionPipeline.from_pretrained(base_model_path, torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
#...
for x in pipe.__module__:
    del x
del pipe
gc.collect()
torch.cuda.empty_cache()
torch.cuda.synchronize()
```
### Versions
python3 collect_env.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21653 100 21653 0 0 4126 0 0:00:05 0:00:05 --:--:-- 4352
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: EndeavourOS Linux (x86_64)
GCC version: (GCC) 12.3.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.37
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.4.3-arch1-1-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA RTX A4000
Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.2
/usr/lib/libcudnn_adv_infer.so.8.9.2
/usr/lib/libcudnn_adv_train.so.8.9.2
/usr/lib/libcudnn_cnn_infer.so.8.9.2
/usr/lib/libcudnn_cnn_train.so.8.9.2
/usr/lib/libcudnn_ops_infer.so.8.9.2
/usr/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700KF
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 50%
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7222.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] lion-pytorch==0.1.2
[pip3] lovely-numpy==0.2.9
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.0.1
[pip3] torchaudio==2.1.0.dev20230630
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.5
[pip3] torchvision==0.16.0.dev20230630
[pip3] triton==2.0.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] lion-pytorch 0.1.2 pypi_0 pypi
[conda] lovely-numpy 0.2.9 pypi_0 pypi
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.23.5 pypi_0 pypi
[conda] open-clip-torch 2.20.0 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch-nightly
[conda] pytorch-lightning 1.9.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230630 py310_cu121 pytorch-nightly
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchsde 0.2.5 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230630 py310_cu121 pytorch-nightly
[conda] triton 2.0.0 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @ipiszy
| 6 |
2,007 | 105,167 |
[Dynamo]`__torch_function__` tracing support
|
module: inductor, module: dynamo, ciflow/inductor, release notes: dynamo
|
[Design Doc](https://docs.google.com/document/d/1WBxBSvW3NXhRp9ncmtokJloMLCtF4AYNhJaffvHe8Kw/edit#heading=h.vacn73lozd9w)
In this PR:
* Refactored `__torch_function__` handling to handle subclasses in any argument position.
* `__torch_function__` dispatch ordering is exactly as specified in the reference dispatcher
* Added handling for `__torch_function__` overrides on custom objects (allows wrapper classes)
* Added handling for `__torch_function__` attribute accesses and method calls on TensorWithTFOverrideVariable
* More tests
Future work needed:
* Properly make TensorWithTFOverrideVariable subclass TensorVariable with overridden functions in the correct place
* Support custom attributes on TensorWithTFOverrideVariable
* Make subclass handling sound; for example, if `__torch_function__` mutates guardable properties of a tensor, this could introduce constant recompiles, so we should graph break in this scenario
* Unwrap inputs to the graph so that their `__torch_function__` impl will not get triggered within the compiler/graph code, since the behavior of the `__torch_function__` impl should already be traced into the graph
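For background, this is roughly the kind of wrapper-class `__torch_function__` override the PR is about tracing (a generic illustration, not code from this PR):

```python
import torch

class LoggingTensor(torch.Tensor):
    """Toy wrapper subclass: every torch.* call involving it is routed through
    __torch_function__ before falling back to the default implementation."""

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print(f"__torch_function__ saw {func.__name__}")
        return super().__torch_function__(func, types, args, kwargs)

x = torch.randn(3).as_subclass(LoggingTensor)
y = torch.sin(x)    # prints "__torch_function__ saw sin"
print(type(y))      # <class '__main__.LoggingTensor'>
```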
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @anijain2305
| 7 |
2,008 | 105,166 |
[PyTorch-TB] Write full tensor as tensor proto
|
triaged, open source, Stale, topic: not user facing
|
Adding functions to support logging full tensors to TB as tensor protobuf
| 3 |
2,009 | 105,157 |
PT2 custom ops does not work with future annotations
|
triaged, module: custom-operators, oncall: pt2
|
PT2 custom ops does not work with future annotations
_Originally posted by @BowenBao in https://github.com/pytorch/pytorch/pull/105156#discussion_r1262892912_
Repro:
```python
from __future__ import annotations

import torch
from torch._custom_op import impl as custom_op

@custom_op.custom_op("mylibrary::foo_op")
def foo_op(x: torch.Tensor) -> torch.Tensor:
    ...
```
`ValueError: custom_op(...)(func): Parameter x has unsupported type torch.Tensor. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Union[torch.Tensor, NoneType], typing.Sequence[torch.Tensor], typing.Sequence[typing.Union[torch.Tensor, NoneType]], <class 'int'>, typing.Union[int, NoneType], typing.Sequence[int], typing.Union[typing.Sequence[int], NoneType], <class 'float'>, typing.Union[float, NoneType], typing.Sequence[float], typing.Union[typing.Sequence[float], NoneType], <class 'bool'>, typing.Union[bool, NoneType], typing.Sequence[bool], typing.Union[typing.Sequence[bool], NoneType], <class 'str'>, typing.Union[str, NoneType], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Union[torch.dtype, NoneType], <class 'torch.device'>, typing.Union[torch.device, NoneType]]). Got func with signature (x: 'torch.Tensor') -> 'torch.Tensor')`
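A hedged workaround sketch: the error appears to stem from the `__future__` import turning the annotations into strings, which the custom-op parser does not resolve, so defining the op in a module without that import sidesteps it (an assumption, not an official fix):

```python
# Same op as above, defined in a module WITHOUT `from __future__ import annotations`,
# so the annotations stay as real types instead of strings.
import torch
from torch._custom_op import impl as custom_op

@custom_op.custom_op("mylibrary::foo_op")
def foo_op(x: torch.Tensor) -> torch.Tensor:
    ...
```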
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 2 |
2,010 | 105,141 |
[ROCm] Add ROCm AMDGPU support for inductor cpp codegen
|
module: rocm, triaged, open source, Merged, Reverted, ciflow/trunk, ciflow/periodic, module: inductor, ciflow/inductor, rocm, ciflow/unstable, release notes: inductor
|
Follows from previous enablement attempt: https://github.com/pytorch/pytorch/pull/101797
Adds support for hsaco binaries in inductor's cpp_wrapper codegen and enables the CUDA tests in test_cpp_wrapper.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 38 |
2,011 | 105,134 |
TypeError: 'NoneType' object is not subscriptable (Occurred when translating col2im). Can't translate torch.nn.functional.fold in opset_version 18.
|
module: onnx, triaged
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(176, 64, 56, 56)
y = torch.randn(176, 128, 28, 28)
input_names = ['input']
output_names = ['output']

class embedding_concat(nn.Module):
    def __init__(self):
        super(embedding_concat, self).__init__()

    def forward(self, x):
        y = torch.randn(176, 128, 28, 28)
        B, C1, H1, W1 = x.size()
        _, C2, H2, W2 = y.size()
        s = int(H1 / H2)
        x = F.unfold(x, kernel_size=s, dilation=1, stride=s)
        x = x.view(B, C1, -1, H2, W2)
        z = torch.zeros(B, C1 + C2, x.size(2), H2, W2)
        for i in range(x.size(2)):
            z[:, :, i, :, :] = torch.cat((x[:, :, i, :, :], y), 1)
        z = z.view(B, -1, H2 * W2)
        print(z.shape)
        fold = nn.Fold(output_size=(H1, W1), kernel_size=s, stride=s)
        z = fold(z)
        return z

model = embedding_concat()
torch.onnx.export(model,
                  x,
                  "embedding_concat.onnx",
                  input_names=input_names,
                  output_names=output_names,
                  opset_version=18,
                  do_constant_folding=False,  # whether to fold constants
                  #export_params=False,
                  dynamic_axes={"input": {1: "channel", 2: "h", 3: "w"}},
                  )
```
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home (Chinese edition)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.17 (default, Jul 5 2023, 20:44:21) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU
Nvidia driver version: 528.79
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2500
DeviceID=CPU0
Family=205
L2CacheSize=4096
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2500
Name=12th Gen Intel(R) Core(TM) i5-12500H
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] numpydoc==1.5.0
[pip3] torch==2.0.0+cu117
[pip3] torchaudio==2.0.1+cu117
[pip3] torchvision==0.15.1+cu117
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpydoc 1.5.0 pypi_0 pypi
[conda] torch 2.0.0+cu117 pypi_0 pypi
[conda] torchaudio 2.0.1+cu117 pypi_0 pypi
[conda] torchvision 0.15.1+cu117 pypi_0 pypi
```
| 7 |
2,012 | 105,125 |
DISABLED test_conv (quantization.jit.test_quantize_jit.TestQuantizeJit)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in CI runs for ROCm5.6 CI upgrade PR https://github.com/pytorch/pytorch/pull/103092: https://github.com/pytorch/pytorch/actions/runs/5527567640/jobs/10083707569
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 1 |
2,013 | 105,124 |
DISABLED test_conv_transpose (quantization.jit.test_quantize_jit.TestQuantizeJit)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in CI runs for ROCm5.6 CI upgrade PR https://github.com/pytorch/pytorch/pull/103092: https://github.com/pytorch/pytorch/actions/runs/5527567640/jobs/10083707569
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 1 |
2,014 | 105,123 |
DISABLED test_observer_with_ignored_function (quantization.jit.test_quantize_jit.TestQuantizeJit)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in CI runs for ROCm5.6 CI upgrade PR https://github.com/pytorch/pytorch/pull/103092: https://github.com/pytorch/pytorch/actions/runs/5527567640/jobs/10083707569
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 1 |
2,015 | 105,121 |
DISABLED test_single_linear (quantization.jit.test_quantize_jit.TestQuantizeJit)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in CI runs for ROCm5.6 CI upgrade PR https://github.com/pytorch/pytorch/pull/103092: https://github.com/pytorch/pytorch/actions/runs/5527567640/jobs/10083707569
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 1 |
2,016 | 105,120 |
DISABLED test_nested (quantization.jit.test_quantize_jit.TestQuantizeJit)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in CI runs for ROCm5.6 CI upgrade PR https://github.com/pytorch/pytorch/pull/103092: https://github.com/pytorch/pytorch/actions/runs/5527567640/jobs/10083707499
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 1 |
2,017 | 105,119 |
DISABLED test_unary_ops (__main__.TestTensorExprFuser)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in CI runs for ROCm5.6 CI upgrade PR https://github.com/pytorch/pytorch/pull/103092: https://github.com/pytorch/pytorch/actions/runs/5527567640/jobs/10083707389
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 1 |
2,018 | 105,108 |
MacOS arm64 runners are not available in CI
|
module: ci, triaged
|
## Current Status
mitigated
## Error looks like
MacOS arm64 runners are not available; the following build will fail: ``trunk / macos-12-py3-arm64 / build``
## Incident timeline (all times pacific)
Jul 12 7:40 pm EST First job failed
Jul 12 8:56 pm EST as OSS CI oncall this issue is noticed and investigation is started
Jul 13 12:40AM EST runner reprovisioned
## User impact
MacOS arm64 runners are not available; the following build will fail: ``trunk / macos-12-py3-arm64 / build``
## Root cause
github daemon auto-updated in all instances at the same time, downloading a new version and then failing to start.
## Mitigation
tbd
## Prevention/followups
tbd
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| 2 |
2,019 | 105,105 |
Remaining functions without meta registrations
|
triaged, module: meta tensors
|
### 🐛 Describe the bug
I think some of these can't have meta registrations (like `unique`), and some of them are legacy ops we shouldn't bother with.
https://gist.github.com/ezyang/99a46bab8992ad142e7973e0e53d6eff
Computed with
```
print('\n'.join(sorted([n for n in torch._C._dispatch_get_all_op_names() if not torch._C._dispatch_has_computed_kernel_for_dispatch_key(n, "Meta") and n.startswith('aten::')])))
```
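For reference, a minimal sketch of what a meta registration looks like, shown on a toy custom op via `torch.library` (illustrative only; the aten ops in the gist get their registrations in C++ or in `torch/_meta_registrations.py`):

```python
import torch

lib = torch.library.Library("demo", "DEF")
lib.define("double_it(Tensor x) -> Tensor")

@torch.library.impl(lib, "double_it", "CPU")
def double_it_cpu(x):
    return x * 2

@torch.library.impl(lib, "double_it", "Meta")
def double_it_meta(x):
    # On the meta backend only shapes/dtypes are computed, no real data.
    return torch.empty_like(x)

out = torch.ops.demo.double_it(torch.empty(3, device="meta"))
print(out.shape, out.device)  # torch.Size([3]) meta
```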
cc @eellison @bdhirsh @nkaretnikov
### Versions
main
| 4 |
2,020 | 105,494 |
workaround for using vmap when .item() is being used internally
|
triaged, module: vmap, module: functorch
|
Example code:
```python
import torch

def func(x):
    for i in torch.arange(x):
        pass
    return None

y = torch.tensor([3, 3])
torch.vmap(func)(y)
```
Output:
```
RuntimeError                              Traceback (most recent call last)
      4     return None
      6 y = torch.tensor([3, 3])
----> 7 torch.vmap(func)(y)
...
      1 def func(x):
----> 2     for i in torch.arange(x):
      3         pass
      4     return None

RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
Commentary: I would like to use functorch's vmap, with one of the function's input arguments being an integer that specifies a variable number of iterations of a for loop. Apparently, torch.arange() internally uses .item(), and vmap cannot handle that. Is there any workaround?
This issue seems similar to pytorch/functorch#747, but it is not exactly the same.
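One workaround sketch (my own assumption, not an official answer): replace the data-dependent `torch.arange(x)` loop with a static upper bound plus masking, which vmap can batch over:

```python
import torch

MAX_ITERS = 10  # assumed static upper bound on the per-sample iteration count

def func(x):
    total = torch.zeros(())
    for i in range(MAX_ITERS):              # static Python loop: no .item() needed
        total = total + (x > i).to(torch.float32)
    return total                            # equals min(x, MAX_ITERS) per batch element

y = torch.tensor([3, 3])
print(torch.vmap(func)(y))                  # tensor([3., 3.])
```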
cc @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 6 |
2,021 | 105,092 |
[RFC] Proposal to upgrade LLVM version
|
triaged, NNC, module: cpu inductor
|
### 🚀 The feature, motivation and pitch
Currently, LLVM 9.0.1 is being used by PyTorch, which is quite old.
Some features we would like to add down the line require LLVM 10 or above.
If upgrading LLVM wouldn't break anything, then we would like to do so.
### Alternatives
Linking two separate versions of LLVM is an alternative, but it might cause symbol-resolution issues, and a workaround might increase the binary-size.
### Additional context
We'd like to upgrade the LLVM version to a version between 10 & 16.0.6.
cc @EikanWang @jgong5 @malfet @seemethere
| 7 |
2,022 | 105,090 |
Fix kwargs for `checkpoint`; composition with `fully_shard`
|
Stale, release notes: distributed (fsdp), topic: bug fixes
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #105090
This PR fixes kwargs for `checkpoint` and some composability issues with `fully_shard`, where the FSDP pre/post-forward incorrectly runs twice in backward causing errors.
Unlike for the module wrapper path, `checkpoint`'s `_no_hook` context only disables its own hooks, not FSDP's hooks. Hence, we need to make sure that FSDP does not re-run its forward hooks in backward. Checking against `BACKWARD_PRE` training state does this.
| 3 |
2,023 | 105,077 |
torch.load fails under FakeTensorMode for GPT2 model
|
triaged, ezyang's list, oncall: pt2, module: fakeTensor, module: dynamo, release notes: dynamo
|
### 🐛 Describe the bug
I will break this issue up into two parts. The first discusses the issue without any modification to PyTorch code. The second hacks `torch._utils._rebuild_tensor` and tries to export the model to FX without success. I did this because I am not sure whether the second part is a side effect of a bad workaround or an entirely different issue. If it is the latter, I am happy to file a second issue for it after confirmation.
#### Part 1: Before the hack
Model loading works when called outside `FakeTensorMode` context, but it fails when called within it.
This becomes relevant after https://github.com/pytorch/pytorch/pull/100017/ in which we can fakefy input and model parameters before calling `torch._dynamo.export(fake_model, *fake_args, fake_mode=fake_mode, **fake_kwargs)` with the pre-instantiated `FakeTensorMode`.
**Repro:**
```python
from torch._subclasses import fake_tensor
from torch.fx.experimental.symbolic_shapes import ShapeEnv
import transformers

fake_mode = fake_tensor.FakeTensorMode(
    allow_non_fake_inputs=False,
    shape_env=ShapeEnv(
        allow_scalar_outputs=False, allow_dynamic_output_shape_ops=False
    ),
)
fake_model = transformers.AutoModel.from_pretrained("sshleifer/tiny-gpt2")
assert fake_model is not None
with fake_mode:
    fake_model = transformers.AutoModel.from_pretrained("sshleifer/tiny-gpt2")  # raises OSError: Unable to load weights from pytorch checkpoint file for '...' at If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
This is the full error:
```
Some weights of the model checkpoint at sshleifer/tiny-gpt2 were not used when initializing GPT2Model: ['lm_head.weight']
- This IS expected if you are initializing GPT2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py:463 in โ
โ load_state_dict โ
โ โ
โ 460 โ โ โ ) โ
โ 461 โ โ return safe_load_file(checkpoint_file) โ
โ 462 โ try: โ
โ โฑ 463 โ โ return torch.load(checkpoint_file, map_location="cpu") โ
โ 464 โ except Exception as e: โ
โ 465 โ โ try: โ
โ 466 โ โ โ with open(checkpoint_file) as f: โ
โ โ
โ /opt/pytorch/torch/serialization.py:1030 in load โ
โ โ
โ 1027 โ โ โ โ return _legacy_load(opened_file, map_location, _weights_only_unpickler, โ
โ 1028 โ โ โ except RuntimeError as e: โ
โ 1029 โ โ โ โ raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None โ
โ โฑ 1030 โ โ return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args โ
โ 1031 โ
โ 1032 โ
โ 1033 # Register pickling support for layout instances such as โ
โ โ
โ /opt/pytorch/torch/serialization.py:1258 in _legacy_load โ
โ โ
โ 1255 โ _sys_info = pickle_module.load(f, **pickle_load_args) โ
โ 1256 โ unpickler = UnpicklerWrapper(f, **pickle_load_args) โ
โ 1257 โ unpickler.persistent_load = persistent_load โ
โ โฑ 1258 โ result = unpickler.load() โ
โ 1259 โ โ
โ 1260 โ deserialized_storage_keys = pickle_module.load(f, **pickle_load_args) โ
โ 1261 โ
โ โ
โ /opt/pytorch/torch/_utils.py:201 in _rebuild_tensor_v2 โ
โ โ
โ 198 def _rebuild_tensor_v2( โ
โ 199 โ storage, storage_offset, size, stride, requires_grad, backward_hooks, metadata=None โ
โ 200 ): โ
โ โฑ 201 โ tensor = _rebuild_tensor(storage, storage_offset, size, stride) โ
โ 202 โ tensor.requires_grad = requires_grad โ
โ 203 โ if metadata: โ
โ 204 โ โ set_tensor_metadata(tensor, metadata) โ
โ โ
โ /opt/pytorch/torch/_utils.py:180 in _rebuild_tensor โ
โ โ
โ 177 def _rebuild_tensor(storage, storage_offset, size, stride): โ
โ 178 โ # first construct a tensor with the correct dtype/device โ
โ 179 โ t = torch.tensor([], dtype=storage.dtype, device=storage._untyped_storage.device) โ
โ โฑ 180 โ return t.set_(storage._untyped_storage, storage_offset, size, stride) โ
โ 181 โ
โ 182 โ
โ 183 def get_tensor_metadata(tensor): โ
โ โ
โ /opt/pytorch/torch/utils/_stats.py:20 in wrapper โ
โ โ
โ 17 โ โ if fn.__qualname__ not in simple_call_counter: โ
โ 18 โ โ โ simple_call_counter[fn.__qualname__] = 0 โ
โ 19 โ โ simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 โ
โ โฑ 20 โ โ return fn(*args, **kwargs) โ
โ 21 โ return wrapper โ
โ 22 โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1160 in __torch_dispatch__ โ
โ โ
โ 1157 โ def __torch_dispatch__(self, func, types, args=(), kwargs=None): โ
โ 1158 โ โ assert self not in _get_current_dispatch_mode_stack(), func โ
โ 1159 โ โ try: โ
โ โฑ 1160 โ โ โ return self.dispatch(func, types, args, kwargs) โ
โ 1161 โ โ except TypeError: โ
โ 1162 โ โ โ log.exception("fake tensor raised TypeError") โ
โ 1163 โ โ โ raise โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1318 in dispatch โ
โ โ
โ 1315 โ โ โ
โ 1316 โ โ # we are falling through to running non constant tensors, any input constant tha โ
โ 1317 โ โ # is written to must be invalidated โ
โ โฑ 1318 โ โ self.invalidate_written_to_constants(func, flat_arg_fake_tensors, args, kwargs) โ
โ 1319 โ โ โ
โ 1320 โ โ # Try for fastpath โ
โ 1321 โ โ if has_symbolic_sizes: โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1557 in invalidate_written_to_constants โ
โ โ
โ 1554 โ โ any_constant = any(e.constant is not None for e in flat_arg_fake_tensors) โ
โ 1555 โ โ if any_constant and get_schema_info(func).is_mutable(): โ
โ 1556 โ โ โ schema_info = get_schema_info(func) โ
โ โฑ 1557 โ โ โ _, new_kwargs = normalize_function( โ
โ 1558 โ โ โ โ func, args=args, kwargs=kwargs, normalize_to_only_use_kwargs=True โ
โ 1559 โ โ โ ) โ
โ 1560 โ โ โ for k, v in new_kwargs.items(): โ
โ โ
โ /opt/pytorch/torch/fx/operator_schemas.py:297 in normalize_function โ
โ โ
โ 294 โ โ new_args_and_kwargs = _args_kwargs_to_normalized_args_kwargs(sig, args, kwargs, โ
โ 295 โ else: โ
โ 296 โ โ assert callable(target) โ
โ โฑ 297 โ โ torch_op_schemas = get_signature_for_torch_op(target) โ
โ 298 โ โ matched_schemas = [] โ
โ 299 โ โ if torch_op_schemas: โ
โ 300 โ โ โ # Iterate through all of the schema until we find one that matches โ
โ โ
โ /opt/pytorch/torch/fx/operator_schemas.py:167 in get_signature_for_torch_op โ
โ โ
โ 164 โ โ โ return (None, None) if return_schemas else None โ
โ 165 โ โ schemas = torch._C._jit_get_schemas_for_operator(aten_fn) โ
โ 166 โ โ
โ โฑ 167 โ signatures = [_torchscript_schema_to_signature(schema) for schema in schemas] โ
โ 168 โ return (signatures, schemas) if return_schemas else signatures โ
โ 169 โ
โ 170 @compatibility(is_backward_compatible=False) โ
โ โ
โ /opt/pytorch/torch/fx/operator_schemas.py:167 in <listcomp> โ
โ โ
โ 164 โ โ โ return (None, None) if return_schemas else None โ
โ 165 โ โ schemas = torch._C._jit_get_schemas_for_operator(aten_fn) โ
โ 166 โ โ
โ โฑ 167 โ signatures = [_torchscript_schema_to_signature(schema) for schema in schemas] โ
โ 168 โ return (signatures, schemas) if return_schemas else signatures โ
โ 169 โ
โ 170 @compatibility(is_backward_compatible=False) โ
โ โ
โ /opt/pytorch/torch/fx/operator_schemas.py:70 in _torchscript_schema_to_signature โ
โ โ
โ 67 โ from inspect import Parameter โ
โ 68 โ parameters : List[Parameter] = [] โ
โ 69 โ for arg in ts_schema.arguments: โ
โ โฑ 70 โ โ arg_type = _torchscript_type_to_python_type(arg.type) โ
โ 71 โ โ default = arg.default_value if arg.has_default_value() else Parameter.empty โ
โ 72 โ โ # TODO: Figure out if this is safe. It seems like when generating the type signa โ
โ 73 โ โ # PythonArgParser, we emit signatures with `input` instead of `self` as the firs โ
โ โ
โ /opt/pytorch/torch/fx/operator_schemas.py:64 in _torchscript_type_to_python_type โ
โ โ
โ 61 โ eval'ing the annotation_str. _type_eval_globals sets up expressions โ
โ 62 โ like "List" and "Future" to map to actual types (typing.List and jit.Future) โ
โ 63 โ """ โ
โ โฑ 64 โ return eval(ts_type.annotation_str, _type_eval_globals) โ
โ 65 โ
โ 66 def _torchscript_schema_to_signature(ts_schema : torch._C.FunctionSchema) -> inspect.Sig โ
โ 67 โ from inspect import Parameter โ
โ <string>:1 in <module> โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
NameError: name 'Storage' is not defined
During handling of the above exception, another exception occurred:
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py:467 in โ
โ load_state_dict โ
โ โ
โ 464 โ except Exception as e: โ
โ 465 โ โ try: โ
โ 466 โ โ โ with open(checkpoint_file) as f: โ
โ โฑ 467 โ โ โ โ if f.read(7) == "version": โ
โ 468 โ โ โ โ โ raise OSError( โ
โ 469 โ โ โ โ โ โ "You seem to have cloned a repository without having git-lfs ins โ
โ 470 โ โ โ โ โ โ "git-lfs and run `git lfs install` followed by `git lfs pull` in โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/codecs.py:322 in decode โ
โ โ
โ 319 โ def decode(self, input, final=False): โ
โ 320 โ โ # decode input (taking the buffer into account) โ
โ 321 โ โ data = self.buffer + input โ
โ โฑ 322 โ โ (result, consumed) = self._buffer_decode(data, self.errors, final) โ
โ 323 โ โ # keep undecoded input until the next call โ
โ 324 โ โ self.buffer = data[consumed:] โ
โ 325 โ โ return result โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
During handling of the above exception, another exception occurred:
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/pytorch/bug_repro.py:16 in <module> โ
โ โ
โ 13 fake_model = transformers.AutoModel.from_pretrained("sshleifer/tiny-gpt2") โ
โ 14 assert fake_model is not None โ
โ 15 with fake_mode: โ
โ โฑ 16 โ fake_model = transformers.AutoModel.from_pretrained("sshleifer/tiny-gpt2") # raises โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:484 in โ
โ from_pretrained โ
โ โ
โ 481 โ โ โ ) โ
โ 482 โ โ elif type(config) in cls._model_mapping.keys(): โ
โ 483 โ โ โ model_class = _get_model_class(config, cls._model_mapping) โ
โ โฑ 484 โ โ โ return model_class.from_pretrained( โ
โ 485 โ โ โ โ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, โ
โ 486 โ โ โ ) โ
โ 487 โ โ raise ValueError( โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py:2604 in โ
โ from_pretrained โ
โ โ
โ 2601 โ โ if from_pt: โ
โ 2602 โ โ โ if not is_sharded and state_dict is None: โ
โ 2603 โ โ โ โ # Time to load the checkpoint โ
โ โฑ 2604 โ โ โ โ state_dict = load_state_dict(resolved_archive_file) โ
โ 2605 โ โ โ โ
โ 2606 โ โ โ # set dtype to instantiate the model under: โ
โ 2607 โ โ โ # 1. If torch_dtype is not None, we use that dtype โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py:479 in โ
โ load_state_dict โ
โ โ
โ 476 โ โ โ โ โ โ "model. Make sure you have saved the model properly." โ
โ 477 โ โ โ โ โ ) from e โ
โ 478 โ โ except (UnicodeDecodeError, ValueError): โ
โ โฑ 479 โ โ โ raise OSError( โ
โ 480 โ โ โ โ f"Unable to load weights from pytorch checkpoint file for '{checkpoint_f โ
โ 481 โ โ โ โ f"at '{checkpoint_file}'. " โ
โ 482 โ โ โ โ "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please s โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
OSError: Unable to load weights from pytorch checkpoint file for '/root/.cache/huggingface/hub/models--sshleifer--tiny-gpt2/snapshots/5f91d94bd9cd7190a9f3216ff93cd1dd95f2c7be/pytorch_model.bin' at
'/root/.cache/huggingface/hub/models--sshleifer--tiny-gpt2/snapshots/5f91d94bd9cd7190a9f3216ff93cd1dd95f2c7be/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set
from_tf=True.
```
By patching `torch._utils._rebuild_tensor` with a context manager like
**Repro**
```python
class PytorchPatcher:
    def __init__(self):
        def torch__util__rebuild_tensor_wrapper(storage, storage_offset, size, stride):
            from torch._subclasses.fake_tensor import FakeTensorMode
            from torch.utils._mode_utils import no_dispatch
            from torch.utils._python_dispatch import _get_current_dispatch_mode

            def _rebuild_real_tensor(storage, storage_offset, size, stride):
                t = torch.tensor(
                    [], dtype=storage.dtype, device=storage._untyped_storage.device
                )
                return t.set_(storage._untyped_storage, storage_offset, size, stride)

            mode = _get_current_dispatch_mode()
            if isinstance(mode, FakeTensorMode):
                # Create a real tensor and then convert it to FakeTensor.
                # We cannot directly create a FakeTensor because tensor.set_(...)
                # is not supported in the FakeTensorMode dispatcher.
                with no_dispatch():
                    t = _rebuild_real_tensor(storage, storage_offset, size, stride)
                return mode.from_tensor(t)
            return _rebuild_real_tensor(storage, storage_offset, size, stride)

        # Original version of torch._utils._rebuild_tensor.
        self.torch__util_rebuild_tensor = torch._utils._rebuild_tensor
        # Wrapper or modified version of torch functions.
        self.torch__util_rebuild_tensor_wrapper = torch__util__rebuild_tensor_wrapper

    def __enter__(self):
        torch._utils._rebuild_tensor = self.torch__util_rebuild_tensor_wrapper

    def __exit__(self, exc_type, exc_value, traceback):
        torch._utils._rebuild_tensor = self.torch__util_rebuild_tensor
```
it does work, but I wonder if this is the right way of doing it. My expectation was that under a fake mode context, real tensors would never be created, so having to hack `torch._utils._rebuild_tensor` was unexpected.
```python
torch_patcher = PytorchPatcher()
with fake_mode, torch_patcher:
    fake_model = transformers.AutoModel.from_pretrained("sshleifer/tiny-gpt2")  # with the patch, this no longer raises the OSError
```
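As a quick sanity check (a hedged sketch reusing `fake_model` from the snippet above), one can verify that the patched load actually produced fake parameters:

```python
from torch._subclasses.fake_tensor import FakeTensor

# Every parameter should now be a FakeTensor rather than a real CPU tensor.
print(all(isinstance(p, FakeTensor) for p in fake_model.parameters()))
```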
#### Part 2: Exporting to FX after the hack
With the hack above, going a step further and trying to call `torch._dynamo.export` fails because non-fake tensors are apparently found within the fake_model instance.
I am not sure whether this is a side effect of patching `torch._utils._rebuild_tensor` or an entirely different issue uncovered after a successful hack.
**Repro**
```python
def create_args():
    tokenizer = transformers.AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
    kwargs = tokenizer("Hello world!", return_tensors="pt")
    input_ids = kwargs["input_ids"]
    attention_mask = kwargs["attention_mask"]
    return input_ids, None, attention_mask

def create_kwargs():
    return {"return_dict": False}

converter = fake_mode.fake_tensor_converter
fake_mode.validate_and_convert_non_fake_tensors(fake_model, converter, fake_args, fake_kwargs)  # passes, only fake stuff here
torch._dynamo.export(fake_model, *fake_args, tracing_mode="fake", fake_mode=fake_mode, **fake_kwargs)  # validate_and_convert_non_fake_tensors fails here. what is the difference?
```
That is the error from the export attempt:
```bash
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/pytorch/bug_repro.py:120 in <module> โ
โ โ
โ 117 โ
โ 118 โ
โ 119 โ
โ โฑ 120 torch._dynamo.export(fake_model, *create_args(), tracing_mode="fake", fake_mode=fake_mod โ
โ โ
โ /opt/pytorch/torch/_dynamo/eval_frame.py:948 in export โ
โ โ
โ 945 โ โ )(f) โ
โ 946 โ โ # TODO(voz): We may have instances of `f` that mutate inputs, we should track si โ
โ 947 โ โ try: โ
โ โฑ 948 โ โ โ result_traced = opt_f(*args, **kwargs) โ
โ 949 โ โ except ConstraintViolationError as e: โ
โ 950 โ โ โ constraint_violation_error = e โ
โ 951 โ remove_from_cache(f) โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1522 in _wrapped_call_impl โ
โ โ
โ 1519 โ โ if self._compiled_call_impl is not None: โ
โ 1520 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] โ
โ 1521 โ โ else: โ
โ โฑ 1522 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1523 โ โ
โ 1524 โ def _call_impl(self, *args, **kwargs): โ
โ 1525 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.fo โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1531 in _call_impl โ
โ โ
โ 1528 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1529 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1530 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1531 โ โ โ return forward_call(*args, **kwargs) โ
โ 1532 โ โ โ
โ 1533 โ โ try: โ
โ 1534 โ โ โ result = None โ
โ โ
โ /opt/pytorch/torch/_dynamo/eval_frame.py:294 in _fn โ
โ โ
โ 291 โ โ โ dynamic_ctx = enable_dynamic(self.dynamic, self.export) โ
โ 292 โ โ โ dynamic_ctx.__enter__() โ
โ 293 โ โ โ try: โ
โ โฑ 294 โ โ โ โ return fn(*args, **kwargs) โ
โ 295 โ โ โ finally: โ
โ 296 โ โ โ โ set_eval_frame(prior) โ
โ 297 โ โ โ โ dynamic_ctx.__exit__(None, None, None) โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1522 in _wrapped_call_impl โ
โ โ
โ 1519 โ โ if self._compiled_call_impl is not None: โ
โ 1520 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] โ
โ 1521 โ โ else: โ
โ โฑ 1522 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1523 โ โ
โ 1524 โ def _call_impl(self, *args, **kwargs): โ
โ 1525 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.fo โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1531 in _call_impl โ
โ โ
โ 1528 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1529 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1530 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1531 โ โ โ return forward_call(*args, **kwargs) โ
โ 1532 โ โ โ
โ 1533 โ โ try: โ
โ 1534 โ โ โ result = None โ
โ โ
โ /opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py:751 โ
โ in forward โ
โ โ
โ 748 โ โ for layer, heads in heads_to_prune.items(): โ
โ 749 โ โ โ self.h[layer].attn.prune_heads(heads) โ
โ 750 โ โ
โ โฑ 751 โ @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) โ
โ 752 โ @add_code_sample_docstrings( โ
โ 753 โ โ checkpoint=_CHECKPOINT_FOR_DOC, โ
โ 754 โ โ output_type=BaseModelOutputWithPastAndCrossAttentions, โ
โ โ
โ /opt/pytorch/torch/_dynamo/eval_frame.py:294 in _fn โ
โ โ
โ 291 โ โ โ dynamic_ctx = enable_dynamic(self.dynamic, self.export) โ
โ 292 โ โ โ dynamic_ctx.__enter__() โ
โ 293 โ โ โ try: โ
โ โฑ 294 โ โ โ โ return fn(*args, **kwargs) โ
โ 295 โ โ โ finally: โ
โ 296 โ โ โ โ set_eval_frame(prior) โ
โ 297 โ โ โ โ dynamic_ctx.__exit__(None, None, None) โ
โ โ
โ /opt/pytorch/torch/_dynamo/external_utils.py:17 in inner โ
โ โ
โ 14 โ โ
โ 15 โ @functools.wraps(fn) โ
โ 16 โ def inner(*args, **kwargs): โ
โ โฑ 17 โ โ return fn(*args, **kwargs) โ
โ 18 โ โ
โ 19 โ return inner โ
โ 20 โ
โ โ
โ /opt/pytorch/torch/_dynamo/eval_frame.py:920 in result_capturing_wrapper โ
โ โ
โ 917 โ โ โ โ
โ 918 โ โ โ graph_captured_input = graph_inputs โ
โ 919 โ โ โ assert graph is not None โ
โ โฑ 920 โ โ โ graph_captured_result = graph(*graph_inputs) โ
โ 921 โ โ โ return graph_captured_result โ
โ 922 โ โ โ
โ 923 โ โ return result_capturing_wrapper โ
โ โ
โ /opt/pytorch/torch/fx/graph_module.py:678 in call_wrapped โ
โ โ
โ 675 โ โ โ cls._wrapped_call = _WrappedCall(cls, cls_call) # type: ignore[attr-defined โ
โ 676 โ โ โ
โ 677 โ โ def call_wrapped(self, *args, **kwargs): โ
โ โฑ 678 โ โ โ return self._wrapped_call(self, *args, **kwargs) โ
โ 679 โ โ โ
โ 680 โ โ cls.__call__ = call_wrapped โ
โ 681 โ
โ โ
โ /opt/pytorch/torch/fx/graph_module.py:284 in __call__ โ
โ โ
โ 281 โ โ โ โ โ file=sys.stderr) โ
โ 282 โ โ โ โ raise e.with_traceback(None) โ
โ 283 โ โ โ else: โ
โ โฑ 284 โ โ โ โ raise e โ
โ 285 โ
โ 286 @compatibility(is_backward_compatible=True) โ
โ 287 class GraphModule(torch.nn.Module): โ
โ โ
โ /opt/pytorch/torch/fx/graph_module.py:274 in __call__ โ
โ โ
โ 271 โ โ โ if self.cls_call is not None: โ
โ 272 โ โ โ โ return self.cls_call(obj, *args, **kwargs) โ
โ 273 โ โ โ else: โ
โ โฑ 274 โ โ โ โ return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[mi โ
โ 275 โ โ except Exception as e: โ
โ 276 โ โ โ assert e.__traceback__ โ
โ 277 โ โ โ topmost_framesummary: traceback.FrameSummary = \ โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1522 in _wrapped_call_impl โ
โ โ
โ 1519 โ โ if self._compiled_call_impl is not None: โ
โ 1520 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] โ
โ 1521 โ โ else: โ
โ โฑ 1522 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1523 โ โ
โ 1524 โ def _call_impl(self, *args, **kwargs): โ
โ 1525 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.fo โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1531 in _call_impl โ
โ โ
โ 1528 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1529 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1530 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1531 โ โ โ return forward_call(*args, **kwargs) โ
โ 1532 โ โ โ
โ 1533 โ โ try: โ
โ 1534 โ โ โ result = None โ
โ <eval_with_key>.0:16 in forward โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1522 in _wrapped_call_impl โ
โ โ
โ 1519 โ โ if self._compiled_call_impl is not None: โ
โ 1520 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] โ
โ 1521 โ โ else: โ
โ โฑ 1522 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1523 โ โ
โ 1524 โ def _call_impl(self, *args, **kwargs): โ
โ 1525 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.fo โ
โ โ
โ /opt/pytorch/torch/nn/modules/module.py:1531 in _call_impl โ
โ โ
โ 1528 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1529 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1530 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1531 โ โ โ return forward_call(*args, **kwargs) โ
โ 1532 โ โ โ
โ 1533 โ โ try: โ
โ 1534 โ โ โ result = None โ
โ โ
โ /opt/pytorch/torch/nn/modules/sparse.py:162 in forward โ
โ โ
โ 159 โ โ โ โ self.weight[self.padding_idx].fill_(0) โ
โ 160 โ โ
โ 161 โ def forward(self, input: Tensor) -> Tensor: โ
โ โฑ 162 โ โ return F.embedding( โ
โ 163 โ โ โ input, self.weight, self.padding_idx, self.max_norm, โ
โ 164 โ โ โ self.norm_type, self.scale_grad_by_freq, self.sparse) โ
โ 165 โ
โ โ
โ /opt/pytorch/torch/nn/functional.py:2238 in embedding โ
โ โ
โ 2235 โ โ # torch.embedding_renorm_ โ
โ 2236 โ โ # remove once script supports set_grad_enabled โ
โ 2237 โ โ _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) โ
โ โฑ 2238 โ return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) โ
โ 2239 โ
โ 2240 โ
โ 2241 def embedding_bag( โ
โ โ
โ /opt/pytorch/torch/utils/_stats.py:20 in wrapper โ
โ โ
โ 17 โ โ if fn.__qualname__ not in simple_call_counter: โ
โ 18 โ โ โ simple_call_counter[fn.__qualname__] = 0 โ
โ 19 โ โ simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 โ
โ โฑ 20 โ โ return fn(*args, **kwargs) โ
โ 21 โ return wrapper โ
โ 22 โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1040 in __torch_dispatch__ โ
โ โ
โ 1037 โ โ โ return NotImplemented โ
โ 1038 โ โ โ
โ 1039 โ โ with fake_mode: # type: ignore[attr-defined] โ
โ โฑ 1040 โ โ โ return func(*args, **kwargs) โ
โ 1041 โ โ
โ 1042 โ @staticmethod โ
โ 1043 โ def _find_common_device(func, args, kwargs) -> Tuple[torch.device, bool]: โ
โ โ
โ /opt/pytorch/torch/_ops.py:437 in __call__ โ
โ โ
โ 434 โ โ ) โ
โ 435 โ โ
โ 436 โ def __call__(self, *args, **kwargs): โ
โ โฑ 437 โ โ return self._op(*args, **kwargs or {}) โ
โ 438 โ โ
โ 439 โ def __hash__(self): โ
โ 440 โ โ return hash(self._op) โ
โ โ
โ /opt/pytorch/torch/utils/_stats.py:20 in wrapper โ
โ โ
โ 17 โ โ if fn.__qualname__ not in simple_call_counter: โ
โ 18 โ โ โ simple_call_counter[fn.__qualname__] = 0 โ
โ 19 โ โ simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1 โ
โ โฑ 20 โ โ return fn(*args, **kwargs) โ
โ 21 โ return wrapper โ
โ 22 โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1160 in __torch_dispatch__ โ
โ โ
โ 1157 โ def __torch_dispatch__(self, func, types, args=(), kwargs=None): โ
โ 1158 โ โ assert self not in _get_current_dispatch_mode_stack(), func โ
โ 1159 โ โ try: โ
โ โฑ 1160 โ โ โ return self.dispatch(func, types, args, kwargs) โ
โ 1161 โ โ except TypeError: โ
โ 1162 โ โ โ log.exception("fake tensor raised TypeError") โ
โ 1163 โ โ โ raise โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1271 in dispatch โ
โ โ
โ 1268 โ โ โ args, โ
โ 1269 โ โ โ kwargs, โ
โ 1270 โ โ โ flat_arg_fake_tensors, โ
โ โฑ 1271 โ โ ) = self.validate_and_convert_non_fake_tensors(func, converter, args, kwargs) โ
โ 1272 โ โ โ
โ 1273 โ โ # The current constant handling only support tracing systems โ
โ 1274 โ โ # (aot autograd, torchdynamo) where each operation is run consecutively. โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1467 in validate_and_convert_non_fake_tensors โ
โ โ
โ 1464 โ โ โ flat_arg_fake_tensors.append(x) โ
โ 1465 โ โ โ return x โ
โ 1466 โ โ โ
โ โฑ 1467 โ โ args, kwargs = tree_map_only( โ
โ 1468 โ โ โ torch.Tensor, โ
โ 1469 โ โ โ validate, โ
โ 1470 โ โ โ (args, kwargs), โ
โ โ
โ /opt/pytorch/torch/utils/_pytree.py:393 in tree_map_only โ
โ โ
โ 390 โ ... โ
โ 391 โ
โ 392 def tree_map_only(ty: TypeAny, fn: FnAny[Any], pytree: PyTree) -> PyTree: โ
โ โฑ 393 โ return tree_map(map_only(ty)(fn), pytree) โ
โ 394 โ
โ 395 def tree_all(pred: Callable[[Any], bool], pytree: PyTree) -> bool: โ
โ 396 โ flat_args, _ = tree_flatten(pytree) โ
โ โ
โ /opt/pytorch/torch/utils/_pytree.py:323 in tree_map โ
โ โ
โ 320 โ
โ 321 def tree_map(fn: Any, pytree: PyTree) -> PyTree: โ
โ 322 โ flat_args, spec = tree_flatten(pytree) โ
โ โฑ 323 โ return tree_unflatten([fn(i) for i in flat_args], spec) โ
โ 324 โ
โ 325 Type2 = Tuple[Type[T], Type[S]] โ
โ 326 Type3 = Tuple[Type[T], Type[S], Type[U]] โ
โ โ
โ /opt/pytorch/torch/utils/_pytree.py:323 in <listcomp> โ
โ โ
โ 320 โ
โ 321 def tree_map(fn: Any, pytree: PyTree) -> PyTree: โ
โ 322 โ flat_args, spec = tree_flatten(pytree) โ
โ โฑ 323 โ return tree_unflatten([fn(i) for i in flat_args], spec) โ
โ 324 โ
โ 325 Type2 = Tuple[Type[T], Type[S]] โ
โ 326 Type3 = Tuple[Type[T], Type[S], Type[U]] โ
โ โ
โ /opt/pytorch/torch/utils/_pytree.py:374 in inner โ
โ โ
โ 371 โ โ @functools.wraps(f) โ
โ 372 โ โ def inner(x: T) -> Any: โ
โ 373 โ โ โ if isinstance(x, ty): โ
โ โฑ 374 โ โ โ โ return f(x) โ
โ 375 โ โ โ else: โ
โ 376 โ โ โ โ return x โ
โ 377 โ โ return inner โ
โ โ
โ /opt/pytorch/torch/_subclasses/fake_tensor.py:1455 in validate โ
โ โ
โ 1452 โ โ โ โ โ ) โ
โ 1453 โ โ โ โ if not self.allow_non_fake_inputs: โ
โ 1454 โ โ โ โ โ # import pdb; pdb.set_trace() โ
โ โฑ 1455 โ โ โ โ โ raise Exception( โ
โ 1456 โ โ โ โ โ โ f"Please convert all Tensors to FakeTensors first or instantiate โ
โ 1457 โ โ โ โ โ โ f"with 'allow_non_fake_inputs'. Found in {render_call(func, args โ
โ 1458 โ โ โ โ โ ) โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in
aten.embedding.default(Parameter(FakeTensor(..., size=(50257, 2), requires_grad=True)), tensor([...], size=(1, 3)))
```
### Versions
Latest PyTorch main; the complete list is below:
```
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0a0+gita396dc5
[pip3] torch-nebula==0.16.2
[pip3] torch-ort==1.16.0.dev20230626
[pip3] torchaudio==2.1.0a0+7096829
[pip3] torchdata==0.7.0a0+901b483
[pip3] torchmetrics==1.0.0rc0
[pip3] torchsnapshot==0.1.0
[pip3] torchtext==0.16.0a0+60bea66
[pip3] torchvision==0.16.0a0+52eb503
[conda] magma-cuda117 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0a0+gita396dc5 dev_0 <develop>
[conda] torch-nebula 0.16.2 pypi_0 pypi
[conda] torch-ort 1.16.0.dev20230626 dev_0 <develop>
[conda] torchaudio 2.1.0a0+7096829 pypi_0 pypi
[conda] torchdata 0.7.0a0+901b483 pypi_0 pypi
[conda] torchmetrics 1.0.0rc0 pypi_0 pypi
[conda] torchsnapshot 0.1.0 pypi_0 pypi
[conda] torchtext 0.16.0a0+60bea66 pypi_0 pypi
[conda] torchvision 0.16.0a0+52eb503 pypi_0 pypi
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 4 |
2,024 | 105,073 |
[ONNX] Support aten::var_mean
|
module: onnx, triaged
|
Is there an issue to track it? Or at least add a `TODO:`.
_Originally posted by @thiagocrepaldi in https://github.com/pytorch/pytorch/pull/104491#discussion_r1261282006_
* ONNX shape inference segfaults in CI. Suspect it is a known fixed issue in ONNX main.
* Support type promotion for `var_mean`.
| 0 |
2,025 | 105,068 |
[linalg] test_ops.py::test_python_ref_meta__refs_linalg_svd_cpu_complex failing
|
triaged, module: linear algebra, module: meta tensors
|
## Issue description
test_ops.py failing:
```
======================================================================
ERROR: test_python_ref_meta__refs_linalg_svd_cpu_complex128 (__main__.TestCommonCPU)
----------------------------------------------------------------------
RuntimeError: Conj mismatch! is_conj is set to False and True
To execute this test, run the following from the base repo dir:
python test/test_ops.py -k test_python_ref_meta__refs_linalg_svd_cpu_complex128
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_python_ref_meta__refs_linalg_svd_cpu_complex64 (__main__.TestCommonCPU)
----------------------------------------------------------------------
RuntimeError: Conj mismatch! is_conj is set to False and True
To execute this test, run the following from the base repo dir:
python test/test_ops.py -k test_python_ref_meta__refs_linalg_svd_cpu_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 2 tests in 0.343s
```
## System Info
```
Collecting environment information...
PyTorch version: 2.1.0a0+git4ab1409
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.27
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1435.740
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] botorch==0.8.5
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-coding==1.3.3
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] gpytorch==1.10
[pip3] mypy==0.960
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.1.0a0+git999abd5
[pip3] torch_geometric==2.4.0
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchmultimodal-nightly==2023.6.2
[pip3] torchrl==0.1.1
[pip3] torchviz==0.0.2
[pip3] torchx==0.5.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] botorch 0.8.5 pypi_0 pypi
[conda] gpytorch 1.10 pypi_0 pypi
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2022.0.1 h06a4308_117
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.20.0 pypi_0 pypi
[conda] numpy-base 1.21.5 py39hf524024_1 anaconda
[conda] pytorch-lightning 2.0.2 pypi_0 pypi
[conda] torch 2.1.0a0+git999abd5 dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchmultimodal-nightly 2023.6.2 pypi_0 pypi
[conda] torchrl 0.1.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] torchx 0.5.0 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @ezyang @eellison @bdhirsh
| 10 |
2,026 | 105,066 |
test_view_dynamic_zero_dim no longer testing zero input
|
module: onnx, triaged
|
### ๐ Describe the bug
https://github.com/pytorch/pytorch/pull/104828#discussion_r1261361082
Filing an issue as requested by @thiagocrepaldi @titaiwangms
I don't think the exported model should work with a zero-size input. The reason is that we 0/1 specialize on tracing: when you trace a model that has size 2, we assume that if you dynamically vary the shape, you won't vary it to 0 or 1. It is strange to have a model that exports and works with either size 0 or size 2.
If there is a more realistic example, we should put it in here instead.
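For illustration, a minimal torch.compile sketch of the 0/1-specialization behaviour described above (this exercises the dynamo path, not the ONNX exporter; the function and shapes are illustrative):
```python
import torch

def f(x):
    return x.view(-1)

compiled = torch.compile(f, dynamic=True)
# Traced with size 2: the first dim is treated as dynamic, but assumed to not be 0 or 1.
print(compiled(torch.randn(2, 3)).shape)
# Calling with size 0 falls outside that assumption; dynamo recompiles/specializes for it.
print(compiled(torch.randn(0, 3)).shape)
```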
### Versions
main
| 2 |
2,027 | 105,062 |
[feature request] make the input k in rot90 a list of int to rotate tensors individually in a batch
|
feature, triaged, module: python frontend
|
### ๐ The feature, motivation and pitch
I'm using torch.rot90(input, k=1, dims=[0, 1]) to rotate a batch of samples. Since k must be an int, all the samples in a batch are rotated by the same angle (they share the same k). What I want is to rotate each sample of the batch separately, because this improves the accuracy of my task.
Is it possible to extend rot90 so that k can be a list of ints, making it possible to rotate the tensors in a batch individually?
### Alternatives
A simple alternative to achieve the function described above is to use the for loop:
```python
import numpy as np
import torch
batch_sample = torch.randn(10, 1, 32, 32, 129)
for i in range(len(batch_sample)):
k = np.random.choice(4, 1, replace=False)[0]
batch_sample[i, :] = torch.rot90(batch_sample[i, :], k=k, dims=[1,2])
```
But using the for loop is slow. So I'm wondering whether rot90 could be extended so that k can be a list of ints, making it faster to rotate the tensors in a batch individually.
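As a rough sketch (not a replacement for the requested feature), one partially vectorized workaround is to group samples by their k and call rot90 once per group:
```python
import torch

batch_sample = torch.randn(10, 1, 32, 32, 129)
ks = torch.randint(0, 4, (batch_sample.shape[0],))  # one rotation count per sample

out = batch_sample.clone()
for k in range(1, 4):  # k == 0 leaves samples unchanged
    idx = (ks == k).nonzero(as_tuple=True)[0]
    if idx.numel():
        # dims shift to [2, 3] because the batch dimension is kept in the slice
        out[idx] = torch.rot90(batch_sample[idx], k=k, dims=[2, 3])
```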
### Additional context
_No response_
cc @albanD
| 4 |
2,028 | 105,061 |
Add more mac messages to setup.py
|
triaged, open source, Stale, topic: not user facing
|
Fixes #105060
| 5 |
2,029 | 105,060 |
extra information messages for mac in setup.py would help.
|
module: build, module: docs, triaged
|
### ๐ Describe the bug
I am following [https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md) on an M1 Mac to set up the development source build.
I can successfully run `python setup.py develop`, but it does not seem to have done anything. Looking at the code in setup.py, I would have expected IS_DARWIN and CMAKE_OSX_ARCHITECTURES to be set appropriately. For me these are not set.
To get this to work I used:
USE_DISTRIBUTED=0 CMAKE_OSX_ARCHITECTURES=arm64 MACOSX_DEPLOYMENT_TARGET=11.0 USE_MKLDNN=OFF USE_QNNPACK=OFF WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py develop
I would have liked setup.py to emit more informational messages. I suggest the following code:
```
# Cross-compile for M1
if IS_DARWIN:
report('-- detected IS_DARWIN')
macos_target_arch = os.getenv('CMAKE_OSX_ARCHITECTURES', '')
if macos_target_arch in ['arm64', 'x86_64']:
report('-- detected macos target architecture ' + macos_target_arch)
macos_sysroot_path = os.getenv('CMAKE_OSX_SYSROOT')
if macos_sysroot_path is None:
macos_sysroot_path = subprocess.check_output([
'xcrun', '--show-sdk-path', '--sdk', 'macosx'
]).decode('utf-8').strip()
extra_compile_args += ['-arch', macos_target_arch, '-isysroot', macos_sysroot_path]
extra_link_args += ['-arch', macos_target_arch]
elif macos_target_arch == '':
report('-- no target architecture found. Please ensure CMAKE_OSX_ARCHITECTURES environment variable is set.')
else:
report('-- unrecognised macos target architecture ' + macos_target_arch + '. CMAKE_OSX_ARCHITECTURES environment variable does not contain a valid target architecture name.')
```
### Versions
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0+gitc03558f
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4.1 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-13.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] torch==2.1.0a0+gitf353d17
[pip3] torchgen==0.0.1
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.1.0a0+gitc03558f dev_0 <develop>
[conda] torchgen 0.0.1 pypi_0 pypi
cc @malfet @seemethere @svekars @carljparker
| 1 |
2,030 | 105,058 |
Support Delay Loading of c10.dll when using libtorch as a third-party library.
|
module: windows, module: abi, triaged
|
### ๐ The feature, motivation and pitch
Hi PyTorch team,
I'm currently working on a project where I have added libtorch as a third-party dependency. Due to the size of libtorch, I wanted to make it an optional dependency using delay-load hooks. However, I've encountered an issue with c10.dll, which is a dependency of libtorch: c10.dll exports a global variable, which means it cannot be delay-loaded.
Here's the error message I encountered:

I'm wondering if it's possible to add support for delay loading c10.dll in libtorch. Having this feature would be really helpful for my project, as it will allow me to better manage the dependency loading and reduce the initial load time.
### Alternatives
_No response_
### Additional context
_No response_
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite
| 0 |
2,031 | 105,053 |
Multiple dimensions support for `torch.max`
|
feature, triaged, module: numpy, needs design, module: python frontend
|
### ๐ The feature, motivation and pitch
NumPy's `np.max` supports reducing over multiple dimensions at once, but `torch.max` does not.
To keep the two in sync and avoid implementing a custom function like #105045, it would be really helpful to have this.
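For reference, a small sketch of the gap (and of `torch.amax`, which already accepts a tuple of dims but only returns values, not indices):
```python
import numpy as np
import torch

x = np.random.rand(2, 3, 4)
np.max(x, axis=(0, 2))      # NumPy reduces over several axes at once -> shape (3,)

t = torch.from_numpy(x)
# torch.max(t, dim=(0, 2)) is what this request asks for; today dim must be a single int.
torch.amax(t, dim=(0, 2))   # values-only multi-dim reduction that already exists
```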
### Alternatives
#105045
### Additional context
@davidradl
cc @mruberry @rgommers @albanD
| 10 |
2,032 | 105,042 |
`assert has_same_metadata(inpt_new, inpt_old)` fails when capturing forwards + backwards in train_step with resnet18
|
triaged, oncall: pt2, module: functorch
|
### ๐ Describe the bug
```
import torch
from torchvision.models import resnet18
torch.set_default_device('cuda')
model = resnet18()
optim = torch.optim.Adam(model.parameters(), lr=0.01)
inp = torch.randn(1, 3, 224, 224)
@torch.compile
def train_step():
model(inp).sum().backward()
optim.step()
train_step()
train_step()
```
```
File "/scratch/chilli/fresh/pytorch/torch/fx/_symbolic_trace.py", line 817, in trace
(self.create_arg(fn(*args)),),
File "/scratch/chilli/fresh/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/scratch/chilli/fresh/pytorch/torch/_functorch/aot_autograd.py", line 1363, in fwd_helper
return functionalized_f_helper(*args)
File "/scratch/chilli/fresh/pytorch/torch/_functorch/aot_autograd.py", line 1352, in functionalized_f_helper
assert has_same_metadata(inpt_new, inpt_old)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError:
```
### Versions
N/A
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @samdow @kshitij12345 @janeyx99
| 1 |
2,033 | 105,024 |
DISABLED test_homogeneous_attributes (__main__.TestFSDPMiscMultiThread)
|
oncall: distributed, module: flaky-tests, skipped, module: fsdp
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_homogeneous_attributes&suite=TestFSDPMiscMultiThread) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14960856221).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_homogeneous_attributes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/fsdp/test_fsdp_misc.py`
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
| 7 |
2,034 | 105,013 |
DISABLED test_compile_vmap_hessian_cuda (__main__.TestCompileTransformsCUDA)
|
module: rocm, triaged, skipped
|
Platforms: rocm
Disabling this unit test since it showed up in our local CI testing for ROCm5.6 CI upgrade PR: https://github.com/pytorch/pytorch/pull/103092
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @BLOrange-AMD
| 3 |
2,035 | 105,003 |
Move tools/autograd to torchgen/autograd
|
fb-exported, Stale, release notes: releng, suppress-bc-linter
|
Summary:
Performed with:
```
hg move tools/autograd torchgen/autograd
sed -i 's$tools/autograd$torchgen/autograd$g' $(find . -type f)
sed -i 's$tools.autograd$torchgen.autograd$g' $(find . -type f)
# + some manual modification to tools/setup_helpers/generate_code.py and BUCK files
```
Test Plan: Existing tests
Differential Revision: D47375735
| 14 |
2,036 | 104,998 |
[export] tensor creation ops burn in device
|
triaged, oncall: pt2, module: export
|
```
import torch
import torch._export
def foo(x):
return x + torch.ones(2, 2)
e = torch._export.export(foo, (torch.ones(2, 2),))
print(e.graph_module.graph)
```
produces:
```
graph():
%arg0_1 : [num_users=3] = placeholder[target=arg0_1]
%sym_size_int : [num_users=1] = call_function[target=torch.ops.aten.sym_size.int](args = (%arg0_1, 0), kwargs = {})
%sym_size_int_1 : [num_users=1] = call_function[target=torch.ops.aten.sym_size.int](args = (%arg0_1, 1), kwargs = {})
%eq : [num_users=1] = call_function[target=operator.eq](args = (%sym_size_int_1, 2), kwargs = {})
%scalar_tensor_default : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (%eq,), kwargs = {})
%_assert_async_msg : [num_users=0] = call_function[target=torch.ops.aten._assert_async.msg](args = (%scalar_tensor_default, Input arg0_1.shape[1] is specialized at 2), kwargs = {})
%eq_1 : [num_users=1] = call_function[target=operator.eq](args = (%sym_size_int, 2), kwargs = {})
%scalar_tensor_default_1 : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (%eq_1,), kwargs = {})
%_assert_async_msg_1 : [num_users=0] = call_function[target=torch.ops.aten._assert_async.msg](args = (%scalar_tensor_default_1, Input arg0_1.shape[0] is specialized at 2), kwargs = {})
%full_default : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([2, 2], 1), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%add_tensor : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg0_1, %full_default), kwargs = {})
return (add_tensor,)
```
Note the following line burns in `device=cpu`.
```
%full_default : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([2, 2], 1), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
```
These kwargs are optional; I'd like for them to not show up in the graph if the user didn't specify them.
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
2,037 | 104,981 |
NotImplementedError: Could not run 'aten::_spdiags' with arguments from the 'CUDA' backend.
|
module: sparse, module: cuda, triaged
|
### ๐ Describe the bug
```python
import torch

n = 8                           # illustrative size
device = torch.device("cuda")

w = torch.ones([n, 1]).T
# when the device is GPU, the following line raises the error below;
# when the device is CPU, the same call works correctly
W = torch.sparse.spdiags(w.to(device), torch.LongTensor([0]).to(device), (n, n)).to(device)
```
```
W = torch.sparse.spdiags(w, torch.LongTensor([0]), (n, n))
NotImplementedError: Could not run 'aten::_spdiags' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_spdiags' is only available for these backends: [CPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradHIP: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradMPS: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradIPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradVE: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradMeta: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradMTIA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:16061 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:14198 [kernel]
AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ../aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ../aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ../aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ../aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
```
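A possible workaround sketch (the value of `n` is illustrative): build the sparse diagonal matrix on CPU, where `aten::_spdiags` is implemented, and then move the result to the GPU:
```python
import torch

n = 8
device = torch.device("cuda")

w = torch.ones(1, n)
offsets = torch.tensor([0])
# spdiags runs on CPU; the resulting sparse tensor can then be moved to CUDA
W = torch.sparse.spdiags(w, offsets, (n, n)).to(device)
```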
### Versions
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 525.89.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz
Stepping: 7
Frequency boost: enabled
CPU MHz: 3100.076
CPU max MHz: 2501.0000
CPU min MHz: 1000.0000
BogoMIPS: 5000.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.0.0+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] blas 1.0 mkl https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] mkl 2021.4.0 h06a4308_640 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py38h7f8727e_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.1 py38hd3c417c_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.2 py38h51133e4_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] numpy 1.24.2 pypi_0 pypi
[conda] numpy-base 1.22.3 py38hf524024_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] torch 2.0.0+cu118 pypi_0 pypi
[conda] torchvision 0.15.1+cu118 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @ptrblck
| 3 |
2,038 | 104,974 |
[not ready for review yet] torch.compile support for SparseSemiStructuredTensor
|
release notes: sparse
|
cc @jcaip, this is an E2E test of compiling a small model with a `SparseSemiStructuredTensor` subclass tensor used as one of the parameters.
The generated inductor code looks like this (P788647425): you can see that the subclass desugars the matmul into a sparse_mm() + contiguous() call, and inductor is able to fuse the contiguous() call into the relu() that follows it.
A few things to note:
(1) the test actually... fails. I haven't figured out why, but the results don't match the non-sparse version. FWIW, the code that inductor outputs looks reasonable, so it's not immediately clear if it's a compile() bug, or something to do with sparsity giving less accurate results (or both).
(2) inference mode is a bit broken with torch.compile still: in the test, I needed to make sure that the model was instantiated, compiled and run all inside of inference_mode.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104974
* #111011
| 1 |
2,039 | 104,962 |
Add a diagram showing the code structure to CONTRIBUTING.md
|
module: docs, triaged
|
### ๐ The doc issue
Add a diagram showing the code structure to CONTRIBUTING.md
### Suggest a potential alternative/fix
I will supply a diagram showing the simplified important parts of the code structure. I will put it in the top level folder - if it should be moved to a sub folder please let me know.
cc @svekars @carljparker
| 0 |
2,040 | 104,959 |
Saving a LightningModule torch.jit.ScriptModule is incompatible with torch.amp.autocast
|
oncall: jit, has workaround, module: amp (automated mixed precision)
|
### ๐ Describe the bug
I am training a model with PyTorch Lightning using mixed precision. I have a region of the network that requires the dynamic range of float32, so I am using `with torch.amp.autocast(enabled=False, device_type="cuda"):` to disable casting to lower precision. At the end of training, I save the nn.Module to TorchScript format using `torch.jit.script`. However, I then encounter an error when running the scripted module. This issue goes away when I remove the context manager. The issue also doesn't occur when using tracing instead of scripting; however, we need to use scripting due to latency constraints. Please see the full stack trace and a minimal example to reproduce below.
```python
import pytorch_lightning as pl
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchdata.datapipes.iter import IterableWrapper
class SimpleModel(nn.Module):
def __init__(self, input_dim, hidden_dim=128, output_dim=2):
super().__init__()
self.layer1 = nn.Linear(input_dim, hidden_dim)
self.layer2 = nn.Linear(hidden_dim, output_dim)
def forward(self, inputs):
with torch.amp.autocast(enabled=False, device_type="cuda"):
x = self.layer1(inputs)
x = self.layer2(x)
return x
class SimpleTrainingModule(pl.LightningModule):
def __init__(self, model):
super().__init__()
self.model = model
self.loss = torch.nn.BCEWithLogitsLoss()
def forward(self, inputs):
return self.model(inputs)
def training_step(self, inputs):
input_tensor, target = inputs
outputs = self.forward(input_tensor)
loss = self.loss(outputs, target)
return { "loss": loss }
def configure_optimizers(self):
return torch.optim.AdamW(self.model.parameters())
def data_generator(input_dim=4):
for _ in range(20):
yield torch.rand((1, input_dim), dtype=torch.float32), torch.empty((1, 2), dtype=torch.float32).random_(1)
class RandomDrivingDataModule(pl.LightningDataModule):
def __init__(
self,
input_dim: int,
batch_size: int,
) -> None:
super().__init__()
# no overhead in generating the datapipe here
self.input_dim = input_dim
self.batch_size = batch_size
self.dp = IterableWrapper(data_generator(self.input_dim))
def train_dataloader(self) -> DataLoader:
return DataLoader(self.dp, batch_size=None, num_workers=1, pin_memory=True)
def val_dataloader(self) -> DataLoader:
return DataLoader(self.dp, batch_size=None, num_workers=1, pin_memory=True)
if __name__ == '__main__':
trainer = pl.Trainer(
accelerator="gpu",
max_steps=10,
devices=1,
)
datamodule = RandomDrivingDataModule(input_dim=4, batch_size=1)
training_module = SimpleTrainingModule(SimpleModel(input_dim=4))
trainer.fit(training_module, datamodule)
# trace model
example_input, _ = next(data_generator())
scripted_module: torch.jit.ScriptModule = torch.jit.script(training_module)
# scripted_module: torch.jit.ScriptModule = torch.jit.trace(training_module, example_inputs=example_input) <- This works!
scripted_module(example_input)
```
The traceback I get is:
```python
Traceback (most recent call last):
File "exception_example.py", line 83, in <module>
saved_model(example_input)
File "/home/sasha/si-venv-v2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
RuntimeError: isInt() INTERNAL ASSERT FAILED at "../aten/src/ATen/core/ivalue.h":626, please report a bug to PyTorch.
```
### Versions
Versions of relevant libraries:
[pip3] mypy-protobuf==3.3.0
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.0.1
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.10.0
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mcarilli @ptrblck @leslie-fang-intel
| 2 |
2,041 | 104,957 |
fix Type Hinting Annotations
|
open source, Stale, release notes: optimizer
|
Fixes #100804
| 21 |
2,042 | 104,954 |
RuntimeError: DataLoader worker (pid(s) 9036, 10492) exited unexpectedly
|
module: windows, module: dataloader, triaged
|
### ๐ Describe the bug
I installed PyTorch 2.0.1+cu118 (CUDA build). While training a YOLOv8 model I get the error below when training on GPU; on CPU it works fine.
```
model = YOLO()
model.train(data="coco.yaml" )
```
```
Ultralytics YOLOv8.0.132 Python-3.9.13 torch-2.0.1+cu118 CUDA:0 (NVIDIA GeForce RTX 3050 Laptop GPU, 4096MiB)
yolo\engine\trainer: task=detect, mode=train, model=yolov8n.pt, data=coco.yaml, epochs=100, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs\detect\train25
from n params module arguments
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
10 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
11 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
12 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
15 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
16 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
17 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
18 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
19 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
20 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]
21 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
22 [15, 18, 21] 1 897664 ultralytics.nn.modules.head.Detect [80, [64, 128, 256]]
Model summary: 225 layers, 3157200 parameters, 3157184 gradients
Transferred 355/355 items from pretrained weights
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
AMP: checks passed
train: Scanning C:\Users\vu1ad\Desktop\ReserachPlan\COCO-Dataset\train\labels.cache... 105 images, 3 backgrounds, 0 corrupt: 100%|โโโโโโโโโโ| 105/105 [00:00<?, ?it/s]
val: Scanning C:\Users\vu1ad\Desktop\ReserachPlan\COCO-Dataset\valid\labels.cache... 50 images, 0 backgrounds, 0 corrupt: 100%|โโโโโโโโโโ| 50/50 [00:00<?, ?it/s]
Plotting labels to runs\detect\train25\labels.jpg...
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\train25
Starting training for 100 epochs...
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
0%| | 0/7 [00:10<?, ?it/s]
```
```
---------------------------------------------------------------------------
Empty Traceback (most recent call last)
~\anaconda3\lib\site-packages\torch\utils\data\dataloader.py in _try_get_data(self, timeout)
1131 try:
-> 1132 data = self._data_queue.get(timeout=timeout)
1133 return (True, data)
~\anaconda3\lib\queue.py in get(self, block, timeout)
178 if remaining <= 0.0:
--> 179 raise Empty
180 self.not_empty.wait(remaining)
Empty:
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_5500\2453824846.py in <module>
1 model = YOLO()
2
----> 3 model.train(data="coco.yaml" )
~\anaconda3\lib\site-packages\ultralytics\yolo\engine\model.py in train(self, **kwargs)
371 self.model = self.trainer.model
372 self.trainer.hub_session = self.session # attach optional HUB session
--> 373 self.trainer.train()
374 # Update model and cfg after training
375 if RANK in (-1, 0):
~\anaconda3\lib\site-packages\ultralytics\yolo\engine\trainer.py in train(self)
190 ddp_cleanup(self, str(file))
191 else:
--> 192 self._do_train(world_size)
193
194 def _setup_ddp(self, world_size):
~\anaconda3\lib\site-packages\ultralytics\yolo\engine\trainer.py in _do_train(self, world_size)
313 self.tloss = None
314 self.optimizer.zero_grad()
--> 315 for i, batch in pbar:
316 self.run_callbacks('on_train_batch_start')
317 # Warmup
~\anaconda3\lib\site-packages\tqdm\std.py in __iter__(self)
1193
1194 try:
-> 1195 for obj in iterable:
1196 yield obj
1197 # Update and possibly print the progressbar.
~\anaconda3\lib\site-packages\ultralytics\yolo\data\build.py in __iter__(self)
36 """Creates a sampler that repeats indefinitely."""
37 for _ in range(len(self)):
---> 38 yield next(self.iterator)
39
40 def reset(self):
~\anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
631 # TODO(https://github.com/pytorch/pytorch/issues/76750)
632 self._reset() # type: ignore[call-arg]
--> 633 data = self._next_data()
634 self._num_yielded += 1
635 if self._dataset_kind == _DatasetKind.Iterable and \
~\anaconda3\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
1326
1327 assert not self._shutdown and self._tasks_outstanding > 0
-> 1328 idx, data = self._get_data()
1329 self._tasks_outstanding -= 1
1330 if self._dataset_kind == _DatasetKind.Iterable:
~\anaconda3\lib\site-packages\torch\utils\data\dataloader.py in _get_data(self)
1282 elif self._pin_memory:
1283 while self._pin_memory_thread.is_alive():
-> 1284 success, data = self._try_get_data()
1285 if success:
1286 return data
~\anaconda3\lib\site-packages\torch\utils\data\dataloader.py in _try_get_data(self, timeout)
1143 if len(failed_workers) > 0:
1144 pids_str = ', '.join(str(w.pid) for w in failed_workers)
-> 1145 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
1146 if isinstance(e, queue.Empty):
1147 return (False, None)
RuntimeError: DataLoader worker (pid(s) 7744, 14724, 8752, 6428, 6432) exited unexpectedly
```
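Not a fix for the underlying crash, but a common mitigation on Windows is to reduce or disable the DataLoader worker processes; with the Ultralytics API that would look roughly like this (`workers` is the same argument shown in the trainer config above):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# workers=0 keeps data loading in the main process, avoiding the multiprocessing
# worker crash; it is slower but useful to confirm the GPU training path itself works.
model.train(data="coco.yaml", workers=0)
```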
### Versions
I installed PyTorch 2.0.1+cu118 (CUDA build). The error occurs while training the YOLOv8 model on GPU; on CPU it works fine.
Model Details:
Ultralytics YOLOv8.0.132 Python-3.9.13 torch-2.0.1+cu118 CUDA:0 (NVIDIA GeForce RTX 3050 Laptop GPU, 4096MiB)
yolo\engine\trainer: task=detect, mode=train, model=yolov8n.pt, data=coco.yaml, epochs=100, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs\detect\train25
Errors are:
```
Empty:
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_5500\2453824846.py in <module>
RuntimeError: DataLoader worker (pid(s) 7744, 14724, 8752, 6428, 6432) exited unexpectedly
```
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @vladimir-aubrecht @iremyux @Blackhex @cristianPanaite @SsnL @VitalyFedyunin @ejguan @dzhulgakov
| 2 |
2,043 | 104,952 |
[Inductor] [CPU] performance regression with TORCHINDUCTOR_FREEZING=1
|
triaged, oncall: pt2, module: cpu inductor
|
### ๐ Describe the bug
There are 6 performance regressions from https://github.com/pytorch/pytorch/issues/93531#issuecomment-1630223790
| | 2023-07-09 nightly | | | | 2023-07-06 nightly | | | | Result Comp | | |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| model | batch_size | speedup | inductor | eager | batch_size | speedup | inductor | eager | speedup ratio | eager ratio | inductor ratio |
|Background_Matting |1 |0.782193| 0.350721461| 0.274331872| 1 |1.00737 |0.279395653 |0.281454799 |0.78 |1.03 |0.8|
|doctr_det_predictor |1 |1.090279| 0.148405348| 0.161803234| 1 |1.713053 |0.095406578 |0.163436525 |0.64 |1.01 |0.64|
|functorch_dp_cifar10 |64 |0.622732| 0.009190167| 0.005723011| 64 |1.008348 |0.005596095 |0.005642811 |0.62 |0.99| 0.61|
|gmlp_s16_224 |128 |1.068468| 0.658434753| 0.703516464| 128 |1.227975 |0.587295424 |0.721184098 |0.87| 1.03 |0.89|
|resmlp_12_224 |128 |0.749039| 0.415152565| 0.310965462| 128 |1.237528 |0.259625741 |0.321294124 |0.61| 1.03 |0.63|
|tnt_s_patch16_224| 1 |1.173545 |0.094649226 |0.111075126 |1 |1.367958 |0.081854891 |0.111974053 |0.86 |1.01 |0.86|
SW information:
SW | Nightly commit | Main commit
-- | -- | --
Pytorch|[9b5a84f](https://github.com/pytorch/pytorch/commit/9b5a84f)|[dd6c38c](https://github.com/pytorch/pytorch/commit/dd6c38c)
Torchbench|/|[8526eabb](https://github.com/pytorch/benchmark/commit/8526eabb)
torchaudio|[a233cc1](https://github.com/pytorch/audio/commit/a233cc1)|[1e117f5](https://github.com/pytorch/audio/commit/1e117f5)
torchtext|[90ea46c](https://github.com/pytorch/text/commit/90ea46c)| [8546bbb](https://github.com/pytorch/text/commit/8546bbb)
torchvision|[2ab2f74](https://github.com/pytorch/vision/commit/2ab2f74)|[657027f](https://github.com/pytorch/vision/commit/657027f)
torchdata|[9ed0325](https://github.com/pytorch/data/commit/9ed0325)|[901b483](https://github.com/pytorch/data/commit/901b483)
dynamo_benchmarks|[6226b7d](https://github.com/pytorch/pytorch/commit/6226b7d)|/
### Versions
```bash
export LD_PRELOAD=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}/lib/libiomp5.so:${CONDA_PREFIX:-"$(dirname $(which conda))/../"}/lib/libjemalloc.so
export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:-1,muzzy_decay_ms:-1"
export TORCHINDUCTOR_FREEZING=1
CORES=$(lscpu | grep Core | awk '{print $4}')
export OMP_NUM_THREADS=$CORES
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/torchbench.py --inference --inference --float32 -dcpu -n50 --inductor --no-skip --dashboard --only Background_Matting --cold_start_latency
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/torchbench.py --inference --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only doctr_det_predictor --cold_start_latency
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/torchbench.py --inference --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only functorch_dp_cifar10 --cold_start_latency
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/torchbench.py --inference --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only gmlp_s16_224 --cold_start_latency
python -m torch.backends.xeon.run_cpu --node_id 0 benchmarks/dynamo/timm_models.py --inference --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only resmlp_12_224 --cold_start_latency
python -m torch.backends.xeon.run_cpu --core_list 0 --ncores_per_instance 1 benchmarks/dynamo/timm_models.py --inference --performance --float32 -dcpu -n50 --inductor --no-skip --dashboard --only tnt_s_patch16_224 --cold_start_latency
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 0 |
2,044 | 104,949 |
ONNX export process fails to preserve the specified input_names
|
module: onnx, triaged
|
### ๐ Describe the bug
When I use torch.onnx.export to produce an ONNX-format model, the resulting ONNX model, when loaded with onnxruntime, fails to keep the input names identical to the ones I specified.
eg.
```
import torch
import torch.nn as nn
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.layer = nn.Linear(10, 1)

    def forward(self, input_a, input_b=None):  # input_b is not actually used in the forward computation
        return self.layer(input_a)

# Here we export the trained model! The file "MyModel_checkpoint_file.pt" is the training result
my_model = torch.load("MyModel_checkpoint_file.pt")
my_model.eval()
input_names = ["input_a", "input_b"]
output_names = ["output"]
fake_input_a = torch.rand(10)
fake_input_b = torch.rand(10)
with torch.no_grad():
    torch.onnx.export(
        model=my_model,
        args=(fake_input_a, fake_input_b),
        f="result.onnx",
        input_names=input_names,
        output_names=output_names,
    )
```
Here I got a "result.onnxโ model file, I specified that it should take two inputs, names "input_a", and "input_b"; although in fect the parameter "input_b" is not really used in the graph.
But when I load the onnx model with onnxruntime
```
import onnxruntime
session = onnxruntime.InferenceSession("result.onnx", providers=["CPUExecutionProvider"])
for input in session.get_inputs():
    print(input.name)
```
the only input name printed is "input_a"; there is no "input_b" at all.
So there is an inconsistency between the export and the inference.
I do not know the details of the ONNX conversion; I guess the process removes input items that are not actually used in the final ONNX graph. In my view, however, the input_names should be kept exactly as I specified, even when an input is unused, so that the inference interface of the software project using this model stays fixed as I designed it.
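A possible workaround on the model side (just a sketch, not verified across opsets): make the unused input participate in the graph in a way that cannot change the output, so the exporter has no reason to prune it.
```python
import torch
import torch.nn as nn

class MyModelKeepInputs(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 1)

    def forward(self, input_a, input_b=None):
        out = self.layer(input_a)
        if input_b is not None:
            # Multiply by zero so input_b is consumed by the traced graph without
            # affecting the result; the exporter then keeps the "input_b" input.
            out = out + 0.0 * input_b.sum()
        return out
```
With this variant, both "input_a" and "input_b" should appear in `session.get_inputs()`.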
### Versions
python == 3.8.5
torch == 1.10.2+cu111
onnxruntime == 1.14.1
| 0 |
2,045 | 104,943 |
[torch.compile] RuntimeError during Gradient Computation in torch.compile()
|
triaged, has workaround, oncall: pt2, module: functorch, module: aotdispatch
|
### ๐ Describe the bug
I have encountered a RuntimeError related to gradient computation when using torch.compile() with a model that includes nn.ELU layers with inplace=True. The error occurs when the forward pass of the model is invoked and states that a variable needed for gradient computation has been modified by an inplace operation.
This issue does not occur when executing the model's forward pass without torch.compile(). It seems to be specifically related to the combination of torch.compile() and inplace operations in the model.
The error message is as follows:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [16, 16]], which is output 0 of EluBackward1, is at version 2; expected version 1 instead.
You can reproduce the behavior with the following code snippet:
```python
import torch
from torch import nn
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        # Define the layers in the model
        self.fc = nn.Linear(in_features=16, out_features=16, bias=True)
        self.elu1 = nn.ELU(alpha=1, inplace=True)
        self.elu2 = nn.ELU(alpha=1, inplace=True)

    def forward(self, x):
        # Define the forward pass
        x = self.fc(x)
        x = self.elu1(x)
        x = self.elu2(x)
        return x

md = MyModel()
ip = torch.rand([16, 16])

md(ip)  # No error
torch.compile(md)(ip)  # RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [16, 16]]
```
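For what it's worth, a minimal sketch of the workaround hinted at by the `has workaround` label (an assumption on my side: the in-place activations are not essential here). Dropping `inplace=True` makes the compiled model run:
```python
import torch
from torch import nn

class MyModelNoInplace(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 16, bias=True)
        # Same architecture, but without in-place activations.
        self.elu1 = nn.ELU(alpha=1, inplace=False)
        self.elu2 = nn.ELU(alpha=1, inplace=False)

    def forward(self, x):
        return self.elu2(self.elu1(self.fc(x)))

md = MyModelNoInplace()
ip = torch.rand([16, 16])
torch.compile(md)(ip)  # runs without the RuntimeError in this setup
```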
### Versions
PyTorch version: 2.1.0.dev20230622+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 12.0.0-3ubuntu1~20.04.5
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2070
GPU 1: NVIDIA GeForce RTX 2070
GPU 2: NVIDIA GeForce RTX 2070
GPU 3: NVIDIA GeForce RTX 2070
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 1224.656
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4794.39
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230622+cu118
[pip3] torchaudio==2.1.0.dev20230622+cu118
[pip3] torchvision==0.16.0.dev20230622+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230622+cu118 pypi_0 pypi
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @Chillee @samdow @kshitij12345 @janeyx99
| 3 |
2,046 | 104,935 |
torch version compare
|
triaged, module: python frontend
|
### ๐ Describe the bug
When I install PyTorch from source, I get torch version '2.0.0a0+gitc263bd4', but 'torch.torch_version.TorchVersion('2.0.0a0+gitc263bd4') >= (2, 0, 0)' returns False. Torch installed from a whl shows version '2.0.0+cpu', and
'torch.torch_version.TorchVersion('2.0.0+cpu') >= (2, 0, 0)' is correct.
```
import torch
from torch.torch_version import TorchVersion
torch.torch_version.TorchVersion('2.0.0a0+gitc263bd4') >= (2, 0, 0)
```
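For what it's worth, '2.0.0a0' is a PEP 440 pre-release identifier, so comparing lower than (2, 0, 0) may be the intended packaging semantics rather than a TorchVersion bug. A sketch of a check that ignores the pre-release and local segments (assuming the third-party `packaging` module is available):
```python
import torch
from packaging import version

v = version.parse(torch.__version__)  # e.g. '2.0.0a0+gitc263bd4'
print(v.release >= (2, 0, 0))         # compares only the numeric release tuple, so this is True
```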
### Versions
2.0.0a0+gitc263bd4
cc @albanD
| 1 |
2,047 | 104,925 |
Unnecessary record_stream call for backend:cudaMallocAsync
|
module: cuda, module: logging, triaged
|
### ๐ Describe the bug
When setting `os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"`, I now see this warning printed 20x per backward pass (I'm accumulating gradients for 3 batches, and it prints 20x every third batch, so it does not appear in the forward pass, AFAICT):
```[W CUDAMallocAsyncAllocator.cpp:590] Warning: Called record_stream on tensor whose original creation stream matches the recorded stream. This is unnecessary and has no effect. (function recordStream) ```
1. Any ideas on how to suppress this warning so it only prints once, or never?
2. Is it possible to resolve this by removing these unnecessary `record_stream` calls when using `cudaMallocAsync`?
I don't yet have a minimal example, but I hope this might make sense to someone. It might be relevant to @eqy (thanks for your work on this new backend!)
### Versions
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 10.4.0-4ubuntu1~22.04) 10.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA TITAN RTX
GPU 1: NVIDIA TITAN RTX
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5879.8818
CPU min MHz: 3000.0000
BogoMIPS: 8983.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.2
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[pip3] vector-quantize-pytorch==1.6.18
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-lightning 2.0.2 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
[conda] vector-quantize-pytorch 1.6.18 pypi_0 pypi
cc @ptrblck
| 3 |
2,048 | 104,919 |
fix for documentation links
|
triaged, open source, topic: docs, release notes: dynamo
|
Fixes #103276
| 3 |
2,049 | 104,913 |
StableDiffusion with dynamic=True still recompiles
|
triaged, oncall: pt2, module: dynamic shapes
|
@ezyang It runs to completion, but still takes a long time when the input shape changes. I have already upgraded pytorch to `2.1.0.dev20230706`.
```python
from diffusers import StableDiffusionPipeline
import torch
import torch._dynamo
import datetime
torch._dynamo.config.dynamic_shapes = True
torch._dynamo.config.automatic_dynamic_shapes = True
torch._dynamo.config.assume_static_by_default = True
pipe = StableDiffusionPipeline.from_pretrained(
"SG161222/Realistic_Vision_V2.0", torch_dtype=torch.float16
)
pipe.to("cuda:0")
pipe.unet = torch.compile(pipe.unet)
start = datetime.datetime.now()
pipe(prompt="prompt", height=512, width=512)
first_done = datetime.datetime.now()
pipe(prompt="prompt", height=768, width=768)
second_done = datetime.datetime.now()
print("first elapsed:", first_done - start)
print("second elapsed:", second_done - first_done)
```
Log:
```
0%| | 0/50 [00:00<?, ?it/s]Using FallbackKernel: aten._scaled_dot_product_efficient_attention
reduction over non-contiguous dims  (repeated 26 times)
100%|โโโโโโโโโโ| 50/50 [01:14<00:00, 1.49s/it]
0%| | 0/50 [00:00<?, ?it/s]Using FallbackKernel: aten._scaled_dot_product_efficient_attention
reduction over non-contiguous dims  (repeated 36 times)
100%|โโโโโโโโโโ| 50/50 [00:51<00:00, 1.04s/it]
first elapsed: 0:01:15.543955
second elapsed: 0:00:52.173735
```
Not sure if this `reduction over non-contiguous dims` warning indicates anything?
_Originally posted by @sunhs in https://github.com/pytorch/pytorch/issues/103587#issuecomment-1625324835_
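In case it helps narrow this down, one more thing worth trying (an assumption on my side, not verified on this nightly): passing `dynamic=True` to `torch.compile` directly instead of relying only on the `torch._dynamo.config` flags:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V2.0", torch_dtype=torch.float16
)
pipe.to("cuda:0")
# Hypothetical variant: request dynamic shapes at compile time rather than
# only through the torch._dynamo.config flags used above.
pipe.unet = torch.compile(pipe.unet, dynamic=True)
```
It may still recompile for different image sizes, but it would isolate whether the config-flag path is the problem.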
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 3 |
2,050 | 104,906 |
torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported: for dgl.nn.HeteroGraphConv()
|
oncall: jit
|
### ๐ The feature, motivation and pitch
Hello,
I get this error once I call the following line after my model is trained.
```
model_scripted = torch.jit.script(model) # Export to TorchScript
model_scripted.save('model_scripted.pt') # Save
```
My model has a HeteroGraphConv layer that uses dictionary-based arguments (two node types). However, TorchScript has limitations compared to dynamic Python execution, and keyword-argument expansion is one of the features not supported in TorchScript.
Not sure if this is a bug or a feature to be developed, but the NotSupportedError made me think it's a feature request.
Thanks!
### Alternatives
_No response_
### Additional context
```
torch.jit.frontend.NotSupportedError: keyword-arg expansion is not supported:
File "../opt/anaconda3/lib/python3.9/site-packages/dgl/nn/pytorch/hetero.py", line 178
(src_inputs[stype], dst_inputs[dtype]),
*mod_args.get(etype, ()),
**mod_kwargs.get(etype, {}))
~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
outputs[dtype].append(dstdata)
else:
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 3 |
2,051 | 104,903 |
Errors when converting LLaMA to ONNX using dynamo export
|
module: onnx, triaged
|
### ๐ Describe the bug
While exporting LLaMA from PyTorch to ONNX using the dynamo exporter, the following error occurs.
```
While executing %full : [num_users=2] = call_function[target=torch.full](args = ((8, 8), %tensor), kwargs = {device: cpu})
Original traceback:
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 688, in forward
outputs = self.model(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1514, in _call_impl
return forward_call(*args, **kwargs)
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 537, in forward
attention_mask = self._prepare_decoder_attention_mask(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 465, in _prepare_decoder_attention_mask
combined_attention_mask = _make_causal_mask(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 49, in _make_causal_mask
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py", line 590, in dynamo_export
return Exporter(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py", line 455, in export
graph_module = pre_export_passes(self.options, graph_module, updated_model_args)
File "<@beartype(torch.onnx._internal.exporter.pre_export_passes) at 0x7f096de17af0>", line 72, in pre_export_passes
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py", line 622, in pre_export_passes
module = passes.Decompose(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 142, in wrapper
ctx.log_and_raise_if_error(diag)
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 265, in log_and_raise_if_error
raise RuntimeErrorWithDiagnostic(
torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Running Decompose pass. Raised from:
DataDependentOutputException: aten._local_scalar_dense.default
```
Here is the code used to export LLaMA.
```
import torch
from transformers import LlamaForCausalLM
llama = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
batch_size, seq_len = 2, 8
input_ids = torch.randint(low=0, high=6, size=(batch_size, seq_len), dtype=torch.int64)
attn_mask = torch.randint(low=0, high=2, size=(batch_size, seq_len), dtype=torch.int64)
torch.onnx.dynamo_export(
llama,
input_ids,
attn_mask,
).save("llama-7b-dynamo.onnx")
```
This error appears to arise from how `torch.full` is used. The [documentation](https://pytorch.org/docs/stable/generated/torch.full.html) says that `fill_value` should be a scalar value. Hugging Face's [implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L49) defines `fill_value` as `torch.tensor(...)` though.
```
# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
):
"""
Make causal mask used for bi-directional self-attention.
"""
bsz, tgt_len = input_ids_shape
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
...
```
After trying a quick workaround to change `mask` to `torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)`, another error occurs.
```
[2023-07-10 19:36:59,057] torch.onnx: [ERROR] Cannot find symbolic function for aten::index.Tensor, which should be registered under aten.index.Tensor.
[2023-07-10 19:36:59,058] torch.onnx: [ERROR] None
[2023-07-10 19:36:59,059] torch.onnx: [ERROR] Cannot find symbolic function for aten::index.Tensor, which should be registered under aten.index.Tensor.
[2023-07-10 19:36:59,060] torch.onnx: [ERROR] None
...
[2023-07-10 19:36:59,100] torch.onnx: [ERROR] Cannot find symbolic function for aten::index.Tensor, which should be registered under aten.index.Tensor.
[2023-07-10 19:36:59,100] torch.onnx: [ERROR] None
[2023-07-10 19:36:59,100] torch.onnx: [ERROR] Cannot find symbolic function for aten::index.Tensor, which should be registered under aten.index.Tensor.
[2023-07-10 19:36:59,100] torch.onnx: [ERROR] None
[2023-07-10 19:36:59,101] torch.onnx: [ERROR] Unsupported FX nodes: {'call_function': ['aten.index.Tensor']}.
[2023-07-10 19:36:59,101] torch.onnx: [ERROR] None
Traceback (most recent call last):
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py", line 590, in dynamo_export
return Exporter(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py", line 455, in export
graph_module = pre_export_passes(self.options, graph_module, updated_model_args)
File "<@beartype(torch.onnx._internal.exporter.pre_export_passes) at 0x7f23bf50baf0>", line 72, in pre_export_passes
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py", line 646, in pre_export_passes
analysis.UnsupportedFxNodesAnalysis(
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/fx/analysis/unsupported_nodes.py", line 83, in analyze
self._lint(analysis_result, diagnostic_level)
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/fx/analysis/unsupported_nodes.py", line 37, in _lint
self.diagnostic_context.log_and_raise_if_error(diagnostic)
File "/home/kvaishnavi/.local/lib/python3.8/site-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 265, in log_and_raise_if_error
raise RuntimeErrorWithDiagnostic(
torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Unsupported FX nodes: {'call_function': ['aten.index.Tensor']}.
```
Note: There was an initial error with `tabulate` that I resolved with [this PR](https://github.com/pytorch/pytorch/pull/104468). The PR change is in my version of torch.
### Versions
Torch: v2.1.0.dev20230630+cu118
Transformers: v4.30.0
| 5 |
2,052 | 104,899 |
Refactor Adam and AdamW by abstracting out common code
|
module: optimizer, triaged, better-engineering, actionable
|
AdamW differs from Adam only in the weight_decay handling. Everything else is the same. We should reuse code instead of hosting the same exact logic in two places.
One way to do this is to have certain functions in adam.py that can be accessed in adamw.py. The main work would be consolidating the for-loop and foreach implementations. The fused implementation already takes advantage of the commonality.
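A rough sketch of what the shared helper could look like (the name `_single_tensor_adam` and the `decoupled_weight_decay` flag below are illustrative assumptions, not the existing private API):
```python
import torch

def _single_tensor_adam(params, grads, exp_avgs, exp_avg_sqs, steps, *,
                        lr, beta1, beta2, eps, weight_decay, decoupled_weight_decay):
    """Illustrative shared for-loop update; Adam and AdamW differ only in how
    weight_decay is applied (decoupled_weight_decay=True for AdamW)."""
    for p, g, m, v, step in zip(params, grads, exp_avgs, exp_avg_sqs, steps):
        if weight_decay != 0:
            if decoupled_weight_decay:      # AdamW: decay the parameter directly
                p.mul_(1 - lr * weight_decay)
            else:                           # Adam: fold the decay into the gradient
                g = g.add(p, alpha=weight_decay)
        m.mul_(beta1).add_(g, alpha=1 - beta1)
        v.mul_(beta2).addcmul_(g, g, value=1 - beta2)
        denom = (v / (1 - beta2 ** step)).sqrt().add_(eps)
        p.addcdiv_(m, denom, value=-lr / (1 - beta1 ** step))
```
adam.py would call this with `decoupled_weight_decay=False` and adamw.py with `True`; the foreach variant could be shared the same way.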
cc @vincentqb @jbschlosser @albanD @crcrpar
| 7 |
2,053 | 104,884 |
[dynamo][higher_order_op] assert in check_kwargs leads to error instead of graph-break
|
good first issue, triaged, oncall: pt2, module: dynamo
|
### ๐ Describe the bug
https://github.com/pytorch/pytorch/blob/e695b397e16e46e17481ab26517c9713a62e8c4a/torch/_dynamo/variables/higher_order_ops.py#L223-L227
cc: @zou3519
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @aakhundov
| 1 |
2,054 | 104,880 |
torch.onnx.export does not respect nn.Module.forward API when using export_modules_as_functions=True
|
module: onnx, triaged
|
### ๐ Describe the bug
Hi everyone. I'm currently debugging an issue with ONNX exporting and looking for ideas on how to investigate it further.
I'm currently on PyTorch 2.0.
I've included a minimal runnable example below in case you're interested in testing it on your own setup.
**Context:** I'm trying to export a network with two `nn.Module`s that will both be replaced by [ONNX functions](https://onnx.ai/onnx/intro/concepts.html#functions).
These functions are used to abstract out data-dependent computational graphs, so that they can later be replaced by
a custom op on the target hardware. This ensures that each `nn.Module` receives and outputs fixed-size tensors.
**Problem:** When I connect the output of `FixedShapeUnique` to the input of `CreateVoxelGrid` is where problems start.
`FixedShapeUnique` reports having **four** output tensors instead of two and `CreateVoxelGrid` reports six input tensors
instead of four.
If I don't pass `CreateVoxelGrid` as an argument into `export_modules_as_functions`, `FixedShapeUnique` still reports
having **four** output tensors instead of two.
If I don't use `export_modules_as_functions`, I also don't see these bogus nodes appearing.
Any ideas on where I should start looking?
**Code:**
```python
from typing import Optional
import torch
from torch import nn
class CreateVoxelGrid(nn.Module):
    def __init__(self, shape: tuple[int, int, int, int]) -> None:
        super().__init__()
        self.grid_shape = shape

    def forward(
        self,
        voxel_features: torch.Tensor,
        indices: torch.Tensor,
        voxel_features_mask: Optional[torch.Tensor] = None,
        indices_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        grid = voxel_features.new_zeros(self.grid_shape)
        if voxel_features_mask is not None:
            voxel_features = voxel_features[voxel_features_mask]
        if indices_mask is not None:
            indices = indices[indices_mask]
        grid[indices[:, 0], indices[:, 1], indices[:, 2]] = voxel_features
        return grid


class FixedShapeUnique(nn.Module):
    def forward(
        self,
        tensor: torch.Tensor,
        mask: Optional[torch.Tensor] = None,
    ) -> tuple[torch.Tensor, torch.Tensor]:
        if mask is None:
            mask = torch.ones(tensor.shape[0], dtype=torch.bool, device=tensor.device)
        output = torch.zeros_like(tensor)
        valid = torch.zeros_like(mask)
        unique_tensor = torch.unique(tensor[mask], dim=0)
        output[: unique_tensor.shape[0]] = unique_tensor
        valid[: unique_tensor.shape[0]] = True
        return output, valid


class Network(nn.Module):
    def __init__(self, grid_shape: tuple[int, int, int, int]) -> None:
        super().__init__()
        self.unique = FixedShapeUnique()
        self.voxel_grid = CreateVoxelGrid(grid_shape)

    def forward(
        self,
        voxel_features: torch.Tensor,
        indices: torch.Tensor,
        voxel_features_mask: Optional[torch.Tensor] = None,
        indices_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        indices, indices_mask = self.unique(indices, mask=indices_mask)  # <- the million dollar question
        return self.voxel_grid(
            voxel_features, indices, voxel_features_mask=voxel_features_mask, indices_mask=indices_mask
        )


def main():
    torch.manual_seed(24)
    channels = 8
    n_occupied_voxels = 20
    voxel_features = torch.randn(n_occupied_voxels, channels)
    batch_size = 1
    grid_shape = (batch_size, 256, 256, channels)
    indices = torch.stack([torch.randint(size, size=(n_occupied_voxels,)) for size in grid_shape], dim=1)
    voxel_features_mask = torch.rand(voxel_features.shape[0]) > 0.5
    # just creating a new mask with the same number of True elements
    indices_mask = torch.flipud(voxel_features_mask)

    model = Network(grid_shape)
    model(voxel_features, indices, voxel_features_mask=voxel_features_mask, indices_mask=indices_mask)

    path = "/tmp/playground.onnx"
    torch.onnx.export(
        model=model.eval(),
        args=(voxel_features, indices, {"voxel_features_mask": voxel_features_mask, "indices_mask": indices_mask}),
        f=path,
        opset_version=15,
        input_names=["voxel_features", "indices", "voxel_features_mask", "indices_mask"],
        export_modules_as_functions={FixedShapeUnique, CreateVoxelGrid},
    )


if __name__ == "__main__":
    main()
```
**ONNX Graph:**


### Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 5000
Nvidia driver version: 470.161.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz
Stepping: 7
CPU MHz: 3900.000
CPU max MHz: 4700.0000
CPU min MHz: 1200.0000
BogoMIPS: 7799.87
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 8 MiB
L3 cache: 16.5 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy==1.0.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] numpy-quaternion==2022.4.3
[pip3] torch==2.0.0+cu117
[pip3] torch-scatter==2.1.1+pt20cu117
[pip3] torchmetrics==0.12.0.dev0
[pip3] torchvision==0.15.0+cu117
[pip3] triton==2.0.0
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] mkl 2023.0.0 pypi_0 pypi
[conda] mkl-service 2.4.0 pypi_0 pypi
[conda] numpy 1.23.5 pypi_0 pypi
[conda] numpy-quaternion 2022.4.3 pypi_0 pypi
[conda] torch 2.0.0+cu117 pypi_0 pypi
[conda] torch-scatter 2.1.1+pt20cu117 pypi_0 pypi
[conda] torchmetrics 0.12.0.dev0 pypi_0 pypi
[conda] torchvision 0.15.0+cu117 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
```
| 2 |
2,055 | 104,878 |
DISABLED test_custom_op_cuda_cuda_wrapper (__main__.TestCudaWrapper)
|
triaged, module: flaky-tests, skipped, module: inductor
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_custom_op_cuda_cuda_wrapper&suite=TestCudaWrapper) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14905256174).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_custom_op_cuda_cuda_wrapper`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_cpp_wrapper.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8
| 2 |
2,056 | 104,875 |
torch/testing/_comparison.py: If you are a user and see this message during normal operation please file an issue
|
triaged, module: testing
|
### ๐ Describe the bug
OOM not correctly caught in comparison under weird striding:
```
import torch
from einops import rearrange
def fast(x) -> torch.Tensor:
    x = rearrange(x, "i j k (two l) -> i (j k l) two", two=2)
    x = torch.view_as_complex(x)
    return x


def slow(x) -> torch.Tensor:
    x = rearrange(x, "i j k (two l) -> i (j k l) two", two=2)
    x = torch.view_as_complex(x.contiguous())
    return x


if __name__ == "__main__":
    device = torch.device("cuda")
    print(torch.cuda.get_device_properties(device))
    a = torch.rand(1024, 8, 16, 256, device=device)
    yf = fast(a)
    ys = slow(a)
    torch.testing.assert_close(yf, ys)
```
```
_CudaDeviceProperties(name='NVIDIA GeForce GT 1030', major=6, minor=1, total_memory=1998MB, multi_processor_count=3)
Traceback (most recent call last):
File "/foo/localenv/lib/python3.9/site-packages/torch/testing/_comparison.py", line 1224, in not_close_error_metas
pair.compare()
File "/foo/localenv/lib/python3.9/site-packages/torch/testing/_comparison.py", line 706, in compare
self._compare_values(actual, expected)
File "/foo/localenv/lib/python3.9/site-packages/torch/testing/_comparison.py", line 824, in _compare_values
compare_fn(
File "/foo/lib/python3.9/site-packages/torch/testing/_comparison.py", line 994, in _compare_regular_values_close
matches = torch.isclose(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 1.95 GiB total capacity; 1.27 GiB already allocated; 180.38 MiB free; 1.27 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "mwe.py", line 26, in <module>
torch.testing.assert_close(yf, ys)
File "/foo/localenv/lib/python3.9/site-packages/torch/testing/_comparison.py", line 1489, in assert_close
error_metas = not_close_error_metas(
File "/foo/localenv/lib/python3.9/site-packages/torch/testing/_comparison.py", line 1230, in not_close_error_metas
raise RuntimeError(
RuntimeError: Comparing
TensorLikePair(
id=(),
actual=tensor([[0.5528+0.6277j, 0.2594+0.9218j, 0.6938+0.6858j, ...,
0.3728+0.2267j, 0.9894+0.9470j, 0.1317+0.7768j],
[0.6751+0.5199j, 0.6546+0.8712j, 0.7528+0.3251j, ...,
0.7132+0.0744j, 0.5763+0.7044j, 0.4192+0.1781j],
[0.9773+0.2660j, 0.0375+0.5843j, 0.8705+0.7881j, ...,
0.4815+0.1623j, 0.9864+0.8712j, 0.6572+0.1675j],
...,
[0.9890+0.5754j, 0.4324+0.9647j, 0.1394+0.7539j, ...,
0.3246+0.4463j, 0.5527+0.6973j, 0.0100+0.3823j],
[0.0878+0.0177j, 0.4789+0.9992j, 0.3644+0.0269j, ...,
0.1525+0.1928j, 0.8523+0.3019j, 0.0676+0.2238j],
[0.4822+0.5437j, 0.6591+0.5484j, 0.0793+0.2377j, ...,
0.1888+0.1810j, 0.3516+0.4124j, 0.2638+0.5107j]], device='cuda:0'),
expected=tensor([[0.5528+0.6277j, 0.2594+0.9218j, 0.6938+0.6858j, ...,
0.3728+0.2267j, 0.9894+0.9470j, 0.1317+0.7768j],
[0.6751+0.5199j, 0.6546+0.8712j, 0.7528+0.3251j, ...,
0.7132+0.0744j, 0.5763+0.7044j, 0.4192+0.1781j],
[0.9773+0.2660j, 0.0375+0.5843j, 0.8705+0.7881j, ...,
0.4815+0.1623j, 0.9864+0.8712j, 0.6572+0.1675j],
...,
[0.9890+0.5754j, 0.4324+0.9647j, 0.1394+0.7539j, ...,
0.3246+0.4463j, 0.5527+0.6973j, 0.0100+0.3823j],
[0.0878+0.0177j, 0.4789+0.9992j, 0.3644+0.0269j, ...,
0.1525+0.1928j, 0.8523+0.3019j, 0.0676+0.2238j],
[0.4822+0.5437j, 0.6591+0.5484j, 0.0793+0.2377j, ...,
0.1888+0.1810j, 0.3516+0.4124j, 0.2638+0.5107j]], device='cuda:0'),
rtol=1.3e-06,
atol=1e-05,
equal_nan=False,
check_device=True,
check_dtype=True,
check_layout=True,
check_stride=False,
)
resulted in the unexpected exception above. If you are a user and see this message during normal operation please file an issue at https://github.com/pytorch/pytorch/issues. If you are a developer and working on the comparison functions, please except the previous error and raise an expressive `ErrorMeta` instead.
```
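Separately from the error-reporting question, a minimal sketch of a workaround for the OOM itself (assuming host RAM is not similarly constrained and the goal is just to compare values): move both tensors to CPU before comparing, e.g. replacing the last line of the repro with:
```python
# Compare on CPU so isclose does not have to allocate its intermediates on the 2 GB GPU.
torch.testing.assert_close(yf.cpu(), ys.cpu())
```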
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: 11.0.1-2
CMake version: version 3.25.0
Libc version: glibc-2.31
Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.10.0-23-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GT 1030
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Stepping: 13
CPU MHz: 1379.032
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchvision==0.15.2+cu117
[pip3] triton==2.0.0
[conda] Could not collect
| 0 |
2,057 | 104,872 |
errors in CONTRIBUTING.md
|
module: docs, triaged
|
In [CONTRIBUTING.md](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md) it says
```
Install clang-tidy driver script dependencies
pip3 install -r tools/linter/clang_tidy/requirements.txt
Run clang-tidy
# Run clang-tidy on the entire codebase
make clang-tidy
# Run clang-tidy only on your changes
make clang-tidy CHANGED_ONLY=--changed-only
```
`requirements.txt` does not exist in the source tree.
When I run `make clang-tidy` from the root folder, I get
```
pytorch % make clang-tidy
make: *** No rule to make target 'clang-tidy'. Stop.
```
It also says to install the LLVM 8 binaries. It would be useful for the docs to explain why such an old version is needed, as there have been 8 major releases since then; it might simply be the last version that was tested to work.
cc @svekars @carljparker
| 2 |
2,058 | 104,868 |
Conversion of a CSR tensor with batches to COO tensor fails
|
module: sparse, triaged
|
## Issue description
As in the title.
## Code example
```python
>>> torch.tensor([[[0, 1], [2, 3]], [[4, 5], [6, 0]]]).to_sparse_csr().to_sparse()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: crow_indices is supposed to be a vector, but got 2 dimensional tensor.
```
The expected result is the same as in the following example:
```python
>>> torch.tensor([[[0, 1], [2, 3]], [[4, 5], [6, 0]]]).to_sparse_csr().to_dense().to_sparse()
tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
[0, 1, 1, 0, 0, 1],
[1, 0, 1, 0, 1, 0]]),
values=tensor([1, 2, 3, 4, 5, 6]),
size=(2, 2, 2), nnz=6, layout=torch.sparse_coo)
```
## More information
The exception is raised from `convert_indices_from_csr_to_coo` which does not implement batched CSR tensor support. A number of tests (to_sparse tests, reduction tests, binary operation tests) disable the corresponding samples meaning that the issue has existed for a long time.
Now, while adding sparse samples to TestGradients methods in order to enable gradcheck tests for operations with sparse inputs, I realized that there is no straightforward way to disable batched sparse samples for these tests without disabling all sparse samples. So, the issue blocks my task. I suggest resolving it, especially because supporting batched crow_indices in `convert_indices_from_csr_to_coo` ought to be a straightforward task.
## System Info
- PyTorch version: main
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer
| 0 |
2,059 | 104,867 |
rfftn and irfftn operations in pt2 return different results compared to v1.12.1
|
module: cuda, triaged, module: third_party, module: fft
|
### ๐ Describe the bug
When running inference on the [Lama inpainting model](https://github.com/advimman/lama) under PT2 using the supplied [predict.py](https://github.com/advimman/lama/blob/main/bin/predict.py) script, differences emerge after the FFT operations in the network architecture (specifically in lines 86 and 108 of [this file](https://github.com/advimman/lama/blob/main/saicinpainting/training/modules/ffc.py#L86)) compared to older, pre-PT2 versions.
These differences cascade through the network and result in visible "grid" artifacts when running the inpainting model on large-resolution images (around 1024 and above).
Example code that creates a very minor difference:
```
x = torch.rand((1,192,256,256), device='cuda')*3-1.2
ffted = torch.fft.rfftn(x, dim=(-2, -1), norm='ortho')
output = torch.fft.irfftn(ffted, s=x.shape[-2:], dim=(-2, -1), norm='ortho')
```
The actual differences I get when running the Lama model are much larger - after the first rfftn+irfftn operations on a similar sized input (from 1024x1024 input image), the MAD (mean-absolute difference) between the resulting tensors is around 0.15 and the max difference is 1.12.
It looks like most of the difference is created in the `irfftn` method (the inputs to this operation have MAD of 2e-8 and max difference of 8e-5).
I didn't see any related information in the release notes of the latest versions, so I wondered if these differences are expected.
BTW - this behavior also happens when running the Lama model using the [Lama Cleaner](https://github.com/Sanster/lama-cleaner) codebase
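A sketch of a check I would run to see which version drifted (an assumption on methodology, not something from the original report): compare each build against a float64 CPU reference of the same round trip.
```python
import torch

x = torch.rand((1, 192, 256, 256), device='cuda') * 3 - 1.2

# float64 CPU reference for the same rfftn/irfftn round trip
ref = torch.fft.irfftn(
    torch.fft.rfftn(x.double().cpu(), dim=(-2, -1), norm='ortho'),
    s=x.shape[-2:], dim=(-2, -1), norm='ortho',
)

out = torch.fft.irfftn(
    torch.fft.rfftn(x, dim=(-2, -1), norm='ortho'),
    s=x.shape[-2:], dim=(-2, -1), norm='ortho',
)

# How far this PyTorch/cuFFT build is from the high-precision reference
print((out.cpu().double() - ref).abs().max())
```
Running this under both 1.12.1 and 2.0.1 would show whether the new version is actually less accurate or just different.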
### Versions
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 470.199.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.2.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2800.000
BogoMIPS: 5600.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==1.2.9
[pip3] torch==2.0.1
[pip3] torch-model-archiver==0.8.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==0.2.0
[pip3] torchserve==0.8.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-lightning 1.2.9 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-model-archiver 0.8.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchmetrics 0.2.0 pypi_0 pypi
[conda] torchserve 0.8.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
cc @ptrblck @mruberry @peterbell10 @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 8 |
2,060 | 104,860 |
torch.nn.Conv2d's padding mode circular cannot accept 3-dim input
|
module: nn, triaged, actionable
|
### ๐ Describe the bug
torch.nn.Conv2d can accept a 3-dim tensor without a batch dimension, but when I set padding_mode="circular", Conv2d seems to hit an error at the underlying level.
When it's set to the other padding modes, Conv2d runs normally and successfully handles 3-dim input, like this:
```
conv2d_circular = torch.nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=0, padding_mode="circular")
conv2d_zeros = torch.nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=0, padding_mode="zeros")
conv2d_reflect = torch.nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=0, padding_mode="reflect")
conv2d_replicate = torch.nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=0, padding_mode="replicate")
x_3dim = torch.randn(256, 32, 32)
```
these ran without any error:
```
print(conv2d_zeros(x_3dim).shape)
print(conv2d_reflect(x_3dim).shape)
print(conv2d_replicate(x_3dim).shape)
```
output:
```
torch.Size([256, 30, 30])
torch.Size([256, 30, 30])
torch.Size([256, 30, 30])
```
But running ```conv2d_circular(x_3dim)``` shows this error message:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 conv2d_circular(x_3dim).shape
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\conv.py:463, in Conv2d.forward(self, input)
462 def forward(self, input: Tensor) -> Tensor:
--> 463 return self._conv_forward(input, self.weight, self.bias)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\conv.py:456, in Conv2d._conv_forward(self, input, weight, bias)
454 def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
455 if self.padding_mode != 'zeros':
--> 456 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
457 weight, bias, self.stride,
458 _pair(0), self.dilation, self.groups)
459 return F.conv2d(input, weight, bias, self.stride,
460 self.padding, self.dilation, self.groups)
RuntimeError: Invalid padding size, expected 2 but got 4
```
What's more, I tried setting "padding" to values from 0 to 100, and the problem is still not solved.
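As a workaround (a sketch, assuming the batched path is acceptable), adding and removing a batch dimension sidesteps the bad padding size:
```python
import torch

conv2d_circular = torch.nn.Conv2d(256, 256, kernel_size=3, padding=0, padding_mode="circular")
x_3dim = torch.randn(256, 32, 32)

# Add a batch dimension, run the conv, then drop the batch dimension again.
out = conv2d_circular(x_3dim.unsqueeze(0)).squeeze(0)
print(out.shape)  # torch.Size([256, 30, 30])
```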
### Versions
```
PS D:\PythonProjects\venv\Lib\site-packages\torch\utils> python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 ๅฎถๅบญไธญๆ็
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 532.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2200
DeviceID=CPU0
Family=207
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2200
Name=13th Gen Intel(R) Core(TM) i9-13900HX
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1+cu117
[pip3] torchaudio==2.0.2+cu117
[pip3] torchvision==0.15.2+cu117
[conda] Could not collect
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 4 |
2,061 | 104,857 |
Torch's `LayerNorm` and Adam optimizer vs those in tensorflow
|
needs reproduction, module: numerical-stability, module: nn, module: optimizer, triaged
|
### ๐ Describe the bug
Hello PyTorch team, I have recently been converting [OpenAI's tensorflow RLHF code](https://github.com/openai/lm-human-preferences) to PyTorch. Given the same data and model, I was even able to align the gradients, but somehow training with PyTorch resulted in significant instability. Upon further investigation, I was able to pinpoint the difference to **subtle differences in the gradient calculation of Torch's `LayerNorm` and in the Adam optimizer's update**.
**The reproduction snippet is https://github.com/vwxyzjn/minimal-adam-layer-norm-bug-repro/blob/master/main.py.**
## More detail
I used the same data, model, and model weights to perform a Proximal Policy Optimization (PPO) update. After the first backward pass, you can see that most gradients match perfectly with those obtained in OpenAI's codebase via `tf.train.AdamOptimizer(learning_rate=0.00001, epsilon=1e-5)`.

### Observation 1
Notice how the gradients of the `LayerNorm` parameters are significantly different.
As an example, we print some gradients below. Notice how the first gradient for `h0/ln_1/b:0` is `2.4899011e-05` in OAI's codebase, but `1.8152525e-05` in `main.py`. This is a difference of `0.6746486e-05`, which is quite significant.
In comparison, the gradients of the other layers are much more similar. For example, the first gradient for `h0/attn/c_attn/w:0` is `2.88992633e-05` in OAI's codebase, but `2.88992633e-05` in `main.py`. This is a difference of `0.0`, which is much smaller than the difference in the `LayerNorm` parameters.
```
(Pdb) oname, ograd[:10], name, param.grad.detach().numpy()[:10]
('policy/model/h0/ln_1/b:0', array([ 2.4899011e-05, -1.1588502e-03, 1.7985557e-03, 7.4343453e-03,
-2.5840786e-03, -3.5906259e-03, -6.6465489e-04, 1.8007826e-03,
-1.6414827e-03, -6.6386913e-03], dtype=float32), 'transformer.h.0.ln_1.bias', array([ 1.8152525e-05, -1.1576341e-03, 1.7961735e-03, 7.4219629e-03,
-2.5832835e-03, -3.5855419e-03, -6.7265466e-04, 1.8039590e-03,
-1.6386800e-03, -6.6277790e-03], dtype=float32))
(Pdb) oname, ograd[:10], name, param.grad.detach().numpy()[:10]
('policy/model/h0/attn/c_attn/w:0', array([[[ 2.88992633e-05, -6.70402551e-06, -1.57610848e-05, ...,
-1.05873929e-04, -9.40704340e-05, 1.00523466e-04],
[ 7.87996178e-05, -5.04239551e-07, -8.35032733e-06, ...,
-4.07231477e-04, 4.93751504e-05, -2.81412737e-04],
[ 8.21374197e-05, -1.94475469e-05, -1.36382323e-05, ...,
-1.95847577e-04, -4.09606873e-04, 2.84076581e-04],
...,
[-6.02674390e-06, 4.23970641e-06, -7.39748998e-07, ...,
1.90844381e-04, -8.59782376e-05, -6.60822116e-05],
[ 3.50006849e-05, -1.32066066e-06, -3.52823263e-05, ...,
-1.33828435e-04, 1.01715421e-04, 3.40739585e-04],
[ 1.05423496e-04, -2.66656862e-05, -4.54609835e-05, ...,
-4.23200603e-04, -1.64171652e-04, 2.63288792e-04]]],
dtype=float32), 'transformer.h.0.attn.c_attn.weight', array([[ 2.88245592e-05, -6.68141320e-06, -1.57281083e-05, ...,
-1.05716754e-04, -9.39845631e-05, 1.00422243e-04],
[ 7.86117525e-05, -4.96559778e-07, -8.27262829e-06, ...,
-4.06837964e-04, 4.93464249e-05, -2.81286135e-04],
[ 8.19143170e-05, -1.94303120e-05, -1.35097052e-05, ...,
-1.95316272e-04, -4.09374770e-04, 2.83872039e-04],
...,
[-1.33238527e-05, -6.14432452e-08, 7.30297143e-06, ...,
8.94646073e-05, -1.24311875e-04, 1.05930310e-04],
[-1.15070456e-04, 1.79788076e-05, 3.04212826e-05, ...,
6.06048678e-04, 3.23058601e-04, -4.77053138e-04],
[-5.75132690e-05, 2.93947778e-05, 3.10599062e-05, ...,
2.26184493e-05, 1.36010476e-05, 9.29407452e-06]], dtype=float32))
```
Then after a gradient pass (i.e., `optimizer.step()`) in `main.py`, I plot the parameter difference between `main.py` and `lm-human-preferences`:

### Observation 2
Even though the gradients of most layers are similar between pytorch and OAI's codebase, the Adam optimizer causes a significant difference in the parameters post-update. For example, `transformers.h.11.mlp.c_proj_bias` has nearly identical gradients to its counterpart in TensorFlow, as indicated in the last section, but its weights become quite different after a gradient pass.
Then I did the same setup but with the SGD optimizer. The gradient difference is of course the same, but the parameter difference is much smaller and only relevant to the `LayerNorm` parameters:

### End-to-end
I then did end-to-end testing with a toy RLHF codebase (https://gist.github.com/vwxyzjn/010e0f92e057ef1a779028d656ab4705) using SGD and Adam, respectively, with 10 random seeds.
```bash
# The only difference is the optimizer used
diff train_policy.py train_policy_adam.py
292c292
< optimizer = optim.SGD(policy.parameters(), lr=args.ppo.lr)
---
> optimizer = optim.Adam(policy.parameters(), lr=args.ppo.lr)
```
The results are staggering, with SGD converging to a good policy and Adam experiencing significant instability:

While I can probably use SGD for now, this issue is puzzling and may warrant further investigation. If you could take a look at it, I would greatly appreciate it.
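One candidate explanation for the Adam gap that I have not ruled out (and may be misreading) is the different epsilon placement in the two update rules, as documented for `tf.compat.v1.train.AdamOptimizer` and implemented in `torch.optim.Adam`:
```python
# tf.train.AdamOptimizer, per step t:
#   p -= lr * sqrt(1 - beta2**t) / (1 - beta1**t) * m / (sqrt(v) + eps)
# torch.optim.Adam:
#   p -= lr / (1 - beta1**t) * m / (sqrt(v / (1 - beta2**t)) + eps)
```
Rearranging the TF form, it matches the PyTorch form except that the effective epsilon becomes `eps / sqrt(1 - beta2**t)`, which for `beta2 = 0.999` is roughly 31x larger than `eps` on the first step; with `eps = 1e-5` that could plausibly explain post-update parameter differences even when the gradients match.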
### Versions
```
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Pop!_OS 21.10 (x86_64)
GCC version: (Ubuntu 11.2.0-7ubuntu2) 11.2.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.9.5 (default, Jul 19 2021, 13:27:26) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.17.5-76051705-generic-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3060 Ti
GPU 1: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 113
Model name: AMD Ryzen 9 3900X 12-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 3800.000
CPU max MHz: 3800.0000
CPU min MHz: 2200.0000
BogoMIPS: 7600.37
Virtualization: AMD-V
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 6 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.0
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @vincentqb @janeyx99 @crcrpar
| 3 |
2,062 | 104,856 |
DISABLED test_custom_op_cuda (__main__.CudaTests)
|
triaged, module: flaky-tests, skipped, oncall: pt2, module: inductor
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_custom_op_cuda&suite=CudaTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14894386161).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_custom_op_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor.py`
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8
| 22 |
2,063 | 104,854 |
DISABLED test_custom_op_cpu_dynamic_shapes_cpp_wrapper (__main__.DynamicShapesCppWrapperCpuTests)
|
triaged, module: flaky-tests, skipped, module: inductor
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_custom_op_cpu_dynamic_shapes_cpp_wrapper&suite=DynamicShapesCppWrapperCpuTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14894136800).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 12 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_custom_op_cpu_dynamic_shapes_cpp_wrapper`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_cpp_wrapper.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8
| 3 |
2,064 | 104,853 |
torch.norm inconsistency?
|
module: numerical-stability, triaged, module: norms and normalization
|
### ๐ Describe the bug
```
import torch as ch
a = ch.randn(2, int(1e6))
# The following should be equivalent
print(a[0].norm())
print(a.norm(dim=1)[0])
```
===
```
tensor(999.3896)
tensor(999.1658)
```
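A quick check that might help narrow this down (my guess is single-precision accumulation order rather than a genuinely different formula, but I have not confirmed it): compare both code paths against a float64 reference:
```python
import torch as ch

a = ch.randn(2, int(1e6))
ref = a.double().norm(dim=1)[0]                 # float64 reference
print((a[0].norm().double() - ref).abs())       # error of the 1-d path
print((a.norm(dim=1)[0].double() - ref).abs())  # error of the dim-reduced path
```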
### Versions
Collecting environment information...
PyTorch version: 1.12.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.9
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: NVIDIA Quadro K620
GPU 1: NVIDIA TITAN X (Pascal)
GPU 2: NVIDIA TITAN X (Pascal)
GPU 3: NVIDIA TITAN X (Pascal)
GPU 4: NVIDIA TITAN X (Pascal)
Nvidia driver version: 465.19.01
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] torch==1.12.0
[pip3] torchaudio==0.12.0
[pip3] torchvision==0.13.0
[conda] blas 1.0 mkl anaconda
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640 anaconda
[conda] mkl-service 2.4.0 py39h7e14d7c_0 conda-forge
[conda] pytorch 1.12.0 py3.9_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.12.0 py39_cu102 pytorch
[conda] torchvision 0.13.0 py39_cu102 pytorch
| 1 |
2,065 | 104,849 |
[feature request] torch.mix function to generalize/symmetrize addcmul
|
feature, triaged, needs research, module: python frontend
|
### ๐ The feature, motivation and pitch
I propose to introduce a convenience/ux function in core:
`mix = lambda a = 1, b = 1, c = 1, A = 1, B = 1, C = 1: a*b*c + A*B*C` in one go where all a,b,c,A,B,C can be scalars or tensors. The most frequent cases would be:
- a, A, b, B are scalars, c and C are tensors
- a, A, b are scalars, c, B, C are tensors
This would generalize/symmetrize torch.addcmul, and could be used in optimizers to make second-moment calculations more idiomatic and clear. This function could be implemented using torch.compile in core. An in-place version is also needed (a kind of generalization of torch.lerp_).
Instead of `mix` (as `mix` in a graphics context often means lerp: https://registry.khronos.org/OpenGL-Refpages/gl4/html/mix.xhtml), the name could also be `linear_combination` (?) (although in cases such as B=C it would not actually be a linear combination). Or maybe the existing `addcmul` could be extended to support more arguments. The in-place mode of mix would differ from the existing addcmul in that it should allow in-place multiplication of the accumulator by a scalar/tensor prior to adding the rhs summand.
Some older context: https://github.com/pytorch/pytorch/issues/71683#issuecomment-1379058082 https://github.com/pytorch/pytorch/issues/79352#issuecomment-1152946076 https://github.com/pytorch/pytorch/pull/104781#issuecomment-1625709917
Here is that line from Adam https://github.com/pytorch/pytorch/blob/main/torch/optim/adam.py#L363:
```python
exp_avg_sq.mul_(beta2).addcmul_(grad, grad.conj(), value=1 - beta2)
# exp_avg.lerp_(grad, 1 - beta1)
```
Maybe another alternative for design is `lerp2` : `exp_avg_sq.lerp2_(grad, grad.conj(), 1 - beta2)`, but IMO generalized `addcmul`/`addcmul2`/`linear_combination` is still better
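For clarity, a pure-Python sketch of the intended semantics (an illustration only, not a proposed implementation; the name is a placeholder):
```python
import torch

def mix(a=1, b=1, c=1, A=1, B=1, C=1):
    # a*b*c + A*B*C, where each factor may be a Python scalar or a tensor
    # (broadcasting applies as usual)
    return a * b * c + A * B * C

# the Adam second-moment update from above, expressed out-of-place with it:
# exp_avg_sq = mix(beta2, exp_avg_sq, 1, 1 - beta2, grad, grad.conj())
```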
cc @albanD
| 6 |
2,066 | 104,845 |
Implement `diag` method for sparse COO tensors
|
module: sparse, triaged
|
### ๐ The feature, motivation and pitch
## Feature
To implement `diag` method for sparse coo tensors. Currently it raises `NotImplementedError`.
## Motivation
Given a 1-d sparse tensor, I want to make a diagonal matrix out of it to then use in matrix multiplications.
### Alternatives
Given 1-d sparse tensor `t`, I do:
```python
torch.sparse_coo_tensor(t.indices().repeat(2, 1), t.values())
```
Thankfully, this alternative preserves grad operations, if `t.requires_grad=True`.
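Wrapped into a small helper for context (a sketch; it assumes `t` is coalesced and uses `t.shape[0]` to size the result, since the indices alone do not carry the length):
```python
import torch

def sparse_diag(t):
    # t: 1-d sparse COO tensor -> (n, n) sparse COO matrix with t on the diagonal
    n = t.shape[0]
    return torch.sparse_coo_tensor(t.indices().repeat(2, 1), t.values(), (n, n))

v = torch.tensor([0.0, 2.0, 0.0, 3.0]).to_sparse()
d = sparse_diag(v)
print(torch.sparse.mm(d, torch.ones(4, 1)))  # tensor([[0.], [2.], [0.], [3.]])
```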
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer
| 2 |
2,067 | 104,832 |
MPS matmul with sliced (strided) out argument produces wrong output, may corrupt memory
|
triaged, module: mps
|
### ๐ Describe the bug
When using `torch.matmul` with an `out` argument that is a non-contiguous view, the op leaves the `out` argument unchanged instead of computing the matmul correctly. Here is a minimal example that replicates the behavior. It seems to happen consistently across many different sizes (e.g. the `256` here can be changed to something else and it still outputs zeros).
```
import torch
X = torch.randn(256,256).to("mps")
Y = torch.randn(256,256).to("mps")
Z = torch.zeros(256,2*256).to("mps")
torch.matmul(X,Y,out=Z[:,128:(128+256)])
```
Here, `torch.matmul` both outputs all zeros and does not change `Z`. Compare the behavior of the non-MPS code
```
import torch
X = torch.randn(256,256)
Y = torch.randn(256,256)
Z = torch.zeros(256,2*256)
torch.matmul(X,Y,out=Z[:,128:(128+256)])
```
which runs correctly and produces a non-zero output.
After doing this, I sometimes get segfaults in my program, which suggests that this is also corrupting memory, although I am unable to find any code that consistently replicates this segfault behavior.
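A possible workaround (I have not verified it beyond small shapes like the one above, given that strided MPS writes are exactly what seems broken here) is to compute into a contiguous temporary and then copy it into the slice:
```python
import torch

X = torch.randn(256, 256).to("mps")
Y = torch.randn(256, 256).to("mps")
Z = torch.zeros(256, 2 * 256).to("mps")

# materialize the result contiguously, then write it into the non-contiguous view
Z[:, 128:128 + 256].copy_(torch.matmul(X, Y))
```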
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.9.13 (main, Aug 25 2022, 18:24:45) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.24.0
[pip3] pytorch-pretrained-bert==0.6.2
[pip3] torch==2.0.1
[pip3] torchaudio==0.13.1
[pip3] torchdata==0.6.1
[pip3] torchtext==0.15.2
[pip3] torchvision==0.13.1
[conda] htorch 0.1 pypi_0 pypi
[conda] numpy 1.24.0 pypi_0 pypi
[conda] pytorch-pretrained-bert 0.6.2 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 0.13.1 py39_cpu pytorch
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.13.1 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 0 |
2,068 | 104,823 |
Unrelated error messages with torch.nn.AdaptiveAvgPool3d
|
module: nn, triaged, module: pooling
|
### ๐ Describe the bug
I've encountered a potential issue regarding the error messaging of the torch.nn.AdaptiveAvgPool3d method. When providing inputs with incompatible dimensions, the raised error message doesn't seem to correctly reflect the actual problem.
```python
import torch
import torch.nn as nn
output_size = [1,2,3]
m = nn.AdaptiveAvgPool3d(output_size)
input_size = [1,2,3,4]
inputs = torch.randn(input_size)
m(inputs)
```
When the output_size is increased incrementally, the error messages raised are:
With output_size = [1,2,3,4], the error is: "ValueError: Input dimension should be at least 5".
With output_size = [1,2,3,4,5], the error is: "ValueError: Input dimension should be at least 6".
With output_size = [1,2,3,4,5,6], the error is: "ValueError: Input dimension should be at least 7".
Additionally, when the output_size has fewer than 3 elements, the error message raised is: "RuntimeError: adaptive_avg_pool3d: output_size must be 3". Shouldn't this error message also be raised when the output_size has more than 3 elements?
### Versions
PyTorch version: 2.1.0.dev20230622+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 12.0.0-3ubuntu1~20.04.5
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2070
GPU 1: NVIDIA GeForce RTX 2070
GPU 2: NVIDIA GeForce RTX 2070
GPU 3: NVIDIA GeForce RTX 2070
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 1224.656
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4794.39
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230622+cu118
[pip3] torchaudio==2.1.0.dev20230622+cu118
[pip3] torchvision==0.16.0.dev20230622+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230622+cu118 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 0 |
2,069 | 104,933 |
torch.func.jvp fails with BERT training
|
module: autograd, triaged, actionable, module: forward ad, module: functorch
|
As the title suggests, just as a fun experiment, I am trying to train a huggingface BERT model using forward gradients. I am getting this error in the jvp computation:
> `RuntimeError: wrapper->level().value() <= current_level INTERNAL ASSERT FAILED at "../aten/src/ATen/functorch/ADInterpreters.cpp":39, please report a bug to PyTorch. escaped?`
Here is a minimal implementation:
```
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters
from transformers import BertTokenizer, BertForMaskedLM, BatchEncoding
from torch.utils.data import DataLoader
from datasets import load_dataset
import torch.optim as optim
import gc
from torch.func import jvp
from functools import partial
class ForwardADTraining:
def __init__(self, model, dataloader, criterion, device):
self.model = model.to(device)
self.dataloader = dataloader
self.criterion = criterion
self.device = device
def train_one_epoch(self):
print("Training started")
with open('log.txt', 'w') as f:
for i, data in enumerate(self.dataloader, 0):
inputs = {k: v.to(self.device) for k, v in data.items()}
labels = data['labels'].to(self.device)
# Get the model parameters
model_params = list(self.model.parameters())
original_params = tuple([p.detach().clone() for p in model_params])
# Generate a random tangent vector
tangents_vector = tuple(
[torch.normal(mean=0, std=1, size=p.size(), device=self.device) for p in original_params])
# Define the loss function as a callable function for jvp
def compute_loss(*params):
for param, p in zip(model_params, params):
param.data = p.data
input_ids = inputs["input_ids"]
attention_mask = inputs.get("attention_mask", None)
if attention_mask is not None:
outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
else:
outputs = self.model(input_ids=input_ids)
loss = self.criterion(outputs.logits.view(-1, outputs.logits.shape[-1]), labels.view(-1))
return loss
# Compute the forward gradient (Jacobian-vector product)
loss_value, grad_vector = jvp(compute_loss, original_params, tangents_vector)
# Update the model parameters manually
for param, grad in zip(model_params, grad_vector):
param.data -= 0.001 * grad # Here, 0.001 is the learning rate
# print statistics
if i % 100 == 99:
print(f'Batch: {i + 1}, loss: {loss_value.item()}')
f.write(f'Batch: {i + 1}, loss: {loss_value.item()}\n')
# Cleanup memory
del inputs, labels, loss_value, grad_vector
torch.cuda.empty_cache()
gc.collect()
print("Training finished")
# Load the tokenizer and model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
# Load the dataset
dataset = load_dataset('wikitext', 'wikitext-2-raw-v1')
# Preprocessing function to tokenize the texts and prepare labels
def preprocess(examples):
tokenized_inputs = tokenizer(examples['text'], truncation=True, padding='max_length', max_length=128,
return_tensors='pt')
labels = tokenized_inputs.input_ids.clone()
labels[:-1] = tokenized_inputs.input_ids[1:]
return {"input_ids": tokenized_inputs.input_ids.squeeze(), "labels": labels.squeeze()}
# Preprocess the dataset
encoded_dataset = dataset.map(preprocess)
encoded_dataset.set_format(type='torch', columns=['input_ids', 'labels'])
# Create a DataLoader
dataloader = DataLoader(encoded_dataset['train'], batch_size=16)
# Define the criterion
criterion = torch.nn.CrossEntropyLoss()
# Define device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Create the training object and train for one epoch
trainer = ForwardADTraining(model, dataloader, criterion, device)
trainer.train_one_epoch()
```
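For reference, here is a functional-style variant of the loss closure that passes the parameters through `torch.func.functional_call` instead of assigning them to `param.data` inside the traced function. I have not confirmed whether it avoids the assert; the variable names simply reuse those from the snippet above:
```python
from torch.func import functional_call, jvp

param_names = [n for n, _ in self.model.named_parameters()]

def compute_loss(*params):
    out = functional_call(
        self.model,
        dict(zip(param_names, params)),
        (inputs["input_ids"],),
        {"attention_mask": inputs.get("attention_mask", None)},
    )
    return self.criterion(out.logits.view(-1, out.logits.shape[-1]), labels.view(-1))

# jvp returns the loss and its directional derivative along tangents_vector (a scalar here)
loss_value, directional_derivative = jvp(compute_loss, original_params, tangents_vector)
```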
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
2,070 | 104,817 |
[RFC] Let in-place foreach functions return a list of Tensors
|
triaged, module: mta
|
Currently, in-place foreach functions are void functions, as you can see from the schemas in native_functions.yaml, such as https://github.com/pytorch/pytorch/blob/3d071849300048a50555391874ab0d663a074d23/aten/src/ATen/native/native_functions.yaml#L9919.
This is mainly because of the limitation of torchgen/model here: https://github.com/pytorch/pytorch/blob/d8cb80e3827226eeaafd6a007018112301f89d41/torchgen/model.py#L1411-L1428
Even after enabling in-place foreach functions to return a TensorList, method chaining remains infeasible. However, I expect the change to make the behavior a little more intuitive.
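To make the user-visible change concrete (the first comment reflects today's behavior; the second is the proposal):
```python
import torch

tensors = [torch.ones(2), torch.ones(3)]
out = torch._foreach_mul_(tensors, 2.0)
# today: `out` is None, the tensors are mutated in place
# proposed: the call would additionally return the (mutated) list of tensors,
# mirroring how regular in-place ops such as Tensor.mul_ return self
```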
rel:
- https://github.com/pytorch/pytorch/pull/104780#discussion_r1256289461
- https://github.com/pytorch/pytorch/issues/58833
cc @mcarilli @albanD @soulitzer @janeyx99
| 1 |
2,071 | 104,814 |
[compile][dynamic] dsplit is seeing a list of mixed ints and symints
|
triaged, oncall: pt2, module: dynamic shapes
|
### ๐ Describe the bug
```
import torch
@torch.compile(dynamic=True)
def fn(x, sections):
return torch.dsplit(x, sections)
fn(torch.randn(4, 4, 4), [1,2,3])
```
```
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 890, in wrap_fake_exception
return fn()
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 1301, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 1366, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/scratch/anijain/work/pytorch/torch/_dynamo/utils.py", line 1353, in run_node
return node.target(*args, **kwargs)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method dsplit of type object at 0x7fc96edb3fa0>(*(FakeTensor(..., size=(s0, s0, s0)), [1, s1, s2]), **{}):
dsplit() received an invalid combination of arguments - got (FakeTensor, immutable_list), but expected one of:
* (Tensor input, int sections)
didn't match because some of the arguments have invalid types: (FakeTensor, immutable_list of [int, SymInt, SymInt])
* (Tensor input, tuple of ints indices)
didn't match because some of the arguments have invalid types: (FakeTensor, immutable_list of [int, SymInt, SymInt])
from user code:
File "/scratch/anijain/work/pytorch/examples/dsplit.py", line 5, in fn
return torch.dsplit(x, sections)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
cc @ezyang @msaroufim @wconstab @bdhirsh
### Versions
N/A
| 0 |
2,072 | 104,808 |
PyTorch built with CuDNN-8.8.1 crashes if CuDNN-8.9.2 is installed on the system
|
module: cudnn, module: ci, triaged
|
### ๐ Describe the bug
When the PyTorch manywheels docker container was finally updated to 8.9.2 as part of https://github.com/pytorch/builder/pull/1436, but before the PyPI dependencies were updated in https://github.com/pytorch/pytorch/pull/104757, basic attempts to use CuDNN (for example by running https://github.com/pytorch/builder/blob/main/test_example_code/cnn_smoke.py ) segfaulted:
```
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:
/opt/python/cp38-cp38/bin/python /builder/test_example_code/cnn_smoke.py
Segmentation fault (core dumped)
```
And backtrace was exceedingly unhelpful:
```
* thread #1: tid = 789, 0x00007f3fe257c140 libcuda.so.525.105.17`??? + 272, name = 'python', stop reason = invalid address (fault address: 0x20000002ee0)
frame #0: 0x00007f3fe257c140 libcuda.so.525.105.17`??? + 272
libcuda.so.525.105.17`??? + 272:
-> 0x7f3fe257c140: movq (%rsi), %rax
0x7f3fe257c143: testq %rax, %rax
0x7f3fe257c146: je 0x7f3fe257c1d0 ; ??? + 416
0x7f3fe257c14c: movq %rax, 0x18(%rsp)
```
But the list of loaded shared libraries provided an insight: it happens because some cuDNN libraries were loaded from 8.8 (fetched from PyPI) and some from 8.9 (installed in /usr/local/cuda); see entries [90] and [91] below:
````
(lldb) image list
[ 0] 6C2263D9-55FE-580A-8929-71B53C242367-3B19A31F /opt/python/cp38-cp38/bin/python
[ 1] BFBE63A1-682F-2156-75D7-35A28ADB0399-1C056202 /opt/_internal/cpython-3.8.1/bin/../lib/libpython3.8.so.1.0
[ 2] 97BE6F91-99FE-D449-1B00-AA91F7E6EACC-4D5328F7 /lib64/libcrypt.so.1
[ 3] E10CC8F2-B932-FC3D-AEDA-22F8DAC5EBB9-69524E5B /lib64/libpthread.so.0
[ 4] 7F2E9CB0-769D-7E57-BD66-9B485A74B537-B63A57C4 /lib64/libdl.so.2
[ 5] FF2196BD-22A8-4430-54C8-3031E0E76EB0-1BA1219C /lib64/libutil.so.1
[ 6] 7615604E-AF4A-068D-FAE5-085444D15C0D-EE93DFBD /lib64/libm.so.6
[ 7] 9470E279-388F-7F9C-B2ED-3B2872D0C209-5B191FF4 /lib64/libc.so.6
[ 8] 020C788B-41DC-C71A-EE66-B822D7670BC4-347DA006 /lib64/libfreebl3.so
[ 9] 62C44997-4331-341B-B08D-CCE3859560A2-2AF1E172 /lib64/ld-linux-x86-64.so.2
[ 10] 0D1845E2-DCE9-324E-DA46-3088C89DE7E8-59E28958 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/math.cpython-38-x86_64-linux-gnu.so
[ 11] 73EAA68F-88B6-DE75-8625-0A356F401B8B-89782B06 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_heapq.cpython-38-x86_64-linux-gnu.so
[ 12] 36C78741-A4D8-6CE0-8FB1-1C276CCA289D-A58C7A57 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
[ 13] 7BE2DC09-660D-2D01-F922-AA5F2E9BBD01-D01E8906 /usr/lib64/libffi.so.6.0.1
[ 14] AB0E08CA-A10B-F4C6-DA16-B3995B4CBF39-48429C5E /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_struct.cpython-38-x86_64-linux-gnu.so
[ 15] 8C967D92-A6FC-0E72-412C-BAC5E0722D8D-5C30865B /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_opcode.cpython-38-x86_64-linux-gnu.so
[ 16] EF853282-CC54-DDDE-E64B-59AAA353E15D-B67B8274 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/zlib.cpython-38-x86_64-linux-gnu.so
[ 17] B9D5F734-28BD-6AD6-8C96-986B57BEA3B7-CEDB9745 /usr/lib64/libz.so.1.2.7
[ 18] A9645FE2-9750-BD31-6283-2DF059E69B86-13DA5C78 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_bz2.cpython-38-x86_64-linux-gnu.so
[ 19] 0C85C038-6F0C-F41E-A399-69CF7F58A558-D1AD3235 /usr/lib64/libbz2.so.1.0.6
[ 20] 66A8EE3A-92BF-464D-2C71-56A708236DB5-3428B8E1 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_lzma.cpython-38-x86_64-linux-gnu.so
[ 21] F744796E-A6FA-4D80-1941-4F5487EE85DE-74DC5ADC /usr/lib64/liblzma.so.5.2.2
[ 22] D8F0C268-9D0F-2CF2-E591-17522E780D04-1358A073 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/grp.cpython-38-x86_64-linux-gnu.so
[ 23] B66EE92B-0B94-A948-10A0-1477AEEE7BD4-CED92595 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_bisect.cpython-38-x86_64-linux-gnu.so
[ 24] 29B99260-D20D-F204-D75F-032FAB3F784C-89D2CD4E /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_sha512.cpython-38-x86_64-linux-gnu.so
[ 25] CF95AB0E-35CC-43A7-C148-CA3B6E9D0A91-CFA55D55 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_random.cpython-38-x86_64-linux-gnu.so
[ 26] 3585F61A-C47F-B239-8DC7-A16CA843AE20-A5FC2992 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_posixsubprocess.cpython-38-x86_64-linux-gnu.so
[ 27] EFB58F7F-8A1C-D376-980F-6248D9A1A77F-734464AA /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/select.cpython-38-x86_64-linux-gnu.so
[ 28] 0658FA78-ADAF-06A7-E581-AED8E4067BAB-0AAB9E41 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libtorch_global_deps.so
[ 29] 5272492A-E71E-0A00-0A8F-2BF5B6727F82-D695650D /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cufft/lib/libcufft.so.11
[ 30] 861D8F1C-AD8C-0333-7FAA-099C54118A3A-A4BB1B1B /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/curand/lib/libcurand.so.10
[ 31] 76DF3592-B88C-3034-3C5A-520ACB5219B8-19E26DB0 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cublas/lib/libcublas.so.12
[ 32] 7B302283-5A98-F516-72BC-A6F6AFCBC04B-2B2EC0A3 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cublas/lib/libcublasLt.so.12
[ 33] 3130AC46-E7FB-8FE6-09E7-5651BC752B0C-A5BDE565 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12
[ 34] 8AF27612-DA52-D3A7-806C-B5C936DCCF53-6CAEDF7C /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/nvtx/lib/libnvToolsExt.so.1
[ 35] 5F4FB88A-F97B-E3EC-ACC7-1363136BB015-B2A07119 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libgomp-a34b3233.so.1
[ 36] 3E44DF70-5594-2478-D052-E40FDD1F5B78-62B152B0 /usr/lib64/librt-2.17.so
[ 37] EDF51350-C7F7-1496-149D-064AA8B1441F-786DF88A /usr/lib64/libgcc_s-4.8.5-20150702.so.1
[ 38] 09CFB171-3101-10BC-7EA9-F4476C9FA044-D85BAFF4 /usr/lib64/libstdc++.so.6.0.19
[ 39] 51435A43-4D86-772E-08CF-3DC21F4C06EB-DD3ED21D /usr/lib64/libcuda.so.525.105.17
[ 40] 26C34EAE-C0EE-9260-E96A-B862426D0C98-E3CD59BE /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/_C.cpython-38-x86_64-linux-gnu.so
[ 41] CB0C8FA4-6BD3-9CBA-6EF5-9F214A2F4280-70926875 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libtorch_python.so
[ 42] 899D0D99-E8FB-2D1F-4664-C75C38C58973-B6F7564C /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libshm.so
[ 43] E51BEADA-63C5-41AE-2C21-91BAC506413C-D3A69A09 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libtorch.so
[ 44] B69D8FC0-0EDA-9E06-976F-F5DDF07F3976-0135764C /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so
[ 45] 37A63A40-59E3-0C41-7142-06C304712A9E-991F5719 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so
[ 46] B987F520-A143-3C26-F8E7-21C520AA0141-5206B7E4 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libc10_cuda.so
[ 47] 8C597938-23B1-B92F-F463-C85D3B6B3F47-777647C7 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libc10.so
[ 48] 114B774D-3354-90C2-1AA9-55D7E18032C6-1D0A73FF /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cudnn/lib/libcudnn.so.8
[ 49] 398392FB-0000-0000-0000-000000000000 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cuda_cupti/lib/libcupti.so.12
[ 50] FB88E924-2FD2-A229-005F-3C1520C10393-28FDB9B6 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cusparse/lib/libcusparse.so.12
[ 51] 1DAD5F31-1750-CD20-62C4-F0915AEC7610-635E0A7D /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/nccl/lib/libnccl.so.2
[ 52] 83B6DD72-558E-E08A-0675-E7BC58C04170-F02E42CB /usr/local/cuda-12.1/targets/x86_64-linux/lib/libnvJitLink.so.12.1.55
[ 53] 4757FADA-DDCC-A281-D3B0-2988FBB8A437-24FCBEFF /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/torch/lib/libnvfuser_codegen.so
[ 54] 5535086D-3380-568F-8EAE-CFA2E73F456F-1EDD94EC /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.12
[ 55] B4662EB0-AFDF-C43A-60A2-5B1C5AC6C64B-F0CABC9A /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_json.cpython-38-x86_64-linux-gnu.so
[ 56] 9AF3FB5E-D069-3360-93BB-BA1892AC868E-59D621BB /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
[ 57] 8C5774C0-E3F6-3EBC-5CD3-4216721D90FD-4AC8050C /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy.libs/libopenblas64_p-r0-15028c96.3.21.so
[ 58] 5BBE74EB-6855-E0A2-C043-C0BEC2F484BF-3E9F14C0 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy.libs/libgfortran-040039e1.so.5.0.0
[ 59] 549B4C82-3477-8545-9571-C79239872AD3-1509DCF4 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy.libs/libquadmath-96973f99.so.0.0.0
[ 60] 42B95253-F816-764F-54D2-51562B70103E-0D1784A0 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_datetime.cpython-38-x86_64-linux-gnu.so
[ 61] A96F33E1-AD90-BC7A-56FB-75417FA015F6-2C86AFFB /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so
[ 62] 6F9D5615-1A0A-60B5-0E75-B75D5B31CCB8-3647E64F /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_contextvars.cpython-38-x86_64-linux-gnu.so
[ 63] 95E12C42-F6DC-52C4-0771-4BC5368B9194-D159EC7F /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/core/_multiarray_tests.cpython-38-x86_64-linux-gnu.so
[ 64] 4A6F85D5-9ADF-BB83-0408-89A6CE872F2A-86400F82 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/linalg/_umath_linalg.cpython-38-x86_64-linux-gnu.so
[ 65] 7178A59D-0F32-4EE7-7027-7F2FD4134453-743F845E /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/fft/_pocketfft_internal.cpython-38-x86_64-linux-gnu.so
[ 66] 6E091182-1E1A-C873-9EBB-353B3754F257-EA7A5198 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/mtrand.cpython-38-x86_64-linux-gnu.so
[ 67] E61710B6-BA4C-131A-47EF-45980C3BC833-74003CCB /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/bit_generator.cpython-38-x86_64-linux-gnu.so
[ 68] 78AD06AD-F174-A044-8054-922C701AB973-F2E762F2 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_common.cpython-38-x86_64-linux-gnu.so
[ 69] 8129B613-27DE-6C7C-9BC0-8FA8A258BA82-4DCD11B8 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/binascii.cpython-38-x86_64-linux-gnu.so
[ 70] 9F92A5F5-D4A1-2173-519C-2D9C50C7BFCB-D03AF5AC /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_hashlib.cpython-38-x86_64-linux-gnu.so
[ 71] 9A3DB2C4-F4E4-E261-9ECB-6A322E88E3AE-65F66327 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_blake2.cpython-38-x86_64-linux-gnu.so
[ 72] 6E2F9EEB-3FA2-EEF6-C332-CC5980E9E0B9-A887579B /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_sha3.cpython-38-x86_64-linux-gnu.so
[ 73] B5015B93-D909-5C06-9885-88CBB3B8E0D5-8226CD6C /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_bounded_integers.cpython-38-x86_64-linux-gnu.so
[ 74] D5C1B0AB-8061-76E6-EA15-F365C47134D3-C3C7E59D /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_mt19937.cpython-38-x86_64-linux-gnu.so
[ 75] 7985FDCE-EF43-F84E-C1D9-B01D696F712B-C8D3B0C9 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_philox.cpython-38-x86_64-linux-gnu.so
[ 76] 64078B93-4F31-DAC6-2149-836EA303D489-C6550A12 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_pcg64.cpython-38-x86_64-linux-gnu.so
[ 77] 470E2A1D-8BB2-53EE-BEC1-582F43F4871C-098E049F /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_sfc64.cpython-38-x86_64-linux-gnu.so
[ 78] 4BE2A108-0EF6-2E2C-DFCF-06A498EA34D3-97E63274 /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/numpy/random/_generator.cpython-38-x86_64-linux-gnu.so
[ 79] 0A973651-2284-007D-E38E-08BACF8645D2-5EE14141 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/cmath.cpython-38-x86_64-linux-gnu.so
[ 80] 44B37B80-7FDD-5D29-3751-6D93A148DA4B-5C738939 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_socket.cpython-38-x86_64-linux-gnu.so
[ 81] 60FF3E5D-49DD-339D-4C36-C844707A6DCD-3D9B2DCE /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/array.cpython-38-x86_64-linux-gnu.so
[ 82] D5B0A305-F4ED-8584-D4F4-ED26D80156E9-61B7B32F /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_multiprocessing.cpython-38-x86_64-linux-gnu.so
[ 83] D476780F-37C6-DE60-EF7F-6BC76C18BDE9-EA5B1D8A /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_ssl.cpython-38-x86_64-linux-gnu.so
[ 84] 3E0CE9DE-5FC0-FBD3-C6E1-BD515885008A-5A589D5E /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_queue.cpython-38-x86_64-linux-gnu.so
[ 85] F59B22DB-1AD8-17BC-2300-396FD62CA269-25B91238 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_asyncio.cpython-38-x86_64-linux-gnu.so
[ 86] 20069E7C-6172-9980-1E27-00893D29E322-5822355B /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/_decimal.cpython-38-x86_64-linux-gnu.so
[ 87] 62807570-4697-0A8B-9A08-965750BE07BA-0098AE75 /opt/_internal/cpython-3.8.1/lib/python3.8/lib-dynload/unicodedata.cpython-38-x86_64-linux-gnu.so
[ 88] 9E6CEDED-CD8B-82BA-57C1-98B7F514C711-CCC9199F /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvfuser/_C.cpython-38-x86_64-linux-gnu.so
[ 89] F70D737E-0000-0000-0000-000000000000 /usr/lib64/libnvidia-ml.so.525.105.17
[ 90] AE971699-772B-BDA3-51A3-28B8F318DAFD-4C28A1B4 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
[ 91] EFA0AE70-BC7A-9C92-0E45-060D810FAF61-07BD23BE /opt/_internal/cpython-3.8.1/lib/python3.8/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8
[ 92] 1AD32154-4717-01BD-AB25-B9F04FAFA0AA-56210145 /usr/local/cuda-12.1/targets/x86_64-linux/lib/libnvrtc.so.12.1.55
````
I think it would be great to have two follow-ups from this issue:
- Make sure that there is only one definition of the CuDNN version (probably in the builder repo)
- Make sure that cuDNN dynamic loading prefers fully-versioned libraries to partially versioned ones (cc: @ptrblck)
### Versions
Nightly/CI
cc @csarofeen @ptrblck @xwang233 @seemethere @pytorch/pytorch-dev-infra
| 0 |
2,073 | 104,797 |
Regression in Dalle2 due to dynamic shapes
|
triaged, oncall: pt2, module: dynamic shapes
|
### ๐ Describe the bug
See regression in Dalle2_Pytorch [here](https://hud.pytorch.org/benchmark/torchbench/inductor_with_cudagraphs?startTime=Fri,%2030%20Jun%202023%2020:02:40%20GMT&stopTime=Fri,%2007%20Jul%202023%2020:02:40%20GMT&granularity=hour&mode=inference&dtype=bfloat16&lBranch=main&lCommit=54e320d4d1654d45b65c3a5a1ba8fe35faf21460&rBranch=main&rCommit=7ae100628ec530e1da7bd5e5f86024afa8843a32).
And the corresponding error from the [log](https://ossci-raw-job-status.s3.amazonaws.com/log/14785878501):
```
2023-07-05T09:35:43.8510365Z ERROR:common:Failed running call_function <function rearrange at 0x7f524b7b6ef0>(*(FakeTensor(..., device='cuda:0', size=(4, 128, s0, s1)), 'b c ... -> b ... c'), **{}):
2023-07-05T09:35:43.8511611Z unhashable type: 'SymInt'
2023-07-05T09:35:43.8515016Z
2023-07-05T09:35:43.8515940Z from user code:
2023-07-05T09:35:43.8517079Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/dalle2_pytorch/dalle2_pytorch.py", line 1670, in forward
2023-07-05T09:35:43.8517834Z h = rearrange(h, 'b c ... -> b ... c')
2023-07-05T09:35:43.8518130Z
```
### Error logs
_No response_
### Minified repro
_No response_
### Versions
master
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 7 |
2,074 | 104,793 |
have inductor fallback for fp16.view(dtype=torch.int16)
|
Stale, module: inductor, ciflow/inductor
|
This is a potential fix for https://github.com/pytorch/pytorch/issues/104791. Totally open to suggestions (should we not land this, and instead figure out how to get `tmp0.to(tl.int16, bitcast=True)` to work in triton?) - but this PR detects the case mentioned in the issue and falls back to eager when necessary.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #104793
* #106381
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @ngimel
| 2 |
2,075 | 104,791 |
inductor/triton fails on `view(..., dtype=torch.int16)`
|
triaged, oncall: pt2, module: inductor
|
It looks like triton can't handle bitcasting `float16` to `int16`. This example works on cpu though. Repro:
```
import torch
@torch.compile
def f(x):
x_view = x.view(dtype=torch.int16)
return x_view.mul(2)
x = torch.ones(4, dtype=torch.float16, device='cuda')
out = f(x)
```
output:
```
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CompilationError: at 8:19:def triton_(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 4
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.load(in_ptr0 + (x0), xmask).to(tl.float32)
tmp1 = tmp0.to(tl.int16, bitcast=True)
^
ValueError('Cannot bitcast data-type of size 32to data-type of size 16')
```
One question: should triton be able to handle this? Otherwise, maybe we can change the lowering for `view.dtype` to fall back to eager when it sees an fp16->int16 conversion.
I ran into this when trying to use @jcaip 's `SparseSemiStructuredTensor` subclass prototype with torch.compile.
cc @ezyang @msaroufim @wconstab @anijain2305 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8
| 7 |
2,076 | 104,789 |
[BE] Evaluate and improve eager for-loop optimizer memory perf
|
module: optimizer, triaged, better-engineering, actionable
|
We are not the most careful with using the least amount of memory in our for-loop implementations of our optimizers. For example, we have `max_exp_avg_sqs[i].copy_(torch.maximum(max_exp_avg_sqs_i, exp_avg_sq))` in adam.py which could be `torch.maximum(max_exp_avg_sqs_i, exp_avg_sq, out = max_exp_avg_sqs[i])` to save an allocation. Motivated by this comment: https://github.com/pytorch/pytorch/pull/104781#issuecomment-1625621945
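As a concrete sketch of the kind of change meant here (variable names mirror adam.py):
```python
# before: torch.maximum allocates a temporary, which is then copied into the state tensor
max_exp_avg_sqs[i].copy_(torch.maximum(max_exp_avg_sqs[i], exp_avg_sq))

# after: write the result directly into the state tensor, saving one allocation
torch.maximum(max_exp_avg_sqs[i], exp_avg_sq, out=max_exp_avg_sqs[i])
```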
This issue is to track the task of going through each of our for-loop (single_tensor) optimizers and ensuring that we are clean and minimal with our memory footprint.
- [ ] Adam
- [ ] AdamW
- [ ] Adagrad
- [ ] Adadelta
- [ ] Adamax
- [ ] NAdam
- [ ] RAdam
- [ ] ASGD
- [ ] SGD
- [ ] Rprop
- [ ] RMSprop
- [ ] LBFGS
- [ ] SparseAdam
cc @vincentqb @jbschlosser @albanD
| 2 |
2,077 | 104,788 |
Use `isinstance` instead of `type` when checking for `torch.nn.Parameter`
|
module: nn, triaged
|
### ๐ The feature, motivation and pitch
From comment https://github.com/pytorch/pytorch/pull/104069?notification_referrer_id=NT_kwDOAhpHxbM2ODUxNzMzMDg5OjM1Mjc2NzQx#discussion_r1256262813
```
p = torch.nn.Parameter(t)
type(p) is torch.nn.Parameter
```
only works if `t` (the data passed to nn.Parameter) is a regular `torch.Tensor` and not a tensor subclass. However, `isinstance(p, torch.nn.Parameter)` will work for both regular tensors and tensor subclasses. We should update all instances of `type(t) is Parameter` to `isinstance(t, Parameter)` within PyTorch core.
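A quick illustration of the difference with a trivial tensor subclass (behavior as I understand it on recent releases):
```python
import torch
from torch.nn import Parameter

class MyTensor(torch.Tensor):
    pass

p = Parameter(torch.randn(2).as_subclass(MyTensor))
print(type(p) is Parameter)       # False: the subclass type is preserved
print(isinstance(p, Parameter))   # True: Parameter's metaclass recognizes it
```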
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr
| 2 |
2,078 | 104,776 |
torch.nn.CrossEntropyLoss: class weighting changes label_smoothing
|
module: nn, module: loss, triaged
|
### ๐ Describe the bug
The class weighting in torch.nn.CrossEntropyLoss changes the probabilities that classification networks predict if used together with nonzero label smoothing. This leads to networks learning to predict wrong uncertainties on unbalanced datasets.
In extreme cases / datasets with rare classes the predicted probabilities are so wrong that the wrong class gets predicted.
I wrote a toy example below to reproduce this issue. It is a simple 2-layer network that gets as input a one-hot vector with a class label and is supposed to predict the same class. As you can imagine, it gets 100% training accuracy with standard settings and predicts uncertainties according to the correct label-smoothed probabilities.
The code snippet contains 4 tests with different class weighting.
When upweighting one of the classes (together with label smoothing), the training accuracy drops to 0.0008 because the network always puts a probability of 0.9147 on the rare class. This is because the label smoothing mass on the incorrect class is increased by the class weighting.
Training without label smoothing but with class weighting again gives 100% accuracy.
My expectation would be that class weighting does not change how much probability the label smoothing puts on the wrong classes.
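For what it's worth, here is my rough back-of-the-envelope reading of why this happens in the rare-class test (ignoring PyTorch's exact weighted-mean normalization, so the numbers are only approximate):
```python
eps, C = 0.1, 4
q_true, q_other = 1 - eps + eps / C, eps / C          # 0.925, 0.025 smoothed targets
w_common, w_rare = 1 / (4 * 0.333), 1 / (4 * 0.001)   # ~0.75, ~250 balanced weights

# per-class loss mass m_c = w_c * q_c for a sample whose true class is a common one
m_true = w_common * q_true                            # ~0.69
m_rare = w_rare * q_other                             # ~6.25
m_other = w_common * q_other                          # ~0.019

# the softmax minimizing sum_c m_c * (-log p_c) is p_c = m_c / sum(m)
p_rare = m_rare / (m_true + m_rare + 2 * m_other)
print(p_rare)                                         # ~0.90, close to the 0.9147 above
```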
Example:
(I use sklearn to automatically compute balanced class weights here but one can of course also set them manually.)
```Python
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
num_classes = 4
# Test 1: balanced dataset + label smoothing -> balance class weights -> no problem
#probabilities = [0.25, 0.25, 0.25, 0.25]
#label_smoothing = 0.1
# Test 2: imbalanced dataset + label smoothing -> one class has 8x higher weight -> wrong uncertainties
#probabilities = [0.32, 0.04, 0.32, 0.32]
#label_smoothing = 0.1
# Test 3: rare class + label smoothing -> one class 333x higher weight -> wrong uncertainties + wrong predictions
probabilities = [0.333, 0.001, 0.333, 0.333]
label_smoothing = 0.1
# Test 4: rare class no label smoothing -> one class ~300x higher weight -> no problem
#probabilities = [0.333, 0.001, 0.333, 0.333]
#label_smoothing = 0.0
model = torch.nn.Sequential(nn.Linear(num_classes, 32),
nn.ReLU(inplace=True),
nn.Linear(32, num_classes))
optimizer = optim.AdamW(model.parameters(), lr=0.0003)
dataset_size = 10000
options = np.arange(num_classes)
# Normalize to make sure we have valid probabilities
probabilities = [float(i)/sum(probabilities) for i in probabilities]
print('Class probabilities in dataset', probabilities)
inputs = []
labels = []
labels_numpy = []
for _ in range(dataset_size):
sample = np.random.choice(options, p=probabilities)
labels_numpy.append(sample)
tensor_sample = torch.from_numpy(np.array(sample))
labels.append(tensor_sample)
inputs.append(F.one_hot(tensor_sample, num_classes).to(torch.float32))
inputs = torch.stack(inputs)
labels = torch.stack(labels)
model.train()
class_weights = torch.tensor(compute_class_weight(class_weight='balanced', classes=np.arange(4), y=labels_numpy),
dtype=torch.float32)
print('Class weights', class_weights)
ce_loss = nn.CrossEntropyLoss(weight=class_weights, label_smoothing=label_smoothing)
optimizer.zero_grad()
for epoch in range(5000):
prediction = model(inputs)
loss = ce_loss(prediction, labels)
if epoch % 100 == 0:
print('Epoch:', epoch, loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
final_logits = model(inputs)
final_prob = torch.nn.functional.softmax(final_logits, dim=1)
final_pred = torch.argmax(final_prob, dim=1)
print(final_prob[:10])
print(final_pred[:10])
print(labels[:10])
test = labels == final_pred
print('Accuracy:', sum(test) / len(test))
```
### Versions
[pip3] numpy==1.21.6
[pip3] pytorch-lightning==0.7.1
[pip3] torch==1.12.1+cu113
[pip3] torchaudio==0.12.1+cu113
[pip3] torchmetrics==0.11.0
[pip3] torchvision==0.13.1+cu113
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch-lightning 0.7.1 pypi_0 pypi
[conda] torch 1.12.1+cu113 pypi_0 pypi
[conda] torchaudio 0.12.1+cu113 pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchvision 0.13.1+cu113 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 4 |
2,079 | 104,775 |
Subgraph matcher returned a false match
|
triaged, module: fx, oncall: pt2
|
I ran into a "bug" where the subgraph matcher returned a match, but the matched pattern doesn't actually exist in the original model. This resulted in the following error in subgraph rewriter:
```
File "/home/andrewor/local/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/fx/subgraph_rewriter.py", line 218, in replace_pattern_with_filters
return _replace_pattern(gm, pattern, replacement, match_filters, ignore_literals)
File "/home/andrewor/local/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/fx/subgraph_rewriter.py", line 326, in _replace_pattern
gn = match.nodes_map[node]
KeyError: div__scalar
```
Matched pattern looks like this (doesn't exist in the original model):
```
def forward(self, x):
arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
empty_like_default = torch.ops.aten.empty_like.default(arg0, memory_format = torch.contiguous_format)
bernoulli__float = torch.ops.aten.bernoulli_.float(empty_like_default); empty_like_default = None
div__scalar = torch.ops.aten.div_.Scalar(bernoulli__float, 0.5); bernoulli__float = None
mul_tensor = torch.ops.aten.mul.Tensor(arg0, div__scalar); arg0 = div__scalar = None
return pytree.tree_unflatten([mul_tensor], self._out_spec)
```
Returned nodes map looks like this (doesn't look the same as the match pattern above):
```
NODES MAP:
mul_tensor : mul_tensor
arg0 : arg0
```
For more detail, see P784006912.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
2,080 | 104,768 |
Support for `eval` in functional_call
|
module: nn, triaged, module: functional UX
|
### ๐ The feature, motivation and pitch
`functional_call` on a model currently runs in `.train()` mode by default, which will fail on models containing, e.g., `BatchNorm2d` layers that explicitly need the `training` flag to be `False` at inference.
`functional_call` currently has no way of calling such layers in eval mode, and hence fails on `BatchNorm2d` layers.
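A minimal sketch of the current workaround (toggling the module flag around the call), assuming `functional_call` simply respects `module.training`; a `training`/`eval` argument on `functional_call` itself would remove the need for this:
```python
import torch
from torch.func import functional_call

bn = torch.nn.BatchNorm2d(3)
state = {**dict(bn.named_parameters()), **dict(bn.named_buffers())}
x = torch.randn(2, 3, 8, 8)

bn.eval()                                # run with module.training == False
out = functional_call(bn, state, (x,))
bn.train()
```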
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @zou3519
| 1 |
2,081 | 104,761 |
Torch Filename Storage hangs on "file_system" sharing strategy after in-place fill
|
module: multiprocessing, triaged, module: mps
|
### ๐ Describe the bug
On MacOS M-series machines (and Linux machines using 'file_system' sharing), if you use an in-place fill operation on a Linear layer before launching torch multiprocessing, then shared filename storage will fail to allocate for sufficiently large batches of data. The problem also appears if you move a normal tensor into shared memory on the host. This is a problem because the torch DataLoader relies on torch Queues, which in turn rely on the ForkingPickler, which in turn relies on shared filename storage.
```python
import torch
import torch.nn as nn
from torch.multiprocessing import Process, set_start_method, set_sharing_strategy
def _do_pickle(batch_size):
print(f"starting batch size: {batch_size}")
x = torch.ones((batch_size, 28, 28))
storage = x._typed_storage()
storage._share_filename_cpu_()
if __name__ == "__main__":
set_start_method("fork", force=True)
set_sharing_strategy('file_system') # This is the default/only option on Mac, but this problem also exists for linux when using this file sharing strategy
t = nn.Linear(28 * 28, 300)
t.weight.data.fill_(0.01) # If you comment this out then both batch sizes succeed
# An alternative way to reproduce this error:
# t = torch.ones((128, 28, 28)) # Smaller tensors don't freeze, not sure if this is the exact limit or not
# t.share_memory_()
w = Process(target=_do_pickle, args=(1, ))
w.daemon = True
w.start()
w.join()
print("batch size 1 succeeded")
w = Process(target=_do_pickle, args=(128, ))
w.daemon = True
w.start()
w.join()
print("batch size 128 succeeded")
```
The expected output here would be to see:
```
starting batch size: 1
batch size 1 succeeded
starting batch size: 128
batch size 128 succeeded
```
The actual output is as follows, with the program then hanging forever rather than exiting:
```
starting batch size: 1
batch size 1 succeeded
starting batch size: 128
```
### Versions
**MacOS**
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.3 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:13) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-13.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchinfo==1.7.2
[pip3] torchview==0.2.6
[pip3] torchvision==0.15.2
[conda] numpy 1.24.2 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchinfo 1.7.2 pypi_0 pypi
[conda] torchview 0.2.6 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
---
**Linux**
Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-DGXS-16GB
Nvidia driver version: 470.182.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1199.199
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4397.07
Virtualization: VT-x
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 5 MiB
L3 cache: 50 MiB
NUMA node0 CPU(s): 0-39
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect
---
cc @VitalyFedyunin @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr @abhudev
| 5 |
2,082 | 104,755 |
fsdp load model causing insufficient CPU memory
|
triaged, module: fsdp
|
### ๐ The feature, motivation and pitch
When using FSDP, the model needs to be loaded on CPU, but every process loads its own full copy, which means it needs 8x the CPU memory on an 8-GPU machine, causing insufficient CPU memory. Is there any solution to this now? If not, please optimize as soon as possible. Thank you.
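A possible workaround sketch, assuming the checkpoint can be loaded on rank 0 only and broadcast to the other ranks by FSDP via `sync_module_states` (here `build_model` and `CKPT_PATH` are placeholders, not real APIs, and `torch.distributed` is assumed to be initialized already):
```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

rank = dist.get_rank()
model = build_model()  # placeholder: construct the architecture without pretrained weights
if rank == 0:
    # Only one process materializes the full CPU copy of the checkpoint.
    model.load_state_dict(torch.load(CKPT_PATH, map_location="cpu"))

# sync_module_states=True broadcasts rank 0's parameters/buffers to all ranks.
fsdp_model = FSDP(
    model,
    device_id=torch.cuda.current_device(),
    sync_module_states=True,
)
```
Note this only avoids N simultaneous copies of the loaded checkpoint; fully avoiding N copies of the module itself would additionally need something like meta-device initialization.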
### Alternatives
_No response_
### Additional context
_No response_
cc @zhaojuanmao @mrshenli @rohan-varma @awgu
| 3 |
2,083 | 104,748 |
torch._dynamo.exc.InternalTorchDynamoError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend
|
triaged, oncall: pt2, module: export
|
### ๐ Describe the bug
I am trying to run the following code to compile a BERT model
``` python
import torch
import onnx
from transformers import BertTokenizer, BertModel
device = torch.device("cuda:0")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model = model.to(device)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt').to(device="cuda:0")
opt_model = torch.compile(model, mode='max-autotune', fullgraph=True)
torch.onnx.export(opt_model, tuple(encoded_input.values()),
f='bert_triton.onnx',
input_names=['input_ids'],
output_names=['logits'],
dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'},
'logits': {0: 'batch_size', 1: 'sequence'}},
do_constant_folding=True,
opset_version=13,
)
```
On exporting the compiled code to ONNX, I get the following error:
```
File "/home/nipunagarwala/pytorch/torch/_subclasses/meta_utils.py", line 461, in meta_tensor
r.set_(r_s, storage_offset, sizes, strides)
torch._dynamo.exc.InternalTorchDynamoError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_local_scalar_dense' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
It says that CUDA is supported, and the system I am running it on has CUDA. I wonder why it still throws an error.
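One thing that may be worth trying: `torch.onnx.export` performs its own tracing and, at least in this version, may not handle a `torch.compile`-wrapped module, so exporting the original eager model can sidestep the Meta-backend error. A sketch reusing the arguments from the snippet above (this assumes the ONNX file does not need to come from the compiled graph):
```python
# Export the eager model; torch.compile can still be used separately for
# PyTorch-native inference.
torch.onnx.export(
    model,  # the original model, not opt_model
    tuple(encoded_input.values()),
    f='bert_triton.onnx',
    input_names=['input_ids'],
    output_names=['logits'],
    dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'},
                  'logits': {0: 'batch_size', 1: 'sequence'}},
    do_constant_folding=True,
    opset_version=13,
)
```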
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+gitc42de84
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
Stepping: 9
CPU MHz: 4200.000
CPU max MHz: 4500.0000
CPU min MHz: 800.0000
BogoMIPS: 8400.00
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0a0+gitc42de84
[pip3] torchviz==0.0.2
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-include 2023.1.0 h06a4308_46342
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 1 |
2,084 | 104,739 |
Error reporting uses formal parameter names of downstream C++ function
|
triaged, module: multi-headed-attention
|
### ๐ Describe the bug
When calling sdpa, I get the error message `RuntimeError: The size of tensor a (256) must match the size of tensor b (2) at non-singleton dimension 2` -- there are no parameters a and b in sdpa. Likely it's the formal parameter names of a function called from inside sdpa, which does not relate to the visible Python call stack. (Partial stack backtrace below.)
Can we / Should we check parameters inside sdpa and use the proper sdpa formal parameter names? I think this behavior has shown up for other functions as well, although getting the order of tensor dimensions right when updating arbitrary transformer code feels an order of magnitude harder. No concern about having to do that per se, but it would be nice to have errors expressed in terms of sdpa to help debug.
```
File "/data/users/mikekg/fbsource/buck-out/v2/gen/fbcode/46d412658cf1d160/careml/nlp/toolkit/__tests__/tests#link-tree/pytorch/text/fb/nn/modules/transformer.py", line 69, in forward
attention = self.attention(input, key_padding_mask)
File "/data/users/mikekg/fbsource/buck-out/v2/gen/fbcode/46d412658cf1d160/careml/nlp/toolkit/__tests__/tests#link-tree/torch/nn/modules/module.py", line 1505, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/mikekg/fbsource/buck-out/v2/gen/fbcode/46d412658cf1d160/careml/nlp/toolkit/__tests__/tests#link-tree/torch/nn/modules/module.py", line 1514, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/mikekg/fbsource/buck-out/v2/gen/fbcode/46d412658cf1d160/careml/nlp/toolkit/__tests__/tests#link-tree/pytorch/text/fb/nn/modules/multihead_attention.py", line 320, in forward
attn = F.scaled_dot_product_attention(q, k, v, key_padding_mask, self.dropout_p, False);
RuntimeError: The size of tensor a (256) must match the size of tensor b (2) at non-singleton dimension 2
```
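For context on the repro itself: the failing call passes `key_padding_mask` positionally as `attn_mask`, and sdpa expects that mask to broadcast against the `(batch, num_heads, q_len, k_len)` attention scores, which is likely where the internal-parameter-name error comes from. A sketch of a shape that does broadcast, assuming `key_padding_mask` is `(batch, src_len)` with `True` marking padded positions (the `nn.MultiheadAttention` convention) and q/k/v are `(batch, num_heads, seq_len, head_dim)`:
```python
import torch.nn.functional as F

# For sdpa's boolean attn_mask, True means "take part in attention",
# so invert the padding mask and make it broadcastable over heads/queries.
attn_mask = key_padding_mask.logical_not()[:, None, None, :]  # (batch, 1, 1, src_len)
attn = F.scaled_dot_product_attention(
    q, k, v, attn_mask=attn_mask, dropout_p=self.dropout_p, is_causal=False
)
```
Either way, the request above stands: it would help if the shape check were reported in terms of sdpa's own arguments.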
### Versions
fbcode trunk
| 5 |
2,085 | 104,732 |
torch.jit.trace says "Arguments for call are invalid" on torch.ops.aten.sub(3, x, alpha=3)
|
oncall: jit
|
### ๐ Describe the bug
The following model runs fine in pytorch but fails in `torch.jit.script`:
```python
import torch
class TestModule(torch.nn.Module):
def forward(self, x):
y = torch.ops.aten.sub(3, x, alpha=3)
return y
args = torch.tensor([1, 0, -10, 255, 256], dtype=torch.int)
model = TestModule()
print("Running Torch via python")
model(args) # Fine
torch.jit.script(model) # Error
```
with
```
Traceback (most recent call last):
File "", line 32, in <module>
torch.jit.script(model) # Error
File "torch/jit/_script.py", line 1284, in script
return torch.jit._recursive.create_script_module(
File "torch/jit/_recursive.py", line 480, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "torch/jit/_recursive.py", line 546, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "torch/jit/_recursive.py", line 397, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor:
Expected a value of type 'Tensor' for argument 'self' but instead found type 'int'.
aten::sub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor:
Expected a value of type 'Tensor' for argument 'self' but instead found type 'int'.
aten::sub.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'int'.
aten::sub.Scalar_out(Tensor self, Scalar other, Scalar alpha=1, *, Tensor(a!) out) -> Tensor(a!):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'int'.
aten::sub.int(int a, int b) -> int:
Keyword argument alpha unknown.
aten::sub.complex(complex a, complex b) -> complex:
Expected a value of type 'complex' for argument 'a' but instead found type 'int'.
aten::sub.float(float a, float b) -> float:
Expected a value of type 'float' for argument 'a' but instead found type 'int'.
aten::sub.int_complex(int a, complex b) -> complex:
Keyword argument alpha unknown.
aten::sub.complex_int(complex a, int b) -> complex:
Expected a value of type 'complex' for argument 'a' but instead found type 'int'.
aten::sub.float_complex(float a, complex b) -> complex:
Expected a value of type 'float' for argument 'a' but instead found type 'int'.
aten::sub.complex_float(complex a, float b) -> complex:
Expected a value of type 'complex' for argument 'a' but instead found type 'int'.
aten::sub.int_float(int a, float b) -> float:
Keyword argument alpha unknown.
aten::sub.float_int(float a, int b) -> float:
Expected a value of type 'float' for argument 'a' but instead found type 'int'.
aten::sub(Scalar a, Scalar b) -> Scalar:
Keyword argument alpha unknown.
sub(float a, Tensor b) -> Tensor:
Expected a value of type 'float' for argument 'a' but instead found type 'int'.
sub(int a, Tensor b) -> Tensor:
Keyword argument alpha unknown.
sub(complex a, Tensor b) -> Tensor:
Expected a value of type 'complex' for argument 'a' but instead found type 'int'.
```
This came up while we were ingesting torch decompositions (for `torch.ops.aten.rsub`) into `torch.jit.script`.
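For what it's worth, a scriptable workaround for this particular case, assuming the intended semantics of `torch.ops.aten.sub(3, x, alpha=3)` are `3 - 3 * x` (i.e. `self - alpha * other` with a scalar `self`):
```python
import torch

class TestModule(torch.nn.Module):
    def forward(self, x):
        # Same arithmetic, expressed with operators the script frontend
        # resolves without hitting the aten::sub overload ambiguity.
        return 3 - 3 * x

torch.jit.script(TestModule())  # scripts fine
```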
### Versions
Collecting environment information...
PyTorch version: 2.1.0.dev20230705+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 15.0.7
CMake version: version 3.26.3
Libc version: glibc-2.35
Python version: 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-166-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD EPYC Processor
CPU family: 23
Model: 1
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
Stepping: 2
BogoMIPS: 5988.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 virt_ssbd arat
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 16 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] iree-torch==0.0.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0.dev20230705+cpu
[pip3] torchvision==0.16.0.dev20230612+cpu
[pip3] triton==2.0.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 0 |
2,086 | 104,731 |
Correcting error message for invalid output_size input in nn.AdaptiveAvgPool2d
|
triaged, open source
|
Fixes #104698
Added a check before `_list_with_default` to ensure `output_size` is a valid shape.
| 9 |
2,087 | 104,729 |
Add support for NEON ISA in the Inductor C++ backend
|
triaged, module: inductor
|
### ๐ The feature, motivation and pitch
Context: The TorchInductor C++ backend currently supports vectorization in C++ Codegen through two Intel ISAs: AVX2 and AVX512, as mentioned in the [Update 5 Blog](https://dev-discuss.pytorch.org/t/torchinductor-update-5-cpu-backend-backend-performance-update-and-deep-dive-on-key-optimizations/1117#vectorization-in-c-codegen-4). While the Aten library does support Arm as well, we are yet to leverage its NEON/SVE ISAs to generate optimized kernels. The blog also [mentions](https://dev-discuss.pytorch.org/t/torchinductor-update-5-cpu-backend-backend-performance-update-and-deep-dive-on-key-optimizations/1117#vectorization-in-c-codegen-4:~:text=It%20can%20be,sub%2Dclasses.) that the VecISA class can be subclassed in order to support other ISAs.
Proposal: I am working on providing NEON ISA support for the TorchInductor's C++ backend. Particularly, I intend to provide a NEON implementation of the `vec_reduce_all()` function, which currently has optimized [AVX2 and AVX512 intrinsics implementations](https://github.com/pytorch/pytorch/blob/ced5c89b6fbe827a538b7ada96b2f9a5989871c7/aten/src/ATen/cpu/vec/functional_base.h#L37-L79) for x86 processors introduced by @mingfeima in #73953, as well as a [slow path](https://github.com/pytorch/pytorch/blob/ced5c89b6fbe827a538b7ada96b2f9a5989871c7/aten/src/ATen/cpu/vec/functional_base.h#L12-L28) implementation for other processors including Arm. I have implemented a NEON version for the function, wired up the Inductor's generated C++ to invoke this NEON path on Arm CPUs & I've seen performance improvements, particularly in the Softmax operation.
Posting this here for any discussion before raising a PR.
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng
| 17 |
2,088 | 104,719 |
Nondeterministic segfault in test_content_store.py under Dynamo config
|
module: tests, triaged, module: dynamo
|
### ๐ Describe the bug
Running `PYTORCH_TEST_WITH_DYNAMO=1 python test/test_content_store.py -v` enough times (~5) leads to a segfault. Weirdly, some unrelated PRs (e.g. https://github.com/pytorch/pytorch/pull/104481) seem to cause the segfault to be triggered more consistently.
Stack trace looks like the following and involves Python GC and weakrefs.: https://gist.github.com/zou3519/799c8c069d220beca9fa56a908044212
### Versions
main; Python 3.10.4
cc @mruberry @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78
| 2 |
2,089 | 104,712 |
torch.jit slicing error (styleganv2)
|
oncall: jit
|
### ๐ Describe the bug
```
class StyleGAN2GeneratorCSFT(StyleGAN2GeneratorClean):
"""StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).
It is the clean version without custom compiled CUDA extensions used in StyleGAN2.
Args:
out_size (int): The spatial size of outputs.
num_style_feat (int): Channel number of style features. Default: 512.
num_mlp (int): Layer number of MLP style layers. Default: 8.
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
narrow (float): The narrow ratio for channels. Default: 1.
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
"""
def __init__(
self,
out_size,
num_style_feat=512,
num_mlp=8,
channel_multiplier=2,
narrow=1,
sft_half=False,
):
super(StyleGAN2GeneratorCSFT, self).__init__(
out_size,
num_style_feat=num_style_feat,
num_mlp=num_mlp,
channel_multiplier=channel_multiplier,
narrow=narrow,
)
self.sft_half = sft_half
def forward(
self,
styles,
conditions,
input_is_latent=False,
noise=None,
randomize_noise=True,
truncation=1,
truncation_latent=None,
inject_index=None,
return_latents=False,
):
"""Forward function for StyleGAN2GeneratorCSFT.
Args:
styles (list[Tensor]): Sample codes of styles.
conditions (list[Tensor]): SFT conditions to generators.
input_is_latent (bool): Whether input is latent style. Default: False.
noise (Tensor | None): Input noise or None. Default: None.
randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True.
truncation (float): The truncation ratio. Default: 1.
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
inject_index (int | None): The injection index for mixing noise. Default: None.
return_latents (bool): Whether to return style latents. Default: False.
"""
# style codes -> latents with Style MLP layer
if not input_is_latent:
styles = [self.style_mlp(s) for s in styles]
# noises
if noise is None:
if randomize_noise:
noise = [None] * self.num_layers # for each style conv layer
else: # use the stored noise
noise = [
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
]
# style truncation
if truncation < 1:
style_truncation = []
for style in styles:
style_truncation.append(
truncation_latent + truncation * (style - truncation_latent)
)
styles = style_truncation
# get style latents with injection
if len(styles) == 1:
inject_index = self.num_latent
if styles[0].ndim < 3:
# repeat latent code for all the layers
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
else: # used for encoder with different latent code for each layer
latent = styles[0]
elif len(styles) == 2: # mixing noises
if inject_index is None:
inject_index = random.randint(1, self.num_latent - 1)
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
latent2 = (
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
)
latent = torch.cat([latent1, latent2], 1)
# main generation
out = self.constant_input(latent.shape[0])
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
skip = self.to_rgb1(out, latent[:, 1])
i = 1
for conv1, conv2, noise1, noise2, to_rgb in zip(
self.style_convs[::2],
self.style_convs[1::2],
noise[1::2],
noise[2::2],
self.to_rgbs,
):
out = conv1(out, latent[:, i], noise=noise1)
# the conditions may have fewer levels
if i < len(conditions):
# SFT part to combine the conditions
if self.sft_half: # only apply SFT to half of the channels
out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)
out_sft = out_sft * conditions[i - 1] + conditions[i]
out = torch.cat([out_same, out_sft], dim=1)
else: # apply SFT to all the channels
out = out * conditions[i - 1] + conditions[i]
out = conv2(out, latent[:, i + 1], noise=noise2)
skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space
i += 2
image = skip
if return_latents:
return image, latent
else:
return image, None
```
Getting the following error
```
When converting JIT by torch.jit
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::slice.Tensor(Tensor(a) self, int dim=0, int start=0, int end=9223372036854775807, int step=1) -> (Tensor(a)):
Expected a value of type 'Tensor' for argument 'self' but instead found type '__torch__.torch.nn.modules.container.___torch_mangle_1340.ModuleList'.
aten::slice.t(t[] l, int start, int end=9223372036854775807, int step=1) -> (t[]):
Could not match type __torch__.torch.nn.modules.container.___torch_mangle_1340.ModuleList to List[t] in argument 'l': Cannot match List[t] to __torch__.torch.nn.modules.container.___torch_mangle_1340.ModuleList.
aten::slice.str(str string, int start, int end=9223372036854775807, int step=1) -> (str):
Expected a value of type 'str' for argument 'string' but instead found type '__torch__.torch.nn.modules.container.___torch_mangle_1340.ModuleList'.
The original call is:
File "/tmp/ipykernel_6835/567265009.py", line 573
i = 1
for conv1, conv2, noise1, noise2, to_rgb in zip(
self.style_convs[::2],
~~~~~~~~~~~~~~~~~~~~ <--- HERE
self.style_convs[1::2],
noise[1::2],
```
What can be the solution for this? Please help.
### Versions
Collecting environment information...
PyTorch version: 1.5.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.10
Python version: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 3102.165
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.5.1
[pip3] torcheia==1.0.0
[pip3] torchvision==0.6.1
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.5.1 pypi_0 pypi
[conda] torcheia 1.0.0 pypi_0 pypi
[conda] torchvision 0.6.1 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| 2 |
2,090 | 104,711 |
New Loss Function Add In Pytorch
|
feature, module: loss, triaged
|
### ๐ The feature, motivation and pitch
I am trying to add a new loss function for use at object detection time.
One short story: suppose you create a project like face detection, and you need to find a loss function to train your model. At that point you have two ways: one is to call a PyTorch built-in loss function for this type of task, and the other is to create a custom loss function similar to the PyTorch loss functions. My question is: which way do you choose?
I know most people answer the first way, because it's the fast way.

A similar type of loss function is used for the YOLOv8 model; see this link: https://arxiv.org/pdf/2305.09972.pdf
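For reference, the "second way" above usually amounts to a small `nn.Module`; a generic skeleton is sketched below (the L1 term is only a placeholder: the actual formula from the paper/screenshot would go inside `forward`):
```python
import torch
import torch.nn as nn

class CustomDetectionLoss(nn.Module):
    """Skeleton for a custom object-detection loss; the formula is a placeholder."""

    def __init__(self, reduction: str = "mean"):
        super().__init__()
        self.reduction = reduction

    def forward(self, pred_boxes: torch.Tensor, target_boxes: torch.Tensor) -> torch.Tensor:
        # Placeholder term: replace with the IoU-style terms from the paper.
        loss = (pred_boxes - target_boxes).abs().sum(dim=-1)
        if self.reduction == "mean":
            return loss.mean()
        if self.reduction == "sum":
            return loss.sum()
        return loss
```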
### Alternatives
_No response_
### Additional context
_No response_
| 2 |
2,091 | 104,704 |
generate_vmap_rule=True sometimes gives batched grad_output
|
triaged, module: functorch
|
### ๐ Describe the bug
```python
import torch
from torch.func import jacrev, vmap
class Foo(torch.autograd.Function):
generate_vmap_rule = True
@staticmethod
def forward(x):
a = 0.5 * x**2
b = 0.5 * x**2
return a, b
@staticmethod
def setup_context(ctx, inputs, output):
ctx.save_for_backward(inputs[0])
@staticmethod
def backward(ctx, *grad_outputs):
x = ctx.saved_tensors[0]
# not vmapped when grad_outputs[-1] = 0
non_vmapped = grad_outputs[-1].new_zeros(x.shape)
non_vmapped[:] = x
return x
foo = Foo.apply
def bar(a, b):
return a * 10
def f(z):
return bar(*foo(z))
ttt = torch.randn(10, 2)
grad_vmap = vmap(jacrev(f))(ttt)
```
raises a "vmap incompatible in-place" error. Changing `bar` to `a * 10 + b` fixes the problem. It's unclear to me if this is a bug or not.
### Versions
main
cc @Chillee @samdow @kshitij12345 @janeyx99
| 0 |
2,092 | 104,702 |
[feature request] Specialized memory layouts and wide blocked/tiled dtypes for cublasLt/onednn: e.g. torch.float16x32 / torch.int8x32 (akin to torch.quint2x4)
|
feature, triaged, module: memory format
|
### ๐ The feature, motivation and pitch
I found a blog post explaining how to get speedups by using int8 gemm kernels on CUDA: https://www.speechmatics.com/company/articles-and-news/fast-and-accurate-gpu-quantization-for-transformers
It mentions several specialized memory layouts to maximize perf of cublas gemm ops (found them documented in https://docs.nvidia.com/cuda/cublas/#cublasltorder-t):
- `CUBLASLT_ORDER_COL32` (mentioned as the most performant memory layout for cublasLt int8 gemms)
- `CUBLASLT_ORDER_COL4_4R2_8C`
- `CUBLASLT_ORDER_COL32_2R_4R4`
These memory formats/layouts seem important for max-perf cublasLt for int8, as evidenced by this blog post and FasterTransformer kernels. @vkuzo are these formats supported by `_int_mm` in https://github.com/pytorch/pytorch/pull/96685?
Is `CUBLASLT_ORDER_COL32` logically representable in PyTorch? Other formats?
For int8, I understand this to mean having a dtype that holds 32-element tiles/tuples of `torch.int8` (32 bytes = 256 bits); currently the widest dtype is `complex128` (which holds a tuple of two float64, i.e. 16 bytes = 128 bits). These tuples would then be stored in column-major order.
So IMO a minimal way to support it would be (a hypothetical usage sketch follows this list):
- introducing uninterpreted (except torch.to/torch.view/print) dtypes `torch.int8x32`, maybe `torch.float16x32` and so forth
- https://github.com/pytorch/pytorch/pull/94992 introduced `torch.bits2x4` (there also exists torch.quint2x4`) and `torch.bits16`, so maybe at least `torch.bits2048` could be added?
- implementing int8_tensor.to(dtype = torch.int8x32) and maybe tensor.view(torch.int8x32)
- an alternative way might be to introduce separate `memory_format=` supporting conversions to these layouts via this avenue (especially for `CUBLASLT_ORDER_COL32_2R_4R4`?)
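To make the first bullet concrete, a purely hypothetical usage sketch follows; `torch.int8x32` does not exist in PyTorch today and is shown only to illustrate the proposed API shape:
```python
import torch

x = torch.randint(-128, 127, (1024, 1024), dtype=torch.int8, device="cuda")

# Hypothetical API below: torch.int8x32 is the proposed 32-wide blocked dtype,
# not something PyTorch currently provides.
x_col32 = x.to(dtype=torch.int8x32)   # pack into 32-wide tiles (COL32-style layout)
x_plain = x_col32.view(torch.int8)    # reinterpret back as plain int8
```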
In that blog post they are fusing the quantization + conversion into a LayerNorm kernel, but I guess this can be introduced later (if needed at all).
Also, FasterTransformer has many COL32 kernels: https://github.com/NVIDIA/FasterTransformer/blob/main/src/fastertransformer/kernels/unfused_attention_int8_kernels.cu
It also appears that similar tiled/blocked memory formats/dtypes are used for MKLDNN: https://oneapi-src.github.io/oneDNN/v1.0/dev_guide_understanding_memory_formats.html and are probably already supported by calling `.to_mkldnn()` / weight prepacking. I wonder if supporting such tupled dtypes could be good and unify the layouts for cublasLt and MKLDNN (cc @jamesr66a @mingfeima)
For onednn it is:
- nChw16c
- nChw8c
@jerryzh168 @vkuzo @ngimel this is probably also related to int8 gemm code paths as we discussed in https://github.com/pytorch/pytorch/issues/69364
(256-bit AVX2's equivalent is float32x8, and for AVX512 it's float32x16.)
| 0 |
2,093 | 104,701 |
System memory leak when using different input size of torch.nn.Conv3d
|
module: cudnn, module: nn, module: cuda, module: memory usage, triaged
|
### ๐ Describe the bug
There is a system memory leak when using different input sizes to `torch.nn.Conv3d` on the GPU. A very simple script to reproduce the issue is:
```python
import gc
import psutil
import torch
import torch._C
import torch._dynamo
import torch.backends.cudnn as cudnn
class Model(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.convolution = torch.nn.Sequential(
torch.nn.Conv3d(
in_channels=1,
out_channels=32,
kernel_size=1,
dilation=1,
bias=False,
),
torch.nn.Conv3d(
in_channels=32,
out_channels=1,
kernel_size=1,
dilation=1,
bias=False,
),
)
def forward(self, input, labels):
output = self.convolution(input)
loss = torch.abs(output - labels).mean()
return loss
class Trainer:
def train(self) -> None:
# Set our device.
device = torch.device("cuda:0")
# Set some training flags.
cudnn.benchmark = False
cudnn.deterministic = False
# Create module.
module = Model()
module.to(device=device)
# Put model in train mode.
module.train()
# Create optimizer.
optimizer = torch.optim.Adam(module.parameters(), lr=1e-5)
optimizer.zero_grad()
# Train for a number of steps.
for step in range(1, 1_000_000):
if step % 10 == 0:
self._report_leak_stats(step)
input, labels = self._get_example(device)
# Run model.
loss = module(input, labels)
# Raise on nan.
if torch.isnan(loss).item():
raise RuntimeError("Encountered nan loss during training.")
# Do backwards pass.
loss.backward()
# Step optimizer and zero gradients after.
optimizer.step()
optimizer.zero_grad()
def _get_example(self, device):
depth = int(torch.randint(10, 80, size=()))
height = int(torch.randint(10, 80, size=()))
width = int(torch.randint(10, 80, size=()))
return (
torch.rand((1, 1, depth, height, width), dtype=torch.float32, device=device),
torch.rand((1, 1, depth, height, width), dtype=torch.float32, device=device),
)
def _report_leak_stats(self, step):
gc.collect()
torch._C._cuda_clearCublasWorkspaces()
torch._dynamo.reset()
gc.collect()
torch.cuda.empty_cache()
gc.collect()
print(
"step",
step,
"residual memory",
psutil.Process().memory_info().rss / 1e6,
"MB",
)
def main():
Trainer().train()
if __name__ == "__main__":
main()
```
When running, it will show the residual memory of the process, which keeps increasing over time. This is not the case for fixed input sizes. The memory leak does not appear to be Python objects (I checked using various profilers). The memory leak seems to be affected by the filter size (and corresponding padding): bigger filter sizes create bigger leaks.
The leak is fairly small per step (about 4 MB is leaked per 10 steps) but is really problematic when using DDP with multiple processes, as it multiplies. The leak also seems to be bigger for more convolutions.
It took me a very long time to track this leak down and make a minimal example :sweat_smile:. It prevents us from training a 3D model that takes variable-sized inputs (3D scans) to convergence. I feel like there could be other use cases where the input is variable.
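Until the underlying leak is fixed, one mitigation that may help in practice is to reduce the number of distinct input shapes the convolution path ever sees, e.g. by padding the spatial dimensions up to a small set of bucket sizes. A sketch (the multiple of 16 is an arbitrary choice; labels would need the same padding or a masked loss):
```python
import torch
import torch.nn.functional as F

def pad_to_multiple(volume: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    """Pad a (N, C, D, H, W) volume so that D/H/W are multiples of `multiple`."""
    d, h, w = volume.shape[-3:]
    pd, ph, pw = (-d) % multiple, (-h) % multiple, (-w) % multiple
    # F.pad takes pads for the last dims first: (W_left, W_right, H_l, H_r, D_l, D_r)
    return F.pad(volume, (0, pw, 0, ph, 0, pd))
```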
### Versions
```
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: 14.0.0-1ubuntu1
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.9 (main, Mar 29 2023, 10:13:28) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.60.11
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7302 16-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 5999.84
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.1
[pip3] torch-summary==1.4.5
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect
```
cc @csarofeen @ptrblck @xwang233 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 3 |
2,094 | 104,698 |
Incorrect Error Message Ordering for nn.AdaptiveAvgPool2d with Incorrect output_size
|
module: nn, triaged
|
### ๐ Describe the bug
I've discovered an issue in the nn.AdaptiveAvgPool2d function concerning the order of error messages when an incorrect length of output_size is provided.
The code snippet below demonstrates the issue:
Here is the code to reproduce:
```python
import torch
import torch.nn as nn
m = nn.AdaptiveAvgPool2d(output_size = [1,1,1,1])
m(torch.rand([3,3,3,3]))
```
The main problem with this code is that the output_size argument provided has a length of 4, whereas nn.AdaptiveAvgPool2d expects it to be of length 2.
The current error handling produces the following misleading message: "Input dimension should be at least 5". Only after correcting the input tensor size does the second error message, which correctly identifies the problem, emerge: "adaptive_avg_pool2d: output_size must be 2".
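For completeness, the call that the error message should steer the user towards is simply a length-2 `output_size`:
```python
import torch
import torch.nn as nn

m = nn.AdaptiveAvgPool2d(output_size=(1, 1))  # length 2, as the op expects
out = m(torch.rand(3, 3, 3, 3))               # works, output shape (3, 3, 1, 1)
```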
### Versions
PyTorch version: 2.1.0.dev20230622+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: 12.0.0-3ubuntu1~20.04.5
CMake version: version 3.26.1
Libc version: glibc-2.31
Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2070
GPU 1: NVIDIA GeForce RTX 2070
GPU 2: NVIDIA GeForce RTX 2070
GPU 3: NVIDIA GeForce RTX 2070
Nvidia driver version: 530.30.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 1224.656
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4794.39
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+440fd1bf20
[pip3] torch==2.1.0.dev20230622+cu118
[pip3] torchaudio==2.1.0.dev20230622+cu118
[pip3] torchvision==0.16.0.dev20230622+cu118
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.1 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] pytorch-triton 2.1.0+440fd1bf20 pypi_0 pypi
[conda] torch 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230622+cu118 pypi_0 pypi
[conda] torchvision 0.16.0.dev20230622+cu118 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| 2 |
2,095 | 104,697 |
LSTM built-in dropout not reproducible on GPU
|
module: cudnn, module: nn, triaged, module: random
|
### ๐ Describe the bug
The results of an LSTM with the built-in dropout aren't reproducible.
**Test 1 for built-in dropout:**
```python
import torch
seed = 42
ex = torch.ones(10,2).cuda()
lstm = torch.nn.LSTM(input_size=2, hidden_size=4, num_layers=2, dropout=0.5, batch_first=True).cuda()
# set seed
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
# check dropout output
print("1. lstm with dropout:")
print(lstm(ex))
# save seed states
saved_torch_seed_state = torch.get_rng_state()
saved_torch_cuda_seed_state = torch.cuda.get_rng_state_all()
# set seed again
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
# check dropout output again
print("1. lstm with dropout after re-seeding:")
print(lstm(ex))
# check second output
print("2. lstm with dropout after re-seeding:")
print(lstm(ex))
# set old seed states
torch.set_rng_state(saved_torch_seed_state)
torch.cuda.set_rng_state_all(saved_torch_cuda_seed_state)
# check if the second output here is the same as the second one above
print("2. lstm with dropout after setting rng states:")
print(lstm(ex))
```
**Expected behaviour:** The results of the second LSTM run with dropout after re-seeding and of the second LSTM run with dropout after setting rng states should be identical.
The results are identical when using the CPU, but not the GPU. I tried to set ``torch.backends.cudnn.benchmark = False``, ``torch.use_deterministic_algorithms(True)``, ``torch.backends.cudnn.deterministic = True``, ``os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:2"`` and/or ``os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"``, but nothing worked. The workaround I found is to first use a one-layer LSTM without dropout, then a dropout layer, then another one-layer LSTM, essentially mimicking the two-layer LSTM with built-in dropout. Those results are reproducible.
**Test 2 for independent dropout layer:**
```python
import torch
seed = 42
ex = torch.ones(10,2).cuda()
lstm = torch.nn.LSTM(input_size=2, hidden_size=4, num_layers=1, batch_first=True).cuda()
drop = torch.nn.Dropout(0.5).cuda()
lstm2 = torch.nn.LSTM(input_size=4, hidden_size=4, num_layers=1, batch_first=True).cuda()
# set seed
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
# check dropout output
print("1. lstm with dropout:")
x,_ = lstm(ex)
x = drop(x)
print(lstm2(x))
# save seed states
saved_torch_seed_state = torch.get_rng_state()
saved_torch_cuda_seed_state = torch.cuda.get_rng_state_all()
# set seed again
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
# check dropout output again
print("1. lstm with dropout after re-seeding:")
x,_ = lstm(ex)
x = drop(x)
print(lstm2(x))
# check second output
print("2. lstm with dropout after re-seeding:")
x,_ = lstm(ex)
x = drop(x)
print(lstm2(x))
# set old seed states
torch.set_rng_state(saved_torch_seed_state)
torch.cuda.set_rng_state_all(saved_torch_cuda_seed_state)
# check if the second output here is the same as the second one above
print("2. lstm with dropout after setting rng states:")
x,_ = lstm(ex)
x = drop(x)
print(lstm2(x))
```
### Versions
(Output of ``collect_env.py``, slightly edited)
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64), error also occurs on Windows and Centos7
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650
Nvidia driver version: 531.79
cuDNN version: 8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 80
Model name: AMD Ryzen 5 5600H with Radeon Graphics
Stepping: 0
CPU MHz: 3293.653
BogoMIPS: 6587.30
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 3 MiB
L3 cache: 16 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==1.8.6
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
cc @csarofeen @ptrblck @xwang233 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @pbelevich
| 2 |
2,096 | 104,695 |
DISABLED test_cuda_memory_leak_detection (__main__.TestCudaMultiGPU)
|
module: cuda, triaged, module: flaky-tests, skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cuda_memory_leak_detection&suite=TestCudaMultiGPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14816707226).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cuda_memory_leak_detection`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_cuda_multigpu.py`
cc @ptrblck
| 1 |
2,097 | 104,678 |
torch._dynamo.export does not work with bert model
|
triaged, ezyang's list, oncall: pt2, module: export
|
### ๐ Describe the bug
I'm interested in using aot inductor. However, I'm getting an error while running the following script. Note torch.compile is able to generate one graph for this model.
```py
import torch
import transformers
import logging
torch._logging.set_logs(dynamo=logging.INFO,inductor=logging.INFO)
device = torch.device("cuda:0")
model = transformers.BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()
model.cuda()
model.half()
bs = 1
ins = {'input_ids': torch.randint(0, 10, size=(bs, 512)).to(device), 'attention_mask': torch.ones(bs, 512, dtype=torch.int64).to(device)}
with torch.no_grad():
module, tmp = torch._dynamo.export(model, **ins)
```
Error message:
```
[2023-07-06 00:16:23,006] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2023-07-06 00:16:29,274] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing forward /home/ubuntu/anaconda3/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py:1326
[2023-07-06 00:16:33,244] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing forward (RETURN_VALUE)
[2023-07-06 00:16:33,275] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function dynamo_normalization_capturing_compiler
[2023-07-06 00:16:33,275] torch._dynamo.output_graph: [INFO] Step 2: done compiler function dynamo_normalization_capturing_compiler
[2023-07-06 00:16:33,951] torch._dynamo.eval_frame: [INFO] Summary of dimension constraints:
The following dimensions have been specialized and CANNOT be dynamic.
```
```
def specializations(input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None):
# input_ids:
assert input_ids.size()[0] == 1
assert input_ids.size()[1] == 512
# attention_mask:
assert attention_mask.size()[0] == 1
assert attention_mask.size()[1] == 512
```
```
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /home/ubuntu/src/AITemplate/fx2ait/bert_inductor.py:19 in <module> โ
โ โ
โ 16 ins = {'input_ids': torch.randint(0, 10, size=(bs, 512)).to(device), 'attention_mask': t โ
โ 17 โ
โ 18 with torch.no_grad(): โ
โ โฑ 19 โ module, tmp = torch._dynamo.export(model, **ins) โ
โ 20 โ
โ โ
โ /home/ubuntu/src/pytorch/torch/_dynamo/eval_frame.py:1007 in export โ
โ โ
โ 1004 โ โ
โ 1005 โ assert graph_captured_result is not None โ
โ 1006 โ flat_both = list(graph_captured_result) + flat_args โ
โ โฑ 1007 โ matched_output_elements_positions = produce_matching(flat_both, flat_results_traced) โ
โ 1008 โ โ
โ 1009 โ if aten_graph: โ
โ 1010 โ โ # Running graph with interpreter is needed for propagating the stack_trace โ
โ โ
โ /home/ubuntu/src/pytorch/torch/_dynamo/eval_frame.py:888 in produce_matching โ
โ โ
โ 885 โ โ โ โ โ โ "Dynamo input/output is not consistent with traced input/output" โ
โ 886 โ โ โ โ โ ) โ
โ 887 โ โ โ else: โ
โ โฑ 888 โ โ โ โ assert ( โ
โ 889 โ โ โ โ โ id(arg) in dict_of_source_args โ
โ 890 โ โ โ โ ), "Dynamo input and output is a strict subset of traced input/output" โ
โ 891 โ โ โ โ matched_elements_positions.append(dict_of_source_args[id(arg)]) โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
AssertionError: Dynamo input and output is a strict subset of traced input/output
[2023-07-06 00:16:34,090] torch._dynamo.utils: [INFO] TorchDynamo compilation metrics:
Function Runtimes (s)
------------------------------ --------------
_compile 4.129
OutputGraph.call_user_compiler 0.0003
```
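For context, the failing check in `produce_matching` (visible in the traceback above) tries to match every flattened traced output back to the graph-captured outputs and inputs by Python object identity. The following is a minimal, self-contained sketch of that idea — not the actual `torch._dynamo.eval_frame` implementation — where `flat_both` and `flat_results_traced` stand in for the variables of the same names:
```
# Hedged sketch of the identity-based matching that raises the assertion above.
# This is an illustration only, not the real torch._dynamo code.

def produce_matching_sketch(flat_both, flat_results_traced):
    # Map id(object) -> position among graph-captured outputs + flattened inputs.
    dict_of_source_args = {id(arg): i for i, arg in enumerate(flat_both)}

    matched_elements_positions = []
    for arg in flat_results_traced:
        # Every traced output element must already exist (by identity) among the
        # captured outputs/inputs; outputs that are freshly constructed objects
        # (e.g. a new ModelOutput container) violate this assumption.
        assert id(arg) in dict_of_source_args, (
            "Dynamo input and output is a strict subset of traced input/output"
        )
        matched_elements_positions.append(dict_of_source_args[id(arg)])
    return matched_elements_positions


# Tiny demonstration of how the assertion can fire:
a, b = object(), object()
print(produce_matching_sketch([a, b], [b, a]))   # -> [1, 0]
try:
    produce_matching_sketch([a], [a, object()])  # new object -> AssertionError
except AssertionError as e:
    print("AssertionError:", e)
```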
### Versions
```
Collecting environment information...
PyTorch version: 2.1.0a0+git4baac20
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-6ubuntu2) 7.5.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |
2,098 | 104,674 |
[compile] DDPOptimizer + activation checkpointing not supported
|
module: checkpoint, triaged, module: ddp, oncall: pt2
|
Activation checkpointing is supported via higher-order operators. However, the DDP optimizer backend in Dynamo does not yet work with higher-order operators. The workaround is to disable the DDP optimizer with `torch._dynamo.config.optimize_ddp = False`; the tradeoff is worse performance, because the entire Dynamo graph then falls into a single DDP bucket.
There is no plan to support this yet; we will revisit it if it becomes a common ask.
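A hedged sketch of the workaround in context — the model, the checkpointed submodule, and the process-group setup are illustrative placeholders, not code from this issue:
```
# Sketch: disable Dynamo's DDP optimizer so torch.compile can handle a DDP
# model that uses activation checkpointing. Assumes torch.distributed has
# already been initialized elsewhere and a CUDA device is available.
import torch
import torch._dynamo
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.checkpoint import checkpoint

torch._dynamo.config.optimize_ddp = False  # one bucket for the whole graph

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(1024, 1024)

    def forward(self, x):
        # Activation checkpointing (lowered via a higher-order op under compile).
        return checkpoint(self.lin, x, use_reentrant=False)

model = DDP(Block().cuda())
compiled = torch.compile(model)
```
As noted above, with `optimize_ddp = False` gradient buckets are no longer split along graph boundaries, so communication/computation overlap suffers.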
cc @ezyang @msaroufim @wconstab @bdhirsh
| 2 |
2,099 | 104,655 |
Extend ATen op benchmarks
|
module: cpu, triaged, open source, module: amp (automated mixed precision), release notes: benchmark, ciflow/mps, ciflow/inductor
|
## Summary
Extended ATen op benchmarking coverage:
1. Enabled benchmarking of more ATen ops.
2. Enabled benchmarking with the BF16 dtype as well, if the `AVX512_BF16` ISA is supported (Linux only).
3. Enabled benchmarking of in-place variants, wherever possible.
4. Enabled benchmarking with the channels-last memory layout, wherever applicable.
## Verification
Currently, no CI job runs this code, so we may want to add a new weekly TorchBench CI job for it.
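A hedged sketch of the kind of gating described in point 2 — the `/proc/cpuinfo` check and the benchmarked op are illustrative, not the benchmark suite's actual code:
```
# Sketch: benchmark an ATen op in bfloat16 only when a Linux host reports the
# avx512_bf16 CPU flag. Illustration only, not the benchmark-suite code.
import platform
import torch
from torch.utils.benchmark import Timer

def cpu_supports_avx512_bf16() -> bool:
    if platform.system() != "Linux":
        return False
    try:
        with open("/proc/cpuinfo") as f:
            return "avx512_bf16" in f.read()
    except OSError:
        return False

x = torch.randn(1024, 1024)
dtypes = [torch.float32]
if cpu_supports_avx512_bf16():
    dtypes.append(torch.bfloat16)

for dtype in dtypes:
    xd = x.to(dtype)
    t = Timer(stmt="torch.matmul(a, a)", globals={"torch": torch, "a": xd})
    print(dtype, t.blocked_autorange(min_run_time=0.5))
```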
cc @jgong5 @mingfeima @XiaobingSuper @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @jiayisunx @peterbell10
cc @albanD @ezyang
| 6 |
2,100 | 104,653 |
vision_maskrcnn: AssertionError: expected size 368==368, stride 156==28 at dim=0
|
triaged, oncall: pt2, module: dynamic shapes
|
### ๐ Describe the bug
Perf runs of vision_maskrcnn are failing like this:
```
2023-07-05T08:43:42.4437803Z cuda train vision_maskrcnn
2023-07-05T08:44:17.7648116Z [2023-07-05 08:44:17,763] torch._inductor.utils: [WARNING] DeviceCopy in input program
2023-07-05T08:44:17.7656761Z [2023-07-05 08:44:17,765] torch._inductor.utils: [WARNING] DeviceCopy in input program
2023-07-05T08:44:17.7665486Z [2023-07-05 08:44:17,766] torch._inductor.utils: [WARNING] DeviceCopy in input program
2023-07-05T08:44:17.7675309Z [2023-07-05 08:44:17,767] torch._inductor.utils: [WARNING] DeviceCopy in input program
2023-07-05T08:44:17.7684606Z [2023-07-05 08:44:17,768] torch._inductor.utils: [WARNING] DeviceCopy in input program
2023-07-05T08:44:20.0071113Z Using FallbackKernel: aten.topk
2023-07-05T08:44:43.4110737Z [2023-07-05 08:44:43,409] torch.fx.experimental.symbolic_shapes: [WARNING] RecursionError in sympy.xreplace(Ne(Mod(2*(((s2 + 1)//2)), s2), 0), {s2: shape_0 + 3})
2023-07-05T08:44:44.8615715Z [2023-07-05 08:44:44,860] torch.fx.experimental.symbolic_shapes: [WARNING] RecursionError in sympy.xreplace(Eq(Mod(2*(((s2 + 1)//2)), s2), 0), {s2: shape_0 + 3})
2023-07-05T08:45:16.4014989Z [2023-07-05 08:45:16,400] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard Eq(28, s1) == False, this could result in accuracy problems
2023-07-05T08:45:16.4028274Z [2023-07-05 08:45:16,402] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard Ne(28, s1) == True, this could result in accuracy problems
2023-07-05T08:45:16.5794265Z [2023-07-05 08:45:16,578] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 28*s0 + s1 - 28 < 2147483648 == True, this could result in accuracy problems
2023-07-05T08:45:16.9219233Z ERROR:common:expected size 368==368, stride 156==28 at dim=0
2023-07-05T08:45:16.9219582Z Traceback (most recent call last):
2023-07-05T08:45:16.9219954Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1957, in check_accuracy
2023-07-05T08:45:16.9220350Z new_result = optimized_model_iter_fn(model_copy, example_inputs)
2023-07-05T08:45:16.9221158Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn
2023-07-05T08:45:16.9221496Z return fn(*args, **kwargs)
2023-07-05T08:45:16.9228823Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1782, in run_n_iterations
2023-07-05T08:45:16.9229669Z self.model_iter_fn(mod, inputs, collect_outputs=False)
2023-07-05T08:45:16.9230380Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 440, in forward_and_backward_pass
2023-07-05T08:45:16.9230756Z cloned_inputs = clone_inputs(inputs)
2023-07-05T08:45:16.9231149Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 441, in <resume in forward_and_backward_pass>
2023-07-05T08:45:16.9231517Z self.optimizer_zero_grad(mod)
2023-07-05T08:45:16.9231882Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 443, in <resume in forward_and_backward_pass>
2023-07-05T08:45:16.9232331Z pred = mod(*cloned_inputs)
2023-07-05T08:45:16.9232957Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 444, in <resume in forward_and_backward_pass>
2023-07-05T08:45:16.9233418Z loss = self.compute_loss(pred)
2023-07-05T08:45:16.9233937Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 445, in <resume in forward_and_backward_pass>
2023-07-05T08:45:16.9234638Z self.grad_scaler.scale(loss).backward()
2023-07-05T08:45:16.9235695Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 491, in backward
2023-07-05T08:45:16.9236064Z torch.autograd.backward(
2023-07-05T08:45:16.9236579Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward
2023-07-05T08:45:16.9237031Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
2023-07-05T08:45:16.9237999Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 274, in apply
2023-07-05T08:45:16.9238670Z return user_fn(self, *args)
2023-07-05T08:45:16.9239324Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3057, in backward
2023-07-05T08:45:16.9239687Z out = call_compiled_backward()
2023-07-05T08:45:16.9240566Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3032, in call_compiled_backward
2023-07-05T08:45:16.9240929Z out = call_func_with_args(
2023-07-05T08:45:16.9241453Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1457, in call_func_with_args
2023-07-05T08:45:16.9241842Z out = normalize_as_list(f(args))
2023-07-05T08:45:16.9242311Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 295, in _fn
2023-07-05T08:45:16.9242652Z return fn(*args, **kwargs)
2023-07-05T08:45:16.9243135Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
2023-07-05T08:45:16.9243477Z return fn(*args, **kwargs)
2023-07-05T08:45:16.9243946Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 326, in __call__
2023-07-05T08:45:16.9244406Z return self.get_current_callable()(inputs)
2023-07-05T08:45:16.9244916Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 548, in run
2023-07-05T08:45:16.9245241Z return model(new_inputs)
2023-07-05T08:45:16.9245743Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 353, in _run_from_cache
2023-07-05T08:45:16.9246303Z return compiled_graph.compiled_artifact(inputs)
2023-07-05T08:45:16.9246770Z File "/tmp/torchinductor_jenkins/z7/cz7ezmnxhztwnezszoxgk3cbs5k7htr4m3edlgmgpnwu4zbdb67d.py", line 160, in call
2023-07-05T08:45:16.9247191Z assert_size_stride(tangents_6, (s0, s1), (28, 1))
2023-07-05T08:45:16.9247510Z AssertionError: expected size 368==368, stride 156==28 at dim=0
2023-07-05T08:45:16.9247863Z TorchDynamo optimized model failed to run because of following error
2023-07-05T08:45:16.9358249Z fail_to_run
```
Oddly, the accuracy runs pass, though.
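For intuition: the generated `assert_size_stride(tangents_6, (s0, s1), (28, 1))` call checks both the sizes and the strides the backward graph was compiled against, so a same-sized but non-contiguous tangent (for example, a column slice of a wider tensor) trips exactly this kind of mismatch. A standalone illustration, unrelated to the vision_maskrcnn code itself:
```
# Illustration of a size-vs-stride mismatch like the one in the assertion
# above; generic example, not the actual vision_maskrcnn tangent tensor.
import torch

contiguous = torch.randn(368, 28)   # stride (28, 1) -- what was compiled for
wide = torch.randn(368, 156)
view = wide[:, :28]                 # same size (368, 28), but stride (156, 1)

print(contiguous.shape, contiguous.stride())  # torch.Size([368, 28]) (28, 1)
print(view.shape, view.stride())              # torch.Size([368, 28]) (156, 1)

# An assert_size_stride expecting stride (28, 1) would pass for `contiguous`
# but fail for `view`, even though the sizes match.
```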
### Versions
master
cc @msaroufim @wconstab @bdhirsh @anijain2305
| 5 |