Runtime error

Exit code: 1. Reason:
virtual_tryon_dc.pth: 100%|██████████| 7.21G/7.21G [00:47<00:00, 152MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 159, in <module>
    leffa_predictor = LeffaPredictor()
  File "/home/user/app/app.py", line 21, in __init__
    self.mask_predictor = AutoMasker(
  File "/home/user/app/leffa_utils/garment_agnostic_mask_predictor.py", line 211, in __init__
    self.densepose_processor = DensePose(densepose_path, device)
  File "/home/user/app/leffa_utils/densepose_for_mask.py", line 40, in __init__
    self.predictor = DefaultPredictor(self.cfg)
  File "/home/user/app/detectron2/engine/defaults.py", line 282, in __init__
    self.model = build_model(self.cfg)
  File "/home/user/app/detectron2/modeling/meta_arch/build.py", line 23, in build_model
    model.to(torch.device(cfg.MODEL.DEVICE))
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1369, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 955, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1355, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
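The traceback shows where things break: detectron2's DefaultPredictor builds the model and immediately moves it to cfg.MODEL.DEVICE, which is "cuda" by default, so on hardware with no visible GPU the lazy CUDA init fails with the error above. Below is a minimal sketch, not the Leffa code itself, of guarding that step by only requesting CUDA when a GPU is available; the config and weights paths are placeholders, and a real DensePose config would additionally need its extra keys registered (densepose.config.add_densepose_config) before merging.

import torch
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

def build_predictor(config_file, weights_file):
    # config_file / weights_file are placeholders, not the app's actual paths.
    cfg = get_cfg()
    cfg.merge_from_file(config_file)
    cfg.MODEL.WEIGHTS = weights_file
    # Only request CUDA when a GPU is actually visible; otherwise
    # DefaultPredictor -> build_model() calls model.to("cuda") and torch
    # raises "RuntimeError: No CUDA GPUs are available", as in the log above.
    cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    return DefaultPredictor(cfg)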
