Per-column schema of this GitHub-issues dump (type and observed range/cardinality, as reported by the dataset viewer):

- `repo`: string, 1 class
- `github_id`: int64, 1.27B to 4.24B
- `github_node_id`: string, length 18
- `number`: int64, 8 to 13.4k
- `html_url`: string, length 49 to 53
- `api_url`: string, length 59 to 63
- `title`: string, length 1 to 402
- `body`: string, length 1 to 62.9k, nullable
- `state`: string, 2 classes
- `state_reason`: string, 4 classes
- `locked`: bool, 2 classes
- `comments_count`: int64, 0 to 99
- `labels`: list, length 0 to 5
- `assignees`: list, length 0 to 5
- `created_at`: datetime string, 2022-06-09 16:28:35 to 2026-04-10 09:58:27
- `updated_at`: datetime string, 2022-06-12 22:18:01 to 2026-04-10 19:54:38
- `closed_at`: datetime string, 2022-06-12 22:18:01 to 2026-04-10 19:54:38, nullable
- `author_association`: string, 3 classes
- `milestone_title`: string, 0 classes
- `snapshot_id`: string, 2 classes
- `extracted_at`: datetime string, 2026-04-07 13:34:13 to 2026-04-10 21:59:46
- `author_login`: string, length 3 to 28
- `author_id`: int64, 1.54k to 258M
- `author_node_id`: string, length 12 to 20
- `author_type`: string, 3 classes
- `author_site_admin`: bool, 1 class

Sample rows follow, pipe-delimited in the column order above. Long `body` cells and the JSON-formatted `labels`/`assignees` lists wrap across multiple lines, and truncated values end with `...`.
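Before the sample rows, here is a small in-memory sketch of working with records in this schema. The two records below are hand-copied from the sample rows (issues 10782 and 10790); only a handful of the 26 columns are included, and the parsing helper assumes the ISO-8601 `...Z` timestamp format visible in the `*_at` columns.

```python
from datetime import datetime, timezone

# Two sample records using the column names from the schema above;
# values are copied from the rows in this dump (issues 10782 and 10790).
records = [
    {
        "repo": "huggingface/diffusers",
        "number": 10782,
        "title": "Lumina Image 2.0 minor issue",
        "state": "closed",
        "created_at": "2025-02-12T15:36:58Z",
        "closed_at": "2025-02-14T20:55:12Z",
        "author_login": "nitinmukesh",
    },
    {
        "repo": "huggingface/diffusers",
        "number": 10790,
        "title": "Lower VRAM usage in CPU offload for Flux ControlNet Pipeline",
        "state": "open",
        "created_at": "2025-02-14T10:56:10Z",
        "closed_at": None,  # nullable column (marked with a null indicator in the schema)
        "author_login": "NielsPichon",
    },
]

def parse_ts(ts):
    """Parse the ISO-8601 'YYYY-MM-DDTHH:MM:SSZ' timestamps used by the *_at columns."""
    if ts is None:
        return None
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

# Example query: how long did the closed issue stay open?
closed = [r for r in records if r["state"] == "closed"]
time_to_close = parse_ts(closed[0]["closed_at"]) - parse_ts(closed[0]["created_at"])
print(len(closed), time_to_close.days)  # → 1 2
```

In practice one would load the full dump (e.g. with the `datasets` library or pandas) rather than hand-copying records; the snippet only illustrates the field names and timestamp handling.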
huggingface/diffusers | 2,848,623,536 | I_kwDOHa8MBc6pyouw | 10,782 | https://github.com/huggingface/diffusers/issues/10782 | https://api.github.com/repos/huggingface/diffusers/issues/10782 | Lumina Image 2.0 minor issue | ### Describe the bug
Error during inference: `height` and `width` have to be divisible by 8 but are 1080 and 1920
Should be 16 instead of 8.
**Minor issue, can be fixed later.**
### Reproduction
Simply pass width = 1920 and height=1080
### Logs
```shell
Error during inference: `height` and `width` have to be divi... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-02-12T15:36:58Z | 2025-02-14T20:55:12Z | 2025-02-14T20:55:12Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,853,353,521 | I_kwDOHa8MBc6qErgx | 10,790 | https://github.com/huggingface/diffusers/issues/10790 | https://api.github.com/repos/huggingface/diffusers/issues/10790 | Lower VRAM usage in CPU offload for Flux ControlNet Pipeline | **Is your feature request related to a problem? Please describe.**
I have a 24GB VRAM GPU. When running a diffusion model like Flux1, I can barely fit the model in memory during inference with batch size 1. Enabling CPU offload does not help because the offload does not occur between the controlnet forward pass and th... | open | null | false | 15 | [
"stale"
] | [] | 2025-02-14T10:56:10Z | 2025-03-20T15:03:22Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | NielsPichon | 54,447,592 | MDQ6VXNlcjU0NDQ3NTky | User | false |
huggingface/diffusers | 2,853,727,131 | I_kwDOHa8MBc6qGGub | 10,791 | https://github.com/huggingface/diffusers/issues/10791 | https://api.github.com/repos/huggingface/diffusers/issues/10791 | Incorrect Code in Custom Diffusion Pipeline Example | The provided code for loading and using the DiffusionPipeline contains errors that prevent it from running correctly.
https://huggingface.co/docs/diffusers/v0.32.2/en/training/custom_diffusion?training-inference=multiple+concepts
https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/custom_diffus... | closed | completed | false | 0 | [] | [] | 2025-02-14T13:26:56Z | 2025-02-14T16:19:12Z | 2025-02-14T16:19:12Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | puhuk | 2,902,772 | MDQ6VXNlcjI5MDI3NzI= | User | false |
huggingface/diffusers | 2,854,431,458 | I_kwDOHa8MBc6qIyri | 10,795 | https://github.com/huggingface/diffusers/issues/10795 | https://api.github.com/repos/huggingface/diffusers/issues/10795 | Can't torch.compile transformer models that load GGUF via from_single_file | ### Describe the bug
`transformer` model loaded via GGUF can't be torch.compile(d) and raises `torch._dynamo.exc.Unsupported: call_method SetVariable() __setitem__ (UserDefinedObjectVariable(GGUFParameter), ConstantVariable(NoneType: None)) {}`
'normal' model loaded from HF for the same pipeline can be torch.compile(... | open | null | false | 16 | [
"bug"
] | [] | 2025-02-14T18:12:30Z | 2025-04-18T06:44:39Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AstraliteHeart | 81,396,681 | MDQ6VXNlcjgxMzk2Njgx | User | false |
huggingface/diffusers | 2,855,142,004 | I_kwDOHa8MBc6qLgJ0 | 10,796 | https://github.com/huggingface/diffusers/issues/10796 | https://api.github.com/repos/huggingface/diffusers/issues/10796 | Docs for HunyuanVideo LoRA? | ### Describe the bug
As it seems like LoRA loading on HunyuanVideo has been implemented, I wonder where I can find the docs on this? Are they missing?
### Reproduction
Search for HunyuanVideo and LoRA
### Logs
```shell
```
### System Info
As it is the online docs...
### Who can help?
@stevhliu @sayakpaul | closed | completed | false | 9 | [
"bug",
"stale"
] | [] | 2025-02-15T04:31:34Z | 2025-06-10T20:52:28Z | 2025-06-10T20:52:28Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 2,855,325,607 | I_kwDOHa8MBc6qMM-n | 10,797 | https://github.com/huggingface/diffusers/issues/10797 | https://api.github.com/repos/huggingface/diffusers/issues/10797 | Group_offloading with HunyuanVideoPipeline not working | ### Describe the bug
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
### Reproduction
```
import json
import time
import torch
import gc
from diffusers.models import HunyuanV... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-02-15T08:16:52Z | 2025-02-23T08:16:21Z | 2025-02-23T08:16:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,855,592,468 | I_kwDOHa8MBc6qNOIU | 10,798 | https://github.com/huggingface/diffusers/issues/10798 | https://api.github.com/repos/huggingface/diffusers/issues/10798 | Device Combinations Bug of Flux Quantization With Bitsandbytes | ### Describe the bug
Quantizing Flux with Bitsandbytes and **moving the pipeline to another cuda device instead of cuda:0** will cause a device combination bug.
Everything works fine when the pipeline works in cuda:0.
### Reproduction
This reproduction shows how the components are quantized.
And now I've updated a ... | closed | completed | false | 15 | [
"bug"
] | [] | 2025-02-15T16:39:47Z | 2025-02-24T23:09:44Z | 2025-02-20T03:39:19Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CyberVy | 72,680,847 | MDQ6VXNlcjcyNjgwODQ3 | User | false |
huggingface/diffusers | 1,429,788,069 | I_kwDOHa8MBc5VONWl | 1,080 | https://github.com/huggingface/diffusers/issues/1080 | https://api.github.com/repos/huggingface/diffusers/issues/1080 | How to run dreambooth accelerate from python file | ### Describe the bug
I tried to finetune model, works fine.
Now i want to make a python script which accepts parameters from Flask and start training,
What i found is that, i have to pass all parameters as command like
`accelerate launch --config_file=$ACCELERATE_CONFIG train_dreambooth.py \`
But i want ... | closed | completed | false | 3 | [
"bug",
"stale"
] | [
"patil-suraj"
] | 2022-10-31T12:44:18Z | 2022-12-11T15:03:10Z | 2022-12-11T15:03:10Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | adhikjoshi | 11,740,719 | MDQ6VXNlcjExNzQwNzE5 | User | false |
huggingface/diffusers | 2,855,945,807 | I_kwDOHa8MBc6qOkZP | 10,800 | https://github.com/huggingface/diffusers/issues/10800 | https://api.github.com/repos/huggingface/diffusers/issues/10800 | HunyuanVideo + BnB int4 + enable_sequential_cpu_offload = Blockwise quantization only supports 16/32-bit floats, but got torch.uint8 | ### Describe the bug
Works fine for pipe.enable_model_cpu_offload() but come up with error for enable_sequential_cpu_offload
### Reproduction
```
import torch
import gc
from diffusers.models import HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video
from diffusers import HunyuanVideoPipeline
f... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-02-16T07:50:00Z | 2025-02-23T00:31:51Z | 2025-02-23T00:31:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,856,153,867 | I_kwDOHa8MBc6qPXML | 10,803 | https://github.com/huggingface/diffusers/issues/10803 | https://api.github.com/repos/huggingface/diffusers/issues/10803 | SANARubber a flexible version of SANA with i2i and multidiffusion/regional diffusion | ### Model/Pipeline/Scheduler description
I made a pipeline that is as reliable as the basic SANA pipeline but more flexible by making it run an array of functions which runs everything the og pipeline does. this can make easy combinations if necessary.
here's the link, enjoy
https://github.com/alexblattner/SANARubbe... | open | null | false | 1 | [
"stale"
] | [] | 2025-02-16T15:08:11Z | 2025-03-19T15:03:31Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | alexblattner | 15,870,094 | MDQ6VXNlcjE1ODcwMDk0 | User | false |
huggingface/diffusers | 2,856,361,063 | I_kwDOHa8MBc6qQJxn | 10,804 | https://github.com/huggingface/diffusers/issues/10804 | https://api.github.com/repos/huggingface/diffusers/issues/10804 | comfy-ui compatible FLUX1.dev LoRA fails to load | ### Describe the bug
https://civitai.com/models/677200?modelVersionId=758070
This LoRA fails to load. Specifically, `model.load_state_dict` fails with mismatched tensors when attempting to load in the LoRA state dict.
### Reproduction
```py
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_p... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-02-16T22:07:55Z | 2025-02-24T11:24:40Z | 2025-02-24T11:24:40Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AmericanPresidentJimmyCarter | 110,263,573 | U_kgDOBpJ9FQ | User | false |
huggingface/diffusers | 2,856,520,239 | I_kwDOHa8MBc6qQwov | 10,805 | https://github.com/huggingface/diffusers/issues/10805 | https://api.github.com/repos/huggingface/diffusers/issues/10805 | is there inpainiting dataset and parameters example provided for xl training? | **What API design would you like to have changed or added to the library? Why?**
**What use case would this enable or better enable? Can you give us a code example?**
Hi patil-suraj @patil-suraj , appreciated for the convenient script ! Is there any code example and dataset example to run the script: https://github.c... | closed | completed | true | 2 | [] | [] | 2025-02-17T01:56:14Z | 2025-02-17T02:03:09Z | 2025-02-17T02:03:09Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | fire2323 | 5,301,204 | MDQ6VXNlcjUzMDEyMDQ= | User | false |
huggingface/diffusers | 2,858,310,800 | I_kwDOHa8MBc6qXlyQ | 10,812 | https://github.com/huggingface/diffusers/issues/10812 | https://api.github.com/repos/huggingface/diffusers/issues/10812 | Step-Video-T2V | New txt2vid project:
> A Step-Video-T2V, a state-of-the-art (SoTA) text-to-video pre-trained model with 30 billion parameters and the capability to generate videos up to 204 frames. To enhance both training and inference efficiency, we propose a deep compression VAE for videos, achieving 16x16 spatial and 8x temporal ... | open | null | false | 4 | [
"stale"
] | [
"a-r-r-o-w"
] | 2025-02-17T16:29:18Z | 2025-03-20T15:03:15Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 2,860,373,284 | I_kwDOHa8MBc6qfdUk | 10,817 | https://github.com/huggingface/diffusers/issues/10817 | https://api.github.com/repos/huggingface/diffusers/issues/10817 | auto_pipeline missing SD3 contol nets | ### Describe the bug
Hey, auto_pipeline seesm to be missing the control nets variants for SD3
venv\Lib\site-packages\diffusers\pipelines\auto_pipeline.py
### Reproduction
Load an sd3 model checkpoint with a controlnet loading any of the auto pipes you will just get the none control net variations as its not set in ... | closed | completed | false | 3 | [
"bug",
"help wanted",
"contributions-welcome"
] | [] | 2025-02-18T12:54:40Z | 2025-02-24T16:21:03Z | 2025-02-24T16:21:03Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | JoeGaffney | 11,944,251 | MDQ6VXNlcjExOTQ0MjUx | User | false |
huggingface/diffusers | 1,429,883,201 | I_kwDOHa8MBc5VOklB | 1,082 | https://github.com/huggingface/diffusers/issues/1082 | https://api.github.com/repos/huggingface/diffusers/issues/1082 | Dreambooth: RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasLtMatmul | ### Describe the bug
Hi - I've spent a couple days trying to get Dreambooth to run, and can't get past this:
_Steps: 0%| ... | closed | completed | false | 7 | [
"bug",
"stale"
] | [
"patil-suraj"
] | 2022-10-31T13:49:35Z | 2022-12-12T15:03:54Z | 2022-12-12T15:03:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | enn-nafnlaus | 116,288,799 | U_kgDOBu5tHw | User | false |
huggingface/diffusers | 2,860,875,628 | I_kwDOHa8MBc6qhX9s | 10,821 | https://github.com/huggingface/diffusers/issues/10821 | https://api.github.com/repos/huggingface/diffusers/issues/10821 | Inconsistency in condition transforms across different ControlNet example scripts | I've noticed an inconsistency in the transforms applied to condition images across different ControlNet training examples. In the [flux training script](https://github.com/huggingface/diffusers/blob/b75b204a584e29ebf4e80a61be11458e9ed56e3e/examples/controlnet/train_controlnet_flux.py#L755), the condition images are exp... | open | null | false | 1 | [
"stale"
] | [] | 2025-02-18T15:49:28Z | 2025-03-21T15:03:21Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | YanivDorGalron | 89,192,632 | MDQ6VXNlcjg5MTkyNjMy | User | false |
huggingface/diffusers | 2,861,493,483 | I_kwDOHa8MBc6qjuzr | 10,823 | https://github.com/huggingface/diffusers/issues/10823 | https://api.github.com/repos/huggingface/diffusers/issues/10823 | Fine tuning script does not produce files needed for inference out of the box | ### Describe the bug
I used [this script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) as per [this tutorial](https://huggingface.co/docs/diffusers/v0.32.2/training/text2image) to fine tune a model on my dataset. After training, the following file structure is produ... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-02-18T19:48:39Z | 2025-02-18T21:21:23Z | 2025-02-18T21:21:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | surya-narayanan | 17,240,858 | MDQ6VXNlcjE3MjQwODU4 | User | false |
huggingface/diffusers | 2,862,877,996 | I_kwDOHa8MBc6qpA0s | 10,829 | https://github.com/huggingface/diffusers/issues/10829 | https://api.github.com/repos/huggingface/diffusers/issues/10829 | DreamBooth LoRA SDXL OOM issue at the last inference (8xRTX3090 24G) | Hi, I'm encountering a CUDA out of memory error during the final inference step when running the 3D Icon example from the [advanced diffusion training](https://github.com/huggingface/diffusers/tree/main/examples/advanced_diffusion_training#3d-icon-example).
Script: I used the exact same script provided in the example.... | open | null | false | 1 | [
"stale"
] | [] | 2025-02-19T10:20:11Z | 2025-03-21T15:03:11Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kkjh0723 | 10,104,262 | MDQ6VXNlcjEwMTA0MjYy | User | false |
huggingface/diffusers | 1,429,936,839 | I_kwDOHa8MBc5VOxrH | 1,083 | https://github.com/huggingface/diffusers/issues/1083 | https://api.github.com/repos/huggingface/diffusers/issues/1083 | [Community] FP16 ONNX produces incorrect output | ### Describe the bug
#932 enabled conversion of the main branch FP32 model (git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 -b main) to ONNX FP16. While it runs fine with OnnxStableDiffusionPipeline using DMLExecutionProvider (onnxruntime-directml==1.13.1), the produced image is just a black square.
... | closed | completed | false | 9 | [
"bug",
"good first issue",
"help wanted"
] | [
"anton-l"
] | 2022-10-31T14:25:18Z | 2023-01-23T08:25:50Z | 2023-01-23T08:25:49Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kleiti | 11,665,940 | MDQ6VXNlcjExNjY1OTQw | User | false |
huggingface/diffusers | 2,863,137,887 | I_kwDOHa8MBc6qqARf | 10,831 | https://github.com/huggingface/diffusers/issues/10831 | https://api.github.com/repos/huggingface/diffusers/issues/10831 | Single-file refactor based off of the changes from 10013 | Cc: @DN6 | open | null | false | 1 | [
"wip",
"single_file"
] | [
"DN6"
] | 2025-02-19T12:05:56Z | 2025-03-21T16:19:40Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,864,172,665 | I_kwDOHa8MBc6qt855 | 10,833 | https://github.com/huggingface/diffusers/issues/10833 | https://api.github.com/repos/huggingface/diffusers/issues/10833 | FluxFillPipeline With Model Quantized By Bitsandbytes Has A Matrix Shapes Cannot Be Multiplied Bug | ### Describe the bug
Inferring with ```FluxFillPipeline``` quantized by ```bitsandbytes``` will cause a ```RuntimeError: mat1 and mat2 shapes cannot be multiplied ```.
Everything works fine when using ```FluxPipeline```, ```FluxImg2ImgPipeline``` and ```FluxInpaintPipeline```.
### Reproduction
```python
import torch... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-02-19T19:02:59Z | 2025-02-20T15:24:49Z | 2025-02-20T09:47:56Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CyberVy | 72,680,847 | MDQ6VXNlcjcyNjgwODQ3 | User | false |
huggingface/diffusers | 2,865,704,986 | I_kwDOHa8MBc6qzzAa | 10,839 | https://github.com/huggingface/diffusers/issues/10839 | https://api.github.com/repos/huggingface/diffusers/issues/10839 | Dreambooth LoRA Flux training last step error | ### Describe the bug
As soon as the training is done and the code wants to clear up and do its last steps I get this error
Steps: 99%|█████████▉| 397/400 [07:45<00:03, 1.16s/it, loss=0.397, lr=1]
Steps: 100%|█████████▉| 398/400 [07:47<00:02, 1.20s/it, lo... | open | null | false | 16 | [
"bug",
"stale"
] | [] | 2025-02-20T10:14:43Z | 2025-03-22T15:02:51Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PluginBOXone | 26,670,577 | MDQ6VXNlcjI2NjcwNTc3 | User | false |
huggingface/diffusers | 1,430,114,238 | I_kwDOHa8MBc5VPc-- | 1,084 | https://github.com/huggingface/diffusers/issues/1084 | https://api.github.com/repos/huggingface/diffusers/issues/1084 | [Feature Request] Documentation around multiple pipelines reusing model weights | **Is your feature request related to a problem? Please describe.**
If I have a StableDiffusionPipeline, a StableDiffusionImg2ImgPipeline, and a StableDiffusionInpaintPipeline, these will all load separate weights to the GPU even though they are all using the same model. This makes creating applications that use the fu... | closed | completed | false | 9 | [
"stale"
] | [] | 2022-10-31T16:17:27Z | 2022-12-15T20:53:31Z | 2022-12-15T20:53:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | theahura | 6,380,637 | MDQ6VXNlcjYzODA2Mzc= | User | false |
huggingface/diffusers | 2,866,354,682 | I_kwDOHa8MBc6q2Rn6 | 10,842 | https://github.com/huggingface/diffusers/issues/10842 | https://api.github.com/repos/huggingface/diffusers/issues/10842 | train_text_to_image_lora.py this file was debugged and found a lot of problems, please ask which one has a file that can be trained | train_text_to_image_lora.py this file was debugged and found a lot of problems, please ask which one has a file that can be trained
| open | null | false | 4 | [
"stale"
] | [] | 2025-02-20T14:38:11Z | 2025-03-22T15:02:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | llm8047 | 94,896,845 | U_kgDOBagCzQ | User | false |
huggingface/diffusers | 2,867,702,693 | I_kwDOHa8MBc6q7aul | 10,848 | https://github.com/huggingface/diffusers/issues/10848 | https://api.github.com/repos/huggingface/diffusers/issues/10848 | Unexpected keyword argument `device` for `load_model_dict_into_meta` when loading IP-Adapters | ### Describe the bug
The function signature of `load_model_dict_into_meta` changed in #10604, and `device` is no longer an accepted argument. However, IP-Adapter loading still passes `device`, as we can see below:
https://github.com/huggingface/diffusers/blob/e3bc4aab2ef7b319d2b49e99a25bc2b1b1363bfa/src/diffusers/lo... | closed | completed | false | 0 | [
"bug"
] | [] | 2025-02-21T01:59:26Z | 2025-02-21T12:16:33Z | 2025-02-21T12:16:32Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | guiyrt | 35,548,192 | MDQ6VXNlcjM1NTQ4MTky | User | false |
huggingface/diffusers | 2,868,278,653 | I_kwDOHa8MBc6q9nV9 | 10,853 | https://github.com/huggingface/diffusers/issues/10853 | https://api.github.com/repos/huggingface/diffusers/issues/10853 | I2VGen-XL is image2video not text2video | I2VGen-XL is not a text to video model but it only support image to video. Please fix the documentation mistake.
https://github.com/huggingface/diffusers/blob/6cef7d2366c05a72f6b1e034e9260636d1eccd8d/docs/source/en/api/pipelines/overview.md?plain=1#L57 | closed | completed | false | 0 | [] | [] | 2025-02-21T07:46:59Z | 2025-02-21T16:03:24Z | 2025-02-21T16:03:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | maxpaynestory | 462,460 | MDQ6VXNlcjQ2MjQ2MA== | User | false |
huggingface/diffusers | 2,868,345,784 | I_kwDOHa8MBc6q93u4 | 10,855 | https://github.com/huggingface/diffusers/issues/10855 | https://api.github.com/repos/huggingface/diffusers/issues/10855 | When using gradient accumulation, optimizer.step() runs every single step. Is it as expected? | ### Describe the bug
Should these three lines
```
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```
be indented to fall within the scope of `if accelerator.sync_gradients:`? When gradient accumulation is enabled, gradient updates should only occur within the... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-02-21T08:23:54Z | 2025-07-05T21:29:30Z | 2025-07-05T21:29:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tyxsspa | 15,213,900 | MDQ6VXNlcjE1MjEzOTAw | User | false |
huggingface/diffusers | 2,868,952,911 | I_kwDOHa8MBc6rAL9P | 10,860 | https://github.com/huggingface/diffusers/issues/10860 | https://api.github.com/repos/huggingface/diffusers/issues/10860 | The LoRA trained using train_text_to_image_lora.py cannot be used in ComfyUI. | ### Describe the bug
The LoRA trained using train_text_to_image_lora.py cannot be used in ComfyUI.
The error message is as follows:
CLIP model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load SD1ClipModel
loaded completely 9.5367431640625e+25 235.84423828125 True
lora key ... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-02-21T12:51:40Z | 2025-02-24T18:21:57Z | 2025-02-24T18:21:22Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | llm8047 | 94,896,845 | U_kgDOBagCzQ | User | false |
huggingface/diffusers | 2,868,969,173 | I_kwDOHa8MBc6rAP7V | 10,861 | https://github.com/huggingface/diffusers/issues/10861 | https://api.github.com/repos/huggingface/diffusers/issues/10861 | In StableAudioPipeline initial_audio_waveforms basically have no effect on output because of latent scaling | ### Describe the bug
I think it is not intended
In default pipe pipe.scheduler.init_noise_sigma = 500, first it scales up noise and then add latent of provided initial_audio_waveforms
So latent variable is like [-2000, 2000] while encoded audio is like [-4, 4]
I think this should be correct math here
```
latents = ra... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2025-02-21T12:59:03Z | 2025-03-23T15:02:50Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hadaev8 | 20,247,085 | MDQ6VXNlcjIwMjQ3MDg1 | User | false |
huggingface/diffusers | 2,869,028,135 | I_kwDOHa8MBc6rAeUn | 10,862 | https://github.com/huggingface/diffusers/issues/10862 | https://api.github.com/repos/huggingface/diffusers/issues/10862 | Support T5 loras for Flux | Some new Loras have trained T5 lora weights:
For example: https://civitai.com/models/1022387/randommaxx-illustrify
Following discussion in #10745 | closed | completed | false | 3 | [] | [] | 2025-02-21T13:26:16Z | 2025-02-25T12:57:01Z | 2025-02-25T12:56:45Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 2,869,242,275 | I_kwDOHa8MBc6rBSmj | 10,863 | https://github.com/huggingface/diffusers/issues/10863 | https://api.github.com/repos/huggingface/diffusers/issues/10863 | Error loading locally safetensors files | ### Describe the bug
Some models are getting error on load safetensors directly from .safetensors file.
### Reproduction
```python
StableDiffusionXLControlTilingPipeline.from_single_file("F:\\models\\Stable-diffusion\\RealVisXL v5.0 Lightning.safetensors",...)
```
@SunMarc - > metadata is returning {} len = 0 and ... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-02-21T14:51:08Z | 2025-02-21T18:56:17Z | 2025-02-21T18:56:17Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | elismasilva | 40,075,615 | MDQ6VXNlcjQwMDc1NjE1 | User | false |
huggingface/diffusers | 2,869,757,774 | I_kwDOHa8MBc6rDQdO | 10,865 | https://github.com/huggingface/diffusers/issues/10865 | https://api.github.com/repos/huggingface/diffusers/issues/10865 | SDXL_base-1.0 VS SDXL_base-1.0-inpainting 0.1 for inpainting | ### Describe the bug
Dear all,
I have quick question on Unet config on SDXL-base-1.0 and SDXL-base-1.0-inpainting-0.1.
In the each Unet config, each in_channel parameter has different value like 4 and 9.
Why they have different setting?
and which model is better for inpainting? my experiment results is that utilizi... | closed | completed | true | 3 | [
"bug"
] | [] | 2025-02-21T18:38:57Z | 2025-02-22T02:18:59Z | 2025-02-22T02:18:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | john09282922 | 114,594,549 | U_kgDOBtSS9Q | User | false |
huggingface/diffusers | 2,869,804,083 | I_kwDOHa8MBc6rDbwz | 10,866 | https://github.com/huggingface/diffusers/issues/10866 | https://api.github.com/repos/huggingface/diffusers/issues/10866 | Lumina Image 2.0 lora not working with lora available on Civitai | ### Describe the bug
Using Lumina 2.0 lora from civitai throw error.
Works fine for https://huggingface.co/sayakpaul/trained-lumina2-lora-yarn
### Reproduction
I tried using loras listed here
https://civitai.com/search/models?baseModel=Lumina&modelType=LORA&sortBy=models_v9&query=lumina
with code
https://huggingfa... | closed | completed | false | 8 | [
"bug"
] | [] | 2025-02-21T19:04:59Z | 2025-03-07T12:28:57Z | 2025-03-07T12:28:56Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,870,612,384 | I_kwDOHa8MBc6rGhGg | 10,869 | https://github.com/huggingface/diffusers/issues/10869 | https://api.github.com/repos/huggingface/diffusers/issues/10869 | Tensor.item() cannot be called on meta tensors | ### Describe the bug
As explained in the documentation, I am trying to use this feature to save memory
https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/memory.md#cpu-offloading
I understand that enable_sequential_cpu_offload is currently not possible with bitsandbytes - int4 for which bu... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-02-22T08:57:07Z | 2025-02-22T18:06:51Z | 2025-02-22T16:10:17Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,873,120,689 | I_kwDOHa8MBc6rQFex | 10,872 | https://github.com/huggingface/diffusers/issues/10872 | https://api.github.com/repos/huggingface/diffusers/issues/10872 | [Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model | **Is your feature request related to a problem? Please describe.**
We all know Sana model is very good but unfortunately the LICENSE is restrictive.
Recently a Sana finetuned model is released under Apache LICENSE. Unfortunately SanaTransformer2DModel does not support from_single_file to use it
**Describe the solution... | closed | completed | false | 5 | [
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | [] | 2025-02-23T11:36:21Z | 2025-03-10T03:08:32Z | 2025-03-10T03:08:32Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,873,139,143 | I_kwDOHa8MBc6rQJ_H | 10,873 | https://github.com/huggingface/diffusers/issues/10873 | https://api.github.com/repos/huggingface/diffusers/issues/10873 | Issue with SanaPipeline with Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers | ### Describe the bug
Not able to generate any decent image using both 512 and 1024 model (2K and 4K works fine). It wasn't always like this, earlier it used to work.

### Reproduction
https://huggingface.co/Efficient-Large-Mode... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-02-23T11:58:11Z | 2025-02-23T17:29:52Z | 2025-02-23T17:29:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,873,145,123 | I_kwDOHa8MBc6rQLcj | 10,874 | https://github.com/huggingface/diffusers/issues/10874 | https://api.github.com/repos/huggingface/diffusers/issues/10874 | Does it support adding LoHa method | Does it support adding LoHa method?
Where can I modify it? | open | null | false | 3 | [
"stale"
] | [] | 2025-02-23T12:06:14Z | 2025-03-25T15:03:41Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | llm8047 | 94,896,845 | U_kgDOBagCzQ | User | false |
huggingface/diffusers | 2,873,215,869 | I_kwDOHa8MBc6rQct9 | 10,878 | https://github.com/huggingface/diffusers/issues/10878 | https://api.github.com/repos/huggingface/diffusers/issues/10878 | How to expand peft.LoraConfig | If expanding
peft.LoraConfig, How to modify to accommodate more lora? | open | null | false | 5 | [
"stale"
] | [] | 2025-02-23T14:01:11Z | 2025-03-25T15:03:28Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | llm8047 | 94,896,845 | U_kgDOBagCzQ | User | false |
huggingface/diffusers | 1,430,369,399 | I_kwDOHa8MBc5VQbR3 | 1,088 | https://github.com/huggingface/diffusers/issues/1088 | https://api.github.com/repos/huggingface/diffusers/issues/1088 | train_dreambooth_flax.py 😭 | ### Describe the bug
I tried CompVis/stable-diffusion-v1-4 flax and bf16 branches with train_dreambooth_flax.py but not working and I can generate images with this code in same vm
```
real_seed = random.randint(0, 2147483647)
prng_seed = jax.random.PRNGKey(real_seed)
num_samples = jax.device_count()
... | closed | completed | false | 7 | [
"bug"
] | [
"patil-suraj"
] | 2022-10-31T19:24:00Z | 2022-11-03T23:33:24Z | 2022-11-03T23:33:24Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | camenduru | 54,370,274 | MDQ6VXNlcjU0MzcwMjc0 | User | false |
huggingface/diffusers | 2,873,554,332 | I_kwDOHa8MBc6rRvWc | 10,883 | https://github.com/huggingface/diffusers/issues/10883 | https://api.github.com/repos/huggingface/diffusers/issues/10883 | Marigold Update: v1-1 models, Intrinsic Image Decomposition pipeline, documentation | ### Model/Pipeline/Scheduler description
This ticket tracks updates and refactoring of the Marigold pipelines to support the new `v1-1` models, enabling fast inference (1-4 steps) with DDIM. It also introduces the Intrinsic Image Decomposition (IID) pipeline, along with its tests and documentation.
### Open source st... | closed | completed | false | 0 | [] | [] | 2025-02-23T23:33:59Z | 2025-02-26T00:13:03Z | 2025-02-26T00:13:03Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | toshas | 4,390,695 | MDQ6VXNlcjQzOTA2OTU= | User | false |
huggingface/diffusers | 2,873,942,257 | I_kwDOHa8MBc6rTODx | 10,886 | https://github.com/huggingface/diffusers/issues/10886 | https://api.github.com/repos/huggingface/diffusers/issues/10886 | Time series diffusion | Diffusers is go to for me to train diffusion models from scratch. While it has a lot of recipes for image/video and text, I find that time series diffusion is missing. One good example is https://github.com/yyysjz1997/Awesome-TimeSeries-SpatioTemporal-Diffusion-Model/blob/main/README.md I would be happy to contribute i... | closed | completed | false | 2 | [
"stale"
] | [] | 2025-02-24T06:08:13Z | 2025-07-05T21:28:55Z | 2025-07-05T21:28:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | manmeet3591 | 8,520,638 | MDQ6VXNlcjg1MjA2Mzg= | User | false |
huggingface/diffusers | 2,874,301,442 | I_kwDOHa8MBc6rUlwC | 10,887 | https://github.com/huggingface/diffusers/issues/10887 | https://api.github.com/repos/huggingface/diffusers/issues/10887 | cannot import name 'StableDiffusionXLTrainer' from 'diffusers' | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...].
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered.**
A clear and co... | closed | completed | false | 5 | [] | [] | 2025-02-24T09:17:57Z | 2025-02-24T11:56:54Z | 2025-02-24T11:56:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | HripsimeS | 42,246,765 | MDQ6VXNlcjQyMjQ2NzY1 | User | false |
huggingface/diffusers | 2,874,702,761 | I_kwDOHa8MBc6rWHup | 10,890 | https://github.com/huggingface/diffusers/issues/10890 | https://api.github.com/repos/huggingface/diffusers/issues/10890 | Suggestion to add TEs | ### Did you like the remote VAE solution?
I do... it's helpful when training/finetuning models, makes the infrastructure easier to set up.
### What can be improved about the current solution?
Adding TEs would be very helpful for a complete solution for auxiliary model off-loading during training.
Consider caching, t... | open | null | false | 2 | [
"wip",
"remote-vae"
] | [] | 2025-02-24T11:49:00Z | 2025-03-27T04:16:12Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | janekm | 100,807 | MDQ6VXNlcjEwMDgwNw== | User | false |
huggingface/diffusers | 2,874,740,447 | I_kwDOHa8MBc6rWQ7f | 10,891 | https://github.com/huggingface/diffusers/issues/10891 | https://api.github.com/repos/huggingface/diffusers/issues/10891 | Internal type conversions breaks DCAE on bfloat16 | ### Describe the bug
Trying to use dcae
But i get the error
```RuntimeError: expected scalar type Float but found BFloat16```
Despite both the model and the tensor being the same type.
Looking at the apply_quadratic_attention function on which it errors
```
def apply_quadratic_attention(self, query: torch.Tenso... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-02-24T12:02:21Z | 2025-02-24T12:08:16Z | 2025-02-24T12:08:14Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | SwayStar123 | 46,050,679 | MDQ6VXNlcjQ2MDUwNjc5 | User | false |
huggingface/diffusers | 2,875,313,493 | I_kwDOHa8MBc6rYc1V | 10,892 | https://github.com/huggingface/diffusers/issues/10892 | https://api.github.com/repos/huggingface/diffusers/issues/10892 | Adding support for Datasets in the StableDiffusionPipeline | **Is your feature request related to a problem? Please describe.**
Contrary to other types of pipelines, the Stable Diffusion pipeline doesn't support directly using a HF Dataset as input, i.e.
```
base = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp... | closed | completed | false | 2 | [] | [] | 2025-02-24T15:14:01Z | 2025-02-24T18:52:00Z | 2025-02-24T18:51:58Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sashavor | 14,205,986 | MDQ6VXNlcjE0MjA1OTg2 | User | false |
huggingface/diffusers | 2,875,793,312 | I_kwDOHa8MBc6raR-g | 10,893 | https://github.com/huggingface/diffusers/issues/10893 | https://api.github.com/repos/huggingface/diffusers/issues/10893 | [Feature request] CogvideoX Controlnet integration for 5B / 2B | **Is your feature request related to a problem? Please describe.**
Came across and would be useful addition
https://github.com/TheDenk/cogvideox-controlnet
**Describe the solution you'd like.**
If possible add the controlnet support for CogVideoX. The existing code is based on diffusers only
**Describe alternatives y... | closed | completed | false | 1 | [
"stale"
] | [] | 2025-02-24T18:15:02Z | 2025-03-27T15:14:10Z | 2025-03-27T15:14:10Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,876,219,621 | I_kwDOHa8MBc6rb6Dl | 10,895 | https://github.com/huggingface/diffusers/issues/10895 | https://api.github.com/repos/huggingface/diffusers/issues/10895 | Need to handle v0.33.0 deprecations | - https://github.com/huggingface/diffusers/blob/87599691b9b2b21921e5a403872eb9851ff59f63/src/diffusers/utils/export_utils.py#L144
- https://github.com/huggingface/diffusers/blob/87599691b9b2b21921e5a403872eb9851ff59f63/src/diffusers/models/embeddings.py#L186
cc @DN6 | closed | completed | false | 1 | [
"stale"
] | [] | 2025-02-24T21:41:49Z | 2025-06-10T20:49:58Z | 2025-06-10T20:49:58Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | a-r-r-o-w | 72,266,394 | MDQ6VXNlcjcyMjY2Mzk0 | User | false |
huggingface/diffusers | 2,876,735,736 | I_kwDOHa8MBc6rd4D4 | 10,896 | https://github.com/huggingface/diffusers/issues/10896 | https://api.github.com/repos/huggingface/diffusers/issues/10896 | Support SkyReels-A1 for expressive portrait animation. | [https://github.com/SkyworkAI/SkyReels-A1](url) is a creative productivity tool that can transfer expressions and motions onto the portrait. Can diffusers support it, so that the community can simply use this project, thanks.
**Open source status:**
- [x] The model implementation is available.
- [x] The model weights... | closed | completed | false | 0 | [] | [] | 2025-02-25T02:58:31Z | 2025-02-25T02:59:13Z | 2025-02-25T02:59:13Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | qiudi0127 | 26,405,062 | MDQ6VXNlcjI2NDA1MDYy | User | false |
huggingface/diffusers | 2,876,742,806 | I_kwDOHa8MBc6rd5yW | 10,897 | https://github.com/huggingface/diffusers/issues/10897 | https://api.github.com/repos/huggingface/diffusers/issues/10897 | Support SkyReels-A1 for expressive portrait animation. | ### Model/Pipeline/Scheduler description
[https://github.com/SkyworkAI/SkyReels-A1](https://github.com/huggingface/diffusers/issues/url) is a creative productivity tool that can transfer expressions and motions onto the portrait. Can diffusers support it, so that the community can simply use this project, thanks.
**O... | open | null | false | 1 | [
"contributions-welcome"
] | [] | 2025-02-25T03:02:33Z | 2025-03-31T13:57:11Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | qiudi0127 | 26,405,062 | MDQ6VXNlcjI2NDA1MDYy | User | false |
huggingface/diffusers | 2,877,586,180 | I_kwDOHa8MBc6rhHsE | 10,899 | https://github.com/huggingface/diffusers/issues/10899 | https://api.github.com/repos/huggingface/diffusers/issues/10899 | Whether lohaconfig is supported in the convert_state_dict_to_diffusers method | In the train_text_to_image_lora.py file
unet_lora_config = LoraConfig(
r=cfg.rank,
lora_alpha=cfg.rank,
init_lora_weights="gaussian",
target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
modified to
unet_lora_config = LoHaConfig(
r=cfg.rank,
alpha=cfg.rank,
... | open | null | false | 2 | [
"stale"
] | [] | 2025-02-25T08:39:08Z | 2025-03-27T15:03:17Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | llm8047 | 94,896,845 | U_kgDOBagCzQ | User | false |
huggingface/diffusers | 2,877,676,676 | I_kwDOHa8MBc6rhdyE | 10,900 | https://github.com/huggingface/diffusers/issues/10900 | https://api.github.com/repos/huggingface/diffusers/issues/10900 | The docs notebooks are not updated since Jul 26, 2023 | The docs notebooks, where you land when clicking "Open In Colab" from the docs are not updated since Jul 26, 2023, after the merge of:
- #4277
For example:
- Go to the docs: https://huggingface.co/docs/diffusers/en/using-diffusers/depth2img
- Click "Open in Colab"
- And open the associated docs notebook: https://colab... | closed | completed | false | 6 | [] | [] | 2025-02-25T09:12:21Z | 2025-06-23T08:36:20Z | 2025-06-23T08:36:00Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | User | false |
huggingface/diffusers | 2,878,030,534 | I_kwDOHa8MBc6ri0LG | 10,901 | https://github.com/huggingface/diffusers/issues/10901 | https://api.github.com/repos/huggingface/diffusers/issues/10901 | HunyuanVIdeo in diffusers use negative_prompt but generate wrong video | ### Describe the bug
Diffusers support negative_prompt for hunyuan_video recently, but when I use negative_prompt and set **guidance_scale** and **true_cfg_scale**, I got a video with all black elements. Maybe I set wrong parameters or save video fail.
How can I fix my problem? Thanks
### Reproduction
import torch
... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2025-02-25T11:08:43Z | 2025-07-15T07:19:15Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | philipwan | 18,528,611 | MDQ6VXNlcjE4NTI4NjEx | User | false |
huggingface/diffusers | 2,878,396,534 | I_kwDOHa8MBc6rkNh2 | 10,903 | https://github.com/huggingface/diffusers/issues/10903 | https://api.github.com/repos/huggingface/diffusers/issues/10903 | need Wan2.1 of diffusers version | https://huggingface.co/Wan-AI
Wan2.1 offers these key features:
👍 SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
👍 Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible w... | closed | completed | false | 3 | [
"stale"
] | [] | 2025-02-25T13:24:25Z | 2025-03-28T14:37:12Z | 2025-03-28T14:37:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhaoyun0071 | 35,762,050 | MDQ6VXNlcjM1NzYyMDUw | User | false |
huggingface/diffusers | 2,879,008,960 | I_kwDOHa8MBc6rmjDA | 10,904 | https://github.com/huggingface/diffusers/issues/10904 | https://api.github.com/repos/huggingface/diffusers/issues/10904 | CLIP Score Evaluation without Pre-processing. | I am referring to [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/en/conceptual/evaluation), specifically the quantitative evaluation using CLIP score example.
We have images of shape (6, 512, 512, 3).
CLIP score is calculated using `"openai/clip-vit-base-patch16"`.
However, as far as I can... | open | null | false | 1 | [
"stale"
] | [] | 2025-02-25T16:51:44Z | 2025-03-28T15:03:20Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | e-delaney | 55,100,385 | MDQ6VXNlcjU1MTAwMzg1 | User | false |
huggingface/diffusers | 2,881,861,845 | I_kwDOHa8MBc6rxbjV | 10,910 | https://github.com/huggingface/diffusers/issues/10910 | https://api.github.com/repos/huggingface/diffusers/issues/10910 | ValueError: Attempting to unscale FP16 gradients. | ### Describe the bug
I encountered the following error when running train_text_to_image_lora.py: ValueError: Attempting to unscale FP16 gradients.
The script I am running is as follows:
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/naruto-blip-captions"
accelerate launch --mixed_... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-02-26T14:43:57Z | 2025-03-18T17:43:08Z | 2025-03-03T11:33:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Messimanda | 46,048,341 | MDQ6VXNlcjQ2MDQ4MzQx | User | false |
huggingface/diffusers | 2,882,485,677 | I_kwDOHa8MBc6rzz2t | 10,913 | https://github.com/huggingface/diffusers/issues/10913 | https://api.github.com/repos/huggingface/diffusers/issues/10913 | Loading dataset from disk where it exists | In these scripts [[1](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py)] [[2](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py)], loading a dataset that is saved with `save_to_disk` throws an error, and one is forced to... | open | null | false | 2 | [] | [] | 2025-02-26T18:35:29Z | 2025-03-11T04:15:54Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | surya-narayanan | 17,240,858 | MDQ6VXNlcjE3MjQwODU4 | User | false |
huggingface/diffusers | 2,882,840,872 | I_kwDOHa8MBc6r1Kko | 10,914 | https://github.com/huggingface/diffusers/issues/10914 | https://api.github.com/repos/huggingface/diffusers/issues/10914 | Model getting offloaded to CPU without user's intention | ### Describe the bug
I came across an issue that my model kept getting moved to CPU after loading LoRA weights with the `load_lora_weights()` method.
I found out that `is_sequential_cpu_offload` is set to `True` while loading LoRA on https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py... | open | null | false | 0 | [
"bug"
] | [] | 2025-02-26T21:19:52Z | 2025-02-26T21:19:52Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | janzd | 5,581,222 | MDQ6VXNlcjU1ODEyMjI= | User | false |
huggingface/diffusers | 2,884,210,484 | I_kwDOHa8MBc6r6Y80 | 10,917 | https://github.com/huggingface/diffusers/issues/10917 | https://api.github.com/repos/huggingface/diffusers/issues/10917 | Is lumina-2.0 script correct? | I wrote a script, based on the one provided [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
it gets stuck at a loss around 0.5, and I think that is a lot, isn't it? | open | null | true | 3 | [] | [] | 2025-02-27T11:17:00Z | 2025-02-28T15:46:43Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Riko0 | 5,251,810 | MDQ6VXNlcjUyNTE4MTA= | User | false |
huggingface/diffusers | 1,430,777,861 | I_kwDOHa8MBc5VR_AF | 1,092 | https://github.com/huggingface/diffusers/issues/1092 | https://api.github.com/repos/huggingface/diffusers/issues/1092 | Euler & Attention Slicing | ### Describe the bug
Whenever I try to use the new implementation of Euler and enable attention slicing, it throws me the below error
### Reproduction
_No response_
### Logs
```shell
File "C:\GPU_Farm\main1.py", line 313, in check_jobs
images = make(pending, gpu)
File "C:\GPU_Farm\main1.py", line 285, in m... | closed | completed | false | 2 | [
"bug"
] | [] | 2022-11-01T02:52:03Z | 2022-11-03T18:14:17Z | 2022-11-03T18:14:17Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dblunk88 | 39,381,389 | MDQ6VXNlcjM5MzgxMzg5 | User | false |
huggingface/diffusers | 2,884,272,467 | I_kwDOHa8MBc6r6oFT | 10,920 | https://github.com/huggingface/diffusers/issues/10920 | https://api.github.com/repos/huggingface/diffusers/issues/10920 | [DDIMInverseScheduler] `inf` values at first iteration when `set_alpha_to_one=True` and `prediction_type="sample"` | ### Describe the bug
I got `inf` values in https://github.com/huggingface/diffusers/blob/560fb5f4d65b8593c13e4be50a59b1fd9c2d9992/src/diffusers/schedulers/scheduling_ddim_inverse.py#L347
because `beta_prod_t` is 0.0 at the first iteration, when `timestep` is < 0 and `beta_prod_t` is 0.0 (because `alpha_prod_t` is se... | open | null | false | 3 | [
"bug",
"stale"
] | [] | 2025-02-27T11:44:57Z | 2026-02-03T15:24:16Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | andreabosisio | 79,710,398 | MDQ6VXNlcjc5NzEwMzk4 | User | false |
huggingface/diffusers | 2,886,257,699 | I_kwDOHa8MBc6sCMwj | 10,925 | https://github.com/huggingface/diffusers/issues/10925 | https://api.github.com/repos/huggingface/diffusers/issues/10925 | Dreambooth finetune FLUX dev CLIPTextModel | ### Describe the bug
ValueError: Sequence length must be less than max_position_embeddings (got `sequence length`: 77 and max_position_embeddings: 0
I used four A100 to full amount of fine-tuning Flux. 1 dev model, according to https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md
I u... | open | null | false | 17 | [
"bug",
"stale",
"training"
] | [] | 2025-02-28T05:44:17Z | 2026-02-03T15:24:12Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Wuyiche | 114,409,368 | U_kgDOBtG_mA | User | false |
huggingface/diffusers | 2,886,662,160 | I_kwDOHa8MBc6sDvgQ | 10,928 | https://github.com/huggingface/diffusers/issues/10928 | https://api.github.com/repos/huggingface/diffusers/issues/10928 | OSError: Error no file named diffusion_pytorch_model.fp16.bin found in directory /local/data/RealVisXL_V5.0/unet | ### Describe the bug
pipe = SDXLLongPromptWeightingPipeline.from_pretrained(repo_path,
File "/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 924,... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-02-28T09:33:50Z | 2025-02-28T09:50:55Z | 2025-02-28T09:50:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kolar1988 | 39,948,604 | MDQ6VXNlcjM5OTQ4NjA0 | User | false |
huggingface/diffusers | 1,430,893,541 | I_kwDOHa8MBc5VSbPl | 1,093 | https://github.com/huggingface/diffusers/issues/1093 | https://api.github.com/repos/huggingface/diffusers/issues/1093 | fail to train textual inversion with DeepSpeed | ### Describe the bug
When I try the demo in https://github.com/huggingface/diffusers/blob/main/docs/source/training/text_inversion.mdx
at the step:
```
accelerate config
```
when I choose the option to use DeepSpeed with yes
it will break with this error each time (tested with different devices, T4 and A5000 ... | closed | completed | false | 3 | [
"bug",
"stale"
] | [
"patil-suraj"
] | 2022-11-01T05:26:07Z | 2022-12-11T15:03:06Z | 2022-12-11T15:03:06Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CrazyBoyM | 35,400,185 | MDQ6VXNlcjM1NDAwMTg1 | User | false |
huggingface/diffusers | 2,887,943,145 | I_kwDOHa8MBc6sIoPp | 10,932 | https://github.com/huggingface/diffusers/issues/10932 | https://api.github.com/repos/huggingface/diffusers/issues/10932 | HunyuanVideo pipe.transformer.compile(): torch._dynamo hit config.recompile_limit (8) | ### Describe the bug
HunyuanVideo transformer compilation is not working as expected and results also in corrupted output video.
See [here](https://github.com/huggingface/diffusers/pull/10730#issuecomment-2639842593) related discussion as well and a functioning example for Flux-1.dev.
### Reproduction
```
import to... | open | null | false | 5 | [
"bug",
"stale"
] | [] | 2025-02-28T19:46:09Z | 2025-04-26T15:03:41Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | eppaneamd | 186,286,759 | U_kgDOCxqCpw | User | false |
huggingface/diffusers | 2,888,027,900 | I_kwDOHa8MBc6sI878 | 10,933 | https://github.com/huggingface/diffusers/issues/10933 | https://api.github.com/repos/huggingface/diffusers/issues/10933 | error load IP-adapter xl plus version | ### Describe the bug
RuntimeError: Error(s) in loading state_dict for ImageProjModel:
Missing key(s) in state_dict: "proj.weight", "proj.bias", "norm.weight", "norm.bias".
When I use ip-adapter-plus_sdxl_vit-h.safetensors or ip-adapter-plus_sdxl_vit-h.bin, it make that errors.
Is it wrong file? I use image_... | closed | completed | false | 0 | [
"bug"
] | [] | 2025-02-28T20:42:29Z | 2025-02-28T21:19:20Z | 2025-02-28T21:19:20Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | john09282922 | 114,594,549 | U_kgDOBtSS9Q | User | false |
huggingface/diffusers | 2,888,684,821 | I_kwDOHa8MBc6sLdUV | 10,935 | https://github.com/huggingface/diffusers/issues/10935 | https://api.github.com/repos/huggingface/diffusers/issues/10935 | parameter missing in documentation and matrices can't be multiplied | null | closed | completed | false | 0 | [
"bug"
] | [] | 2025-03-01T07:15:52Z | 2025-03-22T05:10:31Z | 2025-03-02T08:59:48Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Rts-Wu | 181,694,156 | U_kgDOCtRuzA | User | false |
huggingface/diffusers | 2,889,335,612 | I_kwDOHa8MBc6sN8M8 | 10,937 | https://github.com/huggingface/diffusers/issues/10937 | https://api.github.com/repos/huggingface/diffusers/issues/10937 | torch.compile errors on vae.encode | ### Describe the bug
torch.compile fails at compiling `vae.encode` while is ok at compiling `vae`.
By removing `@apply_forward_hook`, it works.
### Reproduction
```python
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
vae = AutoencoderKL.from_pretrained(
'stable-diffusion-v1... | closed | completed | false | 6 | [
"bug",
"stale"
] | [] | 2025-03-02T05:13:09Z | 2025-06-14T01:41:22Z | 2025-06-14T01:41:22Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Luciennnnnnn | 20,135,317 | MDQ6VXNlcjIwMTM1MzE3 | User | false |
huggingface/diffusers | 1,430,982,683 | I_kwDOHa8MBc5VSxAb | 1,094 | https://github.com/huggingface/diffusers/issues/1094 | https://api.github.com/repos/huggingface/diffusers/issues/1094 | Explore potential speed benefits from implementing kernl (Up to 12X faster GPU inference) | **Is your feature request related to a problem? Please describe.**
N/A
**Describe the solution you'd like**
Explore whether implementing kernl would provide speedups for Stable Diffusion.
- https://github.com/ELS-RD/kernl
- > Kernl lets you run PyTorch transformer models several times faster on GPU with a si... | closed | completed | false | 9 | [
"stale"
] | [] | 2022-11-01T07:08:12Z | 2022-12-09T15:03:49Z | 2022-12-09T15:03:49Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | 0xdevalias | 753,891 | MDQ6VXNlcjc1Mzg5MQ== | User | false |
huggingface/diffusers | 2,889,998,882 | I_kwDOHa8MBc6sQeIi | 10,940 | https://github.com/huggingface/diffusers/issues/10940 | https://api.github.com/repos/huggingface/diffusers/issues/10940 | diffusers/utils/import_utils.py No module named 'triton.ops' | ### Describe the bug
` File "/home/zanepoe/miniconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/utils/import_utils.py", line 910, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zanepoe/miniconda3/envs/comfyui... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-03-03T02:24:00Z | 2025-03-05T13:44:04Z | 2025-03-05T13:44:03Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ZanePoe | 16,047,426 | MDQ6VXNlcjE2MDQ3NDI2 | User | false |
huggingface/diffusers | 2,892,174,312 | I_kwDOHa8MBc6sYxPo | 10,950 | https://github.com/huggingface/diffusers/issues/10950 | https://api.github.com/repos/huggingface/diffusers/issues/10950 | Accelerate Inference Section of Doc Broken | ### Describe the bug
https://huggingface.co/docs/diffusers/en/tutorials/fast_diffusion
` from diffusers import StableDiffusionXLPipeline
import torch
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")
prompt = "Astronaut in a... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-03-03T19:54:22Z | 2025-03-04T18:09:48Z | 2025-03-04T18:09:47Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AbhinavGopal | 43,016,805 | MDQ6VXNlcjQzMDE2ODA1 | User | false |
huggingface/diffusers | 2,892,806,769 | I_kwDOHa8MBc6sbLpx | 10,952 | https://github.com/huggingface/diffusers/issues/10952 | https://api.github.com/repos/huggingface/diffusers/issues/10952 | 【SDXLLongPromptWeightingPipeline load lora】ValueError: Adapter name(s) {'toy'} not in the list of present adapters: {'default_0'}. | ### Describe the bug
SDXLLongPromptWeightingPipeline load lora raise ValueError
ValueError: Adapter name(s) {'toy'} not in the list of present adapters: {'default_0'}.
Diffusers version: 0.32.2
### Reproduction
```python
import torch
import os, sys
from diffusers import DiffusionPipeline
import SDXLLongPromptWeigh... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-03-04T03:15:06Z | 2025-04-08T07:48:34Z | 2025-04-08T07:48:34Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kolar1988 | 39,948,604 | MDQ6VXNlcjM5OTQ4NjA0 | User | false |
huggingface/diffusers | 2,892,872,010 | I_kwDOHa8MBc6sbblK | 10,953 | https://github.com/huggingface/diffusers/issues/10953 | https://api.github.com/repos/huggingface/diffusers/issues/10953 | should use 'permute' replace 'transpose' | ### Describe the bug
https://github.com/huggingface/diffusers/blob/8f15be169fdc0329d2745faa6a9d91605e416cde/src/diffusers/image_processor.py#L192
TypeError: transpose() received an invalid combination of arguments - got (int, int, int, int), but expected one of:
* (int dim0, int dim1)
* (name dim0, name dim1)
sho... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-03-04T04:09:58Z | 2025-03-04T11:33:51Z | 2025-03-04T11:33:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nku-shengzheliu | 77,380,578 | MDQ6VXNlcjc3MzgwNTc4 | User | false |
huggingface/diffusers | 2,893,013,927 | I_kwDOHa8MBc6sb-On | 10,954 | https://github.com/huggingface/diffusers/issues/10954 | https://api.github.com/repos/huggingface/diffusers/issues/10954 | Another comfy-ui compatible FLUX1.dev LoRA fails to load | ### Describe the bug
https://civitai.com/models/631986/xlabs-flux-realism-lora
This LoRA fails to load. Specifically, I get a `ValueError: Incompatible keys detected`.
### Reproduction
```py
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dt... | open | null | false | 13 | [
"bug",
"stale",
"lora"
] | [] | 2025-03-04T05:42:03Z | 2025-05-02T15:04:05Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AmericanPresidentJimmyCarter | 110,263,573 | U_kgDOBpJ9FQ | User | false |
huggingface/diffusers | 2,893,719,636 | I_kwDOHa8MBc6seqhU | 10,958 | https://github.com/huggingface/diffusers/issues/10958 | https://api.github.com/repos/huggingface/diffusers/issues/10958 | Cannot copy out of meta tensor; no data! with CogView4Pipeline | ### Describe the bug
Getting error while using enable_sequential_cpu_offload.
The models are used without quantization so it should work.
### Reproduction
```python
from diffusers import CogView4Pipeline
import torch
pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.enabl... | closed | completed | false | 9 | [
"bug"
] | [] | 2025-03-04T10:23:33Z | 2025-04-02T15:51:24Z | 2025-04-02T15:51:24Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,894,152,876 | I_kwDOHa8MBc6sgUSs | 10,962 | https://github.com/huggingface/diffusers/issues/10962 | https://api.github.com/repos/huggingface/diffusers/issues/10962 | Cogview4 pipeline not accepting prompt embeds, due to shape issues . | ### Describe the bug
I've trying to run CogView4 using separate pipelines to encode text and generate the image in order to save memory (Unified Memory so I can't use offloading) with the aim of doing multiple prompts
e.g.
```py
te_pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B",
... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-03-04T12:50:44Z | 2025-03-05T06:11:03Z | 2025-03-05T06:11:03Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Vargol | 62,868 | MDQ6VXNlcjYyODY4 | User | false |
huggingface/diffusers | 2,894,155,329 | I_kwDOHa8MBc6sgU5B | 10,963 | https://github.com/huggingface/diffusers/issues/10963 | https://api.github.com/repos/huggingface/diffusers/issues/10963 | cannot import name 'AutoencoderKLWan' from 'diffusers' | ### Describe the bug
ImportError: cannot import name 'AutoencoderKLWan' from 'diffusers' (/usr/local/lib/python3.10/dist-packages/diffusers/__init__.py)
### Reproduction
from diffusers import AutoencoderKLWan, WanPipeline
### Logs
```shell
```
### System Info
diffusers-0.32.2,linux,python3.10
### Who can help?... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-03-04T12:51:41Z | 2025-03-04T13:03:13Z | 2025-03-04T13:03:13Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | spawner1145 | 179,383,288 | U_kgDOCrEr-A | User | false |
huggingface/diffusers | 2,895,632,961 | I_kwDOHa8MBc6sl9pB | 10,967 | https://github.com/huggingface/diffusers/issues/10967 | https://api.github.com/repos/huggingface/diffusers/issues/10967 | Integrate LaVie (IJCV 2024) and Cinemo (CVPR 2025) to diffusers | ### Model/Pipeline/Scheduler description
I am one of the authors of LaVie (IJCV 2024) and Cinemo (CVPR 2025). We are considering submitting pull requests (PRs) to the diffusers repository for these two video models, both of which are based on U-Net architectures. We would like to know if `diffusers` is still open to s... | open | null | false | 1 | [
"stale"
] | [] | 2025-03-04T22:53:19Z | 2025-04-04T15:02:52Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | maxin-cn | 38,418,898 | MDQ6VXNlcjM4NDE4ODk4 | User | false |
huggingface/diffusers | 2,895,926,256 | I_kwDOHa8MBc6snFPw | 10,969 | https://github.com/huggingface/diffusers/issues/10969 | https://api.github.com/repos/huggingface/diffusers/issues/10969 | Run FLUX-controlnet zero3 training failed: 'weight' must be 2-D | ### Describe the bug
I am attempting to use Zero-3 for Flux Controlnet training on 8 GPUs following the guidance of [README](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_flux.md#apply-deepspeed-zero3). The error below occurred:
```
[rank0]: RuntimeError: 'weight' must be 2-D
```
### ... | open | null | false | 17 | [
"bug",
"stale"
] | [] | 2025-03-05T02:14:09Z | 2026-02-03T15:24:05Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | alien-0119 | 185,337,709 | U_kgDOCwwHbQ | User | false |
huggingface/diffusers | 1,431,770,564 | I_kwDOHa8MBc5VVxXE | 1,097 | https://github.com/huggingface/diffusers/issues/1097 | https://api.github.com/repos/huggingface/diffusers/issues/1097 | Dreambooth and DeepSpeedCPUAdam | Hello,
I was playing with a Dreambooth example and noticed a mention that about deepspeed optimizer
> ```deepspeed.ops.adam.DeepSpeedCPUAdam``` gives a substantial speedup
Whenever I change `torch.optim.AdamW` to `deepspeed.ops.adam.DeepSpeedCPUAdam` explicitly it actually gives a speedup.
I am wondering w... | closed | completed | false | 3 | [
"stale"
] | [
"patil-suraj"
] | 2022-11-01T17:02:04Z | 2022-12-11T15:03:05Z | 2022-12-11T15:03:05Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zetyquickly | 25,350,960 | MDQ6VXNlcjI1MzUwOTYw | User | false |
huggingface/diffusers | 2,897,286,388 | I_kwDOHa8MBc6ssRT0 | 10,972 | https://github.com/huggingface/diffusers/issues/10972 | https://api.github.com/repos/huggingface/diffusers/issues/10972 | Loading LoRA weights fails for OneTrainer Flux LoRAs | ### Describe the bug
Loading [OneTrainer](https://github.com/Nerogar/OneTrainer) style LoRAs, using diffusers commit #[dcd77ce22273708294b7b9c2f7f0a4e45d7a9f33](https://github.com/huggingface/diffusers/commit/dcd77ce22273708294b7b9c2f7f0a4e45d7a9f33), fails with error:
```
Traceback (most recent call last):
File "... | closed | completed | false | 2 | [
"bug"
] | [
"sayakpaul"
] | 2025-03-05T13:07:40Z | 2025-03-06T08:33:34Z | 2025-03-06T08:23:37Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | spezialspezial | 75,758,219 | MDQ6VXNlcjc1NzU4MjE5 | User | false |
huggingface/diffusers | 1,432,088,503 | I_kwDOHa8MBc5VW--3 | 1,098 | https://github.com/huggingface/diffusers/issues/1098 | https://api.github.com/repos/huggingface/diffusers/issues/1098 | convert_original_stable_diffusion_to_diffusers.py issue | null | closed | completed | false | 0 | [
"bug"
] | [] | 2022-11-01T20:57:14Z | 2022-11-02T04:45:25Z | 2022-11-02T02:26:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CitizenRoby | 69,502,183 | MDQ6VXNlcjY5NTAyMTgz | User | false |
huggingface/diffusers | 2,900,599,603 | I_kwDOHa8MBc6s46Mz | 10,986 | https://github.com/huggingface/diffusers/issues/10986 | https://api.github.com/repos/huggingface/diffusers/issues/10986 | Diffusers Transformer Pipeline Produces ComplexDouble Tensors on MPS, Causing Conversion Error | ### Describe the bug
When running the WanPipeline from diffusers on an MPS device, the pipeline fails with the error:
`TypeError: Trying to convert ComplexDouble to the MPS backend but it does not have support for that dtype.`
Investigation indicates that in the transformer component (specifically in the rotary posit... | closed | completed | false | 6 | ["bug", "stale"] | ["yiyixuxu", "a-r-r-o-w"] | 2025-03-06T14:39:06Z | 2025-06-03T11:00:21Z | 2025-06-03T11:00:20Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | mozzipa | 96,029,391 | U_kgDOBblKzw | User | false
huggingface/diffusers | 2,900,599,821 | I_kwDOHa8MBc6s46QN | 10,987 | https://github.com/huggingface/diffusers/issues/10987 | https://api.github.com/repos/huggingface/diffusers/issues/10987 | Spatio-temporal diffusion models | **Is your feature request related to a problem? Please describe.**
Including https://github.com/yyysjz1997/Awesome-TimeSeries-SpatioTemporal-Diffusion-Model/blob/main/README.md models
| open | null | false | 1 | ["stale"] | [] | 2025-03-06T14:39:11Z | 2025-04-05T15:02:42Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | moghadas76 | 23,231,913 | MDQ6VXNlcjIzMjMxOTEz | User | false
huggingface/diffusers | 2,900,609,067 | I_kwDOHa8MBc6s48gr | 10,988 | https://github.com/huggingface/diffusers/issues/10988 | https://api.github.com/repos/huggingface/diffusers/issues/10988 | WAN 2.1 T2V Unable to Quantize the model based on using QuantoConfig | ### Describe the bug
Using the WAN 2.1 Github version of text2video.py trying to quantize the model get error msg:
Message=Unknown quantization type, got QuantizationMethod.QUANTO - supported types are: ['bitsandbytes_4bit', 'bitsandbytes_8bit', 'gguf', 'torchao']
Source=C:\Users\xxxxx\source\repos\AI\runtimes\bi... | closed | completed | false | 3 | ["bug"] | [] | 2025-03-06T14:42:40Z | 2025-03-12T10:57:41Z | 2025-03-12T10:57:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ukaprch | 107,368,096 | U_kgDOBmZOoA | User | false
huggingface/diffusers | 2,900,716,934 | I_kwDOHa8MBc6s5W2G | 10,989 | https://github.com/huggingface/diffusers/issues/10989 | https://api.github.com/repos/huggingface/diffusers/issues/10989 | Flux Controlnet Lora Fails To Load When Transformers Are Quantized. | ### Describe the bug
Failed to load Flux controlnet lora when the transformer is quantized to 4bit by ```bitsandbytes```.
Maybe #10337 is a bit related to this issue.
### Reproduction
```python
import torch
from diffusers import FluxControlPipeline
pipe = FluxControlPipeline.from_pretrained("eramth/flux-4bit", torch... | closed | completed | false | 13 | ["bug", "lora"] | [] | 2025-03-06T15:23:46Z | 2025-04-08T15:47:04Z | 2025-04-08T15:47:04Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CyberVy | 72,680,847 | MDQ6VXNlcjcyNjgwODQ3 | User | false
huggingface/diffusers | 1,432,509,147 | I_kwDOHa8MBc5VYlrb | 1,099 | https://github.com/huggingface/diffusers/issues/1099 | https://api.github.com/repos/huggingface/diffusers/issues/1099 | No class name for dreambooth | ### Describe the bug
It might come from the latest update?
Traceback (most recent call last):
File "/dreambooth/infer.py", line 5, in <module>
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
File "/lib/python3.10/site-packages/diffusers/pipeline_utils.py", l... | closed | completed | false | 5 | ["bug"] | ["pcuenca"] | 2022-11-02T05:11:32Z | 2022-11-30T13:35:07Z | 2022-11-30T13:35:07Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Li-En-Good | 56,750,667 | MDQ6VXNlcjU2NzUwNjY3 | User | false
huggingface/diffusers | 2,901,169,968 | I_kwDOHa8MBc6s7Fcw | 10,992 | https://github.com/huggingface/diffusers/issues/10992 | https://api.github.com/repos/huggingface/diffusers/issues/10992 | Loading WanTransformer3DModel using torch_dtype=torch.bfloat16 keeps some parameters as float32 | ### Describe the bug
Just checking if this is the expected behavior. Calling WanTransformer3DModel.from_pretrained with argument torch_dtype=torch.bfloat16 keeps some parameters as float32.
### Reproduction
```
repo_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
transformer = WanTransformer3DModel.from_pretrained(repo_... | closed | completed | false | 5 | ["bug"] | [] | 2025-03-06T18:48:23Z | 2025-03-07T10:25:09Z | 2025-03-07T10:25:09Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | spezialspezial | 75,758,219 | MDQ6VXNlcjc1NzU4MjE5 | User | false
huggingface/diffusers | 2,901,590,294 | I_kwDOHa8MBc6s8sEW | 10,993 | https://github.com/huggingface/diffusers/issues/10993 | https://api.github.com/repos/huggingface/diffusers/issues/10993 | f-divergence | Is there a plan to implement the f-divergence scheduler? I would like to contribute that to the library. | open | null | false | 5 | ["stale"] | [] | 2025-03-06T22:46:13Z | 2025-04-06T15:02:55Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | manmeet3591 | 8,520,638 | MDQ6VXNlcjg1MjA2Mzg= | User | false
huggingface/diffusers | 2,901,730,728 | I_kwDOHa8MBc6s9OWo | 10,994 | https://github.com/huggingface/diffusers/issues/10994 | https://api.github.com/repos/huggingface/diffusers/issues/10994 | train_dreambooth_lora_flux crash on batch size greater than 1 | ### Describe the bug
As soon as launching the train_dreambooth_lora_flux.py (with accelerate) and train_batch_size greater than 1, it takes 40-50sec to load and then crashes with an error "RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1536 and 768x3072)"
File "/workspace/./train_dreambooth_lora_flux.py",... | closed | completed | false | 5 | ["bug", "stale"] | [] | 2025-03-07T00:44:13Z | 2025-07-05T21:27:36Z | 2025-07-05T21:27:36Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PluginBOXone | 26,670,577 | MDQ6VXNlcjI2NjcwNTc3 | User | false
huggingface/diffusers | 2,902,506,586 | I_kwDOHa8MBc6tALxa | 10,999 | https://github.com/huggingface/diffusers/issues/10999 | https://api.github.com/repos/huggingface/diffusers/issues/10999 | WanImageToVideoPipeline - swap out a limited number of blocks | I can fit WanImageToVideoPipeline on a 24GB card but it does scrape the ceiling and is a bit too close for comfort to OOMing at some random system event.
The kijai/ComfyUI-WanVideoWrapper has a nice option to swap a limited and user defined number of blocks out of VRAM. Can a similar thing be done right now with seque... | open | null | false | 9 | ["stale"] | [] | 2025-03-07T09:43:53Z | 2026-02-03T15:23:59Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | spezialspezial | 75,758,219 | MDQ6VXNlcjc1NzU4MjE5 | User | false
huggingface/diffusers | 1,272,010,527 | I_kwDOHa8MBc5L0Vcf | 11 | https://github.com/huggingface/diffusers/issues/11 | https://api.github.com/repos/huggingface/diffusers/issues/11 | Pros and cons of the configuration setup | Could you mention the reasons why you opted for a configuration setup that is different from transformers'?
From a previous conversation I remember it was in order to not repeat twice the arguments, however when looking at schedulers it seems like it is still the case:
https://github.com/huggingface/diffusers/blo... | closed | completed | false | 2 | [] | [] | 2022-06-15T10:21:20Z | 2022-07-21T19:08:25Z | 2022-07-21T19:08:25Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LysandreJik | 30,755,778 | MDQ6VXNlcjMwNzU1Nzc4 | User | false |
huggingface/diffusers | 1,432,704,385 | I_kwDOHa8MBc5VZVWB | 1,100 | https://github.com/huggingface/diffusers/issues/1100 | https://api.github.com/repos/huggingface/diffusers/issues/1100 | Low probability of throwing 'CLIPTextTransformer' object has no attribute 'device' exception | ### Describe the bug
When running a pipeline, there is a probability that an exception will be thrown.
```
Traceback (most recent call last):
File "/home/xxx/yyy/trainer.py", line 376, in main
image = [pipe(prompt,negative_prompt=negative_prompt,width=h,height=w,max_embeddings_multiples=3).images[0] for i ... | closed | completed | false | 7 | ["bug", "stale"] | [] | 2022-11-02T08:27:54Z | 2022-12-10T15:03:13Z | 2022-12-10T15:03:13Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chinoll | 22,575,232 | MDQ6VXNlcjIyNTc1MjMy | User | false
huggingface/diffusers | 2,902,530,542 | I_kwDOHa8MBc6tARnu | 11,000 | https://github.com/huggingface/diffusers/issues/11000 | https://api.github.com/repos/huggingface/diffusers/issues/11000 | WanImageToVideoPipeline.__call__ missing shift argument | ### Describe the bug
I was trying to compare generations and noticed `WanImageToVideoPipeline.__call__` is missing the shift argument. I can see it in the docstring so I was wondering if it could be added easily.
### Reproduction
`WanImageToVideoPipeline.__call__(shift=5.0)`
### Logs
```shell
```
### System Info... | closed | completed | false | 2 | ["bug"] | [] | 2025-03-07T09:53:49Z | 2025-03-07T10:22:30Z | 2025-03-07T10:22:30Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | spezialspezial | 75,758,219 | MDQ6VXNlcjc1NzU4MjE5 | User | false
huggingface/diffusers | 2,902,751,333 | I_kwDOHa8MBc6tBHhl | 11,002 | https://github.com/huggingface/diffusers/issues/11002 | https://api.github.com/repos/huggingface/diffusers/issues/11002 | Any chance class members like self._interrupt could be defined in __init__ across pipelines? | ### Describe the bug
I think there is no benefit to late initializing here and it puts a burden on the library user that could be easily avoided. Also leads to some confusion as it is uncommon, code inspection flags this. Let me know if I'm missing something.
### Reproduction
```
class WanImageToVideoPipeline:
def ... | open | null | false | 11 | ["bug", "help wanted", "stale", "contributions-welcome"] | [] | 2025-03-07T11:28:27Z | 2026-02-03T15:23:55Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | spezialspezial | 75,758,219 | MDQ6VXNlcjc1NzU4MjE5 | User | false
huggingface/diffusers | 2,903,127,637 | I_kwDOHa8MBc6tCjZV | 11,003 | https://github.com/huggingface/diffusers/issues/11003 | https://api.github.com/repos/huggingface/diffusers/issues/11003 | Flux Lora Unloading Issue Discussion | This issue is for discussing the behavior of ```FluxLoraLoaderMixin``` when unloading one or multiple specific LoRA like ```black-forest-labs/FLUX.1-Canny-dev-lora``` **which expands the shapes** before submitting a PR.
Related: #9325 #10989 #10397 | open | null | false | 7 | ["stale"] | [] | 2025-03-07T14:19:21Z | 2025-04-13T15:03:10Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CyberVy | 72,680,847 | MDQ6VXNlcjcyNjgwODQ3 | User | false
huggingface/diffusers | 2,903,456,045 | I_kwDOHa8MBc6tDzkt | 11,005 | https://github.com/huggingface/diffusers/issues/11005 | https://api.github.com/repos/huggingface/diffusers/issues/11005 | pipeline_wan_i2v.py: minor discrepancy between arg default and docstring | ### Describe the bug
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
Line 447 (arg default):
```output_type: Optional[str] = "np",```
Line 496 (docstring):
```output_type (`str`, *optional*, defaults to `"pil"`):```
### Reproduction
n/a
### Logs
```shell
```
#... | closed | completed | false | 2 | ["bug", "good first issue", "help wanted", "contributions-welcome"] | [] | 2025-03-07T16:37:48Z | 2025-04-24T18:49:38Z | 2025-04-24T18:49:38Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rolux | 152,646 | MDQ6VXNlcjE1MjY0Ng== | User | false
huggingface/diffusers | 2,903,553,882 | I_kwDOHa8MBc6tELda | 11,006 | https://github.com/huggingface/diffusers/issues/11006 | https://api.github.com/repos/huggingface/diffusers/issues/11006 | Broken video output with Wan 2.1 I2V pipeline + quantized transformer | ### Describe the bug
Since there is no proper documentation yet, I'm not sure if there is a difference to other video pipelines that I'm unaware of – but with the code below, the video results are reproducibly broken.
There is a warning:
`Expected types for image_encoder: (<class 'transformers.models.clip.modeling_cl... | open | null | false | 7 | ["bug", "stale"] | [] | 2025-03-07T17:25:50Z | 2025-04-17T15:03:52Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rolux | 152,646 | MDQ6VXNlcjE1MjY0Ng== | User | false
huggingface/diffusers | 2,904,465,041 | I_kwDOHa8MBc6tHp6R | 11,008 | https://github.com/huggingface/diffusers/issues/11008 | https://api.github.com/repos/huggingface/diffusers/issues/11008 | Support wan2.1 video model? | ### Did you like the remote VAE solution?
Yes.
### What can be improved about the current solution?
Wan2.1 video model support is appreciated!
### What other VAEs you would like to see if the pilot goes well?
Wan2.1 video model support is appreciated!
### Notify the members of the team
@hlky @sayakpaul | open | reopened | false | 6 | ["stale"] | [] | 2025-03-08T04:21:33Z | 2025-05-09T15:03:47Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kexul | 5,920,217 | MDQ6VXNlcjU5MjAyMTc= | User | false
huggingface/diffusers | 1,432,755,291 | I_kwDOHa8MBc5VZhxb | 1,101 | https://github.com/huggingface/diffusers/issues/1101 | https://api.github.com/repos/huggingface/diffusers/issues/1101 | Add DPM-Solver scheduler | **Is your feature request related to a problem? Please describe.**
Add the state-of-the-art training-free fast sampler DPM-Solver, which has analytic coefficients and can greatly accelerate stable-diffusion: https://github.com/LuChengTHU/dpm-solver
**Describe the solution you'd like**
Add the support for DPM-... | closed | completed | false | 2 | ["stale"] | [] | 2022-11-02T09:07:18Z | 2022-12-02T17:33:41Z | 2022-12-02T17:33:41Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LuChengTHU | 25,171,708 | MDQ6VXNlcjI1MTcxNzA4 | User | false
huggingface/diffusers | 2,904,785,400 | I_kwDOHa8MBc6tI4H4 | 11,010 | https://github.com/huggingface/diffusers/issues/11010 | https://api.github.com/repos/huggingface/diffusers/issues/11010 | Support Chroma - Flux based model with architecture changes | ### Describe the bug
I am trying to use this FLUX model
https://huggingface.co/lodestones/Chroma
Chroma is a 8.9 billion parameter rectified flow transformer capable of generating images from text descriptions. Based on FLUX.1 [schnell] with heavy architectural modifications.
GGUF version is posted here
https://huggi... | closed | completed | false | 12 | ["New pipeline/model"] | [] | 2025-03-08T13:11:35Z | 2025-06-15T21:42:58Z | 2025-06-14T01:22:57Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
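Each record above is one flattened row of the 26-column schema listed in the file metadata header. A minimal sketch of splitting such a row back into named fields — the `parse_record` helper and the sample row are hypothetical, and the naive split only works when the body cell itself contains no `" | "` delimiter:

```python
# Sketch: map one flattened pipe-separated issue record back to its schema.
# Field names follow the header of this dump; the sample row is abridged.

FIELDS = [
    "repo", "github_id", "github_node_id", "number", "html_url", "api_url",
    "title", "body", "state", "state_reason", "locked", "comments_count",
    "labels", "assignees", "created_at", "updated_at", "closed_at",
    "author_association", "milestone_title", "snapshot_id", "extracted_at",
    "author_login", "author_id", "author_node_id", "author_type",
    "author_site_admin",
]

def parse_record(line: str) -> dict:
    # Naive split on the " | " delimiter; bodies containing " | " would
    # need a proper loader (e.g. reading the original dataset directly).
    parts = [p.strip() for p in line.split(" | ")]
    return dict(zip(FIELDS, parts))

row = ('huggingface/diffusers | 123 | NODE | 1097 | url | api_url | Title | Body | '
       'closed | completed | false | 3 | ["stale"] | [] | t0 | t1 | t2 | '
       'CONTRIBUTOR | null | snap | ext | user | 42 | unode | User | false')
rec = parse_record(row)
```

In practice the labels and assignees cells are JSON arrays, so `rec["labels"]` would still need `json.loads` to become a Python list.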