| repo | github_id | github_node_id | number | html_url | api_url | title | body | state | state_reason | locked | comments_count | labels | assignees | created_at | updated_at | closed_at | author_association | milestone_title | snapshot_id | extracted_at | author_login | author_id | author_node_id | author_type | author_site_admin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 3,824,633,190 | I_kwDOHa8MBc7j90Vm | 12,990 | https://github.com/huggingface/diffusers/issues/12990 | https://api.github.com/repos/huggingface/diffusers/issues/12990 | [optimization] help us know which kernels we should integrate in Diffusers | This issue is for knowing which kernels we should integrate into the library through [`kernels`](https://github.com/huggingface/kernels/).<br>Currently, we leverage `kernels` for different attention backends (FA2, FA3, and SAGE). However, other layers can be optimized as well (RMS Norm, for example), depending on the mod... | open | null | false | 13 | ["performance"] | [] | 2026-01-17T07:41:01Z | 2026-02-24T09:08:25Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,774,178,783 | I_kwDOHa8MBc7g9WXf | 12,905 | https://github.com/huggingface/diffusers/issues/12905 | https://api.github.com/repos/huggingface/diffusers/issues/12905 | Wrong Z_Image guidance scale implementation | ### Describe the bug<br>With cfg implementation of Z_image: `pred = pos + current_guidance_scale * (pos - neg)`, the `do_classifier_free_guidance` function should be implement: `return self._guidance_scale > 0`, but currently in [diffusers code](https://github.com/huggingface/diffusers/blob/1cdb8723b85f1b427031e390e0bd0... | closed | completed | false | 10 | ["bug", "roadmap"] | [] | 2026-01-01T06:04:15Z | 2026-02-26T01:08:13Z | 2026-02-26T01:08:13Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Trgtuan10 | 119,487,916 | U_kgDOBx89rA | User | false |
huggingface/diffusers | 3,914,618,072 | I_kwDOHa8MBc7pVFTY | 13,105 | https://github.com/huggingface/diffusers/issues/13105 | https://api.github.com/repos/huggingface/diffusers/issues/13105 | Wan-AI/Wan2.1-T2V-1.3B-Diffusers output noise | ### Describe the bug<br>Script copied from https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers<br>```python<br>import torch<br>from diffusers import AutoencoderKLWan, WanPipeline<br>from diffusers.utils import export_to_video<br># Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers<br>model_id = "Wan-... | closed | completed | false | 3 | ["bug"] | [] | 2026-02-09T05:58:25Z | 2026-02-26T05:05:19Z | 2026-02-26T05:05:19Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jiqing-feng | 107,918,818 | U_kgDOBm614g | User | false |
huggingface/diffusers | 3,276,799,103 | I_kwDOHa8MBc7DT_x_ | 12,022 | https://github.com/huggingface/diffusers/issues/12022 | https://api.github.com/repos/huggingface/diffusers/issues/12022 | _flash_attention_3 in dispatch_attention_fn is not compatible with the latest flash-atten interface. | ### Describe the bug<br>[FA3] Don't return lse:<br>https://github.com/Dao-AILab/flash-attention/commit/ed209409acedbb2379f870bbd03abce31a7a51b7<br>but in the current diffuser version, it is not updated.<br>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_dispatch.py#L608<br>when use fa3 backend, di... | closed | completed | false | 8 | ["bug", "stale"] | [] | 2025-07-30T12:13:50Z | 2026-02-26T12:04:38Z | 2026-02-26T12:04:38Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hmzjwhmzjw | 23,023,261 | MDQ6VXNlcjIzMDIzMjYx | User | false |
huggingface/diffusers | 3,287,287,219 | I_kwDOHa8MBc7D8AWz | 12,053 | https://github.com/huggingface/diffusers/issues/12053 | https://api.github.com/repos/huggingface/diffusers/issues/12053 | Flux1.Dev Kohya Loras text encoder layers no more supported | Hello,
I trained a Lora with Kohya SS and I have a problem of conversion, I thought it should have been managed by your conversion script ?
```
Loading adapter weights from state_dict led to unexpected keys found in the model: single_transformer_blocks.0.proj_out.lora_A.default_0.weight, single_transformer_blocks.0.p... | closed | completed | false | 22 | [] | [] | 2025-08-03T15:47:59Z | 2026-02-27T10:13:43Z | 2026-02-27T10:13:43Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 3,344,856,231 | I_kwDOHa8MBc7HXnSn | 12,216 | https://github.com/huggingface/diffusers/issues/12216 | https://api.github.com/repos/huggingface/diffusers/issues/12216 | Qwen-Image-Edit Inferior Results Compared to ComfyUI | ### Describe the bug
I am trying to do multi-image editing with Qwen-Image-Edit (It is a simplified version of [this](https://x.com/hellorob/status/1958197227135906087)). The ComfyUI workflow and diffusers script are shared below for reproducibility. I am using the same (unquantized) models and the same parameters.
T... | open | null | false | 23 | [
"bug",
"stale"
] | [] | 2025-08-22T09:42:53Z | 2026-02-27T19:16:01Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | enesmsahin | 36,894,859 | MDQ6VXNlcjM2ODk0ODU5 | User | false |
huggingface/diffusers | 3,888,750,462 | I_kwDOHa8MBc7nyZ9- | 13,067 | https://github.com/huggingface/diffusers/issues/13067 | https://api.github.com/repos/huggingface/diffusers/issues/13067 | [Feature] Add support for Anima | **Is your feature request related to a problem? Please describe.**<br>I'd like support for the recently released Anima, it seems like a very good anime model https://huggingface.co/circlestone-labs/Anima<br>**Describe alternatives you've considered.**<br>Currently only has comfyUI support as far as im aware.<br><img width="1024"... | open | null | false | 1 | [] | [] | 2026-02-03T02:55:55Z | 2026-03-01T05:10:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Metal079 | 7,731,400 | MDQ6VXNlcjc3MzE0MDA= | User | false |
huggingface/diffusers | 3,858,999,967 | I_kwDOHa8MBc7mA6qf | 13,035 | https://github.com/huggingface/diffusers/issues/13035 | https://api.github.com/repos/huggingface/diffusers/issues/13035 | cannot import name 'MT5Tokenizer' from 'transformers' | ### Describe the bug<br>When using diffusers with transformers, it shows this bug:<br>E RuntimeError: Failed to import diffusers.pipelines.auto_pipeline because of the following error (look up to see its traceback):<br>E Failed to import diffusers.pipelines.hunyuandit.pipeline_hunyuandit because of the following error (lo... | closed | completed | false | 7 | ["bug"] | [] | 2026-01-27T06:12:34Z | 2026-03-04T17:29:27Z | 2026-03-04T17:29:27Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | n1ck-guo | 110,074,967 | U_kgDOBo-cVw | User | false |
huggingface/diffusers | 3,937,632,567 | I_kwDOHa8MBc7qs4E3 | 13,137 | https://github.com/huggingface/diffusers/issues/13137 | https://api.github.com/repos/huggingface/diffusers/issues/13137 | Flux 2 Klein load lora weights LoKr error | ### Describe the bug<br>`ValueError: `original_state_dict` should be empty at this point but has original_state_dict.keys()=dict_keys(['double_blocks.0.img_attn.proj.alpha', 'double_blocks.0.img_attn.proj.lokr_w1', 'double_blocks.0.img_attn.proj.lokr_w2', 'double_blocks.0.img_attn.qkv.alpha', 'double_blocks.0.img_attn.qk... | closed | completed | false | 2 | ["bug"] | [] | 2026-02-13T15:07:16Z | 2026-03-04T18:03:25Z | 2026-03-04T18:03:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yodo226 | 20,930,076 | MDQ6VXNlcjIwOTMwMDc2 | User | false |
huggingface/diffusers | 4,012,951,239 | I_kwDOHa8MBc7vMMbH | 13,203 | https://github.com/huggingface/diffusers/issues/13203 | https://api.github.com/repos/huggingface/diffusers/issues/13203 | Zimage lora support issue | ### Describe the bug<br>Error message:<br>```<br>lora_conversion_utils.py", line 2566, in _convert_non_diffusers_z_image_lora_to_diffusers<br>raise ValueError(f"`state_dict` should be empty at this point but has {state_dict.keys()=}")<br>ValueError: `state_dict` should be empty at this point but has state_dict.keys()=dict_keys(... | closed | completed | false | 2 | ["bug"] | [] | 2026-03-02T19:39:53Z | 2026-03-05T02:54:21Z | 2026-03-05T02:54:21Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 3,791,542,713 | I_kwDOHa8MBc7h_lm5 | 12,926 | https://github.com/huggingface/diffusers/issues/12926 | https://api.github.com/repos/huggingface/diffusers/issues/12926 | LTX-2 condition pipeline | https://github.com/huggingface/diffusers/pull/12915 added support for LTX-2.
## What's supported?
* Single stage T2V and I2V
* Upsampling
Similar to the [LTX condition pipeline](https://huggingface.co/docs/diffusers/main/api/pipelines/ltx_video#diffusers.LTXConditionPipeline), it would be nice to have support for so... | closed | completed | false | 15 | [
"contributions-welcome",
"diffusers-mvp"
] | [] | 2026-01-08T06:36:38Z | 2026-03-05T08:42:56Z | 2026-03-05T08:42:56Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,829,412,673 | I_kwDOHa8MBc7kQDNB | 13,000 | https://github.com/huggingface/diffusers/issues/13000 | https://api.github.com/repos/huggingface/diffusers/issues/13000 | RAE support | Can we have support for Representation Autoencoders in diffusers? Since RAE already uses hf load for encoders, it will be much simpler if we can directly from_pretrained the whole autoencoder model.<br>Link: https://github.com/bytetriper/RAE<br>Paper: https://arxiv.org/abs/2510.11690 | closed | completed | false | 14 | ["contributions-welcome"] | [] | 2026-01-19T12:05:11Z | 2026-03-05T14:47:15Z | 2026-03-05T14:47:15Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bytetriper | 53,568,159 | MDQ6VXNlcjUzNTY4MTU5 | User | false |
huggingface/diffusers | 3,999,985,925 | I_kwDOHa8MBc7uavEF | 13,191 | https://github.com/huggingface/diffusers/issues/13191 | https://api.github.com/repos/huggingface/diffusers/issues/13191 | Elastic-DiT support | ### Model/Pipeline/Scheduler description<br>Elastic-DiT was released a few hours ago: https://github.com/wangjiangshan0725/Elastic-DiT<br>It's supposed to greatly accelerate (~2x speed) the diffusion process of 2D image generators like qwen image and Flux with little impact to the quality of the output. This project was le... | open | null | false | 2 | [] | [] | 2026-02-27T09:09:17Z | 2026-03-07T09:22:49Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | geekuillaume | 1,301,701 | MDQ6VXNlcjEzMDE3MDE= | User | false |
huggingface/diffusers | 4,040,515,279 | I_kwDOHa8MBc7w1V7P | 13,221 | https://github.com/huggingface/diffusers/issues/13221 | https://api.github.com/repos/huggingface/diffusers/issues/13221 | Zimage lora support issue too | ### Describe the bug<br>diffusers\loaders\lora_pipeline.py", line 5475, in lora_state_dict<br>state_dict = _convert_non_diffusers_z_image_lora_to_diffusers(state_dict)<br>File "C:\Users\whr_u\anaconda3\envs\DBtrain\lib\site-packages\diffusers\loaders\lora_conversion_utils.py", line 2628, in _convert_non_diffusers_z_image... | open | null | false | 0 | ["bug"] | [] | 2026-03-08T05:05:05Z | 2026-03-08T05:06:28Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhaoyun0071 | 35,762,050 | MDQ6VXNlcjM1NzYyMDUw | User | false |
huggingface/diffusers | 4,040,922,766 | I_kwDOHa8MBc7w25aO | 13,224 | https://github.com/huggingface/diffusers/issues/13224 | https://api.github.com/repos/huggingface/diffusers/issues/13224 | Regarding support for Qwen‑Image‑Layered‑Control | Do you have any plans to provide support for [Qwen‑Image‑Layered‑Control](https://huggingface.co/DiffSynth-Studio/Qwen-Image-Layered-Control)? | open | null | false | 0 | [] | [] | 2026-03-08T09:21:24Z | 2026-03-08T09:21:24Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | suzukimain | 131,413,573 | U_kgDOB9U2RQ | User | false |
huggingface/diffusers | 4,041,719,239 | I_kwDOHa8MBc7w573H | 13,225 | https://github.com/huggingface/diffusers/issues/13225 | https://api.github.com/repos/huggingface/diffusers/issues/13225 | RAE+DiT support for published checkpoints | **Is your feature request related to a problem? Please describe.**<br>`diffusers` has `AutoencoderRAE` support via #13046. There is still no native latent generator path for the published RAE checkpoints. I've been following the research and was excited to see the RAE support; this comes next and would be exciting to have... | open | null | false | 1 | [] | [] | 2026-03-08T17:03:35Z | 2026-03-08T17:15:53Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | plugyawn | 76,529,011 | MDQ6VXNlcjc2NTI5MDEx | User | false |
huggingface/diffusers | 4,042,324,989 | I_kwDOHa8MBc7w8Pv9 | 13,227 | https://github.com/huggingface/diffusers/issues/13227 | https://api.github.com/repos/huggingface/diffusers/issues/13227 | [Bug] GlmImagePipeline silently corrupts weights on MPS accelerator | ### Describe the bug<br>When loading `zai-org/GLM-Image` with `device_map="mps"` in diffusers, some model parameters become silently corrupted during `GlmImagePipeline.from_pretrained` call.<br>The corruption:<br>```<br>Happens only when tensors are placed directly on MPS during loading<br>Is non-deterministic across dtypes<br>```<br>* f... | open | null | false | 0 | ["bug"] | [] | 2026-03-08T22:27:27Z | 2026-03-08T22:31:31Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yingding | 1,073,701 | MDQ6VXNlcjEwNzM3MDE= | User | false |
huggingface/diffusers | 4,043,639,379 | I_kwDOHa8MBc7xBQpT | 13,232 | https://github.com/huggingface/diffusers/issues/13232 | https://api.github.com/repos/huggingface/diffusers/issues/13232 | LTX-2.3 Support | ### Model/Pipeline/Scheduler description<br>New model LTX-2.3 models have been released recently. Maybe weight conversion is needed for it<br>HF repo: https://huggingface.co/Lightricks/LTX-2.3<br>I could support on this<br>### Open source status<br>- [x] The model implementation is available.<br>- [x] The model weights are availabl... | open | null | false | 1 | [] | [] | 2026-03-09T06:29:10Z | 2026-03-09T08:48:02Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rootonchair | 23,548,268 | MDQ6VXNlcjIzNTQ4MjY4 | User | false |
huggingface/diffusers | 3,864,231,967 | I_kwDOHa8MBc7mU4Af | 13,043 | https://github.com/huggingface/diffusers/issues/13043 | https://api.github.com/repos/huggingface/diffusers/issues/13043 | [Bug] `no kernel image is available for execution on the device` when using sage_hub attention backend on RTX 5090 (Blackwell, sm_120) | ### Describe the bug<br>When enabling the experimental sage_hub attention backend on RTX 5090 (Blackwell architecture, compute capability 12.0) with PyTorch 2.8 + CUDA 12.9, inference fails with CUDA kernel compatibility error:<br>```bash<br>Error no kernel image is available for execution on the device at line 73 in file /src... | open | null | false | 6 | ["bug"] | ["sayakpaul"] | 2026-01-28T08:35:07Z | 2026-03-09T10:00:37Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | timeuser4 | 167,759,220 | U_kgDOCf_NdA | User | false |
huggingface/diffusers | 4,000,607,675 | I_kwDOHa8MBc7udG27 | 13,196 | https://github.com/huggingface/diffusers/issues/13196 | https://api.github.com/repos/huggingface/diffusers/issues/13196 | Quantization documentation runnable in Colab | **Is your feature request related to a problem? Please describe.**<br>Hi there, I was glad to find [this Quantization example in your Quickstart](https://huggingface.co/docs/diffusers/en/quicktour#quantization), promising to run Qwen-Image using a little under 15GB, which sounds like it could just about work in Colab Fre... | closed | completed | false | 4 | [] | [] | 2026-02-27T11:37:48Z | 2026-03-09T10:45:41Z | 2026-03-09T10:45:41Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jchwenger | 34,098,722 | MDQ6VXNlcjM0MDk4NzIy | User | false |
huggingface/diffusers | 3,556,745,931 | I_kwDOHa8MBc7T_6LL | 12,550 | https://github.com/huggingface/diffusers/issues/12550 | https://api.github.com/repos/huggingface/diffusers/issues/12550 | Fal Flashpack | **Is your feature request related to a problem? Please describe.**<br>Safetensors loading seems it could be faster.<br>**Describe the solution you'd like.**<br>Recently fal has open sourced a new loading method for faster DIT and TE loading.<br>**Describe alternatives you've considered.**<br>[A clear and concise description of any... | open | null | false | 11 | [] | [] | 2025-10-27T13:17:16Z | 2026-03-09T11:56:19Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vgabbo | 105,066,170 | U_kgDOBkMuug | User | false |
huggingface/diffusers | 3,881,178,887 | I_kwDOHa8MBc7nVhcH | 13,061 | https://github.com/huggingface/diffusers/issues/13061 | https://api.github.com/repos/huggingface/diffusers/issues/13061 | microsoft/vq-diffusion-ithq `variant='fp16'` is not in the main branch | FutureWarning: You are loading the variant fp16 from microsoft/vq-diffusion-ithq via `revision='fp16'`. This behavior is deprecated and will be removed in diffusers v1. One should use `variant='fp16'` instead. However, it appears that microsoft/vq-diffusion-ithq currently does not have the required variant filenames in... | closed | completed | false | 0 | [] | [] | 2026-02-01T04:41:07Z | 2026-03-09T11:58:56Z | 2026-03-09T11:58:56Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | GabbySuwichaya | 49,115,268 | MDQ6VXNlcjQ5MTE1MjY4 | User | false |
huggingface/diffusers | 3,585,690,814 | I_kwDOHa8MBc7VuUy- | 12,589 | https://github.com/huggingface/diffusers/issues/12589 | https://api.github.com/repos/huggingface/diffusers/issues/12589 | [feature] implement TeaCache | Currently, we support PAB and FasterCache. More details: https://huggingface.co/docs/diffusers/main/en/optimization/cache.<br>It'd be cool to support [Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model](https://liewfeng.github.io/TeaCache/) as it's quite popular in the community. Code is here:<br>https:... | open | null | false | 11 | ["contributions-welcome", "performance", "roadmap"] | [] | 2025-11-04T09:46:26Z | 2026-03-09T12:08:06Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,543,521,911 | I_kwDOHa8MBc7TNdp3 | 12,533 | https://github.com/huggingface/diffusers/issues/12533 | https://api.github.com/repos/huggingface/diffusers/issues/12533 | Hooks conflicts: Context Parallelism and CPU Offload | ### Describe the bug<br>Enable cpu offload before enabling parallelism will raise shape error after first pipe call. It seems a bug of diffusers that cpu offload is not fully compatible with context parallelism, visa versa.<br>- cpu offload `before` context parallelism (not work)<br>```python<br>pipe.enable_model_cpu_offload(d... | open | null | false | 5 | ["bug", "roadmap", "context-parallel"] | ["yiyixuxu"] | 2025-10-23T07:33:37Z | 2026-03-10T03:36:33Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DefTruth | 31,974,251 | MDQ6VXNlcjMxOTc0MjUx | User | false |
huggingface/diffusers | 4,050,586,905 | I_kwDOHa8MBc7xbw0Z | 13,243 | https://github.com/huggingface/diffusers/issues/13243 | https://api.github.com/repos/huggingface/diffusers/issues/13243 | [BUG] FlowMatchEulerDiscreteScheduler.__init__ computes sigma_min/sigma_max after shift, causing duplicate shift in set_timesteps | ### Describe the bug<br>There is an initialization order issue in the FlowMatchEulerDiscreteScheduler class that causes set_timesteps to apply timestep shifting twice, resulting in different sigma values for the same timestep settings.<br>https://github.com/huggingface/diffusers/blob/07a63e197e10860a470576cf4f610381b31a4dd7... | open | null | false | 0 | ["bug"] | [] | 2026-03-10T09:40:56Z | 2026-03-10T09:40:56Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Xilluill | 115,821,678 | U_kgDOBudMbg | User | false |
huggingface/diffusers | 3,886,617,488 | I_kwDOHa8MBc7nqROQ | 13,065 | https://github.com/huggingface/diffusers/issues/13065 | https://api.github.com/repos/huggingface/diffusers/issues/13065 | Qwen-Image controlnet inpaint + controlnet canny | **Is your feature request related to a problem? Please describe.**<br>Is it possible to load controlnet-canny and controlnet-inpaint in QWEN-image at the same time? like multi-controlnet<br>**Describe the solution you'd like.**<br>**Describe alternatives you've considered.**<br>**Additional context.** | open | null | false | 1 | [] | [] | 2026-02-02T15:40:02Z | 2026-03-10T21:58:23Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | xinli2008 | 71,311,087 | MDQ6VXNlcjcxMzExMDg3 | User | false |
huggingface/diffusers | 4,049,212,629 | I_kwDOHa8MBc7xWhTV | 13,241 | https://github.com/huggingface/diffusers/issues/13241 | https://api.github.com/repos/huggingface/diffusers/issues/13241 | [Chore] Set Diffusers 0.37 as latest release | Diffusers 0.36 is still showing up as the latest release on Github. This should probably be set to the most recent version, 0.37. You folks deserve to be able to strut sitting down, after all!<br><img width="372" height="233" alt="Image" src="https://github.com/user-attachments/assets/978b596d-c435-402e-a150-6a3fd62e7620... | closed | completed | false | 1 | [] | [] | 2026-03-10T03:49:45Z | 2026-03-11T04:01:55Z | 2026-03-11T04:01:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | iwr-redmond | 142,086,261 | U_kgDOCHgQdQ | User | false |
huggingface/diffusers | 3,913,552,005 | I_kwDOHa8MBc7pRBCF | 13,104 | https://github.com/huggingface/diffusers/issues/13104 | https://api.github.com/repos/huggingface/diffusers/issues/13104 | Logger is not defined | ### Describe the bug<br>`diffusers.quantizers.pipe_quant_config` references an undefined logger instance.<br>### Reproduction<br>Install torch, torchvision, and torchao nightly, then import `PipelineQuantizationConfig, TorchAoConfig` from `diffusers`.<br>### Logs<br>```shell<br>Traceback (most recent call last):<br>File ".venv/lib/p... | closed | completed | false | 8 | ["bug"] | [] | 2026-02-08T20:57:13Z | 2026-03-11T04:45:31Z | 2026-03-11T04:45:30Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | teleprint-me | 77,757,836 | MDQ6VXNlcjc3NzU3ODM2 | User | false |
huggingface/diffusers | 3,755,520,700 | I_kwDOHa8MBc7f2LK8 | 12,880 | https://github.com/huggingface/diffusers/issues/12880 | https://api.github.com/repos/huggingface/diffusers/issues/12880 | Allow users to pass in a custom device_mesh to the enable_parallelism method in the diffusers library | `def enable_parallelism(<br>self,<br>*,<br>config: Union[ParallelConfig, ContextParallelConfig],<br>cp_plan: Optional[Dict[str, ContextParallelModelPlan]] = None,<br>mesh: Optional[DeviceMesh] = None, # Add this parameter<br>):`<br>If users have already initialized a device mesh for other parallelism strategies (FSDP... | closed | completed | false | 8 | ["roadmap"] | [] | 2025-12-22T23:16:56Z | 2026-03-11T11:12:52Z | 2026-03-11T11:12:52Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pthombre | 204,920,687 | U_kgDODDbXbw | User | false |
huggingface/diffusers | 4,057,882,195 | I_kwDOHa8MBc7x3l5T | 13,256 | https://github.com/huggingface/diffusers/issues/13256 | https://api.github.com/repos/huggingface/diffusers/issues/13256 | diffusers,hugging-face and tokenizers 0.14.1 ,they have a compatiable version | ### Describe the bug<br>I downloaded them by pip ,and their version :huggingface-hub 0.17.3 diffusers 0.21.4<br>tokenizers 0.14.1 .<br>but when i run them, there always has an error :<br>Traceback (most recent call last):<br>File "/root/sj-tmp/migrate/echomimic-main/echomimic_api/app.py", line 34,... | open | null | false | 0 | ["bug"] | [] | 2026-03-11T12:14:46Z | 2026-03-11T12:14:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | coderXEi | 49,114,210 | MDQ6VXNlcjQ5MTE0MjEw | User | false |
huggingface/diffusers | 4,059,796,060 | I_kwDOHa8MBc7x-5Jc | 13,258 | https://github.com/huggingface/diffusers/issues/13258 | https://api.github.com/repos/huggingface/diffusers/issues/13258 | Wan-AI/Wan2.2-TI2V-5B-Diffusers Image to Video Missing | ### Describe the bug<br>I cannot make it that the 5B version takes in an image and generate a video out of it<br>```<br>import torch<br>import numpy as np<br>from diffusers import WanPipeline, AutoencoderKLWan, WanTransformer3DModel, UniPCMultistepScheduler<br>from diffusers.utils import export_to_video, load_image<br>dtype = torch.bflo... | open | null | false | 1 | ["bug"] | [] | 2026-03-11T18:06:28Z | 2026-03-11T18:15:10Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | james-imi | 149,590,311 | U_kgDOCOqRJw | User | false |
huggingface/diffusers | 4,053,689,271 | I_kwDOHa8MBc7xnmO3 | 13,249 | https://github.com/huggingface/diffusers/issues/13249 | https://api.github.com/repos/huggingface/diffusers/issues/13249 | Lora Stopped working for Z-Image | ### Describe the bug<br>all of a sudden im getting errors on using a lora with z-image pipeline. its the same pipeline i been using with the same loras...<br>### Reproduction<br>```<br>from diffusers import FlowMatchEulerDiscreteScheduler, ZImagePipeline<br>import torch<br>model_name = "dimitribarbot/Z-Image-Turbo-BF16"<br>lora_r... | closed | completed | false | 3 | ["bug"] | [] | 2026-03-10T19:11:43Z | 2026-03-13T01:28:55Z | 2026-03-13T01:28:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AlpineVibrations | 10,145,679 | MDQ6VXNlcjEwMTQ1Njc5 | User | false |
huggingface/diffusers | 3,269,498,493 | I_kwDOHa8MBc7C4JZ9 | 12,003 | https://github.com/huggingface/diffusers/issues/12003 | https://api.github.com/repos/huggingface/diffusers/issues/12003 | Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device. | I tried to load `Flux_Turbo_Alpha` into FluxKontextPipeline by this code, and save_pretrained to local.<br>```<br>import torch<br>from diffusers import FluxKontextPipeline<br>from diffusers.utils import load_image<br>pipe = FluxKontextPipeline.from_pretrained(<br>"black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16<br>)... | open | null | false | 9 | ["stale"] | ["sayakpaul"] | 2025-07-28T11:27:24Z | 2026-03-14T05:20:21Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | MinhVD-ZenAI | 191,195,078 | U_kgDOC2Vnxg | User | false |
huggingface/diffusers | 3,979,697,423 | I_kwDOHa8MBc7tNV0P | 13,177 | https://github.com/huggingface/diffusers/issues/13177 | https://api.github.com/repos/huggingface/diffusers/issues/13177 | Flux2KleinPipeline doesn't accept tensor format as input image | **Is your feature request related to a problem? Please describe.**<br>I would like to input a tensor in the Flux2KleinPipeline image argument. Only `image: list[PIL.Image.Image] \| PIL.Image.Image \| None = None` is accepted although in the comment section below the arguments it states that:<br>`image (`torch.Tensor`, `PIL.... | open | null | false | 1 | [] | [] | 2026-02-23T19:02:57Z | 2026-03-14T12:01:02Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | joansc | 17,720,862 | MDQ6VXNlcjE3NzIwODYy | User | false |
huggingface/diffusers | 3,346,692,898 | I_kwDOHa8MBc7Hensi | 12,221 | https://github.com/huggingface/diffusers/issues/12221 | https://api.github.com/repos/huggingface/diffusers/issues/12221 | [Looking for community contribution] support DiffSynth Controlnet in diffusers | ### Model/Pipeline/Scheduler description<br>Hi!<br>We want to add first party support for DiffSynth controlnet in diffusers, and we are looking for some help from the community!<br>Let me know if you're interested!<br>### Open source status<br>- [x] The model implementation is available.<br>- [x] The model weights are available (... | open | null | false | 8 | ["help wanted", "Good second issue", "contributions-welcome"] | [] | 2025-08-22T20:49:18Z | 2026-03-15T05:54:47Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yiyixuxu | 12,631,849 | MDQ6VXNlcjEyNjMxODQ5 | User | false |
huggingface/diffusers | 4,083,695,173 | I_kwDOHa8MBc7zaD5F | 13,274 | https://github.com/huggingface/diffusers/issues/13274 | https://api.github.com/repos/huggingface/diffusers/issues/13274 | [Bug][AMD] `stabilityai/stable-audio-open-1.0` encounter `RecursionError: maximum recursion depth exceeded` | ### Describe the bug<br>Running the diffuser example from the model card https://huggingface.co/stabilityai/stable-audio-open-1.0<br>```<br>import torch<br>import soundfile as sf<br>from diffusers import StableAudioPipeline<br>pipe = StableAudioPipeline.from_pretrained("stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16)... | open | null | false | 0 | ["bug"] | [] | 2026-03-16T17:03:56Z | 2026-03-16T17:03:56Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tjtanaa | 29,171,856 | MDQ6VXNlcjI5MTcxODU2 | User | false |
huggingface/diffusers | 4,076,636,492 | I_kwDOHa8MBc7y_IlM | 13,266 | https://github.com/huggingface/diffusers/issues/13266 | https://api.github.com/repos/huggingface/diffusers/issues/13266 | Torchao SD3 int8wo | ### Describe the bug<br>Exception when click Run, SD3 quantization<br>### Reproduction<br>- Mellon + ModularDiffuser setup (https://github.com/cubiq/Mellon/blob/main/modules/ModularDiffusers/README.md)<br>- uv pip install -U torch torchao<br>- python main.py<br>SD3 Text Encoder Loader: Dtype bfloat16, Quantization: TorchAO, Quant Ty... | open | null | false | 2 | ["bug"] | [] | 2026-03-14T20:14:14Z | 2026-03-17T20:14:36Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | WasamiKirua | 122,620,587 | U_kgDOB08Kqw | User | false |
huggingface/diffusers | 4,093,007,279 | I_kwDOHa8MBc7z9lWv | 13,281 | https://github.com/huggingface/diffusers/issues/13281 | https://api.github.com/repos/huggingface/diffusers/issues/13281 | Group offloading with use_stream=True breaks torchao quantized models (device mismatch) in qwen image | ### Describe the bug<br>When combining torchao quantization (TorchAoConfig with Float8WeightOnlyConfig) and group offloading with use_stream=True, inference fails with a device mismatch error. The quantized weight remains on CPU while the input tensor is on CUDA.<br>### Reproduction<br>```python<br>import torch<br>from diffusers i... | open | null | false | 0 | ["bug"] | [] | 2026-03-18T06:16:00Z | 2026-03-18T06:16:00Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhangvia | 38,352,569 | MDQ6VXNlcjM4MzUyNTY5 | User | false |
huggingface/diffusers | 4,094,623,844 | I_kwDOHa8MBc70DwBk | 13,284 | https://github.com/huggingface/diffusers/issues/13284 | https://api.github.com/repos/huggingface/diffusers/issues/13284 | AttentionModuleMixin.set_attention_backend does not download hub kernels | ### Describe the bug<br>Hellooo :),<br>I believe there is a bug with per-module attention backend setting.<br>Currently, `set_attention_backend()` works correctly when called on a top-level model (e.g. `pipe.transformer.set_attention_backend("sage_hub")`), but fails silently when called on individual attention submodules. Th... | open | null | false | 0 | ["bug"] | [] | 2026-03-18T11:51:03Z | 2026-03-18T12:09:20Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Marius-Graml | 85,825,232 | MDQ6VXNlcjg1ODI1MjMy | User | false |
huggingface/diffusers | 4,089,569,357 | I_kwDOHa8MBc7zweBN | 13,279 | https://github.com/huggingface/diffusers/issues/13279 | https://api.github.com/repos/huggingface/diffusers/issues/13279 | `AutoencoderRAE` loading error with older transformers | ### Describe the bug
Hey,
I am trying to use `AutoencoderRAE`; unfortunately I cannot load it, and it breaks with the following error:
```shell
File ".venv/lib/python3.13/site-packages/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py", line 529, in _init_weights
).to(module... | open | null | false | 3 | [
"bug"
] | [
"kashif"
] | 2026-03-17T16:10:16Z | 2026-03-18T13:00:52Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | arijit-hub | 66,744,342 | MDQ6VXNlcjY2NzQ0MzQy | User | false |
huggingface/diffusers | 4,099,557,440 | I_kwDOHa8MBc70WkhA | 13,286 | https://github.com/huggingface/diffusers/issues/13286 | https://api.github.com/repos/huggingface/diffusers/issues/13286 | torchao >= 0.16.0 quantization not supported | ### Describe the bug
The sample code below (taken from https://huggingface.co/blog/lora-fast) does not work because torchao renamed the APIs, listed as a breaking change (with a deprecation warning) in 0.15.0 and above, as per the release notes:
https://github.com/pytorch/ao/releases/tag/v0.15.0
Before:
```
from... | open | null | false | 0 | [
"bug"
] | [] | 2026-03-19T06:09:21Z | 2026-03-19T06:09:21Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zzlol63 | 241,569,438 | U_kgDODmYOng | User | false |
huggingface/diffusers | 4,099,995,498 | I_kwDOHa8MBc70YPdq | 13,288 | https://github.com/huggingface/diffusers/issues/13288 | https://api.github.com/repos/huggingface/diffusers/issues/13288 | vbench | null | closed | completed | false | 0 | [] | [] | 2026-03-19T08:00:47Z | 2026-03-19T08:01:00Z | 2026-03-19T08:01:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | 3a1b2c3 | 74,843,139 | MDQ6VXNlcjc0ODQzMTM5 | User | false |
huggingface/diffusers | 4,113,672,409 | I_kwDOHa8MBc71MajZ | 13,301 | https://github.com/huggingface/diffusers/issues/13301 | https://api.github.com/repos/huggingface/diffusers/issues/13301 | Modular Pipeline: support for PixArtAlphaPipeline | ### Model/Pipeline/Scheduler description
I would like to implement a modular version of the PixArt pipeline family, as discussed in [#13295](https://github.com/huggingface/diffusers/issues/13295#issuecomment-4103395537).
The initial scope includes the `PixArtAlphaPipeline` under [src/diffusers/pipelines/pixart_alpha/... | open | null | false | 0 | [] | [] | 2026-03-21T20:02:38Z | 2026-03-21T20:02:38Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | gasparyanartur | 27,823,679 | MDQ6VXNlcjI3ODIzNjc5 | User | false |
huggingface/diffusers | 4,111,960,112 | I_kwDOHa8MBc71F4gw | 13,298 | https://github.com/huggingface/diffusers/issues/13298 | https://api.github.com/repos/huggingface/diffusers/issues/13298 | When generating images, if the generator device is on cuda, it break things | ### Describe the bug
This bug took me a while to realize, but when generating images with Flux Schnell, if the random generator device is on cuda, it creates weird, blurry, and noisy images.
### Reproduction
```
"""
Reproduce: same seed, same model, same prompt — cpu vs cuda generator produces different images.
"""
... | open | null | false | 2 | [
"bug"
] | [] | 2026-03-21T09:48:01Z | 2026-03-21T20:03:44Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | weathon | 41,298,844 | MDQ6VXNlcjQxMjk4ODQ0 | User | false |
huggingface/diffusers | 2,578,713,715 | I_kwDOHa8MBc6ZtAxz | 9,635 | https://github.com/huggingface/diffusers/issues/9635 | https://api.github.com/repos/huggingface/diffusers/issues/9635 | [Flux ControlNet] Add support for de-distilled models with CFG | Flux with Inpainting and ControlNet currently yields bad results with the base model.
To echo [this comment](https://github.com/huggingface/diffusers/pull/9571#issuecomment-2404857656), using de-distilled models could potentially help getting better outputs.
Currently the Flux ControlNet pipelines do not support mo... | open | null | false | 11 | [
"good first issue",
"contributions-welcome"
] | [] | 2024-10-10T12:33:01Z | 2026-03-22T06:15:45Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | simbrams | 25,414,628 | MDQ6VXNlcjI1NDE0NjI4 | User | false |
huggingface/diffusers | 3,763,732,711 | I_kwDOHa8MBc7gVgDn | 12,893 | https://github.com/huggingface/diffusers/issues/12893 | https://api.github.com/repos/huggingface/diffusers/issues/12893 | Z-Image text sequence length issue | ### Describe the bug
I think there might be an issue with calculating the sequence length of `cap_feat` (which is the text encoder output), and masking it accordingly.
I'm going to use code links from *before* the omni commit, because it's easier to read - but the issue seems to exist in both, before and after the omni ...
"bug"
] | [] | 2025-12-26T16:52:11Z | 2026-03-24T17:29:21Z | 2026-03-24T17:29:21Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dxqb | 183,307,934 | U_kgDOCu0Ong | User | false |
huggingface/diffusers | 4,116,915,242 | I_kwDOHa8MBc71YyQq | 13,310 | https://github.com/huggingface/diffusers/issues/13310 | https://api.github.com/repos/huggingface/diffusers/issues/13310 | YIh | a picture in the nanvyu style | closed | completed | false | 0 | [] | [] | 2026-03-22T19:36:57Z | 2026-03-24T22:56:02Z | 2026-03-24T22:56:02Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nanvyu | 185,514,026 | U_kgDOCw64Kg | User | false |
huggingface/diffusers | 4,086,888,339 | I_kwDOHa8MBc7zmPeT | 13,277 | https://github.com/huggingface/diffusers/issues/13277 | https://api.github.com/repos/huggingface/diffusers/issues/13277 | QwenImagePipeline failed with Ulysses SP and batch inputs | ### Describe the bug
QwenImagePipeline cannot run with Ulysses SP together with batch prompt inputs. It is related to the mask not being correctly broadcast.
### Reproduction
With the following code snippets
```python
import torch
import torch.distributed as dist
import argparse
import os
from diffusers import QwenIm... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-03-17T08:08:13Z | 2026-03-25T01:54:19Z | 2026-03-25T01:54:19Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhtmike | 8,342,575 | MDQ6VXNlcjgzNDI1NzU= | User | false |
huggingface/diffusers | 4,118,099,639 | I_kwDOHa8MBc71dTa3 | 13,311 | https://github.com/huggingface/diffusers/issues/13311 | https://api.github.com/repos/huggingface/diffusers/issues/13311 | black-forest-labs/FLUX.2-klein-9B lora No LoRA keys associated to Flux2Transformer2DModel found with the prefix='transformer' | ### Describe the bug
No LoRA keys associated to Flux2Transformer2DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any Flux2Transformer2DModel related params. You can also try specifying `prefix=None` to resolve the warning. Otherwise, open an issue if you thi... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-03-23T04:12:32Z | 2026-03-25T02:21:36Z | 2026-03-25T02:21:36Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chaowenguo | 229,142,208 | U_kgDODahuwA | User | false |
huggingface/diffusers | 4,062,471,274 | I_kwDOHa8MBc7yJGRq | 13,261 | https://github.com/huggingface/diffusers/issues/13261 | https://api.github.com/repos/huggingface/diffusers/issues/13261 | black-forest-labs/FLUX.2-klein-9B not working with lora with lokr | ### Describe the bug
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/aiohttp/web_protocol.py", line 510, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/aiohttp/web_app.py", line 56... | open | null | false | 35 | [
"bug"
] | [
"sayakpaul"
] | 2026-03-12T06:18:50Z | 2026-03-25T10:08:38Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chaowenguo | 229,142,208 | U_kgDODahuwA | User | false |
huggingface/diffusers | 4,145,838,015 | I_kwDOHa8MBc73HHe_ | 13,351 | https://github.com/huggingface/diffusers/issues/13351 | https://api.github.com/repos/huggingface/diffusers/issues/13351 | Wan 2.2 Fun Control & Flux.2 Dev Fun ControlNet | ### Model/Pipeline/Scheduler description
It would be very nice if the diffusers could be expanded to support ControlNet models like [Flux.2 Dev Fun ControlNet](https://huggingface.co/alibaba-pai/FLUX.2-dev-Fun-Controlnet-Union) or [Wan 2.2 Fun Control](https://huggingface.co/alibaba-pai/Wan2.2-Fun-A14B-Control).
Thank... | open | null | false | 0 | [] | [] | 2026-03-26T17:13:17Z | 2026-03-26T17:13:17Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | adrianmeyerart | 45,898,930 | MDQ6VXNlcjQ1ODk4OTMw | User | false |
huggingface/diffusers | 4,159,161,806 | I_kwDOHa8MBc7358XO | 13,357 | https://github.com/huggingface/diffusers/issues/13357 | https://api.github.com/repos/huggingface/diffusers/issues/13357 | KeyError: 'default' for discrete diffusion language model LLaDA2 | ### Describe the bug
Hi,
The following code block from the documentation (https://huggingface.co/docs/diffusers/main/api/pipelines/llada2#diffusers.LLaDA2PipelineOutput) is giving a KeyError:
```
model_id = "inclusionAI/LLaDA2.1-mini"
model = AutoModelForCausalLM.from_pretrained(
model_id, trust_remote_code=Tru... | open | null | false | 2 | [
"bug"
] | [] | 2026-03-28T10:20:42Z | 2026-03-28T23:07:06Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ksasi | 7,089,391 | MDQ6VXNlcjcwODkzOTE= | User | false |
huggingface/diffusers | 4,161,986,180 | I_kwDOHa8MBc74Et6E | 13,361 | https://github.com/huggingface/diffusers/issues/13361 | https://api.github.com/repos/huggingface/diffusers/issues/13361 | Enable MPS backend for bitsandbytes quantization | **Is your feature request related to a problem? Please describe.**
Bitsandbytes now has basic support for the Apple MPS backend, as I can tell by https://github.com/bitsandbytes-foundation/bitsandbytes/pull/1818 and
https://github.com/bitsandbytes-foundation/bitsandbytes/pull/1875.
The issue is that diffusers does n... | open | null | false | 0 | [] | [] | 2026-03-28T23:48:15Z | 2026-03-28T23:48:15Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LucasSte | 38,472,950 | MDQ6VXNlcjM4NDcyOTUw | User | false |
huggingface/diffusers | 4,125,917,662 | I_kwDOHa8MBc717IHe | 13,319 | https://github.com/huggingface/diffusers/issues/13319 | https://api.github.com/repos/huggingface/diffusers/issues/13319 | The backward pass of QwenImageTransformer failed with Ulysses SP. | ### Describe the bug
I am not sure whether the backward pass of Ulysses SP is formally supported, but I found that backward ops like `_native_attention_backward_op` are implemented in the codebase. When I try to run QwenImageTransformer with the backward pass, I encounter errors related to shape mismatches.
### Reprod... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-03-24T07:46:36Z | 2026-03-30T10:12:02Z | 2026-03-30T10:12:02Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhtmike | 8,342,575 | MDQ6VXNlcjgzNDI1NzU= | User | false |
huggingface/diffusers | 4,113,586,208 | I_kwDOHa8MBc71MFgg | 13,300 | https://github.com/huggingface/diffusers/issues/13300 | https://api.github.com/repos/huggingface/diffusers/issues/13300 | diffusers fails in PyTorch when generating image using stabilityai/stable-diffusion-3.5-large-turbo, black-forest-labs/FLUX.1-dev on CPU | ### Describe the bug
Trace for stabilityai/stable-diffusion-3.5-large-turbo:
```
Traceback (most recent call last):
File "/disks/samsung-4TB-A/AI-models/from-hugging-face/stable-diffusion-3.5-large-turbo/run.py", line 15, in <module>
image = pipe(prompt).images[0]
^^^^^^^^^^^^
File "/usr/local/lib/... | open | null | false | 1 | [
"bug"
] | [] | 2026-03-21T19:35:52Z | 2026-03-30T10:40:53Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yurivict | 271,906 | MDQ6VXNlcjI3MTkwNg== | User | false |
huggingface/diffusers | 4,183,716,202 | I_kwDOHa8MBc75XnFq | 13,375 | https://github.com/huggingface/diffusers/issues/13375 | https://api.github.com/repos/huggingface/diffusers/issues/13375 | Investigate the current DtoH sync solution in pipelines | looks good, still feel like we need to broaden the scope of that fix within diffusers at some point :) I'll be out on sabbatical for the next month but I can help when I get back
_Originally posted by @jbschlosser in https://github.com/huggingface/diffusers/pull/13356#discussion_r3019482381_
| open | null | false | 0 | [
"performance"
] | [] | 2026-04-01T03:03:24Z | 2026-04-01T03:03:36Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 4,142,780,127 | I_kwDOHa8MBc727c7f | 13,343 | https://github.com/huggingface/diffusers/issues/13343 | https://api.github.com/repos/huggingface/diffusers/issues/13343 | [CI] Auto-label PRs for better insight and visibility | The Diffusers team is small, which makes the time-consuming process of triaging and reviewing PRs challenging. As a result, smaller well-written PRs can be accidentally ignored when they are easy to merge while larger poorly written PRs can consume precious time when they are hard to quickly understand.
It may be help... | open | null | false | 1 | [] | [] | 2026-03-26T11:01:38Z | 2026-04-01T05:32:12Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | iwr-redmond | 142,086,261 | U_kgDOCHgQdQ | User | false |
huggingface/diffusers | 2,692,979,032 | I_kwDOHa8MBc6gg5lY | 10,022 | https://github.com/huggingface/diffusers/issues/10022 | https://api.github.com/repos/huggingface/diffusers/issues/10022 | [core] refactor `attention_processor.py` the easy way | With @DN6 we have been discussing an idea about breaking up `src/diffusers/models/attention_processor.py` as it's getting excruciatingly longer and longer. The idea is simple and won't very likely require multiple rounds of PRs.
* Create a module named `attention_processor`.
* Split the attention processor classes... | open | null | false | 6 | [
"wip"
] | [
"DN6"
] | 2024-11-26T03:37:33Z | 2026-04-01T07:55:23Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 4,184,599,318 | I_kwDOHa8MBc75a-sW | 13,377 | https://github.com/huggingface/diffusers/issues/13377 | https://api.github.com/repos/huggingface/diffusers/issues/13377 | [Bug] `QwenImagePipeline` silently disables CFG when passing `negative_prompt_embeds` if mask is `None` (which `encode_prompt` returns by default) | ### Describe the bug
In `QwenImagePipeline`, when users manually pre-compute prompt embeddings to optimize memory usage (e.g., placing the encoder and transformer on different GPUs), Classifier-Free Guidance (CFG) is silently disabled if `negative_prompt_embeds_mask` is set to `None`.
However, `encode_prompt` explic... | open | null | false | 2 | [
"bug"
] | [] | 2026-04-01T06:54:19Z | 2026-04-01T08:54:19Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Sunhill666 | 29,660,906 | MDQ6VXNlcjI5NjYwOTA2 | User | false |
huggingface/diffusers | 3,659,783,058 | I_kwDOHa8MBc7aI9uS | 12,709 | https://github.com/huggingface/diffusers/issues/12709 | https://api.github.com/repos/huggingface/diffusers/issues/12709 | Meituan Longcat Video | ### Model/Pipeline/Scheduler description
https://huggingface.co/meituan-longcat/LongCat-Video
https://github.com/meituan-longcat/LongCat-Video
https://meituan-longcat.github.io/LongCat-Video/
Video generation model. Supports T2V, I2V and video continuation
### Open source status
- [x] The model implementation is av... | open | reopened | false | 6 | [] | [] | 2025-11-24T17:11:07Z | 2026-04-01T08:59:05Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | alerikaisattera | 73,682,764 | MDQ6VXNlcjczNjgyNzY0 | User | false |
huggingface/diffusers | 3,868,938,790 | I_kwDOHa8MBc7mm1Im | 13,053 | https://github.com/huggingface/diffusers/issues/13053 | https://api.github.com/repos/huggingface/diffusers/issues/13053 | [Feature] Support Unipic 3.0 Pipeline | **Is your feature request related to a problem? Please describe.**
Yes. Currently, there is no native pipeline in diffusers that supports Unified Multi-Image Conditioning as proposed in UniPic 3.0. https://huggingface.co/Skywork/Unipic3
**Describe the solution you'd like.**
I propose adding a UniPic 3.0–style pipeline... | open | null | false | 0 | [] | [] | 2026-01-29T05:39:30Z | 2026-04-01T10:01:15Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Ando233 | 74,404,658 | MDQ6VXNlcjc0NDA0NjU4 | User | false |
huggingface/diffusers | 4,144,345,286 | I_kwDOHa8MBc73BbDG | 13,350 | https://github.com/huggingface/diffusers/issues/13350 | https://api.github.com/repos/huggingface/diffusers/issues/13350 | how to use black-forest-labs/FLUX.2-klein-9B with Batch inference | ### Describe the bug
Batch inference produces 4 times as many images.
My prompt is a list of str with length 7,
but it finally gives me 28 images.
### Reproduction
```python3
import diffusers, torch
diffusers.Flux2KleinPipeline.from_pretrained('black-forest-labs/FLUX.2-klein-9B', torch_dtype=torch.bfloat16, qua... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-03-26T14:13:20Z | 2026-04-03T00:03:30Z | 2026-04-03T00:03:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chaowenguo | 229,142,208 | U_kgDODahuwA | User | false |
huggingface/diffusers | 4,165,542,116 | I_kwDOHa8MBc74SSDk | 13,362 | https://github.com/huggingface/diffusers/issues/13362 | https://api.github.com/repos/huggingface/diffusers/issues/13362 | DDIMPipeline does not validate eta range despite documented constraint [0, 1] | ### Describe the bug
In DDIMPipeline, the eta parameter is documented to be within the range [0, 1], but there is currently no validation enforcing this constraint.
From the docstring:
>"eta corresponds to η in paper and should be between [0, 1]"
However, users can pass arbitrary values (e.g., negative or >1) witho... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-03-29T19:06:50Z | 2026-04-03T02:07:30Z | 2026-04-03T02:07:30Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Akash504-ai | 199,394,642 | U_kgDOC-KFUg | User | false |
huggingface/diffusers | 4,195,378,610 | I_kwDOHa8MBc76EGWy | 13,386 | https://github.com/huggingface/diffusers/issues/13386 | https://api.github.com/repos/huggingface/diffusers/issues/13386 | [Bug] `train_dreambooth_lora_qwen_image.py` crashes with `--with_prior_preservation` due to tensor concatenation errors | ### Describe the bug
When running the `train_dreambooth_lora_qwen_image.py` script with the `--with_prior_preservation` flag, the training crashes during the text embedding extraction phase. There are two distinct bugs related to tensor concatenation at [L1323](https://www.google.com/search?q=https://github.com/huggin... | open | null | false | 0 | [
"bug"
] | [] | 2026-04-02T15:30:04Z | 2026-04-03T03:40:10Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chenyangzhu1 | 102,785,092 | U_kgDOBiBgRA | User | false |
huggingface/diffusers | 4,198,259,024 | I_kwDOHa8MBc76PFlQ | 13,394 | https://github.com/huggingface/diffusers/issues/13394 | https://api.github.com/repos/huggingface/diffusers/issues/13394 | DDPMScheduler allows num_inference_steps=0 without validation (inconsistent with DDIMScheduler) | ### Describe the bug
## Bug description
The `DDPMScheduler.set_timesteps` method does not validate the value of `num_inference_steps`.
Passing `num_inference_steps=0` does not raise an error and can lead to invalid internal state or unexpected behavior.
This is inconsistent with `DDIMScheduler`, which already valid... | open | null | false | 0 | [
"bug"
] | [] | 2026-04-03T04:30:08Z | 2026-04-03T04:30:08Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Akash504-ai | 199,394,642 | U_kgDOC-KFUg | User | false |
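The missing guard described in this report could mirror the check that `DDIMScheduler` already performs. A minimal sketch of such a validation, where the helper name, bounds, and error messages are illustrative and not the library's actual code:

```python
def validate_num_inference_steps(num_inference_steps, num_train_timesteps=1000):
    # Fail fast instead of silently building an empty or invalid timestep
    # table when the caller passes 0 (or a value above the training horizon).
    if not isinstance(num_inference_steps, int) or num_inference_steps <= 0:
        raise ValueError(
            f"`num_inference_steps` must be a positive integer, got {num_inference_steps!r}."
        )
    if num_inference_steps > num_train_timesteps:
        raise ValueError(
            f"`num_inference_steps` ({num_inference_steps}) cannot exceed "
            f"`num_train_timesteps` ({num_train_timesteps})."
        )

validate_num_inference_steps(50)  # a valid value passes silently
```

With a check like this, `set_timesteps(0)` raises immediately rather than leaving the scheduler in an inconsistent state.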
huggingface/diffusers | 4,105,208,503 | I_kwDOHa8MBc70sIK3 | 13,292 | https://github.com/huggingface/diffusers/issues/13292 | https://api.github.com/repos/huggingface/diffusers/issues/13292 | [Bug] train_dreambooth_lora_flux2_klein.py: batch size mismatch with --with_prior_preservation | When using `--with_prior_preservation` with `train_dreambooth_lora_flux2_klein.py`,
the prompt embedding repeat logic doubles the batch incorrectly.
The line:
num_repeat_elements = len(prompts)
should be:
num_repeat_elements = len(prompts) // 2 if args.with_prior_preservation else len(prompts)
Be... | open | null | false | 2 | [] | [] | 2026-03-20T01:51:27Z | 2026-04-03T05:59:23Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vishk23 | 119,831,996 | U_kgDOByR9vA | User | false |
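The fix quoted in this report can be illustrated in isolation: with prior preservation the batch concatenates instance prompts followed by an equal number of class prompts, so the embedding repeat count must use half the batch length. A sketch under that assumption (the helper name and batch contents are illustrative, not the training script's actual code):

```python
def num_repeat_elements(prompts, with_prior_preservation):
    # With prior preservation, `prompts` holds the instance prompts followed
    # by an equal number of class prompts, so each precomputed embedding
    # should only be repeated for half of the combined batch.
    return len(prompts) // 2 if with_prior_preservation else len(prompts)

# 2 instance prompts + 2 class prompts in one prior-preservation batch.
batch = ["a sks dog", "a sks dog", "a dog", "a dog"]
print(num_repeat_elements(batch, with_prior_preservation=True))   # 2
print(num_repeat_elements(batch, with_prior_preservation=False))  # 4
```

Using the full batch length in the prior-preservation case is exactly what doubles the repeated embeddings.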
huggingface/diffusers | 4,143,698,226 | I_kwDOHa8MBc72-9Ey | 13,349 | https://github.com/huggingface/diffusers/issues/13349 | https://api.github.com/repos/huggingface/diffusers/issues/13349 | Noticed cropped results when generating with FLUX.2 klein 4B with reference images. | Hi, when using FLUX.2 Klein 4B with reference images, I noticed generated images were cropped/stretched. I looked at the already existing issues and found this:
> @nitinmukesh There is a private marked parameter that you can pass to disable the automatic resizing: https://github.com/huggingface/diffusers/blob/0454fbb3... | open | null | false | 9 | [] | [] | 2026-03-26T13:00:04Z | 2026-04-03T11:23:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | matemato | 47,794,629 | MDQ6VXNlcjQ3Nzk0NjI5 | User | false |
huggingface/diffusers | 3,611,548,955 | I_kwDOHa8MBc7XQ90b | 12,635 | https://github.com/huggingface/diffusers/issues/12635 | https://api.github.com/repos/huggingface/diffusers/issues/12635 | The Diffusers MVP 🚀 | Hello folks 👋
## What❓
We’re excited to bring a contributor-focused program to you! In this program, we want to work and collaborate with serious contributors to Diffusers and reward them for their time and energy spent with us. Keep reading this thread if that sounds interesting to you.
To ease the process, we hav... | open | null | false | 17 | [
"diffusers-mvp"
] | [
"DN6",
"yiyixuxu",
"sayakpaul"
] | 2025-11-11T09:54:56Z | 2026-04-03T17:19:22Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,365,892,623 | I_kwDOHa8MBc7In3IP | 12,257 | https://github.com/huggingface/diffusers/issues/12257 | https://api.github.com/repos/huggingface/diffusers/issues/12257 | [Looking for community contribution] support Wan 2.2 S2V: an audio-driven cinematic video generation model | We're super excited about the Wan 2.2 S2V (Speech-to-Video) model and want to get it integrated into Diffusers! This would be an amazing addition, and we're looking for experienced community contributors to help make this happen.
- **Project Page**: https://humanaigc.github.io/wan-s2v-webpage/
- **Source Code**: htt... | open | null | false | 4 | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | [] | 2025-08-29T08:04:43Z | 2026-04-03T17:57:58Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yiyixuxu | 12,631,849 | MDQ6VXNlcjEyNjMxODQ5 | User | false |
huggingface/diffusers | 4,205,196,872 | I_kwDOHa8MBc76pjZI | 13,411 | https://github.com/huggingface/diffusers/issues/13411 | https://api.github.com/repos/huggingface/diffusers/issues/13411 | `LTXEulerAncestralRFScheduler.set_timesteps(sigmas=...)` does not validate monotonicity, causing silent incorrect denoising | ## Describe the bug
`LTXEulerAncestralRFScheduler.set_timesteps` accepts an externally-supplied `sigmas` argument without validating that the schedule is monotonically non-increasing. When `step()` is called on a non-monotone schedule, the ancestral RF decomposition computes `sigma_down` outside `[0, 1]` and `alpha_do... | open | null | false | 0 | [] | [] | 2026-04-04T15:38:48Z | 2026-04-04T15:38:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | VittoriaLanzo | 258,044,740 | U_kgDOD2FzRA | User | false |
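A monotonicity check of the kind this report asks for could look like the sketch below. The function name and error message are illustrative, not the scheduler's actual code:

```python
def validate_sigmas_non_increasing(sigmas):
    # Reject externally supplied schedules that are not monotonically
    # non-increasing, instead of letting step() silently compute
    # sigma_down / alpha_down outside their valid ranges.
    for i, (prev, cur) in enumerate(zip(sigmas, sigmas[1:]), start=1):
        if cur > prev:
            raise ValueError(
                f"`sigmas` must be non-increasing, but sigmas[{i}]={cur} "
                f"is greater than sigmas[{i - 1}]={prev}."
            )

validate_sigmas_non_increasing([1.0, 0.7, 0.3, 0.0])  # a valid schedule passes
```

Running such a check inside `set_timesteps(sigmas=...)` would turn the silent mis-denoising into an immediate, explicit error.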
huggingface/diffusers | 4,209,097,932 | I_kwDOHa8MBc764bzM | 13,418 | https://github.com/huggingface/diffusers/issues/13418 | https://api.github.com/repos/huggingface/diffusers/issues/13418 | 2 gpu | diffusers/FLUX.2-dev-bnb-4bit
how to run it on 2 GPUs
Kaggle | closed | completed | false | 2 | [] | [] | 2026-04-06T02:03:38Z | 2026-04-06T11:09:37Z | 2026-04-06T11:09:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ayttop | 178,673,810 | U_kgDOCqZYkg | User | false |
huggingface/diffusers | 4,105,762,698 | I_kwDOHa8MBc70uPeK | 13,295 | https://github.com/huggingface/diffusers/issues/13295 | https://api.github.com/repos/huggingface/diffusers/issues/13295 | Modular Diffusers 🧨 | Hey folks 👋
We recently released [Modular Diffusers](https://huggingface.co/blog/modular-diffusers), which gives developers the flexibility to reuse existing pipeline blocks in different workflows and also easily implement custom "modular" blocks.
While the `DiffusionPipeline` class has helped establish a standard ... | open | null | false | 15 | [] | [] | 2026-03-20T05:01:29Z | 2026-04-06T13:33:31Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 4,205,710,790 | I_kwDOHa8MBc76rg3G | 13,416 | https://github.com/huggingface/diffusers/issues/13416 | https://api.github.com/repos/huggingface/diffusers/issues/13416 | how to use zimage and flux2 with negative prompt? | ### Describe the bug
```python3
from pipeline_flux_with_cfg import FluxCFGPipeline
```
It allows using a negative prompt with Flux; is there anything similar for Z-Image and FLUX.2?
I mean diffusers.ZImagePipeline with Tongyi-MAI/Z-Image-Turbo and diffusers.flux2kleinpipeline with black-forest-labs/FLUX.2-klein-9B I...
"bug"
] | [] | 2026-04-04T20:01:47Z | 2026-04-06T14:13:17Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chaowenguo | 229,142,208 | U_kgDODahuwA | User | false |
huggingface/diffusers | 4,212,516,302 | I_kwDOHa8MBc77FeXO | 13,425 | https://github.com/huggingface/diffusers/issues/13425 | https://api.github.com/repos/huggingface/diffusers/issues/13425 | Division by zero in rescale_noise_cfg can produce NaNs during inference | ### Describe the bug
## Bug: Division by zero in `rescale_noise_cfg` can produce NaNs
### Description
The function `rescale_noise_cfg` performs a division by `std_cfg` without any numerical stability guard:
```python
noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
```
If std_cfg becomes zero, this leads to N... | open | null | false | 0 | [
"bug"
] | [] | 2026-04-06T15:32:50Z | 2026-04-06T15:32:50Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Akash504-ai | 199,394,642 | U_kgDOC-KFUg | User | false |
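A guard of the kind this report suggests can be sketched as follows. This is a simplified 1-D stand-in for the real `rescale_noise_cfg` (which operates on tensors), and the early-return behavior and `eps` threshold are assumptions, not the library's API:

```python
from statistics import pstdev

def rescale_noise_cfg_safe(noise_cfg, noise_pred_text, guidance_rescale=0.7, eps=1e-8):
    # Population std of each signal, mimicking the per-sample std in the helper.
    std_text = pstdev(noise_pred_text)
    std_cfg = pstdev(noise_cfg)
    if std_cfg < eps:
        # A (near-)constant prediction has no scale to match; rescaling it
        # would divide by ~0, so return it unchanged instead of producing NaNs.
        return list(noise_cfg)
    rescaled = [x * (std_text / std_cfg) for x in noise_cfg]
    return [guidance_rescale * r + (1 - guidance_rescale) * x
            for r, x in zip(rescaled, noise_cfg)]

# A constant `noise_cfg` has zero std; the guard keeps the output finite.
print(rescale_noise_cfg_safe([1.0, 1.0, 1.0], [0.0, 1.0, 2.0]))  # [1.0, 1.0, 1.0]
```

Skipping the rescale for a zero-std prediction is one reasonable policy; another would be clamping the denominator, which changes the output magnitude instead.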
huggingface/diffusers | 4,212,129,820 | I_kwDOHa8MBc77EAAc | 13,421 | https://github.com/huggingface/diffusers/issues/13421 | https://api.github.com/repos/huggingface/diffusers/issues/13421 | Add negative_prompt parameter to GLMImagePipeline | ## Description
`GLMImagePipeline.__call__()` supports `negative_prompt_embeds` but does not accept a `negative_prompt` string parameter. When CFG is active (`guidance_scale > 1`), the unconditional prompt is hardcoded to `""` with no way for users to provide a custom negative prompt.
This is the same pattern that was... | closed | completed | false | 3 | [] | [] | 2026-04-06T14:17:32Z | 2026-04-06T23:59:20Z | 2026-04-06T23:59:20Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | akshan-main | 97,239,696 | U_kgDOBcvCkA | User | false |
huggingface/diffusers | 4,216,296,333 | I_kwDOHa8MBc77T5ON | 13,429 | https://github.com/huggingface/diffusers/issues/13429 | https://api.github.com/repos/huggingface/diffusers/issues/13429 | add JoyAI-Image-Edit | ### Model/Pipeline/Scheduler description
JoyAI-Image is a unified multimodal foundation model for image understanding, text-to-image generation, and instruction-guided image editing. It combines an 8B Multimodal Large Language Model (MLLM) with a 16B Multimodal Diffusion Transformer (MMDiT). A central principle of Joy... | closed | completed | false | 0 | [] | [] | 2026-04-07T07:58:41Z | 2026-04-07T07:59:45Z | 2026-04-07T07:59:44Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Moran232 | 9,348,990 | MDQ6VXNlcjkzNDg5OTA= | User | false |
huggingface/diffusers | 4,216,329,419 | I_kwDOHa8MBc77UBTL | 13,430 | https://github.com/huggingface/diffusers/issues/13430 | https://api.github.com/repos/huggingface/diffusers/issues/13430 | Support for JoyAI-Image-Edit | ### Model/Pipeline/Scheduler description
JoyAI-Image is a unified multimodal foundation model for image understanding, text-to-image generation, and instruction-guided image editing. It combines an 8B Multimodal Large Language Model (MLLM) with a 16B Multimodal Diffusion Transformer (MMDiT). A central principle of Joy... | open | null | false | 0 | [] | [] | 2026-04-07T08:05:37Z | 2026-04-07T08:05:37Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Moran232 | 9,348,990 | MDQ6VXNlcjkzNDg5OTA= | User | false |
huggingface/diffusers | 4,201,445,455 | I_kwDOHa8MBc76bPhP | 13,401 | https://github.com/huggingface/diffusers/issues/13401 | https://api.github.com/repos/huggingface/diffusers/issues/13401 | Help us profile important pipelines and improve if needed | In https://github.com/huggingface/diffusers/pull/13356, we added a guide to comprehensively profile our pipelines with Claude. It, in turn, helped us get rid of issues that can get in the way of the benefits provided by `torch.compile`.
We cannot profile all our important pipelines alone, and this is where the communi... | open | null | false | 9 | [
"performance",
"diffusers-mvp"
] | [] | 2026-04-03T17:18:13Z | 2026-04-07T13:03:55Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |