| repo | github_id | github_node_id | number | html_url | api_url | title | body | state | state_reason | locked | comments_count | labels | assignees | created_at | updated_at | closed_at | author_association | milestone_title | snapshot_id | extracted_at | author_login | author_id | author_node_id | author_type | author_site_admin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 2,744,426,321 | I_kwDOHa8MBc6jlJ9R | 10,260 | https://github.com/huggingface/diffusers/issues/10260 | https://api.github.com/repos/huggingface/diffusers/issues/10260 | Make `time_embed_dim` of `UNet2DModel` changeable | **Is your feature request related to a problem? Please describe.**<br>I want to change the `time_embed_dim` of `UNet2DModel`, but it is hard coded as `time_embed_dim = block_out_channels[0] * 4` in the `__init__` function.<br>**Describe the solution you'd like.**<br>Make `time_embedding_dim` a parameter of the `__init__` f... | closed | completed | false | 0 | [] | [] | 2024-12-17T09:43:12Z | 2024-12-18T07:55:19Z | 2024-12-18T07:55:19Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Bichidian | 22,809,191 | MDQ6VXNlcjIyODA5MTkx | User | false |
huggingface/diffusers | 2,744,995,848 | I_kwDOHa8MBc6jnVAI | 10,266 | https://github.com/huggingface/diffusers/issues/10266 | https://api.github.com/repos/huggingface/diffusers/issues/10266 | UniPC with FlowMatch fails with index out-of-bounds | ### Describe the bug<br>If using the new variant of `UniPCMultistepScheduler` introduced via #9982 it fails with index out-of-bounds on the last step<br>(DPM is fine, I haven't checked other supported schedulers, but DEIS and SA are likely affected as well)<br>### Reproduction<br>```py<br>import torch<br>import diffusers<br>repo_i... | closed | completed | false | 0 | ["bug"] | [] | 2024-12-17T13:48:48Z | 2024-12-18T12:22:12Z | 2024-12-18T12:22:12Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,746,720,528 | I_kwDOHa8MBc6jt6EQ | 10,277 | https://github.com/huggingface/diffusers/issues/10277 | https://api.github.com/repos/huggingface/diffusers/issues/10277 | Using euler scheduler in fluxfill | ### Describe the bug<br>I am using the customfluxfill function and want to use the Euler scheduler (EulerAncestralDiscreteScheduler) in my code. However, I am encountering the following error:<br>### Reproduction<br>```<br>from diffusers.schedulers import (<br>    DPMSolverMultistepScheduler,<br>    EulerAncestralDiscreteSchedul... | open | null | false | 4 | ["bug", "stale"] | [] | 2024-12-18T04:15:02Z | 2025-02-02T15:02:54Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | luna313 | 182,591,483 | U_kgDOCuIf-w | User | false |
huggingface/diffusers | 2,746,867,605 | I_kwDOHa8MBc6jud-V | 10,280 | https://github.com/huggingface/diffusers/issues/10280 | https://api.github.com/repos/huggingface/diffusers/issues/10280 | Safetensors loading uses mmap with multiple processes sharing the same fd cause slow gcsfuse performance | ### Describe the bug<br>When I use `StableDiffusionPipeline.from_single_file` to load a safetensors model, I noticed that the loading speed is extremely slow when the file is loaded from GCSFuse (https://cloud.google.com/storage/docs/cloud-storage-fuse/overview).<br>The reason is that the loader creates multiple proces... | closed | completed | false | 4 | ["bug"] | [] | 2024-12-18T06:02:41Z | 2025-01-10T10:11:05Z | 2025-01-10T10:11:05Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | wlhee | 5,008,384 | MDQ6VXNlcjUwMDgzODQ= | User | false |
huggingface/diffusers | 2,746,914,988 | I_kwDOHa8MBc6jupis | 10,281 | https://github.com/huggingface/diffusers/issues/10281 | https://api.github.com/repos/huggingface/diffusers/issues/10281 | Request to implement FreeScale, a new diffusion scheduler | ### Model/Pipeline/Scheduler description<br>FreeScale is a tuning-free method for higher-resolution visual generation, unlocking the 8k image generation for pre-trained SDXL! Compared to direct inference by SDXL, FreeScale brings negligible additional memory and time costs. |
), but got torch.Size([3072, 384]) | ### Describe the bug<br>Get an error like the title when I load a flux fill gguf format file as a flux transformer.<br>### Reproduction<br>```python<br>from huggingface_hub import hf_hub_download<br>import os<br>def download_model():<br>    # Set the model information<br>    repo_id = "YarvixPA/FLUX.1-Fill-dev-gguf"<br>    filename = "flux1-fill-dev... | closed | completed | false | 3 | ["bug"] | [] | 2024-12-18T08:29:50Z | 2024-12-18T13:51:45Z | 2024-12-18T13:51:44Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chuck-ma | 74,402,255 | MDQ6VXNlcjc0NDAyMjU1 | User | false |
huggingface/diffusers | 2,747,426,014 | I_kwDOHa8MBc6jwmTe | 10,287 | https://github.com/huggingface/diffusers/issues/10287 | https://api.github.com/repos/huggingface/diffusers/issues/10287 | The example code in the Hugging Face documentation has an issue. | ### Describe the bug<br>https://huggingface.co/docs/diffusers/en/api/pipelines/ledits_pp<br>The examples in this link cannot be executed without encountering bugs.<br>## for LEditsPPPipelineStableDiffusion:<br>Loading pipeline components...: 100%\|█████████████████████████████████████████████████████████████████████████████... | closed | completed | false | 4 | ["bug"] | [] | 2024-12-18T10:30:45Z | 2025-01-06T18:19:54Z | 2025-01-06T18:19:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Zhiyuan-Fan | 75,023,175 | MDQ6VXNlcjc1MDIzMTc1 | User | false |
huggingface/diffusers | 1,425,872,470 | I_kwDOHa8MBc5U_RZW | 1,029 | https://github.com/huggingface/diffusers/issues/1029 | https://api.github.com/repos/huggingface/diffusers/issues/1029 | [Community Pipeline] Mix Prompting pipeline? | ## Idea<br>I've been playing with this concept of mixed embeddings. The premise is simple: the pipeline takes a list of prompts and generates images based on a mix of embeddings of the prompts. It produces some interesting mixed-up outcomes.<br>## Example<br>Prompt 1: `a mountain, cinematic angle, studio Ghibli, cinematic ... | closed | completed | false | 4 | ["community-examples", "stale"] | [] | 2022-10-27T15:52:37Z | 2023-04-21T15:03:50Z | 2023-04-21T15:03:50Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | daspartho | 59,410,571 | MDQ6VXNlcjU5NDEwNTcx | User | false |
huggingface/diffusers | 2,747,679,139 | I_kwDOHa8MBc6jxkGj | 10,290 | https://github.com/huggingface/diffusers/issues/10290 | https://api.github.com/repos/huggingface/diffusers/issues/10290 | the transformer of flux+contronet can not use torch.compile to speed | ### Describe the bug<br>When I use "pipeline.transformer = torch.compile(pipeline.transformer, mode="reduce-overhead", fullgraph=True)", the error is .......diffusers/models/transformers/transformer_flux.py", line 519, in forward<br>interval_control = int(np.ceil(interval_control))<br>Set TORCH_LOGS="+dynamo" and TORCH... | closed | completed | false | 3 | ["bug"] | [] | 2024-12-18T12:26:11Z | 2024-12-25T04:08:22Z | 2024-12-18T13:23:13Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | algorithmconquer | 10,041,695 | MDQ6VXNlcjEwMDQxNjk1 | User | false |
huggingface/diffusers | 2,749,310,736 | I_kwDOHa8MBc6j3ycQ | 10,297 | https://github.com/huggingface/diffusers/issues/10297 | https://api.github.com/repos/huggingface/diffusers/issues/10297 | [GGUF] support serialization with GGUF | Thanks to @DN6 we support loading GGUF checkpoints and running on-the-fly dequants ([PR](https://github.com/huggingface/diffusers/pull/9964)).<br>Currently, we don't support `save_pretrained()` on a `DiffusionPipeline` with GGUF, which, IMO, could be massively impactful, too. | open | null | false | 6 | ["wip", "quantization"] | ["DN6"] | 2024-12-19T05:38:59Z | 2025-02-20T19:17:27Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,750,197,988 | I_kwDOHa8MBc6j7LDk | 10,302 | https://github.com/huggingface/diffusers/issues/10302 | https://api.github.com/repos/huggingface/diffusers/issues/10302 | Using FP8 for inference without CPU offloading can introduce noise. | ### Describe the bug<br>If I use ```pipe.enable_model_cpu_offload(device=device)```, the model can perform inference correctly after warming up. However, if I comment out this line, the inference results are noisy.<br>### Reproduction<br>```python<br>from diffusers import (<br>    FluxPipeline,<br>    FluxTransformer2DModel... | open | null | false | 6 | ["bug"] | [] | 2024-12-19T12:39:06Z | 2025-03-10T14:18:58Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | todochenxi | 108,149,467 | U_kgDOBnI62w | User | false |
huggingface/diffusers | 2,750,241,562 | I_kwDOHa8MBc6j7Vsa | 10,303 | https://github.com/huggingface/diffusers/issues/10303 | https://api.github.com/repos/huggingface/diffusers/issues/10303 | 【BUG】Attention.head_to_batch_dim has bug in terms of tensor permutation | ### Describe the bug<br>https://github.com/huggingface/diffusers/blob/1826a1e7d31df48d345a20028b3ace48f09a4e60/src/diffusers/models/attention_processor.py#L613<br>when `out_dim==4`, the output shape does not match the function's comment `[batch_size, seq_len, heads, dim // heads]`<br>here is the original function<br>```<br>def... | open | null | false | 3 | ["bug", "stale"] | [] | 2024-12-19T12:53:56Z | 2025-01-18T15:02:31Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Dawn-LX | 48,169,011 | MDQ6VXNlcjQ4MTY5MDEx | User | false |
huggingface/diffusers | 2,751,891,256 | I_kwDOHa8MBc6kBoc4 | 10,311 | https://github.com/huggingface/diffusers/issues/10311 | https://api.github.com/repos/huggingface/diffusers/issues/10311 | import error in train dreambooth.py | Traceback (most recent call last):<br>  File "/content/train_dreambooth.py", line 21, in <module><br>    from diffusers import AutoencoderKL, DDIMScheduler, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel<br>  File "/usr/local/lib/python3.10/dist-packages/diffusers/__init__.py", line 3, in <module><br>    from .co... | closed | completed | false | 3 | ["stale", "training"] | [] | 2024-12-20T04:10:04Z | 2025-01-27T01:27:15Z | 2025-01-27T01:27:15Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | priyadharsan1403 | 181,352,036 | U_kgDOCs82ZA | User | false |
huggingface/diffusers | 2,752,009,201 | I_kwDOHa8MBc6kCFPx | 10,313 | https://github.com/huggingface/diffusers/issues/10313 | https://api.github.com/repos/huggingface/diffusers/issues/10313 | Multiple bugs in flux dreambooth script: train_dreambooth_lora_flux_advanced.py | I haven't fully fixed the script, but I'm really not sure how anyone has had success with it (eg the [blog](https://huggingface.co/blog/linoyts/new-advanced-flux-dreambooth-lora) post). Many show stoppers when trying to get Textual Inversion working.<br>To debug, I started peeling away at [the main Flux training script... | open | null | false | 13 | ["training"] | ["linoytsaban"] | 2024-12-20T06:01:34Z | 2025-03-18T19:14:47Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | freckletonj | 8,399,149 | MDQ6VXNlcjgzOTkxNDk= | User | false |
huggingface/diffusers | 2,752,045,401 | I_kwDOHa8MBc6kCOFZ | 10,314 | https://github.com/huggingface/diffusers/issues/10314 | https://api.github.com/repos/huggingface/diffusers/issues/10314 | HunyuanVideoPipeline produces NaN values | ### Describe the bug<br>Running `diffusers.utils.export_to_video()` on the output of `HunyuanVideoPipeline` results in<br>```<br>/app/diffusers/src/diffusers/image_processor.py:147: RuntimeWarning: invalid value encountered in cast<br>images = (images * 255).round().astype("uint8")<br>```<br>After adding some checks to `num... | closed | completed | false | 19 | ["bug"] | [] | 2024-12-20T06:32:30Z | 2025-01-20T07:10:00Z | 2025-01-20T07:09:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | smedegaard | 3,075,382 | MDQ6VXNlcjMwNzUzODI= | User | false |
huggingface/diffusers | 2,752,063,950 | I_kwDOHa8MBc6kCSnO | 10,315 | https://github.com/huggingface/diffusers/issues/10315 | https://api.github.com/repos/huggingface/diffusers/issues/10315 | cogvideo training error | ### Describe the bug<br>Fine tuning the model on both GPUs reports the following error: RuntimeError: CUDA driver error: invalid argument<br>Do you know what the problem is?<br>### Reproduction<br>[rank1]:                ^^^^^^<br>[rank1]:   File "/home/conda_env/controlnet/lib/python3.11/site-package... | closed | completed | false | 4 | ["bug", "training"] | [] | 2024-12-20T06:47:32Z | 2025-01-12T05:44:35Z | 2025-01-12T05:44:35Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | linwenzhao1 | 51,796,215 | MDQ6VXNlcjUxNzk2MjE1 | User | false |
huggingface/diffusers | 2,752,248,369 | I_kwDOHa8MBc6kC_ox | 10,317 | https://github.com/huggingface/diffusers/issues/10317 | https://api.github.com/repos/huggingface/diffusers/issues/10317 | The bug occurs when using torch.compile on StableVideoDiffusionPipeline, and it happens when passing different images for the second time. | ### Describe the bug<br>I created a page using Gradio to generate videos with the `StableVideoDiffusionPipeline`, and I used `torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)` for acceleration. I noticed that after inference with StableVideoDiffusionPipeline, the GPU memory usage increases from 4.8GB ... | open | null | false | 3 | ["bug", "stale"] | [] | 2024-12-20T08:51:14Z | 2025-01-19T15:02:46Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ZHJ19970917 | 73,893,296 | MDQ6VXNlcjczODkzMjk2 | User | false |
huggingface/diffusers | 2,752,710,597 | I_kwDOHa8MBc6kEwfF | 10,321 | https://github.com/huggingface/diffusers/issues/10321 | https://api.github.com/repos/huggingface/diffusers/issues/10321 | Error no file named pytorch_model.bin, model.safetensors found in directory Lightricks/LTX-Video. | ### Describe the bug<br>```<br>(venv) C:\ai1\LTX-Video>python inference.py<br>Traceback (most recent call last):<br>  File "C:\ai1\LTX-Video\inference.py", line 23, in <module><br>    text_encoder = T5EncoderModel.from_pretrained(<br>  File "C:\ai1\LTX-Video\venv\lib\site-packages\transformers\modeling_utils.py", line 3779, in fro... | closed | completed | false | 14 | ["bug"] | [] | 2024-12-20T13:08:58Z | 2024-12-24T09:23:32Z | 2024-12-24T09:23:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,753,448,162 | I_kwDOHa8MBc6kHkji | 10,327 | https://github.com/huggingface/diffusers/issues/10327 | https://api.github.com/repos/huggingface/diffusers/issues/10327 | Apply applicable `quantization_config` to model components when loading a model | With new improvements to `quantization_config`, memory requirements of models such as SD35 and FLUX.1 are much lower.<br>However, the user must manually load each model component they want quantized and then assemble the pipeline.<br>For example:<br>```py<br>quantization_config = BitsAndBytesConfig(...)<br>transformer = SD3Tra... | closed | completed | false | 16 | ["quantization"] | ["sayakpaul", "SunMarc"] | 2024-12-20T20:33:20Z | 2025-05-09T04:37:29Z | 2025-05-09T04:37:16Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,753,531,893 | I_kwDOHa8MBc6kH4_1 | 10,328 | https://github.com/huggingface/diffusers/issues/10328 | https://api.github.com/repos/huggingface/diffusers/issues/10328 | Add `optimum.quanto` as supported load-time `quantization_config` | Recent additions to diffusers added `BitsAndBytesConfig` as well as `TorchAoConfig` options that can be used as `quantization_config` when loading model components using `from_pretrained`<br>for example:<br>```py<br>quantization_config = BitsAndBytesConfig(...)<br>transformer = SD3Transformer2DModel.from_pretrained(repo_id, ... | closed | completed | false | 8 | ["wip", "quantization"] | ["DN6"] | 2024-12-20T21:54:41Z | 2025-04-17T13:25:22Z | 2025-04-17T13:25:22Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 1,426,142,054 | I_kwDOHa8MBc5VATNm | 1,033 | https://github.com/huggingface/diffusers/issues/1033 | https://api.github.com/repos/huggingface/diffusers/issues/1033 | IndexError: list index out of range, stable_diffusion_mega | ### Describe the bug<br>I am using custom_pipeline="stable_diffusion_mega" pipeline, and getting "IndexError: list index out of range" error<br>Here is traceback of error,<br>> Traceback (most recent call last):<br>>   File "/opt/conda/envs/ldm/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task<br>> ... | closed | completed | false | 2 | ["bug"] | [] | 2022-10-27T19:06:33Z | 2022-10-28T11:24:01Z | 2022-10-28T11:24:01Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | adhikjoshi | 11,740,719 | MDQ6VXNlcjExNzQwNzE5 | User | false |
huggingface/diffusers | 2,753,892,553 | I_kwDOHa8MBc6kJRDJ | 10,333 | https://github.com/huggingface/diffusers/issues/10333 | https://api.github.com/repos/huggingface/diffusers/issues/10333 | Implement framewise encoding/decoding in LTX Video VAE | Currently, we do not implement framewise encoding/decoding in the LTX Video VAE. This leads to an opportunity for reducing memory usage, which will be beneficial for both inference and training.<br>LoRA finetuning LTX Video on 49x512x768 videos can be done in under 6 GB if prompts and latents are pre-computed, but the... | closed | completed | false | 2 | ["enhancement", "contributions-welcome"] | [] | 2024-12-21T11:00:29Z | 2025-01-13T20:58:33Z | 2025-01-13T20:58:33Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | a-r-r-o-w | 72,266,394 | MDQ6VXNlcjcyMjY2Mzk0 | User | false |
huggingface/diffusers | 2,753,917,286 | I_kwDOHa8MBc6kJXFm | 10,334 | https://github.com/huggingface/diffusers/issues/10334 | https://api.github.com/repos/huggingface/diffusers/issues/10334 | Sana broke on MacOS. Grey images on MPS, NaN's on CPU. | ### Describe the bug<br>Just started to play with Sana, was excited when I saw it was coming to Diffusers as the NVIDIA supplied code was full of CUDA only stuff.<br>Ran the example code, changing cuda to mps and got a grey image. |
.round().astype("uint8") output black image | ### Describe the bug<br>Getting this error during inference and output is black image<br>```<br>C:\ai1\\venv\lib\site-packages\diffusers\image_processor.py:147: RuntimeWarning: invalid value encountered in cast<br>images = (images * 255).round().astype("uint8")<br>```<br>Inference code<br>```<br>import torch<br>from diffusers im... | closed | completed | false | 4 | ["bug"] | [] | 2024-12-22T10:52:39Z | 2024-12-22T11:17:36Z | 2024-12-22T11:17:35Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,754,660,178 | I_kwDOHa8MBc6kMMdS | 10,345 | https://github.com/huggingface/diffusers/issues/10345 | https://api.github.com/repos/huggingface/diffusers/issues/10345 | safetensor streaming in from_single_file_loading() | can we add support for streaming safetensors while loading using `from_single_file`.<br>source: https://github.com/run-ai/runai-model-streamer<br>example:<br>```python<br>from runai_model_streamer import SafetensorsStreamer<br>file_path = "/path/to/file.safetensors"<br>with SafetensorsStreamer() as streamer:<br>    streamer.str... | closed | completed | false | 2 | ["stale"] | [] | 2024-12-22T13:27:46Z | 2025-01-21T15:07:58Z | 2025-01-21T15:07:57Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AbhinavJangra29 | 107,471,490 | U_kgDOBmfigg | User | false |
huggingface/diffusers | 2,755,381,405 | I_kwDOHa8MBc6kO8id | 10,350 | https://github.com/huggingface/diffusers/issues/10350 | https://api.github.com/repos/huggingface/diffusers/issues/10350 | high memory consumption of VAE decoder in SD2.1 | ### Describe the bug<br>When I try to add the VAE decoder in SD2.1 to my training pipeline, I encountered an OOM error. After careful inspection, I found that the decoder really takes a vast amount of memory. If the input is in the shape of [1,4,96,96], the memory consumption is already 15G. If I increase the batch size, this ... | closed | completed | false | 3 | ["bug", "stale"] | [] | 2024-12-23T07:08:34Z | 2025-01-27T01:23:36Z | 2025-01-27T01:23:35Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yfpeng1234 | 135,833,420 | U_kgDOCBinTA | User | false |
huggingface/diffusers | 2,755,757,821 | I_kwDOHa8MBc6kQYb9 | 10,357 | https://github.com/huggingface/diffusers/issues/10357 | https://api.github.com/repos/huggingface/diffusers/issues/10357 | [community] PyTorch/XLA support | There are pipelines with missing XLA support. We'd like to improve coverage with your help!<br>[Example 1](https://github.com/huggingface/diffusers/pull/10222/files)<br>[Example 2](https://github.com/huggingface/diffusers/pull/10109/files)<br>Please limit changes to a single pipeline in each PR. Changes must be only rela... | closed | completed | false | 1 | ["good first issue", "contributions-welcome"] | [] | 2024-12-23T10:50:22Z | 2025-01-08T22:31:29Z | 2025-01-08T22:31:29Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hlky | 106,811,348 | U_kgDOBl3P1A | User | false |
huggingface/diffusers | 1,426,279,353 | I_kwDOHa8MBc5VA0u5 | 1,036 | https://github.com/huggingface/diffusers/issues/1036 | https://api.github.com/repos/huggingface/diffusers/issues/1036 | COLAB - CLIP_Guided_Stable_diffusion_with_diffusers.ipynb BUG | ### Describe the bug<br># FROM<br>COLAB - version<br>https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion<br>Example \| Description \| Code Example \| Colab \| Author<br>-- \| -- \| -- \| -- \| --<br>CLIP Guided Stable Diffusion \| Doing CLIP guidance for text to image generation with S... | closed | completed | false | 6 | ["bug", "stale"] | ["patil-suraj"] | 2022-10-27T21:20:18Z | 2022-12-05T15:03:25Z | 2022-12-05T15:03:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | stromal | 19,979,901 | MDQ6VXNlcjE5OTc5OTAx | User | false |
huggingface/diffusers | 2,757,169,536 | I_kwDOHa8MBc6kVxGA | 10,369 | https://github.com/huggingface/diffusers/issues/10369 | https://api.github.com/repos/huggingface/diffusers/issues/10369 | replication_pad3d_cuda not implemented for BFloat16 | ### Describe the bug<br># ComfyUI Error Report<br>## Error Details<br>- **Node ID:** 73<br>- **Node Type:** VAEDecodeTiled<br>- **Exception Type:** RuntimeError<br>- **Exception Message:** "replication_pad3d_cuda" not implemented for 'BFloat16'<br>### Reproduction<br>This error is caused when trying to run the Huyuan [workflow](https:... | closed | not_planned | false | 1 | ["bug"] | [] | 2024-12-24T04:49:02Z | 2024-12-24T13:38:33Z | 2024-12-24T13:38:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | andreszs | 10,298,071 | MDQ6VXNlcjEwMjk4MDcx | User | false |
huggingface/diffusers | 1,426,294,907 | I_kwDOHa8MBc5VA4h7 | 1,037 | https://github.com/huggingface/diffusers/issues/1037 | https://api.github.com/repos/huggingface/diffusers/issues/1037 | Add Stable Diffusion Telephone to community pipelines | Context: https://twitter.com/jayelmnop/status/1585695941788856320<br>**Is your feature request related to a problem? Please describe.**<br>The problem is that we don't have this awesome little pipeline!<br>**Describe the solution you'd like**<br>We add it to community pipelines per https://github.com/huggingface/diffuser... | closed | completed | false | 3 | ["stale"] | [] | 2022-10-27T21:38:00Z | 2022-12-05T15:03:24Z | 2022-12-05T15:03:24Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | MarkRich | 2,020,546 | MDQ6VXNlcjIwMjA1NDY= | User | false |
huggingface/diffusers | 2,758,481,499 | I_kwDOHa8MBc6kaxZb | 10,373 | https://github.com/huggingface/diffusers/issues/10373 | https://api.github.com/repos/huggingface/diffusers/issues/10373 | [Request] Compatibility of textual inversion between `SD 1.5` and `SD 2.1` | **Is your feature request related to a problem? Please describe.**<br>Textual inversions trained on different versions of SD, such as SD 1.5 and SD 2.1, are not compatible.<br>Related to previously submitted issue #4030<br>**Describe the solution you'd like.**<br>To make textual inversion, which is compatible only with SD 1.... | open | null | false | 2 | ["stale", "contributions-welcome"] | [] | 2024-12-25T04:30:27Z | 2025-01-24T15:02:58Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | suzukimain | 131,413,573 | U_kgDOB9U2RQ | User | false |
huggingface/diffusers | 2,758,495,651 | I_kwDOHa8MBc6ka02j | 10,374 | https://github.com/huggingface/diffusers/issues/10374 | https://api.github.com/repos/huggingface/diffusers/issues/10374 | Is there any plan to support TeaCache for training-free acceleration? | TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speedup HunyuanVideo 2x without much visual quality degradation. For example, the inference for a 720p, 129-frame video takes around 5... | open | null | false | 4 | ["wip"] | ["a-r-r-o-w"] | 2024-12-25T05:00:23Z | 2025-01-27T01:28:53Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LiewFeng | 42,316,773 | MDQ6VXNlcjQyMzE2Nzcz | User | false |
huggingface/diffusers | 2,758,652,697 | I_kwDOHa8MBc6kbbMZ | 10,375 | https://github.com/huggingface/diffusers/issues/10375 | https://api.github.com/repos/huggingface/diffusers/issues/10375 | [low priority] Please fix links in documentation | https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video<br>Both links are broken<br>Make sure to check out the Schedulers [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse co... | closed | completed | false | 0 | [] | [] | 2024-12-25T09:04:33Z | 2024-12-28T20:01:27Z | 2024-12-28T20:01:27Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,758,709,230 | I_kwDOHa8MBc6kbo_u | 10,379 | https://github.com/huggingface/diffusers/issues/10379 | https://api.github.com/repos/huggingface/diffusers/issues/10379 | QuantizedFluxTransformer2DModel save bug | ### Describe the bug<br>QuantizedFluxTransformer2DModel save bug<br>### Reproduction<br>```<br>class QuantizedFluxTransformer2DModel(QuantizedDiffusersModel):<br>    base_class = FluxTransformer2DModel<br>transformer = FluxTransformer2DModel.from_pretrained(<br>    'black-forest-labs/FLUX.1-Fill-dev', subfold... | open | null | false | 1 | ["bug", "stale"] | [] | 2024-12-25T10:22:30Z | 2025-01-24T15:02:51Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | huangjun12 | 22,365,664 | MDQ6VXNlcjIyMzY1NjY0 | User | false |
huggingface/diffusers | 2,758,763,044 | I_kwDOHa8MBc6kb2Ik | 10,381 | https://github.com/huggingface/diffusers/issues/10381 | https://api.github.com/repos/huggingface/diffusers/issues/10381 | Inference error in Flux with quantization & LoRA applied, and a bug of Quanto with Zero GPU spaces | ### Describe the bug<br>Merry Christmas.🎅<br>The inference of the Flux model with LoRA applied and the inference of the quantized Flux model work fine on their own, but when combined, they often result in an error.<br>```<br>RuntimeError('Only Tensors of floating point and complex dtype can require gradients')<br>```<br>I don't... | open | null | false | 3 | ["bug", "stale"] | [] | 2024-12-25T12:00:50Z | 2025-02-05T15:03:21Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | John6666cat | 186,692,226 | U_kgDOCyCygg | User | false |
huggingface/diffusers | 2,758,812,136 | I_kwDOHa8MBc6kcCHo | 10,382 | https://github.com/huggingface/diffusers/issues/10382 | https://api.github.com/repos/huggingface/diffusers/issues/10382 | [Bug] Encoder in diffusers.models.autoencoders.vae's forward method return type mismatch leads to AttributeError | ### Describe the bug<br>**Issue Description:**<br>When using the Encoder from the `diffusers.models.autoencoders.vae` module, calling its forward method returns a value type mismatch, resulting in an AttributeError during subsequent processing. Specifically, when calling the Encoder's forward method, the returned result is... | open | null | false | 3 | ["bug", "stale"] | [] | 2024-12-25T13:34:30Z | 2025-02-07T15:03:04Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | mq-yuan | 82,648,854 | MDQ6VXNlcjgyNjQ4ODU0 | User | false |
huggingface/diffusers | 2,758,854,802 | I_kwDOHa8MBc6kcMiS | 10,383 | https://github.com/huggingface/diffusers/issues/10383 | https://api.github.com/repos/huggingface/diffusers/issues/10383 | [Request] Optimize HunyuanVideo Inference Speed with ParaAttention | Hi guys,<br>First and foremost, I would like to commend you for the incredible work on the `diffusers` library. It has been an invaluable resource for my projects.<br>I am writing to suggest an enhancement to the inference speed of the `HunyuanVideo` model. We have found that using [ParaAttention](https://github.com/ch... | closed | completed | false | 10 | ["roadmap"] | [] | 2024-12-25T15:07:53Z | 2025-01-16T18:05:15Z | 2025-01-16T18:05:15Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chengzeyi | 23,494,160 | MDQ6VXNlcjIzNDk0MTYw | User | false |
huggingface/diffusers | 2,759,254,043 | I_kwDOHa8MBc6kduAb | 10,385 | https://github.com/huggingface/diffusers/issues/10385 | https://api.github.com/repos/huggingface/diffusers/issues/10385 | When training a Unet model (text_to_image.py) with 2 channels, images are not generating | I created an entire repo describing this bug:<br>https://github.com/kopyl/debug-unet-sampling-diffusers/<br>Also [opened an issue on accelerate repo](https://github.com/huggingface/accelerate/issues/3314) | closed | completed | false | 1 | ["bug"] | [] | 2024-12-26T04:19:51Z | 2024-12-26T12:08:58Z | 2024-12-26T12:08:58Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kopyl | 17,604,849 | MDQ6VXNlcjE3NjA0ODQ5 | User | false |
huggingface/diffusers | 2,759,734,507 | I_kwDOHa8MBc6kfjTr | 10,389 | https://github.com/huggingface/diffusers/issues/10389 | https://api.github.com/repos/huggingface/diffusers/issues/10389 | Explicit support of masked loss and schedulefree optimizers | ETA: this is my first massive involvement with scripts using diffusers, so I might not be getting some concepts for now, but I'm trying to learn as I go.<br>I'm trying to extend a script from the advanced_diffusion_training folder that deals with finetuning a dreambooth lora for flux (https://github.com/huggingface/d... | open | null | false | 2 | ["stale"] | [] | 2024-12-26T12:49:43Z | 2025-02-03T15:02:59Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | StrangeTcy | 2,532,099 | MDQ6VXNlcjI1MzIwOTk= | User | false |
huggingface/diffusers | 2,760,346,822 | I_kwDOHa8MBc6kh4zG | 10,392 | https://github.com/huggingface/diffusers/issues/10392 | https://api.github.com/repos/huggingface/diffusers/issues/10392 | Flux-dev-fp8 with Hyper-FLUX.1-dev-8steps-lora | ### Describe the bug<br>It seems that Hyper-FLUX.1-dev-8steps-lora can not support Flux-dev-fp8; the image seems the same whether I load Hyper-FLUX.1-dev-8steps-lora or not.<br>This is my code. Can anyone use Hyper-FLUX.1-dev-8steps-lora on Flux-dev-fp8?<br>self.transformer = FluxTransformer2DModel.from_single_file(os.pa... | open | null | false | 4 | ["bug", "stale"] | [] | 2024-12-27T03:36:37Z | 2025-01-26T15:02:39Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lhjlhj11 | 101,450,680 | U_kgDOBgwDuA | User | false |
huggingface/diffusers | 2,760,763,140 | I_kwDOHa8MBc6kjecE | 10,395 | https://github.com/huggingface/diffusers/issues/10395 | https://api.github.com/repos/huggingface/diffusers/issues/10395 | [Quantization] enable multi-backend `bitsandbytes` | Similar to https://github.com/huggingface/transformers/pull/31098/ | open | null | false | 6 | [
"wip",
"contributions-welcome",
"quantization",
"bitsandbytes"
] | [] | 2024-12-27T11:24:04Z | 2025-02-20T19:15:18Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,761,139,227 | I_kwDOHa8MBc6kk6Qb | 10,398 | https://github.com/huggingface/diffusers/issues/10398 | https://api.github.com/repos/huggingface/diffusers/issues/10398 | Target modules {'modulation.linear', 'txt_attn_proj', 'fc1', 'txt_attn_qkv', 'fc2', 'txt_mod.linear', 'img_mod.linear', 'linear1', 'linear2', 'img_attn_qkv', 'img_attn_proj'} not found in the base model. | ### Describe the bug
Without LoRA it works fine. I tried the latest version of peft as well as 0.6.0, which gives another error.
Is it that Lora is not supposed to work with GGUF weights?
### Reproduction
```
pipe = HunyuanVideoPipeline.from_pretrained(
model_id,
transformer=transformer,
torc... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-12-27T17:55:19Z | 2024-12-28T05:06:46Z | 2024-12-28T05:06:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,761,309,032 | I_kwDOHa8MBc6kljto | 10,399 | https://github.com/huggingface/diffusers/issues/10399 | https://api.github.com/repos/huggingface/diffusers/issues/10399 | Flux failures using `from_pipe` | ### Describe the bug
Loading a flux model as usual works fine. The pipeline can then be switched to img2img or inpaint without issues.
But once it's img2img or inpaint, it cannot be switched back to txt2img, since some of the pipeline modules are required to be registered (even if as None) in txt2img and are not present in ... | closed | completed | false | 1 | [
"bug"
] | [] | 2024-12-27T22:11:41Z | 2025-01-02T21:06:52Z | 2025-01-02T21:06:52Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,761,645,789 | I_kwDOHa8MBc6km17d | 10,403 | https://github.com/huggingface/diffusers/issues/10403 | https://api.github.com/repos/huggingface/diffusers/issues/10403 | pipeline fails to move to "cuda" if one of the components is a PeftModel | When I was doing inference, I loaded the pre-trained weights using the following method:
```
self.transformer = PeftModel.from_pretrained(
self.base_transformer,
lora_model_path
)
```
and loaded the pipeline in the following way:
```
pipe = FluxControlNetPipeline(transformer=transfo... | open | null | false | 3 | [
"bug",
"stale",
"needs-code-example"
] | [] | 2024-12-28T09:27:49Z | 2025-02-16T15:03:02Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Erisura | 72,057,715 | MDQ6VXNlcjcyMDU3NzE1 | User | false |
huggingface/diffusers | 2,762,040,255 | I_kwDOHa8MBc6koWO_ | 10,405 | https://github.com/huggingface/diffusers/issues/10405 | https://api.github.com/repos/huggingface/diffusers/issues/10405 | g | E | closed | not_planned | false | 1 | [] | [] | 2024-12-28T23:40:04Z | 2024-12-31T20:31:19Z | 2024-12-31T20:31:19Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ghost | 10,137 | MDQ6VXNlcjEwMTM3 | User | false |
huggingface/diffusers | 2,762,163,372 | I_kwDOHa8MBc6ko0Ss | 10,406 | https://github.com/huggingface/diffusers/issues/10406 | https://api.github.com/repos/huggingface/diffusers/issues/10406 | CogVideoX: RuntimeWarning: invalid value encountered in cast | Can be closed | closed | completed | false | 0 | [
"bug"
] | [] | 2024-12-29T08:50:56Z | 2024-12-29T08:59:00Z | 2024-12-29T08:59:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,762,734,830 | I_kwDOHa8MBc6kq_zu | 10,410 | https://github.com/huggingface/diffusers/issues/10410 | https://api.github.com/repos/huggingface/diffusers/issues/10410 | Questions Regarding the style_fidelity Parameter | ### Discussed in https://github.com/huggingface/diffusers/discussions/10353
<div type='discussions-op-text'>
<sup>Originally posted by **ShowLo** December 23, 2024</sup>
In [stable_diffusion_reference.py](https://github.com/huggingface/diffusers/blob/main/examples/community/stable_diffusion_reference.py), there ... | open | null | false | 1 | [
"stale"
] | [] | 2024-12-30T06:41:35Z | 2025-01-29T15:02:55Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ShowLo | 14,790,924 | MDQ6VXNlcjE0NzkwOTI0 | User | false |
huggingface/diffusers | 2,763,112,434 | I_kwDOHa8MBc6ksb_y | 10,411 | https://github.com/huggingface/diffusers/issues/10411 | https://api.github.com/repos/huggingface/diffusers/issues/10411 | How to call the lora weights obtained from training with examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py | I followed the tutorial at https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final LoRA weights, but did not find a way to call them. May I ask if you can provide me with a demo of running and calling these weights? Thank you very much!
the training set:
```
#!/bin/bas... | closed | completed | false | 0 | [] | [] | 2024-12-30T12:06:07Z | 2024-12-31T07:21:40Z | 2024-12-31T07:21:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yangzhenyu6 | 122,758,523 | U_kgDOB1Elew | User | false |
huggingface/diffusers | 2,763,331,563 | I_kwDOHa8MBc6ktRfr | 10,412 | https://github.com/huggingface/diffusers/issues/10412 | https://api.github.com/repos/huggingface/diffusers/issues/10412 | SD3.5-Large DreamBooth Training - Over 80GB VRAM Usage | ### Describe the bug
⚠️ We are running out of memory on step 0
❕It does work without '--train_text_encoder'. It seems that there might be a memory leak or issue with training the text encoder with the current script / model.
❓Does it make sense that the model uses over 80GB of VRAM?
❓Do you have any recommendatio... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2024-12-30T15:01:12Z | 2025-01-29T15:02:52Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | deman311 | 66,918,683 | MDQ6VXNlcjY2OTE4Njgz | User | false |
huggingface/diffusers | 2,764,006,072 | I_kwDOHa8MBc6kv2K4 | 10,413 | https://github.com/huggingface/diffusers/issues/10413 | https://api.github.com/repos/huggingface/diffusers/issues/10413 | partial models spec in model_index.json | one slightly offtopic note - if we're coming up with new format, lets make sure it covers the painpoints of existing ones, otherwise its-just-another-standard.
as is `dduf` does have benefits, but it would be great if it could cover one of the most common use-cases: partial models.
e.g.
- typical sdxl ... | closed | completed | false | 10 | [
"stale"
] | [
"DN6",
"yiyixuxu"
] | 2024-12-31T06:23:35Z | 2025-05-08T03:29:57Z | 2025-05-08T03:29:55Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,764,022,050 | I_kwDOHa8MBc6kv6Ei | 10,414 | https://github.com/huggingface/diffusers/issues/10414 | https://api.github.com/repos/huggingface/diffusers/issues/10414 | [<languageCode>] Translating docs to Chinese | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/m... | closed | completed | false | 0 | [] | [] | 2024-12-31T06:45:21Z | 2024-12-31T06:49:52Z | 2024-12-31T06:48:04Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | S20180576 | 57,211,708 | MDQ6VXNlcjU3MjExNzA4 | User | false |
huggingface/diffusers | 2,764,069,434 | I_kwDOHa8MBc6kwFo6 | 10,415 | https://github.com/huggingface/diffusers/issues/10415 | https://api.github.com/repos/huggingface/diffusers/issues/10415 | [Pipelines] Add AttentiveEraser | ### Model/Pipeline/Scheduler description
I’ve worked on a project called AttentiveEraser, which is a tuning-free method for object removal in images using diffusion models. The code for this project is built upon modifications to existing Diffusers pipelines, so it should be relatively straightforward to integrate i... | closed | completed | false | 7 | [
"stale"
] | [] | 2024-12-31T07:44:48Z | 2025-02-05T15:54:43Z | 2025-02-05T15:54:41Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Wenhao-Sun77 | 110,756,446 | U_kgDOBpoCXg | User | false |
huggingface/diffusers | 2,764,369,167 | I_kwDOHa8MBc6kxO0P | 10,416 | https://github.com/huggingface/diffusers/issues/10416 | https://api.github.com/repos/huggingface/diffusers/issues/10416 | Euler flow matching scheduler is missing documentation for parameters | 
I think there are some undocumented parameters here. | closed | completed | false | 4 | [] | [] | 2024-12-31T13:15:35Z | 2025-01-09T18:54:41Z | 2025-01-09T18:54:41Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bghira | 59,658,056 | MDQ6VXNlcjU5NjU4MDU2 | User | false |
huggingface/diffusers | 2,765,167,229 | I_kwDOHa8MBc6k0Rp9 | 10,421 | https://github.com/huggingface/diffusers/issues/10421 | https://api.github.com/repos/huggingface/diffusers/issues/10421 | CPU Memory Leak When Moving Pipelines To Multiple GPUs | ### Describe the bug
When I move multiple pipelines to different GPUs separately, there is a significant memory leak on the CPU.
There is no leak on any GPU; every GPU works fine.
### Reproduction
```python
from diffusers import StableDiffusionXLPipeline
pipeline_0 = StableDiffusionXLPipeline... | closed | completed | false | 9 | [
"bug"
] | [] | 2025-01-01T17:01:49Z | 2025-01-02T22:08:15Z | 2025-01-02T14:18:59Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CyberVy | 72,680,847 | MDQ6VXNlcjcyNjgwODQ3 | User | false |
huggingface/diffusers | 2,765,351,969 | I_kwDOHa8MBc6k0-wh | 10,425 | https://github.com/huggingface/diffusers/issues/10425 | https://api.github.com/repos/huggingface/diffusers/issues/10425 | Euler Flow Matching Scheduler Missing Documentation for Parameters | ### Describe the bug
The Euler flow matching scheduler in Hugging Face Diffusers is missing clear documentation for its parameters, making it difficult for users to understand how to configure the scheduler effectively for different use cases.
### Reproduction
Steps to Reproduce:
Visit the Hugging Face Diffusers ... | closed | completed | false | 0 | [
"bug"
] | [] | 2025-01-02T01:37:38Z | 2025-01-02T01:38:38Z | 2025-01-02T01:38:38Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hanshengzhu0001 | 74,083,194 | MDQ6VXNlcjc0MDgzMTk0 | User | false |
huggingface/diffusers | 2,765,577,353 | I_kwDOHa8MBc6k11yJ | 10,427 | https://github.com/huggingface/diffusers/issues/10427 | https://api.github.com/repos/huggingface/diffusers/issues/10427 | Add support for GGUF loading in AuraFlow arch | **Is your feature request related to a problem? Please describe.**
Thank you for adding GGUF loading, unfortunately AuraFlow has been left unsupported (understandable due to lack of GGUF models), now with the recent addition of https://huggingface.co/city96/AuraFlow-v0.3-gguf by @city96 we should extend loading to AF.... | closed | completed | false | 1 | [] | [] | 2025-01-02T07:42:03Z | 2025-01-08T07:53:14Z | 2025-01-08T07:53:14Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AstraliteHeart | 81,396,681 | MDQ6VXNlcjgxMzk2Njgx | User | false |
huggingface/diffusers | 2,765,837,764 | I_kwDOHa8MBc6k21XE | 10,428 | https://github.com/huggingface/diffusers/issues/10428 | https://api.github.com/repos/huggingface/diffusers/issues/10428 | Flux inference error on ascend npu | ### Describe the bug
It fails to run the demo flux inference code, reporting errors:
> RuntimeError: call aclnnRepeatInterleaveIntWithDim failed, detail:EZ1001: [PID: 23975] 2025-01-02-11:00:00.313.502 self not implemented for DT_DOUBLE, should be in dtype support list [DT_UINT8,DT_INT8,DT_INT16,DT_INT32,DT_INT64,DT_... | closed | completed | false | 0 | [
"bug"
] | [] | 2025-01-02T11:06:29Z | 2025-01-02T19:52:54Z | 2025-01-02T19:52:54Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | gameofdimension | 32,255,912 | MDQ6VXNlcjMyMjU1OTEy | User | false |
huggingface/diffusers | 2,766,456,818 | I_kwDOHa8MBc6k5Mfy | 10,433 | https://github.com/huggingface/diffusers/issues/10433 | https://api.github.com/repos/huggingface/diffusers/issues/10433 | [Docs] Broken Links in a Section of Documentation | ### Broken Links in a Section of Documentation
>Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how ... | closed | completed | false | 0 | [] | [] | 2025-01-02T18:24:44Z | 2025-01-06T18:07:39Z | 2025-01-06T18:07:39Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | SahilCarterr | 110,806,554 | U_kgDOBprGGg | User | false |
huggingface/diffusers | 2,766,845,707 | I_kwDOHa8MBc6k6rcL | 10,436 | https://github.com/huggingface/diffusers/issues/10436 | https://api.github.com/repos/huggingface/diffusers/issues/10436 | Not loading runwayml/stable-diffusion-inpainting | ### Describe the bug

And then my code just pauses forever. Sometimes the first bar reaches like 14% and the second bar like 71%.
I was trying to do something like the code under "If preserving the unmasked area is importa... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-01-03T01:11:25Z | 2025-01-06T23:14:11Z | 2025-01-06T23:14:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | thebest132 | 149,287,295 | U_kgDOCOXxfw | User | false |
huggingface/diffusers | 2,767,943,278 | I_kwDOHa8MBc6k-3Zu | 10,446 | https://github.com/huggingface/diffusers/issues/10446 | https://api.github.com/repos/huggingface/diffusers/issues/10446 | multi-gpu/model-sharding incompatible with FluxControlNetPipeline | ### Describe the bug
I have an environment with 2 A10 GPUs (22GB each). When I follow [distributed_inference.md] to apply model sharding and run flux.1-dev, it works fine for basic scenarios (i.e. txt2img, img2img) and even lora, **but not controlnet**.
The error I got:
_Expected all tensors to be on the same device, but fo... | open | null | false | 22 | [
"bug",
"stale"
] | [] | 2025-01-03T17:41:48Z | 2025-03-14T15:04:21Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | GinoBelief | 140,038,721 | U_kgDOCFjSQQ | User | false |
huggingface/diffusers | 2,768,092,437 | I_kwDOHa8MBc6k_b0V | 10,447 | https://github.com/huggingface/diffusers/issues/10447 | https://api.github.com/repos/huggingface/diffusers/issues/10447 | convert_original_stable_diffusion_to_diffusers.py splits unet data into two files, breaking some workflows for larger models | ### Describe the bug
When converting SDXL and Pony models, `convert_original_stable_diffusion_to_diffusers.py` splits the unet data into two files. When this happens, converting from diffusers to MLMODELC breaks because `python_coreml_stable_diffusion.torch2coreml` from Apple's implementation of Stable Diffusion in ... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-01-03T19:52:31Z | 2025-01-03T23:21:27Z | 2025-01-03T22:31:43Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | notapreppie | 72,140,031 | MDQ6VXNlcjcyMTQwMDMx | User | false |
huggingface/diffusers | 2,768,329,341 | I_kwDOHa8MBc6lAVp9 | 10,448 | https://github.com/huggingface/diffusers/issues/10448 | https://api.github.com/repos/huggingface/diffusers/issues/10448 | Load DDUF file with Diffusers using mmap | DDUF support is in diffusers, and DDUF supports mmap.
But the diffusers example doesn't use or support mmap.
How can I load a DDUF file into diffusers with mmap?
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", t... | open | null | false | 1 | [
"stale"
] | [] | 2025-01-04T00:42:09Z | 2025-02-03T15:02:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | adhikjoshi | 11,740,719 | MDQ6VXNlcjExNzQwNzE5 | User | false |
huggingface/diffusers | 2,768,815,413 | I_kwDOHa8MBc6lCMU1 | 10,450 | https://github.com/huggingface/diffusers/issues/10450 | https://api.github.com/repos/huggingface/diffusers/issues/10450 | FluxSingleTransformerBlock is different from black-forest-labs's SingleStreamBlock | ### Describe the bug
diffusers version:
https://github.com/huggingface/diffusers/blob/a17832b2d96c0df9b41ce2faab5659ef46916c39/src/diffusers/models/transformers/transformer_flux.py#L94-L101
black-forest-labs version:
https://github.com/black-forest-labs/flux/blob/140cbfb88685d8ec33e07b44d79bbc43d375b351/src/flu... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-04T12:25:03Z | 2025-01-07T17:22:42Z | 2025-01-07T17:21:03Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chenxiao111222 | 154,797,505 | U_kgDOCToFwQ | User | false |
huggingface/diffusers | 2,768,917,260 | I_kwDOHa8MBc6lClMM | 10,452 | https://github.com/huggingface/diffusers/issues/10452 | https://api.github.com/repos/huggingface/diffusers/issues/10452 | pipe.disable_model_cpu_offload | **Is your feature request related to a problem? Please describe.**
If I enable the following in a Gradio interface:
sana_pipe.enable_model_cpu_offload()
and during next generation I want to disable cpu offload, how to do it? I mentioned Gradio specifically as command line inference will not have this problem unless... | closed | completed | false | 3 | [] | [] | 2025-01-04T16:39:01Z | 2025-01-07T08:29:32Z | 2025-01-05T08:16:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,769,011,349 | I_kwDOHa8MBc6lC8KV | 10,453 | https://github.com/huggingface/diffusers/issues/10453 | https://api.github.com/repos/huggingface/diffusers/issues/10453 | Hunyuan Video does not support batch size > 1 | ### Describe the bug
The HunyuanVideoPipeline (and I believe the model itself) does not support execution with a batch size > 1. There are some shape mismatches in the attention calculation. Trying to set the batch size to 2 will result in an error like this:
### Reproduction
This example is directly taken from the ... | closed | completed | false | 2 | [
"bug"
] | [
"a-r-r-o-w"
] | 2025-01-04T21:38:15Z | 2025-01-06T20:07:55Z | 2025-01-06T20:07:55Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Nerogar | 3,390,934 | MDQ6VXNlcjMzOTA5MzQ= | User | false |
huggingface/diffusers | 2,769,051,204 | I_kwDOHa8MBc6lDF5E | 10,455 | https://github.com/huggingface/diffusers/issues/10455 | https://api.github.com/repos/huggingface/diffusers/issues/10455 | opened issue on wrong account.. my bad | edit: opened issue on wrong account.. my bad #10455
| closed | completed | false | 0 | [
"bug"
] | [] | 2025-01-05T00:18:37Z | 2025-01-05T00:25:57Z | 2025-01-05T00:25:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | enabldigital | 177,833,558 | U_kgDOCpmGVg | User | false |
huggingface/diffusers | 2,769,276,606 | I_kwDOHa8MBc6lD86- | 10,457 | https://github.com/huggingface/diffusers/issues/10457 | https://api.github.com/repos/huggingface/diffusers/issues/10457 | Comments bug in FluxPriorReduxPipeline | https://github.com/huggingface/diffusers/blob/fdcbbdf0bb4fb6ae3c2b676af525fced84aa9850/src/diffusers/pipelines/flux/pipeline_flux_prior_redux.py#L388-L392
For both numpy array and pytorch tensor, the expected value range should be between `[0, 255]`, not `[0, 1]`. | open | null | false | 5 | [] | [
"yiyixuxu"
] | 2025-01-05T13:31:21Z | 2025-07-09T06:25:58Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chenxiao111222 | 154,797,505 | U_kgDOCToFwQ | User | false |
huggingface/diffusers | 1,427,531,265 | I_kwDOHa8MBc5VFmYB | 1,046 | https://github.com/huggingface/diffusers/issues/1046 | https://api.github.com/repos/huggingface/diffusers/issues/1046 | custom pipelines and from_pretrained | **What API design would you like to have changed or added to the library? Why?**
Custom pipelines are available through the master `from_pretrained()` method, but do not work for other pipelines. It would be beneficial if you could load custom features into any pipeline, such as the much-needed and missing functional... | closed | completed | false | 5 | [
"stale"
] | [] | 2022-10-28T16:59:12Z | 2022-12-31T15:03:39Z | 2022-12-31T15:03:39Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | WASasquatch | 1,151,589 | MDQ6VXNlcjExNTE1ODk= | User | false |
huggingface/diffusers | 2,769,372,171 | I_kwDOHa8MBc6lEUQL | 10,460 | https://github.com/huggingface/diffusers/issues/10460 | https://api.github.com/repos/huggingface/diffusers/issues/10460 | `UNet2DModel` is missing the documented `mid_block_type` argument | ### Describe the bug
The [documentation](https://huggingface.co/docs/diffusers/api/models/unet2d) for the `UNet2DModel` class claims that the constructor accepts a `mid_block_type` argument. However, this argument does not seem to actually be present in the code, and you get an error if you try to provide it.
##... | closed | completed | false | 0 | [
"bug"
] | [] | 2025-01-05T17:26:03Z | 2025-01-08T20:50:31Z | 2025-01-08T20:50:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kalekundert | 298,132 | MDQ6VXNlcjI5ODEzMg== | User | false |
huggingface/diffusers | 2,769,729,630 | I_kwDOHa8MBc6lFrhe | 10,462 | https://github.com/huggingface/diffusers/issues/10462 | https://api.github.com/repos/huggingface/diffusers/issues/10462 | Multi-ID customization with CogvideoX 5B | ### Model/Pipeline/Scheduler description
Consistency video creations by incorporating multiple ID photos, with CogvideoX 5B.
Some results:
https://github.com/user-attachments/assets/d0397848-b646-4715-8331-1395a9b37f68
### Open source status
- [X] The model implementation is available.
- [X] The model weig... | closed | completed | false | 2 | [
"stale"
] | [] | 2025-01-06T03:15:39Z | 2025-07-05T21:31:30Z | 2025-07-05T21:31:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | feizc | 37,614,046 | MDQ6VXNlcjM3NjE0MDQ2 | User | false |
huggingface/diffusers | 2,769,952,258 | I_kwDOHa8MBc6lGh4C | 10,467 | https://github.com/huggingface/diffusers/issues/10467 | https://api.github.com/repos/huggingface/diffusers/issues/10467 | FLUX.1-dev FP8 Example Code: tmpxft_00000788_00000000-10_fp8_marlin.cudafe1.cpp | ### Describe the bug
Unable to inference using Flux FP8
Logs
[FP8_logs.txt](https://github.com/user-attachments/files/18314458/FP8_logs.txt)
### Reproduction
https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux#single-file-loading-for-the-fluxtransformer2dmodel
```
import torch
from diff... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-01-06T06:42:14Z | 2025-01-06T16:58:21Z | 2025-01-06T07:03:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,770,061,871 | I_kwDOHa8MBc6lG8ov | 10,468 | https://github.com/huggingface/diffusers/issues/10468 | https://api.github.com/repos/huggingface/diffusers/issues/10468 | What is accelerate_ds2.yaml? | I can't find accelerate config file named "accelerate_ds2.yaml".
Please give me the file.
Thanks very much! | closed | completed | false | 1 | [] | [] | 2025-01-06T07:53:06Z | 2025-01-12T05:32:01Z | 2025-01-12T05:32:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | aa327chenge | 100,577,598 | U_kgDOBf6xPg | User | false |
huggingface/diffusers | 1,427,610,862 | I_kwDOHa8MBc5VF5zu | 1,047 | https://github.com/huggingface/diffusers/issues/1047 | https://api.github.com/repos/huggingface/diffusers/issues/1047 | cpu_offload | ### Describe the bug
I tried CPU offloading and found a few issues:
1. if safety is set to None, it will error out. An if-else in the enable_sequential_cpu_offload should fix that
2. multi-GPU use (I think) is not supported since it just offloads to cuda. Could add in an optional device passthrough to tell it where ... | closed | completed | false | 4 | [
"bug",
"stale"
] | [] | 2022-10-28T18:07:42Z | 2022-12-19T15:04:24Z | 2022-12-19T15:04:24Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dblunk88 | 39,381,389 | MDQ6VXNlcjM5MzgxMzg5 | User | false |
huggingface/diffusers | 2,770,379,605 | I_kwDOHa8MBc6lIKNV | 10,470 | https://github.com/huggingface/diffusers/issues/10470 | https://api.github.com/repos/huggingface/diffusers/issues/10470 | Flux - torchao inference not working | ### Describe the bug
1. Flux with torchao int8wo not working
2. enable_sequential_cpu_offload not working

### Reproduction
example taken from (merged)
https://github.com/huggingface/diffusers/pull/10009
```
imp... | closed | completed | false | 11 | [
"bug"
] | [] | 2025-01-06T10:46:06Z | 2025-01-18T15:45:23Z | 2025-01-18T15:45:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,770,640,403 | I_kwDOHa8MBc6lJJ4T | 10,472 | https://github.com/huggingface/diffusers/issues/10472 | https://api.github.com/repos/huggingface/diffusers/issues/10472 | I fine-tuned Stable Diffusion using the LoRA method on my own dataset. However, during the inference process, I encountered the error: TypeError: __init__() got an unexpected keyword argument 'lora_bias'. | ### Describe the bug
I fine-tuned Stable Diffusion using the LoRA method on my own dataset. However, during the inference process, I encountered the error: TypeError: __init__() got an unexpected keyword argument 'lora_bias'.
### Reproduction
CUDA_VISIBLE_DEVICES=1 accelerate launch train_text_to_image_lora.py \
--... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-06T13:15:31Z | 2025-01-07T05:29:17Z | 2025-01-06T15:57:57Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | qinchangchang | 95,670,613 | U_kgDOBbPRVQ | User | false |
huggingface/diffusers | 2,770,829,464 | I_kwDOHa8MBc6lJ4CY | 10,475 | https://github.com/huggingface/diffusers/issues/10475 | https://api.github.com/repos/huggingface/diffusers/issues/10475 | [SD3]The quality of the images generated by the inference is not as high as on the validation set during fine-tuning? | ### Describe the bug
Why is the quality of the graphs I generate with `StableDiffusion3Pipeline` not as good as the quality of the images in the validation set in the log generated when using dreambooth_lora for fine tuning?
Maybe I need some other plugin or parameter setting to maintain the same image quality as the... | closed | completed | false | 8 | [
"bug",
"stale"
] | [] | 2025-01-06T14:52:57Z | 2025-02-06T12:17:47Z | 2025-02-05T15:56:47Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ytwo-hub | 191,698,973 | U_kgDOC20YHQ | User | false |
huggingface/diffusers | 2,773,016,235 | I_kwDOHa8MBc6lSN6r | 10,485 | https://github.com/huggingface/diffusers/issues/10485 | https://api.github.com/repos/huggingface/diffusers/issues/10485 | HunyuanVideo with IP2V | ComfyUI has an implementation of using images to generate videos - Image-Prompt to Video (IP2V)
https://stable-diffusion-art.com/hunyuan-video-ip2v/
This is a very useful feature that radically changes the generation capabilities.
Is something similar expected in diffusers?
| closed | completed | false | 2 | [
"stale"
] | [] | 2025-01-07T14:43:16Z | 2025-06-10T20:54:26Z | 2025-06-10T20:54:26Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Yakonrus | 4,978,662 | MDQ6VXNlcjQ5Nzg2NjI= | User | false |
huggingface/diffusers | 2,773,033,593 | I_kwDOHa8MBc6lSSJ5 | 10,486 | https://github.com/huggingface/diffusers/issues/10486 | https://api.github.com/repos/huggingface/diffusers/issues/10486 | [core] introduce auto classes for T2V and I2V | We have a bunch of T2V and I2V pipelines now. I think it makes sense to introduce `auto` classes for them.
cc: @yiyixuxu @a-r-r-o-w | open | null | false | 6 | [
"Good second issue",
"contributions-welcome"
] | [] | 2025-01-07T14:51:02Z | 2025-05-02T02:33:09Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,773,357,493 | I_kwDOHa8MBc6lThO1 | 10,489 | https://github.com/huggingface/diffusers/issues/10489 | https://api.github.com/repos/huggingface/diffusers/issues/10489 | Bug in SanaPipeline example? | ### Describe the bug
I think there might be something wrong with the `SanaPipeline` example code at https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline
It results in a shape mismatch (see detailed logs below): `mat1 and mat2 shapes cannot be multiplied (600x256000 and 2304x1152)`... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-07T17:14:27Z | 2025-01-08T05:18:05Z | 2025-01-08T05:18:05Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | geronimi73 | 141,400,217 | U_kgDOCG2YmQ | User | false |
huggingface/diffusers | 1,427,755,846 | I_kwDOHa8MBc5VGdNG | 1,049 | https://github.com/huggingface/diffusers/issues/1049 | https://api.github.com/repos/huggingface/diffusers/issues/1049 | Update docs to include Euler schedulers | Reference: #1019. We should at least add them here: https://huggingface.co/docs/diffusers/api/schedulers | closed | completed | false | 2 | [] | [
"patil-suraj"
] | 2022-10-28T19:54:12Z | 2022-11-02T10:33:42Z | 2022-11-02T10:33:42Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pcuenca | 1,177,582 | MDQ6VXNlcjExNzc1ODI= | User | false |
huggingface/diffusers | 2,774,088,026 | I_kwDOHa8MBc6lWTla | 10,490 | https://github.com/huggingface/diffusers/issues/10490 | https://api.github.com/repos/huggingface/diffusers/issues/10490 | support Infinity | code:https://github.com/FoundationVision/Infinity
The model is fast and of high quality. Would you consider supporting it? | open | null | false | 3 | [
"stale"
] | [] | 2025-01-08T02:21:00Z | 2025-02-07T15:02:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Thekey756 | 75,780,148 | MDQ6VXNlcjc1NzgwMTQ4 | User | false |
huggingface/diffusers | 2,774,460,705 | I_kwDOHa8MBc6lXukh | 10,492 | https://github.com/huggingface/diffusers/issues/10492 | https://api.github.com/repos/huggingface/diffusers/issues/10492 | error in torchao quantize and lora fuse | ### Describe the bug
When I use torchao to quantize the flux model and then fuse lora weights, there is a bug.
```python
Traceback (most recent call last):
File "/media/74nvme/research/test.py", line 278, in <module>
pipe.fuse_lora(
File "/media/74nvme/software/miniconda3/envs/comfyui/lib/python3.10/site-packa... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-08T06:57:40Z | 2025-01-13T07:08:30Z | 2025-01-12T05:38:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhangvia | 38,352,569 | MDQ6VXNlcjM4MzUyNTY5 | User | false |
huggingface/diffusers | 2,775,178,441 | I_kwDOHa8MBc6ladzJ | 10,496 | https://github.com/huggingface/diffusers/issues/10496 | https://api.github.com/repos/huggingface/diffusers/issues/10496 | NF4 quantized flux models with loras | Is there any update here? With nf4 quantized flux models, I could not use any lora.
> **Update**: NF4 serialization and loading are working fine. @DN6 let's brainstorm how we can support it more easily? This would help us unlock doing LoRAs on the quantized weights, too (cc: @BenjaminBossan for PEFT). ... | closed | completed | false | 12 | [] | [] | 2025-01-08T11:41:01Z | 2025-01-13T19:42:03Z | 2025-01-13T19:42:02Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hamzaakyildiz | 69,676,637 | MDQ6VXNlcjY5Njc2NjM3 | User | false |
huggingface/diffusers | 1,427,832,020 | I_kwDOHa8MBc5VGvzU | 1,050 | https://github.com/huggingface/diffusers/issues/1050 | https://api.github.com/repos/huggingface/diffusers/issues/1050 | something wrong with Tesla T4 ¯\_(ツ)_/¯ | ### Describe the bug I tested `nitrosocke/mo-di-diffusion` vs `moDi-v1-pruned.ckpt` like [00:29<00:00, 1.88it/s] vs [00:10<00:00, 4.68it/s] # test video with colab t4 side by side: https://www.youtube.com/watch?v=VfjITfEXRNY or am I doing something wrong? 👻🎃 ### Reproduction _No response_ ### Logs _No re... | closed | completed | false | 15 | ["bug"] | [] | 2022-10-28T21:04:48Z | 2022-10-30T21:33:50Z | 2022-10-29T23:57:56Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | camenduru | 54,370,274 | MDQ6VXNlcjU0MzcwMjc0 | User | false |
huggingface/diffusers | 2,775,592,237 | I_kwDOHa8MBc6lcC0t | 10,500 | https://github.com/huggingface/diffusers/issues/10500 | https://api.github.com/repos/huggingface/diffusers/issues/10500 | HunyuanVideo w. BitsAndBytes (local): Expected all tensors to be on the same device | ### Describe the bug Errors in the HunyuanVideo examples here: [hunyuan_video](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video) ### Reproduction Run this code from the link: ``` import torch from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, HunyuanVideoTransform... | closed | completed | false | 13 | ["bug"] | [] | 2025-01-08T14:50:35Z | 2025-01-17T11:26:06Z | 2025-01-17T02:55:09Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 2,777,868,207 | I_kwDOHa8MBc6lkuev | 10,511 | https://github.com/huggingface/diffusers/issues/10511 | https://api.github.com/repos/huggingface/diffusers/issues/10511 | Update FlowMatchEulerDiscreteScheduler with new design to support SD3 / SD3.5 / Flux moving forward | ### Model/Pipeline/Scheduler description Fully support SD3 / SD3.5 / FLUX models using new scheduler design template with a more standardized approach including new parameters to support models. This approach can be utilized in existing schedules to support flow match models. I have a working FlowMatchEulerDiscreteSc... | open | null | false | 2 | ["stale"] | [] | 2025-01-09T13:37:51Z | 2025-02-08T15:02:41Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ukaprch | 107,368,096 | U_kgDOBmZOoA | User | false |
huggingface/diffusers | 2,778,372,197 | I_kwDOHa8MBc6lmphl | 10,512 | https://github.com/huggingface/diffusers/issues/10512 | https://api.github.com/repos/huggingface/diffusers/issues/10512 | [LoRA] Quanto Flux LoRA can't load | ### Describe the bug Cannot load LoRAs into quanto-quantized Flux. ```py import torch from diffusers import FluxTransformer2DModel, FluxPipeline from huggingface_hub import hf_hub_download from optimum.quanto import qfloat8, quantize, freeze from transformers import T5EncoderModel bfl_repo = "black-forest-labs/FLUX... | open | null | false | 35 | ["bug"] | [] | 2025-01-09T17:22:02Z | 2025-12-01T13:19:35Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Mino1289 | 68,814,671 | MDQ6VXNlcjY4ODE0Njcx | User | false |
huggingface/diffusers | 2,778,514,150 | I_kwDOHa8MBc6lnMLm | 10,513 | https://github.com/huggingface/diffusers/issues/10513 | https://api.github.com/repos/huggingface/diffusers/issues/10513 | Errors in Google Colab with FLUX Schnell starting 2025-01-08 | ### Describe the bug I've had code working fine in a Google Colab for the FLUX schnell model for months. Suddenly on January 8, 2025, the code no longer runs as I get a tensor size error (see Logs). I suspect that how FLUX packs the latents has changed? Anyone know? ### Reproduction ``` model_id="black... | closed | completed | false | 1 | ["bug"] | [] | 2025-01-09T18:32:14Z | 2025-01-10T05:45:51Z | 2025-01-10T05:45:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | aduchon | 7,476,820 | MDQ6VXNlcjc0NzY4MjA= | User | false |
huggingface/diffusers | 2,778,942,702 | I_kwDOHa8MBc6lo0zu | 10,514 | https://github.com/huggingface/diffusers/issues/10514 | https://api.github.com/repos/huggingface/diffusers/issues/10514 | Sana 4k with use_resolution_binning not supported due to sample_size 128 | ### Describe the bug Using the new 4k model fails with defaults values. Specifically with use_resolution_binning=True which is the default. ``` Traceback (most recent call last): File "/home/rockerboo/code/others/sana-diffusers/main.py", line 28, in <module> image = pipe( ^^^^^ File "/home... | open | null | false | 5 | ["bug", "stale"] | [] | 2025-01-09T23:16:20Z | 2025-02-12T15:03:30Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rockerBOO | 15,027 | MDQ6VXNlcjE1MDI3 | User | false |
huggingface/diffusers | 2,779,456,093 | I_kwDOHa8MBc6lqyJd | 10,518 | https://github.com/huggingface/diffusers/issues/10518 | https://api.github.com/repos/huggingface/diffusers/issues/10518 | Some wrong in "diffusers/examples/research_projects/sd3_lora_colab /train_dreambooth_lora_sd3_miniature.py" | ### Describe the bug https://github.com/huggingface/diffusers/blob/89e4d6219805975bd7d253a267e1951badc9f1c0/examples/research_projects/sd3_lora_colab/train_dreambooth_lora_sd3_miniature.py#L768 <img width="791" alt="截屏2025-01-10 15 09 29" src="https://github.com/user-attachments/assets/4d470d53-56c8-4a4a-be22-9308f... | closed | completed | false | 0 | ["bug"] | [] | 2025-01-10T07:10:11Z | 2025-01-13T13:47:29Z | 2025-01-13T13:47:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CuddleSabe | 61,224,076 | MDQ6VXNlcjYxMjI0MDc2 | User | false |
huggingface/diffusers | 2,779,646,417 | I_kwDOHa8MBc6lrgnR | 10,520 | https://github.com/huggingface/diffusers/issues/10520 | https://api.github.com/repos/huggingface/diffusers/issues/10520 | Sana 4K: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! | ### Describe the bug Inference not working with quantization ### Reproduction Use the sample code from here https://github.com/NVlabs/Sana/blob/main/asset/docs/8bit_sana.md#quantization Replace model with Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers and dtype torch.bfloat16 ### Logs ```shell (venv) C:... | closed | completed | false | 3 | ["bug"] | [] | 2025-01-10T09:03:32Z | 2025-01-16T18:09:43Z | 2025-01-16T17:42:57Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,779,795,900 | I_kwDOHa8MBc6lsFG8 | 10,522 | https://github.com/huggingface/diffusers/issues/10522 | https://api.github.com/repos/huggingface/diffusers/issues/10522 | Add fine-tuning script for BlipDiffusionPipeline and BlipDiffusionControlNetPipeline | Hi! I noticed that BlipDiffusionPipeline and BlipDiffusionControlNetPipeline provided in diffusers can only be used for zero-shot tasks. Is is possible to fine-tuning that? I'd like to use this function to fine-tune my own model. | closed | completed | false | 1 | [] | [] | 2025-01-10T10:11:03Z | 2025-01-12T05:28:31Z | 2025-01-12T05:28:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nikkishi | 157,780,513 | U_kgDOCWeKIQ | User | false |
huggingface/diffusers | 2,780,108,846 | I_kwDOHa8MBc6ltRgu | 10,526 | https://github.com/huggingface/diffusers/issues/10526 | https://api.github.com/repos/huggingface/diffusers/issues/10526 | Flux FP8 with optimum.quanto TypeError: WeightQBytesTensor.__new__() missing 6 required positional arguments: 'axis', 'size', 'stride', 'data', 'scale', and 'activation_qtype' | ### Describe the bug Flux FP8 model with optimum.quanto pipe.enable_model_cpu_offload() - Works pipe.enable_sequential_cpu_offload() - Doesn't work ### Reproduction ``` import torch from diffusers import FluxTransformer2DModel, FluxPipeline from transformers import T5EncoderModel, CLIPTextModel from optimum.... | closed | completed | false | 3 | ["bug"] | [] | 2025-01-10T12:45:25Z | 2025-05-23T13:06:13Z | 2025-05-23T10:03:43Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,780,779,062 | I_kwDOHa8MBc6lv1I2 | 10,529 | https://github.com/huggingface/diffusers/issues/10529 | https://api.github.com/repos/huggingface/diffusers/issues/10529 | Missing `"linspace"` timestep spacing option for `DDIMInverseScheduler` | **Is your feature request related to a problem? Please describe.** I was wondering why the `"linspace"` timestep spacing option is not available also for the `DDIMInverseScheduler` class, as it is for other schedulers like `DDIMScheduler` and `DDIMScheduler`. https://github.com/huggingface/diffusers/blob/9f06a0d1a4a... | open | null | false | 4 | ["stale"] | [] | 2025-01-10T18:07:14Z | 2025-03-24T15:04:11Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | andreabosisio | 79,710,398 | MDQ6VXNlcjc5NzEwMzk4 | User | false |
huggingface/diffusers | 2,781,635,244 | I_kwDOHa8MBc6lzGKs | 10,533 | https://github.com/huggingface/diffusers/issues/10533 | https://api.github.com/repos/huggingface/diffusers/issues/10533 | generated black image in flux fill fp16 | ### Describe the bug when I load flux fill in fp16. I get the black image as generated image. ### Reproduction   my infe... | closed | completed | false | 7 | ["bug", "stale"] | [] | 2025-01-11T05:43:13Z | 2025-02-13T20:10:34Z | 2025-02-13T20:10:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | saeedkhanehgir | 65,589,645 | MDQ6VXNlcjY1NTg5NjQ1 | User | false |
huggingface/diffusers | 1,428,037,223 | I_kwDOHa8MBc5VHh5n | 1,054 | https://github.com/huggingface/diffusers/issues/1054 | https://api.github.com/repos/huggingface/diffusers/issues/1054 | [Community] train example of sde_ve and karras_ve | Great project! I notice I cannot find the train example of sde_ve and karras_ve whose schedulers and pipeline has been implemented in this repo. You know its loss and set of sigma is tricky so I hope I can get your official implement. Thanks. | closed | completed | false | 3 | ["community-examples", "stale"] | [] | 2022-10-29T02:59:32Z | 2023-04-20T15:04:06Z | 2023-04-20T15:04:06Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vsfh | 30,598,979 | MDQ6VXNlcjMwNTk4OTc5 | User | false |
huggingface/diffusers | 2,782,519,994 | I_kwDOHa8MBc6l2eK6 | 10,540 | https://github.com/huggingface/diffusers/issues/10540 | https://api.github.com/repos/huggingface/diffusers/issues/10540 | Loading Flux Dev transformers from_single_file fails since support for Flux Fill was added | ### Describe the bug Flux models are sometimes distributed as transformer only, because all the other pipeline components are usually not changed. Examples include: - finetunes on CivitAI, to offer a smaller download size - https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4/tree/main Since support for F... | closed | completed | false | 5 | ["bug"] | [] | 2025-01-12T14:41:12Z | 2025-01-13T19:26:02Z | 2025-01-13T19:26:01Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dxqb | 183,307,934 | U_kgDOCu0Ong | User | false |
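Each record above is a single pipe-delimited row following the column schema in the header (repo, github_id, node_id, number, html_url, ...). A minimal sketch of splitting the leading fields of such a row — the `" | "` separator and comma-grouped numbers are assumptions taken from the dump's layout, and the free-text body cell is deliberately not handled here:

```python
# One dump row, abbreviated to its first five fields for illustration.
row = ("huggingface/diffusers | 1,427,755,846 | I_kwDOHa8MBc5VGdNG | 1,049 | "
       "https://github.com/huggingface/diffusers/issues/1049")

# Split on the assumed " | " field separator and strip stray whitespace.
fields = [f.strip() for f in row.split(" | ")]
repo, github_id, node_id, number, html_url = fields

# Numbers in the dump are comma-grouped ("1,049"), so strip commas before parsing.
issue_number = int(number.replace(",", ""))
print(repo, issue_number)  # huggingface/diffusers 1049
```

A full parser would need extra care for the issue-body cell, which can itself contain `|` characters and embedded markdown.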