| repo | github_id | github_node_id | number | html_url | api_url | title | body | state | state_reason | locked | comments_count | labels | assignees | created_at | updated_at | closed_at | author_association | milestone_title | snapshot_id | extracted_at | author_login | author_id | author_node_id | author_type | author_site_admin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 2,984,700,623 | I_kwDOHa8MBc6x5urP | 11,275 | https://github.com/huggingface/diffusers/issues/11275 | https://api.github.com/repos/huggingface/diffusers/issues/11275 | TypeError in VAE with `ResnetUpsampleBlock2D`: NoneType used in addition | ### Describe the bug
# TypeError in VAE with `ResnetUpsampleBlock2D`: NoneType used in addition
When I attempted to modify the encoder and decoder types within the VAE model to ResnetDownsampleBlock2D and ResnetUpsampleBlock2D respectively, I encountered an error:
.to("cuda")
```
It will result in:
```bash
Traceback (most recent call last):
File "/fsx/sayak/diffusers/... | open | null | false | 6 | [
"stale"
] | [
"SunMarc"
] | 2025-04-10T08:54:14Z | 2026-01-09T15:23:15Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,985,225,694 | I_kwDOHa8MBc6x7u3e | 11,283 | https://github.com/huggingface/diffusers/issues/11283 | https://api.github.com/repos/huggingface/diffusers/issues/11283 | Wan: Error while deserializing header | ### Describe the bug
Wan doc ex.: Error while deserializing header
### Reproduction
The #2 example from the Wan docs will cause an error: https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan
```
import torch
from diffusers import WanPipeline, AutoencoderKLWan
from diffusers.utils import export_to_video
... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-10T10:35:12Z | 2025-04-10T15:59:48Z | 2025-04-10T15:59:47Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 2,986,365,446 | I_kwDOHa8MBc6yAFIG | 11,285 | https://github.com/huggingface/diffusers/issues/11285 | https://api.github.com/repos/huggingface/diffusers/issues/11285 | value errors in convert to/from diffusers from original stable diffusion | ### Describe the bug
There's a hardcoded value somewhere for 77 tokens, when it should be using the dimensions of what is actually in the model.
I have a diffusers-layout SD1.5 model, with LongCLIP.
https://huggingface.co/opendiffusionai/xllsd-alpha0
I can pull it locally, then convert to single file format, with
python ... | open | reopened | false | 11 | [
"bug",
"stale"
] | [] | 2025-04-10T17:16:42Z | 2026-02-04T15:19:02Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ppbrown | 1,723,129 | MDQ6VXNlcjE3MjMxMjk= | User | false |
huggingface/diffusers | 2,986,810,943 | I_kwDOHa8MBc6yBx4_ | 11,286 | https://github.com/huggingface/diffusers/issues/11286 | https://api.github.com/repos/huggingface/diffusers/issues/11286 | Error while loading Lora | ### Describe the bug
`Error(s) in loading state_dict for UNet2DConditionModel`. I have uploaded the model to Hugging Face. The error appears in the load_lora_weights() function.
### Reproduction
```
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2025-04-10T20:29:46Z | 2025-05-11T15:02:44Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | D1-3105 | 65,292,437 | MDQ6VXNlcjY1MjkyNDM3 | User | false |
huggingface/diffusers | 2,988,029,801 | I_kwDOHa8MBc6yGbdp | 11,289 | https://github.com/huggingface/diffusers/issues/11289 | https://api.github.com/repos/huggingface/diffusers/issues/11289 | Wan2.1 Out of Memory for WanI2V-720P-14B model | Trying to run the Wan2.1 i2v model at 720p as per the documentation but getting OOM even on an 80GB VRAM GPU.
Trying with 81 frames and 720p resolution | closed | completed | false | 3 | [] | [] | 2025-04-11T09:00:13Z | 2025-04-30T02:11:07Z | 2025-04-30T02:11:06Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | varadrane1707 | 205,765,094 | U_kgDODEO55g | User | false |
huggingface/diffusers | 2,988,217,285 | I_kwDOHa8MBc6yHJPF | 11,291 | https://github.com/huggingface/diffusers/issues/11291 | https://api.github.com/repos/huggingface/diffusers/issues/11291 | Inconsistent variable usage in `tile_latent_min_width` computation in `AutoencoderKLMochi` decoder | ### Describe the bug
There seems to be an inconsistency in the calculation of `tile_latent_min_width` inside the decoder of the AutoencoderKLMochi model. In the [following line of code](https://github.com/huggingface/diffusers/blob/7054a34978e68bc2b7241378c07d938066c1aa64/src/diffusers/models/autoencoders/autoencoder_... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-04-11T10:12:35Z | 2025-04-12T00:32:43Z | 2025-04-11T13:41:09Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kuantuna | 66,808,459 | MDQ6VXNlcjY2ODA4NDU5 | User | false |
huggingface/diffusers | 2,988,729,455 | I_kwDOHa8MBc6yJGRv | 11,295 | https://github.com/huggingface/diffusers/issues/11295 | https://api.github.com/repos/huggingface/diffusers/issues/11295 | CompVis/stable-diffusion-v1-4 is missing fp16 files | Hello!
I got this error:
```[/root/jvenv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py:285](https://file+.vscode-resource.vscode-cdn.net/root/jvenv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py:285): FutureWarning: You are loading the variant fp16 from CompVis... | closed | completed | false | 3 | [
"stale"
] | [] | 2025-04-11T13:47:05Z | 2025-05-12T14:07:06Z | 2025-05-12T14:07:05Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | esraiak | 14,075,545 | MDQ6VXNlcjE0MDc1NTQ1 | User | false |
huggingface/diffusers | 2,990,210,975 | I_kwDOHa8MBc6yOv-f | 11,298 | https://github.com/huggingface/diffusers/issues/11298 | https://api.github.com/repos/huggingface/diffusers/issues/11298 | Hotswapping multiple LoRAs throws a peft key error. | ### Describe the bug
When trying to hotswap multiple flux loras you get a runtime error around unexpected keys
`RuntimeError: Hot swapping the adapter did not succeed, unexpected keys found: transformer_blocks.13.norm1.linear.lora_B.weight,`
### Reproduction
Download two Flux Dev loras (this example uses http://bas... | open | null | false | 18 | [
"bug",
"stale",
"lora"
] | [] | 2025-04-12T04:52:06Z | 2026-02-03T15:22:58Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jonluca | 13,029,040 | MDQ6VXNlcjEzMDI5MDQw | User | false |
huggingface/diffusers | 2,990,386,152 | I_kwDOHa8MBc6yPavo | 11,301 | https://github.com/huggingface/diffusers/issues/11301 | https://api.github.com/repos/huggingface/diffusers/issues/11301 | HiDream auxiliary loss for MoE experts not tied to computation graph | ### Describe the bug
The MOEFeedForward for HiDream has the auxiliary loss commented out in the upstream prototype code.
Additionally, the MoEGate has a memory leak in it.
### Reproduction
- Train HiDream
- Observe outOfMemory on backward pass
- Resolve MoEGate OOM by implementing gradient checkpointing
- Observe... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2025-04-12T10:08:13Z | 2025-05-12T15:02:57Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bghira | 59,658,056 | MDQ6VXNlcjU5NjU4MDU2 | User | false |
huggingface/diffusers | 2,990,420,514 | I_kwDOHa8MBc6yPjIi | 11,302 | https://github.com/huggingface/diffusers/issues/11302 | https://api.github.com/repos/huggingface/diffusers/issues/11302 | First time using optimum quanto and struggling to save quantized model for HiDream | I want to save HiDream model in int4 (only text encoder 3 and transformer, one at a time).
This code saves the other 3 (TE1, TE2 and VAE) but does not save text_encoder_3, even after waiting for 30 minutes. If I do not supply TE1, TE2, TE4 and VAE it throws an error.
Any suggestions please.
:
File "/fsx/sayak/diffusers/check_hidream.py", line 115, in <module>
latents = pipe(
File "/fsx/sayak/miniconda3/envs/diffusers/lib/python3.10/site-pack... | closed | completed | false | 2 | [] | [
"a-r-r-o-w"
] | 2025-04-14T03:28:08Z | 2025-04-23T12:47:55Z | 2025-04-23T12:47:55Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,991,965,016 | I_kwDOHa8MBc6yVcNY | 11,309 | https://github.com/huggingface/diffusers/issues/11309 | https://api.github.com/repos/huggingface/diffusers/issues/11309 | Potential incorrect reshaping in 2D positional embedding | ### Describe the bug
Hi there,
I have concerns with this line of code (https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py#L282).
Specifically, `grid_size` is the tuple consisting of the height `H` and width `W` of the image. `grid` computed in L280 should have the shape `2*H*W`, an... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-04-14T05:53:23Z | 2025-05-14T15:03:00Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jinhong-ni | 87,235,406 | MDQ6VXNlcjg3MjM1NDA2 | User | false |
huggingface/diffusers | 2,992,242,995 | I_kwDOHa8MBc6yWgEz | 11,312 | https://github.com/huggingface/diffusers/issues/11312 | https://api.github.com/repos/huggingface/diffusers/issues/11312 | Issue regarding compatibility improvements for textual inversion. | Regarding https://github.com/huggingface/diffusers/pull/10949#issuecomment-2779216989, no error occurs, but the expected result is not achieved. Could you provide some advice?
| closed | completed | false | 0 | [] | [] | 2025-04-14T08:08:00Z | 2025-04-16T00:36:37Z | 2025-04-16T00:36:37Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | suzukimain | 131,413,573 | U_kgDOB9U2RQ | User | false |
huggingface/diffusers | 2,992,594,365 | I_kwDOHa8MBc6yX129 | 11,314 | https://github.com/huggingface/diffusers/issues/11314 | https://api.github.com/repos/huggingface/diffusers/issues/11314 | flux controlnet training script was WRONG, FATAL mistake! | ### Describe the bug
flux controlnet train script was contributed by @PromeAIpro and merged around 0.31.0, see https://github.com/huggingface/diffusers/pull/9324/files
but around 0.32.0, with code-prettifying modifications and the introduction of new diffusers APIs, see https://github.com/huggingface/diffusers/blob/8170dc368d278ec... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-14T10:14:43Z | 2025-04-15T01:36:38Z | 2025-04-15T01:14:42Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PromeAIpro | 178,361,217 | U_kgDOCqGTgQ | User | false |
huggingface/diffusers | 2,993,115,365 | I_kwDOHa8MBc6yZ1Dl | 11,315 | https://github.com/huggingface/diffusers/issues/11315 | https://api.github.com/repos/huggingface/diffusers/issues/11315 | HiDream rope implementation uses float64, broke on MPS. | ### Describe the bug
This seems to be a regular issue with MPS support, the rope implementation has an hard coded float64 parameter arrange call
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/models/transformers/transformer_hidream_image.py", line 98, in rope
scale = torch.arange(0, d... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-04-14T13:39:57Z | 2025-04-14T19:19:22Z | 2025-04-14T19:19:22Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Vargol | 62,868 | MDQ6VXNlcjYyODY4 | User | false |
huggingface/diffusers | 2,994,721,127 | I_kwDOHa8MBc6yf9Fn | 11,321 | https://github.com/huggingface/diffusers/issues/11321 | https://api.github.com/repos/huggingface/diffusers/issues/11321 | flux controlnet train ReadMe have a bug | ### Describe the bug

What are the controlnet config parameters? The text says num_single_layers = 10, but the code sets num_single_layers=0?
### Reproduction
check readme file
### Logs
```shell
```
### System Info
diffusers ==0.... | closed | completed | false | 14 | [
"bug",
"stale"
] | [] | 2025-04-15T01:30:58Z | 2025-10-11T09:58:52Z | 2025-10-11T09:58:52Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Johnson-yue | 10,268,274 | MDQ6VXNlcjEwMjY4Mjc0 | User | false |
huggingface/diffusers | 2,996,146,745 | I_kwDOHa8MBc6ylZI5 | 11,326 | https://github.com/huggingface/diffusers/issues/11326 | https://api.github.com/repos/huggingface/diffusers/issues/11326 | Wan Video Vae RemoteVae | ### Did you like the remote VAE solution?
Yes, but the Wan Video VAE 2.1 is missing
### What can be improved about the current solution?
Add Wan Video Vae 2.1 to Remote Vae
### What other VAEs you would like to see if the pilot goes well?
Wan Video Vae
### Notify the members of the team
_No response_ | open | null | false | 10 | [] | [] | 2025-04-15T11:45:59Z | 2026-02-08T14:56:26Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | xSemi | 2,349,624 | MDQ6VXNlcjIzNDk2MjQ= | User | false |
huggingface/diffusers | 2,997,137,371 | I_kwDOHa8MBc6ypK_b | 11,329 | https://github.com/huggingface/diffusers/issues/11329 | https://api.github.com/repos/huggingface/diffusers/issues/11329 | bitsandbytes 8bit quant memory leak | ### Describe the bug
Using 8bit quant on pipeline modules results in un-freeable VRAM usage
This is probably more of a bitsandbytes issue?
I am wondering if there is a way to resolve this in the context of diffusers.
If you move the pipe to CPU, the modules are left on the GPU due to a bitsandbytes limitation.
If ... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-04-15T17:30:59Z | 2025-04-15T21:31:51Z | 2025-04-15T21:31:49Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Teriks | 14,919,098 | MDQ6VXNlcjE0OTE5MDk4 | User | false |
huggingface/diffusers | 2,998,506,252 | I_kwDOHa8MBc6yuZMM | 11,334 | https://github.com/huggingface/diffusers/issues/11334 | https://api.github.com/repos/huggingface/diffusers/issues/11334 | Standardization of additional token identifiers across pipelines | `FluxPipeline` has utilities that give us `img_ids` and `txt_ids`:
https://github.com/huggingface/diffusers/blob/ce1063acfa0cbc2168a7e9dddd4282ab8013b810/src/diffusers/pipelines/flux/pipeline_flux.py#L514
https://github.com/huggingface/diffusers/blob/ce1063acfa0cbc2168a7e9dddd4282ab8013b810/src/diffusers/pipelines/fl... | open | null | false | 3 | [
"stale"
] | [] | 2025-04-16T06:00:50Z | 2026-01-09T15:22:52Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,998,670,229 | I_kwDOHa8MBc6yvBOV | 11,336 | https://github.com/huggingface/diffusers/issues/11336 | https://api.github.com/repos/huggingface/diffusers/issues/11336 | The resolution recommended for CogVideoX image2video in diffusers is inconsistent with the one recommended in the original CogVideoX project? | Below is the resolution recommended in the original CogVideoX project

The width and height recommended in pipeline_cogvideox_image2video.py in diffusers appear to be the reverse of the above. | open | null | false | 2 | [
"stale"
] | [] | 2025-04-16T07:02:29Z | 2025-05-24T15:03:01Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ZhuangPeiyu | 26,140,437 | MDQ6VXNlcjI2MTQwNDM3 | User | false |
huggingface/diffusers | 2,999,185,669 | I_kwDOHa8MBc6yw_EF | 11,339 | https://github.com/huggingface/diffusers/issues/11339 | https://api.github.com/repos/huggingface/diffusers/issues/11339 | How to multi-GPU WAN inference | Hi, I didn't find a multi-GPU inference example in the documentation. Can you give me an example, such as Wan2.1-I2V-14B-720P-Diffusers?
I would appreciate some help on that, thank you in advance | closed | completed | false | 2 | [
"stale"
] | [] | 2025-04-16T10:22:41Z | 2025-07-05T21:18:01Z | 2025-07-05T21:18:01Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | HeathHose | 14,328,405 | MDQ6VXNlcjE0MzI4NDA1 | User | false |
huggingface/diffusers | 2,999,557,687 | I_kwDOHa8MBc6yyZ43 | 11,342 | https://github.com/huggingface/diffusers/issues/11342 | https://api.github.com/repos/huggingface/diffusers/issues/11342 | Sana-sprint pipeline failing on MPS | ### Describe the bug
I noticed that if I run the `SanaSprintPipeline` on MPS the output is totally broken. Running on CPU works fine.
### Reproduction
```python
from diffusers.pipelines.sana.pipeline_sana_sprint import SanaSprintPipeline
pipeline = SanaSprintPipeline.from_pretrained("Efficient-Large-Model/Sana_Sprin... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-16T12:51:11Z | 2025-04-18T08:08:21Z | 2025-04-18T08:07:52Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bertaveira | 9,644,729 | MDQ6VXNlcjk2NDQ3Mjk= | User | false |
huggingface/diffusers | 2,999,558,037 | I_kwDOHa8MBc6yyZ-V | 11,343 | https://github.com/huggingface/diffusers/issues/11343 | https://api.github.com/repos/huggingface/diffusers/issues/11343 | Since last update of transformers I can't import diffusers modules | ### Describe the bug
After updating pytorch and transformers to the latest versions, diffusers stopped working:
My code: from diffusers import CogVideoXPipeline, StableVideoDiffusionPipeline, MochiPipeline
Failed to import diffusers.loaders.lora_pipeline because of the following error (look up to see its traceback):
can... | closed | completed | false | 5 | [
"bug",
"stale"
] | [] | 2025-04-16T12:51:20Z | 2026-02-03T16:32:05Z | 2026-02-03T16:32:05Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ParisNeo | 827,993 | MDQ6VXNlcjgyNzk5Mw== | User | false |
huggingface/diffusers | 2,999,615,070 | I_kwDOHa8MBc6yyn5e | 11,344 | https://github.com/huggingface/diffusers/issues/11344 | https://api.github.com/repos/huggingface/diffusers/issues/11344 | HiDream support for safetensors and gguf loading | As title says, request is to add support for qquf and pre-quant safetensors loaders for `HiDreamImageTransformer2DModel`
Unsurprisingly, this is one of the most common requests from my user base.
Sample models can be found at:
- <https://civitai.com/models/1457126/hidream-i1-full-dev-fast-nf4>
- <https://civitai.com/m... | closed | completed | false | 5 | [
"stale"
] | [
"DN6"
] | 2025-04-16T13:10:34Z | 2025-05-16T17:15:17Z | 2025-05-16T17:15:17Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 3,000,145,962 | I_kwDOHa8MBc6y0pgq | 11,345 | https://github.com/huggingface/diffusers/issues/11345 | https://api.github.com/repos/huggingface/diffusers/issues/11345 | hello i cant generate beautiful anime women | ### Did you like the remote VAE solution?
### Issue Description
keeps saying 404 for remote VAEs
is it only me or
### Version Platform Description
00:07:32-832996 INFO Logger: file="E:\sd-next\automatic\sdnext.log" level=INFO host="..." size=15384103
mode=append
00:07:32-836107 INFO ... | closed | not_planned | false | 15 | [] | [
"yiyixuxu"
] | 2025-04-16T16:22:54Z | 2025-05-28T02:15:59Z | 2025-05-15T08:02:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | teamolhuang | 82,561,200 | MDQ6VXNlcjgyNTYxMjAw | User | false |
huggingface/diffusers | 3,000,450,806 | I_kwDOHa8MBc6y1z72 | 11,347 | https://github.com/huggingface/diffusers/issues/11347 | https://api.github.com/repos/huggingface/diffusers/issues/11347 | DDIM previous timestep issue | ### Describe the bug
When using `diffusers.schedulers.scheduling_ddim.DDIMScheduler` with timestep_spacing='linspace' the value of the previous timestep that is calculated is not the right one, leading to a drop in model performance.
If you run the attached code with `print(timestep,prev_timestep,prev_timestep_)` ad... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-04-16T18:44:16Z | 2025-05-17T15:02:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | mt-clemente | 92,925,667 | U_kgDOBYnu4w | User | false |
huggingface/diffusers | 3,001,621,473 | I_kwDOHa8MBc6y6Rvh | 11,351 | https://github.com/huggingface/diffusers/issues/11351 | https://api.github.com/repos/huggingface/diffusers/issues/11351 | Why Wan i2v video processor always float32 datatype? | ### Describe the bug
I found
image = self.video_processor.preprocess(image, height=height, width=width).to(device, dtype=torch.float32)
https://github.com/huggingface/diffusers/blob/29d2afbfe2e09a4ee7cc51455e51ce8b8c0e252d/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L633
in pipeline_wan_i2v.py
why datatype ... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-17T07:00:42Z | 2025-05-07T03:48:24Z | 2025-04-30T02:05:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DamonsJ | 28,893,028 | MDQ6VXNlcjI4ODkzMDI4 | User | false |
huggingface/diffusers | 3,001,800,228 | I_kwDOHa8MBc6y69Yk | 11,352 | https://github.com/huggingface/diffusers/issues/11352 | https://api.github.com/repos/huggingface/diffusers/issues/11352 | [bnb] Moving a pipeline with a 8bit quantized model to CPU doesn't throw warning | @SunMarc `tests/quantization/bnb/test_mixed_int8.py::SlowBnb8bitTests::test_moving_to_cpu_throws_warning` is failing in the `diffusers` `main`. It passes on v0.32.0-release branch.
My `diffusers-cli env`:
```bash
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- 🤗 Diffusers vers... | closed | completed | false | 1 | [] | [] | 2025-04-17T08:25:37Z | 2025-04-22T10:10:09Z | 2025-04-22T10:10:09Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,004,336,031 | I_kwDOHa8MBc6zEoef | 11,359 | https://github.com/huggingface/diffusers/issues/11359 | https://api.github.com/repos/huggingface/diffusers/issues/11359 | [Feature request] LTX-Video v0.9.6 15x faster inference than non-distilled model. | **Is your feature request related to a problem? Please describe.**
No problem. This request is Low priority. As and when time allows.
**Describe the solution you'd like.**
Please support the new release of LTX-Video 0.9.6
**Describe alternatives you've considered.**
Original repo have support but it is easier to use ... | closed | completed | false | 6 | [] | [
"yiyixuxu"
] | 2025-04-18T08:05:27Z | 2025-05-09T16:03:34Z | 2025-05-09T16:03:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 3,004,354,723 | I_kwDOHa8MBc6zEtCj | 11,360 | https://github.com/huggingface/diffusers/issues/11360 | https://api.github.com/repos/huggingface/diffusers/issues/11360 | [performance] investigating FluxPipeline for recompilations on resolution changes | Similar to https://github.com/huggingface/diffusers/pull/11297/, I was investigating potential recompilations for Flux on resolution changes.
<details>
<summary>Code</summary>
```py
from diffusers import FluxTransformer2DModel, FluxPipeline
from diffusers.utils.torch_utils import randn_tensor
import torch.utils.benc... | closed | completed | false | 11 | [
"performance",
"torch.compile"
] | [] | 2025-04-18T08:16:10Z | 2025-06-23T07:24:25Z | 2025-06-23T07:24:25Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,005,344,222 | I_kwDOHa8MBc6zIene | 11,362 | https://github.com/huggingface/diffusers/issues/11362 | https://api.github.com/repos/huggingface/diffusers/issues/11362 | "unexpected keyword argument" Error when passing "return_dict=True" to HiDreamImageTransformer2DModel | ### Describe the bug
The HiDreamImageTransformer2DModel returns `Transformer2DModelOutput(sample=output, mask=hidden_states_masks)` if `return_dict` is set to True, but `mask` is not a valid keyword argument, which results in an error
### Reproduction
Just pass `return_dict=True` to HiDreamImageTransformer2DModel.fo... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-04-18T16:39:23Z | 2025-04-19T00:07:22Z | 2025-04-19T00:07:22Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Nerogar | 3,390,934 | MDQ6VXNlcjMzOTA5MzQ= | User | false |
huggingface/diffusers | 3,005,344,422 | I_kwDOHa8MBc6zIeqm | 11,363 | https://github.com/huggingface/diffusers/issues/11363 | https://api.github.com/repos/huggingface/diffusers/issues/11363 | Controlnet Inference example, CUDA OOM | ### Describe the bug
When running [inference example](https://github.com/huggingface/diffusers/blob/main/examples%2Fcontrolnet%2FREADME_sd3.md) on a single RTX2080Ti, error CUDA out of memory
### Reproduction
```
# simple_inference.py
from diffusers import StableDiffusion3ControlNetPipeline, SD3ControlNetModel
from... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2025-04-18T16:39:33Z | 2025-05-24T15:02:52Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | roguxivlo | 42,584,370 | MDQ6VXNlcjQyNTg0Mzcw | User | false |
huggingface/diffusers | 3,007,065,676 | I_kwDOHa8MBc6zPC5M | 11,370 | https://github.com/huggingface/diffusers/issues/11370 | https://api.github.com/repos/huggingface/diffusers/issues/11370 | convert_hunyuan_video_to_diffusers.py not working - invalid load key, '\xd8'. | I am trying to convert this LoRA and have tested all configs, but it keeps failing
https://civitai.com/models/1084814/studio-ghibli-style-hunyuanvideo
tested configs as below

Error is
```
Conversion complete. Processed 1 files with 0 succ... | closed | completed | false | 3 | [
"bug",
"stale"
] | [] | 2025-04-20T15:01:08Z | 2025-07-05T21:14:46Z | 2025-07-05T21:14:46Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | FurkanGozukara | 19,240,467 | MDQ6VXNlcjE5MjQwNDY3 | User | false |
huggingface/diffusers | 3,008,094,156 | I_kwDOHa8MBc6zS9_M | 11,374 | https://github.com/huggingface/diffusers/issues/11374 | https://api.github.com/repos/huggingface/diffusers/issues/11374 | [Feature request] Integrate SkyReels-V2 support in diffusers | **Is your feature request related to a problem? Please describe.**
No problem. The new version of Skyreels is released.
https://github.com/SkyworkAI/SkyReels-V2
https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9
**Describe the solution you'd like.**
Similar to V1, please add inference supp... | closed | completed | false | 3 | [
"contributions-welcome"
] | [] | 2025-04-21T10:20:03Z | 2025-07-16T18:24:42Z | 2025-07-16T18:24:42Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 3,008,144,624 | I_kwDOHa8MBc6zTKTw | 11,376 | https://github.com/huggingface/diffusers/issues/11376 | https://api.github.com/repos/huggingface/diffusers/issues/11376 | Offloading behaviour for HiDream seems broken | When using `enable_model_cpu_offload()` I would expect the final component in the offloading string chain to be offloaded to the CPU to realize memory savings. But that does not appear to be the case.
Consider the script below:
<details>
<summary>Code</summary>
```py
from transformers import PreTrainedTokenizerFast, LlamaForC... | closed | completed | false | 2 | [] | [] | 2025-04-21T10:47:36Z | 2025-04-22T07:22:18Z | 2025-04-22T07:22:17Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 1,436,338,333 | I_kwDOHa8MBc5VnMid | 1,138 | https://github.com/huggingface/diffusers/issues/1138 | https://api.github.com/repos/huggingface/diffusers/issues/1138 | safety_checker=None returns None is of type: <class 'NoneType'>, but should be <class 'diffusers.onnx_utils.OnnxRuntimeModel'> (OnnxStableDiffusionPipeline) | ### Describe the bug
safety_checker=None is not accepted by OnnxStableDiffusionPipeline.from_pretrained due to how it's passed to pipeline_utils.py
### Reproduction
```
pipe = OnnxStableDiffusionPipeline.from_pretrained(
model,
revision="onnx",
provider="DmlExecutionProvider",
safety_check... | closed | completed | false | 2 | [
"bug"
] | [
"anton-l"
] | 2022-11-04T16:29:10Z | 2022-11-10T19:39:05Z | 2022-11-08T13:39:13Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | averad | 640,619 | MDQ6VXNlcjY0MDYxOQ== | User | false |
huggingface/diffusers | 3,010,995,992 | I_kwDOHa8MBc6zeCcY | 11,382 | https://github.com/huggingface/diffusers/issues/11382 | https://api.github.com/repos/huggingface/diffusers/issues/11382 | HiDream pipeline update breaks downstream | ### Describe the bug
The HiDream pipeline is currently broken on the diffusers main branch due to PR #11296, which was merged a few days ago.
It introduces a bad deprecation check which simply fails on the main branch.
### Reproduction
N/A
### Logs
```shell
│ /home/vlado/dev/sdnext/venv/lib/python3.12/site-packages/diffusers/mod... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-04-22T12:49:50Z | 2025-04-22T18:08:09Z | 2025-04-22T18:08:09Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 3,011,181,322 | I_kwDOHa8MBc6zevsK | 11,383 | https://github.com/huggingface/diffusers/issues/11383 | https://api.github.com/repos/huggingface/diffusers/issues/11383 | HiDream LoRA support | ### Describe the bug
I've tried LoRAs currently available on civitai and most appear to load without any errors using `load_lora_weights`.
But later, they do not appear in `get_list_adapters` or `get_active_adapters` and do not seem to be applied to the model.
If LoRAs were not loaded, I'd expect to see some error... | closed | completed | false | 18 | [
"bug",
"enhancement"
] | [
"sayakpaul"
] | 2025-04-22T13:56:54Z | 2025-05-09T09:12:40Z | 2025-05-09T09:12:40Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 3,012,117,292 | I_kwDOHa8MBc6ziUMs | 11,387 | https://github.com/huggingface/diffusers/issues/11387 | https://api.github.com/repos/huggingface/diffusers/issues/11387 | RMSNorm in AutoencoderDC should be RMSNorm2d | ### Describe the bug
tl;dr I think that the Sana autoencoder should use RMSNorm2d instead of RMSNorm.
## Bug description
Sana's Autoencoder uses the `EfficientVit` block, which uses a linear attention layer called [SanaMultiScaleLinearAttention](https://github.com/huggingface/diffusers/blob/448c72a230ceb31ae60e70e3... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-04-22T20:59:07Z | 2025-04-28T09:37:03Z | 2025-04-23T09:20:53Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | SimonCoste | 13,707,870 | MDQ6VXNlcjEzNzA3ODcw | User | false |
huggingface/diffusers | 3,012,225,092 | I_kwDOHa8MBc6ziuhE | 11,388 | https://github.com/huggingface/diffusers/issues/11388 | https://api.github.com/repos/huggingface/diffusers/issues/11388 | extend AutoModel to be able to load transformer models & custom diffusers models | current AutoModel implementation only support models that importable from diffusers https://github.com/huggingface/diffusers/blob/448c72a230ceb31ae60e70e3b62e327314928a5e/src/diffusers/models/auto_model.py#L164
it uses this field in model config.json https://huggingface.co/HiDream-ai/HiDream-I1-Full/blob/main/transfor... | closed | completed | false | 3 | [
"Good second issue",
"contributions-welcome"
] | [
"yiyixuxu"
] | 2025-04-22T21:59:01Z | 2025-05-26T08:36:37Z | 2025-05-26T08:36:37Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yiyixuxu | 12,631,849 | MDQ6VXNlcjEyNjMxODQ5 | User | false |
huggingface/diffusers | 3,012,231,824 | I_kwDOHa8MBc6ziwKQ | 11,389 | https://github.com/huggingface/diffusers/issues/11389 | https://api.github.com/repos/huggingface/diffusers/issues/11389 | scheduler refactor to support different `set_timesteps` and `step` method | see more context https://github.com/huggingface/diffusers/pull/11311#discussion_r2051822011
| open | null | false | 1 | [
"stale"
] | [
"yiyixuxu",
"a-r-r-o-w"
] | 2025-04-22T22:04:06Z | 2025-05-23T15:03:03Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yiyixuxu | 12,631,849 | MDQ6VXNlcjEyNjMxODQ5 | User | false |
huggingface/diffusers | 1,436,378,656 | I_kwDOHa8MBc5VnWYg | 1,139 | https://github.com/huggingface/diffusers/issues/1139 | https://api.github.com/repos/huggingface/diffusers/issues/1139 | [Community] Attention Slice/Flash Attention for JAX implementation of unet_2d_condition | **Is your feature request related to a problem? Please describe.**
Currently, the base PyTorch implementation of unet_2d_condition has an attention-slicing feature that can be enabled/set through set_attention_slice. However, JAX's implementation doesn't have this feature. Attention slicing allows saving a lot of VRAM when...
"good first issue"
] | [] | 2022-11-04T16:59:14Z | 2022-12-17T11:15:05Z | 2022-12-17T07:04:08Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Lime-Cakes | 91,322,985 | MDQ6VXNlcjkxMzIyOTg1 | User | false |
huggingface/diffusers | 3,012,382,231 | I_kwDOHa8MBc6zjU4X | 11,390 | https://github.com/huggingface/diffusers/issues/11390 | https://api.github.com/repos/huggingface/diffusers/issues/11390 | Better image interpolation in training scripts follow up | With https://github.com/huggingface/diffusers/pull/11206 we did a small quality improvement for the SDXL Dreambooth LoRA script by making `LANCZOS` the default interpolation mode for the image resizing.
This issue is to ask for help from the community to bring this change to the other training scripts, specially for t... | closed | completed | false | 20 | [
"good first issue",
"contributions-welcome"
] | [
"asomoza"
] | 2025-04-23T00:04:10Z | 2025-05-05T16:35:18Z | 2025-05-05T16:35:17Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | asomoza | 5,442,875 | MDQ6VXNlcjU0NDI4NzU= | User | false |
huggingface/diffusers | 3,013,340,517 | I_kwDOHa8MBc6zm-1l | 11,394 | https://github.com/huggingface/diffusers/issues/11394 | https://api.github.com/repos/huggingface/diffusers/issues/11394 | [Feature request] ostris / Flex.2-preview. Open Source 8B parameter T2I Diffusion Model with universal control and inpainting support built in. | It was not a bug so removed unnecessary information.
The next version of Flex.1-alpha is released.
https://huggingface.co/ostris/Flex.2-preview
A working solution using a custom pipeline is provided by the author of the model. Please refer to the model card for the code.
However, it would be good to have this integrated i... | closed | completed | false | 6 | [
"bug",
"New pipeline/model",
"contributions-welcome"
] | [] | 2025-04-23T09:33:41Z | 2025-10-09T15:57:10Z | 2025-10-09T15:57:10Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 3,013,976,524 | I_kwDOHa8MBc6zpaHM | 11,396 | https://github.com/huggingface/diffusers/issues/11396 | https://api.github.com/repos/huggingface/diffusers/issues/11396 | How to convert the hidream lora trained by diffusers to a format that comfyui can load? | ### Describe the bug
The HiDream LoRA trained with diffusers can't be loaded in ComfyUI; how can I convert it?
### Reproduction
No
### Logs
```shell
```
### System Info
No
### Who can help?
_No response_ | closed | completed | false | 3 | [
"bug",
"stale"
] | [] | 2025-04-23T13:13:34Z | 2025-06-23T09:49:19Z | 2025-06-23T09:49:19Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yinguoweiOvO | 56,142,257 | MDQ6VXNlcjU2MTQyMjU3 | User | false |
huggingface/diffusers | 3,014,725,657 | I_kwDOHa8MBc6zsRAZ | 11,398 | https://github.com/huggingface/diffusers/issues/11398 | https://api.github.com/repos/huggingface/diffusers/issues/11398 | FluxTransformer2DModel does not have config and cannot set_default_attn_processor | ### Describe the bug
```
scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(
BFL_REPO, subfolder="scheduler", revision=REVISION
)
text_encoder = CLIPTextModel.from_pretrained(
"openai/clip-vit-large-patch14", torch_dtype=DTYPE
)
tokenizer = CLIPTokenizer... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-04-23T17:06:46Z | 2025-05-24T15:02:41Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | james-imi | 149,590,311 | U_kgDOCOqRJw | User | false |
huggingface/diffusers | 1,311,938,865 | I_kwDOHa8MBc5OMpkx | 114 | https://github.com/huggingface/diffusers/issues/114 | https://api.github.com/repos/huggingface/diffusers/issues/114 | Higher level pipeline | Just as in `transformers`, I was wondering if we have any plans for a general `pipeline` function that could use the different pipelines under the hood. | closed | completed | false | 1 | [] | [] | 2022-07-20T21:19:46Z | 2022-08-03T10:50:15Z | 2022-08-03T10:50:15Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | osanseviero | 7,246,357 | MDQ6VXNlcjcyNDYzNTc= | User | false |
huggingface/diffusers | 1,436,388,951 | I_kwDOHa8MBc5VnY5X | 1,140 | https://github.com/huggingface/diffusers/issues/1140 | https://api.github.com/repos/huggingface/diffusers/issues/1140 | [Community] Hypernetworks | **Is your feature request related to a problem? Please describe.**
Support for hypernetworks since they seem to yield great results. An example can be seen here:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/hypernetworks/hypernetwork.py
info: https://github.com/AUTOMATIC1111/stable-d... | open | null | false | 15 | [
"wip",
"Good second issue"
] | [] | 2022-11-04T17:07:49Z | 2024-03-02T15:06:22Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dblunk88 | 39,381,389 | MDQ6VXNlcjM5MzgxMzg5 | User | false |
huggingface/diffusers | 3,015,837,471 | I_kwDOHa8MBc6zwgcf | 11,408 | https://github.com/huggingface/diffusers/issues/11408 | https://api.github.com/repos/huggingface/diffusers/issues/11408 | set_adapters not support for compiled model | ### Describe the bug
Thank you for making LoRA hotswap compatible with torch.compile. However, I'm trying to modify weights via set_adapters when hotswapping a LoRA, but after torch.compile the model type is no longer ModelMixin, which causes the error. Could you please help look into this issue? Thanks
### Repr... | closed | completed | false | 10 | [
"bug",
"stale"
] | [] | 2025-04-24T03:37:00Z | 2026-01-09T18:59:00Z | 2026-01-09T18:59:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | songh11 | 75,419,275 | MDQ6VXNlcjc1NDE5Mjc1 | User | false |
huggingface/diffusers | 1,436,421,979 | I_kwDOHa8MBc5Vng9b | 1,141 | https://github.com/huggingface/diffusers/issues/1141 | https://api.github.com/repos/huggingface/diffusers/issues/1141 | Add support for local files in custom pipelines | **Is your feature request related to a problem? Please describe.**
Currently there is no way to pass a local pipeline file to `DiffusionPipeline` as a custom pipeline. This makes the development process when adding custom pipelines quite difficult, as it requires pushing to `main` first or clearing the cache (see [thi... | closed | completed | false | 3 | [
"good first issue"
] | [
"patrickvonplaten"
] | 2022-11-04T17:35:50Z | 2022-11-17T15:00:31Z | 2022-11-17T15:00:31Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vvvm23 | 44,398,246 | MDQ6VXNlcjQ0Mzk4MjQ2 | User | false |
huggingface/diffusers | 3,016,030,806 | I_kwDOHa8MBc6zxPpW | 11,410 | https://github.com/huggingface/diffusers/issues/11410 | https://api.github.com/repos/huggingface/diffusers/issues/11410 | File "/usr/local/python/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 3074, in __call__ [rank0]: batch_size, key_tokens, _ = ( [rank0]: ValueError: too many values to unpack (expected 3) | ### Describe the bug
添加完xformer加速之后出现了这个问题
### Reproduction
def __init__():
self.unet = pipeline.unet
self.set_diffusers_xformers_flag(self.unet,True)
def set_diffusers_xformers_flag( model, valid):
def fn_recursive_set_mem_eff(module: torch.nn.Module):
if hasattr(m... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-04-24T05:17:58Z | 2025-05-24T15:02:35Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | laoniandisko | 93,729,155 | U_kgDOBZYxgw | User | false |
huggingface/diffusers | 3,016,262,229 | I_kwDOHa8MBc6zyIJV | 11,412 | https://github.com/huggingface/diffusers/issues/11412 | https://api.github.com/repos/huggingface/diffusers/issues/11412 | prs-eth/marigold-iid-appearance-v1-1 is 404 | ### Describe the bug
Hi, @DN6, it seems [prs-eth/marigold-iid-appearance-v1-1](https://huggingface.co/prs-eth/marigold-iid-appearance-v1-1) is 404 now, and it's needed for the MarigoldIntrinsicsPipelineIntegrationTests UT; do you have any insights on it? Thx.
### Reproduction
N/A
### Logs
```shell
```
### System Info
... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-24T07:12:23Z | 2025-04-28T16:09:14Z | 2025-04-28T16:09:13Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yao-matrix | 7,245,027 | MDQ6VXNlcjcyNDUwMjc= | User | false |
huggingface/diffusers | 3,016,357,589 | I_kwDOHa8MBc6zyfbV | 11,413 | https://github.com/huggingface/diffusers/issues/11413 | https://api.github.com/repos/huggingface/diffusers/issues/11413 | RuntimeError: Failed to import diffusers.pipelines.cogvideo.pipeline_cogvideox because of the following error (look up to see its traceback): | ### Describe the bug
Getting this issue when running the CogVideoX model on a Google Tesla T4 GPU for the text-to-video generation use case
``` python
!pip install -q git+https://github.com/huggingface/diffusers.git
!pip install -q torch transformers bitsandbytes torchao accelerate psutil
```
``` python
import torch
impor... | closed | completed | false | 3 | [
"bug",
"stale"
] | [] | 2025-04-24T07:47:29Z | 2025-05-26T07:21:11Z | 2025-05-26T07:21:09Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | KaifAhmad1 | 98,801,504 | U_kgDOBeOXYA | User | false |
huggingface/diffusers | 3,018,896,507 | I_kwDOHa8MBc6z8LR7 | 11,417 | https://github.com/huggingface/diffusers/issues/11417 | https://api.github.com/repos/huggingface/diffusers/issues/11417 | attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'? | ### Describe the bug
attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?
### Reproduction
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export OUTPUT_DIR="trained-flux-dev-dreambooth-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-04-25T03:30:52Z | 2025-05-25T15:02:30Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | asjqmasjqm | 17,588,130 | MDQ6VXNlcjE3NTg4MTMw | User | false |
huggingface/diffusers | 3,020,307,095 | I_kwDOHa8MBc60BjqX | 11,418 | https://github.com/huggingface/diffusers/issues/11418 | https://api.github.com/repos/huggingface/diffusers/issues/11418 | How to add flux1-fill-dev-fp8.safetensors | ### Describe the bug
Hi!
How to use flux1-fill-dev-fp8.safetensors in diffusers?
Now I have code:
```
def init_pipeline(device: str):
logger.info(f"Loading FLUX Inpaint Pipeline (Fill‑dev) on {device}")
pipe = FluxFillPipeline.from_pretrained(
"black-forest-labs/FLUX.1-Fill-dev",
torch_dtype=t... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-25T14:58:08Z | 2025-04-28T19:06:17Z | 2025-04-28T19:06:07Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | SlimRG | 39,348,033 | MDQ6VXNlcjM5MzQ4MDMz | User | false |
huggingface/diffusers | 3,020,615,861 | I_kwDOHa8MBc60CvC1 | 11,419 | https://github.com/huggingface/diffusers/issues/11419 | https://api.github.com/repos/huggingface/diffusers/issues/11419 | How to know that "Textual inversion" file I have loaded and not turn it on? | Reviewing the documentation I understand the load of IT with:
# Add Embeddings
Pipeline.load_textual_inversion("Sd-Concepts-Library/Cat-Toy"),
# Remave All Token Embeddings
Pipeline.unload_textual_inversion()
# Remove Just One Token
Pipeline.unload_textual_inversion ("<Moe-Bius>")
But how do you know which are c... | closed | completed | false | 1 | [
"stale"
] | [] | 2025-04-25T17:18:07Z | 2025-05-27T18:09:45Z | 2025-05-27T18:09:45Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Eduardishion | 47,488,420 | MDQ6VXNlcjQ3NDg4NDIw | User | false |
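The unload API described in the record above can be sketched as a minimal stand-alone tracker. The names below (`embeddings`, `load_textual_inversion`, `unload_textual_inversion`) mirror the question but are hypothetical stand-ins, not the diffusers API; in diffusers itself, one way to see which placeholder tokens are currently loaded is to inspect the pipeline tokenizer's added vocabulary.

```python
# Hypothetical sketch: track which textual-inversion tokens are loaded.
# Not the diffusers implementation, just the bookkeeping the question asks about.
embeddings = {}  # token -> embedding (placeholder values here)

def load_textual_inversion(token):
    embeddings[token] = "loaded"  # register the placeholder token

def unload_textual_inversion(token=None):
    if token is None:
        embeddings.clear()            # remove all token embeddings
    else:
        embeddings.pop(token, None)   # remove just one token

load_textual_inversion("<cat-toy>")
load_textual_inversion("<moe-bius>")
unload_textual_inversion("<moe-bius>")
print(sorted(embeddings))  # ['<cat-toy>']
```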
huggingface/diffusers | 3,021,676,287 | I_kwDOHa8MBc60Gx7_ | 11,422 | https://github.com/huggingface/diffusers/issues/11422 | https://api.github.com/repos/huggingface/diffusers/issues/11422 | validation prompt argument crashes the training [HiDream][Lora] | ### Describe the bug
Error when using validation prompt argument:
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
### Reproduction
> "accelerate launch --num_processes 1 --num_machines 1 "
... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-04-26T09:57:07Z | 2025-04-28T16:22:33Z | 2025-04-28T16:22:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yash2code | 20,199,248 | MDQ6VXNlcjIwMTk5MjQ4 | User | false |
huggingface/diffusers | 3,021,884,556 | I_kwDOHa8MBc60HkyM | 11,423 | https://github.com/huggingface/diffusers/issues/11423 | https://api.github.com/repos/huggingface/diffusers/issues/11423 | Lora Hotswap no clear documentation | Hello everyone.
Here is the scenario I have.
I have say 10 LoRAs that I would like to load and use depending on the request.
Option one:
using `load_lora_weights` - reads from the disk and moves to device: expensive operation
Option two:
load all LoRAs and set the weights of unused LoRAs to 0 with the `set_adapters` method.... | open | null | false | 2 | [
"stale"
] | [] | 2025-04-26T13:44:08Z | 2025-05-26T15:03:03Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vahe-toffee | 192,042,540 | U_kgDOC3JWLA | User | false |
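The "load once, toggle per request" pattern asked about in the record above can be sketched in plain Python; `loras` and `set_active` are illustrative names for the bookkeeping, not the diffusers API.

```python
# Hypothetical sketch: keep every adapter resident and select one per request,
# instead of reloading from disk each time. Names are illustrative only.
loras = {f"style_{i}": {"weight": 0.0} for i in range(10)}  # all preloaded

def set_active(name, scale=1.0):
    for adapter in loras.values():
        adapter["weight"] = 0.0       # zero out every adapter
    loras[name]["weight"] = scale     # enable only the requested one

set_active("style_3", 0.8)
active = [n for n, a in loras.items() if a["weight"] > 0]
print(active)  # ['style_3']
```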
huggingface/diffusers | 3,022,115,041 | I_kwDOHa8MBc60IdDh | 11,424 | https://github.com/huggingface/diffusers/issues/11424 | https://api.github.com/repos/huggingface/diffusers/issues/11424 | RuntimeError: group_norm in VAE decode for SDXL Masked Img2Img (even with ControlNets disabled & FP32 VAE/Latents) | ### Describe the bug
When using the StableDiffusionXLControlNetPipeline for masked image-to-image generation, a persistent RuntimeError occurs during the final VAE decoding step, specifically within torch.nn.functional.group_norm.
The error occurs even under the following simplified conditions:
The SDXL Refiner stag... | closed | completed | false | 3 | [
"bug",
"stale"
] | [] | 2025-04-26T16:39:00Z | 2025-05-30T16:34:23Z | 2025-05-30T16:34:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Haniubub | 134,706,537 | U_kgDOCAd1aQ | User | false |
huggingface/diffusers | 3,023,510,784 | I_kwDOHa8MBc60Nx0A | 11,430 | https://github.com/huggingface/diffusers/issues/11430 | https://api.github.com/repos/huggingface/diffusers/issues/11430 | [tests] help us test `torch.compile()` for impactful models | https://github.com/huggingface/diffusers/pull/11085 added a test for checking if there's any graph break or recompilation issue for `torch.compile`d model.
We should add this test to the most impactful models to ensure our code is `torch.compile` friendly and has the potential to benefit from it. So far, we test it f... | open | null | false | 9 | [
"Good second issue",
"performance",
"torch.compile"
] | [] | 2025-04-28T02:05:52Z | 2025-05-14T05:56:21Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 3,023,670,771 | I_kwDOHa8MBc60OY3z | 11,432 | https://github.com/huggingface/diffusers/issues/11432 | https://api.github.com/repos/huggingface/diffusers/issues/11432 | `.from_pretrained` `torch_dtype="auto"` argument not working a expected | ### Describe the bug
Hey dear diffusers team,
thanks a lot for all your hard work!
I would like to make use of the `torch_dtype="auto"` keyword argument when loading a model/pipeline as specified [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.t... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-04-28T04:31:26Z | 2025-05-13T01:42:37Z | 2025-05-13T01:42:37Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | johannaSommer | 26,871,807 | MDQ6VXNlcjI2ODcxODA3 | User | false |
huggingface/diffusers | 3,023,829,714 | I_kwDOHa8MBc60O_rS | 11,435 | https://github.com/huggingface/diffusers/issues/11435 | https://api.github.com/repos/huggingface/diffusers/issues/11435 | ImportError: cannot import name 'HiDreamImagePipeline' from 'diffusers' | ### Describe the bug
The getting an import error for Hidream
### Reproduction
Diffusers version: 0.33.1
```
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
from diffusers import UniPCMultistepScheduler, HiDreamImagePipeline
```
### Logs
```shell
Traceback (most recent call last):
File "/mnt/... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-04-28T06:19:24Z | 2025-04-28T06:52:40Z | 2025-04-28T06:52:39Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | omrastogi | 43,903,014 | MDQ6VXNlcjQzOTAzMDE0 | User | false |
huggingface/diffusers | 3,024,028,069 | I_kwDOHa8MBc60PwGl | 11,436 | https://github.com/huggingface/diffusers/issues/11436 | https://api.github.com/repos/huggingface/diffusers/issues/11436 | Hidream Inference is giving error: RuntimeError: expected mat1 and mat2 to have the same dtype, but got: c10::Half != c10::BFloat16 | ### Describe the bug
There appears to be an issue in the _get_clip_prompt_embeds [function](https://github.com/huggingface/diffusers/blob/0e3f2713c2c054053a244909e24e7eff697a35c0/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L338), possibly due to the dtype of self.text_encoder being float16 instead ... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-04-28T07:48:17Z | 2025-04-28T09:49:27Z | 2025-04-28T09:49:26Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | omrastogi | 43,903,014 | MDQ6VXNlcjQzOTAzMDE0 | User | false |
huggingface/diffusers | 3,024,037,481 | I_kwDOHa8MBc60PyZp | 11,437 | https://github.com/huggingface/diffusers/issues/11437 | https://api.github.com/repos/huggingface/diffusers/issues/11437 | no outputs(weights)/validation result images are created during training | ### Describe the bug
I'm running train_controlnet_flux.py in A100 with accelerate launch train_controlnet_flux.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
--conditioning_image_column=conditioning_image \
--image_column=image \
--caption_column=text \
--output_dir="output" ... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-04-28T07:52:16Z | 2025-04-29T08:31:45Z | 2025-04-29T08:31:45Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dedoogong | 12,013,568 | MDQ6VXNlcjEyMDEzNTY4 | User | false |
huggingface/diffusers | 1,436,641,480 | I_kwDOHa8MBc5VoWjI | 1,144 | https://github.com/huggingface/diffusers/issues/1144 | https://api.github.com/repos/huggingface/diffusers/issues/1144 | Please implement train_text_to_image_flax.py --report_to | Hi, @duongna21 is it possible to send data to wandb?
https://github.com/huggingface/diffusers/blob/bde4880c9cceada20b387d3110061c65249dabcc/examples/text_to_image/train_text_to_image_flax.py#L177
https://github.com/huggingface/diffusers/blob/bde4880c9cceada20b387d3110061c65249dabcc/examples/text_to_image/train_te... | closed | completed | false | 0 | [] | [] | 2022-11-04T20:57:15Z | 2022-11-06T14:45:52Z | 2022-11-06T14:45:52Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | camenduru | 54,370,274 | MDQ6VXNlcjU0MzcwMjc0 | User | false |
huggingface/diffusers | 3,026,072,155 | I_kwDOHa8MBc60XjJb | 11,441 | https://github.com/huggingface/diffusers/issues/11441 | https://api.github.com/repos/huggingface/diffusers/issues/11441 | Unable to load Flux LoRA trained with OneTrainer – NotImplementedError in `_convert_mixture_state_dict_to_diffusers` | ### Describe the bug
Loading a LoRA that was:
- trained with [OneTrainer](https://github.com/Nerogar/OneTrainer) (master, FLUX-1 mode)
- exported as a single .safetensors file (on [Civitai](https://civitai.com/models/1056401))
via `DiffusionPipeline.load_lora_weights()` (or indirectly through [Nunchaku](https://git... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-04-28T20:20:14Z | 2025-05-06T13:14:59Z | 2025-05-06T13:14:59Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | iamwavecut | 239,034 | MDQ6VXNlcjIzOTAzNA== | User | false |
huggingface/diffusers | 3,026,897,644 | I_kwDOHa8MBc60asrs | 11,443 | https://github.com/huggingface/diffusers/issues/11443 | https://api.github.com/repos/huggingface/diffusers/issues/11443 | model_cpu_offload failed in unidiffusers pipeline | ### Describe the bug
unidiffusers cpu_offload failed with the log in Reproduction column.
I took a deeper look, it seems that in this case, [self.text_decoder.encode](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L1288) will be called after `text_encode... | closed | completed | false | 5 | [
"bug",
"stale"
] | [] | 2025-04-29T03:25:29Z | 2026-01-09T17:54:59Z | 2026-01-09T17:54:59Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yao-matrix | 7,245,027 | MDQ6VXNlcjcyNDUwMjc= | User | false |
huggingface/diffusers | 3,027,282,200 | I_kwDOHa8MBc60cKkY | 11,447 | https://github.com/huggingface/diffusers/issues/11447 | https://api.github.com/repos/huggingface/diffusers/issues/11447 | _get_checkpoint_shard_files not work when HF_HUB_OFFLINE=1 | ### Describe the bug
diffusers 0.33.1
_get_checkpoint_shard_files() has a local_files_only param, but it doesn't work, because _get_checkpoint_shard_files() calls model_info(), which connects to huggingface.co to get the model info. And if you set HF_HUB_OFFLINE=1, you will get an error like this
```
huggingface_hub.errors.Offline... | closed | completed | false | 5 | [
"bug",
"stale"
] | [] | 2025-04-29T07:00:01Z | 2026-01-09T19:00:28Z | 2026-01-09T19:00:28Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | thomascatlee | 484,823 | MDQ6VXNlcjQ4NDgyMw== | User | false |
huggingface/diffusers | 3,027,313,933 | I_kwDOHa8MBc60cSUN | 11,448 | https://github.com/huggingface/diffusers/issues/11448 | https://api.github.com/repos/huggingface/diffusers/issues/11448 | fusing/stable-unclip-2-1-h-img2img cannot found | ### Describe the bug
Hi, @yiyixuxu, the model `fusing/stable-unclip-2-1-h-img2img`, which is needed by the https://github.com/huggingface/diffusers/blob/main/tests/pipelines/stable_unclip/test_stable_unclip_img2img.py#L217 test case, cannot be found; the same goes for `fusing/stable-unclip-2-1-l`, thx.
### Reproduction
N/A
### Logs
`... | closed | completed | false | 4 | [
"bug"
] | [
"DN6"
] | 2025-04-29T07:10:36Z | 2026-01-10T03:29:16Z | 2026-01-10T03:29:16Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yao-matrix | 7,245,027 | MDQ6VXNlcjcyNDUwMjc= | User | false |
huggingface/diffusers | 1,436,856,808 | I_kwDOHa8MBc5VpLHo | 1,145 | https://github.com/huggingface/diffusers/issues/1145 | https://api.github.com/repos/huggingface/diffusers/issues/1145 | [Flax] 🚨 0.7.0 not working 🚨 | ### Describe the bug

### Reproduction
_No response_
### Logs
_No response_
### System Info
TPU v3-8 | closed | completed | false | 11 | [
"bug"
] | [] | 2022-11-05T05:55:47Z | 2022-11-05T22:38:12Z | 2022-11-05T21:17:42Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | camenduru | 54,370,274 | MDQ6VXNlcjU0MzcwMjc0 | User | false |
huggingface/diffusers | 3,028,657,503 | I_kwDOHa8MBc60haVf | 11,454 | https://github.com/huggingface/diffusers/issues/11454 | https://api.github.com/repos/huggingface/diffusers/issues/11454 | Mochi OutOfMemory | It ran successfully (2-3 days ago) but now it doesn't.
I've tried the same command.
Command:
.....,"fps":24,"width":848,"height":480,"num_frames":80,"inference_steps":30
My env:
accelerate 1.2.1
addict 2.4.0
aiofiles 23.2.1
aiosignal 1.3.2
annotated-types ... | closed | completed | false | 7 | [
"stale"
] | [] | 2025-04-29T14:54:56Z | 2026-01-09T19:04:52Z | 2026-01-09T19:04:52Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | MehmetcanTozlu | 65,121,799 | MDQ6VXNlcjY1MTIxNzk5 | User | false |
huggingface/diffusers | 3,029,314,434 | I_kwDOHa8MBc60j6uC | 11,456 | https://github.com/huggingface/diffusers/issues/11456 | https://api.github.com/repos/huggingface/diffusers/issues/11456 | onnx export failure - timestep parameter with static value | ### Describe the bug
Hi,
Failing to export this diffusion policy model to onnx.
Originally opened this issue with the PyTorch onnx team but they have identified this to be an issue with the HF diffusers.
There is time step parameter that is passed in as a static value (not torch Tensor):
https://github.com/hugging... | open | null | false | 5 | [
"bug",
"stale"
] | [] | 2025-04-29T19:24:49Z | 2026-02-03T15:22:27Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kraza8 | 154,946,534 | U_kgDOCTxL5g | User | false |
huggingface/diffusers | 1,436,911,706 | I_kwDOHa8MBc5VpYha | 1,146 | https://github.com/huggingface/diffusers/issues/1146 | https://api.github.com/repos/huggingface/diffusers/issues/1146 | encrypted neural networks? | ### Model/Pipeline/Scheduler description
Hi,
there is influx of profile pics apps and many will come in the near future. I think that user privacy should be main concern for such apps. Some services are just deleting model after the train, others leave it on the server.I am thinking ... is there any reliable way to p... | closed | completed | false | 4 | [
"stale"
] | [] | 2022-11-05T09:00:32Z | 2022-12-12T14:53:28Z | 2022-12-12T14:53:28Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | batrlatom | 11,804,368 | MDQ6VXNlcjExODA0MzY4 | User | false |
huggingface/diffusers | 3,029,985,075 | I_kwDOHa8MBc60mecz | 11,460 | https://github.com/huggingface/diffusers/issues/11460 | https://api.github.com/repos/huggingface/diffusers/issues/11460 | CompVis/stable-diffusion-v1-4 is missing fp16 files | Getting the warning hereunder when loading diffusers pipeline
You are loading the variant fp16 from CompVis/stable-diffusion-v1-4 via `revision='fp16'`. This behavior is deprecated and will be removed in diffusers v1. One should use `variant='fp16'` instead. However, it appears that CompVis/stable-diffusion-v1-4 curren... | closed | completed | false | 3 | [
"stale"
] | [] | 2025-04-30T03:06:25Z | 2025-06-03T10:39:30Z | 2025-06-03T10:39:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ONPanday | 98,148,850 | U_kgDOBdmh8g | User | false |
huggingface/diffusers | 3,030,293,743 | I_kwDOHa8MBc60npzv | 11,464 | https://github.com/huggingface/diffusers/issues/11464 | https://api.github.com/repos/huggingface/diffusers/issues/11464 | Unable to load FLUX.1-Canny-dev-lora into FluxControlPipeline | ### Describe the bug
I tried to run the example code of FLUX.1-Canny-dev-lora from https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/flux#canny-control, but got this error:
RuntimeError: Error(s) in loading state_dict for FluxTransformer2DModel:
size mismatch for proj_out.lora_A.default_0.weight: cop... | closed | completed | false | 7 | [
"bug"
] | [] | 2025-04-30T06:40:45Z | 2025-05-06T07:05:38Z | 2025-05-06T07:05:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | laolongboy | 25,175,047 | MDQ6VXNlcjI1MTc1MDQ3 | User | false |
huggingface/diffusers | 3,030,435,430 | I_kwDOHa8MBc60oMZm | 11,466 | https://github.com/huggingface/diffusers/issues/11466 | https://api.github.com/repos/huggingface/diffusers/issues/11466 | Finetuning of flux or scratch training | I am new to this field and wanted to know if Is there any code available for training the flux from scratch or even finetuning the existing model. All I see is the dreambooth or Lora finetuning. | closed | completed | true | 3 | [
"stale"
] | [] | 2025-04-30T07:45:49Z | 2026-01-09T19:42:16Z | 2026-01-09T19:42:16Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | preethamp0197 | 170,499,459 | U_kgDOCimdgw | User | false |
huggingface/diffusers | 3,030,632,440 | I_kwDOHa8MBc60o8f4 | 11,468 | https://github.com/huggingface/diffusers/issues/11468 | https://api.github.com/repos/huggingface/diffusers/issues/11468 | [quant] about float8_e4m3_tensor | ### Describe the bug
```python
quantization_config = TorchAoConfig("float8wo_e4m3")
transformer = AutoModel.from_pretrained(
    "models/community_hunyuanvideo",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```
The following error was encountered
` quantization_... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-04-30T09:04:57Z | 2025-05-06T02:46:11Z | 2025-05-06T02:24:29Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | 1145284121 | 41,956,449 | MDQ6VXNlcjQxOTU2NDQ5 | User | false |
huggingface/diffusers | 3,031,708,903 | I_kwDOHa8MBc60tDTn | 11,470 | https://github.com/huggingface/diffusers/issues/11470 | https://api.github.com/repos/huggingface/diffusers/issues/11470 | Train_controlnet_sdxl.py and tensorboard log images | ### Describe the bug
When logging validation images to TensorBoard, one tile is always created, even if there are several validation prompts and images. I suggest making changes to the code:
```
for tracker in accelerator.trackers:
if tracker.name == "tensorboard":
i=0
for log in image... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-04-30T15:45:53Z | 2025-05-31T15:02:27Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | RomixERR | 13,717,583 | MDQ6VXNlcjEzNzE3NTgz | User | false |
huggingface/diffusers | 3,033,129,099 | I_kwDOHa8MBc60yeCL | 11,474 | https://github.com/huggingface/diffusers/issues/11474 | https://api.github.com/repos/huggingface/diffusers/issues/11474 | IndexError: index 999 is out of bounds for dimension 0 with size 51 | ```py
File "Z:\software\python11\Lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1230, in __call__
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^... | closed | completed | false | 4 | [] | [] | 2025-05-01T04:18:33Z | 2025-05-02T20:04:12Z | 2025-05-02T20:03:49Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | xalteropsx | 103,671,642 | U_kgDOBi3nWg | User | false |
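A minimal stand-in (not diffusers code) for the mismatch behind the traceback above: after `set_timesteps(50)`, an Euler-style scheduler holds a 51-entry table, so indexing it with a raw training timestep like 999 overflows. The names here are illustrative assumptions.

```python
# Simulate a scheduler configured for 50 inference steps: 51 sigma values
# (one extra terminal entry), indexable only by step position 0..50.
num_inference_steps = 50
sigmas = [1.0 - i / num_inference_steps for i in range(num_inference_steps + 1)]

def scheduler_step(index):
    return sigmas[index]  # raises IndexError for index >= 51

try:
    scheduler_step(999)  # a timestep id, not a position in the 51-entry table
except IndexError:
    print("index 999 is out of bounds for a 51-entry schedule")
```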
huggingface/diffusers | 1,437,100,748 | I_kwDOHa8MBc5VqGrM | 1,148 | https://github.com/huggingface/diffusers/issues/1148 | https://api.github.com/repos/huggingface/diffusers/issues/1148 | DPM-Solver++ |
**Describe the solution you'd like**
A new Sampler based on the paper: https://arxiv.org/abs/2206.00927
"DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. DPM-Solver is suitable for both discrete-time and continuous-ti... | closed | completed | false | 2 | [] | [] | 2022-11-05T16:46:06Z | 2022-11-06T01:14:23Z | 2022-11-06T01:14:23Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dblunk88 | 39,381,389 | MDQ6VXNlcjM5MzgxMzg5 | User | false |
huggingface/diffusers | 3,035,285,925 | I_kwDOHa8MBc606sml | 11,480 | https://github.com/huggingface/diffusers/issues/11480 | https://api.github.com/repos/huggingface/diffusers/issues/11480 | [New Project] Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment | **Is your feature request related to a problem? Please describe.**
With Runway and Midjourney, just now releasing updates tackling visual reference input, to be able to do character, location, and style consistency in narrative images and video, maybe this project could very well be the open-source answer to this chall... | open | null | false | 3 | [
"stale"
] | [] | 2025-05-02T06:27:57Z | 2026-01-09T15:21:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 3,037,484,832 | I_kwDOHa8MBc61DFcg | 11,486 | https://github.com/huggingface/diffusers/issues/11486 | https://api.github.com/repos/huggingface/diffusers/issues/11486 | enable_xformers_memory_efficient_attention in the training script | ### Describe the bug
With --enable_xformers_memory_efficient_attention enabled, the SDXL DreamBooth script crashes.
### Reproduction
--use_8bit_adam --push_to_hub --enable_xformers_memory_efficient_attention
### Logs
```shell
Traceback (most recent call last):
File "E:\diffusers\examples\dreambooth\train_dreamboot... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-05-03T16:05:10Z | 2025-06-03T15:02:54Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | master-key-g | 18,029,602 | MDQ6VXNlcjE4MDI5NjAy | User | false |
huggingface/diffusers | 3,037,939,461 | I_kwDOHa8MBc61E0cF | 11,488 | https://github.com/huggingface/diffusers/issues/11488 | https://api.github.com/repos/huggingface/diffusers/issues/11488 | Sincerely Request The Support for Flux PAG Pipeline | When the pag pipeline of flux can be supported? | open | null | false | 2 | [
"help wanted",
"Good second issue"
] | [] | 2025-05-04T11:12:05Z | 2025-05-16T04:53:52Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PlutoQyl | 119,106,116 | U_kgDOBxlqRA | User | false |
huggingface/diffusers | 3,038,251,765 | I_kwDOHa8MBc61GAr1 | 11,489 | https://github.com/huggingface/diffusers/issues/11489 | https://api.github.com/repos/huggingface/diffusers/issues/11489 | Error when I'm trying to train a Flux lora with train_dreambooth_lora_flux_advanced | ### Describe the bug
Hi! I'm trying to train my lora model with [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py) script.
When I try to train my model with the prior preservation flag, I get an error.
... | open | null | false | 6 | [
"bug",
"stale",
"training"
] | [] | 2025-05-04T21:19:23Z | 2026-02-03T15:22:18Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Mnwa | 8,988,331 | MDQ6VXNlcjg5ODgzMzE= | User | false |
huggingface/diffusers | 3,040,431,336 | I_kwDOHa8MBc61OUzo | 11,497 | https://github.com/huggingface/diffusers/issues/11497 | https://api.github.com/repos/huggingface/diffusers/issues/11497 | SD3 ControlNet Script (and others?): dataset preprocessing cache depends on unrelated arguments | ### Describe the bug
When using the SD3 ControlNet training script, the training dataset embeddings are precomputed and the results are given a fingerprint based on the input script arguments, which will cause subsequent runs to use the cached preprocessed dataset instead of recomputing the embeddings, which in my exp... | open | null | false | 11 | [
"bug",
"stale"
] | [] | 2025-05-05T18:13:22Z | 2026-02-03T15:22:14Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kentdan3msu | 46,754,450 | MDQ6VXNlcjQ2NzU0NDUw | User | false |
huggingface/diffusers | 3,040,869,411 | I_kwDOHa8MBc61P_wj | 11,499 | https://github.com/huggingface/diffusers/issues/11499 | https://api.github.com/repos/huggingface/diffusers/issues/11499 | [Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change. | ### Sys env:
OS Ubuntu 22.04
PyTorch 2.4.0+cu121
sana == 0.0.1
Diffusers == 0.34.0.dev0
### Reproduce:
Try the demo test code:
```
import torch
from diffusers import SanaPAGPipeline
pipe = SanaPAGPipeline.from_pretrained(
# "Efficient-Large-Model/Sana_1600M_512px_diffusers",
"Efficient-Large-Model/SANA1.5_1.6... | closed | completed | false | 11 | [] | [
"a-r-r-o-w"
] | 2025-05-05T21:26:51Z | 2025-08-08T23:44:59Z | 2025-08-08T23:44:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | David-Dingle | 16,317,224 | MDQ6VXNlcjE2MzE3MjI0 | User | false |
huggingface/diffusers | 3,047,707,594 | I_kwDOHa8MBc61qFPK | 11,519 | https://github.com/huggingface/diffusers/issues/11519 | https://api.github.com/repos/huggingface/diffusers/issues/11519 | Request support for MAGI-1 | ### Model/Pipeline/Scheduler description
MAGI-1 is a video generation model that has achieved stunning visual effects.
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementa... | open | null | false | 9 | [
"stale"
] | [] | 2025-05-08T03:36:40Z | 2026-01-09T15:21:13Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lavinal712 | 98,888,959 | U_kgDOBeTs_w | User | false |
huggingface/diffusers | 1,437,294,266 | I_kwDOHa8MBc5Vq166 | 1,152 | https://github.com/huggingface/diffusers/issues/1152 | https://api.github.com/repos/huggingface/diffusers/issues/1152 | CLIP Guided with DDIM support | I wanted to test out the CLIP guided pipeline and noticed it does not support ETA. DDIM with ETA on the CLIP guided pipeline might yield some good results. | closed | completed | false | 2 | [] | [
"patil-suraj"
] | 2022-11-06T05:40:32Z | 2022-11-09T10:46:14Z | 2022-11-09T10:46:14Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dblunk88 | 39,381,389 | MDQ6VXNlcjM5MzgxMzg5 | User | false |
huggingface/diffusers | 3,050,655,800 | I_kwDOHa8MBc611VA4 | 11,528 | https://github.com/huggingface/diffusers/issues/11528 | https://api.github.com/repos/huggingface/diffusers/issues/11528 | [Docs Update] AutoPipelineForInpainting.from_pretrained fails to load runwayml/stable-diffusion-inpainting without variant="fp16" | ### Describe the bug
I was following this [part of the docs](https://huggingface.co/docs/diffusers/v0.28.1/using-diffusers/inpaint?regular-specific=runwayml%2Fstable-diffusion-inpainting&inpaint=runwayml%2Fstable-diffusion-inpaint#non-inpaint-specific-checkpoints), the code above [configure pipeline parameters](https:... | closed | completed | false | 2 | [
"bug",
"stale"
] | [] | 2025-05-09T03:41:00Z | 2026-01-09T20:32:15Z | 2026-01-09T20:32:15Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Player256 | 92,082,372 | U_kgDOBX0QxA | User | false |
huggingface/diffusers | 1,437,311,901 | I_kwDOHa8MBc5Vq6Od | 1,153 | https://github.com/huggingface/diffusers/issues/1153 | https://api.github.com/repos/huggingface/diffusers/issues/1153 | Why does train_text_to_image.py perform so differently from the CompVis script? | I posted about this on the forum but didn't get any useful feedback - would love to hear from someone who knows the in and outs of the diffusers codebase!
https://discuss.huggingface.co/t/discrepancies-between-compvis-and-diffuser-fine-tuning/25556
To summarize the post: the `train_text_to_image.py` script and or... | closed | completed | false | 18 | [] | [
"patil-suraj"
] | 2022-11-06T07:13:41Z | 2023-01-09T01:57:36Z | 2022-12-30T20:49:48Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | john-sungjin | 12,533,291 | MDQ6VXNlcjEyNTMzMjkx | User | false |
huggingface/diffusers | 3,053,955,566 | I_kwDOHa8MBc62B6nu | 11,536 | https://github.com/huggingface/diffusers/issues/11536 | https://api.github.com/repos/huggingface/diffusers/issues/11536 | Prompt adherence for FluxPipeline is broken | ### Describe the bug
For prompts much shorter than `max_sequence_length` (which is most prompts, since the default is 512), prompts are not followed because the attention calculation spends most of its attention on the padding tokens of the encoder hidden state.
Prompt: Portrait photo of an angry man
First picture: Fl... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-05-10T09:27:45Z | 2025-05-10T14:49:07Z | 2025-05-10T14:41:53Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dxqb | 183,307,934 | U_kgDOCu0Ong | User | false |
huggingface/diffusers | 1,437,318,470 | I_kwDOHa8MBc5Vq71G | 1,154 | https://github.com/huggingface/diffusers/issues/1154 | https://api.github.com/repos/huggingface/diffusers/issues/1154 | Why does the performance get worse after I converted stable diffusion checkpoint to diffuses? | I fine tuned a stable diffusion model and saved the check point which is ~14G. And then I used the script in this repo [convert_original_stable_diffusion_to_diffusers.py](https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py) to convert it to diffusers, which is gr... | closed | completed | false | 14 | [
"stale"
] | [] | 2022-11-06T07:45:26Z | 2023-07-08T01:29:35Z | 2022-12-24T15:03:24Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rorepmezzz | 116,416,574 | U_kgDOBvBgPg | User | false |
huggingface/diffusers | 3,057,321,250 | I_kwDOHa8MBc62OwUi | 11,540 | https://github.com/huggingface/diffusers/issues/11540 | https://api.github.com/repos/huggingface/diffusers/issues/11540 | FluxPipeline produces noise when .enable_vae_slicing is used, and FluxImage2ImagePipeline does not support .enable_vae_slicing. | ### Describe the bug
When using the flux pipeline, if vae slicing is enabled, it produces noise instead of images, and in the image2image pipeline it is not usable at all.
### Reproduction
```python
from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKL, FluxTransformer2DModel, FluxPipeline, utils, Flu... | open | null | false | 7 | [
"bug",
"stale"
] | [] | 2025-05-12T15:14:12Z | 2026-02-03T15:22:02Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Meatfucker | 74,834,323 | MDQ6VXNlcjc0ODM0MzIz | User | false |
huggingface/diffusers | 3,058,535,600 | I_kwDOHa8MBc62TYyw | 11,542 | https://github.com/huggingface/diffusers/issues/11542 | https://api.github.com/repos/huggingface/diffusers/issues/11542 | What's the difference between 'example/train_text_to_image_lora.py' and 'example/research_projects/lora/train_text_to_image_lora.py' ? | I want to use the "--train_text_encoder" argument, but it only exists in the latter script. | closed | completed | false | 2 | [] | [] | 2025-05-13T01:41:19Z | 2025-06-10T20:35:10Z | 2025-06-10T20:35:10Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | night-train-zhx | 68,258,352 | MDQ6VXNlcjY4MjU4MzUy | User | false |
huggingface/diffusers | 3,059,193,218 | I_kwDOHa8MBc62V5WC | 11,547 | https://github.com/huggingface/diffusers/issues/11547 | https://api.github.com/repos/huggingface/diffusers/issues/11547 | Flux fill dev with controlnet support | **Is your feature request related to a problem? Please describe.**
Flux fill dev is a really good model. It is really useful for in-context editing, as it maintains subject properties extremely well. But it does not offer control over the diffusion process.
**Describe the solution you'd like.**
Ideally plug and p... | closed | completed | false | 1 | [] | [] | 2025-05-13T08:24:10Z | 2025-05-19T08:29:51Z | 2025-05-19T08:29:50Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hardikdava | 39,372,750 | MDQ6VXNlcjM5MzcyNzUw | User | false |
huggingface/diffusers | 3,063,723,839 | I_kwDOHa8MBc62nLc_ | 11,555 | https://github.com/huggingface/diffusers/issues/11555 | https://api.github.com/repos/huggingface/diffusers/issues/11555 | `device_map="auto"` supported for diffusers pipelines? | ### Describe the bug
Hey dear diffusers team,
for `DiffusionPipline`, as I understand (hopefully correctly) from [this part of the documentation](https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.device_map), it should be possible to specify `device_ma... | open | null | false | 6 | [
"bug",
"stale"
] | [] | 2025-05-14T16:49:32Z | 2026-02-03T15:21:58Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | johannaSommer | 26,871,807 | MDQ6VXNlcjI2ODcxODA3 | User | false |
huggingface/diffusers | 1,437,422,361 | I_kwDOHa8MBc5VrVMZ | 1,156 | https://github.com/huggingface/diffusers/issues/1156 | https://api.github.com/repos/huggingface/diffusers/issues/1156 | Training doesnt work | Hi,
I'm having problems executing the training file. My bug report says:
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_cpu_threads_per_process` was set to `12` to improve out-of-box performance
To avoid this warning pass in values for each of the ... | closed | completed | false | 4 | [] | [] | 2022-11-06T14:23:33Z | 2022-11-08T09:34:59Z | 2022-11-08T09:34:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | renzixilef | 102,833,185 | U_kgDOBiEcIQ | User | false |