# vrgamedevgirl84/Wan14BT2VFusionX GGUF Conversion
This is a GGUF conversion of vrgamedevgirl84/Wan14BT2VFusionX with additional VACE functionality.
All quantized versions were created from the base FP16 model `Wan14BT2VFusioniX_fp16_.safetensors` using the conversion scripts provided by city96 in the ComfyUI-GGUF GitHub repository.
The process involved three steps: patching and converting the safetensors model to an FP16 GGUF, quantizing it, and finally applying the 5D tensor fixes.
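As a rough sketch of the three steps above, assuming the tool layout of city96's ComfyUI-GGUF repository (the script names, flags, and output filenames below are assumptions for illustration, not commands confirmed by this card):

```shell
# Sketch only -- script names and file names are assumptions based on the
# ComfyUI-GGUF repository layout, not taken from this model card.

# 1. Patch and convert the safetensors checkpoint to an FP16 GGUF.
python tools/convert.py --src Wan14BT2VFusioniX_fp16_.safetensors

# 2. Quantize the FP16 GGUF using the repo's patched llama.cpp quantizer.
./llama-quantize Wan14BT2VFusioniX_fp16_-F16.gguf \
    Wan14BT2VFusionX-VACE-Q4_K_M.gguf Q4_K_M

# 3. Re-apply the 5D tensors that the quantizer cannot handle.
python tools/fix_5d_tensors.py --src Wan14BT2VFusionX-VACE-Q4_K_M.gguf
```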
## Usage
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | Wan-14B-T2V-FusionX-VACE-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | Safetensors |
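One way to fetch a quantized file into the expected folder is the Hugging Face CLI; the specific `.gguf` filename below is a hypothetical example (pick the quantization level you actually want from this repo), and the path assumes you run the command from your ComfyUI root:

```shell
# Hypothetical filename -- substitute the quant you want from this repo.
huggingface-cli download QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF \
    Wan-14B-T2V-FusionX-VACE-Q4_K_M.gguf \
    --local-dir ComfyUI/models/unet
```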
## Notes
As this is a quantized model, not a finetune, all of the original license terms and restrictions still apply.
## Reference
- For an overview of quantization types, please see the LLaMA 3 8B Scoreboard quantization chart.
## Model tree for QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF

Base model: Wan-AI/Wan2.1-T2V-14B