vrgamedevgirl84/Wan14BT2VFusionX GGUF Conversion

This repository contains a direct GGUF conversion of the vrgamedevgirl84/Wan14BT2VFusionX model.

All quantized versions were created from the base FP16 model Wan14BT2VFusioniX_fp16_.safetensors using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.

The process involved first converting the safetensors model to an FP16 GGUF, then quantizing it, and finally applying the 5D fix.
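The three steps above can be sketched roughly as follows. This is an illustrative outline only: the script names (convert.py, lcpp.patch) follow city96's ComfyUI-GGUF tools directory, the quantization type Q4_K_M is just an example, and the exact 5D-fix invocation should be taken from the ComfyUI-GGUF tools README rather than from this sketch.

```shell
# Step 1: convert the safetensors checkpoint to an FP16 GGUF
# (convert.py lives in the ComfyUI-GGUF tools directory)
python convert.py --src Wan14BT2VFusioniX_fp16_.safetensors

# Step 2: quantize the FP16 GGUF using a llama.cpp build patched
# with lcpp.patch from the same tools directory (Q4_K_M as an example)
./llama-quantize Wan14BT2VFusioniX_fp16_-F16.gguf \
    Wan14BT2VFusioniX_fp16_-Q4_K_M.gguf Q4_K_M

# Step 3: apply the 5D fix, re-inserting the 5D tensors that the
# quantizer cannot process — see the ComfyUI-GGUF tools README for
# the exact command for your version of the tools
```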

Usage

  • The model files are compatible with the ComfyUI-GGUF custom node.
  • Place the model files in the directory:
    ComfyUI/models/unet
  • For detailed installation instructions, please refer to the ComfyUI-GGUF GitHub repository.
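To fetch a quantized file and place it in the directory above in one step, something like the following works; the filename shown is a hypothetical example — check the repository's file list for the actual names of the quantized variants.

```shell
# huggingface-cli ships with the huggingface_hub package
# (pip install -U huggingface_hub); the .gguf filename below is
# an example — substitute one from the repo's file list
huggingface-cli download lym00/Wan14BT2VFusionX_fp16_GGUF \
    Wan14BT2VFusioniX_fp16_-Q4_K_M.gguf \
    --local-dir ComfyUI/models/unet
```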


Model details

  • Model size: 14.3B params
  • Architecture: wan
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit

