DolphinVision 72b - 3.5bpw EXL2 🐬
Base model: cognitivecomputations/dolphin-vision-72b
The language model is quantized to 3.5 bpw, with the FP16 vision layers merged back in.

Text generation works in exllamav2/tabbyAPI. Vision input does not work yet.

N.B. the architecture in config.json has been changed from "BunnyQwenForCausalLM" to "Qwen2ForCausalLM" to prevent the model from being loaded as a llama architecture in tabbyAPI.
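For reference, the edit amounts to changing the `architectures` field in config.json; a minimal fragment (all other fields omitted) would look like:

```json
{
  "architectures": [
    "Qwen2ForCausalLM"
  ]
}
```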