# OsirisCortex-v6-GGUF
GGUF quantized version of OsirisCortex-v6 (Qwen3.5-9B Identity Fusion, abliterated). For use with llama.cpp on Apple Silicon.
## Files

| File | Description |
|---|---|
| OsirisCortex-v6-Q6_K.gguf | Main model weights, Q6_K quantization |
| OsirisCortex-v6-mmproj-BF16.gguf | Multimodal projector, BF16 precision |
## Usage with llama.cpp

```shell
./llama-server -m <model-file>.gguf -ngl 99
```
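A fuller invocation might look like the sketch below, which downloads both files and attaches the multimodal projector. The `<repo-id>` placeholder, the local filenames, and the availability of the `--mmproj` flag (present in recent llama.cpp builds) are assumptions; adjust to your setup.

```shell
# Fetch the quantized weights and the projector (assumes the Hugging Face CLI is installed;
# replace <repo-id> with this model's actual repository id).
huggingface-cli download <repo-id> \
  OsirisCortex-v6-Q6_K.gguf OsirisCortex-v6-mmproj-BF16.gguf \
  --local-dir .

# Serve the model. -ngl 99 offloads all layers to the GPU (Metal on Apple Silicon);
# --mmproj attaches the BF16 vision projector so the server accepts image inputs.
./llama-server \
  -m OsirisCortex-v6-Q6_K.gguf \
  --mmproj OsirisCortex-v6-mmproj-BF16.gguf \
  -ngl 99 \
  --port 8080
```

Once running, the server exposes an OpenAI-compatible HTTP API on the chosen port; text-only use can omit `--mmproj`.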