# ZygAI OSS Flash GGUF
This repository is the GGUF distribution of ZygAI OSS Flash, released as a legacy milestone by ZygAI's creator at age 23.
The goal is simple: make the model easy to run locally, easy to share, and easy to preserve.
## Legacy Note
ZygAI OSS Flash GGUF is part of the same legacy release as ZygAI/ZygAI_OSS_Flash:
- open-source by intention
- local-first friendly
- shared so the work can outlive its original timeline
If this model helps your project, teaching, or experiments, that is exactly what this release was meant for.
## Available GGUF files
- zygai_flash-f16.gguf
- zygai_flash-q2_k.gguf
- zygai_flash-q3_k_s.gguf
- zygai_flash-q3_k_m.gguf
- zygai_flash-q3_k_l.gguf
- zygai_flash-q4_0.gguf
- zygai_flash-q4_k_s.gguf
- zygai_flash-q4_k_m.gguf
- zygai_flash-q5_0.gguf
- zygai_flash-q5_k_s.gguf
- zygai_flash-q5_k_m.gguf
- zygai_flash-q6_k.gguf
- zygai_flash-q8_0.gguf
## Recommended quant
Start with:
`q4_k_m` (best default quality/size/speed balance)
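If you are unsure which file fits your machine, a rough size estimate helps. The sketch below is a back-of-the-envelope aid only: the bits-per-weight figures are common approximations for llama.cpp quant types, and the parameter count is a placeholder, not a published number for ZygAI OSS Flash.

```python
# Rough GGUF size estimator. The bits-per-weight figures are approximations
# for llama.cpp quant types; the parameter count used below is a placeholder,
# not the real count for ZygAI OSS Flash.
APPROX_BITS_PER_WEIGHT = {
    "f16": 16.0,
    "q8_0": 8.5,
    "q6_k": 6.6,
    "q5_k_m": 5.7,
    "q4_k_m": 4.8,
    "q3_k_m": 3.9,
    "q2_k": 2.6,
}  # ordered from highest quality to smallest file


def approx_file_size_gb(n_params: float, quant: str) -> float:
    """Estimate the GGUF file size in GB from parameter count and quant type."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1e9


def largest_quant_that_fits(n_params: float, budget_gb: float):
    """Pick the highest-quality listed quant whose file fits the disk budget."""
    for quant in APPROX_BITS_PER_WEIGHT:
        if approx_file_size_gb(n_params, quant) <= budget_gb:
            return quant
    return None


if __name__ == "__main__":
    params = 7e9  # placeholder parameter count
    for q in APPROX_BITS_PER_WEIGHT:
        print(f"{q:>7}: ~{approx_file_size_gb(params, q):.1f} GB")
    print("fits in 6 GB:", largest_quant_that_fits(params, 6.0))
```

Treat the output as a ballpark: real GGUF files also carry metadata and non-quantized tensors, so actual sizes run slightly larger.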
## Personal Thanks
A huge, heartfelt thank you to Ruby2001, 0daysophie, italian_tech_person, and Julia's Tech Spot.
Your help was not just technical; it was life-changing.
Without you, my life would have been dark.
Thank you for standing with me and helping make ZygAI OSS Flash real.
## llama.cpp run example
```sh
# The prompt is Lithuanian for: "Hello! Tell me a short, inspiring thought."
./llama.cpp/build/bin/llama-cli \
  -m zygai_flash-q4_k_m.gguf \
  -p "Labas! Papasakok trumpą įkvepiančią mintį." \
  -n 256
```