Post 2291
Uncensored, Heretic GGUF quants of GLM 4.7 (30B-A3B), built with the correct llama.cpp and all updates; NEO-CODE Imatrix with 16-bit output tensors.
Also specialized quants (balanced for this model); all quants use the NEO-CODE Imatrix with a 16-bit output tensor.
DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF
"Reg quants, non-heretic" :
Also 16 bit ot, NEO-CODE Imatrix and specialized:
DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
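A minimal usage sketch (not part of the original post): it downloads one GGUF from the non-Heretic repo with huggingface_hub and loads it with llama-cpp-python. The GGUF filename below is a placeholder assumption; check the repo's file list for the actual quant names and sizes.

```python
# Sketch: fetch one quant and run a quick chat completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- pick a real one from the repo's "Files" tab.
model_path = hf_hub_download(
    repo_id="DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF",
    filename="GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-Q4_K_M.gguf",
)

# Load the quant; adjust n_ctx / n_gpu_layers for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about quantization."}]
)
print(out["choices"][0]["message"]["content"])
```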