---
base_model:
- qingy2024/GRMR-V3-Q4B
---

# Quantized GGUF models for GRMR-V3-Q4B

This repository contains GGUF quantized versions of [qingy2024/GRMR-V3-Q4B](https://huggingface.co/qingy2024/GRMR-V3-Q4B).

## Available quantizations

- FP16 (full precision)
- Q2_K
- Q3_K_L
- Q3_K_M
- Q3_K_S
- Q4_K_M
- Q4_K_S
- Q5_K_M
- Q5_K_S
- Q6_K
- Q8_0

## Original model

This is a quantized version of [qingy2024/GRMR-V3-Q4B](https://huggingface.co/qingy2024/GRMR-V3-Q4B).

Generated on Wed Jun 4 17:08:22 UTC 2025
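
## Example usage

The GGUF files can be loaded with any GGUF-compatible runtime such as llama.cpp. Below is a minimal sketch using llama-cpp-python; the local filename, context size, and prompt are illustrative assumptions and are not taken from this repository's metadata, so substitute the quantization you actually downloaded and the prompt format the original model expects.

```python
# Minimal sketch: running one of the GGUF quantizations with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="GRMR-V3-Q4B-Q4_K_M.gguf",  # assumed local path to a downloaded quant
    n_ctx=4096,                            # context window; adjust as needed
)

# Plain completion call; the prompt below is only an example.
output = llm(
    "Correct the grammar of the following sentence: She go to school every days.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

Larger quantizations (Q6_K, Q8_0) trade more disk and memory for output closer to the FP16 reference, while the smaller K-quants (Q2_K, Q3_K_*) reduce footprint at some cost in quality.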