For now this is just a static Q5_K_M quant; more quantizations will follow.

Format: GGUF
Model size: 173B params
Architecture: minimax-m2

Available quantizations: 3-bit, 4-bit, 5-bit
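
Below is a minimal sketch of downloading one quant and running it with llama-cpp-python, assuming your llama.cpp build supports the minimax-m2 architecture. The GGUF filename is a guess, so check the repository's file listing for the actual name (a quant this large is often split into several parts).

```python
# Minimal sketch: fetch a single GGUF quant from this repo and run one prompt
# with llama-cpp-python. The filename is hypothetical -- check the actual file
# listing; a quant this large may be split into multiple .gguf parts.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="ilintar/MiniMax-M2-REAP-172B-A10B-GGUF",
    filename="MiniMax-M2-REAP-172B-A10B-Q5_K_M.gguf",  # hypothetical name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU when there is room
)

result = llm("Summarize what a pruned MoE model is in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```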
