---
license: other
inference: false
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
base_model_relation: quantized
tags:
- p24
- gguf
- llmware-chat
- green
---
# mistral-3.2-24b-gguf

**mistral-3.2-24b-gguf** is a GGUF Q4_K_M quantized version of mistral-3.2-24b, a 24-billion-parameter general-purpose chat/instruct model from Mistral AI.
## Model Description
- **Developed by:** mistralai
- **Quantized by:** llmware
- **Model type:** mistral-3.2-24b
- **Parameters:** 24 billion
- **Model Parent:** mistralai/Mistral-Small-3.2-24B-Instruct-2506
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** General use
- **Quantization:** int4 (Q4_K_M)
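
## Getting Started

A minimal sketch of running the quantized file with llama.cpp. The repository id and GGUF filename below are assumptions — check the model page's file list for the exact names before downloading.

```shell
# Fetch the quantized weights (repo id and filename are assumed — verify on the model page)
huggingface-cli download llmware/mistral-3.2-24b-gguf \
  mistral-3.2-24b.gguf --local-dir .

# Run a one-shot prompt with llama.cpp; a 24B Q4_K_M file needs on the
# order of 14 GB of RAM/VRAM plus context overhead
./llama-cli -m mistral-3.2-24b.gguf \
  -p "Explain GGUF quantization in two sentences." \
  -n 256 --temp 0.7
```

Any GGUF-compatible runtime (llama.cpp, llama-cpp-python, Ollama, LM Studio) can load the file the same way.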