---
license: apache-2.0
---
# CorticalStack/mistral-7b-openhermes-awq
CorticalStack/mistral-7b-openhermes-awq is an AWQ quantised version of [CorticalStack/mistral-7b-openhermes-sft](https://huggingface.co/CorticalStack/mistral-7b-openhermes-sft).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support of all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code (see the example sketch below)
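As a minimal sketch of the Transformers route, the model can be loaded like any other causal LM; this assumes a CUDA-capable GPU and the `autoawq` package installed alongside `transformers>=4.35.0`, and the prompt is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/mistral-7b-openhermes-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers detects the AWQ quantisation config and loads the
# 4-bit weights directly; device_map places them on the available GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    low_cpu_mem_usage=True,
)

prompt = "Explain AWQ quantisation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```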
### AWQ configuration
- Zero point: True
- Q group size: 128
- W bit: 4
- Version: GEMM
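These settings correspond to AutoAWQ's `quant_config` dictionary. The sketch below shows how a quantisation with the same configuration could be reproduced from the source SFT model; the output directory name is illustrative, not the one actually used for this card:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

source_model = "CorticalStack/mistral-7b-openhermes-sft"
quant_path = "mistral-7b-openhermes-awq"  # illustrative output directory

# The configuration listed above, in AutoAWQ's quant_config format.
quant_config = {
    "zero_point": True,   # asymmetric quantisation with a zero point
    "q_group_size": 128,  # weights quantised in groups of 128
    "w_bit": 4,           # 4-bit weights
    "version": "GEMM",    # GEMM kernel variant
}

model = AutoAWQForCausalLM.from_pretrained(source_model)
tokenizer = AutoTokenizer.from_pretrained(source_model)

# Quantise with AutoAWQ's default calibration data, then save.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```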