---
base_model: Locutusque/llama-3-neural-chat-v2.2-8B
inference: false
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
library_name: transformers
quantized_by: Suparious
---

# Locutusque/llama-3-neural-chat-v2.2-8B AWQ
- Model creator: [Locutusque](https://huggingface.co/Locutusque)
- Original model: [llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
## Model Details
I fine-tuned Llama 3 8B using an approach similar to Intel's neural-chat language model, with slightly modified data sources so the model is stronger in coding, math, and writing. I use both SFT and DPO-Positive (sketched after the details below); DPO-Positive dramatically improves performance over standard DPO.
- Developed by: Locutusque
- Model type: Built with Meta Llama 3
- Language(s) (NLP): Primarily English
- License: [Llama 3 license](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
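For the unfamiliar: DPO-Positive (DPOP) augments the standard DPO objective with a penalty that keeps the policy from pushing the likelihood of the *preferred* completion below the reference model's, a known failure mode of plain DPO. Below is a minimal PyTorch sketch of the objective; the `beta` and `lam` values, and the use of per-sequence summed log-probabilities, are illustrative assumptions rather than this model's actual training configuration.

```python
import torch
import torch.nn.functional as F

def dpop_loss(pi_chosen_logps, pi_rejected_logps,
              ref_chosen_logps, ref_rejected_logps,
              beta=0.1, lam=50.0):
    # Standard DPO margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = (pi_chosen_logps - ref_chosen_logps) - (pi_rejected_logps - ref_rejected_logps)
    # DPOP penalty: nonzero only when the policy assigns the chosen
    # response *lower* probability than the reference model does.
    penalty = torch.clamp(ref_chosen_logps - pi_chosen_logps, min=0.0)
    return -F.logsigmoid(beta * (margin - lam * penalty)).mean()

# Toy usage with per-sequence log-probabilities for one preference pair:
pi_w, pi_l = torch.tensor([-12.0]), torch.tensor([-20.0])
ref_w, ref_l = torch.tensor([-10.0]), torch.tensor([-18.0])
print(dpop_loss(pi_w, pi_l, ref_w, ref_l))
```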
## About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.
It is supported by:
- text-generation-webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later, with support for all model types
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
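As a minimal sketch of the Transformers route (version 4.35.0 or later, with the `autoawq` package installed), assuming a hypothetical repo id for wherever these AWQ weights are hosted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual location of the AWQ weights.
model_id = "Suparious/llama-3-neural-chat-v2.2-8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers >= 4.35.0 detects AWQ checkpoints automatically when
# autoawq is installed; no extra quantization config is required.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain AWQ in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```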