---
base_model: Locutusque/llama-3-neural-chat-v2.2-8B
inference: false
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
library_name: transformers
quantized_by: Suparious
---
# Locutusque/llama-3-neural-chat-v2.2-8B AWQ

- Model creator: [Locutusque](https://huggingface.co/Locutusque)
- Original model: [llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/6XQuhjWNr6C4RbU9f1k99.png)

## Model Details

I fine-tuned Llama 3 8B using an approach similar to Intel's neural chat language model, with slightly modified data sources to make it stronger in coding, math, and writing. Training used both SFT and DPO-Positive; DPO-Positive dramatically improves performance over standard DPO.

- **Developed by:** Locutusque
- **Model type:** Built with Meta Llama 3
- **Language(s) (NLP):** Primarily English
- **License:** [Llama 3 license](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later (supports all model types)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the example below)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
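
As a minimal sketch of loading the quantized model from Python with Transformers (assuming `autoawq`, `transformers>=4.35.0`, and `accelerate` are installed and an NVIDIA GPU is available; the repository id below is a placeholder for this quant's Hub path):

```python
# Minimal usage sketch, assuming: pip install autoawq "transformers>=4.35.0" accelerate
# and an NVIDIA GPU. Replace the placeholder repo id with this model's Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-AWQ-repo-id>"  # placeholder, not an actual repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the 4-bit AWQ weights on the available GPU
    torch_dtype="auto",
)

# Llama 3 chat models expect the chat template to be applied to the prompt.
messages = [{"role": "user", "content": "Explain AWQ quantization in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same model can also be served with vLLM or TGI, or loaded through AutoAWQ directly; the Transformers path above is simply the most portable option.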