---
license: apache-2.0
pipeline_tag: text-generation
library_name: mlx
tags:
- vllm
- mlx
base_model: openai/gpt-oss-20b
---
**See gpt-oss-20b 6.5bit MLX in action - [demonstration video](https://youtu.be/mlpFG8e_fLw)**
*In our testing, the q6.5-bit quant typically achieves a perplexity of 1.128, matching q8.*

| Quantization | Perplexity |
|:------------:|:----------:|
| **q2** | 41.293 |
| **q3** | 1.900 |
| **q4** | 1.168 |
| **q6** | 1.128 |
| **q8** | 1.128 |
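The card does not describe the evaluation harness behind these numbers, so the following is only a minimal sketch of how a comparable per-token perplexity could be measured with mlx-lm; the model path and the evaluation text are stand-in assumptions, not the setup used for the table above.

```python
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

# Hypothetical path: substitute this repo's Hugging Face id or a local copy.
model, tokenizer = load("path/to/gpt-oss-20b-6.5bit-mlx")

# Stand-in evaluation text; the card does not say which corpus was used.
text = "MLX is an array framework for machine learning on Apple silicon."
tokens = mx.array(tokenizer.encode(text))[None]  # shape: (1, seq_len)

logits = model(tokens[:, :-1])                   # next-token logits
loss = nn.losses.cross_entropy(
    logits.reshape(-1, logits.shape[-1]),        # flatten to (seq_len-1, vocab)
    tokens[:, 1:].reshape(-1),                   # shifted targets
    reduction="mean",
)
print(f"perplexity: {mx.exp(loss).item():.3f}")
```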
## Usage Notes
* Tested to run with the [Inferencer app](https://inferencer.com); for scripted use with mlx-lm, see the first sketch after these notes
* Memory usage: ~17 GB (down from the ~46 GB required by the native MXFP4 format)
* Expect ~100 tokens/s
* Quantized with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.26; a stock-conversion approximation is sketched after these notes
* For more details, see the [demonstration video](https://youtu.be/mlpFG8e_fLw) or visit [OpenAI gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
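For readers who prefer a script over the Inferencer app, here is a minimal loading and generation sketch with mlx-lm (assuming a recent mlx-lm release with gpt-oss support; the model path and prompt are placeholders, not values from this card):

```python
from mlx_lm import load, generate

# Placeholder id: substitute this repo's Hugging Face id or a local path.
model, tokenizer = load("path/to/gpt-oss-20b-6.5bit-mlx")

# Format the request with the model's chat template.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what 6.5-bit quantization trades off."}],
    add_generation_prompt=True,
    tokenize=False,
)
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```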
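As for the quantization step itself: stock mlx-lm only exposes uniform bit-widths, so the uniform 6-bit `convert` call below is merely an approximation of the modified-MLX 6.5-bit mix used for this repo, and it is not confirmed that a stock convert handles the MXFP4 base checkpoint; the output path is illustrative.

```python
from mlx_lm import convert

# Approximation only: the actual 6.5-bit mix came from a modified MLX 0.26.
convert(
    "openai/gpt-oss-20b",          # base model named on this card
    mlx_path="gpt-oss-20b-6bit-mlx",  # illustrative output directory
    quantize=True,
    q_bits=6,
    q_group_size=64,
)
```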