---
library_name: mlx
pipeline_tag: text-generation
inference: false
license: apache-2.0
base_model: Qwen/Qwen3-Next-80B-A3B-Instruct
base_model_relation: quantized
tags:
- apple-silicon
- metal
- arm64
- 4-bit
- group-size-64
- mlx
- mlx-lm
- qwen
- halley-ai
---

# Qwen3-Next-80B-A3B-Instruct — MLX 4-bit (group size 64)

**Summary.** This is a 4-bit (Q4) MLX quantization of Qwen3-Next-80B-A3B-Instruct with group size 64, built for Apple Silicon with Metal acceleration.

- Base model: `Qwen/Qwen3-Next-80B-A3B-Instruct` (apache-2.0)
- Quantization: MLX Q4, `q_group_size=64` (some tensors may remain 16-bit for stability)
- Files: MLX weight shards + `config.json`; tokenizer files included for drop-in use
- Intended use: lightweight local inference on M-series Macs
- Not intended for: safety-critical decisions; outputs may be inaccurate or biased

## Requirements

Runs on Apple Silicon (M1 or newer) with macOS ≥ 13.5 via MLX (Metal).

- Not supported: Intel macOS / Linux / Windows (consider a GGUF build + llama.cpp instead).
- Memory guidance: large unified memory recommended (e.g., 64 GB+; 96 GB provides comfortable headroom). The effective GPU working set is capped by Metal's budget, so keep 5–10% headroom; see the snippet after this list for checking the budget.
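
You can check the budget MLX actually sees at runtime. A minimal sketch, assuming a recent `mlx` release that provides `mx.metal.device_info()` (the reported keys may vary across versions):

```python
import mlx.core as mx

# Metal device properties as reported by MLX (sizes are in bytes).
info = mx.metal.device_info()
print("GPU working-set budget:", info["max_recommended_working_set_size"] / 2**30, "GiB")
print("Unified memory:", info["memory_size"] / 2**30, "GiB")
```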

## How to use (MLX)

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64")
print(generate(
    model, tokenizer,
    prompt="Explain the Chudnovsky algorithm to compute π.",
    max_tokens=256, max_kv_size=512,
))
```
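
For chat-style use, you can format the request with the tokenizer's chat template before generating. A minimal sketch (assumes the bundled tokenizer ships the Qwen chat template, as instruct exports normally do):

```python
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64")

# Render the conversation with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain the Chudnovsky algorithm to compute π."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```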

```bash
python -m mlx_lm generate --model halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64 \
  --prompt "Explain the Chudnovsky algorithm to compute pi." \
  --max-kv-size 512 --max-tokens 256
```

## Evaluation

Perplexity (PPL) was measured with a streaming evaluation on WikiText-2 (raw, test) using the fast preset: `window = stride = 4096`, ~100k tokens, and an EOS token inserted between documents.

| Variant              | PPL (ctx=4096, fast)                   |
|----------------------|----------------------------------------|
| MLX bf16 (reference) | 5.14                                   |
| MLX 6-bit (gs=64)    | 5.14 (≈0.0% vs bf16)                   |
| MLX 5-bit (gs=32)    | 5.20 (+1.2% vs bf16, +1.2% vs 6b/gs64) |
| MLX 4-bit (gs=64)    | 5.43 (+5.6% vs bf16, +5.6% vs 6b/gs64) |

### Interpretation

- 4-bit gs64 has the smallest footprint and shows a modest PPL increase versus the 5-bit and 6-bit builds.
- 5-bit gs32 is a strong middle ground between quality and footprint if you can spare roughly 15 GB more than this build.
- 6-bit gs64 matches bf16 on this corpus and is the quality pick.

Reproduce locally:

```bash
python python/scripts/test_perplexity-mlx.py \
  --model_path "/path/to/Qwen3-Next-80B-A3B-Instruct-4bit-gs64" \
  --fast --progress
```
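
For a rough cross-check without the project script, the core of the fast preset is a windowed negative log-likelihood over the token stream. A minimal sketch under simplifying assumptions (window == stride, no per-document EOS insertion, and a hypothetical local plain-text copy of the corpus):

```python
import math

import mlx.core as mx
from mlx_lm import load

model, tokenizer = load("/path/to/Qwen3-Next-80B-A3B-Instruct-4bit-gs64")
text = open("wikitext-2-raw-test.txt").read()  # hypothetical local corpus file
tokens = tokenizer.encode(text)[:100_000]      # ~100k tokens, as in the fast preset

window = 4096
total_nll, total_count = 0.0, 0
for start in range(0, len(tokens) - 1, window):
    chunk = tokens[start:start + window + 1]   # +1 so each input token has a target
    if len(chunk) < 2:
        break
    inputs = mx.array(chunk[:-1])[None]        # shape (1, T)
    targets = mx.array(chunk[1:])[None]        # shape (1, T)
    logits = model(inputs)                     # shape (1, T, vocab)
    logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    nll = -mx.take_along_axis(logprobs, targets[..., None], axis=-1)
    total_nll += nll.sum().item()
    total_count += targets.size

print(f"PPL: {math.exp(total_nll / total_count):.2f}")
```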

## Conversion details (provenance)

```bash
python -m mlx_lm convert \
  --hf-path Qwen/Qwen3-Next-80B-A3B-Instruct \
  --mlx-path /path/to/Qwen3-Next-80B-A3B-Instruct-4bit-gs64 \
  -q --q-bits 4 --q-group-size 64
```

- Some tensors (for example, embeddings/norms/router) may remain 16-bit for numerical stability.

## Sibling & reference models

- halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64
- halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32

## Verify quantization

```bash
jq '.quantization | {bits, group_size}' /path/to/export/config.json
```
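
Without `jq`, the same check works in Python (the export path is illustrative):

```python
import json

# Read the quantization block written by mlx_lm convert.
with open("/path/to/export/config.json") as f:
    cfg = json.load(f)

q = cfg.get("quantization", {})
print({"bits": q.get("bits"), "group_size": q.get("group_size")})
```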

## Limitations and biases

Compared to the 5-bit and 6-bit builds, Q4 may show small but noticeable quality drops on some tasks (for example, perplexity and instruction following). Choose this build when footprint and throughput matter more than maximum accuracy.

## License and credits

- License: apache-2.0 (inherits from the base model)
- Base model: Qwen/Qwen3-Next-80B-A3B-Instruct
- Quantization: Halley AI Lab (MLX Q4, gs=64)
- Please cite both the base model and this repository when you use the weights.