---
license: other
license_name: tencent-hunyuan-a13b
license_link: LICENSE
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
 <a href="https://github.com/Tencent/Hunyuan-A13B"><b>GITHUB</b></a>
</p>
Welcome to the official repository of **Hunyuan-A13B**, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.
## Key Features and Highlights
- **High Performance with Fewer Parameters**: With only 13B active parameters (out of a total of 80B), Hunyuan-A13B achieves competitive results compared to much larger models across diverse benchmark tasks.
- **Robust Pre-Training and Optimization**: Trained on a massive 20TB high-quality dataset, the model benefits from structured supervised fine-tuning and reinforcement learning strategies to enhance its reasoning, language comprehension, and general knowledge capabilities.
- **Dual-Mode Chain-of-Thought (CoT) Framework**: This unique feature allows dynamic adjustment of reasoning depth, balancing computational efficiency with accuracy. It supports both concise responses for simple tasks and in-depth reasoning for complex challenges (see the sketch after this list).
- **Exceptional Long-Context Understanding**: Hunyuan-A13B natively supports a 256K context window, maintaining robust performance in long-text tasks.
- **Advanced Agent-Oriented Capabilities**: Tailored optimizations enable effective handling of complex decision-making, with leading performance on agent benchmarks such as BFCL-v3 and τ-Bench.
- **Superior Inference Efficiency**: Architectural innovations, including Grouped Query Attention (GQA) and support for multiple quantization formats, result in exceptional inference speed.
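
For illustration, switching between the two reasoning modes is typically done through the chat template. Below is a minimal sketch, assuming the tokenizer's chat template recognizes a `/no_think` prefix for fast thinking (the prefix and the model id are assumptions; check the released chat template for the exact switch):

```python
from transformers import AutoTokenizer

# Model id assumed for illustration; adjust to your local path or checkpoint.
tokenizer = AutoTokenizer.from_pretrained("tencent/Hunyuan-A13B-Instruct", trust_remote_code=True)

# Slow thinking (default): the model produces an explicit reasoning trace.
deep = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is 17 * 24?"}],
    tokenize=False, add_generation_prompt=True,
)

# Fast thinking: the assumed "/no_think" prefix requests a concise answer.
fast = tokenizer.apply_chat_template(
    [{"role": "user", "content": "/no_think What is 17 * 24?"}],
    tokenize=False, add_generation_prompt=True,
)
print(deep, fast, sep="\n---\n")
```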
## Why Choose Hunyuan-A13B?
Hunyuan-A13B stands out as a powerful, scalable, and computationally efficient LLM, perfectly suited for researchers and developers seeking high performance without the burden of excessive resource demands. Whether you're working on academic research, building cost-effective AI solutions, or exploring novel applications, Hunyuan-A13B provides a versatile foundation to build upon.
## Related News
* 2025.6.27 We have open-sourced **Hunyuan-A13B-Pretrain**, **Hunyuan-A13B-Instruct**, **Hunyuan-A13B-Instruct-FP8**, and **Hunyuan-80B-A13B-Instruct-GPTQ-Int4** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TensorRT-LLM backend.
| Model | Hunyuan-Large | Qwen2.5-72B | Qwen3-32B | Qwen3-A22B | Hunyuan-A13B |
|------------------|---------------|--------------|---------------|-------------|---------------|
| MMLU | 88.4 | 86.1 | 83.61 | 87.81 | 88.17 |
| MMLU-Pro | 60.20 | 58.10 | 65.54 | 68.18 | 67.23 |
| MMLU-Redux | 87.47 | 83.90 | 83.41 | 87.40 | 87.67 |
| BBH | 86.30 | 85.8 | 87.38 | 88.87 | 87.56 |
| SuperGPQA | 38.90 | 37.84 * | 39.78 | 44.06 | 41.32 |
| EvalPlus | 75.69 | 66.05 | 72.05 | 77.60 | 78.64 |
| MultiPL-E | 59.13 | 61.00 | 67.06 | 65.94 | 69.33 |
| MBPP | 72.60 | 84.70 | 78.20 | 81.40 | 83.86 |
| CRUX-O | 60.63 | 56.00 * | 72.50 | 79.00 | 77.00 |
| MATH | 69.80 | 62.1 | 61.62 | 71.84 | 72.35 |
| GSM8k | 92.80 | 91.5 | 93.40 | 94.39 | 91.83 |
| GPQA | - | 45.9 | 47.97 | 47.47 | 43.44 |
| INCLUDE | 66.48 | 76.98 * | 67.97 | 73.46 | 74.90 |
| MGSM | 67.52 | 79.53 * | 82.68 | 83.53 | 76.00 |
| MMMLU | 76.89 | 79.28 * | 83.83 | 86.70 | 84.68 |

The table below compares Hunyuan-A13B-Instruct with other leading reasoning models:

| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|:-------------------:|:-----------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 74.3<br>79.2<br>96.4 | 79.8<br>70<br>94.9 | 85.7<br>81.5<br>94.0 | 87.3<br>76.8<br>94.3 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 78<br>83.1 | 71.5<br>82.4 | 71.1<br>85.7 | 71.2<br>82.7 |
| **Coding** | Livecodebench<br>Fullstackbench<br>ArtifactsBench | 63.9<br>64.6<br>38.6 | 65.9<br>71.6<br>44.6 | 70.7<br>65.6<br>44.6 | 63.9<br>67.8<br>43 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 80.4<br>90.2<br>81 | 83.7<br>92.2<br>78.7 | 88.9<br>90.3<br>80.3 | 89.1<br>91.1<br>84.7 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 91.8<br>82.5 | 88.3<br>77.7 | 83.4<br>74.2 | 84.7<br>76.1 |
| **Text<br>Creation**| LengthCtrl<br>InsCtrl | 60.1<br>74.8 | 55.9<br>69 | 53.3<br>73.7 | 55.4<br>71.9 |
| **NLU** | ComplexNLU<br>Word-Task | 64.7<br>67.1 | 64.5<br>81.8 | 59.8<br>56.4 | 61.2<br>62.9 |
| **Agent** | BFCL-v3<br> $\tau$-bench<br>ComplexFuncBench<br> $C^3$-Bench | 67.8<br>60.4<br>47.6<br>58.8 | 63.8<br>58.7<br>n/a<br>55.3 | 70.8<br>46.7<br>n/a<br>51.7 | 78.3<br>54.7<br>51.2<br>63.5 |
## Quick Start
You can refer to the contents of the [Hunyuan-A13B](https://github.com/Tencent-Hunyuan/Hunyuan-A13B) repository to get started quickly; it provides the training and inference code to use with this model.
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os


def main():
    model_name_or_path = os.environ['MODEL_PATH']

    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
    # device_map="auto" spreads the model across available GPUs.
    # You may also want to load in bfloat16 here.
    model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto",
                                                 trust_remote_code=True)

    # Optional: inspect parameter names and shapes.
    for name, param in model.named_parameters():
        print(f"{name}: {param.size()}")

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short summary of the benefits of regular exercise."},
    ]
    tokenized_chat = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    )

    outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=100, do_sample=True)
    print(tokenizer.decode(outputs[0]))


if __name__ == '__main__':
    main()
```
## Deployment
For deployment, you can use frameworks such as *vLLM*, *SGLang*, or *TensorRT-LLM* to serve the model and create an OpenAI-compatible API endpoint.
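Once a server is running, it can be queried with the standard `openai` Python client. Here is a minimal sketch, assuming an OpenAI-compatible endpoint on port 30000 (the port used in the docker commands below) and that the served model name matches the model path the server was launched with:

```python
from openai import OpenAI

# Assumed endpoint and port, matching the deployment examples below.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="hunyuan/huanyuan_A13B",  # must match the path the server was launched with
    messages=[{"role": "user", "content": "Summarize the benefits of regular exercise."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```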
### vLLM
#### Docker Image
We provide a pre-built Docker image based on vLLM 0.8.5 with full support for this model; official upstream support is still under development.
To get started:
- Pull the Docker image:
```
docker pull xxx
```
- Start the API server:
```
docker start xxx
```
#### Source Code
Support for this model has been added via [this PR](https://github.com/vllm-project/vllm/pull/20114) in the vLLM project.
You can build and run vLLM from source after merging this pull request into your local repository, then start the API server by following the standard vLLM setup instructions.
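For quick local testing without a server, vLLM's offline Python API can load the model directly. A minimal sketch, assuming a vLLM build that already includes the PR above and 4 GPUs for tensor parallelism:

```python
from vllm import LLM, SamplingParams

# Assumes a vLLM build with Hunyuan-A13B support merged (PR #20114).
llm = LLM(model="tencent/Hunyuan-A13B-Instruct",
          tensor_parallel_size=4,
          trust_remote_code=True)

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Write a short summary of the benefits of regular exercise."],
                       sampling_params)
print(outputs[0].outputs[0].text)
```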
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
To get started:
- Pull the Docker image:
```
docker pull xxx
```
- Start the API server:
```
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ipc=host \
xxx \
python3 -m sglang.launch_server --model-path hunyuan/huanyuan_A13B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
#### Source Code
The necessary integration has already been merged into SGLang's main branch via [this PR](https://github.com/sgl-project/sglang/pull/7549).
Once you have cloned or updated your local SGLang repository, you can build it and start the API server following the standard SGLang setup process:
```
python3 -m sglang.launch_server --model-path hunyuan/huanyuan_A13B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
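
Besides the OpenAI-compatible routes, SGLang also exposes a native `/generate` endpoint. A minimal sketch of querying it, assuming the server launched above is listening on port 30000 and follows SGLang's documented native request schema:

```python
import requests

# Assumes the SGLang server started above is reachable on localhost:30000.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Write a short summary of the benefits of regular exercise.",
        "sampling_params": {"temperature": 0.7, "max_new_tokens": 256},
    },
)
print(response.json())
```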
### TensorRT-LLM
#### Docker Image
We also provide a pre-built Docker image based on the latest version of TensorRT-LLM.
To get started:
- Pull the Docker image:
```
docker pull xxx
```
- Start the API server (the serving command inside the container depends on your TensorRT-LLM version):
```
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    --ipc=host \
    xxx
```
#### Source Code
The necessary integration has already been merged into the main branch via this PR (xxx).
Once you have cloned or updated your local TensorRT-LLM repository, you can build it and start the API server following the standard TensorRT-LLM setup instructions.
## Inference Performance
This section presents throughput results (tokens/s) for deploying the model with vLLM under different GPU counts, quantization formats, and batch sizes.
Evaluation Script:
```
python3 benchmark_throughput.py --backend vllm \
    --input-len 2048 \
    --output-len 14336 \
    --model $MODEL_PATH \
    --tensor-parallel-size $TP \
    --use-v2-block-manager \
    --async-engine \
    --trust-remote-code \
    --num-prompts $BATCH_SIZE \
    --max-num-seqs $BATCH_SIZE
```
| Inference Framework | Model | Number of GPUs (GPU productA) | input_length | batch=1 | batch=16 | batch=32 |
|------|-----------------------------|-----------|-------------------------|---------------------|----------------------|----------------------|
| vLLM | Hunyuan-A13B-Instruct | 8 | 2048 | 190.84 | 1246.54 | 1981.99 |
| vLLM | Hunyuan-A13B-Instruct | 4 | 2048 | 158.90 | 779.10 | 1301.75 |
| vLLM | Hunyuan-A13B-Instruct | 2 | 2048 | 111.72 | 327.31 | 346.54 |
| vLLM | Hunyuan-A13B-Instruct(int8 weight only) | 2 | 2048 | 109.10 | 444.17 | 721.93 |
| vLLM | Hunyuan-A13B-Instruct(W8A8C8-FP8) | 2 | 2048 | 91.83 | 372.01 | 617.70 |
| vLLM | Hunyuan-A13B-Instruct(W8A8C8-FP8) | 1 | 2048 | 60.07 | 148.80 | 160.41 |
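
Because the rows mix 1-, 2-, 4-, and 8-GPU configurations, per-GPU throughput is the fairer comparison for capacity planning. A quick calculation over the batch=32 column of the table above:

```python
# Per-GPU throughput (tokens/s) at batch=32, taken from the table above.
configs = {
    "BF16, 8 GPUs": (1981.99, 8),
    "BF16, 4 GPUs": (1301.75, 4),
    "BF16, 2 GPUs": (346.54, 2),
    "INT8 weight-only, 2 GPUs": (721.93, 2),
    "W8A8C8-FP8, 2 GPUs": (617.70, 2),
    "W8A8C8-FP8, 1 GPU": (160.41, 1),
}
for name, (tokens_per_s, gpus) in configs.items():
    print(f"{name}: {tokens_per_s / gpus:.1f} tokens/s per GPU")
```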
## Contact Us
If you would like to leave a message for our R&D and product teams, feel free to contact our open-source team. You can also reach us via email at hunyuan_opensource@tencent.com.