# VITA-Audio: Fast Interleaved Audio-Text Token Generation for Efficient Large Speech-Language Model
## :fire: News
* **`2025.05.06`** We are proud to launch VITA-Audio, an end-to-end large speech model with fast audio-text token generation.
## Contents
- [Highlights](#highlights)
- [Exhibition](#exhibition)
- [Models](#models)
- [Experimental Results](#experimental-results)
- [Requirements and Installation](#requirements-and-installation)
- [Training](#training)
- [Inference](#inference)
- [Evaluation](#evaluation)
## Highlights
- **Low Latency**. VITA-Audio is the first end-to-end speech model capable of generating audio during the initial forward pass. With a set of 32 prefill tokens, VITA-Audio reduces the time to generate the first audio token chunk from 217 ms to just 47 ms.
- **Fast Inference**. VITA-Audio achieves a 3-5x inference speedup at the 7B-parameter scale.
- **Open Source**. VITA-Audio is trained on **open-source data** only, consisting of 200k hours of publicly available audio.
- **Strong Performance**. VITA-Audio achieves competitive results on ASR, TTS, and SQA benchmarks among cutting-edge models under 7B parameters.
## Exhibition
### Inference Acceleration
Model inference speed under different inference modes.
### Time to Generate the First Audio Segment in Streaming Inference
### Generated Audio Case
> *(A Chinese tongue twister, translated:)* From the south came a mute with a trumpet tucked in his belt; from the north came a lama holding an otter.
> The lama holding the otter wanted to trade his otter for the trumpet of the mute with the trumpet; the mute with the trumpet refused to trade his trumpet for the otter of the lama holding the otter.
> No one knows whether the mute with the trumpet struck the lama holding the otter with his trumpet, or the lama holding the otter struck the mute with the trumpet with his otter.
> The lama went home and stewed his otter; the mute toot-tooted away on his trumpet.
https://github.com/user-attachments/assets/38da791f-5d72-4d9c-a9b2-cec97c2f2b2b
---
> To be or not to be--to live intensely and richly,
> or merely to exist, that depends on ourselves. Let us widen and intensify our relations.
> While we live, let us live!
https://github.com/user-attachments/assets/fd478065-4041-4eb8-b331-0c03b304d853
---
> There is so little hair left; stop worrying about it and go to bed early, for your hair's sake. Good night!
https://github.com/user-attachments/assets/4cfe4742-e237-42bd-9f17-7935b2285799
---
> *(A quatrain by Du Fu, translated:)* Two golden orioles sing amid the green willows,
> and a line of white egrets climbs into the blue sky.
> My window frames the western hills' age-old snow;
> at my gate moors a boat from eastern Wu, ten thousand li away.
https://github.com/user-attachments/assets/382620ee-bb2a-488e-9e00-71afd2342b56
---
## Models
| Model | LLM Size | Huggingface Weights |
|-------------------------|----------|---------------------------------------------------------------|
| VITA-Audio-Boost | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Boost |
| VITA-Audio-Balance | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Balance |
| VITA-Audio-Plus-Vanilla | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Plus-Vanilla |
## Experimental Results
- **Comparison of Spoken Question Answering**.

- **Comparison of Text to Speech**.

- **Comparison of Automatic Speech Recognition**.


- **Effectiveness of Inference Acceleration**.


## Requirements and Installation
### Prepare Environment
```bash
docker pull shenyunhang/pytorch:24.11-py3_2024-1224
```
### Get the Code
```bash
git clone https://github.com/VITA-MLLM/VITA-Audio.git
cd VITA-Audio
pip install -r requirements_ds_gpu.txt
pip install -e .
```
### Prepare Pre-trained Weight
#### LLM
- Download the LLM from https://huggingface.co/Qwen/Qwen2.5-7B-Instruct.
- Put it in `../models/Qwen/Qwen2.5-7B-Instruct/`.
#### Audio Encoder and Audio Decoder
- Download the Audio Encoder from https://huggingface.co/THUDM/glm-4-voice-tokenizer.
- Put it in `../models/THUDM/glm-4-voice-tokenizer`.
- Download the Audio Decoder from https://huggingface.co/THUDM/glm-4-voice-decoder.
- Put it in `../models/THUDM/glm-4-voice-decoder` (all three downloads can also be scripted, as sketched below).
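If you prefer to script the downloads, here is a minimal sketch using `huggingface_hub` (an assumption: `pip install huggingface_hub`; the VITA-Audio weights from the Models table can be fetched the same way):

```python
from huggingface_hub import snapshot_download

# Fetch the LLM, audio encoder, and audio decoder into the folders expected above.
for repo_id, local_dir in [
    ("Qwen/Qwen2.5-7B-Instruct", "../models/Qwen/Qwen2.5-7B-Instruct"),
    ("THUDM/glm-4-voice-tokenizer", "../models/THUDM/glm-4-voice-tokenizer"),
    ("THUDM/glm-4-voice-decoder", "../models/THUDM/glm-4-voice-decoder"),
]:
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
```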
### Data Format
#### **Speech QA Interleaved Data Format**
> This format shows how text and audio sequences are interleaved in a structured JSON conversation between a user and an assistant.
```jsonc
{
"messages": [
{
"role": "user",
"content": "<|begin_of_audio|> audio_sequence <|end_of_audio|>"
},
{
"role": "assistant",
"content": "text_sequence_1 <|begin_of_audio|> audio_sequence_1 <|end_of_audio|> text_sequence_2 <|begin_of_audio|> audio_sequence_2 <|end_of_audio|>"
}
]
}
```
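To make the layout concrete, below is a minimal sketch that splits an assistant turn into its text and audio segments; the regex and helper are illustrative and not part of the training code:

```python
import re

# Matches one audio span wrapped in the special tokens shown above.
AUDIO_SPAN = re.compile(r"<\|begin_of_audio\|>(.*?)<\|end_of_audio\|>", re.DOTALL)

def split_interleaved(content: str):
    """Split interleaved content into ("text" | "audio", payload) segments."""
    segments, cursor = [], 0
    for match in AUDIO_SPAN.finditer(content):
        text = content[cursor:match.start()].strip()
        if text:
            segments.append(("text", text))
        segments.append(("audio", match.group(1).strip()))
        cursor = match.end()
    tail = content[cursor:].strip()
    if tail:
        segments.append(("text", tail))
    return segments

print(split_interleaved(
    "text_sequence_1 <|begin_of_audio|> audio_sequence_1 <|end_of_audio|> "
    "text_sequence_2 <|begin_of_audio|> audio_sequence_2 <|end_of_audio|>"
))
# [('text', 'text_sequence_1'), ('audio', 'audio_sequence_1'),
#  ('text', 'text_sequence_2'), ('audio', 'audio_sequence_2')]
```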
## Training
The following tutorial takes `VITA-Audio-Boost` as an example.
- To train `VITA-Audio-Balance` and the other variants, modify `--text-audio-interval-ratio` accordingly (one possible reading of this flag is sketched after this list).
VITA-Audio-Boost:
```bash
--text-audio-interval-ratio 1 10 4 10 \
```
VITA-Audio-Balance:
```bash
--text-audio-interval-ratio 1 4 3 8 4 10 \
```
- To train the `VITA-Audio-Plus-*` variants, use the corresponding script, e.g. `scripts/deepspeed/sts_qwen25/finetune_sensevoice_glm4voice...`
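The sketch below shows one plausible reading of `--text-audio-interval-ratio`, treating consecutive numbers as (text, audio) chunk-size pairs with the last pair repeating; verify against the training code before relying on it:

```python
# Hypothetical helper: expands a ratio list such as "1 10 4 10" into a
# per-chunk schedule of (text_tokens, audio_tokens). Under this reading,
# Boost emits 1 text token, then 10 audio tokens, then 4 text / 10 audio
# chunks for the rest of the sequence.
def interleave_schedule(ratio, num_chunks):
    pairs = list(zip(ratio[0::2], ratio[1::2]))  # [1, 10, 4, 10] -> [(1, 10), (4, 10)]
    schedule = []
    for i in range(num_chunks):
        schedule.append(pairs[min(i, len(pairs) - 1)])  # last pair repeats
    return schedule

print(interleave_schedule([1, 10, 4, 10], num_chunks=4))
# [(1, 10), (4, 10), (4, 10), (4, 10)]
```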
### Stage-1 (Audio-Text Alignment)
```bash
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```
The above script may need some adjustments.
- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Modify other variables as needed for your environment.
### Stage-2 (Single MCTP Module Training)
```bash
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp1_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```
The above script may need some adjustments.
- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 1.
- Modify other variables as needed for your environment.
### Stage-3 (Multiple MCTP Modules Training)
```bash
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```
The above script may need some adjustments.
- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 2.
- Modify other variables as needed for your environment.
### Stage-4 (Supervised Fine-tuning)
```bash
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage2.sh 2048 `date +'%Y%m%d_%H%M%S'`
```
The above script may need some adjustments.
- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 3.
- Modify other variables as needed for your environment.
## Inference
Here we provide a simple inference script.
It includes examples of speech-to-speech, ASR, and TTS tasks, as well as inference-speed testing.
```bash
python tools/inference_sts.py
```
- Set `model_name_or_path` to the VITA-Audio weights.
- Set `audio_tokenizer_path` to the path of the audio encoder.
- Set `flow_path` to the path of the audio decoder (a path sketch follows below).
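For instance, with the folder layout used in this README, the three settings might look like the following (a sketch; the exact variable names live in `tools/inference_sts.py`):

```python
# Hypothetical values mirroring the folder layout above;
# point these at wherever you stored the weights.
model_name_or_path = "VITA-MLLM/VITA-Audio-Boost"               # or a local checkpoint dir
audio_tokenizer_path = "../models/THUDM/glm-4-voice-tokenizer"  # audio encoder
flow_path = "../models/THUDM/glm-4-voice-decoder"               # audio decoder
```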
## Evaluation
Evaluate the SQA, ASR, and TTS benchmarks:
```bash
bash scripts/deepspeed/evaluate_sts.sh
```
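As a point of reference for the ASR track, word error rate (WER) can be spot-checked with the `jiwer` package; this is an illustrative sketch, not necessarily how the benchmark script computes its metrics:

```python
from jiwer import wer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over the lazy dog"

# WER = (substitutions + insertions + deletions) / reference word count.
print(f"WER: {wer(reference, hypothesis):.3f}")  # one substitution over nine words ~= 0.111
```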