
VITA-Audio: Fast Interleaved Audio-Text Token Generation for Efficient Large Speech-Language Model

🔥 News

  • 2025.05.06 🌟 We are proud to launch VITA-Audio, an end-to-end large speech model with fast audio-text token generation.


✨ Highlights

  • Low Latency. VITA-Audio is the first end-to-end speech model capable of generating audio during the initial forward pass. By utilizing a set of 32 prefill tokens, VITA-Audio reduces the time required to generate the first audio token chunk from 217 ms to just 47 ms.
  • Fast Inference. VITA-Audio achieves an inference speedup of 3-5x at the 7B parameter scale.
  • Open Source. VITA-Audio is trained on open-source data only, consisting of 200k hours of publicly available audio.
  • Strong Performance. VITA-Audio achieves competitive results on ASR, TTS, and SQA benchmarks compared with cutting-edge models under 7B parameters.

📌 Exhibition

Inference Acceleration

Model inference speed under different inference modes.


Time to Generate the First Audio Segment in Streaming Inference


Generated Audio Case

ๆ‰“ๅ—่พนๆฅไบ†ไธชๅ“‘ๅทด๏ผŒ่…ฐ้‡Œๅˆซไบ†ไธชๅ–‡ๅญ๏ผ›ๆ‰“ๅŒ—่พนๆฅไบ†ไธชๅ–‡ๅ˜›๏ผŒๆ‰‹้‡Œๆไบ†ไธช็ญ็Šธใ€‚
ๆ็€็ญ็Šธ็š„ๅ–‡ๅ˜›่ฆๆ‹ฟ็ญ็Šธๆขๅˆซ็€ๅ–‡ๅญ็š„ๅ“‘ๅทด็š„ๅ–‡ๅญ๏ผ›ๅˆซ็€ๅ–‡ๅญ็š„ๅ“‘ๅทดไธๆ„ฟๆ‹ฟๅ–‡ๅญๆขๆ็€็ญ็Ž›็š„ๅ–‡ๅ˜›็š„็ญ็Šธใ€‚
ไธ็Ÿฅๆ˜ฏๅˆซ็€ๅ–‡ๅญ็š„ๅ“‘ๅทดๆ‰“ไบ†ๆ็€็ญ็Ž›็š„ๅ–‡ๅ˜›ไธ€ๅ–‡ๅญ๏ผ›่ฟ˜ๆ˜ฏๆ็€็ญ็Ž›็š„ๅ–‡ๅ˜›ๆ‰“ไบ†ๅˆซ็€ๅ–‡ๅญ็š„ๅ“‘ๅทดไธ€็ญ็Ž›ใ€‚
ๅ–‡ๅ˜›ๅ›žๅฎถ็‚–็ญ็Šธ๏ผ›ๅ“‘ๅทดๅ˜€ๅ˜€ๅ“’ๅ“’ๅนๅ–‡ๅญใ€‚

https://github.com/user-attachments/assets/38da791f-5d72-4d9c-a9b2-cec97c2f2b2b


To be or not to be--to live intensely and richly, merely to exist, that depends on ourselves. Let widen and intensify our relations.
While we live, let live!

https://github.com/user-attachments/assets/fd478065-4041-4eb8-b331-0c03b304d853


The hair has been so little, don't think about it, go to bed early, for your hair. Good night!

https://github.com/user-attachments/assets/4cfe4742-e237-42bd-9f17-7935b2285799


两个黄鹂鸣翠柳，一行白鹭上青天。
窗含西岭千秋雪，门泊东吴万里船。
(From Du Fu's "Quatrain": Two golden orioles sing amid the green willows; a line of white egrets climbs the blue sky. My window frames the West Ridge's thousand-autumn snow; at my gate moors a boat bound ten thousand li for Eastern Wu.)

https://github.com/user-attachments/assets/382620ee-bb2a-488e-9e00-71afd2342b56


🔔 Models

| Model | LLM Size | Hugging Face Weights |
| --- | --- | --- |
| VITA-Audio-Boost | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Boost |
| VITA-Audio-Balance | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Balance |
| VITA-Audio-Plus-Vanilla | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Plus-Vanilla |
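One way to fetch the checkpoints above is with the Hugging Face CLI. A minimal sketch: the repo ids come from the table, while the local `checkpoints/` directory layout is an assumption; the commands are printed rather than executed so the snippet runs offline.

```python
# Sketch: build huggingface-cli download commands for the released
# checkpoints. Repo ids are taken from the models table above; the
# checkpoints/ target directory is a hypothetical choice.
MODELS = {
    "VITA-Audio-Boost": "VITA-MLLM/VITA-Audio-Boost",
    "VITA-Audio-Balance": "VITA-MLLM/VITA-Audio-Balance",
    "VITA-Audio-Plus-Vanilla": "VITA-MLLM/VITA-Audio-Plus-Vanilla",
}
for name, repo_id in MODELS.items():
    # Print the command instead of downloading, so this runs without network.
    print(f"huggingface-cli download {repo_id} --local-dir checkpoints/{name}")
```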

📈 Experimental Results

  • Comparison of Spoken Question Answering.


  • Comparison of Text to Speech.


  • Comparison of Automatic Speech Recognition.


  • Effectiveness of Inference Acceleration.


📔 Requirements and Installation

Prepare Environment

docker pull shenyunhang/pytorch:24.11-py3_2024-1224

Get the Code

git clone https://github.com/VITA-MLLM/VITA-Audio.git
cd VITA-Audio
pip install -r requirements_ds_gpu.txt
pip install -e .

Prepare Pre-trained Weights

LLM

Audio Encoder and Audio Decoder

Data Format

Speech QA Interleaved Data Format

This format shows how text and audio sequences are interleaved in a structured JSON conversation between a user and an assistant.

{
  "messages": [
    {
      "role": "user",
      "content": "<|begin_of_audio|> audio_sequence <|end_of_audio|>"
    },
    {
      "role": "assistant",
      "content": "text_sequence_1 <|begin_of_audio|> audio_sequence_1 <|end_of_audio|> text_sequence_2 <|begin_of_audio|> audio_sequence_2 <|end_of_audio|>"
    }
  ]
}
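The structure above can be assembled programmatically. A minimal sketch: the `wrap_audio` helper and the placeholder sequence names are illustrative, not part of the training code; only the special tokens and the message layout come from the format shown above.

```python
import json

# Sketch: assemble one Speech-QA training sample in the interleaved
# audio-text format shown above. The audio placeholders stand in for
# discrete audio token sequences; helper names are illustrative.
AUDIO_BEGIN, AUDIO_END = "<|begin_of_audio|>", "<|end_of_audio|>"

def wrap_audio(audio_sequence: str) -> str:
    # Enclose an audio token sequence in the special boundary tokens.
    return f"{AUDIO_BEGIN} {audio_sequence} {AUDIO_END}"

sample = {
    "messages": [
        {"role": "user", "content": wrap_audio("audio_sequence")},
        {"role": "assistant",
         "content": f"text_sequence_1 {wrap_audio('audio_sequence_1')} "
                    f"text_sequence_2 {wrap_audio('audio_sequence_2')}"},
    ]
}
print(json.dumps(sample, indent=2))
```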

🎲 Training

The following tutorial takes VITA-Audio-Boost as an example.

  • To train VITA-Audio-Balance and the other variants, modify the --text-audio-interval-ratio flag accordingly.

    VITA-Audio-Boost:

    --text-audio-interval-ratio 1 10 4 10 \
    

    VITA-Audio-Balance:

    --text-audio-interval-ratio 1 4 3 8 4 10 \
    
  • To train the VITA-Audio-Plus-* variants, use a script such as scripts/deepspeed/sts_qwen25/finetune_sensevoice_glm4voice...
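One plausible reading of the interval-ratio flags above (an assumption, not verified against the training code) is that the numbers form (text, audio) pairs — e.g. 1 text token then 10 audio tokens, then 4 text then 10 audio — with the last pair repeating for the remainder of the sequence. A sketch under that assumption:

```python
# Sketch: expand a --text-audio-interval-ratio list into an interleaving
# schedule. ASSUMPTION (not verified against the training code): the flag
# values are read as (text_1, audio_1, text_2, audio_2, ...) pairs, and the
# final pair repeats once the explicit pairs are exhausted.
def interleave_schedule(ratio, num_pairs=4):
    pairs = list(zip(ratio[0::2], ratio[1::2]))
    while len(pairs) < num_pairs:
        pairs.append(pairs[-1])  # repeat the last (text, audio) pair
    return [("text", t, "audio", a) for t, a in pairs[:num_pairs]]

print(interleave_schedule([1, 10, 4, 10]))       # VITA-Audio-Boost ratios
print(interleave_schedule([1, 4, 3, 8, 4, 10]))  # VITA-Audio-Balance ratios
```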

Stage-1 (Audio-Text Alignment)

bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`

The above script may need some adjustments.

  • Set ROOT_PATH to your code root folder.
  • Set LOCAL_ROOT_PATH to a temporary code root folder.
  • Modify other variables as needed for your environment.
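The adjustments above can be made by exporting the variables before launching the script. A sketch with hypothetical paths (point them at your own folders); the actual launch line is left commented:

```shell
# Sketch: typical variable setup before launching Stage-1.
# Both paths below are hypothetical -- adjust to your environment.
export ROOT_PATH="$HOME/VITA-Audio"        # your code root folder
export LOCAL_ROOT_PATH="/tmp/VITA-Audio"   # temporary code root folder
mkdir -p "$LOCAL_ROOT_PATH"
echo "code root: $ROOT_PATH"
# bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```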

Stage-2 (Single MCTP Module Training)

bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp1_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`

The above script may need some adjustments.

  • Set ROOT_PATH to your code root folder.
  • Set LOCAL_ROOT_PATH to a temporary code root folder.
  • Set MODEL_NAME_OR_PATH to the path of the model trained in Stage 1.
  • Modify other variables as needed for your environment.

Stage-3 (Multiple MCTP Modules Training)

bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`

The above script may need some adjustments.

  • Set ROOT_PATH to your code root folder.
  • Set LOCAL_ROOT_PATH to a temporary code root folder.
  • Set MODEL_NAME_OR_PATH to the path of the model trained in Stage 2.
  • Modify other variables as needed for your environment.

Stage-4 (Supervised Fine-tuning)

bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage2.sh 2048 `date +'%Y%m%d_%H%M%S'`

The above script may need some adjustments.

  • Set ROOT_PATH to your code root folder.
  • Set LOCAL_ROOT_PATH to a temporary code root folder.
  • Set MODEL_NAME_OR_PATH to the path of the model trained in Stage 3.
  • Modify other variables as needed for your environment.

๐Ÿ“ Inference

Here we implement a simple script for inference.

It includes examples of speech-to-speech, ASR, and TTS tasks, as well as inference speed testing.

python tools/inference_sts.py
  • Set model_name_or_path to VITA-Audio weights.
  • Set audio_tokenizer_path to the path of the audio encoder.
  • Set flow_path to the path of the audio decoder.
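The three settings in the list above are illustrated below; the variable names come from the bullet list, while the values are hypothetical placeholders for wherever you stored the weights, encoder, and decoder.

```python
# Hypothetical example values for the three settings listed above.
# The variable names match the bullet list; the paths are placeholders.
model_name_or_path = "checkpoints/VITA-Audio-Boost"    # VITA-Audio weights
audio_tokenizer_path = "checkpoints/audio-encoder"     # audio encoder path
flow_path = "checkpoints/audio-decoder"                # audio decoder path

for path in (model_name_or_path, audio_tokenizer_path, flow_path):
    # Every path must be set before running tools/inference_sts.py.
    assert path, "set every path before running inference"
```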

🔎 Evaluation

Evaluate SQA, ASR, and TTS benchmarks

bash scripts/deepspeed/evaluate_sts.sh