VibeVoice-1.5B: A Frontier Open-Source Text-to-Speech Model
🚨 Note (20 Nov 2025): this Transformers-compatible checkpoint is not yet part of a PyPI release. See Usage below for setup and examples.
VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
The model can synthesize speech up to 90 minutes long with up to 4 distinct speakers, surpassing the typical 1-2 speaker limits of many prior models.
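A quick back-of-the-envelope check of these figures (the arithmetic is ours; the numbers come from this card): at a 24 kHz sample rate, the 7.5 Hz frame rate corresponds to the 3200x downsampling reported under Training Details, and 90 minutes of audio maps to roughly 40K acoustic frames, comfortably inside the 64K-token context:
sample_rate_hz = 24_000  # audio sample rate used by VibeVoice
frame_rate_hz = 7.5      # acoustic/semantic tokenizer frame rate
print(sample_rate_hz / frame_rate_hz)  # 3200.0 -> the 3200x downsampling factor
print(90 * 60 * frame_rate_hz)         # 40500.0 acoustic frames for a 90-minute session,
                                       # leaving headroom for text tokens within 64K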
➡️ Technical Report: VibeVoice Technical Report
➡️ Project Page: microsoft/VibeVoice
Models
| Model | Context Length | Generation Length | Weight |
|---|---|---|---|
| VibeVoice-1.5B | 64K | ~90 min | This model |
| VibeVoice-7B | 32K | ~45 min | HF link |
| VibeVoice-0.5B-Streaming | - | - | On the way |
Usage
Setup
VibeVoice is not yet merged into Transformers, but it can be used by installing the library from the following fork:
pip install git+https://github.com/pengzhiliang/transformers.git@4b4f4bdc64baca807e8364692313a6183a6116f6
pip install torch torchvision torchaudio
pip install diffusers soundfile accelerate
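To confirm the fork is installed correctly, a quick sanity check that simply imports the classes used in the examples below (no model weights are downloaded here):
import transformers
from transformers import AutoProcessor, VibeVoiceForConditionalGeneration
print(transformers.__version__)  # should report the version installed from the fork above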
Loading model
from transformers import AutoProcessor, VibeVoiceForConditionalGeneration
repo_id = "microsoft/VibeVoice-1.5B-hf"
processor = AutoProcessor.from_pretrained(repo_id)
model = VibeVoiceForConditionalGeneration.from_pretrained(repo_id)
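For GPU inference you may prefer to load the model in half precision. The snippet below is a minimal sketch using standard from_pretrained arguments; the bfloat16 dtype and device placement are our assumptions, not requirements of the checkpoint:
import torch
from transformers import AutoProcessor, VibeVoiceForConditionalGeneration

repo_id = "microsoft/VibeVoice-1.5B-hf"
processor = AutoProcessor.from_pretrained(repo_id)
model = VibeVoiceForConditionalGeneration.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 for faster GPU inference
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)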
Text-to-speech (TTS) example with pipeline
import time
import numpy as np
import torch
import os
import soundfile as sf
from transformers import pipeline
repo_id = "microsoft/VibeVoice-1.5B-hf"
sampling_rate = 24000
text = "Hello, nice to meet you. I'm Vibey."
# Optional parameters for diffusion process, defaults are in the model's generation_config.json
cfg_scale = 1.3 # classifier-free guidance for diffusion process
n_diffusion_steps = 8 # number of diffusion steps for each audio chunk
# Set seed for reproducibility
seed = 42
torch.manual_seed(seed)
np.random.seed(seed)
# Load pipeline
pipe = pipeline("text-to-speech", model=repo_id, no_processor=False)
# Prepare input
input_data = pipe.processor.apply_chat_template(
[{"role": "0", "content": [{"type": "text", "text": text}]}], tokenize=False,
)
# Generate!
start_time = time.time()
generate_kwargs = {
"cfg_scale": cfg_scale,
"n_diffusion_steps": n_diffusion_steps,
}
output = pipe(input_data, generate_kwargs=generate_kwargs)
end_time = time.time()
print(f"Generation took {end_time - start_time:.2f} seconds.")
# Save to file
audio = output["audio"][0].squeeze()
fn = f"{os.path.basename(repo_id)}_pipeline_tts.wav"
sf.write(fn, audio, sampling_rate)
print(f"Audio saved to {fn}")
Generating a podcast from a script
Below is a full example that uses an (optional) progress bar to track generation progress.
Single file generation
import time
import numpy as np
import torch
from tqdm import tqdm
import os
from transformers import AutoProcessor, VibeVoiceForConditionalGeneration
repo_id = "microsoft/VibeVoice-1.5B-hf"
sampling_rate = 24000
max_new_tokens = 400 # set to None to generate until the end of the script
# Optional parameters for diffusion process, defaults are in the model's generation_config.json
cfg_scale = 1.3 # classifier-free guidance for diffusion process
n_diffusion_steps = 10 # number of diffusion steps for each audio chunk
# Set seed for reproducibility
seed = 42
torch.manual_seed(seed)
np.random.seed(seed)
conversation = [
{"role": "0", "content": [
{"type": "text", "text": "Hello everyone, and welcome to the VibeVoice podcast. I'm your host, Alex, and today we're getting into one of the biggest debates in all of sports: who's the greatest basketball player of all time? I'm so excited to have Sam here to talk about it with me."},
]},
{"role": "1", "content": [
{"type": "text", "text": "Thanks so much for having me, Alex. You're absolutely right—this question always brings out some seriously strong feelings."},
]},
{"role": "0", "content": [
{"type": "text", "text": "Okay, so let's get right into it. For me, it has to be Michael Jordan. Six trips to the Finals, six championships. That kind of perfection is just incredible."},
]},
{"role": "1", "content": [
{"type": "text", "text": "Oh man, the first thing that always pops into my head is that shot against the Cleveland Cavaliers back in '89. Jordan just rises, hangs in the air forever, and just sinks it"},
]},
]
# load model
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(repo_id)
model = VibeVoiceForConditionalGeneration.from_pretrained(repo_id, device_map=device).eval()
# prepare inputs
inputs = processor.apply_chat_template(
conversation, tokenize=True, return_dict=True
).to(device)
# Generate audio with a callback to track progress
start_time = time.time()
completed_samples = set()
with tqdm(desc="Generating") as pbar:
def monitor_progress(p_batch):
# p_batch format: [current_step, max_step, completion_step] for each sample
finished_samples = (p_batch[:, 0] == p_batch[:, 1]).nonzero(as_tuple=False).squeeze(1)
if finished_samples.numel() > 0:
for sample_idx in finished_samples.tolist():
if sample_idx not in completed_samples:
completed_samples.add(sample_idx)
completion_step = int(p_batch[sample_idx, 2])
print(f"Sample {sample_idx} completed at step {completion_step}", flush=True)
active_samples = p_batch[:, 0] < p_batch[:, 1]
if active_samples.any():
active_progress = p_batch[active_samples]
max_active_idx = torch.argmax(active_progress[:, 0])
p = active_progress[max_active_idx].detach().cpu()
else:
p = p_batch[0].detach().cpu()
pbar.total = int(p[1])
pbar.n = int(p[0])
pbar.update()
outputs = model.generate(
**inputs,
max_new_tokens=max_new_tokens,
cfg_scale=cfg_scale,
n_diffusion_steps=n_diffusion_steps,
monitor_progress=monitor_progress,
return_dict_in_generate=True,
)
generation_time = time.time() - start_time
print(f"Generation time: {generation_time:.2f} seconds")
# Save audio
output_fp = f"{os.path.basename(repo_id)}_output.wav"
processor.save_audio(outputs.audio[0], output_fp)
print(f"Saved output to {output_fp}")
Batch generation
For batch processing, a list of conversations can be passed to processor.apply_chat_template to prepare the inputs:
inputs = processor.apply_chat_template(
[conversation1, conversation2],
tokenize=True,
return_dict=True
)
Full batch example (with a 4-voice script!)
import os
import time
import numpy as np
import torch
from tqdm import tqdm
from transformers import AutoProcessor, VibeVoiceForConditionalGeneration
repo_id = "microsoft/VibeVoice-1.5B-hf"
sampling_rate = 24000
max_new_tokens = 400 # set to None to generate until the end of the script
# Optional parameters for diffusion process, defaults are in the model's generation_config.json
cfg_scale = 1.3 # classifier-free guidance for diffusion process
n_diffusion_steps = 10 # number of diffusion steps for each audio chunk
# Set seed for reproducibility
seed = 42
torch.manual_seed(seed)
np.random.seed(seed)
conversations = [
[
{"role": "0", "content": [
{"type": "text", "text": "Hello everyone, and welcome to the VibeVoice podcast. I'm your host, Alex, and today we're getting into one of the biggest debates in all of sports: who's the greatest basketball player of all time? I'm so excited to have Sam here to talk about it with me."},
]},
{"role": "1", "content": [
{"type": "text", "text": "Thanks so much for having me, Alex. You're absolutely right—this question always brings out some seriously strong feelings."},
]},
{"role": "0", "content": [
{"type": "text", "text": "Okay, so let's get right into it. For me, it has to be Michael Jordan. Six trips to the Finals, six championships. That kind of perfection is just incredible."},
]},
{"role": "1", "content": [
{"type": "text", "text": "Oh man, the first thing that always pops into my head is that shot against the Cleveland Cavaliers back in '89. Jordan just rises, hangs in the air forever, and just sinks it"},
]},
],
[
{"role": "0", "content": [
{"type": "text", "text": "Hello and welcome to Planet in Peril. I'm your host, Alex. We're here today to discuss a really sobering new report that looks back at the last ten years of climate change, from 2015 to 2025. It paints a picture not just of steady warming, but of a dangerous acceleration. And to help us unpack this, I'm joined by our expert panel. Welcome Sam, Morgan, and Jordan."},
]},
{"role": "1", "content": [
{"type": "text", "text": "Hi Alex, it's great to be here. I'm Sam."},
]},
{"role": "2", "content": [
{"type": "text", "text": "Hello, uh, I'm Morgan. Good to be on."},
]},
{"role": "3", "content": [
{"type": "text", "text": "And I'm Jordan. Thanks for having me."},
]},
],
]
# load model
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(repo_id)
model = VibeVoiceForConditionalGeneration.from_pretrained(
    repo_id,
    device_map=device,
).eval()
# prepare inputs
inputs = processor.apply_chat_template(
conversations, return_dict=True, tokenize=True,
).to(device)
# Generate audio with a callback to track progress
start_time = time.time()
completed_samples = set()
with tqdm(desc="Generating") as pbar:
def monitor_progress(p_batch):
# p_batch format: [current_step, max_step, completion_step] for each sample
finished_samples = (p_batch[:, 0] == p_batch[:, 1]).nonzero(as_tuple=False).squeeze(1)
if finished_samples.numel() > 0:
for sample_idx in finished_samples.tolist():
if sample_idx not in completed_samples:
completed_samples.add(sample_idx)
completion_step = int(p_batch[sample_idx, 2])
print(f"Sample {sample_idx} completed at step {completion_step}", flush=True)
active_samples = p_batch[:, 0] < p_batch[:, 1]
if active_samples.any():
active_progress = p_batch[active_samples]
max_active_idx = torch.argmax(active_progress[:, 0])
p = active_progress[max_active_idx].detach().cpu()
else:
p = p_batch[0].detach().cpu()
pbar.total = int(p[1])
pbar.n = int(p[0])
pbar.update()
outputs = model.generate(
**inputs,
max_new_tokens=max_new_tokens,
cfg_scale=cfg_scale,
n_diffusion_steps=n_diffusion_steps,
monitor_progress=monitor_progress,
return_dict_in_generate=True,
)
generation_time = time.time() - start_time
print(f"Generation time: {generation_time:.2f} seconds")
# Save audio
for i, audio in enumerate(outputs.audio):
output_fp = f"{os.path.basename(repo_id)}_output_{i}.wav"
processor.save_audio(audio, output_fp)
print(f"Saved output to {output_fp}")
Second output with four voices
Training Details
The model couples a Transformer-based Large Language Model (LLM) with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.
- LLM: Qwen2.5-1.5B for this release.
- Tokenizers:
- Acoustic Tokenizer: Based on a σ-VAE variant (proposed in LatentLM), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24kHz input. Encoder/decoder components are ~340M parameters each.
- Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task.
- Diffusion Head: Lightweight module (4 layers, ~123M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a Denoising Diffusion Probabilistic Model (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference; a schematic of the CFG step is sketched after this list.
- Context Length: Trained with a curriculum increasing up to 65,536 tokens.
- Training Stages:
- Tokenizer Pre-training: Acoustic and Semantic tokenizers are pre-trained separately.
- VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion head parameters are trained. A curriculum learning strategy is used for the input sequence length (4K -> 16K -> 32K -> 64K). The text tokenizer is not explicitly specified, but the LLM (Qwen2.5) typically uses its own; audio is "tokenized" via the acoustic and semantic tokenizers.
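As a schematic of the classifier-free guidance step mentioned for the diffusion head above (a generic CFG combination with illustrative names, not code from this repository):
import torch

def cfg_combine(noise_pred_cond: torch.Tensor,
                noise_pred_uncond: torch.Tensor,
                cfg_scale: float = 1.3) -> torch.Tensor:
    # Standard classifier-free guidance: move the prediction away from the
    # unconditional estimate by cfg_scale; cfg_scale is the same generation
    # parameter exposed in the usage examples above.
    return noise_pred_uncond + cfg_scale * (noise_pred_cond - noise_pred_uncond)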
Responsible Usage
Direct intended uses
The VibeVoice model is intended for research use only, exploring highly realistic audio dialogue generation as detailed in the technical report.
Out-of-scope uses
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the MIT License. Use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios:
- Voice impersonation without explicit, recorded consent – cloning a real individual’s voice for satire, advertising, ransom, social‑engineering, or authentication bypass.
- Disinformation or impersonation – creating audio presented as genuine recordings of real people or events.
- Real‑time or low‑latency voice conversion – telephone or video‑conference “live deep‑fake” applications.
- Unsupported language – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music – VibeVoice is speech‑only and will not produce coherent non‑speech audio.
Risks and limitations
While efforts have been made to optimize the model through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5-1.5B in this release).
- Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
- English and Chinese only: Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
- Non-Speech Audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
- Overlapping Speech: The current model does not explicitly model or generate overlapping speech segments in conversations.
Recommendations
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
To mitigate the risks of misuse, we have:
- Embedded an audible disclaimer (e.g. "This segment was generated by AI") automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance (please see the contact information at the end of this model card).
- Logged inference requests (hashed) for abuse-pattern detection, with aggregated statistics published quarterly.
Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns.
Contact
This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at VibeVoice@microsoft.com. If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.