Whisper-Large-v3 Dutch - Mid-High Quality Filtered Synthetic Data

This model is a fine-tuned version of openai/whisper-large-v3 for Dutch automatic speech recognition (ASR). It was trained on Common Voice 17.0 Dutch combined with WAVe-filtered synthetic speech data using a balanced quality threshold (q ≥ 0.5).

Introduction

How the Data Was Created

The training data combines real speech from Common Voice 17.0 with synthetic speech that was generated in two stages and then quality-filtered:

  1. Transcript Generation: We used GPT-4o-mini to generate Dutch transcripts that match the word count distribution observed in Common Voice, ensuring realistic utterance lengths and diverse linguistic content.

  2. Speech Synthesis: Each transcript was converted to audio using OpenAI's TTS-1 model with 9 different voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer), producing 34,898 synthetic samples.

  3. Quality Filtering with WAVe: Raw synthetic speech often contains defects such as mispronunciations, omitted words, or prosodic anomalies. To address this, we applied WAVe (Word-Aligned Verification), a model that assesses audio-text alignment at the word level rather than the sentence level. WAVe uses multi-head attention to align each word to its corresponding audio frames and assigns per-word confidence scores via a GLU-based scorer. For this model, we retained samples scoring above the balanced threshold (q ≥ 0.5), resulting in 30,182 mid-to-high quality synthetic samples.

How the Model Was Created

The model was fine-tuned from openai/whisper-large-v3 using the Hugging Face Transformers library with the following approach:

  1. Mixed Training: Combined 34,952 real speech samples from Common Voice 17.0 Dutch with 30,182 WAVe-filtered synthetic samples (65,134 total).

  2. Optimization: Trained for 5 epochs with a learning rate of 5e-6, global batch size of 256, and BF16 precision on an NVIDIA H200 GPU.

  3. Checkpoint Selection: The best checkpoint was selected based on validation loss, occurring at step 500 with a validation loss of 0.0558.

This balanced filtering approach achieves strong cross-domain generalization (17.25% MLS WER) while requiring 7% fewer training steps than training on all synthetic data.

Model Details

| Property | Value |
|---|---|
| Base Model | openai/whisper-large-v3 |
| Language | Dutch (nl) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 1550M |
| Training Data | Common Voice 17.0 + Mid-High Quality Synthetic (q ≥ 0.5) |
| Total Training Samples | 65,134 |
| Sampling Rate | 16 kHz |

Evaluation Results

This Model (whisper-large-v3-mixed-cv-nl)

| Metric | Value |
|---|---|
| Validation Loss | 0.0570 |
| Validation WER | 3.63% |
| Test WER (Common Voice) | 4.48% |
| Test WER (MLS) | 17.25% |
| Best Checkpoint | Step 500 |
| Max Training Steps | 1,270 |

Comparison with Other Training Configurations (Whisper-Large-v3 Dutch)

| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---|---|---|---|---|---|
| Common Voice Only | 680 | 0.0549 | 3.56% | 4.39% | 22.43% |
| High-Quality Filtered + CV | 890 | 0.0520 | 3.57% | 4.43% | 20.29% |
| Mid-High Quality Filtered + CV | 1,270 | 0.0570 | 3.63% | 4.48% | 17.25% |
| All Synthetic + CV (Unfiltered) | 1,365 | 0.0560 | 3.61% | 4.44% | 17.02% |
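The test WER figures above can be approximated with the WER metric from the evaluate library (backed by jiwer). The snippet below is a rough sketch, assuming greedy decoding, simple lowercasing as text normalization, and access to the gated Common Voice dataset; it is not the exact evaluation script behind this card.

from datasets import load_dataset, Audio
from transformers import pipeline
import evaluate

# Common Voice 17.0 Dutch test split (gated dataset; requires accepting its terms on the Hub)
cv_test = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="test")
cv_test = cv_test.cast_column("audio", Audio(sampling_rate=16000))

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-mixed-cv-nl",
    device="cuda",
)

wer_metric = evaluate.load("wer")  # word error rate, computed via jiwer

predictions, references = [], []
for sample in cv_test.select(range(100)):  # small subset for illustration
    out = transcriber({"array": sample["audio"]["array"], "sampling_rate": sample["audio"]["sampling_rate"]})
    predictions.append(out["text"].lower())
    references.append(sample["sentence"].lower())

print(f"WER: {100 * wer_metric.compute(predictions=predictions, references=references):.2f}%")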

Key Performance Highlights

  • Strong cross-domain performance: 17.25% MLS WER (23.1% relative improvement vs baseline)
  • Near-optimal efficiency: 7% fewer training steps than the unfiltered configuration, while still applying quality filtering
  • Balanced approach: 86.5% of synthetic data included (30,182 of 34,898 samples)
  • Competitive in-domain: 4.48% Test WER on Common Voice

Training Data

Dataset Composition

| Source | Samples | Description |
|---|---|---|
| Common Voice 17.0 Dutch | 34,952 | Real speech from Mozilla's crowdsourced dataset |
| Synthetic Transcript NL (q ≥ 0.5) | 30,182 | WAVe-filtered TTS audio (mid-high quality) |
| Total | 65,134 | |
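A minimal sketch of how such a mixed training set could be assembled with the datasets library; the split names, the "wave_score" quality column, and the text column names are assumptions rather than the exact preprocessing used for this model.

from datasets import load_dataset, concatenate_datasets, Audio

# Real speech: Common Voice 17.0 Dutch (gated dataset; requires accepting its terms)
real = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="train")

# Synthetic speech; split name and "wave_score" column are assumptions for illustration
synthetic = load_dataset("yuriyvnv/synthetic_transcript_nl", split="train")
synthetic = synthetic.filter(lambda ex: ex["wave_score"] >= 0.5)  # balanced threshold

# Bring both sources to a common schema and 16 kHz before concatenating
real = real.cast_column("audio", Audio(sampling_rate=16000)).select_columns(["audio", "sentence"])
synthetic = synthetic.cast_column("audio", Audio(sampling_rate=16000)).select_columns(["audio", "sentence"])

# Combined mixed training set, shuffled so real and synthetic samples are interleaved
mixed = concatenate_datasets([real, synthetic]).shuffle(seed=42)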

Synthetic Data Generation Pipeline

The synthetic dataset (yuriyvnv/synthetic_transcript_nl) was generated using the following steps (a sketch of the first two follows the list):

  1. Transcript Generation: GPT-4o-mini, matching Common Voice word count distribution
  2. Speech Synthesis: OpenAI TTS-1 model with 9 voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer)
  3. Quality Filtering: WAVe model with balanced threshold q ≥ 0.5
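The first two steps could be reproduced roughly as follows. This is a minimal sketch against the OpenAI Python SDK; the prompt wording, file naming, and length targets are illustrative assumptions, not the actual generation script.

import random
from openai import OpenAI

client = OpenAI()
VOICES = ["alloy", "ash", "coral", "echo", "fable", "nova", "onyx", "sage", "shimmer"]

def generate_sample(target_words: int, idx: int) -> None:
    # Step 1: ask GPT-4o-mini for a Dutch transcript of roughly the target length
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write one natural Dutch sentence of about {target_words} words.",
        }],
    )
    transcript = chat.choices[0].message.content.strip()

    # Step 2: synthesize the transcript with TTS-1 using one of the nine voices
    speech = client.audio.speech.create(
        model="tts-1",
        voice=random.choice(VOICES),
        input=transcript,
        response_format="wav",
    )
    with open(f"synthetic_{idx:05d}.wav", "wb") as f:
        f.write(speech.content)

# Target lengths would be drawn from the Common Voice word-count distribution
generate_sample(target_words=10, idx=0)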

WAVe Quality Distribution (Dutch Synthetic Data)

| Quality Level | Samples | Percentage | Used in This Model |
|---|---|---|---|
| High (q ≥ 0.8) | 10,555 | 30.2% | Yes |
| Medium (0.5 ≤ q < 0.8) | 19,627 | 56.2% | Yes |
| Low (q < 0.5) | 4,716 | 13.5% | No |

This threshold retains 86.5% of the synthetic dataset, filtering only the lowest-quality samples while preserving volume for robust training.
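How the per-word confidence scores collapse into the sample-level score q that the threshold is applied to is not spelled out here; the sketch below illustrates one plausible aggregation (the mean word score), which is an assumption for illustration only.

def sample_quality(word_scores: list[float]) -> float:
    # Aggregate per-word confidence scores into one sample-level score q
    # (mean aggregation is an assumption for illustration)
    return sum(word_scores) / len(word_scores)

def keep_sample(word_scores: list[float], threshold: float = 0.5) -> bool:
    return sample_quality(word_scores) >= threshold

# Example: one clearly misaligned word lowers q but the sample is still kept at q >= 0.5
print(keep_sample([0.95, 0.91, 0.15, 0.88]))  # q ≈ 0.72 → kept
print(keep_sample([0.40, 0.35, 0.20, 0.45]))  # q ≈ 0.35 → filtered out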

Training Procedure

Hyperparameters

| Parameter | Value |
|---|---|
| Learning Rate | 5e-6 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
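These hyperparameters map onto Hugging Face Seq2SeqTrainingArguments roughly as follows. This is a sketch assuming a recent Transformers version; the per-device batch size / gradient-accumulation split, output directory, and save settings are assumptions, only the values listed above come from the actual run.

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-mixed-cv-nl",
    learning_rate=5e-6,
    warmup_steps=200,
    num_train_epochs=5,
    per_device_train_batch_size=32,   # assumption: 32 x 8 accumulation = 256 global batch
    gradient_accumulation_steps=8,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    predict_with_generate=True,
)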

Training Infrastructure

  • GPU: NVIDIA H200 (141GB VRAM)
  • Operating System: Ubuntu 22.04
  • Framework: Hugging Face Transformers

Training Curve

Step  100: val_loss = 0.0612
Step  200: val_loss = 0.0584
Step  300: val_loss = 0.0572
Step  450: val_loss = 0.0564
Step  500: val_loss = 0.0558 ← Best checkpoint
Step  600: val_loss = 0.0592
Step  800: val_loss = 0.0623
Step 1000: val_loss = 0.0632
Step 1250: val_loss = 0.0694

Usage

Transcription Pipeline

from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-mixed-cv-nl",
    device="cuda"
)

result = transcriber("path/to/dutch_audio.wav")
print(result["text"])
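For recordings longer than Whisper's 30-second window, the same pipeline can transcribe in chunks; the chunk length and batch size below are illustrative values, not settings taken from this card.

# Long-form transcription: split audio into 30 s chunks and batch them through the model
result = transcriber(
    "path/to/long_dutch_audio.wav",
    chunk_length_s=30,
    batch_size=8,
    return_timestamps=True,  # also returns segment-level timestamps
)
print(result["text"])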

Direct Model Usage

from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

# Load the processor (feature extractor + tokenizer) and the model weights
processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-large-v3-mixed-cv-nl")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-large-v3-mixed-cv-nl")
model.to("cuda")

# Load audio at the 16 kHz sampling rate Whisper expects
audio, sr = librosa.load("path/to/dutch_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

# Generate token IDs and decode them to text
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)

Specifying Language

model.generation_config.language = "nl"
model.generation_config.task = "transcribe"
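With recent Transformers versions, the language and task can also be passed per call instead of being set on the generation config; a small sketch reusing the objects from the snippets above:

# Per-call form with the direct-usage model from above
predicted_ids = model.generate(input_features, language="nl", task="transcribe")

# Per-call form with the pipeline from above
result = transcriber(
    "path/to/dutch_audio.wav",
    generate_kwargs={"language": "nl", "task": "transcribe"},
)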

Methodology

This model leverages WAVe (Word-Aligned Verification), a word-level quality assessment method for filtering synthetic speech data. Unlike sentence-level filtering approaches, WAVe:

  • Aligns each word to its corresponding audio frames using multi-head attention
  • Assigns per-word confidence scores via a GLU-based scorer
  • Detects localized synthesis errors (mispronunciations, omitted words, prosodic anomalies)
  • Achieves 6.5% improvement over sentence-level filtering methods

The balanced threshold (q ≥ 0.5) retains 86.5% of synthetic samples, striking an optimal balance between data volume and quality for robust cross-domain generalization.
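This card does not include WAVe's implementation; the module below is only an illustrative PyTorch sketch of the word-level idea described above (cross-attention from word embeddings to audio frames, followed by a GLU-based scorer), with all dimensions and layer choices as assumptions.

import torch
import torch.nn as nn

class WordAlignedScorer(nn.Module):
    """Illustrative sketch: align word embeddings to audio frames and score each word."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Multi-head attention: each word (query) attends to the audio frames (keys/values)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # GLU-based scorer producing one confidence score per word
        self.scorer = nn.Sequential(
            nn.Linear(d_model, 2 * d_model),
            nn.GLU(dim=-1),
            nn.Linear(d_model, 1),
            nn.Sigmoid(),
        )

    def forward(self, word_emb: torch.Tensor, audio_frames: torch.Tensor) -> torch.Tensor:
        # word_emb: (batch, n_words, d_model); audio_frames: (batch, n_frames, d_model)
        aligned, _ = self.cross_attn(query=word_emb, key=audio_frames, value=audio_frames)
        return self.scorer(aligned).squeeze(-1)  # (batch, n_words) per-word scores in [0, 1]

# Toy example: 1 utterance, 6 words, 200 audio frames
scores = WordAlignedScorer()(torch.randn(1, 6, 512), torch.randn(1, 200, 512))
print(scores.shape)  # torch.Size([1, 6])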

When to Use This Model

This model is ideal when:

  • Balanced performance required: Strong on both in-domain and cross-domain benchmarks
  • Cross-domain robustness is critical: 23.1% relative improvement on MLS vs baseline
  • Reasonable compute budget: 7% fewer steps than unfiltered, 43% more than high-quality only

Consider other variants based on your needs; the tradeoff table below summarizes the options.

Quality vs Quantity Tradeoff

This model represents the optimal balance point for Whisper-Large-v3:

| Approach | Synthetic Samples | Training Steps | Test WER (CV) | Test WER (MLS) | Efficiency |
|---|---|---|---|---|---|
| High-Quality (q ≥ 0.8) | 10,555 | 890 | 4.43% | 20.29% | Best |
| Mid-High (q ≥ 0.5) | 30,182 | 1,270 | 4.48% | 17.25% | Good |
| Unfiltered | 34,898 | 1,365 | 4.44% | 17.02% | Baseline |

Key insight: The mid-high threshold nearly matches the unfiltered configuration's cross-domain performance (17.25% vs 17.02% MLS WER) while filtering out the 13.5% lowest-quality samples, making it the sweet spot for practical applications.

Limitations

  • Domain specificity: Optimized for general Dutch; may underperform on technical domains
  • Acoustic conditions: Trained on clean speech; noise robustness not guaranteed
  • Dialect coverage: Performance may vary across Dutch regional variants

Citation

@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}


License

Apache 2.0
