🎵 AudioMarathon: A Comprehensive Benchmark for Long-Context Audio Understanding and Efficient Inference in Multimodal LLMs
Abstract
AudioMarathon is a large-scale, multi-task benchmark designed to systematically evaluate audio language models on processing and comprehending long-form audio. It comprises 10 tasks built on three pillars: (1) long-context audio inputs, with durations from 90 to 300 seconds that correspond to encoded sequences of 2,250 to 7,500 audio tokens; (2) full domain coverage across speech, sound, and music; and (3) complex reasoning that requires multi-hop inference.
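The token figures imply a fixed encoding rate of 25 audio tokens per second (2,250 / 90 s = 7,500 / 300 s = 25); below is a minimal sketch of the conversion, assuming that rate holds across the duration range:

```python
# Token counts scale linearly with duration at the 25 tokens/s rate
# implied by the figures above (an inferred rate, not a stated constant).
TOKENS_PER_SECOND = 25

def audio_tokens(duration_s: float) -> int:
    """Encoded audio-token count for a clip of the given duration."""
    return int(duration_s * TOKENS_PER_SECOND)

assert audio_tokens(90.0) == 2_250
assert audio_tokens(300.0) == 7_500
```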
📊 Task Taxonomy & Statistics
Task Categories
AudioMarathon organizes tasks into four primary categories:
- Speech Understanding
- Acoustic Analysis
- Speaker Characterization
- Content Authenticity
Dataset Statistics
| Task ID | Dataset | Task Type | # Samples | Duration | Format | License | Status |
|---|---|---|---|---|---|---|---|
| 1 | LibriSpeech-long | Automatic Speech Recognition (ASR) | 204 | 1-4 min | FLAC 16kHz | CC BY 4.0 | ✅ Full |
| 2 | RACE | Speech Content Reasoning (SCR) | 820 | 2-4.22 min | WAV 16kHz | Apache-2.0 | ✅ Full |
| 3 | HAD | Speech Detection (SD) | 776 | 3-5 min | WAV 16kHz | CC BY 4.0 | ✅ Full |
| 4 | GTZAN | Music Genre Classification (MC) | 120 | 4 min | WAV 22.05kHz | Research Only | ✅ Full |
| 5 | TAU | Acoustic Scene Classification (ASC) | 1145 | 1.5-3.5 min | WAV 16kHz | CC BY 4.0 | ✅ Full |
| 6 | VESUS | Emotion Recognition (ER) | 185 | 1.5-2 min | WAV 16kHz | Academic Only | ✅ Full |
| 7 | SLUE | Speech Entity Recognition (SER) | 490 | 2.75-5 min | WAV 16kHz | CC BY 4.0 | ✅ Full |
| 8 | DESED | Sound Event Detection (SED) | 254 | 4.5-5 min | WAV 16kHz | Mixed CC* | ✅ Full |
| 9 | VoxCeleb-Gender | Speaker Gender Recognition (SGR) | 1614 | 1.5-3.5 min | WAV 16kHz | CC BY 4.0 | ✅ Full |
| 10 | VoxCeleb-Age | Speaker Age Recognition (SAR) | 959 | 1.5-3.5 min | WAV 16kHz | CC BY 4.0 | ✅ Full |
Total: 6567 samples | ~64 GB
* DESED requires per-clip Freesound attribution (CC0/CC BY 3.0/4.0)
🎯 Benchmark Objectives
AudioMarathon is designed to evaluate:
- Long-Audio Processing: Ability to maintain coherence across extended audio sequences
- Multi-Domain Generalization: Performance across diverse acoustic environments and tasks
- Semantic Understanding: Comprehension of spoken content, not just acoustic patterns
- Efficiency: Computational requirements for long-form audio processing
📁 Directory Structure
Dataset/
├── librispeech-long/                  # Automatic Speech Recognition
│   ├── README.md
│   ├── test-clean/                    # Clean test set
│   ├── test-other/                    # Noisy test set
│   ├── dev-clean/                     # Clean dev set
│   └── dev-other/                     # Noisy dev set
│
├── race_audio/                        # Reading Comprehension
│   ├── race_benchmark.json            # Task metadata
│   └── test/                          # Audio articles
│       └── article_*/
│
├── HAD/                               # Half-truth Audio Detection
│   ├── concatenated_audio/
│   ├── had_audio_classification_task.json
│   ├── real/                          # Authentic audio
│   └── fake/                          # Synthesized audio
│
├── GTZAN/                             # Music Genre Classification
│   ├── concatenated_audio/
│   ├── music_genre_classification_meta.json
│   └── wav/                           # Genre-labeled music clips
│
├── TAU/                               # Acoustic Scene Classification
│   ├── acoustic_scene_task_meta.json
│   ├── LICENSE
│   ├── README.md
│   └── concatenated_resampled/
│
├── VESUS/                             # Emotion Recognition
│   ├── audio_emotion_dataset.json
│   └── [1-10]/                        # Speaker directories
│
├── SLUE/                              # Named Entity Recognition
│   ├── merged_audio_data.json
│   ├── dev/
│   ├── test/
│   └── fine-tune/
│
├── DESED/                             # Sound Event Detection
│   ├── DESED_dataset/
│   ├── license_public_eval.tsv
│   └── concatenated_audio/
│
├── VoxCeleb/                          # Speaker Recognition
│   ├── concatenated_audio/
│   │   └── gender_id_task_meta.json
│   ├── concatenated_audio_age/
│   │   └── age_classification_task_meta.json
│   └── txt/
│
└── README.md                          # This file
🎯 Dataset Details
1. LibriSpeech-long
Task: Automatic Speech Recognition (ASR)
Description: Long-form English speech from audiobooks
Format: FLAC files with .trans.txt transcriptions
Splits: test-clean, test-other, dev-clean, dev-other
License: CC BY 4.0
Source: https://github.com/google-deepmind/librispeech-long
Structure:
librispeech-long/
  test-clean/
    <speaker_id>/
      <chapter_id>/
        <speaker>-<chapter>-<utterance>.flac
        <speaker>-<chapter>.trans.txt
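A minimal sketch for pairing each FLAC file with its reference text, assuming the standard LibriSpeech convention that every `<speaker>-<chapter>.trans.txt` holds one `UTTERANCE-ID transcript` line per audio file:

```python
from pathlib import Path

def load_transcripts(split_dir: str) -> dict[str, str]:
    """Map utterance IDs to reference transcripts for one split."""
    transcripts = {}
    for trans_file in Path(split_dir).rglob("*.trans.txt"):
        for line in trans_file.read_text().splitlines():
            utt_id, text = line.split(" ", 1)  # 'ID transcript' per line
            transcripts[utt_id] = text
    return transcripts

# Usage: iterate FLAC files and look up each reference by file stem.
split = Path("Dataset/librispeech-long/test-clean")
refs = load_transcripts(str(split))
for flac in split.rglob("*.flac"):
    reference = refs[flac.stem]
```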
2. RACE
Task: Reading Comprehension from Audio
Description: Multiple-choice questions based on audio passages
Format: WAV files + JSON metadata
Sample Count: 820 questions drawn from ~200 audio articles
License: Apache-2.0 (pending verification)
Source: https://huggingface.co/datasets/ehovy/race
JSON Format:
{
  "article_id": 7870154,
  "audio_path": "test/article_7870154/audio.wav",
  "question": "What did the author do...?",
  "options": ["A", "B", "C", "D"],
  "answer": "A"
}
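A minimal sketch that renders one record as a lettered multiple-choice prompt; the field names come from the example above, and treating `race_benchmark.json` as a flat list of such records is an assumption:

```python
import json

# Assumption: race_benchmark.json is a list of records shaped like the example.
with open("Dataset/race_audio/race_benchmark.json") as f:
    records = json.load(f)

def to_prompt(rec: dict) -> str:
    """Format a record's question and options as an A-D multiple-choice prompt."""
    opts = "\n".join(f"{letter}. {opt}" for letter, opt in zip("ABCD", rec["options"]))
    return f"{rec['question']}\n{opts}\nAnswer with a single letter."

print(to_prompt(records[0]))
```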
3. HAD
Task: Half-truth Audio Detection
Description: Classify audio as real or containing synthesized segments
License: CC BY 4.0
Source: https://zenodo.org/records/10377492
JSON Format:
{
  "path": "real/HAD_train_real_249.wav",
  "question": "Is this audio authentic or fake?",
  "choice_a": "Real",
  "choice_b": "Fake",
  "answer_gt": "Real",
  "duration_seconds": 297.78
}
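Since most tasks share this `answer_gt` layout, scoring reduces to a simple accuracy loop; a hedged sketch where `predict_fn` is a placeholder for your model:

```python
import json

# Assumption: the task JSON is a list of records shaped like the example.
with open("Dataset/HAD/had_audio_classification_task.json") as f:
    samples = json.load(f)

def evaluate(predict_fn) -> float:
    """Accuracy of a model over the HAD binary task.

    predict_fn is a hypothetical callable taking (audio_path, choice_a,
    choice_b) and returning the chosen string, e.g. 'Real' or 'Fake'.
    """
    correct = sum(
        predict_fn(s["path"], s["choice_a"], s["choice_b"]) == s["answer_gt"]
        for s in samples
    )
    return correct / len(samples)
```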
4. GTZAN
Task: Music Genre Classification
Description: 10-genre music classification dataset
Genres: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock
⚠️ License: Research Use Only
Source: https://www.kaggle.com/datasets/andradaolteanu/gtzan-dataset-music-genre-classification
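Because GTZAN is the only 22.05 kHz source in an otherwise 16 kHz benchmark, you may want to resample before batch processing; a minimal sketch using librosa and soundfile:

```python
import librosa
import soundfile as sf

def resample_to_16k(src: str, dst: str) -> None:
    """Resample a 22.05 kHz GTZAN clip to the benchmark's prevailing 16 kHz."""
    y, sr = librosa.load(src, sr=None)  # sr=None preserves the native rate
    y_16k = librosa.resample(y, orig_sr=sr, target_sr=16000)
    sf.write(dst, y_16k, 16000, subtype="PCM_16")
```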
5. TAU
Task: Acoustic Scene Classification
Description: Urban sound scene recognition
Scenes: airport, bus, metro, park, public_square, shopping_mall, street_pedestrian, street_traffic, tram
License: CC BY 4.0
Source: https://zenodo.org/records/7870258
Files:
- acoustic_scene_task_meta.json: Task metadata
- LICENSE: Original license text
- concatenated_resampled/: Resampled audio files
6. VESUS
Task: Emotion Recognition from Speech
Description: Actors reading neutral script with emotional inflections
Emotions: neutral, angry, happy, sad, fearful
Actors: 10 (5 male, 5 female)
⚠️ License: Academic Use Only (access by request)
Source: https://engineering.jhu.edu/nsa/vesus/
7. SLUE
Task: Named Entity Recognition (NER) from Speech
Description: Count named entities in audio segments
Entity Types: LAW, NORP, ORG, PLACE, QUANT, WHEN
License: CC BY 4.0 (VoxPopuli-derived)
JSON Format:
{
  "path": "dev/concatenated_audio_with/concatenated_audio_0000.wav",
  "question": "How many named entities appear?",
  "options": ["49 entities", "51 entities", "52 entities", "46 entities"],
  "answer_gt": "A",
  "entity_count": 49
}
8. DESED
Task: Sound Event Detection
Description: Detect domestic sound events
Events: Alarm bell, Blender, Cat, Dishes, Dog, Electric shaver, Frying, Running water, Speech, Vacuum cleaner
License: Mixed CC (Freesound sources: CC0, CC BY 3.0/4.0)
Source: https://github.com/turpaultn/DESED
⚠️ ATTRIBUTION REQUIRED:
- Audio clips are sourced from Freesound.org
- Each clip carries an individual CC license
- Attribution must be maintained when redistributing
- See license_public_eval.tsv for per-file credits
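A minimal sketch for pulling the per-file credits out of the TSV; the exact column layout of `license_public_eval.tsv` is an assumption, so inspect the header row before relying on specific fields:

```python
import csv

with open("Dataset/DESED/license_public_eval.tsv", newline="") as f:
    rows = list(csv.reader(f, delimiter="\t"))

header, credits = rows[0], rows[1:]
print(header)  # e.g. filename / Freesound link / license (layout assumed)
for row in credits[:5]:
    print(dict(zip(header, row)))
```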
9. VoxCeleb-Gender
Task: Speaker Gender Identification
Description: Binary classification (male/female)
License: CC BY 4.0
Source: https://www.robots.ox.ac.uk/~vgg/data/voxceleb/
JSON Format:
{
  "path": "concatenated_audio/speaker_001.wav",
  "question": "What is the gender of the speaker?",
  "choice_a": "Male",
  "choice_b": "Female",
  "answer_gt": "A"
}
10. VoxCeleb-Age
Task: Speaker Age Classification
Description: Multi-class age group classification
Age Groups: 20s, 30s, 40s, 50s, 60s, 70s
License: CC BY 4.0
Source: VoxCeleb + https://github.com/hechmik/voxceleb_enrichment_age_gender
Note: Age/gender labels are derivative annotations on VoxCeleb corpus
🔧 Usage Guidelines
You can load the dataset via Hugging Face datasets:
from datasets import load_dataset

ds = load_dataset("Hezep/AudioMarathon")
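Given the ~64 GB footprint, streaming can avoid a full download; this sketch assumes the dataset's default configuration supports the standard `datasets` streaming mode:

```python
from datasets import load_dataset

# Streamed access: records are fetched lazily instead of downloaded up front.
ds = load_dataset("Hezep/AudioMarathon", streaming=True)
split = next(iter(ds.values()))   # first available split
print(next(iter(split)).keys())   # inspect one record's fields
```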
Special Requirements
- GTZAN: research use only
- VESUS: academic use only; access granted by request
- DESED: per-clip Freesound attribution must be preserved (see license_public_eval.tsv)
Disclaimer
This benchmark is provided "AS IS" without warranty. Users bear sole responsibility for:
- License compliance verification
- Obtaining restricted datasets independently
- Proper attribution maintenance
- Determining fitness for specific use cases
📊 Benchmark Statistics
Overview
| Metric | Value |
|---|---|
| Total Tasks | 10 |
| Total Samples | 6567 |
| Total Duration | 392h |
| Total Size | ~64 GB |
| Languages | English |
| Domains | Speech, Music, Soundscape, Environmental |
Audio Characteristics
| Property | Range | Predominant |
|---|---|---|
| Sampling Rate | 16 kHz - 22.05 kHz | 16 kHz (90%) |
| Duration | 1 - 5 min | 2-3 min (avg) |
| Channels | Mono | Mono (100%) |
| Format | FLAC, WAV | WAV (80%) |
| Bit Depth | 16-bit | 16-bit (100%) |
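These properties can be spot-checked per file without decoding the full audio; a sketch using soundfile's metadata reader (the path reuses the HAD example above):

```python
import soundfile as sf

info = sf.info("Dataset/HAD/real/HAD_train_real_249.wav")
# Expected per the table: 16000 Hz, 1 channel (mono), 'PCM_16' subtype.
print(info.samplerate, info.channels, info.subtype, round(info.duration, 2))
```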
Task Distribution
| Category | # Tasks | # Samples | % of Total |
|---|---|---|---|
| Speech Understanding | 3 | 1514 | 23% |
| Acoustic Analysis | 3 | 1519 | 23% |
| Speaker Characterization | 3 | 2758 | 42% |
| Content Authenticity | 1 | 776 | 12% |
🔗 Related Resources
- Paper (arXiv): https://arxiv.org/abs/2510.07293
- GitHub Repository: https://github.com/DabDans/AudioMarathon
- Hugging Face Dataset: https://huggingface.co/datasets/Hezep/AudioMarathon
📄 Citation
If you use AudioMarathon in your research, please cite:
@article{he2025audiomarathon,
title={AudioMarathon: A Comprehensive Benchmark for Long-Context Audio Understanding and Efficiency in Audio LLMs},
author={He, Peize and Wen, Zichen and Wang, Yubo and Wang, Yuxuan and Liu, Xiaoqian and Huang, Jiajie and Lei, Zehui and Gu, Zhuangcheng and Jin, Xiangqi and Yang, Jiabing and Li, Kai and Liu, Zhifei and Li, Weijia and Wang, Cunxiang and He, Conghui and Zhang, Linfeng},
journal={arXiv preprint arXiv:2510.07293},
year={2025},
url={https://arxiv.org/abs/2510.07293}
}
Citing Component Datasets
When using specific tasks, please also cite the original datasets:
- LibriSpeech: Panayotov et al. (2015)
- RACE: Lai et al. (2017)
- HAD: Yi et al. (2021)
- GTZAN: Tzanetakis & Cook (2002)
- TAU: DCASE Challenge (Mesaros et al., 2018)
- VESUS: Sager et al. (2019)
- SLUE: Shon et al. (2022)
- DESED: Turpault et al. (2019)
- VoxCeleb: Nagrani et al. (2017, 2018)
Full BibTeX entries can be found in the datasets' original publications.
🤝 Contributing & Support
📧 Contact
- GitHub Issues: https://github.com/DabDans/AudioMarathon/issues
🙏 Acknowledgments
AudioMarathon builds upon the pioneering work of numerous research teams. We gratefully acknowledge the creators of:
- LibriSpeech (Panayotov et al.)
- RACE (Lai et al.)
- HAD (Yi et al.)
- GTZAN (Tzanetakis & Cook)
- TAU/DCASE (Mesaros et al.)
- VESUS (Sager et al., JHU)
- SLUE (Shon et al.)
- DESED (Turpault et al. & Freesound community)
- VoxCeleb (Nagrani et al., Oxford VGG)
Their datasets enable comprehensive audio understanding research.
AudioMarathon
A Comprehensive Long-Form Audio Understanding Benchmark
Version 1.0.0 | October 2025