
🎵 AudioMarathon: A Comprehensive Benchmark for Long-Context Audio Understanding and Efficient Inference in Multimodal LLMs

License: CC BY-NC 4.0


Abstract

AudioMarathon is a large-scale, multi-task benchmark designed to systematically evaluate how audio language models process and comprehend long-form audio. It comprises 10 tasks built on three pillars: (1) long-context audio inputs, with durations from 90.0 to 300.0 seconds corresponding to encoded sequences of 2,250 to 7,500 audio tokens; (2) full domain coverage across speech, sound, and music; and (3) complex reasoning that requires multi-hop inference.
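
The figures above imply a fixed encoding rate: 2,250 tokens for 90 s and 7,500 tokens for 300 s both correspond to 25 audio tokens per second. A minimal sketch of the conversion:

```python
# Derived from the abstract: 2250 / 90 = 7500 / 300 = 25 tokens/second.
TOKENS_PER_SECOND = 2250 / 90  # = 25.0

def audio_tokens(duration_s):
    """Estimated encoded sequence length for a clip of the given duration."""
    return int(duration_s * TOKENS_PER_SECOND)

print(audio_tokens(90), audio_tokens(300))  # 2250 7500
```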

📊 Task Taxonomy & Statistics

Task Categories

AudioMarathon organizes tasks into four primary categories:

  1. Speech Understanding
  2. Acoustic Analysis
  3. Speaker Characterization
  4. Content Authenticity

Dataset Statistics

| Task ID | Dataset | Task Type | # Samples | Duration | Format | License | Status |
|---|---|---|---|---|---|---|---|
| 1 | LibriSpeech-long | Automatic Speech Recognition (ASR) | 204 | 1-4 min | FLAC 16 kHz | CC BY 4.0 | ✅ Full |
| 2 | RACE | Speech Content Reasoning (SCR) | 820 | 2-4.2 min | WAV 16 kHz | Apache-2.0 | ✅ Full |
| 3 | HAD | Speech Detection (SD) | 776 | 3-5 min | WAV 16 kHz | CC BY 4.0 | ✅ Full |
| 4 | GTZAN | Music Classification (MC) | 120 | 4 min | WAV 22 kHz | Research Only | ✅ Full |
| 5 | TAU | Acoustic Scene Classification (ASC) | 1145 | 1.5-3.5 min | WAV 16 kHz | CC BY 4.0 | ✅ Full |
| 6 | VESUS | Emotion Recognition (ER) | 185 | 1.5-2 min | WAV 16 kHz | Academic Only | ✅ Full |
| 7 | SLUE | Speech Entity Recognition (SER) | 490 | 2.75-5 min | WAV 16 kHz | CC BY 4.0 | ✅ Full |
| 8 | DESED | Sound Event Detection (SED) | 254 | 4.5-5 min | WAV 16 kHz | Mixed CC* | ✅ Full |
| 9 | VoxCeleb-Gender | Speaker Gender Recognition (SGR) | 1614 | 1.5-3.5 min | WAV 16 kHz | CC BY 4.0 | ✅ Full |
| 10 | VoxCeleb-Age | Speaker Age Recognition (SAR) | 959 | 1.5-3.5 min | WAV 16 kHz | CC BY 4.0 | ✅ Full |

Total: 6567 samples | ~64 GB

* DESED requires per-clip Freesound attribution (CC0/CC BY 3.0/4.0)


🎯 Benchmark Objectives

AudioMarathon is designed to evaluate:

  1. Long-Audio Processing: Ability to maintain coherence across extended audio sequences
  2. Multi-Domain Generalization: Performance across diverse acoustic environments and tasks
  3. Semantic Understanding: Comprehension of spoken content, not just acoustic patterns
  4. Efficiency: Computational requirements for long-form audio processing

πŸ“ Directory Structure

Dataset/
├── librispeech-long/        # Automatic Speech Recognition
│   ├── README.md
│   ├── test-clean/          # Clean test set
│   ├── test-other/          # Noisy test set
│   ├── dev-clean/           # Clean dev set
│   └── dev-other/           # Noisy dev set
│
├── race_audio/              # Reading Comprehension
│   ├── race_benchmark.json  # Task metadata
│   └── test/                # Audio articles
│       └── article_*/
│
├── HAD/                     # Half-truth Audio Detection
│   └── concatenated_audio/
│       ├── had_audio_classification_task.json
│       ├── real/            # Authentic audio
│       └── fake/            # Synthesized audio
│
├── GTZAN/                   # Music Genre Classification
│   └── concatenated_audio/
│       ├── music_genre_classification_meta.json
│       └── wav/             # Genre-labeled music clips
│
├── TAU/                     # Acoustic Scene Classification
│   ├── acoustic_scene_task_meta.json
│   ├── LICENSE
│   ├── README.md
│   └── concatenated_resampled/
│
├── VESUS/                   # Emotion Recognition
│   ├── audio_emotion_dataset.json
│   └── [1-10]/              # Speaker directories
│
├── SLUE/                    # Named Entity Recognition
│   ├── merged_audio_data.json
│   ├── dev/
│   ├── test/
│   └── fine-tune/
│
├── DESED/                   # Sound Event Detection
│   └── DESED_dataset/
│       ├── license_public_eval.tsv
│       └── concatenated_audio/
│
├── VoxCeleb/                # Speaker Recognition
│   ├── concatenated_audio/
│   │   └── gender_id_task_meta.json
│   ├── concatenated_audio_age/
│   │   └── age_classification_task_meta.json
│   └── txt/
│
└── README.md                # This file

🎯 Dataset Details

1. LibriSpeech-long

Task: Automatic Speech Recognition (ASR)
Description: Long-form English speech from audiobooks
Format: FLAC files with .trans.txt transcriptions
Splits: test-clean, test-other, dev-clean, dev-other
License: CC BY 4.0
Source: https://github.com/google-deepmind/librispeech-long

Structure:

librispeech-long/
  test-clean/
    <speaker_id>/
      <chapter_id>/
        <speaker>-<chapter>-<utterance>.flac
        <speaker>-<chapter>.trans.txt
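
In the standard LibriSpeech layout, each line of a `.trans.txt` file pairs an utterance ID with its uppercase transcript. A small parser sketch (the sample lines below are illustrative, not taken from the dataset):

```python
def parse_trans_txt(text):
    """Return {utterance_id: transcript} from .trans.txt content,
    where each line is "<speaker>-<chapter>-<utterance> TRANSCRIPT"."""
    entries = {}
    for line in text.strip().splitlines():
        utt_id, _, transcript = line.partition(" ")
        entries[utt_id] = transcript
    return entries

# Illustrative sample content (not real dataset lines).
sample = (
    "1089-134686-0000 HE HOPED THERE WOULD BE STEW\n"
    "1089-134686-0001 STUFF IT INTO YOU"
)
print(parse_trans_txt(sample))
```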

2. RACE

Task: Reading Comprehension from Audio
Description: Multiple-choice questions based on audio passages
Format: WAV files + JSON metadata
Sample Count: 820 questions across ~200 articles
License: Apache-2.0 (verify)
Source: https://huggingface.co/datasets/ehovy/race

JSON Format:

{
  "article_id": 7870154,
  "audio_path": "test/article_7870154/audio.wav",
  "question": "What did the author do...?",
  "options": ["A", "B", "C", "D"],
  "answer": "A"
}
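
Records in this format can be scored by comparing a model's predicted letter against the `answer` field. A minimal sketch with made-up records and predictions (not real benchmark data):

```python
# Illustrative RACE-style records; only the fields needed for scoring
# are shown. IDs and answers here are invented for the example.
records = [
    {"article_id": 1, "answer": "A"},
    {"article_id": 2, "answer": "C"},
]
predictions = {1: "A", 2: "B"}  # hypothetical model outputs

correct = sum(predictions.get(r["article_id"]) == r["answer"] for r in records)
accuracy = correct / len(records)
print(f"accuracy: {accuracy:.2f}")  # accuracy: 0.50
```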

3. HAD

Task: Half-truth Audio Detection
Description: Classify audio as real or containing synthesized segments
License: CC BY 4.0
Source: https://zenodo.org/records/10377492

JSON Format:

{
  "path": "real/HAD_train_real_249.wav",
  "question": "Is this audio authentic or fake?",
  "choice_a": "Real",
  "choice_b": "Fake",
  "answer_gt": "Real",
  "duration_seconds": 297.78
}

4. GTZAN

Task: Music Genre Classification
Description: 10-genre music classification dataset
Genres: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock
⚠️ License: Research Use Only
Source: https://www.kaggle.com/datasets/andradaolteanu/gtzan-dataset-music-genre-classification


5. TAU

Task: Acoustic Scene Classification
Description: Urban sound scene recognition
Scenes: airport, bus, metro, park, public_square, shopping_mall, street_pedestrian, street_traffic, tram
License: CC BY 4.0
Source: https://zenodo.org/records/7870258

Files:

  • acoustic_scene_task_meta.json: Task metadata
  • LICENSE: Original license text
  • concatenated_resampled/: Resampled audio files

6. VESUS

Task: Emotion Recognition from Speech
Description: Actors reading neutral script with emotional inflections
Emotions: neutral, angry, happy, sad, fearful
Actors: 10 (5 male, 5 female)
⚠️ License: Academic Use Only (access by request)
Source: https://engineering.jhu.edu/nsa/vesus/


7. SLUE

Task: Named Entity Recognition (NER) from Speech
Description: Count named entities in audio segments
Entity Types: LAW, NORP, ORG, PLACE, QUANT, WHEN
License: CC BY 4.0 (VoxPopuli-derived)

JSON Format:

{
  "path": "dev/concatenated_audio_with/concatenated_audio_0000.wav",
  "question": "How many named entities appear?",
  "options": ["49 entities", "51 entities", "52 entities", "46 entities"],
  "answer_gt": "D",
  "entity_count": 49
}

8. DESED

Task: Sound Event Detection
Description: Detect domestic sound events
Events: Alarm bell, Blender, Cat, Dishes, Dog, Electric shaver, Frying, Running water, Speech, Vacuum cleaner
License: Mixed CC (Freesound sources: CC0, CC BY 3.0/4.0)
Source: https://github.com/turpaultn/DESED

⚠️ ATTRIBUTION REQUIRED:

  • Audio clips sourced from Freesound.org
  • Each clip has individual CC license
  • Must maintain attribution when redistributing
  • See license_public_eval.tsv for per-file credits
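
Per-clip credits can be gathered from `license_public_eval.tsv` with Python's `csv` module. The column names used below (`filename`, `license`) are assumptions for illustration; check the actual TSV header before relying on them:

```python
import csv
import io

# Illustrative TSV content; the real column names may differ (assumption).
sample_tsv = "filename\tlicense\nclip_001.wav\tCC BY 4.0\nclip_002.wav\tCC0\n"

credits = {
    row["filename"]: row["license"]
    for row in csv.DictReader(io.StringIO(sample_tsv), delimiter="\t")
}
print(credits)  # {'clip_001.wav': 'CC BY 4.0', 'clip_002.wav': 'CC0'}
```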

9. VoxCeleb-Gender

Task: Speaker Gender Identification
Description: Binary classification (male/female)
License: CC BY 4.0
Source: https://www.robots.ox.ac.uk/~vgg/data/voxceleb/

JSON Format:

{
  "path": "concatenated_audio/speaker_001.wav",
  "question": "What is the gender of the speaker?",
  "choice_a": "Male",
  "choice_b": "Female",
  "answer_gt": "A"
}
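
The `answer_gt` letter resolves to a label through the matching `choice_*` field. A minimal sketch, assuming the field layout shown above:

```python
def resolve_answer(rec):
    """Map the letter in answer_gt to the text of the matching choice field."""
    return rec["choice_" + rec["answer_gt"].lower()]

# Example record with the same fields as the JSON format above.
rec = {"choice_a": "Male", "choice_b": "Female", "answer_gt": "A"}
print(resolve_answer(rec))  # Male
```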

10. VoxCeleb-Age

Task: Speaker Age Classification
Description: Multi-class age group classification
Age Groups: 20s, 30s, 40s, 50s, 60s, 70s
License: CC BY 4.0
Source: VoxCeleb + https://github.com/hechmik/voxceleb_enrichment_age_gender

Note: Age/gender labels are derivative annotations on VoxCeleb corpus


🔧 Usage Guidelines

You can load the dataset via Hugging Face datasets:

from datasets import load_dataset

ds = load_dataset("Hezep/AudioMarathon")


Special Requirements

Disclaimer

This benchmark is provided "AS IS" without warranty. Users bear sole responsibility for:

  • License compliance verification
  • Obtaining restricted datasets independently
  • Proper attribution maintenance
  • Determining fitness for specific use cases

📊 Benchmark Statistics

Overview

| Metric | Value |
|---|---|
| Total Tasks | 10 |
| Total Samples | 6567 |
| Total Duration | 392 h |
| Total Size | ~60 GB |
| Languages | English |
| Domains | Speech, Music, Soundscape, Environmental |

Audio Characteristics

| Property | Range | Predominant |
|---|---|---|
| Sampling Rate | 16 kHz - 22.05 kHz | 16 kHz (90%) |
| Duration | 30 s - 5+ min | 2-3 min (avg) |
| Channels | Mono | Mono (100%) |
| Format | FLAC, WAV | WAV (80%) |
| Bit Depth | 16-bit | 16-bit (100%) |

Task Distribution

| Category | # Tasks | # Samples | % of Total |
|---|---|---|---|
| Speech Understanding | 3 | 1514 | 23% |
| Acoustic Analysis | 3 | 1519 | 23% |
| Speaker Characterization | 3 | 2758 | 42% |
| Content Authenticity | 1 | 776 | 12% |
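
As a quick consistency check, the per-category sample counts sum to the benchmark total of 6,567:

```python
# Per-category sample counts from the task distribution table.
counts = {
    "Speech Understanding": 1514,
    "Acoustic Analysis": 1519,
    "Speaker Characterization": 2758,
    "Content Authenticity": 776,
}
total = sum(counts.values())
print(total)  # 6567
```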

πŸ“ Citation

If you use AudioMarathon in your research, please cite:

@article{he2025audiomarathon,
  title={AudioMarathon: A Comprehensive Benchmark for Long-Context Audio Understanding and Efficiency in Audio LLMs},
  author={He, Peize and Wen, Zichen and Wang, Yubo and Wang, Yuxuan and Liu, Xiaoqian and Huang, Jiajie and Lei, Zehui and Gu, Zhuangcheng and Jin, Xiangqi and Yang, Jiabing and Li, Kai and Liu, Zhifei and Li, Weijia and Wang, Cunxiang and He, Conghui and Zhang, Linfeng},
  journal={arXiv preprint arXiv:2510.07293},
  year={2025},
  url={https://arxiv.org/abs/2510.07293}
}

Citing Component Datasets

When using specific tasks, please also cite the original datasets:

  • LibriSpeech: Panayotov et al. (2015)
  • RACE: Lai et al. (2017)
  • HAD: Zenodo record 10377492
  • GTZAN: Tzanetakis & Cook (2002)
  • TAU: DCASE Challenge (Mesaros et al., 2018)
  • VESUS: Sager et al. (2019)
  • SLUE: Shon et al. (2022)
  • DESED: Turpault et al. (2019)
  • VoxCeleb: Nagrani et al. (2017, 2018)

Full BibTeX entries are available in the original datasets' repositories.


πŸ™ Acknowledgments

AudioMarathon builds upon the pioneering work of numerous research teams. We gratefully acknowledge the creators of:

  • LibriSpeech (Panayotov et al.)
  • RACE (Lai et al.)
  • HAD (Zenodo contributors)
  • GTZAN (Tzanetakis & Cook)
  • TAU/DCASE (Mesaros et al.)
  • VESUS (Sager et al., JHU)
  • SLUE (Shon et al.)
  • DESED (Turpault et al. & Freesound community)
  • VoxCeleb (Nagrani et al., Oxford VGG)

Their datasets enable comprehensive audio understanding research.


AudioMarathon
A Comprehensive Long-Form Audio Understanding Benchmark
Version 1.0.0 | October 2025
