The dataset viewer is not available for this split.
Error code: TooBigContentError
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
ποΈ AMI-Refined: High-Fidelity Meeting Summarization Dataset
This repository contains a refined version of the AMI Meeting Corpus, specifically re-engineered for Long-context Abstractive Speech Summarization. Unlike existing fragmented ASR datasets, this version restores the continuous discourse flow and ensures strict alignment between audio and human-annotated summaries.
π οΈ Data Processing & Engineering (How we matched it)
To bridge the gap between fragmented ASR chunks and long-form summarization, we implemented a rigorous preprocessing pipeline:
1. Temporal Discourse Restoration
The original Hugging Face AMI dataset (e.g., edinburghcstr/ami) provides audio in short, shuffled segments. We restored the original meeting structure by:
- Meeting-level Grouping: Grouping 100k+ utterances by their unique
meeting_id. - Time-sequential Sorting: Sorting segments within each meeting based on the exact
begin_timemetadata to reconstruct the chronological conversation flow. - Physical Audio Reconstruction: Concatenating validated audio arrays using
numpyand exporting them as single, high-quality WAV files (16kHz) to prevent any frame decoding errors found in streaming versions.
2. Multi-stage Validation & Cleaning
We ensured 100% data integrity through a strict filtering process:
- Audio Integrity Check: Every audio chunk was pre-decoded to detect and exclude corrupted frames or empty arrays (
RuntimeErrorprevention). - Textual Ground-Truth Alignment: We matched each reassembled audio with Gold-Standard Manual Annotations (XML-based transcripts and abstractive summaries) from the AMI native metadata.
- Scenario-only Selection: We filtered for meetings that have verified human-written summaries (mostly
ESandTSseries), ensuring that the model is trained on professional-grade labels rather than noisy or synthetic ones.
3. Native Hugging Face Integration
The dataset is structured to be compatible with modern deep learning pipelines:
- WAV-JSON Mapping: Audio is stored as physical WAV files and indexed via JSON to ensure persistent paths.
- Hugging Face
datasetsFeature: The finalDatasetDictuses thedatasets.Audiofeature, allowing for automatic resampling and seamless loading withmap()functions.
π Dataset Structure & Usage
Data Fields
meeting_id: Unique identifier for each meeting (e.g.,ES2002a).audio: Audio feature containing the decoded array and sampling rate (16kHz).summary: Human-annotated abstractive summary (Ground Truth).transcript: Complete meeting transcript for context.duration_sec: Total duration of the meeting audio.
How to Load
from datasets import load_dataset
# Load the refined AMI dataset
dataset = load_dataset("eeoonn/ami-refined", use_auth_token=True)
# Audio is ready to use with librosa or transformers
example = dataset['train'][0]
print(example['summary'])
π‘οΈ Reliability for Research (Defense against Reviewers)
When comparing this dataset to others used in recent research (like SQuBa):
- No Synthetic Bias: All labels are 100% human-annotated, avoiding the "synthetic noise" issue in LLM-generated labels.
- Verified Alignment: By sorting by
begin_timeand checking for corrupted frames, we guarantee that the audio signal and the transcript are perfectly synchronized. - Long-form Context: Our re-assembly provides a real-world long-context challenge (average duration: ~30 min), which is far more rigorous than evaluating on short audio clips.
- Downloads last month
- 21