---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: Room
      dtype: string
    - name: Room Description
      dtype: string
    - name: Room Volume [m³]
      dtype: string
    - name: Direct Path Length [m]
      dtype: string
    - name: Source Label
      dtype: string
    - name: Source Position
      dtype: string
    - name: Receiver Label
      dtype: string
    - name: Receiver Position
      dtype: string
    - name: Frequencies
      dtype: string
    - name: EDT
      dtype: string
    - name: T30
      dtype: string
    - name: C50
      dtype: string
    - name: Average Absorption (Octave Band)
      dtype: string
    - name: Avg EDT
      dtype: string
    - name: Avg T30
      dtype: string
    - name: Avg C50
      dtype: string
    - name: Avg Absorption (Single Value)
      dtype: string
    - name: Receiver Type
      dtype: string
  splits:
    - name: rir_mono
      num_bytes: 691813092
      num_examples: 3085
    - name: rir_6ch
      num_bytes: 4143547712
      num_examples: 3085
    - name: rir_hoa8
      num_bytes: 55913856052
      num_examples: 3085
  download_size: 61330385155
  dataset_size: 60749216856
configs:
  - config_name: default
    data_files:
      - split: rir_mono
        path: data/rir_mono-*
      - split: rir_6ch
        path: data/rir_6ch-*
      - split: rir_hoa8
        path: data/rir_hoa8-*
license: cc-by-nc-sa-4.0
pretty_name: Treble10-RIR
language:
  - en
tags:
  - audio
  - acoustics
size_categories:
  - 1K<n<10K
---

Dataset Description

Treble10-RIR (32 kHz)


The Treble10-RIR dataset is a room-acoustics dataset for automatic speech recognition (ASR) and related tasks, containing high-fidelity room-acoustic simulations of 10 different furnished rooms: 2 bathrooms, 2 bedrooms, 2 living rooms with hallway, 2 living rooms without hallway, and 2 meeting rooms. The room volumes range between 14 and 46 m³, resulting in reverberation times between 0.17 and 0.84 s.

Illustrative plots of the rooms and the device can be found in the repository, outside of the dataset files.

This datacard provides examples of how to work with the data, explains how the data was generated, and describes the extensive metadata included in the dataset.

Example: Convolve speech with a Treble10 RIR

from datasets import load_dataset
from scipy.signal import fftconvolve, resample_poly, spectrogram
from scipy.io.wavfile import write
import matplotlib.pyplot as plt
import numpy as np

sr = 16000

# 1. Load one LibriSpeech sample
speech_ds = load_dataset("openslr/librispeech_asr", "clean", split="test[:1]")
speech = speech_ds[0]["audio"]["array"]          

# 2. Load one Treble10 RIR
rir_ds = load_dataset("treble-technologies/Treble10-RIR", split="rir_mono", streaming=True)
rir_rec = next(iter(rir_ds))
rir = rir_rec["audio"]["array"]
rir_sr = rir_rec["audio"]["sampling_rate"]

# 3. Downsample RIR
if rir_sr != sr:
    rir = resample_poly(rir, sr, rir_sr)

# 4. Convolve and normalize
rev = fftconvolve(speech, rir, mode="full")
rev /= np.max(np.abs(rev)) + 1e-12

# 5. Plot spectrogram
f, t, Sxx = spectrogram(rev, fs=sr, nperseg=512, noverlap=256)
plt.pcolormesh(t, f, 10*np.log10(Sxx+1e-12), shading="auto")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Spectrogram of Reverberated Speech")
plt.tight_layout()
plt.show()

# 6. Save
write("audio_reverb.wav", sr, (rev * 32767).astype(np.int16))
print("✅ Saved: audio_reverb.wav")

Example: Read a batch of mono RIRs from Treble10 into a PyTorch dataloader

# Load a batch of Treble10 RIRs with PyTorch
import torch
from datasets import load_dataset, Audio
from torch.utils.data import DataLoader

# Load the dataset in streaming mode
rir_ds = load_dataset("treble-technologies/Treble10-RIR", split="rir_mono", streaming=True)
rir_ds = rir_ds.cast_column("audio", Audio())


def collate_fn(batch):
    """Convert the RIRs to torch tensors and pad them to the same length."""
    arrays = [torch.tensor(ex["audio"]["array"]) for ex in batch]

    # Pad to the longest RIR in the batch
    max_len = max(rir.shape[0] for rir in arrays)
    padded = torch.stack([torch.nn.functional.pad(rir, (0, max_len - rir.shape[0])) for rir in arrays])
    sampling_rate = batch[0]["audio"]["sampling_rate"]
    return {"rirs": padded, "sampling_rate": sampling_rate}


# Set up a torch dataloader
rir_loader = DataLoader(rir_ds, batch_size=4, collate_fn=collate_fn)

# Fetch one batch
batch = next(iter(rir_loader))
rirs = batch["rirs"]  # Tensor (batch size, number time samples)
sr = batch["sampling_rate"]
print(f"Batch shape: {rirs.shape}, Sample rate: {sr}")

Example: Read a 6 channel device RIR from Treble10 and compare two of the microphone signals

from datasets import load_dataset, Audio
import matplotlib.pyplot as plt
import numpy as np

ds = load_dataset(
    "treble-technologies/Treble10-RIR",
    split="rir_6ch",
    streaming=True,
)
ds = ds.cast_column("audio", Audio())

# Read the samples from the TorchCodec decoder object:
rec = next(iter(ds))
samples = rec["audio"].get_all_samples()
rir_6ch = samples.data
sr = samples.sample_rate
print(f"6 channel RIR has this shape: {rir_6ch.shape}, and a sampling rate of {sr} Hz.")

# We can access and compare individual channels from the 6ch device like this
rir0 = rir_6ch[0]  # mic 0
rir1 = rir_6ch[4]  # mic 4
t_axis = np.arange(rir0.shape[0]) / sr
plt.figure()
plt.plot(t_axis, rir0.numpy(), label="Microphone 0")
plt.plot(t_axis, rir1.numpy(), label="Microphone 4")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.legend()
plt.show()

Example: Read a HOA8 RIR from Treble10

from datasets import load_dataset, Audio
import io, soundfile as sf

# Load dataset in streaming mode
ds = load_dataset("treble-technologies/Treble10-RIR", split="rir_hoa8", streaming=True)

# Disable automatic decoding (we'll do it manually)
ds = ds.cast_column("audio", Audio(decode=False))

# Get one sample from the iterator
sample = next(iter(ds))

# Fetch raw audio bytes
audio_bytes = sample["audio"]["bytes"]

# Some older datasets versions may not populate "bytes"; fall back to reading the file
if audio_bytes is None:
    # The "path" entry is a plain string, so open it directly
    with open(sample["audio"]["path"], "rb") as f:
        audio_bytes = f.read()

# Decode the HOA audio directly from memory
rir_hoa, sr = sf.read(io.BytesIO(audio_bytes))
print(f"Loaded HOA RIR: shape={rir_hoa.shape}, sr={sr}")

Dataset Details

The dataset contains three subsets:

  • Treble10-RIR-mono: This subset contains mono room impulse responses (RIRs). In each room, RIRs are available between 5 sound sources and several receivers. The receivers are placed along horizontal receiver grids with 0.5 m resolution at three heights (0.5 m, 1.0 m, 1.5 m). The validity of all source and receiver positions is checked to ensure that none of them intersects with the room geometry or furniture.
  • Treble10-RIR-hoa8: This subset contains 8th-order Ambisonics RIRs. The sound sources and receivers are identical to the RIR-mono subset.
  • Treble10-RIR-6ch: For this subset, a 6-channel cylindrical device is placed at the receiver positions from the RIR-mono subset. RIRs are then acquired between the 5 sound sources from above and each of the 6 device microphones. In other words, there is a 6-channel DeviceRIR for each source-receiver combination of the RIR-mono subset. The microphone coordinates are part of the metadata for the 6ch split.
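
As a quick sanity check on these formats, the sketch below streams the first record of each split and prints its receiver type. This is a minimal sketch; it relies only on the split names and the Receiver Type and Room metadata columns documented in this card:

from datasets import load_dataset, Audio

# Stream one record per split and print the receiver configuration
for split in ["rir_mono", "rir_6ch", "rir_hoa8"]:
    ds = load_dataset("treble-technologies/Treble10-RIR", split=split, streaming=True)
    ds = ds.cast_column("audio", Audio(decode=False))  # skip audio decoding; we only need metadata here
    rec = next(iter(ds))
    print(f"{split}: Receiver Type = {rec['Receiver Type']}, Room = {rec['Room']}")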

All RIRs (mono/HOA/device) were simulated with the Treble SDK, and more details on the tool can be found in the dedicated section below. We use a hybrid simulation paradigm that combines a numerical wave-based solver (discontinuous Galerkin finite element method, DG-FEM) at low to midrange frequencies with geometrical acoustics (GA) simulations at high frequencies. For the Treble10-RIR dataset, the transition frequency between the wave-based and the GA simulation is set at 5 kHz. The resulting hybrid RIRs are broadband signals with a 32 kHz sampling rate, thus covering the entire frequency range of the signal and containing audio content up to 16 kHz.

A small subset of simulations from the same rooms has previously been released as part of the Generative Data Augmentation (GenDA) challenge at ICASSP 2025. The Treble10-RIR dataset differs from the GenDA dataset in three fundamental aspects:

  1. The Treble10-RIR dataset contains broadband RIRs from a hybrid simulation paradigm (wave-based below 5 kHz, GA above 5 kHz), covering the entire frequency range of a 32 kHz signal. In contrast to the GenDA subset, which only contained the wave-based portion, the Treble10-RIR dataset therefore more than doubles the usable frequency range.
  2. The full Treble10 release consists of 6 subsets in total. Three of those subsets contain the RIRs in this dataset (mono, 8th-order Ambisonics, 6-channel device), while the other three contain pre-convolved scenes in identical channel formats. The GenDA subset was limited to mono and 8th-order Ambisonics RIRs, and no pre-convolved scenes were provided.
  3. With Treble10-RIR, we publish the entire dataset, containing approximately 3100 source-receiver configurations. The GenDA subset only contained a small fraction of approximately 60 randomly selected source-receiver configurations.

Uses

Use cases such as far-field automatic speech recognition (ASR), speech enhancement, dereverberation, and source separation benefit greatly from the Treble10-RIR dataset. To illustrate this, consider the contrast between near-field and far-field ASR. In near-field setups, such as smartphones or headsets, the microphone is close to the speaker, capturing a clean signal dominated by the direct sound. In far-field scenarios, as in smart speakers or conference-room devices, the microphone is several meters away, and the recorded signal becomes a complex blend of direct sound, reverberation, and background noise. This difference is not merely spatial but physical: in far-field conditions, sound waves reflect off walls, diffract around objects, and decay over time, all of which are captured by the room impulse response (RIR). To achieve robust performance in such environments, ASR and related models must be trained on datasets that accurately represent these intricate acoustic interactions, which is precisely what Treble10-RIR provides. Similarly, the performance of such systems can only be determined reliably by evaluating them on data that models sound propagation in complex environments with sufficient accuracy.

Dataset Structure

Each subset of Treble10-RIR corresponds to a different channel configuration of the simulated room impulse responses (RIRs). All subsets share the same metadata schema and organization.

| Split | Description | Channels |
| --- | --- | --- |
| rir_mono | Single-channel mono RIRs | 1 |
| rir_hoa8 | 8th-order Ambisonics RIRs (ACN/SN3D format) | 81 |
| rir_6ch | Six-channel home audio device layout | 6 |

File Contents

Each .parquet file contains the metadata for one subset (split) of the dataset.
As this set of RIRs may be used for a variety of potential audio machine-learning tasks, we leave the actual segmentation of the data to the users. The metadata links each impulse response to its corresponding audio file and includes detailed acoustic parameters.

| Column | Description |
| --- | --- |
| audio | Reference to the RIR audio file. |
| Filename | Filename and relative path of the WAV file. |
| Room | Short room nickname (e.g., Room1, Room5). |
| Room Description | Descriptive room type (e.g., meeting_room, living_room). |
| Room Volume [m³] | Volume of the room in cubic meters. |
| Direct Path Length [m] | Distance between source and receiver. |
| Source Label / Position | Label and 3D coordinates of the source. |
| Receiver Label / Position | Label and 3D coordinates of the receiver. |
| Receiver Type | Receiver configuration (mono, 8th order, or 6-channel). |
| Frequencies, EDT, T30, C50, Average Absorption | Octave-band acoustic parameters. |
| Avg EDT, Avg T30, Avg C50, Avg Absorption | Broadband summary values. |
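
Note that the acoustic-parameter columns are stored as strings (see the dataset_info features above), so they must be parsed before numerical use. Below is a minimal sketch that assumes the octave-band values are string-encoded lists of numbers; if the actual formatting differs, the parser needs to be adapted accordingly:

import ast
from datasets import load_dataset

ds = load_dataset("treble-technologies/Treble10-RIR", split="rir_mono", streaming=True)
rec = next(iter(ds))

def parse_numbers(s):
    """Parse a string-encoded list of numbers, falling back to simple splitting."""
    try:
        return [float(x) for x in ast.literal_eval(s)]
    except (ValueError, SyntaxError):
        return [float(x) for x in s.replace(",", " ").strip("[]").split()]

freqs = parse_numbers(rec["Frequencies"])
t30_bands = parse_numbers(rec["T30"])
for f, t in zip(freqs, t30_bands):
    print(f"{f:>6.0f} Hz: T30 = {t:.2f} s")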

Acoustic Parameters

Each RIR is accompanied by a few relevant acoustic parameters describing the sound field as sampled by the specific source/receiver pair.

T30: Reverberation Time

T30 is a measure of how long a sound takes to fade away in a room after the sound source stops emitting. It is a key measure of how reverberant a space is. Specifically, it is the time needed for the sound energy to drop by 60 decibels, estimated from the first 30 dB of the decay.

A short T30 corresponds to a "dry"-sounding room, like a small office or recording booth (ideally under 0.2 s). A long T30 corresponds to a room that sounds "wet", such as a concert hall or parking garage (1.0 s or more).
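
For intuition, T30 can be estimated from an RIR via Schroeder backward integration: integrate the squared RIR from the end, fit a line to the -5 to -35 dB portion of the resulting decay curve, and extrapolate to -60 dB. The sketch below is a minimal broadband illustration and will not exactly reproduce the per-octave-band values in the metadata:

import numpy as np

def schroeder_decay_db(rir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0] + 1e-30)

def estimate_t30(rir, sr):
    """Fit the -5 to -35 dB decay range and extrapolate the slope to -60 dB."""
    decay = schroeder_decay_db(np.asarray(rir, dtype=float))
    t = np.arange(len(decay)) / sr
    mask = (decay <= -5) & (decay >= -35)
    slope, _ = np.polyfit(t[mask], decay[mask], 1)  # decay rate in dB/s (negative)
    return -60.0 / slope

# Synthetic exponentially decaying noise as a stand-in for a real RIR
sr = 32000
t = np.arange(sr) / sr
fake_rir = np.random.randn(sr) * np.exp(-t / 0.1)
print(f"T30 ≈ {estimate_t30(fake_rir, sr):.2f} s")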

EDT: Early Decay Time

Early Decay Time is another measure of reverberation, but is calculated from the first 10 dB of energy decay. EDT is highly correlated with the psychoacoustic perception of reverberation, and can also provide information about the uniformity of the acoustic field within a space.

If EDT is approximately equal to T30, the reverberation is approximately a single-slope decay. If EDT is much shorter than T30, this indicates a double-slope energy decay, which may form when two rooms are acoustically coupled.
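
EDT can be computed from the same Schroeder curve as T30, using the 0 to -10 dB range instead and scaling that slope to 60 dB. A minimal self-contained sketch:

import numpy as np

def estimate_edt(rir, sr):
    """Early Decay Time: slope of the first 10 dB of the Schroeder curve, scaled to 60 dB."""
    rir = np.asarray(rir, dtype=float)
    energy = np.cumsum(rir[::-1] ** 2)[::-1]          # Schroeder backward integration
    decay = 10 * np.log10(energy / energy[0] + 1e-30)
    t = np.arange(len(decay)) / sr
    mask = decay >= -10                               # 0 to -10 dB range
    slope, _ = np.polyfit(t[mask], decay[mask], 1)    # dB per second
    return -60.0 / slope

Comparing estimate_edt and estimate_t30 (from the previous sketch) on the same RIR is a quick way to flag double-slope decays.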

C50: Clarity Index (Speech)

C50 is the energy ratio between the early-arriving sound (the first 50 milliseconds) and the late-arriving sound (from 50 milliseconds to the end of the RIR). C50 is typically used as a measure of the potential speech intelligibility and clarity of a room, as it quantifies how much the early sound is obscured by the room's reverberation.

High C50 values (above 0 dB) are typically considered ideal for clear and intelligible speech; low C50 values (below 0 dB) typically indicate poor speech clarity.
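
C50 follows directly from this definition. A minimal sketch, assuming a mono RIR array whose direct sound arrives at (or near) sample 0:

import numpy as np

def c50(rir, sr):
    """Clarity index: early (0-50 ms) over late (>50 ms) energy, in dB."""
    rir = np.asarray(rir, dtype=float)
    k = int(0.05 * sr)                      # sample index of the 50 ms boundary
    early = np.sum(rir[:k] ** 2)
    late = np.sum(rir[k:] ** 2)
    return 10 * np.log10(early / (late + 1e-30))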

More Information

More information on the dataset can be found in the corresponding blog post.

Licensing Information

The Treble10-RIR dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

Citation Information

@misc{treble10rir2025,
  author    = {Mullins, Sarabeth S. and
               Goetz, Georg and
               Bezzam, Eric and
               Zheng, Steven and
               Nielsen, Daniel Gert},
  year      = {2025},
  title     = {Treble10-RIR: A high fidelity spatial and multichannel room impulse response dataset},
  url       = {https://huggingface.co/datasets/treble-technologies/Treble10-Speech},
  doi       = {10.57967/hf/6687},
  publisher = {Hugging Face}
}