ssm-treble committed
Commit 1ba6e94 · verified · Parent: 6b8bf7b

Update README.md

Files changed (1): README.md (+13 −2)
README.md CHANGED
@@ -70,10 +70,21 @@ task_categories:
 pretty_name: Treble10-Speech
 size_categories:
 - 1K<n<10K
+tags:
+- audio
+- speech
+- acoustics
+source_datasets:
+- treble-technologies/Treble10-RIR
+- openslr/librispeech_asr
 ---

 # **Treble10-Speech (16 kHz)**

+## Dataset Description
+- **Paper:** Coming soon
+- **Point of contact:** contact@treble.tech
+
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1A8CDbaY9q2ezr0LhpySW2Ygc7rwgObgL?usp=sharing)


@@ -173,8 +184,8 @@ The dataset contains three subsets:

 All RIRs (mono/HOA/device) that were used to generate reverberant speech for this dataset were simulated with the Treble SDK. We use a hybrid simulation paradigm that combines a numerical wave-based solver (discontinuous Galerkin finite element method, DG-FEM) at low to midrange frequencies with geometrical acoustics (GA) simulations at high frequencies. For this dataset, the transition frequency between the wave-based and the GA simulation is set at 5 kHz. The resulting hybrid RIRs are broadband signals with a 32 kHz sampling rate, thus covering the entire frequency range of the signal and containing audio content up to 16 kHz.

-All dry speech files that were used to generate reverberant speech files through convolution with the above RIRs were taken from the [LibriSpeech corpus](https://www.openslr.org/12).
-
+All dry speech files that were used to generate reverberant speech files through convolution with the above RIRs were taken from the _test_ splits of the [LibriSpeech corpus](https://www.openslr.org/12).
+As the dry speech files were sampled at 16 kHz, the RIRs were downsampled while generating the Treble10-Speech set. You can create your own 32 kHz speech samples by downloading the [Treble10-RIR](https://huggingface.co/datasets/treble-technologies/Treble10-RIR) dataset and convolving its RIRs with audio signals of your choice.
 ## Uses

 Use cases such as far-field automatic speech recognition (ASR), speech enhancement, dereverberation, and source separation benefit greatly from the **Treble10-Speech** dataset. To illustrate this, consider the contrast between near-field and far-field ASR. In near-field setups, such as smartphones or headsets, the microphone is close to the speaker, capturing a clean signal dominated by the direct sound. In far-field scenarios, as in smart speakers or conference-room devices, the microphone is several meters away, and the recorded signal becomes a complex blend of direct sound, reverberation, and background noise. This difference is not merely spatial but physical: in far-field conditions, sound waves reflect off walls, diffract around objects, and decay over time, all of which are captured by the RIR. To achieve robust performance in such environments, ASR and related models must be trained on datasets that accurately represent these intricate acoustic interactions—precisely what **Treble10-Speech** provides. Similarly, the performance of such systems can only be reliably determined when evaluating them on data that is accurate enough to model sound propagation in complex environments.
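
The new README text above suggests building your own 32 kHz reverberant samples by convolving Treble10-RIR impulse responses with audio of your choice. Below is a minimal sketch of that recipe; the split names and the `audio` column are assumptions, not confirmed by this card, so check each dataset card for the actual layout.

```python
# Minimal sketch: build reverberant speech from a Treble10 RIR and dry LibriSpeech audio.
# Hypothetical details (verify against the dataset cards): the "train"/"test" split
# names and the "audio" Audio column; newer `datasets` releases may decode differently.
from math import gcd

import numpy as np
from datasets import load_dataset
from scipy.signal import fftconvolve, resample_poly

# Stream so the full corpora are not downloaded up front.
rirs = load_dataset("treble-technologies/Treble10-RIR", split="train", streaming=True)
speech = load_dataset("openslr/librispeech_asr", "clean", split="test", streaming=True)

rir_audio = next(iter(rirs))["audio"]    # assumed Audio column name
dry_audio = next(iter(speech))["audio"]

rir, fs_rir = np.asarray(rir_audio["array"]), rir_audio["sampling_rate"]  # 32 kHz RIR
dry, fs_dry = np.asarray(dry_audio["array"]), dry_audio["sampling_rate"]  # 16 kHz speech

# Match the RIR's rate to the speech; 32 kHz -> 16 kHz is an exact 2:1 ratio,
# so polyphase resampling over the reduced fraction is both cheap and exact.
if fs_rir != fs_dry:
    g = gcd(fs_dry, fs_rir)
    rir = resample_poly(rir, up=fs_dry // g, down=fs_rir // g)

# Reverberant speech = dry speech convolved with the RIR; mode="full" keeps the
# decay tail the RIR adds beyond the dry signal's length.
wet = fftconvolve(dry, rir, mode="full")
wet = wet / (np.max(np.abs(wet)) + 1e-12)  # peak-normalize to avoid clipping on export
```

To generate 32 kHz output instead, skip the resampling step and upsample the dry speech to 32 kHz before convolving, as the card's note implies.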