|
--- |
|
license: cc-by-2.0 |
|
task_categories: |
|
- automatic-speech-recognition |
|
- text-to-speech |
|
language: |
|
- cs |
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
|
|
# ParCzech4Speech (Sentence-Segmented Variant) |
|
|
|
## Dataset Summary |
|
|
|
**ParCzech4Speech (Sentence-Segmented Variant)** is a large-scale Czech speech dataset based on parliamentary recordings and official transcripts. |
|
This sentence-segmented variant is designed for speech recognition and synthesis tasks, offering clean audio-text alignment and reliable segment boundaries. |
|
|
|
It is derived from the [**ParCzech 4.0**](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5360) corpus and [**AudioPSP 24.01**](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5404) audio collection. |
|
Automatic alignment with WhisperX and Wav2Vec 2.0 yields high-quality segments, and rich metadata supports filtering and quality control.
|
|
|
The dataset is released under the permissive **CC-BY 2.0** license, allowing both commercial and academic use with attribution.
|
|
|
## 📢 Note
|
|
|
A **larger unsegmented variant** of this dataset is now available! The unsegmented version provides **longer, continuous speech segments** that do **not follow sentence boundaries**, making it especially suitable for streaming ASR. You can find it under [ParCzech4Speech (Unsegmented Variant)](https://huggingface.co/datasets/ufal/parczech4speech-unsegmented) on Hugging Face.
|
|
|
|
|
## Data Splits |
|
|
|
| Split | Segments | Hours | Speakers | |
|
|-------|----------|-------|----------| |
|
| Train | 682,254 | 1,131 | 525 |
|
| Dev | 5,094 | 10.14 | 29 | |
|
| Test | 11,379 | 20.63 | 30 | |
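For orientation, the split statistics above imply segments of roughly 6–7 seconds on average; a quick back-of-the-envelope check:

```python
# Average segment duration per split, derived from the table above.
splits = {
    "train": (682_254, 1131.0),  # (segments, hours)
    "dev": (5_094, 10.14),
    "test": (11_379, 20.63),
}

for name, (segments, hours) in splits.items():
    avg_sec = hours * 3600 / segments
    print(f"{name}: {avg_sec:.2f} s per segment on average")
```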
|
|
|
## Dataset Structure |
|
|
|
Each row corresponds to a sentence-level audio segment with accompanying metadata: |
|
|
|
| Column | Description | |
|
|--------|-------------| |
|
| `true_text` | Official transcript from parliamentary stenographic records (unnormalized). | |
|
| `rec_text` | Transcript automatically recognized by the Whisper model. |
|
| `speaker` | Speaker identifier in the format `NameSurname.YearOfBirth`. | |
|
| `dur` | Duration of the segment in seconds. | |
|
| `vert` | Vertical file name from ParCzech 4.0 for backward compatibility. | |
|
| `n_numbers` | Number of number tokens detected in `true_text`. | |
|
| `n_true_words` | Number of true words in the segment. | |
|
| `seg_edit_dist` | Levenshtein distance between `true_text` and `rec_text`. | |
|
| `align_edit_dist_max` | Maximum word-level edit distance between aligned word pairs. | |
|
| `true_char_avg_dur` | Average duration per character in `true_text` (ignoring whitespace). | |
|
| `start_token_id` | Start token index (from vertical data) indicating the original source of the segment. | |
|
| `end_token_id` | End token index (from vertical data). | |
|
| `wav2vec_rec` | Transcript from the Wav2Vec 2.0 model (greedy decoding), used as a secondary ASR reference. |
|
| `wav2vec_rec_edit_dist` | Normalized edit distance between `wav2vec_rec` and `rec_text`. | |
|
| `speaker_text_cnt` | Frequency count of the given speaker-text pair; useful for deduplication. |
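The metadata columns above are intended for filtering and quality control. A minimal sketch of a row-level quality filter (the helper name `keep_segment` and all threshold values are illustrative assumptions, not recommendations from the dataset authors):

```python
def keep_segment(row: dict,
                 max_seg_edit_dist: int = 5,
                 max_wav2vec_dist: float = 0.1,
                 max_dup: int = 1) -> bool:
    """Illustrative quality filter over the metadata columns.

    All thresholds here are hypothetical examples, not official values.
    """
    return (
        # raw Levenshtein distance between true_text and rec_text
        row["seg_edit_dist"] <= max_seg_edit_dist
        # normalized distance between the two ASR systems' outputs
        and row["wav2vec_rec_edit_dist"] <= max_wav2vec_dist
        # drop segments whose speaker-text pair repeats (deduplication)
        and row["speaker_text_cnt"] <= max_dup
    )

clean = {"seg_edit_dist": 1, "wav2vec_rec_edit_dist": 0.02, "speaker_text_cnt": 1}
noisy = {"seg_edit_dist": 40, "wav2vec_rec_edit_dist": 0.35, "speaker_text_cnt": 3}
print(keep_segment(clean), keep_segment(noisy))  # True False
```

Once the dataset is loaded with 🤗 Datasets, such a predicate could be applied via `dataset.filter(keep_segment)`.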
|
|
|
|
|
## Citation |
|
|
|
Please cite the dataset as follows: |
|
|
|
```bibtex |
|
TODO |
|
``` |