---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: cleaned_text
    dtype: string
  - name: speaker_age
    dtype: string
  - name: speaker_gender
    dtype: string
  - name: speaker_dialect
    dtype: string
  - name: input_features
    sequence:
      sequence: float32
  - name: input_length
    dtype: float64
  - name: labels
    sequence: int64
  - name: cleaned_labels
    sequence: int64
  splits:
  - name: validation
    num_bytes: 5862458096.364273
    num_examples: 5024
  download_size: 2002683497
  dataset_size: 5862458096.364273
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- ar
tags:
- WhisperTiny
- WhisperSmall
- WhisperBase
- WhisperMedium
- OpenAI
- ASR
- Arabic
- Preprocessed
size_categories:
- 1K<n<10K
---

# Details
This is the SADA 2022 dataset with precomputed `input_features` (log-mel spectrograms) and `cleaned_labels` (the tokenized version of `cleaned_text`). You can use it directly as the validation set when training Whisper Tiny, Base, Small, and Medium, since all four models share the same tokenizer; please double-check this against the original model repo.
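When batching the precomputed `labels` for training, the variable-length sequences need padding. A common Whisper fine-tuning convention (an assumption here, not something this card specifies) is to pad labels with `-100`, the index ignored by PyTorch's cross-entropy loss. A minimal sketch:

```python
from typing import List

def pad_labels(batch: List[List[int]], pad_id: int = -100) -> List[List[int]]:
    """Right-pad tokenized label sequences to the longest one in the batch.

    -100 is the index ignored by PyTorch's cross-entropy loss, the usual
    choice in Whisper fine-tuning recipes (assumption: your training loop
    follows that convention).
    """
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in batch]

# Dummy token ids; after padding, every row has the same length.
padded = pad_labels([[50258, 7, 8], [50258, 9]])
```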

In addition, the following filters were applied to the data:
- All audio clips are shorter than 30 seconds and longer than 0 seconds.
- All `cleaned_text` values tokenize to fewer than 448 tokens and more than 0 tokens.
- All rows where `cleaned_text` was 'nan', empty, or whitespace-only were dropped.
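The filters above can be sketched as a single row predicate. The field names match this dataset's schema; the thresholds are taken from the list above, but the exact upstream implementation is an assumption:

```python
import math

MAX_SECONDS = 30.0   # audio must be strictly shorter than this
MAX_TOKENS = 448     # label sequence must be strictly shorter than this

def keep_row(row: dict) -> bool:
    """Return True if a row passes the three filters described above."""
    # 1. Duration in (0, 30) seconds, derived from the raw waveform.
    audio = row["audio"]
    duration = len(audio["array"]) / audio["sampling_rate"]
    if not (0.0 < duration < MAX_SECONDS):
        return False
    # 2. cleaned_text must not be NaN, the literal string "nan",
    #    empty, or whitespace-only.
    text = row["cleaned_text"]
    if text is None or (isinstance(text, float) and math.isnan(text)):
        return False
    if not str(text).strip() or str(text).strip().lower() == "nan":
        return False
    # 3. Tokenized length in (0, 448).
    return 0 < len(row["cleaned_labels"]) < MAX_TOKENS
```

With 🤗 Datasets this could be applied as `ds.filter(keep_row)` if you need to re-derive the subset yourself.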