---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: text
      dtype: string
    - name: cleaned_text
      dtype: string
    - name: speaker_age
      dtype: string
    - name: speaker_gender
      dtype: string
    - name: speaker_dialect
      dtype: string
    - name: input_features
      sequence:
        sequence: float32
    - name: input_length
      dtype: float64
    - name: labels
      sequence: int64
    - name: cleaned_labels
      sequence: int64
  splits:
    - name: validation
      num_bytes: 5862458096.364273
      num_examples: 5024
  download_size: 2002683497
  dataset_size: 5862458096.364273
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
license: apache-2.0
task_categories:
  - automatic-speech-recognition
language:
  - ar
tags:
  - WhisperTiny
  - WhisperSmall
  - WhisperBase
  - WhisperMedium
  - OpenAI
  - ASR
  - Arabic
  - Preprocessed
size_categories:
  - 1K<n<10K
---

## Details

This is the SADA 2022 dataset with `input_features` (log-mel spectrograms) and `cleaned_labels` (the tokenized version of `cleaned_text`) already computed. You can use it directly as the validation dataset when training the Whisper Tiny, Small, Base, and Medium models, since they all use the same tokenizer; please double-check this against the original model repos.
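
For example, the validation split can be loaded and reduced to the two precomputed columns that a typical `Seq2SeqTrainer` setup for Whisper expects. This is a minimal sketch: the repository ID is a placeholder, and the column handling is one reasonable way to wire things up, not a prescribed recipe.

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute this dataset's actual path on the Hub.
eval_ds = load_dataset("<namespace>/<this-dataset>", split="validation")

# Keep only the precomputed log-mel features and the tokenized cleaned text,
# exposing the latter under the "labels" name expected by Seq2SeqTrainer.
eval_ds = eval_ds.remove_columns(
    [c for c in eval_ds.column_names if c not in ("input_features", "cleaned_labels")]
)
eval_ds = eval_ds.rename_column("cleaned_labels", "labels")

# eval_ds can now be passed as eval_dataset when fine-tuning
# openai/whisper-tiny, -base, -small, or -medium.
```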

In addition, the following filters were applied to the data (a rough reconstruction of the filtering is sketched after the list):

- All audio clips are shorter than 30 seconds and longer than 0 seconds.
- All `cleaned_text` entries tokenize to fewer than 448 tokens and more than 0 tokens.
- All rows whose `cleaned_text` was 'nan', empty, or whitespace-only were dropped.
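
The sketch below reconstructs those filters under two assumptions: the stock Whisper tokenizer (any of the Tiny/Base/Small/Medium checkpoints gives the same result) and durations computed from the decoded audio arrays. The exact preprocessing script is not included here, so treat this as illustrative only.

```python
from transformers import WhisperTokenizer

# Assumption: all four Whisper sizes share this tokenizer.
tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-tiny", language="arabic", task="transcribe"
)

MAX_DURATION_S = 30.0
MAX_LABEL_LENGTH = 448


def keep_example(example):
    # Drop 'nan', empty, or whitespace-only transcripts.
    text = example["cleaned_text"]
    if text is None or str(text).strip() in ("", "nan"):
        return False

    # Keep audio strictly between 0 and 30 seconds.
    audio = example["audio"]
    duration = len(audio["array"]) / audio["sampling_rate"]
    if not 0.0 < duration < MAX_DURATION_S:
        return False

    # Keep transcripts whose token length is strictly between 0 and 448.
    n_tokens = len(tokenizer(text).input_ids)
    return 0 < n_tokens < MAX_LABEL_LENGTH


# filtered = raw_dataset.filter(keep_example)  # raw_dataset: the unfiltered SADA 2022 split
```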