---
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - data/recitation_0/train/*.parquet
          - data/recitation_1/train/*.parquet
          - data/recitation_2/train/*.parquet
          - data/recitation_3/train/*.parquet
          - data/recitation_5/train/*.parquet
          - data/recitation_6/train/*.parquet
          - data/recitation_7/train/*.parquet
      - split: validation
        path:
          - data/recitation_0/validation/*.parquet
          - data/recitation_1/validation/*.parquet
          - data/recitation_2/validation/*.parquet
          - data/recitation_3/validation/*.parquet
          - data/recitation_5/validation/*.parquet
          - data/recitation_6/validation/*.parquet
          - data/recitation_7/validation/*.parquet
      - split: test
        path:
          - data/recitation_8/train/*.parquet
          - data/recitation_8/validation/*.parquet
dataset_info:
  splits:
    - name: train
      num_examples: 54823
    - name: test
      num_examples: 8787
    - name: validation
      num_examples: 7175
  features:
    - dtype: string
      name: aya_name
    - dtype: string
      name: aya_id
    - dtype: string
      name: reciter_name
    - dtype: int32
      name: recitation_id
    - dtype: string
      name: url
    - dtype:
        audio:
          decode: false
          sampling_rate: 16000
      name: audio
    - dtype: float32
      name: duration
    - dtype: float32
      name: speed
    - dtype:
        array2_d:
          dtype: float32
          shape:
            - null
            - 2
      name: speech_intervals
    - dtype: bool
      name: is_interval_complete
    - dtype: bool
      name: is_augmented
    - dtype:
        array2_d:
          dtype: float32
          shape:
            - null
            - 2
      name: input_features
    - dtype:
        array2_d:
          dtype: int32
          shape:
            - null
            - 1
      name: attention_mask
    - dtype:
        array2_d:
          dtype: int32
          shape:
            - null
            - 1
      name: labels
language:
  - ar
license: mit
task_categories:
  - automatic-speech-recognition
tags:
  - quran
  - arabic
  - speech-segmentation
  - audio-segmentation
  - audio
---

Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning

Paper | Project Page | Code

Introduction

This dataset was developed as part of the research presented in the paper "Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning". The work introduces a 98% automated pipeline for producing high-quality Quranic datasets, comprising over 850 hours of audio (~300K annotated utterances). This dataset supports a novel ASR-based approach to pronunciation error detection, built on a custom Quran Phonetic Script (QPS) designed to encode Tajweed rules.

Recitation Segmentations Dataset

This is a modified version of this dataset with the following modifications:

  • Speed augmentation of the recitation utterances using audiomentations on 40% of the dataset; the speed column records the applied speed factor, ranging from 0.8 to 1.5 (see the sketch after this list).
  • Data augmentation with audiomentations on 40% of the dataset to prepare it for training the recitation splitter.
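
As a rough illustration of the speed augmentation described above, here is a minimal sketch using audiomentations. The rate range and the 40% application probability follow the description; everything else (choice of transform, placeholder waveform) is an assumption, not the exact pipeline used to build this dataset.

import numpy as np
from audiomentations import TimeStretch

# Sketch of the speed perturbation described above: rates drawn from
# [0.8, 1.5], applied to roughly 40% of utterances (p=0.4). Illustrative
# only; not the exact augmentation code used to build this dataset.
augment = TimeStretch(
    min_rate=0.8,
    max_rate=1.5,
    leave_length_unchanged=False,  # let the duration actually change
    p=0.4,
)

wave = np.random.randn(16000).astype(np.float32)  # placeholder: 1 s @ 16 kHz
augmented_wave = augment(samples=wave, sample_rate=16000)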

The code for building this dataset is available on GitHub.
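
The dataset can be loaded with the 🤗 Datasets library as usual. A minimal sketch follows; the repository id below is a placeholder, so substitute this dataset's actual Hub id:

from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub id.
ds = load_dataset("obadx/recitation-segmentation", split="train")

example = ds[0]
# Audio is stored with decode=False, so example["audio"] carries the raw
# bytes/path rather than a decoded waveform.
print(example["aya_name"], example["reciter_name"], example["speed"])
print(example["speech_intervals"])  # (N, 2) [start, end] boundaries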

Results

The model trained with this dataset achieved the following results on an unseen test set:

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.9958 |
| F1        | 0.9964 |
| Loss      | 0.0132 |
| Precision | 0.9976 |
| Recall    | 0.9951 |

Sample Usage

Below is a Python example demonstrating how to use the recitations-segmenter library (developed alongside this dataset) to segment Holy Quran recitations.

First, ensure you have the necessary Python packages and ffmpeg/libsndfile installed:

Linux

sudo apt-get update
sudo apt-get install -y ffmpeg libsndfile1 portaudio19-dev

Windows & Mac

You can create an Anaconda environment and then install these two libraries:

conda create -n segment python=3.12
conda activate segment
conda install -c conda-forge ffmpeg libsndfile

Install the library using pip:

pip install recitations-segmenter

Then, you can run the following Python script:

from pathlib import Path

from recitations_segmenter import segment_recitations, read_audio, clean_speech_intervals
from transformers import AutoFeatureExtractor, AutoModelForAudioFrameClassification
import torch

if __name__ == '__main__':
    device = torch.device('cuda')
    dtype = torch.bfloat16

    processor = AutoFeatureExtractor.from_pretrained(
        "obadx/recitation-segmenter-v2")
    model = AutoModelForAudioFrameClassification.from_pretrained(
        "obadx/recitation-segmenter-v2",
    )

    model.to(device, dtype=dtype)

    # Change this to the paths of your Holy Quran recitation files
    file_paths = [
        './assets/dussary_002282.mp3',
        './assets/hussary_053001.mp3',
    ]
    waves = [read_audio(p) for p in file_paths]

    # Extract speech intervals (in samples) at a 16000 Hz sample rate
    sampled_outputs = segment_recitations(
        waves,
        model,
        processor,
        device=device,
        dtype=dtype,
        batch_size=8,
    )

    for out, path in zip(sampled_outputs, file_paths):
        # Clean the speech intervals by:
        # * merging short silence durations
        # * removing short speech durations
        # * adding padding to each speech interval
        # Raises:
        # * NoSpeechIntervals: if the wav is complete silence
        # * TooHighMinSpeechDruation: if `min_speech_duration` is too high,
        #   which results in deleting all speech intervals
        clean_out = clean_speech_intervals(
            out.speech_intervals,
            out.is_complete,
            min_silence_duration_ms=30,
            min_speech_duration_ms=30,
            pad_duration_ms=30,
            return_seconds=True,
        )

        print(f'Speech Intervals of: {Path(path).name}: ')
        print(clean_out.clean_speech_intervals)
        print(f'Is Recitation Complete: {clean_out.is_complete}')
        print('-' * 40)
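
With return_seconds=True, clean_speech_intervals holds [start, end] pairs in seconds. Below is a minimal follow-on sketch of slicing a waveform into the detected utterances; it assumes read_audio returns a 1-D array of 16 kHz samples and that clean_out comes from the loop above.

SAMPLE_RATE = 16000

# Cut the waveform into one segment per detected speech interval.
# Assumes `wave` is the 1-D 16 kHz waveform that produced `clean_out`.
wave = read_audio('./assets/hussary_053001.mp3')
segments = [
    wave[int(start * SAMPLE_RATE):int(end * SAMPLE_RATE)]
    for start, end in clean_out.clean_speech_intervals
]
print(f'Extracted {len(segments)} speech segments')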

License

This dataset is licensed under the MIT License.