---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - data/recitation_0/train/*.parquet
    - data/recitation_1/train/*.parquet
    - data/recitation_2/train/*.parquet
    - data/recitation_3/train/*.parquet
    - data/recitation_5/train/*.parquet
    - data/recitation_6/train/*.parquet
    - data/recitation_7/train/*.parquet
  - split: validation
    path:
    - data/recitation_0/validation/*.parquet
    - data/recitation_1/validation/*.parquet
    - data/recitation_2/validation/*.parquet
    - data/recitation_3/validation/*.parquet
    - data/recitation_5/validation/*.parquet
    - data/recitation_6/validation/*.parquet
    - data/recitation_7/validation/*.parquet
  - split: test
    path:
    - data/recitation_8/train/*.parquet
    - data/recitation_8/validation/*.parquet
dataset_info:
  splits:
  - name: train
    num_examples: 54823
  - name: test
    num_examples: 8787
  - name: validation
    num_examples: 7175
  features:
  - dtype: string
    name: aya_name
  - dtype: string
    name: aya_id
  - dtype: string
    name: reciter_name
  - dtype: int32
    name: recitation_id
  - dtype: string
    name: url
  - dtype:
      audio:
        decode: false
        sampling_rate: 16000
    name: audio
  - dtype: float32
    name: duration
  - dtype: float32
    name: speed
  - dtype:
      array2_d:
        dtype: float32
        shape:
        - null
        - 2
    name: speech_intervals
  - dtype: bool
    name: is_interval_complete
  - dtype: bool
    name: is_augmented
  - dtype:
      array2_d:
        dtype: float32
        shape:
        - null
        - 2
    name: input_features
  - dtype:
      array2_d:
        dtype: int32
        shape:
        - null
        - 1
    name: attention_mask
  - dtype:
      array2_d:
        dtype: int32
        shape:
        - null
        - 1
    name: labels
language:
- ar
license: mit
task_categories:
- automatic-speech-recognition
tags:
- quran
- arabic
- speech-segmentation
- audio-segmentation
- audio
---

# Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning

[Paper](https://huggingface.co/papers/2509.00094) | [Project Page](https://obadx.github.io/prepare-quran-dataset/) | [Code](https://github.com/obadx/recitations-segmenter)

## Introduction
This dataset was developed as part of the research presented in the paper "Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning". The work introduces a 98% automated pipeline for producing high-quality Quranic datasets, comprising over 850 hours of audio (~300K annotated utterances). This dataset supports a novel ASR-based approach to pronunciation error detection, using a custom Quran Phonetic Script (QPS) designed to encode Tajweed rules.

## Recitation Segmentations Dataset

This is a modified version of [this dataset](https://huggingface.co/datasets/obadx/recitation-segmentation) with the following modifications:
* Speed augmentation applied to 40% of the recitation utterances using [audiomentations](https://iver56.github.io/audiomentations/); the `speed` column records the applied speed factor, ranging from 0.8 to 1.5 (see the sketch below).
* Additional data augmentation applied with [audiomentations](https://iver56.github.io/audiomentations/) to 40% of the dataset, to prepare it for training the recitation splitter.

The code for building this dataset is available on [GitHub](https://github.com/obadx/recitations-segmenter).
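
As a rough illustration of the speed augmentation described above, here is a minimal sketch using the `TimeStretch` transform from audiomentations as a stand-in for the speed change. The rate range mirrors the `speed` column (0.8 to 1.5) and the 40% application rate is expressed as `p=0.4`; this is an assumption for illustration, not the exact pipeline used to build the dataset (see the linked GitHub repository for that):

```python
import numpy as np
from audiomentations import TimeStretch

# Sketch only: TimeStretch stands in for the speed augmentation here.
# The rate range matches the `speed` column (0.8x to 1.5x), and the 40%
# application rate from the card is expressed as p=0.4.
augment = TimeStretch(
    min_rate=0.8,
    max_rate=1.5,
    leave_length_unchanged=False,  # let the utterance length change with speed
    p=0.4,
)

# Dummy 1-second waveform at the dataset's 16 kHz sampling rate.
samples = np.random.uniform(low=-0.5, high=0.5, size=16000).astype(np.float32)
augmented = augment(samples=samples, sample_rate=16000)
```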

## Results
The model trained with this dataset achieved the following results on an unseen test set:

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.9958 |
| F1        | 0.9964 |
| Loss      | 0.0132 |
| Precision | 0.9976 |
| Recall    | 0.9951 |
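
Since the model performs audio frame classification, these presumably correspond to frame-level binary metrics (speech vs. silence). A minimal sketch of how such metrics can be computed with scikit-learn; the label arrays here are illustrative, not taken from the actual evaluation:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Illustrative frame labels: 1 = speech, 0 = silence (not the real eval data).
y_true = [1, 1, 0, 0, 1, 1, 1, 0]
y_pred = [1, 1, 0, 1, 1, 1, 1, 0]

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.4f}")
print(f"F1:        {f1_score(y_true, y_pred):.4f}")
print(f"Precision: {precision_score(y_true, y_pred):.4f}")
print(f"Recall:    {recall_score(y_true, y_pred):.4f}")
```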

## Sample Usage

Below is a Python example demonstrating how to use the `recitations-segmenter` library (developed alongside this dataset) to segment Holy Quran recitations.

First, ensure you have the necessary Python packages and `ffmpeg`/`libsndfile` installed:

#### Linux

```bash
sudo apt-get update
sudo apt-get install -y ffmpeg libsndfile1 portaudio19-dev
```

#### Windows & Mac

You can create a `conda` environment and then install these two libraries:

```bash
conda create -n segment python=3.12
conda activate segment
conda install -c conda-forge ffmpeg libsndfile
```

Install the library using pip:
```bash
pip install recitations-segmenter
```

Then, you can run the following Python script:

```python
from pathlib import Path

from recitations_segmenter import segment_recitations, read_audio, clean_speech_intervals
from transformers import AutoFeatureExtractor, AutoModelForAudioFrameClassification
import torch

if __name__ == '__main__':
    # Fall back to CPU when no GPU is available
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    dtype = torch.bfloat16

    processor = AutoFeatureExtractor.from_pretrained(
        "obadx/recitation-segmenter-v2")
    model = AutoModelForAudioFrameClassification.from_pretrained(
        "obadx/recitation-segmenter-v2",
    )

    model.to(device, dtype=dtype)

    # Change these to the paths of your Holy Quran recitation files
    file_paths = [
        './assets/dussary_002282.mp3',
        './assets/hussary_053001.mp3',
    ]
    waves = [read_audio(p) for p in file_paths]

    # Extract speech intervals (in samples, at a 16000 Hz sample rate)
    sampled_outputs = segment_recitations(
        waves,
        model,
        processor,
        device=device,
        dtype=dtype,
        batch_size=8,
    )

    for out, path in zip(sampled_outputs, file_paths):
        # Clean the speech intervals by:
        # * merging short silence durations
        # * removing short speech durations
        # * padding each speech interval
        # Raises:
        # * NoSpeechIntervals: if the input audio is complete silence
        # * TooHighMinSpeechDruation: if `min_speech_duration` is too high,
        #   which results in deleting all speech intervals
        clean_out = clean_speech_intervals(
            out.speech_intervals,
            out.is_complete,
            min_silence_duration_ms=30,
            min_speech_duration_ms=30,
            pad_duration_ms=30,
            return_seconds=True,
        )

        print(f'Speech intervals of {Path(path).name}:')
        print(clean_out.clean_speech_intervals)
        print(f'Is Recitation Complete: {clean_out.is_complete}')
        print('-' * 40)
```
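
Beyond segmentation, the dataset itself can be loaded with the 🤗 `datasets` library. A minimal sketch; the repo id below is an assumption, so replace it with this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with this dataset's actual Hub path.
ds = load_dataset("obadx/recitation-segmentation-augmented", split="train")

row = ds[0]
print(row["aya_name"], row["reciter_name"])
print("speed:", row["speed"], "| augmented:", row["is_augmented"])
# `speech_intervals` is an (N, 2) array of [start, end] pairs.
print(row["speech_intervals"])
```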

## License

This dataset is licensed under the [MIT License](https://mit-license.org/).