---
license: cc0-1.0
tags:
- automatic-speech-recognition
- audio
- burmese
- myanmar
- civic-education
- teacher-voices
- public-domain
- webdataset
language:
- my
pretty_name: NUG Myanmar ASR
task_categories:
- automatic-speech-recognition
- audio-to-audio
- audio-classification
---
# 366 Hours NUG Myanmar ASR Dataset
The **NUG Myanmar ASR Dataset** is the first large-scale open Burmese speech dataset, now expanded to **521,476 audio-text pairs** totaling **~366 hours** of clean, segmented audio. All data was collected from public-service educational broadcasts by the **National Unity Government (NUG)** of Myanmar and the **FOEIM Academy**.
This dataset is released under a **CC0 1.0 Universal license** — fully open and public domain. No attribution required.
## 🕊️ Background & Acknowledgment
This dataset was curated from public-service educational videos produced by the **National Unity Government (NUG)** and disseminated via platforms like the **FOEIM Academy** YouTube Channel. These resources were broadcast under conditions of censorship, surveillance, and revolution — to ensure the continuity of education for Myanmar’s students.
> The teachers behind these voices are not just narrators — they are frontline educators, civil servants in resistance, and voices of an entire generation refusing silence.
This corpus preserves their courage.
## 🕊️ A Moment in History
This is more than data — it’s a declaration.
> The **National Unity Government (NUG)** of Myanmar may not yet be able to **liberate its people** from the grip of a **low-life military junta**,
> but it **has already liberated the Burmese language** from being called a “low-resource” language in the global AI landscape.
🔓 For the first time in history, **521,476** Burmese audio-text segments have been openly released, with **no copyright**, **no paywall**, and **no restrictions**.
This dataset is a **milestone** — not only for Myanmar, but for every effort to make language technology **equitable**, **inclusive**, and **resistant to erasure**.
> **Free the voice. Free the language. Free the people.**
## 📖 Data Preview
Here is a representative sample from the dataset:
```json
{
"duration": 16.849,
"transcript": "ပညာ လေ့လာ ဆည်းပူးနေကြတဲ့ ကျောင်းသား ကျောင်းသူ လူငယ်မောင်မယ်များ အားလုံး မင်္ဂလာပါ ဆရာကတော့ ဒီနေ့ ဒသမတန်း စနစ်ဟောင်း မြန်မာစာ ဘာသာရပ်မှာ ပါဝင်တဲ့ မြန်မာကဗျာ လက်ရွေးစင် စာအုပ်ရဲ့ ပထမဦးဆုံး ကဗျာဖြစ်တဲ့ ရွှေနှင့်ရိုးမှား ပန်းစကားကဗျာကို သားတို့ သမီးတို့ကို မိတ်ဆက် သင်ကြားပေးသွားမှာ ဖြစ်ပါတယ်"
}
```
## 📦 Dataset Structure (WebDataset Format)
The dataset is organized as a Hugging Face-compatible WebDataset archive, with `.tar` files stored under the `train/` folder.
Each `.tar` contains:
- `XXXXXX.mp3`: audio chunk
- `XXXXXX.json`: corresponding metadata with:
  - `"transcript"` (string)
  - `"duration"` (float, seconds)
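If you have downloaded a shard locally, the mp3/json pairs can be read with nothing but the standard library. This is a minimal sketch (the function name `iter_pairs` and the pairing-on-basename logic are illustrative, not part of the dataset's tooling):

```python
import json
import tarfile


def iter_pairs(shard_path):
    """Yield (mp3_bytes, metadata_dict) pairs from one WebDataset shard.

    Pairs are matched on the shared basename,
    e.g. 000123.mp3 / 000123.json.
    """
    audio, meta = {}, {}
    with tarfile.open(shard_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            key, _, ext = member.name.rpartition(".")
            data = tar.extractfile(member).read()
            if ext == "mp3":
                audio[key] = data
            elif ext == "json":
                meta[key] = json.loads(data)
    # Yield only keys that have both an audio file and metadata
    for key in sorted(audio.keys() & meta.keys()):
        yield audio[key], meta[key]
```

For example, `for mp3_bytes, info in iter_pairs("train/00000.tar"):` would give access to `info["transcript"]` and `info["duration"]` for each clip.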
Dataset volumes:

- **Volume 1**: `train/00000.tar` – `train/00015.tar` (150,082 clips, ~100 hours)
- **Volume 2**: `train/00016.tar` – `train/00031.tar` (160,000 clips, ~112 hours)
- **Volume 3**: `train/00032.tar` – `train/00053.tar` (211,394 clips, ~154 hours)

**Total**: 521,476 clips across 54 `.tar` files (~366 hours)
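As a quick sanity check, the per-volume clip counts listed above do sum to the stated total:

```python
# Clip counts per volume, as listed above
volumes = {
    "Volume 1": 150_082,
    "Volume 2": 160_000,
    "Volume 3": 211_394,
}

total_clips = sum(volumes.values())
print(total_clips)  # 521476, matching the stated total
```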
## 🧪 Example Usage with Hugging Face `datasets`
You can stream the dataset directly using the `datasets` library with WebDataset support:
```python
from datasets import load_dataset
ds = load_dataset(
    "freococo/nug_myanmar_asr",
    data_dir="train",
    split="train",
    streaming=True,
)

# Inspect the first sample
sample = next(iter(ds))
print(sample["json"])                  # {'transcript': '...', 'duration': ...}
print(sample["mp3"]["array"])          # NumPy waveform
print(sample["mp3"]["sampling_rate"])  # e.g., 44100
```
> This method avoids downloading the full dataset at once, making it suitable for low-resource environments.
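Streaming also composes with the library's lazy `filter`. For instance, to keep only short clips, you can filter on the `duration` field in each sample's metadata (the 20-second threshold below is an arbitrary example, not a property of the dataset):

```python
def keep_short(sample, max_seconds=20.0):
    """Predicate for datasets' filter: keep clips at most max_seconds long."""
    return sample["json"]["duration"] <= max_seconds


# With the streaming dataset from the example above (not run here):
# short_ds = ds.filter(keep_short)
```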
## ⚠️ Known Limitations & Cautions
While this dataset is a historic release, it is not without imperfections. Developers should be aware of:
- **Machine-aligned transcripts**:
- Not manually proofread.
- May contain typos, inconsistencies, or incorrect alignments.
- Acceptable for general-purpose ASR model training (e.g., Whisper).
- For high-accuracy applications (medical, legal), **manual review is advised**.
- **Spoken-language variation**:
- Teachers use formal and informal tones.
- Regional accents are not explicitly annotated.
- May contain spoken ellipses or emphasis markers typical of Burmese teaching.
- **Comma Warning**:
- All English commas (`,`) in transcripts were converted to Burmese `၊` to avoid CSV parsing issues and Hugging Face ingestion errors.
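If your downstream tooling expects ASCII commas, the substitution can be reversed with a simple replacement. Note that this is lossy: any `၊` (U+104A) that was genuinely part of the source transcript is rewritten as well, so treat it as an optional normalization step:

```python
def restore_ascii_commas(transcript: str) -> str:
    """Map the Burmese section mark (U+104A) back to an ASCII comma.

    Lossy: U+104A characters originally present in the text
    are also rewritten.
    """
    return transcript.replace("\u104a", ",")
```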
## 🕊️ Personal Note
This dataset was created independently, without funding, sponsorship, or financial benefit. It is dedicated purely to the Myanmar people, out of deep respect for the beauty of the Burmese language. My hope is that it serves as a bridge, bringing Myanmar’s voice clearly and powerfully into the global AI community.
## 📚 Citation
```bibtex
@misc{nug_asr_2025,
  title        = {NUG Myanmar Open ASR Corpus},
  author       = {freococo},
  year         = {2025},
  publisher    = {Hugging Face Datasets},
  howpublished = {\url{https://huggingface.co/datasets/freococo/366hours_nug_myanmar_asr_dataset}},
  note         = {366-hour Burmese ASR dataset from NUG civic broadcasts. CC0 licensed.}
}
```