---
language:
- ku
- ckb
- kmr
- hac
size_categories:
- 1B<n<10B
license: cc-by-4.0
pretty_name: KurCorpus 2B
tags:
- kurdish
- low-resource
- language-modeling
---
# KurCorpus 2B

[DOI: 10.17632/fb5xhhn6m5.1](https://doi.org/10.17632/fb5xhhn6m5.1)
**KurCorpus 2B** is a multidialectal Kurdish text corpus (>2B tokens) for large-scale language modeling and downstream NLP tasks.
- **Dialects:** Sorani (ckb), Kurmanji/Badini (kmr), Hawrami/Gorani (hac)
- **License:** CC BY 4.0
- **Repo:** https://huggingface.co/datasets/abdulhade/Kurdishcorpus
- **External record:** Mendeley Data DOI `10.17632/fb5xhhn6m5.1`
---
## TL;DR
- Ready for **pretraining** and **finetuning** Kurdish LMs
- Single field **`text`** (UTF-8), offered as large archives or sharded `.txt(.gz)`
- Includes **normalization and cleaning** (Unicode, orthography, noise removal with placeholders like `[URL]`, `[EMAIL]`)
- No official splits; create your own task-specific splits and report dialect coverage
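The placeholder-based cleaning mentioned above can be illustrated with a minimal sketch. This is *not* the authors' pipeline (the corpus ships already cleaned); it only shows the kind of substitution that produces tokens like `[URL]` and `[EMAIL]`, using hypothetical regexes:

```python
import re

# Illustrative only: simple patterns standing in for the corpus's
# actual (unspecified) normalization rules.
URL_RE = re.compile(r"https?://\S+")
EMAIL_RE = re.compile(r"\S+@\S+\.\S+")

def normalize(text: str) -> str:
    # Replace URLs first so their text is not matched by the email pattern.
    text = URL_RE.sub("[URL]", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    return text

print(normalize("binêre https://example.com yan binivîse bo info@example.com"))
```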
---
## Quickstart
```python
from datasets import load_dataset
# Stream without downloading everything at once (recommended for very large corpora)
ds = load_dataset("abdulhade/Kurdishcorpus", split="train", streaming=True)
for ex in ds.take(5):
    print(ex["text"])
```
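Since the card ships no official splits, one reproducible recipe is to assign each example to a split by hashing its text, so membership is stable across runs and shards. A minimal stdlib sketch (the ratio and hashing key are assumptions to adapt per task):

```python
import hashlib

def split_of(text: str, val_permille: int = 5) -> str:
    """Deterministically bucket a text into 'train' or 'validation'.

    Hashing the text itself keeps assignment stable regardless of
    shard order or streaming position.
    """
    h = int(hashlib.sha256(text.encode("utf-8")).hexdigest(), 16)
    return "validation" if h % 1000 < val_permille else "train"

print(split_of("nimûneyek ji deqê kurdî"))
```

Applied to the streamed dataset above, `ds.filter(lambda ex: split_of(ex["text"]) == "train")` yields the training portion. Remember to report dialect coverage for whatever splits you create.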
---
## Citation
```bibtex
@dataset{rawf2025kurcorpus2b,
  title        = {KurCorpus 2B: A Multidialectal 2-Billion-Token Corpus for Kurdish Language Modeling},
  author       = {Rawf, Karwan Mahdi and Abdullah, Abdullah and Hussein, Amanj and Mohammed, Haukar},
  year         = {2025},
  version      = {1},
  howpublished = {Mendeley Data},
  doi          = {10.17632/fb5xhhn6m5.1}
}
```