# Dhivehi NER Dataset
This dataset is a weakly-supervised Named Entity Recognition (NER) dataset for the Dhivehi language, built from a large unlabeled sentence corpus using dictionary-based tagging and BIO post-processing.
## Dataset Summary
- Language: Dhivehi (ދިވެހި), plus Arabic script (in Dhivehi names only)
- Records: 90,735 (cleaned from 97,308 original)
- Total Tokens: 775,136
- Total Entities: 128,764
- Data Quality: 93.2% (6,573 invalid records removed)
- Average Sentence Length: 8.5 tokens
- Average Entity Length: 1.1 tokens
## Entity Types and Labels
The dataset uses the BIO (Beginning-Inside-Outside) tagging scheme:
| Label ID | Label Name | Description | Count | Percentage |
|---|---|---|---|---|
| 0 | O | Outside any entity | 646,372 | 83.4% |
| 1 | B-PER | Beginning of Person | 43,973 | 5.7% |
| 2 | I-PER | Inside Person | 7,228 | 0.9% |
| 3 | B-ORG | Beginning of Organization | 28,401 | 3.7% |
| 4 | I-ORG | Inside Organization | 4,910 | 0.6% |
| 5 | B-LOC | Beginning of Location | 39,495 | 5.1% |
| 6 | I-LOC | Inside Location | 2,211 | 0.3% |
| 7 | B-MISC | Beginning of Miscellaneous | 1,861 | 0.2% |
| 8 | I-MISC | Inside Miscellaneous | 685 | 0.1% |
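The numeric IDs and BIO class names in the table above can be expressed as a simple mapping, which is handy when converting between the `ner_tags` and `ner_class` fields. A minimal sketch (the `ID2LABEL` and `tags_to_classes` names are illustrative, not part of the dataset):

```python
# Mapping between numeric label IDs and BIO class names,
# exactly as listed in the label table above.
ID2LABEL = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC",
    7: "B-MISC", 8: "I-MISC",
}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

def tags_to_classes(ner_tags):
    """Convert a list of numeric tags (0-8) to BIO class strings."""
    return [ID2LABEL[t] for t in ner_tags]
```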
## Dataset Fields

Each record contains the following fields:

| Field | Type | Description |
|---|---|---|
| `text` | string | Original sentence text |
| `token` | list[string] | Tokenized words/tokens |
| `ner_tags` | list[int] | Numeric entity labels (0-8) |
| `ner_class` | list[string] | String entity labels (`B-PER`, `I-ORG`, etc.) |
## Data Quality
The dataset has been cleaned and validated:
- Length Consistency: All records have matching token, tag, and class lengths
- Label Validation: All tags are valid integers (0-8)
- BIO Consistency: Proper B-/I- prefix usage for all entity types
- Invalid Records Removed: 6,573 records with quality issues were excluded
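The three checks above can be sketched as a small per-record validation function. This is a minimal illustration, assuming records shaped like the ones in this card (the `validate_record` name is hypothetical, not from the dataset tooling):

```python
def validate_record(record):
    """Return a list of quality issues for one record; an empty list means valid."""
    issues = []
    tokens = record["token"]
    tags = record["ner_tags"]
    classes = record["ner_class"]

    # Length consistency: token, tag, and class lists must align.
    if not (len(tokens) == len(tags) == len(classes)):
        issues.append("length mismatch")

    # Label validation: every tag must be an integer in 0-8.
    if any(not isinstance(t, int) or not 0 <= t <= 8 for t in tags):
        issues.append("invalid tag value")

    # BIO consistency: an I-X label must follow a B-X or I-X of the same type.
    prev = "O"
    for cls in classes:
        if cls.startswith("I-") and (prev == "O" or prev[2:] != cls[2:]):
            issues.append(f"orphan {cls}")
            break
        prev = cls
    return issues
```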
## Dataset Preview

```json
{
  "text": "މާދަމާގެ އެއްވުމަށް ފުލުހުން ޝަރުތުތަކެއް ކަނޑައަޅައި އަދާލަތު ޕާޓީ އަށް ސިޓީ ފޮނުވައިފި",
  "token": ["މާދަމާގެ", "އެއްވުމަށް", "ފުލުހުން", "ޝަރުތުތަކެއް", "ކަނޑައަޅައި", "އަދާލަތު", "ޕާޓީ", "އަށް", "ސިޓީ", "ފޮނުވައިފި"],
  "ner_tags": [0, 0, 3, 0, 0, 3, 4, 0, 0, 0],
  "ner_class": ["O", "O", "B-ORG", "O", "O", "B-ORG", "I-ORG", "O", "O", "O"]
}
```
## Usage

### Loading with Hugging Face Datasets
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("alakxender/dhivehi-ner-dataset")

# Access records
for record in dataset["train"]:
    print(f"Text: {record['text']}")
    print(f"Tokens: {record['token']}")
    print(f"NER Tags: {record['ner_tags']}")
    print(f"NER Classes: {record['ner_class']}")
```
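Once loaded, the per-token BIO labels in `ner_class` can be grouped back into entity spans. A minimal sketch under the BIO scheme described above (the `extract_entities` helper is illustrative, not provided by the dataset):

```python
def extract_entities(tokens, classes):
    """Group BIO-tagged tokens into (entity_text, entity_type) spans."""
    entities, current, etype = [], [], None
    for tok, cls in zip(tokens, classes):
        if cls.startswith("B-"):
            # A B- tag always starts a new entity, closing any open one.
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [tok], cls[2:]
        elif cls.startswith("I-") and etype == cls[2:]:
            # An I- tag of the same type continues the open entity.
            current.append(tok)
        else:
            # O (or a mismatched I-) closes the open entity, if any.
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities
```

Applied to the preview record above, this yields one `ORG` span per `B-ORG` run.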
## Accuracy Notice
This dataset was produced with automated dictionary-based tagging, cleaning, and validation. While systematic quality issues have been addressed, some entity boundaries and classifications may still require manual review before production use.
Recommended Use Cases:
- Pretraining NER models for Dhivehi
- Research and development
- Baseline model training
- Weak supervision pipelines