# IrokoBench

### Paper

Title: IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models

Paper: https://arxiv.org/pdf/2406.03368
IrokoBench is a human-translated benchmark dataset for 16 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU).
### Citation

```bibtex
@misc{adelani2024irokobenchnewbenchmarkafrican,
      title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models},
      author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp},
      year={2024},
      eprint={2406.03368},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.03368},
}
```
### Groups and Tasks

#### Groups

- `afrixnli`: all AfriXNLI tasks
- `afrixnli_en_direct`: evaluates model performance using the `anli` prompt (in English) on the curated dataset
- `afrixnli_native_direct`: evaluates model performance using the `anli` prompt translated into the respective languages on the curated dataset
- `afrixnli_translate`: evaluates model performance using the `anli` prompt in the translate-test setting
- `afrixnli_manual_direct`: evaluates model performance using Lai's prompt on the curated dataset
- `afrixnli_manual_translate`: evaluates model performance using Lai's prompt in the translate-test setting (see the usage sketch after this list)
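Since these are ordinary harness groups, any of them can be passed to the evaluator by name. Below is a minimal sketch using the Python API; the checkpoint, batch size, device, and few-shot setting are illustrative assumptions, not recommended IrokoBench settings.

```python
# Minimal sketch: running one of the groups above through the
# lm-evaluation-harness Python API. The checkpoint, batch size, device and
# few-shot count are placeholders chosen only for illustration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder checkpoint
    tasks=["afrixnli_en_direct"],                    # any group name from the list above
    num_fewshot=0,
    batch_size=8,
    device="cuda:0",
)

# Aggregated and per-language scores are returned under results["results"],
# keyed by group/task name.
for name, metrics in results["results"].items():
    print(name, metrics)
```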
#### Tasks

- `afrixnli_en_direct_{language_code}`: each task evaluates a single language
- `afrixnli_native_direct_{language_code}`: each task evaluates a single language
- `afrixnli_translate_{language_code}`: each task evaluates a single language
- `afrixnli_manual_direct_{language_code}`: each task evaluates a single language
- `afrixnli_manual_translate_{language_code}`: each task evaluates a single language (the concrete task names can be listed as sketched below)
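The `{language_code}` placeholder expands to one code per covered language, so the registered per-language task names can be enumerated from the harness task registry. A short sketch, assuming the `TaskManager` helper available in recent harness versions; any name it prints can then be passed to `simple_evaluate` as in the previous sketch.

```python
# Sketch: listing the per-language IrokoBench task names registered in the
# harness. Assumes the TaskManager helper from recent lm-eval versions.
from lm_eval.tasks import TaskManager

task_manager = TaskManager()

# Every registered task instantiating the afrixnli_en_direct_{language_code}
# pattern; the same approach works for the other variants listed above.
en_direct_tasks = sorted(
    name for name in task_manager.all_tasks
    if name.startswith("afrixnli_en_direct_")
)
print(en_direct_tasks)
```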
### Checklist

For adding novel benchmarks/datasets to the library:

- Is the task an existing benchmark in the literature?
  - Have you referenced the original paper that introduced the task?
  - If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

- Is the "Main" variant of this task clearly denoted?
- Have you provided a short sentence in a README on what each new variant adds / evaluates?
- Have you noted which, if any, published evaluation setups are matched by this variant?
- Checked for equivalence with v0.3.0 LM Evaluation Harness