ABOUT_TEXT = """
# 📝 About AfroBench Leaderboard

The **AfroBench Leaderboard** is a platform for evaluating multilingual language models across **64 African languages** and over **15 diverse NLP tasks**.
These tasks span **classification**, **reasoning**, **question answering**, **summarization**, and **machine translation**, and are grounded in over **22 benchmark datasets** focused on low-resource and underrepresented languages.

The goal of this leaderboard is to:
- 🌍 Highlight the performance of LLMs on African languages.
- 🧪 Support diagnostic and task-level evaluation across different LLMs.
- ⚖️ Enable fair comparisons between open-source and closed models using both full and lite subsets of the benchmark.

This leaderboard supports two main views:
- **AfroBench**: The full evaluation benchmark organized by task, subtask, and dataset.
- **AfroBench-Lite**: A lightweight subset of the benchmark with a consistent set of languages across tasks, designed for efficient evaluation.

Each score is computed as the average across all selected columns and views, allowing flexible filtering and analysis.

---

## 🔗 More Information

To learn more about the benchmark, datasets, task definitions, and evaluation procedures, please visit the official project site:

👉 [AfroBench Website](https://mcgill-nlp.github.io/AfroBench/index.html)

You can also explore:
- 📄 [AfroBench Paper on arXiv](https://arxiv.org/abs/2311.07978)
- 🧑🏽‍💻 [AfroBench GitHub Repository](https://github.com/McGill-NLP/AfroBench)
"""

SUBMISSION_TEXT = """
Details on how to upload to the Hugging Face leaderboard, COMING SOON!
In the meantime, see how to run the evaluation in our [GitHub codebase](https://github.com/McGill-NLP/AfroBench).
"""

SUBMISSION_TEXT_2 = """
"""

SUBMISSION_TEXT_3 = """
"""
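
# A minimal sketch of the "average across all selected columns" scoring that
# ABOUT_TEXT describes. The column names and row layout here are hypothetical
# illustrations, not the leaderboard's actual data schema; the real app
# computes this over its task/dataset score columns.
def average_score(row: dict, selected_columns: list) -> float:
    """Mean of the scores in `selected_columns`, skipping missing values."""
    values = [row[col] for col in selected_columns if row.get(col) is not None]
    return sum(values) / len(values) if values else 0.0


# Example usage with made-up task columns:
#   average_score({"MT": 40.0, "QA": 60.0}, ["MT", "QA"]) -> 50.0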