ABOUT_TEXT = """
# About AfroBench Leaderboard
The **AfroBench Leaderboard** is a platform for evaluating multilingual language models across **64 African languages** and over **15 diverse NLP tasks**. These tasks span **classification**, **reasoning**, **question answering**, **summarization**, and **machine translation**, and are grounded in over **22 benchmark datasets** focused on low-resource and underrepresented languages.
The goal of this leaderboard is to:
- Highlight the performance of LLMs on African languages.
- Support diagnostic and task-level evaluation across different LLMs.
- Enable fair comparisons between open-source and closed models using both full and lite subsets of the benchmark.
This leaderboard supports two main views:
- **AfroBench**: The full evaluation benchmark organized by task, subtask, and dataset.
- **AfroBench-Lite**: A lightweight subset of the benchmark with a consistent set of languages across tasks, designed for efficient evaluation.
Each score is computed as the average across all selected columns and views, allowing flexible filtering and analysis.
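As a minimal sketch of that averaging, assuming hypothetical column names and values (not AfroBench's actual schema), a displayed score is just the mean over the columns the user has selected:

```python
# Hypothetical leaderboard row; task columns and values are illustrative only.
row = {"Model": "some-model", "MT": 41.2, "QA": 55.0, "Summarization": 30.4}

# Columns the user has filtered to in the current view.
selected = ["MT", "QA", "Summarization"]

# The displayed score is the plain mean over the selected columns.
score = sum(row[c] for c in selected) / len(selected)
```

Deselecting a column simply drops it from `selected`, so the average (and the ranking it induces) updates accordingly.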
---
## More Information
To learn more about the benchmark, datasets, task definitions, and evaluation procedures, please visit the official project site:
[AfroBench Website](https://mcgill-nlp.github.io/AfroBench/index.html)
You can also explore:
- [AfroBench Paper on arXiv](https://arxiv.org/abs/2311.07978)
- [AfroBench GitHub Repository](https://github.com/McGill-NLP/AfroBench)
"""
SUBMISSION_TEXT = """ | |
<h1 align="center"> | |
How to submit models/results to the leaderboard? | |
</h1> | |
We welcome the community to submit evaluation results of new models. We also provide an experimental feature for submitting models that our team will evaluate on the 🤗 cluster.
## Submitting Models (experimental feature)
Inspired by the Open LLM Leaderboard, we welcome model submissions from the community, which will be evaluated automatically. Please note that this is still an experimental feature.
Below are some guidelines to follow before submitting your model:
#### 1) Make sure you can load your model and tokenizer using AutoClasses: | |
```python | |
from transformers import AutoConfig, AutoModel, AutoTokenizer | |
config = AutoConfig.from_pretrained("your model name", revision=revision) | |
model = AutoModel.from_pretrained("your model name", revision=revision) | |
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision) | |
``` | |
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.
Note: make sure your model is public!
Note: if your model needs `trust_remote_code=True`, we do not support this option yet.
#### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index) | |
It's a format for storing weights that is safer and faster to load and use than pickle-based checkpoints. It also allows us to add your model's parameter count to the `Extended Viewer`!
#### 3) Make sure your model has an open license! | |
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗
#### 4) Fill out your model card
When we add extra information about models to the leaderboard, it will be automatically taken from the model card. | |
""" | |
SUBMISSION_TEXT_2 = """ | |
## Submitting Results
You also have the option of running the evaluation yourself and submitting the results. These results will be added as non-verified; the authors are, however, required to upload their generations in case other members want to check them.
### 1 - Running Evaluation | |
We wrote a detailed guide for running the evaluation on your model. You can find it in [bigcode-evaluation-harness/leaderboard](https://github.com/bigcode-project/bigcode-evaluation-harness/tree/main/leaderboard). This will generate a JSON file summarizing the results, in addition to the raw generations and metric files.
### 2 - Submitting Results
To submit your results, create a **Pull Request** in the community tab to add them under the [`community_results` folder](https://huggingface.co/spaces/bigcode/multilingual-code-evals/tree/main/community_results) in this repository:
- Create a folder called `ORG_MODELNAME_USERNAME`, for example `bigcode_starcoder_loubnabnl`.
- Put your JSON file with grouped scores from the guide in it, along with the generations folder and the metrics folder.
The title of the PR should be `[Community Submission] Model: org/model, Username: your_username`, replacing org and model with those of the model you evaluated.
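The expected submission layout can be sketched with the standard library. The folder name follows the guide's `ORG_MODELNAME_USERNAME` convention; the `results.json` filename here is an assumed placeholder for the grouped-scores JSON:

```python
from pathlib import Path

# Illustrative community-submission layout for the PR.
root = Path("community_results") / "bigcode_starcoder_loubnabnl"

# The raw generations and the metric files each get their own subfolder.
(root / "generations").mkdir(parents=True, exist_ok=True)
(root / "metrics").mkdir(parents=True, exist_ok=True)

# The grouped-scores JSON from the evaluation guide sits at the top level.
# "results.json" is a hypothetical name; use the file the guide produced.
(root / "results.json").write_text("{}")
```

The PR then simply adds this directory tree under `community_results` in the Space repository.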
""" | |
SUBMISSION_TEXT_3 = """ | |
<h1 align="center"> | |
How to submit models/results to the leaderboard? | |
</h1> | |
We welcome the community to submit evaluation results of new models. These results will be added as non-verified; the authors are, however, required to upload their generations in case other members want to check them.
### 1 - Running Evaluation | |
We wrote a detailed guide for running the evaluation on your model. You can find it in [bigcode-evaluation-harness/leaderboard](https://github.com/bigcode-project/bigcode-evaluation-harness/tree/main/leaderboard). This will generate a JSON file summarizing the results, in addition to the raw generations and metric files.
### 2 - Submitting Results
To submit your results, create a **Pull Request** in the community tab to add them under the [`community_results` folder](https://huggingface.co/spaces/bigcode/multilingual-code-evals/tree/main/community_results) in this repository:
- Create a folder called `ORG_MODELNAME_USERNAME`, for example `bigcode_starcoder_loubnabnl`.
- Put your JSON file with grouped scores from the guide in it, along with the generations folder and the metrics folder.
The title of the PR should be `[Community Submission] Model: org/model, Username: your_username`, replacing org and model with those of the model you evaluated.
""" | |