Update src/about.py
src/about.py +1 -1
CHANGED
@@ -41,7 +41,7 @@ INTRODUCTION_TEXT = """
 
 This leaderboard tracks the performance of various models on the [MMLongBench-Doc](https://arxiv.org/abs/2407.01523) benchmark, focusing on their ability to understand and process long documents with both text and visual elements.
 
-You can use the official [GitHub repo](https://github.com/mayubo2333/MMLongBench-Doc) or [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate your model on [MMLongBench-Doc](https://arxiv.org/abs/2407.01523). We provide the [official evaluation results](https://huggingface.co/OpenIXCLab/mmlongbench-doc-results) of GPT-4.1 and GPT-4o.
+You can use the official [GitHub repo](https://github.com/mayubo2333/MMLongBench-Doc) or [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate your model on [MMLongBench-Doc](https://arxiv.org/abs/2407.01523). We provide the [official evaluation results](https://huggingface.co/datasets/OpenIXCLab/mmlongbench-doc-results) of GPT-4.1 and GPT-4o.
 
 To add your own model to the leaderboard, please send an email to yubo001@e.ntu.edu.sg or zangyuhang@pjlab.org.cn; we will then help with the evaluation and update the leaderboard.
 """