nielsr (HF Staff) committed
Commit a1b4c76 · verified · 1 Parent(s): 274e9f4

Improve model card: Add pipeline tag, library_name, and links to paper/code


This PR enhances the model card for `AndesVL-4B-Instruct` by:
- Adding `library_name: transformers` metadata, which enables the automated "How to use" widget on the Hub, as evidenced by the `transformers` imports in the `Quick Start` section and `config.json` (see the loading sketch after this list).
- Adding `pipeline_tag: image-text-to-text` metadata, improving discoverability for multimodal tasks, as indicated by the paper abstract and sample usage.
- Including a direct link to the Hugging Face paper page: [AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model](https://huggingface.co/papers/2510.11496) at the top of the model card.
- Adding a link to the official GitHub repository for the evaluation toolkit: [https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation](https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation).
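To illustrate what the enabled widget points users toward, here is a minimal, hypothetical loading sketch. It assumes the model can be loaded through the generic `transformers` Auto classes with `trust_remote_code=True`; the placeholder repo id, the chosen classes, and the omitted image/chat handling are assumptions, and the model card's `Quick Start` section remains the authoritative usage reference.

```python
# Hypothetical sketch only -- the model card's Quick Start section is authoritative.
# Assumptions: a placeholder repo id, and that the custom AndesVL modeling code in
# the repo is exposed through the generic Auto classes via trust_remote_code.
from transformers import AutoModel, AutoTokenizer

model_id = "<org>/AndesVL-4B-Instruct"  # placeholder; replace with the actual Hub repo id

model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,  # pulls the custom AndesVL modeling code from the repo
    torch_dtype="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Image preprocessing and chat-template formatting follow the Quick Start section.
```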

Please review and merge this PR.

Files changed (1): README.md (+9 −1)

README.md CHANGED

@@ -1,7 +1,15 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: image-text-to-text
 ---
+
 # AndesVL-4B-Instruct
+
+This model is presented in the paper [AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model](https://huggingface.co/papers/2510.11496).
+
+The evaluation code for this model is available at: [https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation](https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation)
+
 AndesVL is a suite of mobile-optimized Multimodal Large Language Models (MLLMs) with **0.6B to 4B parameters**, built upon Qwen3's LLM and various visual encoders. Designed for efficient edge deployment, it achieves first-tier performance on diverse benchmarks, including those for text-rich tasks, reasoning tasks, Visual Question Answering (VQA), multi-image tasks, multilingual tasks, and GUI tasks. Its "1+N" LoRA architecture and QALFT framework facilitate efficient task adaptation and model compression, enabling a 6.7x peak decoding speedup and a 1.8 bits-per-weight compression ratio on mobile chips.
 
 Detailed model sizes and components are provided below:
@@ -60,4 +68,4 @@ If you find our work helpful, feel free to give us a cite.
 ```
 
 # Acknowledge
-We are very grateful for the efforts of the [Qwen](https://huggingface.co/Qwen), [AimV2](https://huggingface.co/apple/aimv2-large-patch14-224) and [Siglip 2](https://arxiv.org/abs/2502.14786) projects.
+We are very grateful for the efforts of the [Qwen](https://huggingface.co/Qwen), [AimV2](https://huggingface.co/apple/aimv2-large-patch14-224) and [Siglip 2](https://arxiv.org/abs/2502.14786) projects.