hmhm1229 committed
Commit 663ccc0 · verified · 1 Parent(s): a7233dd

Update README.md

Files changed (1): README.md +4 −2
README.md CHANGED
@@ -8,6 +8,8 @@ datasets:
 - openbmb/EVisRAG-Train
 ---
 
+This is a temporary repo forked from openbmb/evisrag-7b.
+
 # VisRAG 2.0: Evidence-Guided Multi-Image Reasoning in Visual Retrieval-Augmented Generation
 [![Github](https://img.shields.io/badge/VisRAG-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https://github.com/OpenBMB/VisRAG)
 [![arXiv](https://img.shields.io/badge/arXiv-2510.09733-ff0000.svg?style=for-the-badge)](https://arxiv.org/abs/2510.09733)
@@ -37,7 +39,7 @@
 
 # ✨ EVisRAG Pipeline
 
-**EVisRAG** is an end-to-end framework which equips VLMs with precise visual perception during reasoning in multi-image scenarios. We trained and realeased VLRMs with EVisRAG built on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), and [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
+**EVisRAG** is an end-to-end framework which equips VLMs with precise visual perception during reasoning in multi-image scenarios. We trained and released VLRMs with EVisRAG built on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), and [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
 
 # ⚙️ Setup
 ```bash
@@ -112,7 +114,7 @@ If none of the images contain sufficient information to answer the question, res
 Formatting Requirements:
 Use the exact tags <observe>, <evidence>, <think>, and <answer> for structured output.
 It is possible that none, one, or several images contain relevant evidence.
-If you find no evidence or few evidences, and insufficient to help you answer the question, follow the instruction above for insufficient information.
+If you find no evidence or little evidence, and insufficient to help you answer the question, follow the instructions above for insufficient information.
 
 Question and images are provided below. Please follow the steps as instructed.
 Question: {query}
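
For readers who want to try the checkpoint this README describes, here is a minimal, untested sketch of loading it with Hugging Face transformers. The Qwen2.5-VL architecture is taken from the README excerpt above; the repo id openbmb/evisrag-7b comes from the commit note, and the file names, question, and generation settings are placeholder assumptions, not part of the repo.

```python
# Untested sketch. Assumptions: the checkpoint keeps the Qwen2.5-VL architecture
# (per the README) and the repo id below (from the commit note) is resolvable.
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "openbmb/evisrag-7b"  # assumption: id taken from the commit note above
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Retrieved page images plus the user question, in Qwen's multi-image chat format.
images = [Image.open("page1.png"), Image.open("page2.png")]  # placeholder files
question = "Which page reports the main result?"             # placeholder query
content = [{"type": "image"} for _ in images] + [{"type": "text", "text": question}]
messages = [{"role": "user", "content": content}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=images, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=512)
new_tokens = out[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```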
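The prompt shown in the diff requires responses wrapped in <observe>, <evidence>, <think>, and <answer> tags. The tag names come from the README excerpt; the parser below is an illustrative sketch of extracting those fields, not code from the repo.

```python
import re

TAGS = ("observe", "evidence", "think", "answer")

def parse_evisrag_output(response: str) -> dict:
    """Pull the <observe>/<evidence>/<think>/<answer> sections out of a
    response that follows the formatting requirements quoted above.
    A tag that is missing or unclosed maps to None."""
    fields = {}
    for tag in TAGS:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", response, flags=re.DOTALL)
        fields[tag] = match.group(1).strip() if match else None
    return fields

# Toy response, purely for illustration.
sample = (
    "<observe>Image 2 contains a results table.</observe>"
    "<evidence>Image 2, last row of the table.</evidence>"
    "<think>The table answers the question directly.</think>"
    "<answer>Example answer.</answer>"
)
print(parse_evisrag_output(sample)["answer"])  # -> Example answer.
```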