Update README.md
#1 by hmhm1229 - opened

README.md CHANGED
@@ -8,6 +8,8 @@ datasets:
 - openbmb/EVisRAG-Train
 ---
 
+This is a temporary repo forked from openbmb/evisrag-7b.
+
 # VisRAG 2.0: Evidence-Guided Multi-Image Reasoning in Visual Retrieval-Augmented Generation
 [](https://github.com/OpenBMB/VisRAG)
 [](https://arxiv.org/abs/2510.09733)
@@ -37,7 +39,7 @@ datasets:
 
 # ✨ EVisRAG Pipeline
 
-**EVisRAG** is an end-to-end framework which equips VLMs with precise visual perception during reasoning in multi-image scenarios. We trained and 
+**EVisRAG** is an end-to-end framework which equips VLMs with precise visual perception during reasoning in multi-image scenarios. We trained and released VLRMs with EVisRAG built on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
 
 # ⚙️ Setup
 ```bash
@@ -112,7 +114,7 @@ If none of the images contain sufficient information to answer the question, res
 Formatting Requirements:
 Use the exact tags <observe>, <evidence>, <think>, and <answer> for structured output.
 It is possible that none, one, or several images contain relevant evidence.
-If you find no evidence or 
+If you find no evidence, or only little evidence that is insufficient to answer the question, follow the instructions above for insufficient information.
 
 Question and images are provided below. Please follow the steps as instructed.
 Question: {query}
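
As context for the card edited above: it describes EVisRAG models built on Qwen2.5-VL-7B/3B-Instruct and a prompt that ends with `Question: {query}`. A minimal inference sketch follows; the checkpoint id `openbmb/EVisRAG-7B`, the placeholder image paths, and the use of the standard Qwen2.5-VL loading classes are assumptions on my part, not something this card confirms.

```python
# Minimal inference sketch for the model this card describes.
# Assumptions (not stated in the diff): the checkpoint id "openbmb/EVisRAG-7B", that a model
# built on Qwen2.5-VL-7B-Instruct loads with the standard Qwen2.5-VL classes and chat template,
# and the placeholder image paths.
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "openbmb/EVisRAG-7B"  # assumed repo id; check the model card for the exact name
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Fill the README's prompt template; only its last two lines are visible in this diff.
query = "Which year had the highest revenue?"
prompt = (
    "Question and images are provided below. Please follow the steps as instructed.\n"
    f"Question: {query}"
)

# Multi-image RAG input: retrieved page images plus the instruction prompt.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "retrieved_page_1.png"},  # placeholder paths
        {"type": "image", "image": "retrieved_page_2.png"},
        {"type": "text", "text": prompt},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
response = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(response)  # expected to contain <observe>, <evidence>, <think>, and <answer> blocks
```

If the repo ships its own loading or serving instructions, the Setup section of the full README should take precedence over this sketch.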
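Because the prompt requires the exact tags <observe>, <evidence>, <think>, and <answer>, downstream code typically extracts each field before consuming the final answer. A small illustrative parser using only the Python standard library; the function name and the sample response string are made up for this example and are not part of the EVisRAG codebase.

```python
import re

def parse_evisrag_output(text: str) -> dict:
    """Extract the <observe>, <evidence>, <think>, and <answer> blocks from a model response.

    Missing tags map to None, since the prompt allows cases where no image
    contains relevant evidence.
    """
    fields = {}
    for tag in ("observe", "evidence", "think", "answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
        fields[tag] = match.group(1).strip() if match else None
    return fields

# Example with a made-up response string:
response = (
    "<observe>Image 1 shows a revenue table; image 2 is unrelated.</observe>"
    "<evidence>Image 1: revenue peaks at $4.2M in 2021.</evidence>"
    "<think>Only image 1 is relevant, and it directly states the peak year.</think>"
    "<answer>2021</answer>"
)
print(parse_evisrag_output(response)["answer"])  # -> "2021"
```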
