Improve model card: Add pipeline tag, library name, paper link, abstract, and detailed usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +138 -4
README.md CHANGED
@@ -6,15 +6,149 @@ tags:
- multimodal
- qwen
- visurf
+ pipeline_tag: image-text-to-text
+ library_name: transformers
---

- # Visurf Model
+ # ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models
+
+ This repository contains the `Visurf-7B-NoThink-Best-on-gRefCOCO` model, as presented in the paper [ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models](https://huggingface.co/papers/2510.10606).
+
+ **Abstract:**
+ Typical post-training paradigms for Large Vision-and-Language Models (LVLMs) include Supervised Fine-Tuning (SFT) and Reinforcement Learning with Verifiable Rewards (RLVR). SFT leverages external guidance to inject new knowledge, whereas RLVR utilizes internal reinforcement to enhance reasoning capabilities and overall performance. However, our analysis reveals that SFT often leads to sub-optimal performance, while RLVR struggles with tasks that exceed the model's internal knowledge base. To address these limitations, we propose ViSurf (**Vi**sual **Su**pervised-and-**R**einforcement **F**ine-Tuning), a unified post-training paradigm that integrates the strengths of both SFT and RLVR within a single stage. We analyze the derivation of the SFT and RLVR objectives to establish the ViSurf objective, providing a unified perspective on these two paradigms. The core of ViSurf involves injecting ground-truth labels into the RLVR rollouts, thereby providing simultaneous external supervision and internal reinforcement. Furthermore, we introduce three novel reward control strategies to stabilize and optimize the training process. Extensive experiments across several diverse benchmarks demonstrate the effectiveness of ViSurf, outperforming both individual SFT, RLVR, and two-stage SFT $\rightarrow$ RLVR. In-depth analysis corroborates these findings, validating the derivation and design principles of ViSurf.
+
+ For more details, including the code and training procedures, please refer to the official [GitHub repository](https://github.com/dvlab-research/ViSurf).
+
+ ## Overview
+ An overview of ViSurf:
+
+ <div align=center>
+ <img width="98%" src="https://github.com/dvlab-research/ViSurf/raw/main/assets/overview.png"/>
+ </div>
+
+ ViSurf (**Vi**sual **Su**pervised-and-**R**einforcement **F**ine-Tuning) is a unified post-training paradigm that integrates the strengths of both SFT and RLVR within a single stage.
+
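+ As a rough, illustrative sketch only (not the authors' implementation), the core idea of injecting ground-truth labels into the RLVR rollout group can be pictured as follows; the exact-match reward and the variable names are placeholders:
+
+ ```python
+ from typing import Callable, List
+
+ def visurf_group(rollouts: List[str], ground_truth: str,
+                  reward_fn: Callable[[str, str], float]):
+     """Build a ViSurf-style group: the ground-truth label joins the sampled rollouts
+     (external supervision), and every candidate is scored with a verifiable reward
+     (internal reinforcement) before the policy update."""
+     group = rollouts + [ground_truth]
+     rewards = [reward_fn(candidate, ground_truth) for candidate in group]
+     return group, rewards
+
+ # Toy usage with a trivial exact-match reward (the paper's reward design and its
+ # three reward control strategies are more elaborate and are not reproduced here)
+ group, rewards = visurf_group(
+     rollouts=["<answer>sofa</answer>", "<answer>chair</answer>"],
+     ground_truth="<answer>no related object</answer>",
+     reward_fn=lambda pred, gt: 1.0 if pred == gt else 0.0,
+ )
+ print(rewards)  # the injected ground truth always receives the maximal reward
+ ```
+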
+ ## Installation
+
+ First, clone the repository and create a conda environment as described in the GitHub README:
+
+ ```bash
+ git clone https://github.com/dvlab-research/ViSurf.git
+ cd ViSurf
+ conda create -n visionreasoner python=3.12
+ conda activate visionreasoner
+ pip install -e .
+ ```
+
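+ Optionally, you can pre-download the checkpoint; this is a minimal sketch using `huggingface_hub` (the `transformers` calls below will otherwise fetch the weights automatically on first use):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the model repository into the local Hugging Face cache and print its path
+ local_path = snapshot_download("Ricky06662/Visurf-7B-NoThink-Best-on-gRefCOCO")
+ print(local_path)
+ ```
+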
+ ## Inference
+
+ You can load and use the model with the `transformers` library. The `tokenizer_config.json` indicates the use of `Qwen2_5_VLProcessor`, so the checkpoint is loaded below with `AutoProcessor` together with the Qwen2.5-VL model class (`Qwen2_5_VLForConditionalGeneration`).

```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
+ from transformers import AutoTokenizer, AutoProcessor, Qwen2_5_VLForConditionalGeneration
import torch
+ from PIL import Image
+ import requests
+ from io import BytesIO

+ # Load the model and processor
model_name = "Ricky06662/Visurf-7B-NoThink-Best-on-gRefCOCO"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
+ # The checkpoint is Qwen2.5-VL-based, so the explicit Qwen2.5-VL class is used here
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     model_name,
+     torch_dtype=torch.float16,  # or torch.bfloat16
+     low_cpu_mem_usage=True,
+     trust_remote_code=True
+ ).cuda()  # move the model to the GPU
+
+ # --- Example 1 (from the GitHub README) ---
+ # Question: "I want to rest, where should I sit?"
+ # Image: a kitchen counter with food and flowers; the model should indicate that there is no related object.
+
+ # Replace this with your own image, e.g. for a local file:
+ #   image_1 = Image.open("path/to/your/image.jpg").convert("RGB")
+ # Here we download an example image from the ViSurf repository:
+ image_url = "https://github.com/dvlab-research/ViSurf/raw/main/assets/test_output_1.png"
+ response = requests.get(image_url)
+ image_1 = Image.open(BytesIO(response.content)).convert("RGB")
+
+ text_query_1 = "I want to rest, where should I sit?"
+
+ messages_1 = [
+     {"role": "user", "content": [{"type": "image", "image": image_1}, {"type": "text", "text": text_query_1}]},
+ ]
+ prompt_1 = processor.apply_chat_template(messages_1, tokenize=False, add_generation_prompt=True)
+
+ inputs_1 = processor(text=prompt_1, images=image_1, return_tensors="pt").to(model.device)
+ generated_ids_1 = model.generate(**inputs_1, max_new_tokens=1024, do_sample=False)
+ # Decode only the newly generated tokens, skipping the prompt part of the sequence
+ response_1 = tokenizer.batch_decode(generated_ids_1[:, inputs_1.input_ids.shape[1]:], skip_special_tokens=True)[0]
+ print(f"Assistant (Query 1: '{text_query_1}'): {response_1}")
+
+ # Expected output (from the GitHub README): "The question seems to be asking where to sit, but the image only shows a kitchen counter with food and flowers."
```
+
+ Output mask for Example 1:
+ <div align=center>
+ <img width="98%" src="https://github.com/dvlab-research/ViSurf/raw/main/assets/test_output_1.png"/>
+ </div>
+
+ ```python
+ # --- Example 2 (from the GitHub README) ---
+ # Question: "I want to cook food, what can I use?"
+ # Image: a kitchen scene; the model should identify usable tools/ingredients.
+
+ # Use a different example image for this query
+ image_url = "https://github.com/dvlab-research/ViSurf/raw/main/assets/test_output_2.png"
+ response = requests.get(image_url)
+ image_2 = Image.open(BytesIO(response.content)).convert("RGB")
+
+ text_query_2 = "I want to cook food, what can I use?"
+
+ messages_2 = [
+     {"role": "user", "content": [{"type": "image", "image": image_2}, {"type": "text", "text": text_query_2}]},
+ ]
+ prompt_2 = processor.apply_chat_template(messages_2, tokenize=False, add_generation_prompt=True)
+
+ inputs_2 = processor(text=prompt_2, images=image_2, return_tensors="pt").to(model.device)
+ generated_ids_2 = model.generate(**inputs_2, max_new_tokens=1024, do_sample=False)
+ # Decode only the newly generated tokens, skipping the prompt part of the sequence
+ response_2 = tokenizer.batch_decode(generated_ids_2[:, inputs_2.input_ids.shape[1]:], skip_special_tokens=True)[0]
+ print(f"Assistant (Query 2: '{text_query_2}'): {response_2}")
+
+ # Expected output (from the GitHub README): "The question asks what kitchen tools or ingredients are visible that could be used for cooking."
+ ```
+
+ Output mask for Example 2:
+ <div align=center>
+ <img width="98%" src="https://github.com/dvlab-research/ViSurf/raw/main/assets/test_output_2.png"/>
+ </div>
+
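+ Both example queries can also be run as a single batch. This is a minimal sketch that reuses `prompt_1`/`prompt_2` and `image_1`/`image_2` from the examples above and assumes standard Qwen2.5-VL batching behavior (left padding for generation):
+
+ ```python
+ # Pad prompts on the left so that generation starts from aligned positions
+ processor.tokenizer.padding_side = "left"
+
+ batch_inputs = processor(
+     text=[prompt_1, prompt_2],
+     images=[image_1, image_2],
+     padding=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ batch_ids = model.generate(**batch_inputs, max_new_tokens=1024, do_sample=False)
+ # Strip the (padded) prompt tokens before decoding each response
+ new_tokens = batch_ids[:, batch_inputs.input_ids.shape[1]:]
+ for query, output in zip([text_query_1, text_query_2],
+                          tokenizer.batch_decode(new_tokens, skip_special_tokens=True)):
+     print(f"Assistant ({query}): {output}")
+ ```
+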
+ ## Citation
+
+ If you find our work helpful or inspiring, please feel free to cite it.
+
+ ```bibtex
+ @article{liu2025visurf,
+   title   = {ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models},
+   author  = {Liu, Yuqi and Chen, Liangyu and Liu, Jiazhen and Zhu, Mingkang and Zhong, Zhisheng and Yu, Bei and Jia, Jiaya},
+   journal = {arXiv preprint arXiv:2510.10606},
+   year    = {2025}
+ }
+
+ @article{liu2025segzero,
+   title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
+   author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
+   journal = {arXiv preprint arXiv:2503.06520},
+   year    = {2025}
+ }
+
+ @article{liu2025visionreasoner,
+   title   = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
+   author  = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
+   journal = {arXiv preprint arXiv:2505.12081},
+   year    = {2025}
+ }
+ ```