ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models
This repository contains the Visurf-7B-NoThink-Best-on-gRefCOCO model, as presented in the paper ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models.
Abstract: Typical post-training paradigms for Large Vision-and-Language Models (LVLMs) include Supervised Fine-Tuning (SFT) and Reinforcement Learning with Verifiable Rewards (RLVR). SFT leverages external guidance to inject new knowledge, whereas RLVR utilizes internal reinforcement to enhance reasoning capabilities and overall performance. However, our analysis reveals that SFT often leads to sub-optimal performance, while RLVR struggles with tasks that exceed the model's internal knowledge base. To address these limitations, we propose ViSurf (Visual Supervised-and-Reinforcement Fine-Tuning), a unified post-training paradigm that integrates the strengths of both SFT and RLVR within a single stage. We analyze the derivation of the SFT and RLVR objectives to establish the ViSurf objective, providing a unified perspective on these two paradigms. The core of ViSurf involves injecting ground-truth labels into the RLVR rollouts, thereby providing simultaneous external supervision and internal reinforcement. Furthermore, we introduce three novel reward control strategies to stabilize and optimize the training process. Extensive experiments across several diverse benchmarks demonstrate the effectiveness of ViSurf, outperforming individual SFT, individual RLVR, and two-stage SFT $\rightarrow$ RLVR. In-depth analysis corroborates these findings, validating the derivation and design principles of ViSurf.
For more details, including the code and training procedures, please refer to the official GitHub repository.
Overview
An overview of ViSurf (the overview figure is available in the GitHub repository):
ViSurf (Visual Supervised-and-Reinforcement Fine-Tuning) is a unified post-training paradigm that integrates the strengths of both SFT and RLVR within a single stage.
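To make the idea concrete, the snippet below is a toy sketch only, not the official training code: it assumes a GRPO-style group-normalized advantage and a generic verifiable reward function, and it omits the paper's three reward control strategies. It illustrates how the ground-truth label is injected into the rollout group so that external supervision and internal reinforcement are handled in a single pass.
# Toy sketch of the ViSurf core idea (NOT the official implementation).
# Assumptions: a GRPO-style group-normalized advantage and a user-supplied
# verifiable reward function reward_fn(response, label) -> float.
# The paper's three reward control strategies are omitted here.
from typing import Callable, List
import statistics

def visurf_group_advantages(
    rollouts: List[str],                       # responses sampled from the current policy
    ground_truth: str,                         # label injected as an extra "rollout"
    reward_fn: Callable[[str, str], float],
) -> List[float]:
    """Inject the ground-truth label into the rollout group, score every member
    with the verifiable reward, and return group-normalized advantages."""
    group = rollouts + [ground_truth]          # external supervision joins internal rollouts
    rewards = [reward_fn(r, ground_truth) for r in group]
    mean_r = statistics.mean(rewards)
    std_r = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean_r) / std_r for r in rewards]

# Toy usage with an exact-match reward
advantages = visurf_group_advantages(
    rollouts=["a dog", "a cat", "a bird"],
    ground_truth="a cat",
    reward_fn=lambda resp, label: 1.0 if resp == label else 0.0,
)
print(advantages)  # the injected label (and any matching rollout) receives a positive advantage
In the actual method these advantages drive the policy-gradient update; refer to the GitHub repository for the real training code.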
Installation
First, clone the repository and create a conda environment as described in the GitHub README:
git clone https://github.com/dvlab-research/ViSurf.git
cd ViSurf
conda create -n visionreasoner python=3.12
conda activate visionreasoner
pip install -e .
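Optionally, you can download the checkpoint ahead of time with huggingface_hub (the local directory name below is only illustrative):
from huggingface_hub import snapshot_download

# Fetch the model weights into a local directory (path is illustrative)
snapshot_download(
    repo_id="Ricky06662/Visurf-7B-NoThink-Best-on-gRefCOCO",
    local_dir="pretrained_models/Visurf-7B-NoThink-Best-on-gRefCOCO",
)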
Inference
You can load and use the model with the transformers library. The tokenizer_config.json indicates that the checkpoint uses Qwen2_5_VLProcessor, so it should be loaded with AutoProcessor together with Qwen2_5_VLForConditionalGeneration rather than a text-only causal LM class.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
import torch
from PIL import Image
import requests
from io import BytesIO
# Load model and processor
model_name = "Ricky06662/Visurf-7B-NoThink-Best-on-gRefCOCO"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # or torch.bfloat16
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).cuda()  # Move to GPU if available
# --- Example 1 (from GitHub README) ---
# Question: "I want to rest, where should I sit?"
# Image: A kitchen counter with food and flowers. The model should indicate no related object.
# Placeholder image for demonstration purposes
# In a real scenario, replace with your actual image loading
# For instance, if running locally, you'd use:
# image = Image.open("path/to/your/image.jpg").convert("RGB")
# Or download an example image:
image_url = "https://github.com/dvlab-research/ViSurf/raw/main/assets/test_output_1.png"
response = requests.get(image_url)
image_1 = Image.open(BytesIO(response.content)).convert("RGB")
text_query_1 = "I want to rest, where should I sit?"
messages_1 = [
{"role": "user", "content": [{"type": "image", "image": image_1}, {"type": "text", "text": text_query_1}]},
]
prompt_1 = processor.apply_chat_template(messages_1, tokenize=False, add_generation_prompt=True)
inputs_1 = processor(text=prompt_1, images=image_1, return_tensors="pt").to(model.device)
generated_ids_1 = model.generate(**inputs_1, max_new_tokens=1024, do_sample=False)
# Trim the prompt tokens so only the newly generated response is decoded
output_ids_1 = generated_ids_1[:, inputs_1["input_ids"].shape[1]:]
response_1 = processor.batch_decode(output_ids_1, skip_special_tokens=True)[0]
print(f"Assistant (Query 1: '{text_query_1}'): {response_1}")
# Expected output (from GitHub README): "The question seems to be asking where to sit, but the image only shows a kitchen counter with food and flowers."
The output mask for Example 1 is shown in the original model card (see the GitHub repository assets).
# --- Example 2 (from GitHub README) ---
# Question: "I want to cook food, what can I use?"
# Image: A kitchen scene. The model should identify tools/ingredients.
# Using a different example image for this query
image_url = "https://github.com/dvlab-research/ViSurf/raw/main/assets/test_output_2.png"
response = requests.get(image_url)
image_2 = Image.open(BytesIO(response.content)).convert("RGB")
text_query_2 = "I want to cook food, what can I use?"
messages_2 = [
{"role": "user", "content": [{"type": "image", "image": image_2}, {"type": "text", "text": text_query_2}]},
]
prompt_2 = processor.apply_chat_template(messages_2, tokenize=False, add_generation_prompt=True)
inputs_2 = processor(text=prompt_2, images=image_2, return_tensors="pt").to(model.device)
generated_ids_2 = model.generate(**inputs_2, max_new_tokens=1024, do_sample=False)
# Trim the prompt tokens so only the newly generated response is decoded
output_ids_2 = generated_ids_2[:, inputs_2["input_ids"].shape[1]:]
response_2 = processor.batch_decode(output_ids_2, skip_special_tokens=True)[0]
print(f"Assistant (Query 2: '{text_query_2}'): {response_2}")
# Expected output (from GitHub README): "The question asks what kitchen tools or ingredients are visible that could be used for cooking."
The output mask for Example 2 is shown in the original model card (see the GitHub repository assets).
Citation
If you find our work helpful or inspiring, please feel free to cite it.
@article{liu2025visurf,
title = {ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models},
author = {Liu, Yuqi and Chen, Liangyu and Liu, Jiazhen and Zhu, Mingkang and Zhong, Zhisheng and Yu, Bei and Jia, Jiaya},
journal = {arXiv preprint arXiv:2503.06520},
year = {2025}
}
@article{liu2025segzero,
title = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
author = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
journal = {arXiv preprint arXiv:2503.06520},
year = {2025}
}
@article{liu2025visionreasoner,
title = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
author = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
journal = {arXiv preprint arXiv:2505.12081},
year = {2025}
}