Abstract
SimpleSeg enables multimodal large language models to perform pixel-level segmentation by predicting point sequences within language space, achieving competitive results without specialized architectures.
We present SimpleSeg, a strikingly simple yet highly effective approach to endowing Multimodal Large Language Models (MLLMs) with native pixel-level perception. Our method reframes segmentation as a simple sequence generation problem: the model directly predicts sequences of points (textual coordinates) delineating object boundaries, entirely within its language space. To achieve high fidelity, we introduce a two-stage SFT-to-RL training pipeline, in which Reinforcement Learning with an IoU-based reward refines the point sequences to accurately match ground-truth contours. We find that the standard MLLM architecture possesses a strong inherent capacity for low-level perception that can be unlocked without any specialized modules. On segmentation benchmarks, SimpleSeg achieves performance comparable to, and often surpassing, methods that rely on complex, task-specific designs. This work demonstrates that precise spatial understanding can emerge from simple point prediction, challenging the prevailing reliance on auxiliary components and paving the way for more unified and capable VLMs. Homepage: https://simpleseg.github.io/
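As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below assumes the model emits an object contour as a plain-text point sequence such as `(x1,y1) (x2,y2) ...`; it parses the points, rasterizes them into a mask, and scores them with an IoU against the ground-truth mask, which is the kind of quantity an IoU-based RL reward would use. The output format, function names, and rasterization choice here are all assumptions.

```python
# Hedged sketch: parse a textual point sequence and score it with an IoU reward.
# The "(x,y) (x,y) ..." output format is an assumption, not the paper's exact format.
import re
import numpy as np
from PIL import Image, ImageDraw

def parse_points(text):
    """Extract (x, y) integer coordinates from the model's text output."""
    return [(int(x), int(y)) for x, y in re.findall(r"\((\d+),\s*(\d+)\)", text)]

def rasterize_polygon(points, height, width):
    """Fill the polygon described by the point list into a binary mask."""
    canvas = Image.new("L", (width, height), 0)
    ImageDraw.Draw(canvas).polygon(points, outline=1, fill=1)
    return np.array(canvas, dtype=bool)

def iou_reward(text, gt_mask):
    """IoU between the rasterized predicted contour and the ground-truth mask."""
    points = parse_points(text)
    if len(points) < 3:                      # a valid polygon needs at least 3 vertices
        return 0.0
    pred_mask = rasterize_polygon(points, *gt_mask.shape)
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter / union) if union else 0.0

# Example: a square contour scored against a matching ground-truth mask.
gt = np.zeros((64, 64), dtype=bool)
gt[10:30, 10:30] = True
print(iou_reward("(10,10) (29,10) (29,29) (10,29)", gt))  # close to 1.0
```

In an SFT-to-RL setup of this kind, the reward would be computed per generated sequence and fed to a policy-gradient style update; the exact RL algorithm used by SimpleSeg is not specified in the abstract.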
Community
Project Page: https://simpleseg.github.io/
Github: https://github.com/songtianhui/SimpleSeg
HuggingFace: https://huggingface.co/collections/sthui/simpleseg
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Grounding Everything in Tokens for Multimodal Large Language Models (2025)
- DiG: Differential Grounding for Enhancing Fine-Grained Perception in Multimodal Large Language Model (2025)
- CoT4Det: A Chain-of-Thought Framework for Perception-Oriented Vision-Language Tasks (2025)
- STEP3-VL-10B Technical Report (2026)
- FishDetector-R1: Unified MLLM-Based Framework with Reinforcement Fine-Tuning for Weakly Supervised Fish Detection, Segmentation, and Counting (2025)
- GeM-VG: Towards Generalized Multi-image Visual Grounding with Multimodal Large Language Models (2026)
- VGent: Visual Grounding via Modular Design for Disentangling Reasoning and Prediction (2025)