---
task_categories:
- image-segmentation
license: cc-by-nc-4.0
tags:
- reasoning
- reinforcement-learning
- zero-shot
dataset_info:
  features:
  - name: id
    dtype: string
  - name: problem
    dtype: string
  - name: solution
    dtype: string
  - name: image
    dtype: image
  - name: img_height
    dtype: int64
  - name: img_width
    dtype: int64
  splits:
  - name: train
    num_bytes: 1872591792.0
    num_examples: 2000
  download_size: 1090458784
  dataset_size: 1872591792.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Seg-Zero Dataset: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement

This repository hosts the training dataset introduced in the paper [Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://huggingface.co/papers/2503.06520).

## Abstract

Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting their out-of-domain generalization and lacking explicit reasoning processes. To address these limitations, we propose Seg-Zero, a novel framework that demonstrates remarkable generalizability and derives explicit chain-of-thought reasoning through cognitive reinforcement. Seg-Zero introduces a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate precise pixel-level masks. We design a sophisticated reward mechanism that integrates both format and accuracy rewards to effectively guide optimization directions. Trained exclusively via reinforcement learning with GRPO and without explicit reasoning data, Seg-Zero achieves robust zero-shot generalization and exhibits emergent test-time reasoning capabilities.
Experiments show that Seg-Zero-7B achieves a zero-shot performance of 57.5 on the ReasonSeg benchmark, surpassing the prior LISA-7B by 18%. This significant improvement highlights Seg-Zero's ability to generalize across domains while presenting an explicit reasoning process.

## Code and Project Links

* **Paper Link:** [https://huggingface.co/papers/2503.06520](https://huggingface.co/papers/2503.06520)
* **Code Repository:** [https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)

## Dataset Description

This dataset is designed for training and evaluating models on reasoning-chain guided image segmentation tasks. It contains `2000` examples in the `train` split, with each entry comprising:

- `id`: Unique identifier for the sample.
- `problem`: The reasoning problem or question.
- `solution`: The explicit reasoning chain or solution.
- `image`: The input image.
- `img_height`: Height of the image.
- `img_width`: Width of the image.

## Key Features of Seg-Zero (Associated Framework)

This dataset supports the Seg-Zero framework, which demonstrates the following key features:

1. **Emergent Test-Time Reasoning**: Seg-Zero exhibits emergent test-time reasoning ability. It generates a reasoning chain before producing the final segmentation mask.
2. **Reinforcement Learning Only**: Seg-Zero is trained exclusively using reinforcement learning, without any explicit supervised reasoning data.
3. **Superior Generalization**: Compared to supervised fine-tuning, Seg-Zero achieves superior performance on both in-domain and out-of-domain data.
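Because the reasoning model emits positional prompts in pixel coordinates, the `img_height` and `img_width` fields are handy for mapping predictions made on a resized input back to the original resolution. The snippet below is a minimal sketch of that coordinate mapping, assuming a simple uniform resize; the actual preprocessing used by Seg-Zero is defined in the code repository, and the `scale_point` helper and the 840x840 processed size are illustrative assumptions, not part of the dataset itself.

```python
def scale_point(x, y, proc_w, proc_h, img_w, img_h):
    """Map a point predicted on a resized (proc_w x proc_h) input
    back to the original (img_w x img_h) pixel grid."""
    return x * img_w / proc_w, y * img_h / proc_h

# A point predicted at (420, 210) on a hypothetical 840x840 processed image,
# for an original image recorded as img_width=1920, img_height=1080:
print(scale_point(420, 210, 840, 840, 1920, 1080))  # (960.0, 270.0)
```

The same scaling applies per-coordinate to bounding-box corners before they are passed to the segmentation model.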
## Sample Usage

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the training split of the Seg-Zero dataset
dataset = load_dataset("Ricky06662/Seg-Zero", split="train")

# Access the first example
print(dataset[0])

# Example of accessing the image and problem statement
print(f"Problem: {dataset[0]['problem']}")
dataset[0]['image'].save("first_image.png")
print("First image saved as first_image.png")
```

## Citation

If you find this dataset or the associated work useful, please cite the paper:

```bibtex
@article{liu2025segzero,
  title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
  author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2503.06520},
  year    = {2025}
}
```