---
license: apache-2.0
pipeline_tag: mask-generation
base_model:
  - OpenGVLab/InternVL2.5-4B
  - facebook/sam2.1-hiera-large
tags:
  - SeC
---

# SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction

[\[📂 GitHub\]](https://github.com/OpenIXCLab/SeC)
[\[📦 Benchmark\]](https://huggingface.co/datasets/OpenIXCLab/SeCVOS)
[\[🌐 Homepage\]](https://rookiexiong7.github.io/projects/SeC/)
[\[📄 Paper\]](https://arxiv.org/abs/2507.15852)

## Highlights

- 🔥 We introduce **Segment Concept (SeC)**, a **concept-driven** framework for **video object segmentation** that integrates **Large Vision-Language Models (LVLMs)** to build robust, object-centric representations.
- 🔥 SeC dynamically balances **semantic reasoning** with **feature matching**, adaptively adjusting computational effort to **scene complexity** for strong segmentation performance (see the sketch below).
- 🔥 We propose the **Semantic Complex Scenarios Video Object Segmentation (SeCVOS)** benchmark, designed to evaluate segmentation in semantically challenging scenarios.
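
Conceptually, the adaptive balance in the second bullet can be read as a small decision loop: cheap feature matching handles ordinary frames, and the LVLM concept path is invoked only when the scene changes enough to warrant it. The sketch below illustrates that control flow only; every class and method name in it is hypothetical rather than the released SeC API, and the actual inference entry points live in the GitHub repository linked above.

```python
# Illustrative sketch only: ConceptMemory, matcher.propagate, and
# lvlm.segment_with_concept are hypothetical names, not the released SeC API.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class ConceptMemory:
    """Object-centric concept built progressively from selected keyframes."""
    keyframes: list = field(default_factory=list)

    def update(self, frame, mask):
        self.keyframes.append((frame, mask))


def scene_change_score(prev_frame, frame):
    """Crude placeholder for a scene-complexity signal:
    mean absolute pixel difference, normalized to [0, 1]."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean() / 255.0)


def segment_video(frames, first_mask, matcher, lvlm, threshold=0.5):
    """Track one object through a clip: cheap feature matching by default,
    LVLM-guided concept reasoning only when the scene changes substantially."""
    memory = ConceptMemory()
    memory.update(frames[0], first_mask)
    masks = [first_mask]
    for prev, frame in zip(frames, frames[1:]):
        if scene_change_score(prev, frame) > threshold:
            # Hard case (e.g. a shot change): reason over the accumulated concept.
            mask = lvlm.segment_with_concept(frame, memory)
        else:
            # Easy case: SAM-2-style propagation from the previous mask.
            mask = matcher.propagate(frame, masks[-1])
        memory.update(frame, mask)
        masks.append(mask)
    return masks
```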

## SeC Performance

| Model | SA-V val | SA-V test | LVOS v2 val | MOSE val | DAVIS 2017 val | YTVOS 2019 val | SeCVOS |
| :------ | :------: | :------: | :------: | :------: | :------: | :------: | :------: |
| SAM 2.1 | 78.6 | 79.6 | 84.1 | 74.5 | 90.6 | 88.7 | 58.2 |
| SAMURAI | 79.8 | 80.0 | 84.2 | 72.6 | 89.9 | 88.3 | 62.2 |
| SAM2.1Long | 81.1 | 81.2 | 85.9 | 75.2 | 91.4 | 88.7 | 62.3 |
| **SeC (Ours)** | **82.7** | **81.7** | **86.5** | **75.3** | **91.3** | **88.6** | **70.0** |
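
To experiment with the checkpoint, the weights can be fetched with `huggingface_hub`; the repo id below is assumed from this card, and the inference entry points themselves are documented in the GitHub repository linked above.

```python
# Download the SeC weights locally (repo id assumed from this model card).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="OpenIXCLab/SeC")
print(f"Checkpoint downloaded to: {local_dir}")
```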

---
## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{zhang2025sec,
  title     = {SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction},
  author    = {Zhixiong Zhang and Shuangrui Ding and Xiaoyi Dong and Songxin He and Jianfan Lin and Junsong Tang and Yuhang Zang and Yuhang Cao and Dahua Lin and Jiaqi Wang},
  journal   = {arXiv preprint arXiv:2507.15852},
  year      = {2025}
}
```