# SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction
Official implementation of **SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction**
Zhixiong Zhang* ·
Shuangrui Ding* ·
Xiaoyi Dong† ·
Songxin He ·
Jianfan Lin ·
Junsong Tang ·
Yuhang Zang ·
Yuhang Cao ·
Dahua Lin ·
Jiaqi Wang†
## Demo Video
https://github.com/user-attachments/assets/40fbf928-5722-45e1-adae-adb70c1251f7
## News
- [2025/10/14] SeC is cited by [SAM 3](https://openreview.net/forum?id=r35clVtGzw) and used as a baseline!
- [2025/8/13] SeC sets a new state of the art on the latest MOSE v2 [leaderboard](https://www.codabench.org/competitions/10062/#/results-tab)!
- [2025/7/22] The [Paper](https://arxiv.org/abs/2507.15852) and [Project Page](https://rookiexiong7.github.io/projects/SeC/) are released!
## Highlights
- We introduce **Segment Concept (SeC)**, a **concept-driven** segmentation framework for **video object segmentation** that integrates **Large Vision-Language Models (LVLMs)** for robust, object-centric representations.
- SeC dynamically balances **semantic reasoning** with **feature matching**, adaptively adjusting computational effort based on **scene complexity** for optimal segmentation performance.
- We propose the **Semantic Complex Scenarios Video Object Segmentation (SeCVOS)** benchmark, designed to evaluate segmentation in challenging scenarios.
## SeC Performance
| Model | SA-V val | SA-V test | LVOS v2 val | MOSE val | DAVIS 2017 val | YTVOS 2019 val | SeCVOS |
| :------ | :------: | :------: | :------: | :------: | :------: | :------: | :------: |
| SAM 2.1 | 78.6 | 79.6 | 84.1 | 74.5 | 90.6 | 88.7 | 58.2 |
| SAMURAI | 79.8 | 80.0 | 84.2 | 72.6 | 89.9 | 88.3 | 62.2 |
| SAM2.1Long | 81.1 | 81.2 | 85.9 | 75.2 | 91.4 | 88.7 | 62.3 |
| **SeC (Ours)** | **82.7** | **81.7** | **86.5** | **75.3** | **91.3** | **88.6** | **70.0** |
## TODO
- [ ] Release SeC training code
- [x] Release SeCVOS benchmark annotations
- [x] Release SeC inference code and checkpoints
## Usage
### 1. Install environment and dependencies
Please make sure you are using the correct versions of `transformers` and `peft`.
```bash
conda create -n sec python=3.10
conda activate sec
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```
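Since mismatched `transformers`/`peft` versions are a common failure mode, a quick sanity check of the installed versions can save debugging time. The helper below is an illustrative sketch, not part of the repo; the pins shown are placeholders, so substitute the exact versions from `requirements.txt`:

```python
from importlib import metadata

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a version string like '2.5.1+cu121' into (2, 5, 1).

    Only leading digits of each dot-separated piece are kept; parsing
    stops at the first non-numeric piece (e.g. 'dev0' suffixes).
    """
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def check_pin(package: str, pinned: str) -> bool:
    """Return True if the installed version of `package` matches `pinned`."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        print(f"{package} is not installed")
        return False
    ok = parse_version(installed) == parse_version(pinned)
    print(f"{package}: installed {installed}, pinned {pinned} -> {'OK' if ok else 'MISMATCH'}")
    return ok

if __name__ == "__main__":
    # Placeholder pins -- read the real ones from requirements.txt.
    for pkg, pin in [("transformers", "0.0.0"), ("peft", "0.0.0")]:
        check_pin(pkg, pin)
```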
### 2. Download the Pretrained Checkpoints
Download the SeC checkpoint from [Hugging Face](https://huggingface.co/OpenIXCLab/SeC-4B) and place it in the following directory:
```
saved_models
├── SeC-4B
│   ├── config.json
│   ├── generation_config.json
...
```
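After downloading, a small script can verify that the checkpoint files landed in the right place before running inference. This is an illustrative sketch (the helper name and the file list are assumptions; extend the list with whatever the full checkpoint actually contains):

```python
from pathlib import Path

# Files we expect inside the checkpoint directory (illustrative subset).
EXPECTED_FILES = ["config.json", "generation_config.json"]

def missing_checkpoint_files(ckpt_dir: str) -> list[str]:
    """Return the expected files that are absent from ckpt_dir."""
    root = Path(ckpt_dir)
    return [name for name in EXPECTED_FILES if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_checkpoint_files("saved_models/SeC-4B")
    if missing:
        print("Missing files:", ", ".join(missing))
    else:
        print("Checkpoint directory looks complete.")
```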
### 3. Quick Start
If you want to test **SeC** inference on a single video, please refer to `demo.ipynb`.
### 4. Run the inference and evaluate the results
Inference instructions are in [INFERENCE.md](vos_evaluation/INFERENCE.md), and evaluation instructions are in [EVALUATE.md](vos_evaluation/EVALUATE.md). To evaluate performance on seen and unseen categories in the LVOS dataset, use the evaluation code available [here](https://github.com/LingyiHongfd/lvos-evaluation).
## Acknowledgments and License
This repository is licensed under the [Apache License 2.0](LICENSE).
This repo benefits from [SAM 2](https://github.com/facebookresearch/sam2), [SAM2Long](https://github.com/Mark12Ding/SAM2Long), and [Sa2VA](https://github.com/magic-research/Sa2VA). Thanks for their wonderful work.
## Citation
If you find our work helpful for your research, please consider giving it a star and a citation.
```bibtex
@article{zhang2025sec,
title = {SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction},
author = {Zhixiong Zhang and Shuangrui Ding and Xiaoyi Dong and Songxin He and Jianfan Lin and Junsong Tang and Yuhang Zang and Yuhang Cao and Dahua Lin and Jiaqi Wang},
journal = {arXiv preprint arXiv:2507.15852},
year = {2025}
}
```