---
title: OpenS2V Eval
emoji: π
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 5.31.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: A Detailed Benchmark for Subject-to-Video Generation
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/63468720dd6d90d82ccf3450/N9kKR052363-MYkJkmD2V.png
---
<div align="center">
<img src="https://github.com/PKU-YuanGroup/OpenS2V-Nexus/blob/main/__assets__/OpenS2V-Nexus_logo.png?raw=true" width="300px">
</div>
<h2 align="center"> <a href="https://pku-yuangroup.github.io/OpenS2V-Nexus/">OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>
## ✨ Summary
**OpenS2V-Eval** introduces 180 prompts spanning seven major subject-to-video (S2V) categories, incorporating both real and synthetic test data. Furthermore,
to accurately align S2V benchmarks with human preferences, we propose three automatic metrics, **NexusScore**, **NaturalScore**, and **GmeScore**,
which quantify subject consistency, naturalness, and text relevance in generated videos, respectively. Building on these, we conduct a comprehensive
evaluation of 18 representative S2V models, highlighting their strengths and weaknesses across different content.
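Each of the three metrics yields a per-video score. As a minimal sketch of how such scores might be combined into a single number for ranking models, the snippet below uses a hypothetical equal-weight average; this weighting is purely illustrative and is not the aggregation scheme defined by OpenS2V-Eval.

```python
def overall_score(nexus: float, natural: float, gme: float,
                  weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Combine per-video metric scores (assumed normalized to [0, 1]) into
    one scalar. Equal weights are a hypothetical choice for illustration,
    not the scheme used by OpenS2V-Eval."""
    w_nexus, w_natural, w_gme = weights
    return w_nexus * nexus + w_natural * natural + w_gme * gme

# Example: a video strong on text relevance but weak on subject consistency
print(round(overall_score(nexus=0.42, natural=0.78, gme=0.91), 3))  # → 0.703
```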
## 📣 Evaluate Your Own Models
To evaluate your own model with OpenS2V-Eval as described in the [OpenS2V-Nexus paper](https://huggingface.co/papers/2505.20292), please follow the instructions [here](https://github.com/PKU-YuanGroup/OpenS2V-Nexus/tree/main/eval).
## ⚙️ Get Videos Generated by Different S2V Models
For more details, please refer to [here](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval/tree/main/Results).
## 💡 Description
- **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
- **Paper:** [https://huggingface.co/papers/2505.20292](https://huggingface.co/papers/2505.20292)
- **Point of Contact:** [Shenghai Yuan](mailto:shyuan-cs@hotmail.com)
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving us a star and a citation.
```BibTeX
@article{yuan2025opens2v,
title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Luo, Jiebo and Yuan, Li},
journal={arXiv preprint arXiv:2505.20292},
year={2025}
}
```