---
license: cc-by-4.0
---
# SpokenVisIT
SpokenVisIT is a real-world visual-speech interaction benchmark built upon VisIT-Bench, designed to evaluate the vision-grounded speech interaction capabilities of omni large multimodal models (LMMs).
Our deepest acknowledgment goes to [VisIT-Bench](https://huggingface.co/datasets/mlfoundations/VisIT-Bench) (*A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use*), which collects a diverse set of real-world visual instructions. SpokenVisIT builds on this foundation by converting the textual instructions into spoken language, enabling the assessment of LMMs' capabilities in spoken interaction. **Please use SpokenVisIT under the license terms of VisIT-Bench.**
For more information on VisIT-Bench, please refer to the [paper](https://arxiv.org/abs/2308.06595), [blog](https://visit-bench.github.io/), and [code](https://github.com/mlfoundations/VisIT-Bench/).
For more information on SpokenVisIT, please refer to the [paper]() and [GitHub repo](https://github.com/ictnlp/Stream-Omni) of Stream-Omni.