DeskVision: Large Scale Desktop Region Captioning for Advanced GUI Agents
[💻Code][📝Paper] [🤗Models][🤗Data]

🔥🔥🔥 We have open-sourced GUIExplorer, our self-developed multimodal GUI understanding model built on the LLaVA OneVision 7B architecture. It provides basic GUI visual understanding capabilities, including region-level OCR, grounding, and single-step instruction execution. For details on how to train and use the model, please refer to the [💻Code].
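Since GUIExplorer follows the LLaVA OneVision architecture, inference can likely be sketched with the standard `transformers` LLaVA-OneVision classes. The model id, prompt format, and helper names below are assumptions for illustration, not the authors' official instructions (see the [💻Code] for those):

```python
# Hedged sketch: running a grounding / instruction query against a GUI
# screenshot with the LLaVA-OneVision classes from Hugging Face transformers.
# The model id passed to run_grounding() is a placeholder assumption.

def build_conversation(instruction: str) -> list:
    """Build a single-turn chat message in the LLaVA-OneVision chat format:
    one user turn with an image placeholder followed by the text instruction."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": instruction},
            ],
        }
    ]


def run_grounding(model_id: str, screenshot_path: str, instruction: str) -> str:
    """Load the model and answer one instruction about one screenshot.
    Heavy dependencies are imported lazily so the sketch stays importable
    without a GPU environment."""
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaOnevisionForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    image = Image.open(screenshot_path)
    prompt = processor.apply_chat_template(
        build_conversation(instruction), add_generation_prompt=True
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return processor.decode(out[0], skip_special_tokens=True)
```

A typical call would look like `run_grounding("<model-id>", "screenshot.png", "Where is the Save button?")`, where `<model-id>` is whatever path or hub id the official repository specifies.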
Citation
If you use GUIExplorer in your research, please cite our [📝Paper]:
@misc{xu2025deskvisionlargescaledesktop,
      title={DeskVision: Large Scale Desktop Region Captioning for Advanced GUI Agents},
      author={Yibin Xu and Liang Yang and Hao Chen and Hua Wang and Zhi Chen and Yaohua Tang},
      year={2025},
      eprint={2503.11170},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.11170},
}