---
language:
- en
license: apache-2.0
task_categories:
- visual-document-retrieval
dataset_info:
  features:
  - name: image
    dtype: image
  - name: instruction
    dtype: string
  - name: bbox
    sequence: float64
  - name: bucket
    dtype: string
  splits:
  - name: test
    num_bytes: 334903619
    num_examples: 1639
  download_size: 334903619
  dataset_size: 334903619
configs:
- config_name: default
  data_files:
  - split: test
    path: test*
---
# WebClick: A Multimodal Localization Benchmark for Web-Navigation Models
We introduce WebClick, a high-quality benchmark dataset for evaluating the navigation and localization capabilities of multimodal models and agents in web environments. WebClick features 1,639 English-language web screenshots from over 100 websites, paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely used Screenspot benchmark.
## Design Goals and Use Case
WebClick is designed to measure and advance the ability of AI systems to understand web interfaces, interpret user instructions, and take accurate actions within digital environments. The dataset contains three distinct groups of web screenshots that capture a range of real-world navigation scenarios, from agent-based web retrieval to human tasks like online shopping and calendar management.
On a more technical level, this benchmark is intended for assessing multimodal models on their ability to navigate web interfaces, evaluating AI agents' understanding of UI elements and their functions, and testing models' abilities to ground natural language instructions to specific interactive elements.
Project page: https://www.surferh.com
## Dataset Structure
The dataset contains 1,639 samples divided into three key groups:
1. **`agentbrowse` (36%)**: Pages encountered by the SurferH agent while solving web retrieval tasks from [WebVoyager](https://arxiv.org/abs/2401.13919)
2. **`humanbrowse` (31.8%)**: Pages and elements interacted with by humans performing everyday tasks (e-shopping, trip planning, personal organization)
3. **`calendars` (32.2%)**: A specialized subset focusing on calendar interfaces, a known challenge for UI understanding models
Each sample consists of:
- **`image`**: A screenshot of a web page
- **`instruction`**: A natural language instruction describing the desired action
- **`bbox`**: Coordinates of the bounding box (relative to the image dimensions) that identify the correct click target, such as an input field or a button
- **`bucket`**: The group this sample belongs to, one of `agentbrowse`, `humanbrowse`, or `calendars`
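For reference, the sketch below shows one way to load and inspect a sample with the Hugging Face `datasets` library. It assumes the default configuration and `test` split declared in the metadata above; the repository id `Hcompany/WebClick` is an assumption and should be adjusted to wherever this card is hosted.
```python
from datasets import load_dataset

# Repository id is an assumption; replace it with the actual id of this dataset.
ds = load_dataset("Hcompany/WebClick", split="test")

sample = ds[0]
print(sample["instruction"])  # natural-language instruction, e.g. "Access user account settings"
print(sample["bbox"])         # bounding box coordinates relative to the image dimensions
print(sample["bucket"])       # one of "agentbrowse", "humanbrowse", "calendars"
sample["image"].save("example.png")  # the screenshot is decoded as a PIL image
```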
The dataset includes several challenging scenarios:
- Disambiguation between similar elements (e.g., "the login button in the middle", "the login button in the top-right")
- Cases where OCR is insufficient because the visible text isn't the interactive element
- Navigation requiring understanding of relative spatial relationships between information and interaction points
## Dataset Creation: High Quality Annotations and NLP Instructions
A key strength of this benchmark is its meticulous annotation: all bounding boxes correspond precisely to HTML element boundaries, ensuring rigorous evaluation of model performance. Each screenshot is paired with natural language instructions that simulate realistic navigation requests, requiring models to not only understand UI elements but also interpret contextual relationships between visual elements.
### Curation Rationale
WebClick focuses on realism by capturing authentic interactions: actions taken by humans and agents.
The records of WebClick are English-language, desktop-sized screenshots of 100+ websites. Each record pairs an element, outlined by a rectangular bounding box, with a corresponding intent. In particular, the dataset focuses on providing bounding boxes and intents that are unambiguous, which increases the trustworthiness of evaluating a VLM on this data.
### Challenging Examples for UI Element Selection
With this new benchmark, H Company aims to unlock new capabilities in VLMs, and stimulate the progress of web agents.
Our dataset includes examples that go beyond standard object detection or OCR, requiring genuine **UI understanding** and **instruction-based visual reasoning**. These examples highlight failure points in current models and test capabilities critical for real-world interaction with user interfaces, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.
### Key Challenges Captured in the Benchmark
- **UI Understanding**
Tasks require comprehension of common UI conventions (e.g., icons, labels, layout). For instance, identifying the correct user settings button may involve recognizing a gear icon, or adding a specific product to a cart might require interpreting both imagery and adjacent labels. State-of-the-art models often fail at such tasks due to lack of contextual or semantic UI awareness.
- **Instruction-Based Disambiguation**
Some instructions describe objects based on spatial position, appearance, or intent (e.g., "middle of the page", "green button"). These tasks require combining the textual instruction with visual reasoning, a challenge most models do not yet handle robustly.
- **Calendar Navigation**
Even frontier models struggle to interact with calendar widgets. Understanding which dates are available (e.g., not grayed out or marked unavailable) is a frequent failure case, demonstrating gaps in dynamic UI interpretation.
- **Format and Location Sensitivity**
Instructions that rely on regional formats—like time (“18:45”) or date representations—test the model’s resilience to location-specific variations. Models trained on culturally homogeneous data often perform poorly here.
### Example Tasks
| **Category** | **Instruction** |
|------------------------|------------------------------------------------|
| UI Understanding | Access user account settings |
| UI Understanding | Add Insignia cable to cart |
| UI Understanding | Pick the first available date |
| Format Understanding | Choose 18:45 |
| UI Disambiguation | Green button to create a travel alert |
| UI Disambiguation | Log in button (middle of the page) |
| UI Disambiguation | Select fifth image in gallery |
| Calendar Understanding | Select Aug 7th |
## Results of Popular Models
To put our benchmark into context, we evaluate a set of popular pre-trained models on WebClick alongside the widely used Screenspot [1] and Screenspot V2 [2] benchmarks.
The table shows that most models score lower on WebClick than on either Screenspot benchmark, indicating that WebClick is the more challenging task. We also find that WebClick provides a better signal of downstream performance for agentic applications of a model.
| **Model** | **WebClick (ours)** | **Screenspot** | **Screenspot V2** |
|-------------------------------|----------------------------|------------|---------------|
| osunlp/UGround-V1-2B [3] | 71.69% | 77.12% | 79.31% |
| osunlp/UGround-V1-7B [3] | 82.37% | 85.69% | 84.26% |
| Qwen/Qwen2.5-VL-3B-Instruct [4] | 71.15% | 82.78% | 84.34% |
| Qwen/Qwen2.5-VL-7B-Instruct [4] | 74.37% | 85.53% | 88.04% |
| ByteDance-Seed/UI-TARS-2B-SFT [5] | 64.23% | 66.82% | 69.39% |
| ByteDance-Seed/UI-TARS-7B-DPO [5] | 80.67% | 84.20% | 86.70% |
| Holo1-3B | 81.50% | 86.01% | 87.33% |
| Holo1-7B | 84.03% | 87.42% | 89.85% |
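For context, grounding accuracy in Screenspot-style benchmarks is typically computed by checking whether the model's predicted click point falls inside the ground-truth bounding box. The sketch below illustrates such a check under the assumption that `bbox` stores `[x_min, y_min, x_max, y_max]` relative to the image dimensions and that predictions are pixel coordinates; the exact coordinate layout should be verified against the data.
```python
def click_in_bbox(click_xy, bbox, image_size):
    """Check whether a predicted click lands inside the ground-truth box.

    click_xy:   (x, y) predicted click, in pixels
    bbox:       [x_min, y_min, x_max, y_max], relative to image size (assumed layout)
    image_size: (width, height) of the screenshot, in pixels
    """
    x, y = click_xy
    width, height = image_size
    x_min, y_min, x_max, y_max = bbox
    return x_min * width <= x <= x_max * width and y_min * height <= y <= y_max * height

def grounding_accuracy(predictions, dataset):
    # predictions: one (x, y) pixel coordinate per dataset row, produced by the model under test
    hits = sum(
        click_in_bbox(pred, row["bbox"], row["image"].size)
        for pred, row in zip(predictions, dataset)
    )
    return hits / len(dataset)
```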
### Annotations
Annotations were created by UI experts with specialized knowledge of web interfaces. Each screenshot was paired with a natural language instruction describing an intended action, and a bounding box precisely matching HTML element boundaries.
All labels were hand-written or hand-reviewed. Instructions were rewritten when needed so that they express unambiguous intents rather than purely visual descriptions. Screenshots were manually reviewed for personal information, and any identifiable data was removed or anonymized.
### License
- **Curated by:** H Company
- **Language:** English
- **License:** Apache 2.0
### Dataset Sources
- **Paper:** https://arxiv.org/abs/2506.02865
## Citation
[1] SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents
Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug. 2024
[2] OS-ATLAS: A Foundation Action Model for Generalist GUI Agents
Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, Yu Qiao
arXiv preprint arXiv:2410.23218 (2024)
[3] Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents
Boyu Gou and Ruohan Wang and Boyuan Zheng and Yanan Xie and Cheng Chang and Yiheng Shu and Huan Sun and Yu Su
The Thirteenth International Conference on Learning Representations (2025)
[4] Qwen2.5-VL Technical Report
Qwen Team
arXiv preprint arXiv:2502.13923 (2025)
[5] UI-TARS: Pioneering Automated GUI Interaction with Native Agents
Yujia Qin, Yining Ye, Junjie Fang, Haoming Wang, Shihao Liang, Shizuo Tian, Junda Zhang, Jiahao Li, Yunxin Li, Shijue Huang, Wanjun Zhong, Kuanye Li, Jiale Yang, Yu Miao, Woyu Lin, Longxiang Liu, Xu Jiang, Qianli Ma, Jingyu Li, Xiaojun Xiao, Kai Cai, Chuang Li, Yaowei Zheng, Chaolin Jin, Chen Li, Xiao Zhou, Minchao Wang, Haoli Chen, Zhaojian Li, Haihua Yang, Haifeng Liu, Feng Lin, Tao Peng, Xin Liu, Guang Shi
arXiv:2501.12326 (2025)
**BibTeX:**
```
@dataset{hcompany2025uinavigate,
author = {H Company Research Team},
title = {WebClick: A Multimodal Localization Benchmark for Web-Navigation Models},
year = {2025},
publisher = {H Company},
}
@misc{andreux2025surferhmeetsholo1costefficient,
title={Surfer-H Meets Holo1: Cost-Efficient Web Agent Powered by Open Weights},
author={Mathieu Andreux and Breno Baldas Skuk and Hamza Benchekroun and Emilien Biré and Antoine Bonnet and Riaz Bordie and Matthias Brunel and Pierre-Louis Cedoz and Antoine Chassang and Mickaël Chen and Alexandra D. Constantinou and Antoine d'Andigné and Hubert de La Jonquière and Aurélien Delfosse and Ludovic Denoyer and Alexis Deprez and Augustin Derupti and Michael Eickenberg and Mathïs Federico and Charles Kantor and Xavier Koegler and Yann Labbé and Matthew C. H. Lee and Erwan Le Jumeau de Kergaradec and Amir Mahla and Avshalom Manevich and Adrien Maret and Charles Masson and Rafaël Maurin and Arturo Mena and Philippe Modard and Axel Moyal and Axel Nguyen Kerbel and Julien Revelle and Mats L. Richter and María Santos and Laurent Sifre and Maxime Theillard and Marc Thibault and Louis Thiry and Léo Tronchon and Nicolas Usunier and Tony Wu},
year={2025},
eprint={2506.02865},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.02865},
}
```
## Dataset Card Contact
research@hcompany.ai