Update README.md

path: test*
---

# Pixel-Navigator: A Multimodal Localization Benchmark for Web-Navigation Models

We introduce Pixel-Navigator, a high-quality benchmark dataset for evaluating the navigation and localization capabilities of multimodal models and agents in web environments. Pixel-Navigator features 1,639 English-language web screenshots paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely used ScreenSpot benchmark.

## Design Goals and Use Case

Pixel-Navigator is designed to measure and advance the ability of AI systems to understand web interfaces, interpret user instructions, and take accurate actions within digital environments. The dataset contains three distinct groups of web screenshots that capture a range of real-world navigation scenarios, from agent-based web retrieval to human tasks like online shopping and calendar management.

On a more technical level, this benchmark is intended for assessing multimodal models on their ability to navigate web interfaces, evaluating AI agents' understanding of UI elements and their functions, and testing models' abilities to ground natural language instructions to specific interactive elements.

## Technical Details: High-Quality Annotations and Natural-Language Instructions

A key strength of this benchmark is its meticulous annotation: all bounding boxes correspond precisely to HTML element boundaries, ensuring rigorous evaluation of model performance. Each screenshot is paired with natural language instructions that simulate realistic navigation requests, requiring models not only to understand UI elements but also to interpret contextual relationships between visual elements.
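To make "rigorous evaluation" concrete, the sketch below shows the grounding check these annotations enable: a prediction counts as correct when the predicted click point falls inside the annotated box. It assumes a ScreenSpot-style relative `[x_min, y_min, x_max, y_max]` box layout; confirm the exact coordinate convention against the released files.

```python
# Minimal sketch of the grounding check implied by the bbox annotations.
# Assumption: bbox is [x_min, y_min, x_max, y_max], relative to the image size.
def click_hit(click_xy, bbox):
    """Return True if a predicted (x, y) click lands inside the target box."""
    x, y = click_xy
    x_min, y_min, x_max, y_max = bbox
    return x_min <= x <= x_max and y_min <= y <= y_max

# Example: a click near the top centre of the page against a header button.
print(click_hit((0.50, 0.04), [0.42, 0.01, 0.58, 0.07]))  # True
```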
## Dataset Structure

The dataset contains 1,639 samples divided into three key groups: `agentbrowse`, `humanbrowse`, and `calendars`.
Each sample consists of:
- **`image`**: A screenshot of a web page
- **`instruction`**: A natural language instruction describing the desired action
- **`bbox`**: Coordinates of the bounding box (relative to the image dimensions) that identify the correct click target, such as an input field or a button
- **`bucket`**: One of `agentbrowse`, `humanbrowse`, or `calendars`, indicating the group this row belongs to (see the loading sketch after this list)
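The loading sketch below shows how these fields could be accessed with the Hugging Face `datasets` library. The repository id `hcompany/pixel-navigator` is a placeholder (the actual repository URL is not filled in on this card), and the `test` split name is assumed from the `path: test*` entry in the configs.

```python
# Minimal sketch: load Pixel-Navigator and inspect one sample.
# NOTE: "hcompany/pixel-navigator" is a placeholder repository id, and the
# "test" split is assumed from the `path: test*` config entry above.
from datasets import load_dataset

ds = load_dataset("hcompany/pixel-navigator", split="test")

sample = ds[0]
print(sample["instruction"])  # natural-language action to perform
print(sample["bbox"])         # click target, relative to the image dimensions
print(sample["bucket"])       # agentbrowse, humanbrowse, or calendars

# Restrict evaluation to a single group, e.g. the calendar bucket.
calendars = ds.filter(lambda row: row["bucket"] == "calendars")
print(len(calendars))
```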
The dataset includes several challenging scenarios:
### Curation Rationale

Pixel-Navigator focuses on realism by capturing authentic interactions: actions taken by both humans and agents.

The records of Pixel-Navigator are English-language, desktop-size screenshots of 100+ websites. Each record points to an element outlined by a rectangular bounding box and an intent corresponding to it. In particular, the dataset focuses on providing bounding boxes and intents that are unambiguous, which increases the trustworthiness of evaluating a VLM on this data.

The calendar segment specifically targets known failure points in current systems, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.

With this new benchmark, H Company aims to unlock new capabilities in VLMs and stimulate the progress of web agents.
### Examples

[comment]: # (Link to presentation with images https://docs.google.com/presentation/d/1NQGq75Ao_r-4GF8WCyK0BRPCdvkjzxIE2xP9ttV5UcM/edit#slide=id.g358e1dac3df_0_60)

#### UI Understanding

## Results of Popular Models

*INSERT TABLE HERE
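A hedged sketch of how such a table could be produced from the fields above: group samples by `bucket` and report the share of predictions whose click point lands inside the ground-truth box. `predict_click` is a hypothetical stand-in for the model under evaluation and is not provided by the dataset; the bbox layout assumption is the same as in the earlier sketch.

```python
from collections import defaultdict

def evaluate(ds, predict_click):
    """Per-bucket grounding accuracy: fraction of clicks inside the target box.

    `predict_click(image, instruction)` is a hypothetical model wrapper that
    returns a relative (x, y) click point; it is not part of the dataset.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for row in ds:
        x, y = predict_click(row["image"], row["instruction"])
        x_min, y_min, x_max, y_max = row["bbox"]  # assumed relative corners
        inside = x_min <= x <= x_max and y_min <= y <= y_max
        hits[row["bucket"]] += int(inside)
        totals[row["bucket"]] += 1
    return {bucket: hits[bucket] / totals[bucket] for bucket in totals}
```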
### Annotations

Annotations were created by UI experts with specialized knowledge of web interfaces. Each screenshot was paired with a natural language instruction describing an intended action, and a bounding box precisely matching HTML element boundaries.

All labels were hand-written or hand-reviewed. Instructions were rewritten when needed to contain only unambiguous intents rather than visual descriptions. Screenshots were manually reviewed to avoid any personal information, with any identifiable data removed or anonymized.
### License

- **Curated by:** H Company
- **Language:** English
- **License:** Apache 2.0

### Dataset Sources

- **Repository:** [Hugging Face Repository URL]
- **Paper:** [Coming soon]
## Citation

research@hcompany.ai