Hokin committed (verified)
Commit a05b6ae · 1 Parent(s): 1dc2094

Upload Hegarty dataset with 2x3 design structure
README.md ADDED
@@ -0,0 +1,132 @@
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - visual-question-answering
+ - multiple-choice
+ language:
+ - en
+ pretty_name: Hegarty Dataset
+ size_categories:
+ - 10K<n<100K
+ tags:
+ - spatial-reasoning
+ - perspective-taking
+ - visual-reasoning
+ - cognitive-science
+ - vision-language
+ configs:
+ - config_name: lab_visual_access
+   description: Lab setting - Visual Access Reasoning (Can you see X from viewpoint Y?)
+ - config_name: lab_mental_reasoning
+   description: Lab setting - Mental Reasoning (Spatial relationships between objects)
+ - config_name: lab_perspective_taking
+   description: Lab setting - Perspective-Taking Reasoning (What can another observer see?)
+ - config_name: real_visual_access
+   description: Real-world setting - Visual Access Reasoning
+ - config_name: real_mental_reasoning
+   description: Real-world setting - Mental Reasoning
+ - config_name: real_perspective_taking
+   description: Real-world setting - Perspective-Taking Reasoning
+ ---
+
+ # Hegarty Dataset
+
+ ## 🏔️ Overview
+
+ The **Hegarty Dataset** is a comprehensive benchmark for evaluating spatial reasoning and perspective-taking abilities in vision-language models. Inspired by Piaget's classic Three Mountains Task, this dataset provides a systematic evaluation across three cognitive dimensions in both controlled lab and real-world settings.
+
+ ## 📊 Dataset Structure
+
+ ### 2×3 Design
+ - **2 Settings**: Lab (controlled) and Real-world
+ - **3 Task Types**:
+   - **Visual Access Reasoning**: Determining object visibility from different viewpoints
+   - **Mental Reasoning**: Understanding spatial relationships between objects
+   - **Perspective-Taking Reasoning**: Understanding what others can see from their viewpoint
+
+ ### Dataset Statistics
+
+ | Setting | Task Type | Samples | Description |
+ |---------|-----------|---------|-------------|
+ | Lab | Visual Access | ~1,000 | Controlled geometric scenarios |
+ | Lab | Mental Reasoning | ~500 | Spatial relationship tasks |
+ | Lab | Perspective-Taking | ~300 | Other-perspective understanding |
+ | Real | Visual Access | ~2,000 | Real-world ego-exo scenarios |
+ | Real | Mental Reasoning | ~2,000 | Natural spatial reasoning |
+ | Real | Perspective-Taking | ~2,000 | Real-world perspective-taking |
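+
+ The six configurations follow a `{setting}_{task}` naming pattern. As a quick sanity check, the sketch below (a suggestion, not part of any official tooling) enumerates all six configs via the `Hokin/hegarty` repo id used in the Usage section and tallies row counts against the table above; it makes no assumption about split names.
+
+ ```python
+ from datasets import load_dataset
+
+ SETTINGS = ["lab", "real"]
+ TASKS = ["visual_access", "mental_reasoning", "perspective_taking"]
+
+ for setting in SETTINGS:
+     for task in TASKS:
+         config = f"{setting}_{task}"
+         ds = load_dataset("Hokin/hegarty", config)  # returns a DatasetDict
+         total = sum(len(split) for split in ds.values())  # sum over all available splits
+         print(f"{config}: {total} examples")
+ ```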
+
+ ## 🎯 Task Descriptions
+
+ ### Visual Access Reasoning (formerly Perspective-Taking L1)
+ - **Question**: "Can you see object X from viewpoint Y?"
+ - **Format**: Binary (Yes/No) or Multiple Choice
+ - **Cognitive Ability**: Basic visual perspective-taking
+
+ ### Mental Reasoning (formerly Spatiality)
+ - **Question**: "What is the spatial relationship between objects?"
+ - **Format**: Multiple Choice (4 options)
+ - **Cognitive Ability**: Spatial reasoning and mental rotation
+
+ ### Perspective-Taking Reasoning (formerly Perspective-Taking L2)
+ - **Question**: "What can the observer at position X see?"
+ - **Format**: Multiple Choice (4 options)
+ - **Cognitive Ability**: Theory of mind and advanced perspective-taking
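+
+ Across all three tasks, an item reduces to a question about an image answered either yes/no or as a four-way multiple choice. The sketch below shows one way to format such an item as a text prompt and score a model's reply; the field names (`question`, `options`, `answer`) are illustrative assumptions rather than the documented schema, so check each config's `features` for the actual column names.
+
+ ```python
+ def build_prompt(example: dict) -> str:
+     """Format a multiple-choice item as a plain-text prompt.
+     NOTE: the field names used here are assumptions for illustration only.
+     """
+     lines = [example["question"]]
+     for letter, option in zip("ABCD", example["options"]):
+         lines.append(f"{letter}. {option}")
+     lines.append("Answer with a single letter.")
+     return "\n".join(lines)
+
+
+ def is_correct(prediction: str, example: dict) -> bool:
+     """Exact-match scoring on the answer letter (assumed `answer` field)."""
+     return prediction.strip().upper().startswith(example["answer"].strip().upper())
+ ```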
+
+ ## 💡 Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load specific configuration
+ dataset = load_dataset("Hokin/hegarty", "lab_visual_access")
+
+ # Load all lab settings
+ lab_data = {
+     'visual_access': load_dataset("Hokin/hegarty", "lab_visual_access"),
+     'mental_reasoning': load_dataset("Hokin/hegarty", "lab_mental_reasoning"),
+     'perspective_taking': load_dataset("Hokin/hegarty", "lab_perspective_taking")
+ }
+
+ # Load all real-world settings
+ real_data = {
+     'visual_access': load_dataset("Hokin/hegarty", "real_visual_access"),
+     'mental_reasoning': load_dataset("Hokin/hegarty", "real_mental_reasoning"),
+     'perspective_taking': load_dataset("Hokin/hegarty", "real_perspective_taking")
+ }
+ ```
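+
+ Since the column schema is not documented here, it is worth inspecting a configuration before writing an evaluation loop. A minimal sketch that prints the available splits, features, and a first example without assuming split or column names:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("Hokin/hegarty", "lab_visual_access")
+ for split_name, split in ds.items():
+     print(split_name, len(split))
+     print(split.features)  # column names and types
+     print(split[0])        # one raw example
+     break
+ ```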
+
+ ## 📈 Model Performance
+
+ Current state-of-the-art models show significant gaps compared to human performance:
+ - **Humans**: 80-93% accuracy across all tasks
+ - **Best AI Models**: 30-55% accuracy
+ - **Key Finding**: Models perform better on visual access than on mental reasoning or perspective-taking
+
+ ## 🔗 Related Resources
+
+ - **Paper**: [Coming Soon]
+ - **GitHub**: [grow-ai-like-a-child/hegarty](https://github.com/grow-ai-like-a-child/hegarty)
+ - **Leaderboard**: [Coming Soon]
+
+ ## 📝 Citation
+
+ ```bibtex
+ @dataset{three_mountain_2024,
+   title={Hegarty: A Comprehensive Benchmark for Spatial Reasoning and Perspective-Taking},
+   author={Hokin Deng and Contributors},
+   year={2024},
+   publisher={HuggingFace}
+ }
+ ```
+
+ ## 📄 License
+
+ This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
+
+ ## 🤝 Contributing
+
+ We welcome contributions! Please see our GitHub repository for guidelines.
+
+ ## 🏷️ Tags
+
+ `spatial-reasoning` `perspective-taking` `visual-reasoning` `cognitive-science` `vision-language` `benchmark` `evaluation`
lab_mental_reasoning.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bf593464e407c9933df9f57d74a8f8f5152dd82dffcd674afad606b84ac3e17
+ size 100887513
lab_perspective_taking.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5b63a07c806a57f0aedb7b2925e104e3695dd97c9466d1acf0d6e5de15dc60e
+ size 6769823
lab_visual_access.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8f122e7902f2341dede176e003c249dc36565ba6bef07ad8faeb3af5e3dd10f
+ size 38719856
real_mental_reasoning.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96b478e0547800fdd84b32fe8644d2991d612fc81aa139597a6ae7a957174c78
+ size 335283266
real_visual_access.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c15f0c52e4f061bd8d717a605e54fdfa50b0a7d0b31eb47f8a04e2d67b58d32
+ size 235691390