HC-Bench
HC-Bench is a compact multi-part image benchmark for evaluating recognition and prompting robustness, especially in hidden-content scenes. It contains:
- object/ — 56 base images and 56 hidden variants of the same lemmas, plus prompts and metadata.
- text/ — 56 English (Latin-script) and 56 Chinese lemma–description pairs, with 28 rendered PNGs per language.
- wild/ — 53 in-the-wild images for additional generalization checks.
Repository structure
```
HC-Bench/
├─ object/
│  ├─ base/                      # 56 base images (7 types × 8 lemmas)
│  ├─ hidden/                    # 56 hidden-content variants (same lemmas)
│  ├─ image_base.txt             # 7 types and their 8 lemmas each
│  ├─ image_generate_prompts.txt # per-lemma scene prompts used for generation
│  └─ lemmas_descriptions.json   # [{Type, Lemma, Description}] × 56
├─ text/
│  ├─ Latin/                     # 28 English PNGs
│  ├─ Chinese/                   # 28 Chinese PNGs
│  ├─ English_text.json          # 56 entries (Type, Length, Rarity, Lemma, Description)
│  └─ Chinese_text.json          # 56 entries (Type, Length, Rarity, Lemma, Description)
└─ wild/                         # 53 PNGs
```
Contents
object/
- `base/`: Canonical image per lemma (e.g., `Apple.jpg`, `Einstein.png`).
- `hidden/`: Composite/camouflaged image for the same lemma set (e.g., `apple.png`, `einstein.png`).
- `image_base.txt`: The 7 high-level types and their 8 lemmas each (Humans, Species, Buildings, Cartoon, Furniture, Transports, Food).
- `image_generate_prompts.txt`: Per-lemma prompts used to compose/generate scenes (e.g., "A monorail cutting through a futuristic city with elevated walkways" for `notredame`).
- `lemmas_descriptions.json`: Minimal metadata with `{Type, Lemma, Description}`, aligned 1:1 with the 56 lemmas.
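As a quick illustration, the metadata file can be loaded with the `datasets` JSON builder and tallied per type (a minimal sketch; the URL and the 7 × 8 breakdown follow the description above):

```python
from datasets import load_dataset

# Load the 56-entry metadata file (fields: Type, Lemma, Description).
lemmas = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/lemmas_descriptions.json",
    split="train",
).to_pandas()

# image_base.txt groups the 56 lemmas into 7 types of 8 lemmas each.
print(lemmas["Type"].value_counts())  # expected: 8 per type
print(len(lemmas))                    # expected: 56
```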
text/
- `Latin/` & `Chinese/`: 28 images each (56 total).
- `English_text.json` & `Chinese_text.json`: 56-entry lists pairing lemmas to descriptions in the two languages. (Note: both files include the extra fields `Length` and `Rarity` for flexibility.)
wild/
- 53 natural/urban scenes for robustness and transfer evaluation.
Quick start (🤗 Datasets)
HC-Bench uses the ImageFolder (`imagefolder`) format. Class labels are inferred from directory names when present (e.g., `base`, `hidden`). If you prefer raw images without labels, pass `drop_labels=True`.
Load object/base and object/hidden
```python
from datasets import load_dataset

base = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/base/*",
    split="train",
    drop_labels=True,  # skip automatic label inference
)
hidden = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/hidden/*",
    split="train",
    drop_labels=True,
)
```
Load wild/
```python
wild = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/wild/*",
    split="train",
    drop_labels=True,
)
```
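Each record exposes the decoded image under the `image` column; a minimal check, assuming the default non-streaming load above:

```python
# The "image" column holds decoded PIL images; .size is (width, height).
sample = wild[0]["image"]
print(type(sample), sample.size)
```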
Load the JSON metadata (English/Chinese)
```python
from datasets import load_dataset

en = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/text/English_text.json",
    split="train",
)
zh = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/text/Chinese_text.json",
    split="train",
)
```
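The resulting datasets carry the fields described earlier (Type, Length, Rarity, Lemma, Description); a short inspection sketch:

```python
from collections import Counter

print(en.column_names)   # the five fields listed above (order may vary)
print(len(en), len(zh))  # expected: 56 and 56

# Distribution of entries per Type (Type values come from the JSON itself).
print(Counter(en["Type"]))
```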
Docs reference: `load_dataset` for JSON and remote files, and ImageFolder for image datasets.
Pairing base/hidden with metadata
Filenames differ in casing/spacing between `base/` (`Apple.jpg`) and `hidden/` (`apple.png`). Use `object/lemmas_descriptions.json` as the canonical list of 56 lemmas and join by `Lemma`:
```python
import os
import re

import pandas as pd
from datasets import load_dataset

# 1) Canonical lemma list
lemmas = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/lemmas_descriptions.json",
    split="train",
).to_pandas()

# 2) Build (lemma -> image) maps, normalizing names to lowercase without spaces
def to_lemma(name):
    stem = os.path.splitext(os.path.basename(name))[0]
    return re.sub(r"\s+", "", stem).lower()

base_ds = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/base/*",
    split="train",
    drop_labels=True,
)
hidden_ds = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/hidden/*",
    split="train",
    drop_labels=True,
)

# Relies on each decoded PIL image keeping its source filename.
base_map = {to_lemma(x["image"].filename): x["image"] for x in base_ds}
hidden_map = {to_lemma(x["image"].filename): x["image"] for x in hidden_ds}

# 3) Join on the normalized lemma (same normalization on both sides)
lemmas["base_image"] = lemmas["Lemma"].apply(lambda L: base_map.get(to_lemma(L)))
lemmas["hidden_image"] = lemmas["Lemma"].apply(lambda L: hidden_map.get(to_lemma(L)))
```
Statistics
- `object/base`: 56 images
- `object/hidden`: 56 images
- `text/Latin`: 28 images
- `text/Chinese`: 28 images
- `wild/`: 53 images
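These counts can be re-checked against the objects loaded in the quick-start section (a small sketch, assuming `base`, `hidden`, and `wild` from above are in scope):

```python
# Expected sizes per the statistics above.
assert len(base) == 56
assert len(hidden) == 56
assert len(wild) == 53
```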
Citation
If you use HC-Bench, please cite:
```bibtex
@misc{li2025semvinkadvancingvlmssemantic,
  title={SemVink: Advancing VLMs' Semantic Understanding of Optical Illusions via Visual Global Thinking},
  author={Sifan Li and Yujun Cai and Yiwei Wang},
  year={2025},
  eprint={2506.02803},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.02803},
}
```