---
dataset_info:
  features:
    - name: correct_answer
      dtype: string
    - name: incorrect_answer
      dtype: string
    - name: object
      dtype: string
    - name: original_image
      dtype: image
    - name: counterfact_image
      dtype: image
  splits:
    - name: color
      num_bytes: 260676475.2330435
      num_examples: 573
    - name: size
      num_bytes: 121141676.58352403
      num_examples: 872
  download_size: 379711088
  dataset_size: 381818151.81656754
configs:
  - config_name: default
    data_files:
      - split: color
        path: data/color-*
      - split: size
        path: data/size-*
---

# Visual CounterFact: Controlling Knowledge Priors in Vision-Language Models through Visual Counterfactuals

This dataset accompanies the paper "Pixels Versus Priors: Controlling Knowledge Priors in Vision-Language Models through Visual Counterfacts".
📖 Read the Paper
💾 GitHub Repository

## Overview

Visual CounterFact is a novel dataset designed to investigate how Multimodal Large Language Models (MLLMs) balance memorized world knowledge priors (e.g., "strawberries are red") with the visual evidence present in input images (e.g., a blue strawberry). The dataset features visually realistic counterfactuals that create direct conflicts between what the model has learned and what it sees. This allows for studying and controlling whether model predictions rely on memorized priors or the actual image content.
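
The dataset can be loaded with the 🤗 Datasets library. A minimal sketch, assuming the repository id below (inferred from this card's location on the Hub) is correct; substitute the actual path if it differs:

```python
from datasets import load_dataset

# Repo id is an assumption based on this card's location -- replace with the
# dataset's actual Hub path if it differs.
ds = load_dataset("mgolov/Visual-Counterfact")

color = ds["color"]  # 573 examples
size = ds["size"]    # 872 examples

ex = color[0]
print(ex["object"], ex["correct_answer"], ex["incorrect_answer"])
ex["counterfact_image"].save("counterfact.png")  # image fields decode to PIL images
```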

## Dataset Splits

The dataset contains two distinct splits, each corresponding to a specific visual attribute reasoning task:

### Color (`color`)

- **Description**: Contains images of objects where the color attribute is either consistent with common world knowledge or a counterfactual color designed to contradict it (e.g., a blue strawberry).
- **Purpose**: Evaluate how models reconcile conflicting color information between prior knowledge and visual input.
- **Example Queries**:
  - "What color is this strawberry?"
  - "What color are most strawberries?"

### Size (`size`)

- **Description**: Consists of object images with size relations manipulated to contradict typical real-world size expectations (e.g., a fly larger than a strawberry).
- **Purpose**: Test model understanding of size priors versus visual evidence.
- **Example Queries** (paired as in the sketch after this list):
  - "Which object is bigger in this image, the fly or the strawberry?"
  - "Are strawberries bigger than flies?"

## Citation

If you use this dataset, please cite:

*Pixels Versus Priors: Controlling Knowledge Priors in Vision-Language Models through Visual Counterfacts*