---
language:
  - en
license: mit
tags:
  - tables
  - benchmark
  - qa
  - llms
  - document-understanding
  - multimodal
pretty_name: Human Centric Tables Question Answering (HCTQA)
size_categories:
  - 10K<n<100K
task_categories:
  - question-answering
task_ids:
  - document-question-answering
  - visual-question-answering
annotations_creators:
  - expert-generated
configs:
  - config_name: default
    data_files:
      - split: train
        path: train.parquet
      - split: validation
        path: val.parquet
      - split: test
        path: test.parquet
dataset_info:
  - config_name: default
    features:
      - name: table_id
        dtype: string
      - name: table_csv_path
        dtype: string
      - name: table_image_url
        dtype: string
      - name: table_image_local_path
        dtype: string
      - name: table_csv_format
        dtype: string
      - name: table_properties
        dtype: string
      - name: question_id
        dtype: string
      - name: question
        dtype: string
      - name: question_template
        dtype: string
      - name: question_properties
        dtype: string
      - name: answer
        dtype: string
      - name: prompt
        dtype: string
      - name: prompt_without_system
        dtype: string
      - name: dataset_type
        dtype: string
    description: >
      Human Centric Tables Question Answering (HCTQA) is a benchmark for
      evaluating the performance of LLMs on question answering over complex
      real-world and synthetic tables. The dataset contains both real-world
      and synthetic tables with associated images, CSVs, and structured
      metadata. Questions span varying levels of complexity, requiring models
      to handle reasoning over complex structures, numeric aggregation, and
      context-dependent understanding. The `dataset_type` field indicates
      whether a sample comes from real-world data sources (`realWorldHCTs`)
      or was synthetically generated (`syntheticHCTs`).
---

# HCT-QA: Human-Centric Tables Question Answering

HCT-QA is a benchmark dataset designed to evaluate large language models (LLMs) on question answering over complex, human-centric tables (HCTs). These tables frequently appear in documents such as research papers, reports, and webpages, and their non-standard layouts and compositional structure make them challenging for traditional table-QA systems.

The dataset includes:

- 2,188 real-world tables with 9,835 human-annotated QA pairs
- 4,679 synthetic tables with 67,500 programmatically generated QA pairs
- Logical and structural metadata for each table and question

πŸ“„ Paper: [Title TBD]
The associated paper is currently under review and will be linked here once published.


πŸ“Š Dataset Splits

| Config    | Split | # Examples (Placeholder) |
|-----------|-------|--------------------------|
| RealWorld | Train | 7,500                    |
| RealWorld | Test  | 2,335                    |
| Synthetic | Train | 55,000                   |
| Synthetic | Test  | 12,500                   |

πŸ† Leaderboard

| Model Name | FT (Finetuned) | Recall | Precision |
|------------|----------------|--------|-----------|
| Model-A    | True           | 0.81   | 0.78      |
| Model-B    | False          | 0.64   | 0.61      |
| Model-C    | True           | 0.72   | 0.69      |

πŸ“Œ If you're evaluating on this dataset, open a pull request to update the leaderboard.


## Dataset Structure

Each entry in the dataset is a dictionary with the following structure:

### Sample Entry

```json
{
  "table_id": "arxiv--1--1118",
  "table_info": {
    "table_csv_path": "../tables/csvs/arxiv--1--1118.csv",
    "table_image_url": "https://hcsdtables.qcri.org/datasets/all_images/arxiv_1_1118.jpg",
    "table_image_local_path": "../tables/images/arxiv--1--1118.jpg",
    "table_properties": {
      "Standard Relational Table": true,
      "Row Nesting": false,
      "Column Aggregation": false,
      ...
    },
    "table_formats": {
      "csv": ",0,1,2\n0,Domain,Average Text Length,Aspects Identified\n1,Journalism,50,44\n..."
    }
  },
  "questions": [
    {
      "question_id": "arxiv--1--1118--M0",
      "question": "Report the Domain and the Average Text Length where the Aspects Identified equals 72",
      "gt": "{Psychology | 86} || {Linguistics | 90}",
      "question_properties": {
        "Row Filter": true,
        "Aggregation": false,
        "Returned Columns": true,
        ...
      }
    },
    ...
  ]
}
```
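The `csv` entry in `table_formats` stores the table as a plain CSV string, here with an extra index row and column as in the sample entry above. A minimal sketch of materializing it with the standard library, assuming that serialization:

```python
import csv
import io

# First lines of the CSV string from the sample entry above (the trailing
# "..." in the sample stands for the remaining rows and is omitted here)
csv_string = ",0,1,2\n0,Domain,Average Text Length,Aspects Identified\n1,Journalism,50,44"

rows = list(csv.reader(io.StringIO(csv_string)))
header = rows[1][1:]          # second row holds column names; first cell is an index
data = [r[1:] for r in rows[2:]]  # remaining rows hold the table body

print(header)  # → ['Domain', 'Average Text Length', 'Aspects Identified']
print(data)    # → [['Journalism', '50', '44']]
```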

## Ground Truth Format

A ground-truth answer is a set of result tuples: each tuple is wrapped in braces with its cell values separated by `|`, and multiple tuples are separated by `||`.

Example: `{value1 | value2} || {value3 | value4}` denotes two answer tuples of two values each.
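Based on the format above, a ground-truth string can be parsed into value tuples in a few lines of Python. This is a minimal sketch with a hypothetical `parse_gt` helper; any escaping rules for `|` or braces inside cell values are defined by the dataset authors:

```python
def parse_gt(gt: str):
    """Parse a ground-truth string like '{Psychology | 86} || {Linguistics | 90}'
    into a list of value tuples."""
    tuples = []
    for chunk in gt.split("||"):
        chunk = chunk.strip()
        # Strip the surrounding braces, then split cell values on '|'
        if chunk.startswith("{") and chunk.endswith("}"):
            chunk = chunk[1:-1]
        tuples.append(tuple(v.strip() for v in chunk.split("|")))
    return tuples

print(parse_gt("{Psychology | 86} || {Linguistics | 90}"))
# → [('Psychology', '86'), ('Linguistics', '90')]
```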

## Table Properties

For details on table and question properties, please see our paper.