---
license: mit
task_categories:
  - question-answering
  - text-generation
language:
  - en
tags:
  - science
  - physics
  - biology
  - chemistry
  - experimental-prediction
  - benchmark
size_categories:
  - n<1K
---

# SciPredict: Can LLMs Predict the Outcomes of Research Experiments?

**Paper:** SciPredict: Can LLMs Predict the Outcomes of Research Experiments in Natural Sciences?

## Overview

SciPredict is a benchmark for evaluating whether AI systems can predict experimental outcomes in physics, biology, and chemistry. The dataset comprises 405 questions derived from recently published empirical studies (all published after March 2025), spanning 33 subdomains.

## Dataset Structure

- **Total Questions:** 405 (5,716 rows including model responses)
- **Domains:** Physics (9 subdomains), Chemistry (10 subdomains), Biology (14 subdomains)
- **Question Formats:** Multiple-choice (MCQ), Free-Format, Numerical
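
A minimal loading sketch using the 🤗 `datasets` library; the repository id `utkarsh4430/SciPredict` and the single `train` split are assumptions and may differ from the actual Hub layout:

```python
from datasets import load_dataset

# Assumed repository id and split -- adjust if the Hub path differs.
ds = load_dataset("utkarsh4430/SciPredict", split="train")

print(len(ds))           # total number of rows
print(ds.column_names)   # available fields (see "Key Fields" below)
```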

## Key Fields

- `DOMAIN`: Scientific domain (Physics, Biology, Chemistry)
- `FIELD`: Specific field within the domain
- `PQ_FORMAT`: Question format (MCQ, Free-Format, Numerical)
- `TITLE`: Title of the source paper
- `URL`: URL of the source paper
- `PUBLISHING_DATE`: Publication date
- `EXPERIMENTAL_SETUP`: Description of the experimental configuration
- `MEASUREMENT_TAKEN`: What was measured in the experiment
- `OUTCOME_PREDICTION_QUESTION`: The outcome-prediction question posed to the model
- `GTA`: Ground-truth answer
- `BACKGROUND_KNOWLEDGE`: Expert-curated background knowledge
- `RELATED_PAPERS_DATA`: Information about related papers
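
Building on the loading sketch above, a small example of filtering by domain and question format and inspecting one row. The literal values `"Physics"` and `"Numerical"` follow the schema listed here, but their exact capitalization in the data is an assumption:

```python
# Filter to numerical-format physics questions using the fields listed above.
physics_numerical = ds.filter(
    lambda row: row["DOMAIN"] == "Physics" and row["PQ_FORMAT"] == "Numerical"
)

example = physics_numerical[0]
print(example["EXPERIMENTAL_SETUP"])           # experimental configuration
print(example["OUTCOME_PREDICTION_QUESTION"])  # the prediction task
print(example["GTA"])                          # ground-truth answer
```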

## Key Findings

- **Model accuracy:** 14-26%, compared with roughly 20% for human experts
- **Poor calibration:** Models cannot distinguish their reliable predictions from their unreliable ones
- **Background knowledge helps:** Providing expert-curated context improves performance
- **Format matters:** Accuracy degrades from MCQ to Free-Format to Numerical questions
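
To illustrate the with/without-background comparison behind the findings above, here is a hypothetical prompt-assembly helper. `build_prompt` is not the prompt format used in the paper, just an illustrative template over the listed fields:

```python
def build_prompt(row, include_background=True):
    """Assemble an outcome-prediction prompt from a SciPredict row.

    Illustrative template only; the paper's actual prompt may differ.
    """
    parts = [
        f"Experimental setup: {row['EXPERIMENTAL_SETUP']}",
        f"Measurement taken: {row['MEASUREMENT_TAKEN']}",
    ]
    if include_background and row.get("BACKGROUND_KNOWLEDGE"):
        parts.append(f"Background knowledge: {row['BACKGROUND_KNOWLEDGE']}")
    parts.append(f"Question: {row['OUTCOME_PREDICTION_QUESTION']}")
    return "\n\n".join(parts)

# Compare the two conditions from the "Background knowledge helps" finding.
with_background = build_prompt(example, include_background=True)
without_background = build_prompt(example, include_background=False)
```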