---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
tags:
  - story
  - reasoning
  - script
  - surprisal
pretty_name: csk
size_categories:
  - n<1K
---

Dataset Card for Causality in Script Knowledge (CSK)

Hugging Face Dataset | Paper (*SEM 2024) | Code

CSK is a small, psycholinguistically controlled corpus for testing whether language models (and humans) integrate script knowledge when making causal inferences in narratives. It contains 21 short English stories about everyday activities (e.g., baking a cake, taking a bath), each realized in three causal conditions that manipulate the presence of a cause event A for a later event B:

  • A→B (control): the cause A is stated;
  • ¬A→B (failure): A is explicitly negated;
  • nil→B (target): A is omitted.

This yields 63 stimuli total (21 × 3). Each item is segmented into “before B”, “B”, and “after B” text chunks to mirror the human experiment design in the paper.

  • Language(s): English
  • License: Apache-2.0 (see the dataset page).

Dataset Details

Dataset Links

  • Paper (*SEM 2024): Do large language models and humans have similar behaviours in causal inference with script knowledge? (Hong, Ryzhova, Biondi, & Demberg, 2024)

Dataset Description

CSK provides controlled narrative stimuli for measuring how surprising event B is under different causal contexts. The human study showed longer reading times at B when A had been negated (¬A→B) relative to A→B, while nil→B behaved similarly to A→B, suggesting humans easily infer omitted but script-plausible causes. LLMs evaluated on the same materials often fail to mirror this nil vs. ¬A asymmetry.

  • Curated by: Xudong Hong, Margarita Ryzhova, Daniel Biondi, Vera Demberg
  • Funding: See acknowledgments in the paper.

Dataset Structure

The Hugging Face dataset exposes one split with 63 rows (21 stories × 3 conditions). File format: CSV. Fields:

  • item (int) — story ID (1–21).
  • condition type (str) — one of control, failure, or target, mapping to A→B, ¬A→B, and nil→B respectively.
  • before B (str) — context up to but not including the critical sentence B.
  • B (str) — the critical sentence whose surprisal/processing is measured.
  • after B (str) — continuation after B.

(Additional unnamed columns may appear from CSV export; they can be ignored.)
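For example, the flat rows can be pivoted into one record per story, with one full text per condition (a sketch assuming the dataset ID from the How to Load section and the field names above):

from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("tonyhong/CSK")["train"]

# item ID -> {condition type: full story text}
stories = defaultdict(dict)
for r in ds:
    full_text = " ".join([r["before B"], r["B"], r["after B"]])
    stories[r["item"]][r["condition type"]] = full_text

print(len(stories))        # 21 stories
print(sorted(stories[1]))  # ['control', 'failure', 'target']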

Splits

  • train: 63 rows (the full set).

Uses

Direct Use

  • Psycholinguistic evaluation of LMs: Compare model surprisal at B across A→B / ¬A→B / nil→B to test sensitivity to causal contingencies and script knowledge (see the sketch after this list).
  • Controlled narrative reasoning tests: Probe causal inference without lexical-overlap confounds (A and B are separated by intervening script steps).
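A minimal sketch of the surprisal comparison, using GPT-2 via transformers purely as an illustrative stand-in (the paper evaluates a range of LLMs, and the mean per-token aggregation here is an assumption, not necessarily the paper's exact metric):

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal_of_B(context: str, b: str) -> float:
    """Mean per-token surprisal (negative log-prob, in nats) of b given context."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    b_ids = tok(" " + b, return_tensors="pt").input_ids  # leading space for GPT-2 BPE
    input_ids = torch.cat([ctx_ids, b_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Predictions for the B tokens come from the logits at the positions
    # immediately before each B token.
    log_probs = torch.log_softmax(logits[0, ctx_ids.shape[1] - 1 : -1], dim=-1)
    token_lp = log_probs.gather(1, b_ids[0].unsqueeze(1)).squeeze(1)
    return -token_lp.mean().item()

ds = load_dataset("tonyhong/CSK")["train"]
by_condition = {}
for r in ds:
    by_condition.setdefault(r["condition type"], []).append(
        surprisal_of_B(r["before B"], r["B"])
    )
for cond, vals in by_condition.items():
    print(f"{cond}: mean surprisal at B = {sum(vals) / len(vals):.3f} nats")

If a model integrates script knowledge the way the human readers did, surprisal in the target (nil→B) condition should pattern with control (A→B) rather than with failure (¬A→B).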

Out-of-Scope Use

  • Training general-purpose language models (dataset is intentionally small).
  • Broad commonsense QA or open-domain reasoning benchmarks—CSK targets a very specific causal-inference phenomenon in scripted narratives.

Dataset Creation

Curation Rationale

To enable a clean, minimal test of whether LMs (and humans) integrate script knowledge when causes are omitted versus negated.

Source Data

Stories cover common scripts (e.g., baking, shopping, travel). Each story includes a script initiation, multiple intervening steps, and a critical event B that depends on a prior A (either stated, negated, or omitted).

Data Collection and Processing

The authors wrote the stories by transforming script event lists into narratives and segmenting them into three chunks (“before B”, “B”, “after B”). The B chunk is a single sentence in a fixed template, and the design controls for length/structure across conditions.
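A quick sanity check of this design is to confirm that every item appears exactly once per condition and that context lengths are comparable across conditions (a sketch; word counts are only a rough proxy for the controlled length/structure):

from collections import Counter
from datasets import load_dataset

ds = load_dataset("tonyhong/CSK")["train"]

# Each of the 21 items should appear once per condition: 63 unique pairs.
counts = Counter((r["item"], r["condition type"]) for r in ds)
assert len(counts) == 63 and all(v == 1 for v in counts.values())

# Compare average context lengths across conditions.
for cond in ("control", "failure", "target"):
    lens = [len(r["before B"].split()) for r in ds if r["condition type"] == cond]
    print(cond, round(sum(lens) / len(lens), 1), "words before B on average")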

Who are the source data producers?

Stories were authored by the paper’s authors as experimental stimuli; the paper also reports human reading-time experiments with native English speakers recruited via Prolific (human data are not included in this dataset).


Annotations

Annotation process

No conventional labeling; materials are authored stimuli. In the human study, participants also gave Likert-scale judgments about whether events A/B occurred (not included here).

Who are the annotators?

Not applicable (authored stimuli). Human participants in the paper were native English speakers recruited on Prolific.


Personal and Sensitive Information

Stories are fictional and do not include personal data. No sensitive information is collected or released.


Bias, Risks, and Limitations

  • Small size: 21 stories; results may not generalize beyond the tested scripts.
  • Language: English only.
  • Content scope: Everyday activities; minimal world knowledge coverage.
  • Intended for evaluation rather than training.

See the paper for the full discussion and empirical findings.


How to Load

from datasets import load_dataset

ds = load_dataset("tonyhong/CSK")  # {'train': Dataset}
row = ds["train"][0]
print(row.keys())  # dict_keys(['item','condition type','before B','B','after B', ...])

# Example: select the three condition variants of one story
story_id = 1
variants = [r for r in ds["train"] if r["item"] == story_id]
for v in variants:
    print(v["condition type"])
    print("--- before B ---")
    print(v["before B"])
    print("[B]")
    print(v["B"])
    print("--- after B ---")
    print(v["after B"])
    print()

Citation

Hong, X., Ryzhova, M., Biondi, D., & Demberg, V. (2024). Do large language models and humans have similar behaviours in causal inference with script knowledge? In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), 421–437. Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.starsem-1.34

BibTeX:

@inproceedings{hong-etal-2024-large,
  title = {Do large language models and humans have similar behaviours in causal inference with script knowledge?},
  author = {Hong, Xudong and Ryzhova, Margarita and Biondi, Daniel and Demberg, Vera},
  booktitle = {Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)},
  pages = {421--437},
  year = {2024},
  address = {Mexico City, Mexico},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2024.starsem-1.34/},
  doi = {10.18653/v1/2024.starsem-1.34}
}

Dataset Card Authors

Xudong Hong

Dataset Card Contact

xLASTNAME@lst.uni-saarland.de


Disclaimer

CSK consists of short, fictional narratives authored for controlled experiments. The dataset on Hugging Face contains only the story texts and condition labels (no human behavioral data). License and file layout follow the dataset page.


For more experimental details, model lists, and analysis code, see the paper and the linked repository.