---
dataset_info:
  features:
  - name: pun
    dtype: string
  - name: prefix
    dtype: string
  - name: definition
    dtype: string
  - name: answer
    sequence: string
  - name: phonetic
    dtype: int64
  - name: realistic
    dtype: int64
  - name: typology
    sequence: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: main
    num_bytes: 49417
    num_examples: 350
  - name: contaminated
    num_bytes: 2642
    num_examples: 20
  - name: few_shot
    num_bytes: 1382
    num_examples: 10
  download_size: 37114
  dataset_size: 53441
configs:
- config_name: default
  data_files:
  - split: main
    path: data/main-*
  - split: contaminated
    path: data/contaminated-*
  - split: few_shot
    path: data/few_shot-*
license: mit
task_categories:
- question-answering
language:
- en
---

# Phunny: A Humor-Based QA Benchmark for Evaluating LLM Generalization

Welcome to **Phunny**, a humor-based question answering (QA) benchmark designed to evaluate the reasoning and generalization abilities of large language models (LLMs) through structured puns.

This repository accompanies our **ACL 2025 main track paper**:  
["What do you call a dog that is incontrovertibly true? Dogma: Testing LLM Generalization through Humor"](https://aclanthology.org/2025.acl-long.1117.pdf)

To reproduce our experiments, see the [code on GitHub](https://github.com/disi-unibo-nlp/Phunny).

## Overview

**Phunny** consists of 350 novel, manually curated structured puns, created through a two-stage process: creative human design followed by automated contamination checks to ensure novelty.

All puns follow the same structure:
```
What do you call a X that Y? XZ
```

- **X** is a prefix (the initial subword of XZ)
- **Y** is a natural-language definition of the answer XZ
- **XZ** is the humorous pun answer, which starts with the prefix X

For example:

> What do you call a dog that is incontrovertibly true? **Dogma**  
> → “Dog” (X) + “dogma” (XZ), where “dogma” means a set of incontrovertible truths.
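
To make the structural constraint concrete, here is a minimal sketch (not from the paper's codebase) that checks whether an answer respects the prefix rule, using the `prefix` and `answer` fields described below:

```python
def follows_phunny_structure(prefix: str, answer: str) -> bool:
    """Surface check for a Phunny pun: the answer XZ must start with the prefix X."""
    return answer.lower().startswith(prefix.lower())

# Example from above: "dog" + "dogma"
assert follows_phunny_structure("dog", "dogma")
```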

We define three tasks to evaluate different aspects of LLM capabilities:

- **Pun Comprehension**  
  Can an LLM distinguish between coherent and nonsensical puns?

- **Pun Resolution**  
  Can an LLM infer the correct punchline from the question? (See the sketch after this list.)

- **Pun Generation**  
  Can an LLM produce novel Phunny-style puns? We test this in two modes:  
  - *Free*: unconstrained generation  
  - *Constrained*: generation based on a provided prefix X
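
As a rough illustration of the Resolution setup (the exact prompts used in the paper may differ), a question can be assembled from the `prefix` and `definition` fields and the template above:

```python
def resolution_question(prefix: str, definition: str) -> str:
    """Build a Phunny-style question; the model is expected to supply the punchline XZ."""
    return f"What do you call a {prefix} that {definition}?"

# Assuming definitions are phrased as in the example above
print(resolution_question("dog", "is incontrovertibly true"))
# What do you call a dog that is incontrovertibly true?
```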

     
## Data Fields

- `pun`: the complete pun (question and answer)
- `prefix`: the prefix X (the subject of the question)
- `definition`: the definition Y describing the answer
- `answer`: the punchline XZ
- `phonetic`: whether the punchline starts with the same pronunciation as the prefix (phonetic correlation)
- `realistic`: whether the pun refers to a real concept
- `typology`: the part(s) of speech of the prefix (noun, adjective, or verb)
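
For reference, the record schema (from the `dataset_info` block above) can be mirrored as a lightweight type. This is only a sketch for downstream code; the extra `__index_level_0__` column, a leftover row index, is omitted:

```python
from typing import TypedDict

class PhunnyRecord(TypedDict):
    """One row of Phunny, mirroring the dataset_info schema."""
    pun: str             # complete pun (question and answer)
    prefix: str          # X, the subject of the question
    definition: str      # Y, the definition describing the answer
    answer: list[str]    # accepted punchline(s) XZ
    phonetic: int        # flag: punchline shares the prefix's pronunciation
    realistic: int       # flag: the pun refers to a real concept
    typology: list[str]  # part(s) of speech of the prefix (noun, adjective, verb)
```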

## Data Splits

This dataset has 3 splits: _Main_, _Contaminated_, and _Few-shot_.

| Dataset Split | Number of Instances | Content |
| ------------- | ------------------- | ------- |
| Main          | 350                 | Puns used in our experiments to evaluate LLMs |
| Contaminated  | 20                  | Phunny-like puns already present on the web (excluded from our evaluation) |
| Few-shot      | 10                  | Puns used as in-context examples for the Resolution and Generation tasks |
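
The splits can be loaded with the `datasets` library; the repository ID below is an assumption, so replace it with this dataset's actual ID on the Hub:

```python
from datasets import load_dataset

# Assumed repository ID; adjust to the actual Hub ID of this dataset.
ds = load_dataset("disi-unibo-nlp/Phunny")

# Sizes should match the table above.
for split in ("main", "contaminated", "few_shot"):
    print(split, len(ds[split]))

example = ds["main"][0]
print(example["pun"], example["answer"])
```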

## Citation
```
@inproceedings{cocchieri-etal-2025-call,
    title = "``What do you call a dog that is incontrovertibly true? Dogma'': Testing {LLM} Generalization through Humor",
    author = "Cocchieri, Alessio  and
      Ragazzi, Luca  and
      Italiani, Paolo  and
      Tagliavini, Giuseppe  and
      Moro, Gianluca",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.1117/",
    doi = "10.18653/v1/2025.acl-long.1117",
    pages = "22922--22937",
    ISBN = "979-8-89176-251-0",
    abstract = "Humor, requiring creativity and contextual understanding, is a hallmark of human intelligence, showcasing adaptability across linguistic scenarios. While recent advances in large language models (LLMs) demonstrate strong reasoning on various benchmarks, it remains unclear whether they truly adapt to new tasks like humans (i.e., generalize) or merely replicate memorized content. To explore this, we introduce Phunny, a new humor-based question-answering benchmark designed to assess LLMs' reasoning through carefully crafted puns. Our dataset is manually curated to ensure novelty and minimize data contamination, providing a robust evaluation of LLMs' linguistic comprehension. Experiments on pun comprehension, resolution, and generation reveal that most LLMs struggle with generalization, even on simple tasks, consistently underperforming the human baseline. Additionally, our detailed error analysis provides valuable insights to guide future research."
}
```