|
# PIQA

### Paper

Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`

Abstract: https://arxiv.org/abs/1911.11641
|
|
|
Physical Interaction: Question Answering (PIQA) is a physical commonsense
reasoning task and a corresponding benchmark dataset. PIQA was designed to
investigate the physical knowledge of existing models. To what extent are
current approaches actually learning about the world?

Homepage: https://yonatanbisk.com/piqa/
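
To make the task format concrete, here is a minimal, hedged sketch of loading and inspecting one PIQA example. It assumes the Hugging Face `datasets` copy of the benchmark; the `ybisk/piqa` dataset id and the `goal`/`sol1`/`sol2`/`label` field names are assumptions drawn from that copy, not something this README specifies:

```python
# Hedged sketch: peek at one PIQA example via Hugging Face `datasets`.
# The dataset id "ybisk/piqa" and the field names are assumptions here.
from datasets import load_dataset

piqa = load_dataset("ybisk/piqa", split="validation")
example = piqa[0]

print(example["goal"])   # a physical goal, e.g. an everyday how-to situation
print(example["sol1"])   # first candidate solution
print(example["sol2"])   # second candidate solution
print(example["label"])  # 0 or 1: index of the physically sensible solution
```

Each instance pairs a single goal with two candidate solutions, and a model is scored on picking the physically sensible one.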
|
|
|
### Citation

```
@inproceedings{Bisk2020,
    author = {Yonatan Bisk and Rowan Zellers and Ronan Le Bras and Jianfeng Gao and Yejin Choi},
    title = {PIQA: Reasoning about Physical Commonsense in Natural Language},
    booktitle = {Thirty-Fourth AAAI Conference on Artificial Intelligence},
    year = {2020},
}
```
|
|
|
### Groups and Tasks

#### Groups

* Not part of a group yet.

#### Tasks

* `piqa`
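
As a usage illustration, here is a minimal, hedged sketch of running this task programmatically. It assumes this README lives in EleutherAI's lm-evaluation-harness and that a recent version exposing `simple_evaluate` at the package top level is installed; the `hf` backend and `pretrained=gpt2` model argument are illustrative choices only:

```python
# Hedged sketch: score a Hugging Face model on the `piqa` task.
# Assumes a recent lm-evaluation-harness that exports `simple_evaluate`.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face transformers backend
    model_args="pretrained=gpt2",  # illustrative model; substitute your own
    tasks=["piqa"],
    num_fewshot=0,                 # zero-shot evaluation
)
print(results["results"]["piqa"])  # per-task metrics, e.g. accuracy
```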
|
|
|
### Checklist

For adding novel benchmarks/datasets to the library:

* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
|
|
|
|
|
If other tasks on this dataset are already supported:

* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
|
|