# CoQA

### Paper

Title: `CoQA: A Conversational Question Answering Challenge`

Abstract: https://arxiv.org/pdf/1808.07042.pdf

CoQA is a large-scale dataset for building Conversational Question Answering
systems. The goal of the CoQA challenge is to measure the ability of machines to
understand a text passage and answer a series of interconnected questions that
appear in a conversation.

Homepage: https://stanfordnlp.github.io/coqa/
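
For a quick look at the conversational format the task evaluates, the sketch below loads the dataset and prints one passage with its question/answer turns. It assumes the data is available on the Hugging Face Hub under the `stanfordnlp/coqa` identifier with `story`, `questions`, and `answers` fields; the exact source used by the harness is defined in this task's YAML config.

```python
# Minimal sketch: inspect CoQA's shared-passage, multi-turn QA format.
# Assumes the Hub dataset "stanfordnlp/coqa" with "story", "questions",
# and "answers" columns; check the task YAML for the source the harness
# actually uses.
from datasets import load_dataset

dataset = load_dataset("stanfordnlp/coqa", split="validation")

example = dataset[0]
print(example["story"][:300], "...")  # the passage shared by all turns

# Each example carries a series of interconnected questions and answers.
for question, answer in zip(example["questions"], example["answers"]["input_text"]):
    print(f"Q: {question}")
    print(f"A: {answer}")
```
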
### Citation

```
@misc{reddy2018coqa,
    title={CoQA: A Conversational Question Answering Challenge},
    author={Siva Reddy and Danqi Chen and Christopher D. Manning},
    year={2018},
    eprint={1808.07042},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
### Groups and Tasks

#### Groups

* Not part of a group yet

#### Tasks

* `coqa`
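
A minimal sketch of running the task through the harness's Python API, assuming a recent `lm-eval` release that exposes `simple_evaluate`; the model below is only a placeholder, and argument names may differ across versions:

```python
# Minimal sketch: evaluate a model on the coqa task via lm-eval's
# Python API. The pretrained model is a placeholder; any HF causal LM
# identifier should work.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["coqa"],
    batch_size=8,
)
print(results["results"]["coqa"])
```

The CLI entry point should run the same evaluation, e.g. `lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks coqa`.
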
### Checklist

For adding novel benchmarks/datasets to the library:

* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?