|
--- |
|
license: mit |
|
task_categories: |
|
- question-answering |
|
- text-classification |
|
language: |
|
- en |
|
arxiv: 2501.14851 |
|
--- |
|
# JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models |
|
|
|
[[Paper]](https://arxiv.org/abs/2501.14851) [[Github]](https://github.com/michaelchen-lab/JustLogic) |
|
|
|
JustLogic is a deductive reasoning dataset that is:
|
|
|
1. highly complex, capable of generating a diverse range of linguistic patterns, vocabulary, and argument structures; |
|
2. prior knowledge independent, eliminating the advantage of models possessing prior knowledge and ensuring that only deductive reasoning is used to answer questions; and |
|
3. capable of supporting in-depth error analysis of the heterogeneous effects of reasoning depth and argument form on model accuracy.
|
|
|
## Dataset Format |
|
|
|
- `premises`: List of premises in the question, in the form of a Python list. |
|
- `paragraph`: A paragraph consisting of the above `premises`. This is given as input to models. |
|
- `conclusion`: The expected conclusion of the given premises. |
|
- `question`: The statement whose truth value models must determine.

- `label`: The ground-truth answer: `True` | `False` | `Uncertain`.

- `arg`: The argument structure of the question.

- `statements`: A mapping from symbols in `arg` to their corresponding natural language statements.

- `depth`: The argument depth of the given question.
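The fields above can be illustrated with a small sketch. The record below is invented for illustration only (it is not an actual row from JustLogic); it simply mirrors the schema, with a simple modus ponens argument at depth 1:

```python
# Hypothetical record following the JustLogic schema described above.
# All values are invented for illustration; consult the dataset for real rows.
record = {
    "premises": ["If it rains, the ground is wet.", "It rains."],
    "paragraph": "If it rains, the ground is wet. It rains.",
    "conclusion": "The ground is wet.",
    "question": "The ground is wet.",
    "label": "True",  # one of: True | False | Uncertain
    "arg": "p -> q; p; therefore q",  # modus ponens
    "statements": {"p": "It rains.", "q": "The ground is wet."},
    "depth": 1,
}

def is_valid_label(example: dict) -> bool:
    """Check that a record's label is one of the three allowed values."""
    return example["label"] in {"True", "False", "Uncertain"}

print(is_valid_label(record))  # → True
```

A model is given `paragraph` and `question` as input and must output one of the three `label` values; `arg`, `statements`, and `depth` are metadata useful for error analysis.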
|
|
|
## Dataset Construction |
|
|
|
JustLogic is a synthetically generated dataset. The script to construct your own dataset can be found in the [Github repo](https://github.com/michaelchen-lab/JustLogic). |
|
|
|
## Citation |
|
|
|
``` |
|
@article{chen2025justlogic, |
|
title={JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models}, |
|
author={Chen, Michael K and Zhang, Xikun and Tao, Dacheng}, |
|
journal={arXiv preprint arXiv:2501.14851}, |
|
year={2025} |
|
} |
|
``` |
|