---
license: mit
task_categories:
- question-answering
- text-classification
language:
- en
arxiv: 2501.14851
---
# JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models
[[Paper]](https://arxiv.org/abs/2501.14851) [[GitHub]](https://github.com/michaelchen-lab/JustLogic)
JustLogic is a deductive reasoning dataset that is
1. highly complex, capable of generating a diverse range of linguistic patterns, vocabulary, and argument structures;
2. prior knowledge independent, eliminating the advantage of models possessing prior knowledge and ensuring that only deductive reasoning is used to answer questions; and
3. capable of in-depth error analysis on the heterogeneous effects of reasoning depth and argument form on model accuracy.
## Dataset Format
- `premises`: List of premises in the question, in the form of a Python list.
- `paragraph`: A paragraph consisting of the above `premises`. This is given as input to models.
- `conclusion`: The expected conclusion of the given premises.
- `question`: The statement whose truth-value models must determine.
- `label`: True | False | Uncertain
- `arg`: The argument structure of the question.
- `statements`: A mapping from symbols in `arg` to their corresponding natural-language statements.
- `depth`: The argument depth of the given question.
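
For reference, these fields can be inspected with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repo id and split name are placeholders and may not match the actual hosting path.

```python
from datasets import load_dataset

# Placeholder repo id and split; substitute the actual Hugging Face path.
ds = load_dataset("michaelchen-lab/JustLogic", split="test")

example = ds[0]
print(example["paragraph"])   # premises as a single paragraph (model input)
print(example["question"])    # statement whose truth-value must be determined
print(example["label"])       # "True", "False", or "Uncertain"
print(example["depth"])       # argument depth of the question
```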
## Dataset Construction
JustLogic is a synthetically generated dataset. The script to construct your own dataset can be found in the [GitHub repo](https://github.com/michaelchen-lab/JustLogic).
## Citation
```
@article{chen2025justlogic,
title={JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models},
author={Chen, Michael K and Zhang, Xikun and Tao, Dacheng},
journal={arXiv preprint arXiv:2501.14851},
year={2025}
}
```