|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- text-generation |
|
language: |
|
- en |
|
size_categories: |
|
- 1B<n<10B |
|
pretty_name: ProofLang Corpus |
|
dataset_info: |
|
- config_name: proofs |
|
num_bytes: 3197091800 |
|
num_examples: 3681901 |
|
features: |
|
- name: fileID |
|
dtype: string |
|
- name: proof |
|
dtype: string |
|
|
|
- config_name: sentences |
|
num_bytes: 3736579062 |
|
num_examples: 38899130 |
|
features: |
|
- name: fileID |
|
dtype: string |
|
- name: sentence |
|
dtype: string |
|
|
|
download_size: 6933683563 |
|
dataset_size: 6933670862 |
|
--- |
|
# Dataset Card for the ProofLang Corpus |
|
|
|
## Dataset Summary |
|
|
|
The ProofLang Corpus includes 3.7M proofs (558 million words) mechanically extracted from papers that were posted on [arXiv.org](https://arXiv.org) between 1992 and April 2022.
|
The focus of this corpus is on the proofs themselves, rather than the explanatory text that surrounds them, and more specifically on the *language* used in such proofs.
|
Specific mathematical content is filtered out, resulting in sentences such as `Let MATH be the restriction of MATH to MATH.` |
|
|
|
This dataset reflects how people prefer to write (non-formalized) proofs, and is also amenable to statistical analyses and experiments with Natural Language Processing (NLP) techniques. |
|
We hope it can serve as an aid in the development of language-based proof assistants and proof checkers for professional and educational purposes. |
|
|
|
## Dataset Structure |
|
|
|
There are multiple TSV versions of the data. Primarily, `proofs` divides up the data proof-by-proof, and `sentences` further divides up the same data sentence-by-sentence. |
|
The `raw` dataset is a less-cleaned-up version of `proofs`. More usefully, the `tags` dataset gives arXiv subject tags for each paper ID found in the other data files. |
|
|
|
* The data in `proofs` (and `raw`) consists of a `fileID` (an arXiv paper ID identifying where the proof was extracted from) and the `proof` as a string.
|
|
|
* The data in `sentences` consists of a `fileID` and the `sentence` as a string.
|
|
|
* The data in `tags` consists of a `fileID` and the arXiv subject tags for that paper as a single comma-separated string (see the sketch after this list).
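
Because the tags arrive as one comma-separated string, consumers will typically split that string. A minimal sketch (streaming a sample, and assuming the field names described above) that tallies the most common subject tags:

```python
from collections import Counter

from datasets import load_dataset

# Stream the `tags` config and count how often each arXiv subject tag
# appears among the first few thousand papers.
tags = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
counts = Counter()
for d in tags.take(2000):
    counts.update(d['tags'].split(','))
print(counts.most_common(5))
```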
|
|
|
Further metadata about papers can be queried from arXiv.org using the paper ID. |
|
|
|
In particular, each paper `<id>` in the dataset can be accessed online at the URL `https://arxiv.org/abs/<id>`.
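
For example, here is a small sketch (assuming each `fileID` is a standard arXiv identifier; the paper ID shown is hypothetical) that builds the abstract URL and fetches machine-readable metadata from the arXiv export API:

```python
import urllib.request

def abstract_url(file_id: str) -> str:
    # Each paper's abstract page lives at https://arxiv.org/abs/<id>.
    return f'https://arxiv.org/abs/{file_id}'

def fetch_metadata(file_id: str) -> str:
    # The arXiv export API returns an Atom feed with the title, authors, etc.
    url = f'http://export.arxiv.org/api/query?id_list={file_id}'
    with urllib.request.urlopen(url) as response:
        return response.read().decode('utf-8')

print(abstract_url('1234.56789'))  # hypothetical paper ID
```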
|
|
|
## Dataset Size |
|
|
|
* `proofs` is 3,094,779,182 bytes (unzipped) and has 3,681,893 examples. |
|
* `sentences` is 3,545,309,822 bytes (unzipped) and has 38,899,132 examples. |
|
* `tags` is 7,967,839 bytes (unzipped) and has 328,642 rows. |
|
* `raw` is 3,178,997,379 bytes (unzipped) and has 3,681,903 examples. |
|
|
|
## Dataset Statistics |
|
|
|
* The average length of `sentences` is 14.1 words. |
|
|
|
* The average length of `proofs` is 10.5 sentences. |
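
The averages above can be spot-checked on a streamed sample; a minimal sketch (counting words by whitespace splitting, which only approximates whatever tokenization produced the official figures):

```python
from datasets import load_dataset

# Estimate the average sentence length (in words) from a streamed sample.
sentences = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
total_words = 0
count = 0
for d in sentences.take(100_000):
    total_words += len(d['sentence'].split())
    count += 1
print('Average words per sentence (sample):', total_words / count)
```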
|
|
|
## Dataset Usage |
|
|
|
Data can be downloaded as (zipped) TSV files. |
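
For example, the files can be listed and fetched with the `huggingface_hub` library (a sketch; the exact file names are whatever the repository contains):

```python
from huggingface_hub import hf_hub_download, list_repo_files

# List the files in the dataset repository.
files = list_repo_files('proofcheck/prooflang', repo_type='dataset')
print(files)

# Pick the file you want from the listing; the index here is arbitrary.
local_path = hf_hub_download('proofcheck/prooflang', files[0], repo_type='dataset')
print('Downloaded to', local_path)
```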
|
|
|
Accessing the data programmatically from Python is also possible using the `datasets` library.
|
For example, to print the first 10 proofs: |
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in dataset.take(10):
    print(d['fileID'], d['proof'])
```
|
|
|
To look at individual sentences from the proofs, |
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
for d in dataset.take(10):
    print(d['fileID'], d['sentence'])
```
|
|
|
To get a comma-separated list of arXiv subject tags for each paper, |
|
```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
for d in dataset.take(10):
    print(d['fileID'], d['tags'])
```
|
|
|
Finally, to look at a version of the proofs with less aggressive cleanup (straight from the LaTeX extraction), |
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'raw', split='train', streaming=True)
for d in dataset.take(10):
    print(d['fileID'], d['proof'])
```
|
|
|
|
|
### Data Splits |
|
|
|
There is currently no train/test split; all the data is in `train`. |
|
|
|
|
|
## Dataset Creation |
|
|
|
We started with the LaTeX source of 1.6M papers that were submitted to [arXiv.org](https://arXiv.org) between 1992 and April 2022. |
|
|
|
The proofs were extracted using a Python script that simulates parts of LaTeX (including defining and expanding macros).

It does no actual typesetting; it discards everything that does not appear between `\begin{proof}` and `\end{proof}`, and it skips over math content. During extraction (a code sketch follows this list),
|
|
|
* Math-mode formulas (signalled by `$`, `\begin{equation}`, etc.) become `MATH` |
|
* `\ref{...}` and variants (`\autoref`, `\subref`, etc.) become `REF`
|
* `\cite{...}` and variants (`\Citet`, `\shortciteNP`, etc.) become `CITE` |
|
* Words that appear to be proper names become `NAME` |
|
* `\item` becomes `CASE:` |
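
As a rough illustration of these substitutions (simplified regexes only, not the actual extraction script, which works by simulating LaTeX):

```python
import re

# Simplified stand-ins for the substitution rules listed above.
PATTERNS = [
    (re.compile(r'\$[^$]*\$'), 'MATH'),                                # inline math
    (re.compile(r'\\(?:auto|sub)?ref\{[^}]*\}'), 'REF'),               # \ref and variants
    (re.compile(r'\\[a-zA-Z]*cite[a-zA-Z]*\{[^}]*\}', re.I), 'CITE'),  # \cite and variants
    (re.compile(r'\\item\b'), 'CASE:'),                                # list items
]

def simplify(latex: str) -> str:
    for pattern, placeholder in PATTERNS:
        latex = pattern.sub(placeholder, latex)
    return latex

print(simplify(r'By \ref{thm:main} and $x^2 \ge 0$, the claim follows \cite{knuth84}.'))
# -> By REF and MATH, the claim follows CITE.
```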
|
|
|
We then ran a cleanup pass over the extracted proofs that included the following steps; a code sketch follows the list.
|
|
|
* Cleaning up common extraction errors (e.g., due to uninterpreted macros) |
|
* Replacing more references with `REF`, e.g., `Theorem 2(a)` or `Postulate (*)`
|
* Replacing more citations with `CITE`, e.g., `Page 47 of CITE` |
|
* Replacing more proof-case markers with `CASE:`, e.g., `Case (a).` |
|
* Fixing a few common misspellings |
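
Again purely as an illustration (assumed patterns, not the project's actual cleanup code):

```python
import re

# Assumed second-pass rules in the spirit of the list above.
CLEANUP = [
    (re.compile(r'\b(?:Theorem|Lemma|Postulate)\s+\d+(?:\([a-z*]+\))?'), 'REF'),
    (re.compile(r'\bPage\s+\d+\s+of\s+CITE\b'), 'CITE'),
    (re.compile(r'\bCase\s+\([a-z]\)\.'), 'CASE:'),
    (re.compile(r'\bteh\b'), 'the'),  # one common misspelling
]

def cleanup(proof: str) -> str:
    for pattern, replacement in CLEANUP:
        proof = pattern.sub(replacement, proof)
    return proof

print(cleanup('Case (a). By Theorem 2(a) and Page 47 of CITE, teh result follows.'))
# -> CASE: By REF and CITE, the result follows.
```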
|
|
|
|
|
## Additional Information |
|
|
|
This dataset is released under the Creative Commons Attribution 4.0 (CC BY 4.0) license.
|
|
|
Copyright for the actual proofs remains with the authors of the papers on [arXiv.org](https://arXiv.org), but we believe these simplified snippets qualify as fair use under US copyright law.
|
|
|
|