---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1B<n<10B
pretty_name: ProofLang Corpus
dataset_info:
- config_name: proofs
num_bytes: 3197091800
num_examples: 3681901
features:
- name: fileID
dtype: string
- name: proof
dtype: string
- config_name: sentences
num_bytes: 3736579062
num_examples: 38899130
features:
- name: fileID
dtype: string
- name: sentence
dtype: string
download_size: 6933683563
dataset_size: 6933670862
---
# Dataset Card for the ProofLang Corpus
## Dataset Summary
The ProofLang Corpus includes 3.7M proofs (558 million words) mechanically extracted from papers posted on [arXiv.org](https://arXiv.org) between 1992 and April 2022.
The focus of this corpus is proofs, rather than the explanatory text that surrounds them, and more specifically on the *language* used in such proofs.
Specific mathematical content is filtered out, resulting in sentences such as `Let MATH be the restriction of MATH to MATH.`
This dataset reflects how people prefer to write (non-formalized) proofs, and is also amenable to statistical analyses and experiments with Natural Language Processing (NLP) techniques.
We hope it can serve as an aid in the development of language-based proof assistants and proof checkers for professional and educational purposes.
## Dataset Structure
There are multiple TSV versions of the data. Primarily, `proofs` divides up the data proof-by-proof, and `sentences` further divides up the same data sentence-by-sentence.
The `raw` dataset is a less-cleaned-up version of `proofs`. More usefully, the `tags` dataset gives arXiv subject tags for each paper ID found in the other data files.
* The data in `proofs` (and `raw`) consists of a `paper` ID (identifying where the proof was extracted from), and the `proof` as a string.
* The data in `sentences` consists of a `paper` ID, and the `sentence` as a string.
* The data in `tags` consists of a `paper` ID, and the arXiv subject tags for that paper as a single comma-separated string.
Further metadata about papers can be queried from arXiv.org using the paper ID.
In particular, each paper `<id>` in the dataset can be accessed online at the URL `https://arxiv.org/abs/<id>`.
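For example, here is a minimal sketch of such a metadata query, using Python's standard library and the public [arXiv export API](https://info.arxiv.org/help/api/); the paper ID shown is just a placeholder:
```python
import urllib.request

# Sketch: fetch Atom-format metadata for one paper from the arXiv export API.
# Substitute any `paper` value from this dataset for the placeholder ID.
def fetch_arxiv_metadata(paper_id: str) -> str:
    url = f'http://export.arxiv.org/api/query?id_list={paper_id}'
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode('utf-8')

print(fetch_arxiv_metadata('2101.00001')[:300])  # start of the Atom XML feed
```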
## Dataset Size
* `proofs` is 3,094,779,182 bytes (unzipped) and has 3,681,893 examples.
* `sentences` is 3,545,309,822 bytes (unzipped) and has 38,899,132 examples.
* `tags` is 7,967,839 bytes (unzipped) and has 328,642 rows.
* `raw` is 3,178,997,379 bytes (unzipped) and has 3,681,903 examples.
## Dataset Statistics
* The average length of `sentences` is 14.1 words.
* The average length of `proofs` is 10.5 sentences.
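These averages were computed over the full corpus; as a quick sanity check, they can be approximated from a streamed sample (a sketch, reusing the `sentences` config and field names described above):
```python
from datasets import load_dataset

sentences = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
# Average whitespace-separated word count over the first 100,000 sentences.
lengths = [len(d['sentence'].split()) for d in sentences.take(100_000)]
print(sum(lengths) / len(lengths))  # should land near 14 words per sentence
```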
## Dataset Usage
Data can be downloaded as (zipped) TSV files.
Accessing the data programmatically from Python is also possible using the `datasets` library.
For example, to print the first 10 proofs:
```python
from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['proof'])
```
To look at individual sentences from the proofs,
```python
from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['sentence'])
```
To get a comma-separated list of arXiv subject tags for each paper,
```python
from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['tags'])
```
Finally, to look at a version of the proofs with less aggressive cleanup (straight from the LaTeX extraction),
```python
from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'raw', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['proof'])
```
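The configs can also be combined. For instance, here is a sketch that streams only the proofs from papers carrying a particular arXiv subject tag (`math.LO` is used as an example):
```python
from datasets import load_dataset

# The tags config is small enough to load eagerly; collect the papers with
# the desired subject tag, then stream and filter the proofs.
tags = load_dataset('proofcheck/prooflang', 'tags', split='train')
logic_papers = {d['paper'] for d in tags if 'math.LO' in d['tags'].split(',')}

proofs = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in proofs:
    if d['paper'] in logic_papers:
        print(d['proof'])
```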
### Data Splits
There is currently no train/test split; all the data is in `train`.
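If a held-out set is needed, one option is to create it client-side; here is a sketch using the library's `train_test_split` (non-streaming, so the chosen config is downloaded in full first):
```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train')
splits = dataset.train_test_split(test_size=0.1, seed=42)
print(splits['train'].num_rows, splits['test'].num_rows)
```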
## Dataset Creation
We started with the LaTeX source of 1.6M papers that were submitted to [arXiv.org](https://arXiv.org) between 1992 and April 2022.
The proofs were extracted using a Python script simulating parts of LaTeX (including defining and expanding macros).
It does no actual typesetting, throws away output not between `\begin{proof}...\end{proof}`, and skips math content. During extraction,
* Math-mode formulas (signalled by `$`, `\begin{equation}`, etc.) become `MATH`
* `\ref{...}` and variants (`\autoref`, `\subref`, etc.) become `REF`
* `\cite{...}` and variants (`\Citet`, `\shortciteNP`, etc.) become `CITE`
* Words that appear to be proper names become `NAME`
* `\item` becomes `CASE:`
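For illustration, here is a toy regex approximation of these substitutions; the actual extractor simulates LaTeX macro expansion and handles many more cases, so this sketch is not the real script:
```python
import re

def toy_extract(s: str) -> str:
    s = re.sub(r'\$[^$]*\$', 'MATH', s)                   # inline math -> MATH
    s = re.sub(r'\\(?:auto|sub)?ref\{[^}]*\}', 'REF', s)  # \ref and variants -> REF
    s = re.sub(r'\\[Cc]ite\w*\{[^}]*\}', 'CITE', s)       # \cite{...}, \Citet{...} -> CITE
    s = re.sub(r'\\item\b', 'CASE:', s)                   # \item -> CASE:
    return s

print(toy_extract(r'Let $g$ be the restriction of $f$ to $U$; see \ref{thm1} and \cite{X}.'))
# Let MATH be the restriction of MATH to MATH; see REF and CITE.
```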
We then run a cleanup pass on the extracted proofs that includes
* Cleaning up common extraction errors (e.g., due to uninterpreted macros)
* Replacing more references with `REF`, e.g., `Theorem 2(a)` or `Postulate (*)`
* Replacing more citations with `CITE`, e.g., `Page 47 of CITE`
* Replacing more proof-case markers with `CASE:`, e.g., `Case (a).`
* Fixing a few common misspellings
## Additional Information
This dataset is released under the Creative Commons Attribution 4.0 license.
Copyright for the actual proofs remains with the authors of the papers on [arXiv.org](https://arXiv.org), but these simplified snippets are fair use under US copyright law.