# DocParsingBench
DocParsingBench is a document intelligence benchmark of 1,400 images, systematically collected and annotated from real business workflows. It is the first dataset to systematically catalogue the document elements most frequently encountered in enterprise settings, covering five major domains: finance, legal, scientific research, manufacturing, and education.
## Latest Updates
- **[2026.04.17]** DocParsingBench evaluation toolkit released. Unified scoring is now available for the three core elements of document parsing (text, formulas, and tables), together with batch CLI evaluation, segment-level matching, visualization analysis, and leaderboard generation.
- **[2026.03.09]** DocParsingBench dataset released. The first document parsing dataset built for real-world industry scenarios, covering finance, legal, scientific research, manufacturing, and education. Now available on Hugging Face and ModelScope!
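The toolkit's actual scoring API is not reproduced here, but as a rough sketch of how a segment-level text score could work, a normalized edit-distance similarity averaged over aligned segments might look like the following (the function names and the simple averaging scheme are illustrative assumptions, not the toolkit's real interface):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def segment_score(pred: str, gt: str) -> float:
    # 1.0 means an exact match; 0.0 means no character overlap at all.
    if not pred and not gt:
        return 1.0
    return 1.0 - edit_distance(pred, gt) / max(len(pred), len(gt))

def unified_score(pred_segments: list[str], gt_segments: list[str]) -> float:
    # Average segment-level similarity; assumes predictions are already
    # aligned one-to-one with ground-truth segments.
    pairs = zip(pred_segments, gt_segments)
    return sum(segment_score(p, g) for p, g in pairs) / max(len(gt_segments), 1)
```

A real toolkit would additionally handle segment matching (aligning predicted to ground-truth segments before scoring) and element-specific metrics for formulas and tables; this sketch only covers the text case.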
## Key Highlights
### 1. Five Real-World Industry Domains
Samples were drawn from real business workflows, preserving scan noise, stamp occlusion, and blurred characters:
| Domain | Document Types | Characteristics |
|---|---|---|
| Finance | Brokerage research reports, listed-company annual reports, prospectuses | Multi-column tables, stamped PDF scans, complex tables |
| Legal | Legal documents, contract clauses, industry standards | Standardized headers and footers, dense footnotes, cross-referenced citations |
| Scientific Research | Academic papers, programming textbooks, full-text patents | Mixed two- and three-column layouts, formula-heavy text, code blocks, chemical formulas |
| Manufacturing | Operating SOPs, forms and receipts | Blurry scans, handwritten fill-ins, QR codes / barcodes |
| Education | Textbooks across English, chemistry, and mathematics | Chemical structures, reaction equations, multiple-choice options, fill-in-the-blanks |
### 2. Full Coverage of Complex Layouts
- Single-column: textbooks, legal documents
- Two-column: academic papers, some research reports
- Three or more columns: brokerage financial reports, data reports
- Mixed layouts: interleaved text and figures, tables spanning columns, nested sidebars
## Dataset Composition
| Dimension | Details |
|---|---|
| Total samples | 1,400 pages |
| Languages | Chinese, English, Chinese-English mixed |
| Industry distribution | Finance / Legal / Scientific Research / Manufacturing / Education |
| Layout distribution | Single-column / Two-column / Three-column / Mixed |
| Annotation format | Markdown |
| Chemistry annotation | Follows the SoMarkdown specification, combining SMILES with LaTeX to fully render chemical structures |
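The exact SoMarkdown syntax is not reproduced here, but as a hedged illustration of the idea of pairing a SMILES string (machine-readable structure) with a LaTeX formula (typeset form) in one Markdown-friendly annotation, a hypothetical helper could look like this:

```python
def chem_annotation(name: str, smiles: str, latex: str) -> str:
    # Hypothetical helper: pairs a SMILES string with a LaTeX formula in a
    # single Markdown line. The real SoMarkdown spec may use different syntax.
    return f"**{name}**: `{smiles}` ($ {latex} $)"

line = chem_annotation("ethanol", "CCO", r"\mathrm{C_2H_5OH}")
```

The point of the combination is that SMILES preserves the full molecular structure for downstream tools, while LaTeX preserves how the formula was rendered on the page.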
## Annotation Philosophy
- Fully human-annotated, without exception. The "ground truth" of document parsing should not be something a model guesses. In practice, we found that errors from model pre-annotation are silently accepted by annotators: when shown an existing markdown draft, annotators tend to tweak rather than redraw, and the model's mistakes get frozen into the dataset. Every bounding box is drawn stroke by stroke; every character is typed by hand. It is not the fastest approach, but it is the one that yields the highest quality.
- This was the most time-consuming part of the project. We wrote detailed boundary rules for every class of document element, then pilot-annotated, iterated, and finalized them class by class.
- Every spec went through 3+ rounds of pilot annotation, team-wide training, and consistency testing, keeping inter-annotator disagreement on the same page below 5%. In total, 6 full batches were rejected and re-annotated, with an annotation-to-review time ratio of 1 : 0.8.
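The consistency-testing details are not spelled out above, but as a minimal sketch of what a per-page disagreement check between two annotators could look like (assuming both produced the same number of elements, aligned by index):

```python
def disagreement_rate(ann_a: list[str], ann_b: list[str]) -> float:
    # Fraction of aligned elements on one page where two annotators differ.
    # Assumes both annotation lists have equal length and index alignment;
    # a real pipeline would first align elements by position on the page.
    if not ann_a:
        return 0.0
    diffs = sum(1 for x, y in zip(ann_a, ann_b) if x != y)
    return diffs / len(ann_a)
```

A page passing the stated threshold would then simply require `disagreement_rate(a, b) < 0.05`.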
## Evaluation Benchmark
The companion industrial document parsing benchmark is released alongside this dataset; community submissions are welcome!
## Acknowledgements
This is work that demands extraordinary patience. A single complex page mixing chemical formulas and tables can take hours; one boundary disagreement can send an entire batch back for re-annotation, costing hundreds of person-hours. Behind every annotated page is a human eye parsing the document's structure and a human hand confirming the markdown. Thanks to every annotator who saw this through.
We also thank the SoMarkdown project for the chemical-structure annotation specification that let us express LLM-readable chemistry precisely.
## Citation
If you use DocParsingBench in your research, please cite it as follows:
```bibtex
@misc{DocParsingBench-2026,
  title={DocParsingBench},
  author={SoMark},
  year={2026},
  publisher={Hugging Face, ModelScope},
  howpublished={\url{https://modelscope.cn/datasets/SoMark/DocParsingBench}}
}
```
## License
This project is released under the ODC-BY (Open Data Commons Attribution License) and is open to both academic research and commercial use.
If you find this dataset useful, please consider giving us a ⭐ on ModelScope / Hugging Face!
## How to Download
### Hugging Face
```python
from datasets import load_dataset

ds = load_dataset("SoMarkAI/DocParsingBench")
```
### ModelScope
```python
from modelscope.msdatasets import MsDataset

ds = MsDataset.load('SoMark/DocParsingBench')
```
### Git Clone

```shell
# Hugging Face
git clone https://huggingface.co/datasets/SoMarkAI/DocParsingBench

# ModelScope
git clone https://www.modelscope.cn/datasets/SoMark/DocParsingBench.git
```