---
license: cdla-permissive-2.0
task_categories:
- image-text-to-text
tags:
- code
- ocr
size_categories:
- 1M<n<10M
pretty_name: SynthCodeNet
---
# SynthCodeNet
<div style="display: flex; justify-content: center; align-items: center;">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/663e1254887b6f5645a0399f/whc8Bpip5P8uuzZOS0MQJ.png" alt="Code Example" style="width: 500px; height: auto">
</div>

**SynthCodeNet** is a multimodal dataset created for training the **SmolDocling** model. It consists of over **9.3 million** synthetically generated image-text pairs covering code snippets from **56** programming languages. The code was collected from permissively licensed sources, and the corresponding images were rendered synthetically at 120 DPI with LaTeX and Pygments to ensure visual diversity.
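
As an illustration of the Pygments rendering path, the sketch below turns a snippet into a PNG image. The font size, style, and other settings are assumptions for illustration only, not the exact configuration used to build SynthCodeNet.

```python
# Hedged sketch: rendering a code snippet to an image with Pygments.
# All rendering parameters below are illustrative assumptions, not the
# settings used to produce the dataset.
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import ImageFormatter

code = "def greet(name):\n    return f'Hello, {name}!'\n"

formatter = ImageFormatter(
    image_format="png",   # ImageFormatter requires Pillow
    font_size=14,         # assumed; affects apparent resolution
    line_numbers=False,
    style="default",      # swapping Pygments styles adds visual diversity
)

# highlight() returns raw PNG bytes when an ImageFormatter is used.
png_bytes = highlight(code, PythonLexer(), formatter)
with open("snippet.png", "wb") as f:
    f.write(png_bytes)
```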

---

## Dataset Statistics

* **Total samples**: 9,334,257

  * **Training set**: 8,400,838
  * **Validation set**: 466,703
  * **Test set**: 466,716

* **Modalities**: Image, Text

* **Image Generation**: Synthetic (LaTeX, Pygments)

### Programming Languages & Sample Counts

| Language | Samples | Language   | Samples | Language    | Samples   |
| -------- | ------- | ---------- | ------- | ----------- | --------- |
| Ada      | 20,094  | Dart       | 20,415  | Matlab      | 1,170     |
| Awk      | 22,334  | Dockerfile | 99,459  | MoonScript  | 6,237     |
| Bash     | 98,950  | Elixir     | 20,387  | Nim         | 37,236    |
| C        | 599,096 | Erlang     | 20,039  | OCaml       | 32,297    |
| C#       | 303,720 | FORTRAN    | 34,023  | ObjectiveC  | 158,398   |
| C++      | 698,870 | Forth      | 5,548   | Octave      | 2,537     |
| CMake    | 19,910  | Go         | 333,722 | PHP         | 249,566   |
| COBOL    | 5,153   | HTML       | 245,228 | Pascal      | 28,254    |
| CSS      | 236,596 | Haskell    | 39,848  | Perl        | 33,938    |
| Ceylon   | 8,369   | Haxe       | 20,070  | Prolog      | 2,058     |
| Clojure  | 20,765  | Java       | 698,421 | Python      | 1,797,063 |
| Crystal  | 24,720  | JavaScript | 530,899 | Racket      | 4,340     |
| Cuda     | 142,344 | Julia      | 29,681  | Ruby        | 348,976   |
| Cython   | 22,136  | Kotlin     | 292,986 | Rust        | 344,491   |
| D        | 20,338  | Lisp       | 29,749  | SML         | 19,333    |
| Lua      | 25,328  | SQL        | 493,412 | YAML        | 249,011   |
| Scala    | 273,825 | Scheme     | 23,242  | VisualBasic | 13,908    |
| Swift    | 25,374  | TypeScript | 255,475 | XML         | 246,209   |
| bc       | 249     | dc         | 1,713   |             |           |

---

## Data Format

Each dataset entry is structured as follows:

```json
{
  "images": [PIL Image],
  "texts": [
    {
      "assistant": "<loc_x0><loc_y0><loc_x1><loc_y1><_Language_>CODE_SNIPPET</code>",
      "source": "SynthCodeNetNoImageTag",
      "user": "<code>"
    }
  ]
}
```
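
A minimal loading sketch with the 🤗 Datasets library is shown below. The repository id (`ds4sd/SynthCodeNet`) is an assumption; adjust it to the actual Hugging Face repo. Streaming is used so the full 9.3M-sample set is not downloaded up front.

```python
# Hedged loading sketch, assuming the repo id "ds4sd/SynthCodeNet".
from datasets import load_dataset

ds = load_dataset("ds4sd/SynthCodeNet", split="train", streaming=True)

sample = next(iter(ds))
print(sample["texts"][0]["user"])        # prompt tag, e.g. "<code>"
print(sample["texts"][0]["assistant"])   # location tags + code snippet
sample["images"][0].save("example.png")  # images decode to PIL objects
```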

---

## Intended Use

* Training multimodal models for **document understanding**, specifically:

  * Code snippet extraction and transcription

---

## Citation

If you use SynthCodeNet, please cite:

```bibtex
@article{nassar2025smoldocling,
  title={SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion},
  author={Nassar, Ahmed and Marafioti, Andres and Omenetti, Matteo and Lysak, Maksym and Livathinos, Nikolaos and Auer, Christoph and Morin, Lucas and de Lima, Rafael Teixeira and Kim, Yusik and Gurbuz, A Said and others},
  journal={arXiv preprint arXiv:2503.11576},
  year={2025}
}
```