craffel (HF Staff) committed on
Commit f5c81ad · verified · 1 Parent(s): 710e09f

Add token counts and mixing rates to readme

Files changed (1):
  README.md +63 -165
README.md CHANGED
@@ -1,170 +1,68 @@
- ---
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path:
-     - "*/*.gz"
- - config_name: arxiv_abstracts
-   data_files:
-   - split: train
-     path:
-     - "arxiv_abstracts/*.gz"
- - config_name: arxiv_papers
-   data_files:
-   - split: train
-     path:
-     - "arxiv_papers/*.gz"
- - config_name: biodiversity_heritage_library
-   data_files:
-   - split: train
-     path:
-     - "biodiversity_heritage_library/*.gz"
- - config_name: caselaw_access_project
-   data_files:
-   - split: train
-     path:
-     - "caselaw_access_project/*.gz"
- - config_name: cccc
-   data_files:
-   - split: train
-     path:
-     - "cccc/*.gz"
- - config_name: data_provenance_initiative
-   data_files:
-   - split: train
-     path:
-     - "data_provenance_initiative/*.gz"
- - config_name: doab
-   data_files:
-   - split: train
-     path:
-     - "doab/*.gz"
- - config_name: foodista
-   data_files:
-   - split: train
-     path:
-     - "foodista/*.gz"
- - config_name: github_archive
-   data_files:
-   - split: train
-     path:
-     - "github_archive/*.gz"
- - config_name: library_of_congress
-   data_files:
-   - split: train
-     path:
-     - "library_of_congress/*.gz"
- - config_name: libretexts
-   data_files:
-   - split: train
-     path:
-     - "libretexts/*.gz"
- - config_name: news
-   data_files:
-   - split: train
-     path:
-     - "news/*.gz"
- - config_name: oercommons
-   data_files:
-   - split: train
-     path:
-     - "oercommons/*.gz"
- - config_name: peS2o
-   data_files:
-   - split: train
-     path:
-     - "peS2o/*.gz"
- - config_name: pre_1929_books
-   data_files:
-   - split: train
-     path:
-     - "pre_1929_books/*.gz"
- - config_name: pressbooks
-   data_files:
-   - split: train
-     path:
-     - "pressbooks/*.gz"
- - config_name: project_gutenberg
-   data_files:
-   - split: train
-     path:
-     - "project_gutenberg/*.gz"
- - config_name: public_domain_review
-   data_files:
-   - split: train
-     path:
-     - "public_domain_review/*.gz"
- - config_name: pubmed
-   data_files:
-   - split: train
-     path:
-     - "pubmed/*.gz"
- - config_name: python_enhancement_proposals
-   data_files:
-   - split: train
-     path:
-     - "python_enhancement_proposals/*.gz"
- - config_name: regulations
-   data_files:
-   - split: train
-     path:
-     - "regulations/*.gz"
- - config_name: stackexchange
-   data_files:
-   - split: train
-     path:
-     - "stackexchange/*.gz"
- - config_name: stackv2_edu
-   data_files:
-   - split: train
-     path:
-     - "stackv2_edu/*.gz"
- - config_name: stackv2_html
-   data_files:
-   - split: train
-     path:
-     - "stackv2_html/*.gz"
- - config_name: ubuntu_irc
-   data_files:
-   - split: train
-     path:
-     - "ubuntu_irc/*.gz"
- - config_name: uk_hansard
-   data_files:
-   - split: train
-     path:
-     - "uk_hansard/*.gz"
- - config_name: usgpo
-   data_files:
-   - split: train
-     path:
-     - "usgpo/*.gz"
- - config_name: uspto
-   data_files:
-   - split: train
-     path:
-     - "uspto/*.gz"
- - config_name: wikimedia
-   data_files:
-   - split: train
-     path:
-     - "wikimedia/*.gz"
- - config_name: wikiteam
-   data_files:
-   - split: train
-     path:
-     - "wikiteam/*.gz"
- - config_name: youtube
-   data_files:
-   - split: train
-     path:
-     - "youtube/*.gz"
- ---
-
  # Comma v0.1 dataset

  This repository contains the dataset used to train [Comma v0.1-1T](https://huggingface.co/common-pile/comma-v0.1-1t) and [Comma v0.1-2T](https://huggingface.co/common-pile/comma-v0.1-2t).
  It is a slightly modified and consolidated version of the [Common Pile v0.1 "filtered" data](https://huggingface.co/collections/common-pile/common-pile-v01-filtered-data-68300bb0a946d10dda697663).
  If you are looking for the raw Common Pile v0.1 data, please see [this collection](https://huggingface.co/collections/common-pile/common-pile-v01-raw-data-6826b454a5a6a445d0b51b37).
- You can learn more about Common Pile in [our paper](https://huggingface.co/papers/2506.05209).
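The removed front matter above maps each config name to a glob pattern over the repository's compressed shards (e.g. `"pubmed/*.gz"` for a single source, `"*/*.gz"` for the default config covering all sources). As a rough illustration of that selection logic, here is a minimal Python sketch using `fnmatch` (an approximation of the Hub's glob matching; the file names are made up, not actual repository contents):

```python
# Sketch: each config's data_files entry selects shards by glob.
# "pubmed/*.gz" picks one source; "*/*.gz" (default) picks all.
# File names are illustrative only.
from fnmatch import fnmatch

files = ["pubmed/000.jsonl.gz", "youtube/000.jsonl.gz", "pubmed/001.jsonl.gz"]

def select(files, pattern):
    """Return the files a config's glob pattern would include."""
    return [f for f in files if fnmatch(f, pattern)]

print(select(files, "pubmed/*.gz"))  # pubmed shards only
print(select(files, "*/*.gz"))       # default config: every source
```

Note that `fnmatch` lets `*` cross `/` boundaries, so this is only a sketch of the pattern semantics, not a drop-in reimplementation of the Hub's file resolution.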
  # Comma v0.1 dataset

  This repository contains the dataset used to train [Comma v0.1-1T](https://huggingface.co/common-pile/comma-v0.1-1t) and [Comma v0.1-2T](https://huggingface.co/common-pile/comma-v0.1-2t).
  It is a slightly modified and consolidated version of the [Common Pile v0.1 "filtered" data](https://huggingface.co/collections/common-pile/common-pile-v01-filtered-data-68300bb0a946d10dda697663).
  If you are looking for the raw Common Pile v0.1 data, please see [this collection](https://huggingface.co/collections/common-pile/common-pile-v01-raw-data-6826b454a5a6a445d0b51b37).
+ You can learn more about Common Pile in [our paper](https://huggingface.co/papers/2506.05209).
+
+ ## Mixing rates and token counts
+
+ The Comma v0.1 models were trained in two stages: a "main" stage and a "cooldown" stage.
+ During each stage, we heuristically set mixing rates to upweight or downweight different sources.
+ The two tables below list the per-source token count, repeat rate, and effective token count (after up- or down-weighting) for the main and cooldown stages of the Comma v0.1-1T training run.
+ For the Comma v0.1-2T training run, all sources are repeated twice as many times in both stages.
+
+ | Main stage                    | Tokens (B) | Repeats | Effective tokens (B) |
+ |-------------------------------|------------|---------|----------------------|
+ | arxiv_abstracts               | 0.57       | 6       | 3.4                  |
+ | arxiv_papers                  | 6.0        | 6       | 35.8                 |
+ | biodiversity_heritage_library | 9.8        | 0.25    | 2.5                  |
+ | caselaw_access_project        | 19.7       | 1       | 19.7                 |
+ | cccc                          | 15.2       | 6       | 91.4                 |
+ | data_provenance_initiative    | 0.92       | 6       | 5.5                  |
+ | doab                          | 3.0        | 6       | 18.2                 |
+ | foodista                      | 0.025      | 6       | 0.15                 |
+ | github_archive                | 11.0       | 6       | 66.1                 |
+ | library_of_congress           | 9.5        | 0.25    | 2.4                  |
+ | libretexts                    | 0.093      | 6       | 0.56                 |
+ | news                          | 0.064      | 6       | 0.38                 |
+ | oercommons                    | 0.012      | 6       | 0.07                 |
+ | peS2o                         | 43.3       | 6       | 260.0                |
+ | pre_1929_books                | 12.4       | 1       | 12.4                 |
+ | pressbooks                    | 0.14       | 6       | 0.86                 |
+ | project_gutenberg             | 5.7        | 1       | 5.7                  |
+ | public_domain_review          | 0.0017     | 6       | 0.010                |
+ | pubmed                        | 36.6       | 1       | 36.6                 |
+ | python_enhancement_proposals  | 0.0027     | 6       | 0.016                |
+ | regulations                   | 1.4        | 6       | 8.2                  |
+ | stackexchange                 | 23.9       | 6       | 143.2                |
+ | stackv2_edu                   | 67.8       | 2       | 135.5                |
+ | stackv2_html                  | 1.2        | 2       | 2.5                  |
+ | ubuntu_irc                    | 1.9        | 6       | 11.1                 |
+ | uk_hansard                    | 2.3        | 6       | 14.0                 |
+ | usgpo                         | 8.8        | 0.25    | 2.2                  |
+ | uspto                         | 157.4      | 0.25    | 39.4                 |
+ | wikimedia                     | 15.8       | 6       | 94.7                 |
+ | wikiteam                      | 4.3        | 4       | 17.2                 |
+ | youtube                       | 4.7        | 1       | 4.7                  |
+ | Total                         | 463.6      |         | 1034.4               |
+
+ | Cooldown stage               | Tokens (B) | Repeats | Effective tokens (B) |
+ |------------------------------|------------|---------|----------------------|
+ | arxiv_papers                 | 6.0        | 0.5     | 3.0                  |
+ | cccc                         | 15.2       | 0.3     | 4.6                  |
+ | data_provenance_initiative   | 0.92       | 2       | 1.8                  |
+ | doab                         | 3.0        | 2       | 6.1                  |
+ | foodista                     | 0.025      | 2       | 0.05                 |
+ | libretexts                   | 0.093      | 2       | 0.19                 |
+ | news                         | 0.064      | 2       | 0.13                 |
+ | oercommons                   | 0.012      | 2       | 0.02                 |
+ | peS2o                        | 43.3       | 0.1     | 4.3                  |
+ | pressbooks                   | 0.14       | 2       | 0.29                 |
+ | public_domain_review         | 0.0017     | 2       | 0.003                |
+ | python_enhancement_proposals | 0.0027     | 2       | 0.005                |
+ | stackexchange                | 23.9       | 0.25    | 6.0                  |
+ | stackv2_edu                  | 67.8       | 0.1     | 6.8                  |
+ | wikimedia                    | 15.8       | 0.4     | 6.3                  |
+ | Total                        | 176.2      |         | 39.5                 |
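An effective token count in the tables above is simply the raw token count scaled by the per-source repeat rate. A minimal Python sketch (not part of the commit, and not the official mixing code) recomputing a few main-stage rows; the table's token counts are rounded, so the recomputed products can differ from the listed effective counts by a small rounding margin:

```python
# Effective tokens = raw tokens × per-source repeat rate.
# Token counts (in billions) are copied from the main-stage table
# and are rounded, so products may differ slightly from the table.
main_stage = {
    # source: (tokens_B, repeats)
    "arxiv_abstracts": (0.57, 6),
    "biodiversity_heritage_library": (9.8, 0.25),
    "stackv2_edu": (67.8, 2),
    "uspto": (157.4, 0.25),
    "youtube": (4.7, 1),
}

def effective_tokens(mix):
    """Scale each source's token count by its repeat rate."""
    return {name: tokens * repeats for name, (tokens, repeats) in mix.items()}

eff = effective_tokens(main_stage)
for name, value in sorted(eff.items()):
    print(f"{name}: {value:.2f}B effective tokens")
```

Fractional repeat rates (e.g. 0.25 for uspto) mean a source is subsampled rather than repeated, which is how very large sources like uspto contribute fewer effective tokens than their raw counts.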