Datasets: mteb/
Modalities: Text
Formats: parquet
Libraries: Datasets, pandas
Samoed committed (verified) · commit b0ebb50 · parent ed9a440

Add dataset card

Files changed (1): README.md (+223 -313)

README.md:
@@ -1,341 +1,251 @@
  ---
  annotations_creators:
- - expert-generated
  language_creators:
  - expert-generated
  language:
- - af
- - am
- - ar
- - az
- - ba
- - be
- - bg
- - bn
- - bo
- - bs
- - ca
- - cs
- - cy
- - da
- - de
- - dv
- - dz
- - ee
- - el
- - et
- - eu
- - fa
- - fa
- - fi
  - fil
- - fj
- - fj
- - fo
- - fr
- - gd
- - gu
- - ha
- - he
- - hi
  - hmn
- - hr
- - hu
- - hy
- - id
- - ig
- - is
- - it
- - ja
- - kk
- - km
- - kn
- - ko
- - ku
- - ku
- - ky
- - lb
- - lo
- - lt
- - lv
- - mi
- - mk
- - mn
- - mr
- - ms
- - ms
- - mt
- - my
- - nb
- - nd
- - ne
- - nl
- - nn
- - ny
- - om
- - oy
- - pa
- - ps
- - pt
- - ro
- - ru
- - rw
- - sd
- - sh
  - shi
- - si
- - sk
- - sl
- - sm
- - sn
- - so
- - sq
- - sr
- - ss
- - st
- - sv
- - sw
- - ta
- - te
- - tg
- - th
- - tk
- - tn
- - to
- - tr
- - tt
- - ty
- - uk
- - ur
- - uz
- - ve
- - vi
- - wo
- - xh
- - yo
- - zh
- - zh
- - zu
- license:
- - cc-by-sa-4.0
- multilinguality:
- - translation
  task_categories:
  - translation
- size_categories:
- - "1997"
  configs:
- - config_name: default
-   data_files:
-   - split: test
-     path: test.parquet
  ---
- ## Dataset Description

- NTREX -- News Test References for MT Evaluation from English into a total of 128 target languages. See [original GitHub repo](https://github.com/MicrosoftTranslator/NTREX/tree/main) for full details.

- Example of loading:
- ```python
- dataset = load_dataset("davidstap/NTREX", "rus_Cyrl", trust_remote_code=True)
- ```

- ## Languages

- The following languages are available:

- | Language Code | Language Name |
- |-----------------|-----------------------------|
- | `afr_Latn` | Afrikaans |
- | `amh_Ethi` | Amharic |
- | `arb_Arab` | Arabic |
- | `aze_Latn` | Azerbaijani |
- | `bak_Cyrl` | Bashkir |
- | `bel_Cyrl` | Belarusian |
- | `bem_Latn` | Bemba |
- | `ben_Beng` | Bengali |
- | `bod_Tibt` | Tibetan |
- | `bos_Latn` | Bosnian |
- | `bul_Cyrl` | Bulgarian |
- | `cat_Latn` | Catalan |
- | `ces_Latn` | Czech |
- | `ckb_Arab` | Sorani Kurdish |
- | `cym_Latn` | Welsh |
- | `dan_Latn` | Danish |
- | `deu_Latn` | German |
- | `div_Thaa` | Dhivehi |
- | `dzo_Tibt` | Dzongkha |
- | `ell_Grek` | Greek |
- | `eng-GB_Latn` | English (Great Britain) |
- | `eng-IN_Latn` | English (India) |
- | `eng-US_Latn` | English (United States) |
- | `eng_Latn` | English |
- | `est_Latn` | Estonian |
- | `eus_Latn` | Basque |
- | `ewe_Latn` | Ewe |
- | `fao_Latn` | Faroese |
- | `fas_Arab` | Persian |
- | `fij_Latn` | Fijian |
- | `fil_Latn` | Filipino |
- | `fin_Latn` | Finnish |
- | `fra-CA_Latn` | French (Canada) |
- | `fra_Latn` | French |
- | `fuc_Latn` | Pulaar |
- | `gle_Latn` | Irish |
- | `glg_Latn` | Galician |
- | `guj_Gujr` | Gujarati |
- | `hau_Latn` | Hausa |
- | `heb_Hebr` | Hebrew |
- | `hin_Deva` | Hindi |
- | `hmn_Latn` | Hmong |
- | `hrv_Latn` | Croatian |
- | `hun_Latn` | Hungarian |
- | `hye_Armn` | Armenian |
- | `ibo_Latn` | Igbo |
- | `ind_Latn` | Indonesian |
- | `isl_Latn` | Icelandic |
- | `ita_Latn` | Italian |
- | `jpn_Jpan` | Japanese |
- | `kan_Knda` | Kannada |
- | `kat_Geor` | Georgian |
- | `kaz_Cyrl` | Kazakh |
- | `khm_Khmr` | Khmer |
- | `kin_Latn` | Kinyarwanda |
- | `kir_Cyrl` | Kyrgyz |
- | `kmr_Latn` | Northern Kurdish |
- | `kor_Hang` | Korean |
- | `lao_Laoo` | Lao |
- | `lav_Latn` | Latvian |
- | `lit_Latn` | Lithuanian |
- | `ltz_Latn` | Luxembourgish |
- | `mal_Mlym` | Malayalam |
- | `mar_Deva` | Marathi |
- | `mey_Arab` | Hassaniya Arabic |
- | `mkd_Cyrl` | Macedonian |
- | `mlg_Latn` | Malagasy |
- | `mlt_Latn` | Maltese |
- | `mon_Mong` | Mongolian |
- | `mri_Latn` | Maori |
- | `msa_Latn` | Malay |
- | `mya_Mymr` | Burmese |
- | `nde_Latn` | Ndebele |
- | `nep_Deva` | Nepali |
- | `nld_Latn` | Dutch |
- | `nno_Latn` | Norwegian Nynorsk |
- | `nob_Latn` | Norwegian Bokmål |
- | `nso_Latn` | Northern Sotho |
- | `nya_Latn` | Chichewa |
- | `orm_Ethi` | Oromo |
- | `pan_Guru` | Punjabi (Gurmukhi) |
- | `pol_Latn` | Polish |
- | `por-BR_Latn` | Portuguese (Brazil) |
- | `por_Latn` | Portuguese |
- | `prs_Arab` | Dari |
- | `pus_Arab` | Pashto |
- | `ron_Latn` | Romanian |
- | `rus_Cyrl` | Russian |
- | `shi_Arab` | Tachelhit |
- | `sin_Sinh` | Sinhala |
- | `slk_Latn` | Slovak |
- | `slv_Latn` | Slovenian |
- | `smo_Latn` | Samoan |
- | `sna_Latn` | Shona |
- | `snd_Arab` | Sindhi |
- | `som_Latn` | Somali |
- | `spa-MX_Latn` | Spanish (Mexico) |
- | `spa_Latn` | Spanish |
- | `sqi_Latn` | Albanian |
- | `srp_Cyrl` | Serbian (Cyrillic) |
- | `srp_Latn` | Serbian (Latin) |
- | `ssw_Latn` | Swati |
- | `swa_Latn` | Swahili |
- | `swe_Latn` | Swedish |
- | `tah_Latn` | Tahitian |
- | `tam_Taml` | Tamil |
- | `tat_Cyrl` | Tatar |
- | `tel_Telu` | Telugu |
- | `tgk_Cyrl` | Tajik |
- | `tha_Thai` | Thai |
- | `tir_Ethi` | Tigrinya |
- | `ton_Latn` | Tongan |
- | `tsn_Latn` | Tswana |
- | `tuk_Latn` | Turkmen |
- | `tur_Latn` | Turkish |
- | `uig_Arab` | Uighur |
- | `ukr_Cyrl` | Ukrainian |
- | `urd_Arab` | Urdu |
- | `uzb_Latn` | Uzbek |
- | `ven_Latn` | Venda |
- | `vie_Latn` | Vietnamese |
- | `wol_Latn` | Wolof |
- | `xho_Latn` | Xhosa |
- | `yor_Latn` | Yoruba |
- | `yue_Hant` | Cantonese |
- | `zho_Hans` | Chinese (Simplified) |
- | `zho_Hant` | Chinese (Traditional) |
- | `zul_Latn` | Zulu |

- ### Citation Information
- For the original NTREX-128 dataset, please cite:

  ```
  @inproceedings{federmann-etal-2022-ntrex,
-   title = "{NTREX}-128 {--} News Test References for {MT} Evaluation of 128 Languages",
-   author = "Federmann, Christian and Kocmi, Tom and Xin, Ying",
-   booktitle = "Proceedings of the First Workshop on Scaling Up Multilingual Evaluation",
-   month = "nov",
-   year = "2022",
-   address = "Online",
-   publisher = "Association for Computational Linguistics",
-   url = "https://aclanthology.org/2022.sumeval-1.4",
-   pages = "21--24",
  }
  ```

- as well as the WMT 2019 paper that provided the English source data NTREX-128 is based on:

  ```
- @inproceedings{barrault-etal-2019-findings,
-   title = "Findings of the 2019 Conference on Machine Translation ({WMT}19)",
-   author = {Barrault, Lo{\"\i}c and
-     Bojar, Ond{\v{r}}ej and
-     Costa-juss{\`a}, Marta R. and
-     Federmann, Christian and
-     Fishel, Mark and
-     Graham, Yvette and
-     Haddow, Barry and
-     Huck, Matthias and
-     Koehn, Philipp and
-     Malmasi, Shervin and
-     Monz, Christof and
-     M{\"u}ller, Mathias and
-     Pal, Santanu and
-     Post, Matt and
-     Zampieri, Marcos},
-   editor = "Bojar, Ond{\v{r}}ej and
-     Chatterjee, Rajen and
-     Federmann, Christian and
-     Fishel, Mark and
-     Graham, Yvette and
-     Haddow, Barry and
-     Huck, Matthias and
-     Yepes, Antonio Jimeno and
-     Koehn, Philipp and
-     Martins, Andr{\'e} and
-     Monz, Christof and
-     Negri, Matteo and
-     N{\'e}v{\'e}ol, Aur{\'e}lie and
-     Neves, Mariana and
-     Post, Matt and
-     Turchi, Marco and
-     Verspoor, Karin",
-   booktitle = "Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)",
-   month = aug,
-   year = "2019",
-   address = "Florence, Italy",
-   publisher = "Association for Computational Linguistics",
-   url = "https://aclanthology.org/W19-5301",
-   doi = "10.18653/v1/W19-5301",
-   pages = "1--61",
  }
- ```
  ---
  annotations_creators:
+ - expert-annotated
  language_creators:
  - expert-generated
  language:
+ - afr
+ - amh
+ - arb
+ - aze
+ - bak
+ - bel
+ - bem
+ - ben
+ - bod
+ - bos
+ - bul
+ - cat
+ - ces
+ - ckb
+ - cym
+ - dan
+ - deu
+ - div
+ - dzo
+ - ell
+ - eng
+ - eus
+ - ewe
+ - fao
+ - fas
+ - fij
  - fil
+ - fin
+ - fra
+ - fuc
+ - gle
+ - glg
+ - guj
+ - hau
+ - heb
+ - hin
  - hmn
+ - hrv
+ - hun
+ - hye
+ - ibo
+ - ind
+ - isl
+ - ita
+ - jpn
+ - kan
+ - kat
+ - kaz
+ - khm
+ - kin
+ - kir
+ - kmr
+ - kor
+ - lao
+ - lav
+ - lit
+ - ltz
+ - mal
+ - mar
+ - mey
+ - mkd
+ - mlg
+ - mlt
+ - mon
+ - mri
+ - msa
+ - mya
+ - nde
+ - nep
+ - nld
+ - nno
+ - nob
+ - nso
+ - nya
+ - orm
+ - pan
+ - pol
+ - por
+ - prs
+ - pus
+ - ron
+ - rus
  - shi
+ - sin
+ - slk
+ - slv
+ - smo
+ - sna
+ - snd
+ - som
+ - spa
+ - sqi
+ - srp
+ - ssw
+ - swa
+ - swe
+ - tah
+ - tam
+ - tat
+ - tel
+ - tgk
+ - tha
+ - tir
+ - ton
+ - tsn
+ - tuk
+ - tur
+ - uig
+ - ukr
+ - urd
+ - uzb
+ - ven
+ - vie
+ - wol
+ - xho
+ - yor
+ - yue
+ - zho
+ - zul
+ license: cc-by-sa-4.0
+ multilinguality: translated
+ size_categories:
+ - '1997'
  task_categories:
  - translation
+ task_ids: []
  configs:
+ - config_name: default
+   data_files:
+   - split: test
+     path: test.parquet
+ tags:
+ - mteb
+ - text
  ---
+ <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

+ <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
+   <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NTREXBitextMining</h1>
+   <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
+   <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
+ </div>

+ NTREX is a News Test References dataset for Machine Translation Evaluation, covering translation from English into 128 languages. We select language pairs according to the M2M-100 language grouping strategy, resulting in 1916 directions.
+
+ | | |
+ |---------------|---------------------------------------------|
+ | Task category | t2t |
+ | Domains | News, Written |
+ | Reference | https://huggingface.co/datasets/davidstap/NTREX |
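
Editor's note: the 1916 directions quoted in the card can be recovered from the task metadata. A minimal sketch, assuming a recent `mteb` release where `TaskMetadata.eval_langs` maps each bitext subset name to its language pair:

```python
import mteb

# Loading the task definition does not download the underlying data.
task = mteb.get_task("NTREXBitextMining")

# For multilingual bitext-mining tasks, eval_langs is assumed to be a dict
# mapping each subset name (e.g. "rus_Cyrl-eng_Latn") to its language pair.
pairs = task.metadata.eval_langs
print(len(pairs))  # the card above states 1916 directions
```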

+ ## How to evaluate on this task

+ You can evaluate an embedding model on this dataset using the following code:

+ ```python
+ import mteb

+ task = mteb.get_tasks(["NTREXBitextMining"])
+ evaluator = mteb.MTEB(task)

+ model = mteb.get_model(YOUR_MODEL)
+ evaluator.run(model)
  ```
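
Editor's note: the raw sentence pairs can also be inspected outside of an evaluation run by loading the parquet split declared in the frontmatter with `datasets`. A minimal sketch; the repository id is an assumption based on this card's title (the page header truncates it), and the column names follow the statistics section below:

```python
from datasets import load_dataset

# Assumed repository id; the frontmatter declares a single default config
# with a "test" split backed by test.parquet.
ds = load_dataset("mteb/NTREXBitextMining", split="test")

# Columns are assumed to be sentence1/sentence2, matching the
# descriptive statistics reported below.
print(ds[0])
```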
+
+ <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
+ To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
+
+ ## Citation
+
+ If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
+
+ ```bibtex
  @inproceedings{federmann-etal-2022-ntrex,
+   address = {Online},
+   author = {Federmann, Christian and Kocmi, Tom and Xin, Ying},
+   booktitle = {Proceedings of the First Workshop on Scaling Up Multilingual Evaluation},
+   month = {nov},
+   pages = {21--24},
+   publisher = {Association for Computational Linguistics},
+   title = {{NTREX}-128 {--} News Test References for {MT} Evaluation of 128 Languages},
+   url = {https://aclanthology.org/2022.sumeval-1.4},
+   year = {2022},
+ }
+
+ @article{enevoldsen2025mmtebmassivemultilingualtext,
+   title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
+   author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2502.13595},
+   year = {2025},
+   url = {https://arxiv.org/abs/2502.13595},
+   doi = {10.48550/arXiv.2502.13595},
+ }
+
+ @article{muennighoff2022mteb,
+   author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+   title = {MTEB: Massive Text Embedding Benchmark},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2210.07316},
+   year = {2022},
+   url = {https://arxiv.org/abs/2210.07316},
+   doi = {10.48550/ARXIV.2210.07316},
  }
  ```

+ # Dataset Statistics
+ <details>
+ <summary>Dataset Statistics</summary>
+
+ The following JSON contains the descriptive statistics for the task. These can also be obtained using:
+
+ ```python
+ import mteb

+ task = mteb.get_task("NTREXBitextMining")
+
+ desc_stats = task.metadata.descriptive_stats
  ```
+
+ ```json
+ {
+   "test": {
+     "num_samples": 3826252,
+     "number_of_characters": 988355274,
+     "unique_pairs": 3820263,
+     "min_sentence1_length": 1,
+     "average_sentence1_length": 129.15449296073547,
+     "max_sentence1_length": 773,
+     "unique_sentence1": 241259,
+     "min_sentence2_length": 1,
+     "average_sentence2_length": 129.15449296073547,
+     "max_sentence2_length": 773,
+     "unique_sentence2": 241259
+   }
  }
+ ```
+
+ </details>
+
+ ---
+ *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*