| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-en-nso
|
Helsinki-NLP
| 2023-08-16T11:30:41Z | 112 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"nso",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-nso
* source languages: en
* target languages: nso
* OPUS readme: [en-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.nso | 52.2 | 0.684 |
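The chr-F column above is a character n-gram F-score. As a rough illustration of how such a score is computed, here is a minimal, simplified sketch (the function name is ours; the real chrF metric handles whitespace and averaging in more detail, and reported scores come from the official evaluation tooling):

```python
from collections import Counter

def chrf_score(hypothesis: str, reference: str,
               max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: mean F-beta over character n-grams, n = 1..max_n."""
    # chrF strips spaces before extracting character n-grams.
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    f_scores = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not hyp_ngrams or not ref_ngrams:
            continue  # sentence shorter than n characters
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        precision = overlap / sum(hyp_ngrams.values())
        recall = overlap / sum(ref_ngrams.values())
        if precision + recall == 0.0:
            f_scores.append(0.0)
        else:
            f_scores.append((1 + beta ** 2) * precision * recall
                            / (beta ** 2 * precision + recall))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0
```

With `beta = 2.0` this corresponds to the chrF2 variant, which weights recall twice as heavily as precision.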
|
Helsinki-NLP/opus-mt-en-mul
|
Helsinki-NLP
| 2023-08-16T11:30:35Z | 3,003 | 19 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"es",
"os",
"eo",
"ro",
"fy",
"cy",
"is",
"lb",
"su",
"an",
"sq",
"fr",
"ht",
"rm",
"cv",
"ig",
"am",
"eu",
"tr",
"ps",
"af",
"ny",
"ch",
"uk",
"sl",
"lt",
"tk",
"sg",
"ar",
"lg",
"bg",
"be",
"ka",
"gd",
"ja",
"si",
"br",
"mh",
"km",
"th",
"ty",
"rw",
"te",
"mk",
"or",
"wo",
"kl",
"mr",
"ru",
"yo",
"hu",
"fo",
"zh",
"ti",
"co",
"ee",
"oc",
"sn",
"mt",
"ts",
"pl",
"gl",
"nb",
"bn",
"tt",
"bo",
"lo",
"id",
"gn",
"nv",
"hy",
"kn",
"to",
"io",
"so",
"vi",
"da",
"fj",
"gv",
"sm",
"nl",
"mi",
"pt",
"hi",
"se",
"as",
"ta",
"et",
"kw",
"ga",
"sv",
"ln",
"na",
"mn",
"gu",
"wa",
"lv",
"jv",
"el",
"my",
"ba",
"it",
"hr",
"ur",
"ce",
"nn",
"fi",
"mg",
"rn",
"xh",
"ab",
"de",
"cs",
"he",
"zu",
"yi",
"ml",
"mul",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- ca
- es
- os
- eo
- ro
- fy
- cy
- is
- lb
- su
- an
- sq
- fr
- ht
- rm
- cv
- ig
- am
- eu
- tr
- ps
- af
- ny
- ch
- uk
- sl
- lt
- tk
- sg
- ar
- lg
- bg
- be
- ka
- gd
- ja
- si
- br
- mh
- km
- th
- ty
- rw
- te
- mk
- or
- wo
- kl
- mr
- ru
- yo
- hu
- fo
- zh
- ti
- co
- ee
- oc
- sn
- mt
- ts
- pl
- gl
- nb
- bn
- tt
- bo
- lo
- id
- gn
- nv
- hy
- kn
- to
- io
- so
- vi
- da
- fj
- gv
- sm
- nl
- mi
- pt
- hi
- se
- as
- ta
- et
- kw
- ga
- sv
- ln
- na
- mn
- gu
- wa
- lv
- jv
- el
- my
- ba
- it
- hr
- ur
- ce
- nn
- fi
- mg
- rn
- xh
- ab
- de
- cs
- he
- zu
- yi
- ml
- mul
tags:
- translation
license: apache-2.0
---
### eng-mul
* source group: English
* target group: Multiple languages
* OPUS readme: [eng-mul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mul/README.md)
* model: transformer
* source language(s): eng
* target language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token of the form `>>id<<` (where `id` is a valid target-language ID) is required
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.eval.txt)
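The `>>id<<` token noted above is plain text prepended to the source sentence before tokenization. A minimal helper (the function name and the small ID subset are ours, for illustration; the full set of valid IDs is the target-language list above):

```python
# Illustrative subset of valid target-language IDs for this model.
KNOWN_TARGET_IDS = {"fra", "deu", "spa", "rus", "fin"}

def add_target_token(text: str, target_id: str) -> str:
    """Prepend the >>id<< target-language token expected by
    multilingual OPUS-MT models."""
    if target_id not in KNOWN_TARGET_IDS:
        raise ValueError(f"unknown target language id: {target_id}")
    return f">>{target_id}<< {text}"
```

The resulting string (e.g. `>>fra<< How are you?`) is what gets passed to the tokenizer.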
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 5.0 | 0.288 |
| newsdev2015-enfi-engfin.eng.fin | 9.3 | 0.418 |
| newsdev2016-enro-engron.eng.ron | 17.2 | 0.488 |
| newsdev2016-entr-engtur.eng.tur | 8.2 | 0.402 |
| newsdev2017-enlv-englav.eng.lav | 12.9 | 0.444 |
| newsdev2017-enzh-engzho.eng.zho | 17.6 | 0.170 |
| newsdev2018-enet-engest.eng.est | 10.9 | 0.423 |
| newsdev2019-engu-engguj.eng.guj | 5.2 | 0.284 |
| newsdev2019-enlt-englit.eng.lit | 11.0 | 0.431 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 22.6 | 0.521 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 25.9 | 0.546 |
| newssyscomb2009-engces.eng.ces | 10.3 | 0.394 |
| newssyscomb2009-engdeu.eng.deu | 13.3 | 0.459 |
| newssyscomb2009-engfra.eng.fra | 21.5 | 0.522 |
| newssyscomb2009-enghun.eng.hun | 8.1 | 0.371 |
| newssyscomb2009-engita.eng.ita | 22.1 | 0.540 |
| newssyscomb2009-engspa.eng.spa | 23.8 | 0.531 |
| news-test2008-engces.eng.ces | 9.0 | 0.376 |
| news-test2008-engdeu.eng.deu | 14.2 | 0.451 |
| news-test2008-engfra.eng.fra | 19.8 | 0.500 |
| news-test2008-engspa.eng.spa | 22.8 | 0.518 |
| newstest2009-engces.eng.ces | 9.8 | 0.392 |
| newstest2009-engdeu.eng.deu | 13.7 | 0.454 |
| newstest2009-engfra.eng.fra | 20.7 | 0.514 |
| newstest2009-enghun.eng.hun | 8.4 | 0.370 |
| newstest2009-engita.eng.ita | 22.4 | 0.538 |
| newstest2009-engspa.eng.spa | 23.5 | 0.532 |
| newstest2010-engces.eng.ces | 10.0 | 0.393 |
| newstest2010-engdeu.eng.deu | 15.2 | 0.463 |
| newstest2010-engfra.eng.fra | 22.0 | 0.524 |
| newstest2010-engspa.eng.spa | 27.2 | 0.556 |
| newstest2011-engces.eng.ces | 10.8 | 0.392 |
| newstest2011-engdeu.eng.deu | 14.2 | 0.449 |
| newstest2011-engfra.eng.fra | 24.3 | 0.544 |
| newstest2011-engspa.eng.spa | 28.3 | 0.559 |
| newstest2012-engces.eng.ces | 9.9 | 0.377 |
| newstest2012-engdeu.eng.deu | 14.3 | 0.449 |
| newstest2012-engfra.eng.fra | 23.2 | 0.530 |
| newstest2012-engrus.eng.rus | 16.0 | 0.463 |
| newstest2012-engspa.eng.spa | 27.8 | 0.555 |
| newstest2013-engces.eng.ces | 11.0 | 0.392 |
| newstest2013-engdeu.eng.deu | 16.4 | 0.469 |
| newstest2013-engfra.eng.fra | 22.6 | 0.515 |
| newstest2013-engrus.eng.rus | 12.1 | 0.414 |
| newstest2013-engspa.eng.spa | 24.9 | 0.532 |
| newstest2014-hien-enghin.eng.hin | 7.2 | 0.311 |
| newstest2015-encs-engces.eng.ces | 10.9 | 0.396 |
| newstest2015-ende-engdeu.eng.deu | 18.3 | 0.490 |
| newstest2015-enfi-engfin.eng.fin | 10.1 | 0.421 |
| newstest2015-enru-engrus.eng.rus | 14.5 | 0.445 |
| newstest2016-encs-engces.eng.ces | 12.2 | 0.408 |
| newstest2016-ende-engdeu.eng.deu | 21.4 | 0.517 |
| newstest2016-enfi-engfin.eng.fin | 11.2 | 0.435 |
| newstest2016-enro-engron.eng.ron | 16.6 | 0.472 |
| newstest2016-enru-engrus.eng.rus | 13.4 | 0.435 |
| newstest2016-entr-engtur.eng.tur | 8.1 | 0.385 |
| newstest2017-encs-engces.eng.ces | 9.6 | 0.377 |
| newstest2017-ende-engdeu.eng.deu | 17.9 | 0.482 |
| newstest2017-enfi-engfin.eng.fin | 11.8 | 0.440 |
| newstest2017-enlv-englav.eng.lav | 9.6 | 0.412 |
| newstest2017-enru-engrus.eng.rus | 14.1 | 0.446 |
| newstest2017-entr-engtur.eng.tur | 8.0 | 0.378 |
| newstest2017-enzh-engzho.eng.zho | 16.8 | 0.175 |
| newstest2018-encs-engces.eng.ces | 9.8 | 0.380 |
| newstest2018-ende-engdeu.eng.deu | 23.8 | 0.536 |
| newstest2018-enet-engest.eng.est | 11.8 | 0.433 |
| newstest2018-enfi-engfin.eng.fin | 7.8 | 0.398 |
| newstest2018-enru-engrus.eng.rus | 12.2 | 0.434 |
| newstest2018-entr-engtur.eng.tur | 7.5 | 0.383 |
| newstest2018-enzh-engzho.eng.zho | 18.3 | 0.179 |
| newstest2019-encs-engces.eng.ces | 10.7 | 0.389 |
| newstest2019-ende-engdeu.eng.deu | 21.0 | 0.512 |
| newstest2019-enfi-engfin.eng.fin | 10.4 | 0.420 |
| newstest2019-engu-engguj.eng.guj | 5.8 | 0.297 |
| newstest2019-enlt-englit.eng.lit | 8.0 | 0.388 |
| newstest2019-enru-engrus.eng.rus | 13.0 | 0.415 |
| newstest2019-enzh-engzho.eng.zho | 15.0 | 0.192 |
| newstestB2016-enfi-engfin.eng.fin | 9.0 | 0.414 |
| newstestB2017-enfi-engfin.eng.fin | 9.5 | 0.415 |
| Tatoeba-test.eng-abk.eng.abk | 4.2 | 0.275 |
| Tatoeba-test.eng-ady.eng.ady | 0.4 | 0.006 |
| Tatoeba-test.eng-afh.eng.afh | 1.0 | 0.058 |
| Tatoeba-test.eng-afr.eng.afr | 47.0 | 0.663 |
| Tatoeba-test.eng-akl.eng.akl | 2.7 | 0.080 |
| Tatoeba-test.eng-amh.eng.amh | 8.5 | 0.455 |
| Tatoeba-test.eng-ang.eng.ang | 6.2 | 0.138 |
| Tatoeba-test.eng-ara.eng.ara | 6.3 | 0.325 |
| Tatoeba-test.eng-arg.eng.arg | 1.5 | 0.107 |
| Tatoeba-test.eng-asm.eng.asm | 2.1 | 0.265 |
| Tatoeba-test.eng-ast.eng.ast | 15.7 | 0.393 |
| Tatoeba-test.eng-avk.eng.avk | 0.2 | 0.095 |
| Tatoeba-test.eng-awa.eng.awa | 0.1 | 0.002 |
| Tatoeba-test.eng-aze.eng.aze | 19.0 | 0.500 |
| Tatoeba-test.eng-bak.eng.bak | 12.7 | 0.379 |
| Tatoeba-test.eng-bam.eng.bam | 8.3 | 0.037 |
| Tatoeba-test.eng-bel.eng.bel | 13.5 | 0.396 |
| Tatoeba-test.eng-ben.eng.ben | 10.0 | 0.383 |
| Tatoeba-test.eng-bho.eng.bho | 0.1 | 0.003 |
| Tatoeba-test.eng-bod.eng.bod | 0.0 | 0.147 |
| Tatoeba-test.eng-bre.eng.bre | 7.6 | 0.275 |
| Tatoeba-test.eng-brx.eng.brx | 0.8 | 0.060 |
| Tatoeba-test.eng-bul.eng.bul | 32.1 | 0.542 |
| Tatoeba-test.eng-cat.eng.cat | 37.0 | 0.595 |
| Tatoeba-test.eng-ceb.eng.ceb | 9.6 | 0.409 |
| Tatoeba-test.eng-ces.eng.ces | 24.0 | 0.475 |
| Tatoeba-test.eng-cha.eng.cha | 3.9 | 0.228 |
| Tatoeba-test.eng-che.eng.che | 0.7 | 0.013 |
| Tatoeba-test.eng-chm.eng.chm | 2.6 | 0.212 |
| Tatoeba-test.eng-chr.eng.chr | 6.0 | 0.190 |
| Tatoeba-test.eng-chv.eng.chv | 6.5 | 0.369 |
| Tatoeba-test.eng-cor.eng.cor | 0.9 | 0.086 |
| Tatoeba-test.eng-cos.eng.cos | 4.2 | 0.174 |
| Tatoeba-test.eng-crh.eng.crh | 9.9 | 0.361 |
| Tatoeba-test.eng-csb.eng.csb | 3.4 | 0.230 |
| Tatoeba-test.eng-cym.eng.cym | 18.0 | 0.418 |
| Tatoeba-test.eng-dan.eng.dan | 42.5 | 0.624 |
| Tatoeba-test.eng-deu.eng.deu | 25.2 | 0.505 |
| Tatoeba-test.eng-dsb.eng.dsb | 0.9 | 0.121 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.3 | 0.084 |
| Tatoeba-test.eng-dws.eng.dws | 0.2 | 0.040 |
| Tatoeba-test.eng-egl.eng.egl | 0.4 | 0.085 |
| Tatoeba-test.eng-ell.eng.ell | 28.7 | 0.543 |
| Tatoeba-test.eng-enm.eng.enm | 3.3 | 0.295 |
| Tatoeba-test.eng-epo.eng.epo | 33.4 | 0.570 |
| Tatoeba-test.eng-est.eng.est | 30.3 | 0.545 |
| Tatoeba-test.eng-eus.eng.eus | 18.5 | 0.486 |
| Tatoeba-test.eng-ewe.eng.ewe | 6.8 | 0.272 |
| Tatoeba-test.eng-ext.eng.ext | 5.0 | 0.228 |
| Tatoeba-test.eng-fao.eng.fao | 5.2 | 0.277 |
| Tatoeba-test.eng-fas.eng.fas | 6.9 | 0.265 |
| Tatoeba-test.eng-fij.eng.fij | 31.5 | 0.365 |
| Tatoeba-test.eng-fin.eng.fin | 18.5 | 0.459 |
| Tatoeba-test.eng-fkv.eng.fkv | 0.9 | 0.132 |
| Tatoeba-test.eng-fra.eng.fra | 31.5 | 0.546 |
| Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.128 |
| Tatoeba-test.eng-frr.eng.frr | 3.0 | 0.025 |
| Tatoeba-test.eng-fry.eng.fry | 14.4 | 0.387 |
| Tatoeba-test.eng-ful.eng.ful | 0.4 | 0.061 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.3 | 0.075 |
| Tatoeba-test.eng-gil.eng.gil | 47.4 | 0.706 |
| Tatoeba-test.eng-gla.eng.gla | 10.9 | 0.341 |
| Tatoeba-test.eng-gle.eng.gle | 26.8 | 0.493 |
| Tatoeba-test.eng-glg.eng.glg | 32.5 | 0.565 |
| Tatoeba-test.eng-glv.eng.glv | 21.5 | 0.395 |
| Tatoeba-test.eng-gos.eng.gos | 0.3 | 0.124 |
| Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 |
| Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 |
| Tatoeba-test.eng-grn.eng.grn | 1.5 | 0.129 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.6 | 0.106 |
| Tatoeba-test.eng-guj.eng.guj | 15.4 | 0.347 |
| Tatoeba-test.eng-hat.eng.hat | 31.1 | 0.527 |
| Tatoeba-test.eng-hau.eng.hau | 6.5 | 0.385 |
| Tatoeba-test.eng-haw.eng.haw | 0.2 | 0.066 |
| Tatoeba-test.eng-hbs.eng.hbs | 28.7 | 0.531 |
| Tatoeba-test.eng-heb.eng.heb | 21.3 | 0.443 |
| Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.268 |
| Tatoeba-test.eng-hil.eng.hil | 12.0 | 0.463 |
| Tatoeba-test.eng-hin.eng.hin | 13.0 | 0.401 |
| Tatoeba-test.eng-hmn.eng.hmn | 0.2 | 0.073 |
| Tatoeba-test.eng-hoc.eng.hoc | 0.2 | 0.077 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.308 |
| Tatoeba-test.eng-hun.eng.hun | 17.1 | 0.431 |
| Tatoeba-test.eng-hye.eng.hye | 15.0 | 0.378 |
| Tatoeba-test.eng-iba.eng.iba | 16.0 | 0.437 |
| Tatoeba-test.eng-ibo.eng.ibo | 2.9 | 0.221 |
| Tatoeba-test.eng-ido.eng.ido | 11.5 | 0.403 |
| Tatoeba-test.eng-iku.eng.iku | 2.3 | 0.089 |
| Tatoeba-test.eng-ile.eng.ile | 4.3 | 0.282 |
| Tatoeba-test.eng-ilo.eng.ilo | 26.4 | 0.522 |
| Tatoeba-test.eng-ina.eng.ina | 20.9 | 0.493 |
| Tatoeba-test.eng-isl.eng.isl | 12.5 | 0.375 |
| Tatoeba-test.eng-ita.eng.ita | 33.9 | 0.592 |
| Tatoeba-test.eng-izh.eng.izh | 4.6 | 0.050 |
| Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.328 |
| Tatoeba-test.eng-jbo.eng.jbo | 0.1 | 0.123 |
| Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 |
| Tatoeba-test.eng-jpn.eng.jpn | 0.0 | 0.000 |
| Tatoeba-test.eng-kab.eng.kab | 5.9 | 0.261 |
| Tatoeba-test.eng-kal.eng.kal | 13.4 | 0.382 |
| Tatoeba-test.eng-kan.eng.kan | 4.8 | 0.358 |
| Tatoeba-test.eng-kat.eng.kat | 1.8 | 0.115 |
| Tatoeba-test.eng-kaz.eng.kaz | 8.8 | 0.354 |
| Tatoeba-test.eng-kek.eng.kek | 3.7 | 0.188 |
| Tatoeba-test.eng-kha.eng.kha | 0.5 | 0.094 |
| Tatoeba-test.eng-khm.eng.khm | 0.4 | 0.243 |
| Tatoeba-test.eng-kin.eng.kin | 5.2 | 0.362 |
| Tatoeba-test.eng-kir.eng.kir | 17.2 | 0.416 |
| Tatoeba-test.eng-kjh.eng.kjh | 0.6 | 0.009 |
| Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.005 |
| Tatoeba-test.eng-kom.eng.kom | 2.4 | 0.012 |
| Tatoeba-test.eng-krl.eng.krl | 2.0 | 0.099 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.4 | 0.074 |
| Tatoeba-test.eng-kum.eng.kum | 0.9 | 0.007 |
| Tatoeba-test.eng-kur.eng.kur | 9.1 | 0.174 |
| Tatoeba-test.eng-lad.eng.lad | 1.2 | 0.154 |
| Tatoeba-test.eng-lah.eng.lah | 0.1 | 0.001 |
| Tatoeba-test.eng-lao.eng.lao | 0.6 | 0.426 |
| Tatoeba-test.eng-lat.eng.lat | 8.2 | 0.366 |
| Tatoeba-test.eng-lav.eng.lav | 20.4 | 0.475 |
| Tatoeba-test.eng-ldn.eng.ldn | 0.3 | 0.059 |
| Tatoeba-test.eng-lfn.eng.lfn | 0.5 | 0.104 |
| Tatoeba-test.eng-lij.eng.lij | 0.2 | 0.094 |
| Tatoeba-test.eng-lin.eng.lin | 1.2 | 0.276 |
| Tatoeba-test.eng-lit.eng.lit | 17.4 | 0.488 |
| Tatoeba-test.eng-liv.eng.liv | 0.3 | 0.039 |
| Tatoeba-test.eng-lkt.eng.lkt | 0.3 | 0.041 |
| Tatoeba-test.eng-lld.eng.lld | 0.1 | 0.083 |
| Tatoeba-test.eng-lmo.eng.lmo | 1.4 | 0.154 |
| Tatoeba-test.eng-ltz.eng.ltz | 19.1 | 0.395 |
| Tatoeba-test.eng-lug.eng.lug | 4.2 | 0.382 |
| Tatoeba-test.eng-mad.eng.mad | 2.1 | 0.075 |
| Tatoeba-test.eng-mah.eng.mah | 9.5 | 0.331 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.372 |
| Tatoeba-test.eng-mal.eng.mal | 8.3 | 0.437 |
| Tatoeba-test.eng-mar.eng.mar | 13.5 | 0.410 |
| Tatoeba-test.eng-mdf.eng.mdf | 2.3 | 0.008 |
| Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.905 |
| Tatoeba-test.eng-mic.eng.mic | 7.6 | 0.214 |
| Tatoeba-test.eng-mkd.eng.mkd | 31.8 | 0.540 |
| Tatoeba-test.eng-mlg.eng.mlg | 31.3 | 0.464 |
| Tatoeba-test.eng-mlt.eng.mlt | 11.7 | 0.427 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.1 | 0.000 |
| Tatoeba-test.eng-moh.eng.moh | 0.6 | 0.067 |
| Tatoeba-test.eng-mon.eng.mon | 8.5 | 0.323 |
| Tatoeba-test.eng-mri.eng.mri | 8.5 | 0.320 |
| Tatoeba-test.eng-msa.eng.msa | 24.5 | 0.498 |
| Tatoeba-test.eng.multi | 22.4 | 0.451 |
| Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.169 |
| Tatoeba-test.eng-mya.eng.mya | 0.2 | 0.123 |
| Tatoeba-test.eng-myv.eng.myv | 1.1 | 0.014 |
| Tatoeba-test.eng-nau.eng.nau | 0.6 | 0.109 |
| Tatoeba-test.eng-nav.eng.nav | 1.8 | 0.149 |
| Tatoeba-test.eng-nds.eng.nds | 11.3 | 0.365 |
| Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.004 |
| Tatoeba-test.eng-niu.eng.niu | 34.4 | 0.501 |
| Tatoeba-test.eng-nld.eng.nld | 37.6 | 0.598 |
| Tatoeba-test.eng-nog.eng.nog | 0.2 | 0.010 |
| Tatoeba-test.eng-non.eng.non | 0.2 | 0.096 |
| Tatoeba-test.eng-nor.eng.nor | 36.3 | 0.577 |
| Tatoeba-test.eng-nov.eng.nov | 0.9 | 0.180 |
| Tatoeba-test.eng-nya.eng.nya | 9.8 | 0.524 |
| Tatoeba-test.eng-oci.eng.oci | 6.3 | 0.288 |
| Tatoeba-test.eng-ori.eng.ori | 5.3 | 0.273 |
| Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.007 |
| Tatoeba-test.eng-oss.eng.oss | 3.0 | 0.230 |
| Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.053 |
| Tatoeba-test.eng-pag.eng.pag | 20.2 | 0.513 |
| Tatoeba-test.eng-pan.eng.pan | 6.4 | 0.301 |
| Tatoeba-test.eng-pap.eng.pap | 44.7 | 0.624 |
| Tatoeba-test.eng-pau.eng.pau | 0.8 | 0.098 |
| Tatoeba-test.eng-pdc.eng.pdc | 2.9 | 0.143 |
| Tatoeba-test.eng-pms.eng.pms | 0.6 | 0.124 |
| Tatoeba-test.eng-pol.eng.pol | 22.7 | 0.500 |
| Tatoeba-test.eng-por.eng.por | 31.6 | 0.570 |
| Tatoeba-test.eng-ppl.eng.ppl | 0.5 | 0.085 |
| Tatoeba-test.eng-prg.eng.prg | 0.1 | 0.078 |
| Tatoeba-test.eng-pus.eng.pus | 0.9 | 0.137 |
| Tatoeba-test.eng-quc.eng.quc | 2.7 | 0.255 |
| Tatoeba-test.eng-qya.eng.qya | 0.4 | 0.084 |
| Tatoeba-test.eng-rap.eng.rap | 1.9 | 0.050 |
| Tatoeba-test.eng-rif.eng.rif | 1.3 | 0.102 |
| Tatoeba-test.eng-roh.eng.roh | 1.4 | 0.169 |
| Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.329 |
| Tatoeba-test.eng-ron.eng.ron | 27.0 | 0.530 |
| Tatoeba-test.eng-rue.eng.rue | 0.1 | 0.009 |
| Tatoeba-test.eng-run.eng.run | 9.8 | 0.434 |
| Tatoeba-test.eng-rus.eng.rus | 22.2 | 0.465 |
| Tatoeba-test.eng-sag.eng.sag | 4.8 | 0.155 |
| Tatoeba-test.eng-sah.eng.sah | 0.2 | 0.007 |
| Tatoeba-test.eng-san.eng.san | 1.7 | 0.143 |
| Tatoeba-test.eng-scn.eng.scn | 1.5 | 0.083 |
| Tatoeba-test.eng-sco.eng.sco | 30.3 | 0.514 |
| Tatoeba-test.eng-sgs.eng.sgs | 1.6 | 0.104 |
| Tatoeba-test.eng-shs.eng.shs | 0.7 | 0.049 |
| Tatoeba-test.eng-shy.eng.shy | 0.6 | 0.064 |
| Tatoeba-test.eng-sin.eng.sin | 5.4 | 0.317 |
| Tatoeba-test.eng-sjn.eng.sjn | 0.3 | 0.074 |
| Tatoeba-test.eng-slv.eng.slv | 12.8 | 0.313 |
| Tatoeba-test.eng-sma.eng.sma | 0.8 | 0.063 |
| Tatoeba-test.eng-sme.eng.sme | 13.2 | 0.290 |
| Tatoeba-test.eng-smo.eng.smo | 12.1 | 0.416 |
| Tatoeba-test.eng-sna.eng.sna | 27.1 | 0.533 |
| Tatoeba-test.eng-snd.eng.snd | 6.0 | 0.359 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.274 |
| Tatoeba-test.eng-spa.eng.spa | 36.7 | 0.603 |
| Tatoeba-test.eng-sqi.eng.sqi | 32.3 | 0.573 |
| Tatoeba-test.eng-stq.eng.stq | 0.6 | 0.198 |
| Tatoeba-test.eng-sun.eng.sun | 39.0 | 0.447 |
| Tatoeba-test.eng-swa.eng.swa | 1.1 | 0.109 |
| Tatoeba-test.eng-swe.eng.swe | 42.7 | 0.614 |
| Tatoeba-test.eng-swg.eng.swg | 0.6 | 0.118 |
| Tatoeba-test.eng-tah.eng.tah | 12.4 | 0.294 |
| Tatoeba-test.eng-tam.eng.tam | 5.0 | 0.404 |
| Tatoeba-test.eng-tat.eng.tat | 9.9 | 0.326 |
| Tatoeba-test.eng-tel.eng.tel | 4.7 | 0.326 |
| Tatoeba-test.eng-tet.eng.tet | 0.7 | 0.100 |
| Tatoeba-test.eng-tgk.eng.tgk | 5.5 | 0.304 |
| Tatoeba-test.eng-tha.eng.tha | 2.2 | 0.456 |
| Tatoeba-test.eng-tir.eng.tir | 1.5 | 0.197 |
| Tatoeba-test.eng-tlh.eng.tlh | 0.0 | 0.032 |
| Tatoeba-test.eng-tly.eng.tly | 0.3 | 0.061 |
| Tatoeba-test.eng-toi.eng.toi | 8.3 | 0.219 |
| Tatoeba-test.eng-ton.eng.ton | 32.7 | 0.619 |
| Tatoeba-test.eng-tpw.eng.tpw | 1.4 | 0.136 |
| Tatoeba-test.eng-tso.eng.tso | 9.6 | 0.465 |
| Tatoeba-test.eng-tuk.eng.tuk | 9.4 | 0.383 |
| Tatoeba-test.eng-tur.eng.tur | 24.1 | 0.542 |
| Tatoeba-test.eng-tvl.eng.tvl | 8.9 | 0.398 |
| Tatoeba-test.eng-tyv.eng.tyv | 10.4 | 0.249 |
| Tatoeba-test.eng-tzl.eng.tzl | 0.2 | 0.098 |
| Tatoeba-test.eng-udm.eng.udm | 6.5 | 0.212 |
| Tatoeba-test.eng-uig.eng.uig | 2.1 | 0.266 |
| Tatoeba-test.eng-ukr.eng.ukr | 24.3 | 0.479 |
| Tatoeba-test.eng-umb.eng.umb | 4.4 | 0.274 |
| Tatoeba-test.eng-urd.eng.urd | 8.6 | 0.344 |
| Tatoeba-test.eng-uzb.eng.uzb | 6.9 | 0.343 |
| Tatoeba-test.eng-vec.eng.vec | 1.0 | 0.094 |
| Tatoeba-test.eng-vie.eng.vie | 23.2 | 0.420 |
| Tatoeba-test.eng-vol.eng.vol | 0.3 | 0.086 |
| Tatoeba-test.eng-war.eng.war | 11.4 | 0.415 |
| Tatoeba-test.eng-wln.eng.wln | 8.4 | 0.218 |
| Tatoeba-test.eng-wol.eng.wol | 11.5 | 0.252 |
| Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.007 |
| Tatoeba-test.eng-xho.eng.xho | 19.5 | 0.552 |
| Tatoeba-test.eng-yid.eng.yid | 4.0 | 0.256 |
| Tatoeba-test.eng-yor.eng.yor | 8.8 | 0.247 |
| Tatoeba-test.eng-zho.eng.zho | 21.8 | 0.192 |
| Tatoeba-test.eng-zul.eng.zul | 34.3 | 0.655 |
| Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.080 |
### System Info:
- hf_name: eng-mul
- source_languages: eng
- target_languages: mul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul']
- src_constituents: {'eng'}
- tgt_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 
'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: mul
- short_pair: en-mul
- chrF2_score: 0.451
- bleu: 22.4
- brevity_penalty: 0.987
- ref_len: 68724.0
- src_name: English
- tgt_name: Multiple languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: mul
- prefer_old: False
- long_pair: eng-mul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
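The `brevity_penalty` field above is the standard BLEU brevity penalty, which scales the score down when system output is shorter than the reference. A minimal sketch (function name ours):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least
    as long as the reference, else exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

The reported value of 0.987 against `ref_len` 68724 implies the system's output was only slightly (roughly 1.3%) shorter than the reference overall.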
|
Helsinki-NLP/opus-mt-en-mt
|
Helsinki-NLP
| 2023-08-16T11:30:34Z | 28,743 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mt
* source languages: en
* target languages: mt
* OPUS readme: [en-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mt | 47.5 | 0.640 |
| Tatoeba.en.mt | 25.0 | 0.620 |
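The BLEU figures in these tables combine clipped n-gram precisions with a brevity penalty. A simplified, unsmoothed sentence-level sketch (for illustration only; the reported corpus-level scores come from the official evaluation tooling):

```python
import math
from collections import Counter

def sentence_bleu(hyp_tokens: list, ref_tokens: list, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU without smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp_tokens[i:i + n])
                             for i in range(len(hyp_tokens) - n + 1))
        ref_ngrams = Counter(tuple(ref_tokens[i:i + n])
                             for i in range(len(ref_tokens) - n + 1))
        total = sum(hyp_ngrams.values())
        if total == 0:
            return 0.0  # hypothesis shorter than n tokens
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        if overlap == 0:
            return 0.0  # unsmoothed BLEU is zero if any precision is zero
        precisions.append(overlap / total)
    bp = (1.0 if len(hyp_tokens) >= len(ref_tokens)
          else math.exp(1.0 - len(ref_tokens) / len(hyp_tokens)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Note that unsmoothed sentence BLEU collapses to zero whenever any n-gram order has no matches, which is why corpus-level aggregation (as in these benchmarks) is the standard way to report it.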
|
Helsinki-NLP/opus-mt-en-mos
|
Helsinki-NLP
| 2023-08-16T11:30:32Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mos
* source languages: en
* target languages: mos
* OPUS readme: [en-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mos | 26.9 | 0.417 |
|
Helsinki-NLP/opus-mt-en-mkh
|
Helsinki-NLP
| 2023-08-16T11:30:30Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"km",
"mkh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- vi
- km
- mkh
tags:
- translation
license: apache-2.0
---
### eng-mkh
* source group: English
* target group: Mon-Khmer languages
* OPUS readme: [eng-mkh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md)
* model: transformer
* source language(s): eng
* target language(s): kha khm khm_Latn mnw vie vie_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token of the form `>>id<<` (where `id` is a valid target-language ID) is required
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kha.eng.kha | 0.1 | 0.015 |
| Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.226 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.7 | 0.003 |
| Tatoeba-test.eng.multi | 16.5 | 0.330 |
| Tatoeba-test.eng-vie.eng.vie | 33.7 | 0.513 |
### System Info:
- hf_name: eng-mkh
- source_languages: eng
- target_languages: mkh
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi', 'km', 'mkh']
- src_constituents: {'eng'}
- tgt_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: mkh
- short_pair: en-mkh
- chrF2_score: 0.33
- bleu: 16.5
- brevity_penalty: 1.0
- ref_len: 34734.0
- src_name: English
- tgt_name: Mon-Khmer languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: mkh
- prefer_old: False
- long_pair: eng-mkh
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-mh
|
Helsinki-NLP
| 2023-08-16T11:30:28Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mh
* source languages: en
* target languages: mh
* OPUS readme: [en-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mh | 29.7 | 0.479 |
|
Helsinki-NLP/opus-mt-en-mg
|
Helsinki-NLP
| 2023-08-16T11:30:27Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mg
* source languages: en
* target languages: mg
* OPUS readme: [en-mg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.en.mg | 22.3 | 0.565 |
| Tatoeba.en.mg | 35.5 | 0.548 |
|
Helsinki-NLP/opus-mt-en-mfe
|
Helsinki-NLP
| 2023-08-16T11:30:25Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mfe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mfe
* source languages: en
* target languages: mfe
* OPUS readme: [en-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mfe | 32.1 | 0.509 |
|
Helsinki-NLP/opus-mt-en-map
|
Helsinki-NLP
| 2023-08-16T11:30:24Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"map",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- map
tags:
- translation
license: apache-2.0
---
### eng-map
* source group: English
* target group: Austronesian languages
* OPUS readme: [eng-map](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target-language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.eval.txt)
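The `>>id<<` convention above can be sketched with the Hugging Face MarianMT classes. In this sketch, `with_target_token` and `translate` are hypothetical helpers (not part of the transformers API), and the heavyweight imports are deferred so nothing is downloaded until a translation is actually requested:

```python
# Sketch of using this multilingual model with Hugging Face transformers.
# `with_target_token` is a hypothetical helper for the >>id<< convention
# described above; it is not part of the transformers library.

def with_target_token(lang_id: str, text: str) -> str:
    """Prefix the required sentence-initial target-language token."""
    return f">>{lang_id}<< {text}"

def translate(texts, lang_id, model_name="Helsinki-NLP/opus-mt-en-map"):
    """Translate English `texts` into the target language `lang_id`.

    Imports are deferred so the helper above works without
    `transformers`/`sentencepiece` installed.
    """
    from transformers import MarianMTModel, MarianTokenizer
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer([with_target_token(lang_id, t) for t in texts],
                      return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

# Example (downloads the checkpoint on first use); "ceb" is one of the
# target IDs listed above:
# translate(["How are you?"], "ceb")
```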
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 2.2 | 0.103 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.7 | 0.425 |
| Tatoeba-test.eng-cha.eng.cha | 3.2 | 0.201 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.5 | 0.120 |
| Tatoeba-test.eng-fij.eng.fij | 26.8 | 0.453 |
| Tatoeba-test.eng-gil.eng.gil | 59.3 | 0.762 |
| Tatoeba-test.eng-haw.eng.haw | 1.0 | 0.116 |
| Tatoeba-test.eng-hil.eng.hil | 19.0 | 0.517 |
| Tatoeba-test.eng-iba.eng.iba | 15.5 | 0.400 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.6 | 0.591 |
| Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.301 |
| Tatoeba-test.eng-lkt.eng.lkt | 1.0 | 0.064 |
| Tatoeba-test.eng-mad.eng.mad | 1.1 | 0.142 |
| Tatoeba-test.eng-mah.eng.mah | 9.1 | 0.374 |
| Tatoeba-test.eng-mlg.eng.mlg | 35.4 | 0.526 |
| Tatoeba-test.eng-mri.eng.mri | 7.6 | 0.309 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.565 |
| Tatoeba-test.eng.multi | 17.6 | 0.411 |
| Tatoeba-test.eng-nau.eng.nau | 1.4 | 0.098 |
| Tatoeba-test.eng-niu.eng.niu | 40.1 | 0.560 |
| Tatoeba-test.eng-pag.eng.pag | 16.8 | 0.526 |
| Tatoeba-test.eng-pau.eng.pau | 1.9 | 0.139 |
| Tatoeba-test.eng-rap.eng.rap | 2.7 | 0.090 |
| Tatoeba-test.eng-smo.eng.smo | 24.9 | 0.453 |
| Tatoeba-test.eng-sun.eng.sun | 33.2 | 0.439 |
| Tatoeba-test.eng-tah.eng.tah | 12.5 | 0.278 |
| Tatoeba-test.eng-tet.eng.tet | 1.6 | 0.140 |
| Tatoeba-test.eng-ton.eng.ton | 25.8 | 0.530 |
| Tatoeba-test.eng-tvl.eng.tvl | 31.1 | 0.523 |
| Tatoeba-test.eng-war.eng.war | 12.8 | 0.436 |
### System Info:
- hf_name: eng-map
- source_languages: eng
- target_languages: map
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'map']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: map
- short_pair: en-map
- chrF2_score: 0.411
- bleu: 17.6
- brevity_penalty: 1.0
- ref_len: 66963.0
- src_name: English
- tgt_name: Austronesian languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: map
- prefer_old: False
- long_pair: eng-map
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-lue
|
Helsinki-NLP
| 2023-08-16T11:30:20Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lue
* source languages: en
* target languages: lue
* OPUS readme: [en-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lue | 30.1 | 0.558 |
|
Helsinki-NLP/opus-mt-en-lua
|
Helsinki-NLP
| 2023-08-16T11:30:19Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lua
* source languages: en
* target languages: lua
* OPUS readme: [en-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lua | 35.3 | 0.578 |
|
Helsinki-NLP/opus-mt-en-lu
|
Helsinki-NLP
| 2023-08-16T11:30:18Z | 148 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lu
* source languages: en
* target languages: lu
* OPUS readme: [en-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lu | 34.1 | 0.564 |
|
Helsinki-NLP/opus-mt-en-loz
|
Helsinki-NLP
| 2023-08-16T11:30:17Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"loz",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-loz
* source languages: en
* target languages: loz
* OPUS readme: [en-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.loz | 40.1 | 0.596 |
|
Helsinki-NLP/opus-mt-en-kwn
|
Helsinki-NLP
| 2023-08-16T11:30:12Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kwn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kwn
* source languages: en
* target languages: kwn
* OPUS readme: [en-kwn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwn | 27.6 | 0.513 |
|
Helsinki-NLP/opus-mt-en-kqn
|
Helsinki-NLP
| 2023-08-16T11:30:11Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kqn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kqn
* source languages: en
* target languages: kqn
* OPUS readme: [en-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kqn | 33.1 | 0.567 |
|
Helsinki-NLP/opus-mt-en-kj
|
Helsinki-NLP
| 2023-08-16T11:30:10Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kj
* source languages: en
* target languages: kj
* OPUS readme: [en-kj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kj | 29.6 | 0.539 |
|
Helsinki-NLP/opus-mt-en-kg
|
Helsinki-NLP
| 2023-08-16T11:30:09Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kg
* source languages: en
* target languages: kg
* OPUS readme: [en-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kg | 39.6 | 0.613 |
|
Helsinki-NLP/opus-mt-en-jap
|
Helsinki-NLP
| 2023-08-16T11:30:07Z | 9,790 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"jap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-jap
* source languages: en
* target languages: jap
* OPUS readme: [en-jap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-jap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.en.jap | 42.1 | 0.960 |
|
Helsinki-NLP/opus-mt-en-itc
|
Helsinki-NLP
| 2023-08-16T11:30:06Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- it
- ca
- rm
- es
- ro
- gl
- sc
- co
- wa
- pt
- oc
- an
- id
- fr
- ht
- itc
tags:
- translation
license: apache-2.0
---
### eng-itc
* source group: English
* target group: Italic languages
* OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target-language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt)
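The same `>>id<<` token convention works through the transformers `pipeline` API; `tag` and `translate_en_itc` below are hypothetical helpers for illustration, with the import deferred so the snippet needs `transformers` and `sentencepiece` only when run:

```python
# Sketch of the >>id<< target-language token with the transformers
# `pipeline` API; `tag` is a hypothetical helper, not a library function.

def tag(lang_id: str, text: str) -> str:
    """Prepend the sentence-initial token, e.g. '>>fra<< Good morning.'"""
    return f">>{lang_id}<< {text}"

def translate_en_itc(texts, lang_id="fra"):
    """Translate English into one Italic target language.

    Deferred import; requires `pip install transformers sentencepiece`
    and downloads the checkpoint on first use.
    """
    from transformers import pipeline
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-itc")
    return [r["translation_text"]
            for r in translator([tag(lang_id, t) for t in texts])]

# "fra" and "spa" are among the target IDs listed above:
# translate_en_itc(["Good morning."], "spa")
```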
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 |
| newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 |
| newssyscomb2009-engita.eng.ita | 28.6 | 0.586 |
| newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 |
| news-test2008-engfra.eng.fra | 25.0 | 0.536 |
| news-test2008-engspa.eng.spa | 27.1 | 0.548 |
| newstest2009-engfra.eng.fra | 26.7 | 0.557 |
| newstest2009-engita.eng.ita | 28.9 | 0.583 |
| newstest2009-engspa.eng.spa | 28.9 | 0.567 |
| newstest2010-engfra.eng.fra | 29.6 | 0.574 |
| newstest2010-engspa.eng.spa | 33.8 | 0.598 |
| newstest2011-engfra.eng.fra | 30.9 | 0.590 |
| newstest2011-engspa.eng.spa | 34.8 | 0.598 |
| newstest2012-engfra.eng.fra | 29.1 | 0.574 |
| newstest2012-engspa.eng.spa | 34.9 | 0.600 |
| newstest2013-engfra.eng.fra | 30.1 | 0.567 |
| newstest2013-engspa.eng.spa | 31.8 | 0.576 |
| newstest2016-enro-engron.eng.ron | 25.9 | 0.548 |
| Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 |
| Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 |
| Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 |
| Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 |
| Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 |
| Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 |
| Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 |
| Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 |
| Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 |
| Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 |
| Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 |
| Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 |
| Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 |
| Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 |
| Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 |
| Tatoeba-test.eng-msa.eng.msa | 31.3 | 0.549 |
| Tatoeba-test.eng.multi | 38.0 | 0.588 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 |
| Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 |
| Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 |
| Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 |
| Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 |
| Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 |
| Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.181 |
| Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 |
| Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 |
| Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 |
### System Info:
- hf_name: eng-itc
- source_languages: eng
- target_languages: itc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: itc
- short_pair: en-itc
- chrF2_score: 0.588
- bleu: 38.0
- brevity_penalty: 0.967
- ref_len: 73951.0
- src_name: English
- tgt_name: Italic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: itc
- prefer_old: False
- long_pair: eng-itc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Nextcloud-AI/opus-mt-en-it
|
Nextcloud-AI
| 2023-08-16T11:30:05Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-23T10:39:50Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-it
* source languages: en
* target languages: it
* OPUS readme: [en-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-it/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.it | 30.9 | 0.606 |
| newstest2009.en.it | 31.9 | 0.604 |
| Tatoeba.en.it | 48.2 | 0.695 |
|
Helsinki-NLP/opus-mt-en-it
|
Helsinki-NLP
| 2023-08-16T11:30:05Z | 157,454 | 17 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-it
* source languages: en
* target languages: it
* OPUS readme: [en-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-it/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.it | 30.9 | 0.606 |
| newstest2009.en.it | 31.9 | 0.604 |
| Tatoeba.en.it | 48.2 | 0.695 |
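For a single-target model like en-it, no `>>id<<` token is needed; a minimal usage sketch (the `translate_en_it` wrapper is a hypothetical convenience, with the import deferred so the checkpoint is only downloaded when the function is called):

```python
# Minimal usage sketch: no sentence-initial >>id<< token is required for
# a single-target model such as opus-mt-en-it.

def translate_en_it(texts):
    """Translate a list of English sentences into Italian.

    Deferred import; needs `pip install transformers sentencepiece` and
    downloads the checkpoint on first use.
    """
    from transformers import pipeline
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")
    return [r["translation_text"] for r in translator(texts)]

# translate_en_it(["The weather is nice today."])
```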
|
Helsinki-NLP/opus-mt-en-is
|
Helsinki-NLP
| 2023-08-16T11:30:02Z | 1,193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"is",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-is
* source languages: en
* target languages: is
* OPUS readme: [en-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-is/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.is | 25.3 | 0.518 |
|
Helsinki-NLP/opus-mt-en-ine
|
Helsinki-NLP
| 2023-08-16T11:30:01Z | 491 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- ca
- es
- os
- ro
- fy
- cy
- sc
- is
- yi
- lb
- an
- sq
- fr
- ht
- rm
- ps
- af
- uk
- sl
- lt
- bg
- be
- gd
- si
- br
- mk
- or
- mr
- ru
- fo
- co
- oc
- pl
- gl
- nb
- bn
- id
- hy
- da
- gv
- nl
- pt
- hi
- as
- kw
- ga
- sv
- gu
- wa
- lv
- el
- it
- hr
- ur
- nn
- de
- cs
- ine
tags:
- translation
license: apache-2.0
---
### eng-ine
* source group: English
* target group: Indo-European languages
* OPUS readme: [eng-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target-language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.eval.txt)
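Because each input carries its own `>>id<<` token, one batch can mix several target languages. The sketch below illustrates this; the `TARGET_IDS` subset and both helpers are hypothetical (the full ID list is in "target language(s)" above), and the transformers import is deferred:

```python
# Sketch: a single batch can mix target languages, since each input
# carries its own >>id<< token. TARGET_IDS is an illustrative subset of
# the IDs listed in "target language(s)" above; `tagged_batch` and
# `translate_mixed` are hypothetical helpers, not library functions.

TARGET_IDS = {"German": "deu", "French": "fra", "Hindi": "hin"}  # subset

def tagged_batch(pairs):
    """pairs: iterable of (language name, English text) -> tagged inputs."""
    return [f">>{TARGET_IDS[name]}<< {text}" for name, text in pairs]

def translate_mixed(pairs, model_name="Helsinki-NLP/opus-mt-en-ine"):
    """Translate each English text into its own target language.

    Deferred import; downloads the checkpoint on first use.
    """
    from transformers import MarianMTModel, MarianTokenizer
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(tagged_batch(pairs), return_tensors="pt", padding=True)
    return [tokenizer.decode(g, skip_special_tokens=True)
            for g in model.generate(**batch)]

# translate_mixed([("German", "Hello!"), ("French", "Hello!")])
```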
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 6.2 | 0.317 |
| newsdev2016-enro-engron.eng.ron | 22.1 | 0.525 |
| newsdev2017-enlv-englav.eng.lav | 17.4 | 0.486 |
| newsdev2019-engu-engguj.eng.guj | 6.5 | 0.303 |
| newsdev2019-enlt-englit.eng.lit | 14.9 | 0.476 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 26.4 | 0.547 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 30.0 | 0.575 |
| newssyscomb2009-engces.eng.ces | 14.7 | 0.442 |
| newssyscomb2009-engdeu.eng.deu | 16.7 | 0.487 |
| newssyscomb2009-engfra.eng.fra | 24.8 | 0.547 |
| newssyscomb2009-engita.eng.ita | 25.2 | 0.562 |
| newssyscomb2009-engspa.eng.spa | 27.0 | 0.554 |
| news-test2008-engces.eng.ces | 13.0 | 0.417 |
| news-test2008-engdeu.eng.deu | 17.4 | 0.480 |
| news-test2008-engfra.eng.fra | 22.3 | 0.519 |
| news-test2008-engspa.eng.spa | 24.9 | 0.532 |
| newstest2009-engces.eng.ces | 13.6 | 0.432 |
| newstest2009-engdeu.eng.deu | 16.6 | 0.482 |
| newstest2009-engfra.eng.fra | 23.5 | 0.535 |
| newstest2009-engita.eng.ita | 25.5 | 0.561 |
| newstest2009-engspa.eng.spa | 26.3 | 0.551 |
| newstest2010-engces.eng.ces | 14.2 | 0.436 |
| newstest2010-engdeu.eng.deu | 18.3 | 0.492 |
| newstest2010-engfra.eng.fra | 25.7 | 0.550 |
| newstest2010-engspa.eng.spa | 30.5 | 0.578 |
| newstest2011-engces.eng.ces | 15.1 | 0.439 |
| newstest2011-engdeu.eng.deu | 17.1 | 0.478 |
| newstest2011-engfra.eng.fra | 28.0 | 0.569 |
| newstest2011-engspa.eng.spa | 31.9 | 0.580 |
| newstest2012-engces.eng.ces | 13.6 | 0.418 |
| newstest2012-engdeu.eng.deu | 17.0 | 0.475 |
| newstest2012-engfra.eng.fra | 26.1 | 0.553 |
| newstest2012-engrus.eng.rus | 21.4 | 0.506 |
| newstest2012-engspa.eng.spa | 31.4 | 0.577 |
| newstest2013-engces.eng.ces | 15.3 | 0.438 |
| newstest2013-engdeu.eng.deu | 20.3 | 0.501 |
| newstest2013-engfra.eng.fra | 26.0 | 0.540 |
| newstest2013-engrus.eng.rus | 16.1 | 0.449 |
| newstest2013-engspa.eng.spa | 28.6 | 0.555 |
| newstest2014-hien-enghin.eng.hin | 9.5 | 0.344 |
| newstest2015-encs-engces.eng.ces | 14.8 | 0.440 |
| newstest2015-ende-engdeu.eng.deu | 22.6 | 0.523 |
| newstest2015-enru-engrus.eng.rus | 18.8 | 0.483 |
| newstest2016-encs-engces.eng.ces | 16.8 | 0.457 |
| newstest2016-ende-engdeu.eng.deu | 26.2 | 0.555 |
| newstest2016-enro-engron.eng.ron | 21.2 | 0.510 |
| newstest2016-enru-engrus.eng.rus | 17.6 | 0.471 |
| newstest2017-encs-engces.eng.ces | 13.6 | 0.421 |
| newstest2017-ende-engdeu.eng.deu | 21.5 | 0.516 |
| newstest2017-enlv-englav.eng.lav | 13.0 | 0.452 |
| newstest2017-enru-engrus.eng.rus | 18.7 | 0.486 |
| newstest2018-encs-engces.eng.ces | 13.5 | 0.425 |
| newstest2018-ende-engdeu.eng.deu | 29.8 | 0.581 |
| newstest2018-enru-engrus.eng.rus | 16.1 | 0.472 |
| newstest2019-encs-engces.eng.ces | 14.8 | 0.435 |
| newstest2019-ende-engdeu.eng.deu | 26.6 | 0.554 |
| newstest2019-engu-engguj.eng.guj | 6.9 | 0.313 |
| newstest2019-enlt-englit.eng.lit | 10.6 | 0.429 |
| newstest2019-enru-engrus.eng.rus | 17.5 | 0.452 |
| Tatoeba-test.eng-afr.eng.afr | 52.1 | 0.708 |
| Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.131 |
| Tatoeba-test.eng-arg.eng.arg | 1.2 | 0.099 |
| Tatoeba-test.eng-asm.eng.asm | 2.9 | 0.259 |
| Tatoeba-test.eng-ast.eng.ast | 14.1 | 0.408 |
| Tatoeba-test.eng-awa.eng.awa | 0.3 | 0.002 |
| Tatoeba-test.eng-bel.eng.bel | 18.1 | 0.450 |
| Tatoeba-test.eng-ben.eng.ben | 13.5 | 0.432 |
| Tatoeba-test.eng-bho.eng.bho | 0.3 | 0.003 |
| Tatoeba-test.eng-bre.eng.bre | 10.4 | 0.318 |
| Tatoeba-test.eng-bul.eng.bul | 38.7 | 0.592 |
| Tatoeba-test.eng-cat.eng.cat | 42.0 | 0.633 |
| Tatoeba-test.eng-ces.eng.ces | 32.3 | 0.546 |
| Tatoeba-test.eng-cor.eng.cor | 0.5 | 0.079 |
| Tatoeba-test.eng-cos.eng.cos | 3.1 | 0.148 |
| Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.216 |
| Tatoeba-test.eng-cym.eng.cym | 22.4 | 0.470 |
| Tatoeba-test.eng-dan.eng.dan | 49.7 | 0.671 |
| Tatoeba-test.eng-deu.eng.deu | 31.7 | 0.554 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.1 | 0.139 |
| Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.089 |
| Tatoeba-test.eng-ell.eng.ell | 42.7 | 0.640 |
| Tatoeba-test.eng-enm.eng.enm | 3.5 | 0.259 |
| Tatoeba-test.eng-ext.eng.ext | 6.4 | 0.235 |
| Tatoeba-test.eng-fao.eng.fao | 6.6 | 0.285 |
| Tatoeba-test.eng-fas.eng.fas | 5.7 | 0.257 |
| Tatoeba-test.eng-fra.eng.fra | 38.4 | 0.595 |
| Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.149 |
| Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.145 |
| Tatoeba-test.eng-fry.eng.fry | 16.5 | 0.411 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.098 |
| Tatoeba-test.eng-gla.eng.gla | 11.6 | 0.361 |
| Tatoeba-test.eng-gle.eng.gle | 32.5 | 0.546 |
| Tatoeba-test.eng-glg.eng.glg | 38.4 | 0.602 |
| Tatoeba-test.eng-glv.eng.glv | 23.1 | 0.418 |
| Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.137 |
| Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 |
| Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.108 |
| Tatoeba-test.eng-guj.eng.guj | 20.8 | 0.391 |
| Tatoeba-test.eng-hat.eng.hat | 34.0 | 0.537 |
| Tatoeba-test.eng-hbs.eng.hbs | 33.7 | 0.567 |
| Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.269 |
| Tatoeba-test.eng-hin.eng.hin | 15.6 | 0.437 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.4 | 0.320 |
| Tatoeba-test.eng-hye.eng.hye | 17.4 | 0.426 |
| Tatoeba-test.eng-isl.eng.isl | 17.4 | 0.436 |
| Tatoeba-test.eng-ita.eng.ita | 40.4 | 0.636 |
| Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 |
| Tatoeba-test.eng-kok.eng.kok | 6.6 | 0.005 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.8 | 0.123 |
| Tatoeba-test.eng-kur.eng.kur | 10.2 | 0.209 |
| Tatoeba-test.eng-lad.eng.lad | 0.8 | 0.163 |
| Tatoeba-test.eng-lah.eng.lah | 0.2 | 0.001 |
| Tatoeba-test.eng-lat.eng.lat | 9.4 | 0.372 |
| Tatoeba-test.eng-lav.eng.lav | 30.3 | 0.559 |
| Tatoeba-test.eng-lij.eng.lij | 1.0 | 0.130 |
| Tatoeba-test.eng-lit.eng.lit | 25.3 | 0.560 |
| Tatoeba-test.eng-lld.eng.lld | 0.4 | 0.139 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.6 | 0.108 |
| Tatoeba-test.eng-ltz.eng.ltz | 18.1 | 0.388 |
| Tatoeba-test.eng-mai.eng.mai | 17.2 | 0.464 |
| Tatoeba-test.eng-mar.eng.mar | 18.0 | 0.451 |
| Tatoeba-test.eng-mfe.eng.mfe | 81.0 | 0.899 |
| Tatoeba-test.eng-mkd.eng.mkd | 37.6 | 0.587 |
| Tatoeba-test.eng-msa.eng.msa | 27.7 | 0.519 |
| Tatoeba-test.eng.multi | 32.6 | 0.539 |
| Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.134 |
| Tatoeba-test.eng-nds.eng.nds | 14.3 | 0.401 |
| Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.002 |
| Tatoeba-test.eng-nld.eng.nld | 44.0 | 0.642 |
| Tatoeba-test.eng-non.eng.non | 0.7 | 0.118 |
| Tatoeba-test.eng-nor.eng.nor | 42.7 | 0.623 |
| Tatoeba-test.eng-oci.eng.oci | 7.2 | 0.295 |
| Tatoeba-test.eng-ori.eng.ori | 2.7 | 0.257 |
| Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.008 |
| Tatoeba-test.eng-oss.eng.oss | 2.9 | 0.264 |
| Tatoeba-test.eng-pan.eng.pan | 7.4 | 0.337 |
| Tatoeba-test.eng-pap.eng.pap | 48.5 | 0.656 |
| Tatoeba-test.eng-pdc.eng.pdc | 1.8 | 0.145 |
| Tatoeba-test.eng-pms.eng.pms | 0.7 | 0.136 |
| Tatoeba-test.eng-pol.eng.pol | 31.1 | 0.563 |
| Tatoeba-test.eng-por.eng.por | 37.0 | 0.605 |
| Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.100 |
| Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.134 |
| Tatoeba-test.eng-roh.eng.roh | 2.3 | 0.236 |
| Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.340 |
| Tatoeba-test.eng-ron.eng.ron | 34.3 | 0.585 |
| Tatoeba-test.eng-rue.eng.rue | 0.2 | 0.010 |
| Tatoeba-test.eng-rus.eng.rus | 29.6 | 0.526 |
| Tatoeba-test.eng-san.eng.san | 2.4 | 0.125 |
| Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.079 |
| Tatoeba-test.eng-sco.eng.sco | 33.6 | 0.562 |
| Tatoeba-test.eng-sgs.eng.sgs | 3.4 | 0.114 |
| Tatoeba-test.eng-sin.eng.sin | 9.2 | 0.349 |
| Tatoeba-test.eng-slv.eng.slv | 15.6 | 0.334 |
| Tatoeba-test.eng-snd.eng.snd | 9.1 | 0.324 |
| Tatoeba-test.eng-spa.eng.spa | 43.4 | 0.645 |
| Tatoeba-test.eng-sqi.eng.sqi | 39.0 | 0.621 |
| Tatoeba-test.eng-stq.eng.stq | 10.8 | 0.373 |
| Tatoeba-test.eng-swe.eng.swe | 49.9 | 0.663 |
| Tatoeba-test.eng-swg.eng.swg | 0.7 | 0.137 |
| Tatoeba-test.eng-tgk.eng.tgk | 6.4 | 0.346 |
| Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.055 |
| Tatoeba-test.eng-ukr.eng.ukr | 31.4 | 0.536 |
| Tatoeba-test.eng-urd.eng.urd | 11.1 | 0.389 |
| Tatoeba-test.eng-vec.eng.vec | 1.3 | 0.110 |
| Tatoeba-test.eng-wln.eng.wln | 6.8 | 0.233 |
| Tatoeba-test.eng-yid.eng.yid | 5.8 | 0.295 |
| Tatoeba-test.eng-zza.eng.zza | 0.8 | 0.086 |
### System Info:
- hf_name: eng-ine
- source_languages: eng
- target_languages: ine
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'eng'}
- tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: ine
- short_pair: en-ine
- chrF2_score: 0.539
- bleu: 32.6
- brevity_penalty: 0.973
- ref_len: 68664.0
- src_name: English
- tgt_name: Indo-European languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: ine
- prefer_old: False
- long_pair: eng-ine
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
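The System Info above shows `tgt_multilingual: True`, so every input sentence must carry a `>>id<<` target-language token. Below is a minimal sketch of building such inputs; the helper name is our own illustration, and the commented calls assume the standard `transformers` MarianMT API:

```python
def to_target(text: str, lang_id: str) -> str:
    """Build a model input carrying the >>id<< target-language token."""
    return f">>{lang_id}<< {text}"

# fra, spa, and rus are among the target constituents listed above
inputs = [to_target("Good morning!", lid) for lid in ("fra", "spa", "rus")]
print(inputs[0])  # >>fra<< Good morning!

# Reference usage (downloads the checkpoint, so it is left commented):
# from transformers import MarianMTModel, MarianTokenizer
# name = "Helsinki-NLP/opus-mt-en-ine"
# tok = MarianTokenizer.from_pretrained(name)
# model = MarianMTModel.from_pretrained(name)
# batch = tok(inputs, return_tensors="pt", padding=True)
# print(tok.batch_decode(model.generate(**batch), skip_special_tokens=True))
```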
|
Helsinki-NLP/opus-mt-en-ilo
|
Helsinki-NLP
| 2023-08-16T11:29:59Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ilo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ilo
* source languages: en
* target languages: ilo
* OPUS readme: [en-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ilo | 33.2 | 0.584 |
|
Helsinki-NLP/opus-mt-en-iir
|
Helsinki-NLP
| 2023-08-16T11:29:58Z | 141 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- bn
- or
- gu
- mr
- ur
- hi
- ps
- os
- as
- si
- iir
tags:
- translation
license: apache-2.0
---
### eng-iir
* source group: English
* target group: Indo-Iranian languages
* OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md)
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target-language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt)
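Since this model covers many Indo-Iranian targets, each source sentence needs the `>>id<<` token noted above. A small sketch of preparing inputs — the helper is ours, and the commented `transformers` calls assume the standard MarianMT API:

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prepend the >>id<< target-language token expected by multilingual OPUS-MT models."""
    return f">>{lang_id}<< {text}"

# hin and urd appear in the target language list above
src = with_target_token("How are you?", "hin")
print(src)  # >>hin<< How are you?

# Reference usage (downloads the checkpoint, so it is left commented):
# from transformers import MarianMTModel, MarianTokenizer
# name = "Helsinki-NLP/opus-mt-en-iir"
# tok = MarianTokenizer.from_pretrained(name)
# model = MarianMTModel.from_pretrained(name)
# batch = tok([src], return_tensors="pt", padding=True)
# print(tok.batch_decode(model.generate(**batch), skip_special_tokens=True))
```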
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 6.7 | 0.326 |
| newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 |
| newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 |
| newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 |
| Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 |
| Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 |
| Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 |
| Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 |
| Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 |
| Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 |
| Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 |
| Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 |
| Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 |
| Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 |
| Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 |
| Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 |
| Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 |
| Tatoeba-test.eng.multi | 13.7 | 0.392 |
| Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 |
| Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 |
| Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 |
| Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 |
| Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 |
| Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 |
| Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 |
| Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 |
| Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 |
| Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 |
| Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 |
| Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 |
| Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 |
### System Info:
- hf_name: eng-iir
- source_languages: eng
- target_languages: iir
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
- src_constituents: {'eng'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: iir
- short_pair: en-iir
- chrF2_score: 0.392
- bleu: 13.7
- brevity_penalty: 1.0
- ref_len: 63351.0
- src_name: English
- tgt_name: Indo-Iranian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: iir
- prefer_old: False
- long_pair: eng-iir
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-ig
|
Helsinki-NLP
| 2023-08-16T11:29:57Z | 207 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ig",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ig
* source languages: en
* target languages: ig
* OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ig | 39.5 | 0.546 |
| Tatoeba.en.ig | 3.8 | 0.297 |
|
Helsinki-NLP/opus-mt-en-hy
|
Helsinki-NLP
| 2023-08-16T11:29:55Z | 1,079 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- hy
tags:
- translation
license: apache-2.0
---
### eng-hye
* source group: English
* target group: Armenian
* OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): hye
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.hye | 16.6 | 0.404 |
### System Info:
- hf_name: eng-hye
- source_languages: eng
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hy']
- src_constituents: {'eng'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: hye
- short_pair: en-hy
- chrF2_score: 0.404
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 5115.0
- src_name: English
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: hy
- prefer_old: False
- long_pair: eng-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-hu
|
Helsinki-NLP
| 2023-08-16T11:29:54Z | 2,676 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-hu
* source languages: en
* target languages: hu
* OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.hu | 40.1 | 0.628 |
|
Helsinki-NLP/opus-mt-en-ht
|
Helsinki-NLP
| 2023-08-16T11:29:53Z | 469 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ht",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ht
* source languages: en
* target languages: ht
* OPUS readme: [en-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ht | 38.3 | 0.545 |
| Tatoeba.en.ht | 45.2 | 0.592 |
|
Helsinki-NLP/opus-mt-en-ho
|
Helsinki-NLP
| 2023-08-16T11:29:51Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ho",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ho
* source languages: en
* target languages: ho
* OPUS readme: [en-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ho | 33.9 | 0.563 |
|
Helsinki-NLP/opus-mt-en-ha
|
Helsinki-NLP
| 2023-08-16T11:29:47Z | 177 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ha
* source languages: en
* target languages: ha
* OPUS readme: [en-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ha | 34.1 | 0.544 |
| Tatoeba.en.ha | 17.6 | 0.498 |
|
Helsinki-NLP/opus-mt-en-guw
|
Helsinki-NLP
| 2023-08-16T11:29:45Z | 1,783 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"guw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-guw
* source languages: en
* target languages: guw
* OPUS readme: [en-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.guw | 45.7 | 0.634 |
|
Helsinki-NLP/opus-mt-en-grk
|
Helsinki-NLP
| 2023-08-16T11:29:44Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"el",
"grk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- el
- grk
tags:
- translation
license: apache-2.0
---
### eng-grk
* source group: English
* target group: Greek languages
* OPUS readme: [eng-grk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md)
* model: transformer
* source language(s): eng
* target language(s): ell grc_Grek
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target-language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.eval.txt)
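The two valid target IDs for this model are taken from the card above (`ell`, `grc_Grek`); a short sketch of the required prefixing, with our own helper name:

```python
def prefixed(text: str, lang_id: str) -> str:
    """Prefix one sentence with the >>id<< token that selects the target language."""
    return f">>{lang_id}<< {text}"

for lid in ("ell", "grc_Grek"):
    print(prefixed("The sea is calm.", lid))
```

Given the benchmark gap above (53.8 BLEU for Modern Greek versus 0.1 for Ancient Greek), `>>ell<<` is the practical choice in most cases.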
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-ell.eng.ell | 53.8 | 0.723 |
| Tatoeba-test.eng-grc.eng.grc | 0.1 | 0.102 |
| Tatoeba-test.eng.multi | 45.6 | 0.677 |
### System Info:
- hf_name: eng-grk
- source_languages: eng
- target_languages: grk
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'el', 'grk']
- src_constituents: {'eng'}
- tgt_constituents: {'grc_Grek', 'ell'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: grk
- short_pair: en-grk
- chrF2_score: 0.677
- bleu: 45.6
- brevity_penalty: 1.0
- ref_len: 59951.0
- src_name: English
- tgt_name: Greek languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: grk
- prefer_old: False
- long_pair: eng-grk
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-gmw
|
Helsinki-NLP
| 2023-08-16T11:29:43Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"nl",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- nl
- lb
- af
- de
- fy
- yi
- gmw
tags:
- translation
license: apache-2.0
---
### eng-gmw
* source group: English
* target group: West Germanic languages
* OPUS readme: [eng-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target-language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.eval.txt)
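When translating a whole batch into one West Germanic target, every sentence in the batch gets the same `>>id<<` token. A minimal sketch (the helper is our own, not part of the `transformers` API):

```python
def tag_batch(texts, lang_id):
    """Prefix every sentence in a batch with one >>id<< target-language token."""
    return [f">>{lang_id}<< {t}" for t in texts]

# nld is one of the target languages listed above
batch = tag_batch(["Good evening.", "See you tomorrow."], "nld")
print(batch[1])  # >>nld<< See you tomorrow.
```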
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engdeu.eng.deu | 21.4 | 0.518 |
| news-test2008-engdeu.eng.deu | 21.0 | 0.510 |
| newstest2009-engdeu.eng.deu | 20.4 | 0.513 |
| newstest2010-engdeu.eng.deu | 22.9 | 0.528 |
| newstest2011-engdeu.eng.deu | 20.5 | 0.508 |
| newstest2012-engdeu.eng.deu | 21.0 | 0.507 |
| newstest2013-engdeu.eng.deu | 24.7 | 0.533 |
| newstest2015-ende-engdeu.eng.deu | 28.2 | 0.568 |
| newstest2016-ende-engdeu.eng.deu | 33.3 | 0.605 |
| newstest2017-ende-engdeu.eng.deu | 26.5 | 0.559 |
| newstest2018-ende-engdeu.eng.deu | 39.9 | 0.649 |
| newstest2019-ende-engdeu.eng.deu | 35.9 | 0.616 |
| Tatoeba-test.eng-afr.eng.afr | 55.7 | 0.740 |
| Tatoeba-test.eng-ang.eng.ang | 6.5 | 0.164 |
| Tatoeba-test.eng-deu.eng.deu | 40.4 | 0.614 |
| Tatoeba-test.eng-enm.eng.enm | 2.3 | 0.254 |
| Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.248 |
| Tatoeba-test.eng-fry.eng.fry | 17.9 | 0.424 |
| Tatoeba-test.eng-gos.eng.gos | 2.2 | 0.309 |
| Tatoeba-test.eng-gsw.eng.gsw | 1.6 | 0.186 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.5 | 0.189 |
| Tatoeba-test.eng-ltz.eng.ltz | 20.2 | 0.383 |
| Tatoeba-test.eng.multi | 41.6 | 0.609 |
| Tatoeba-test.eng-nds.eng.nds | 18.9 | 0.437 |
| Tatoeba-test.eng-nld.eng.nld | 53.1 | 0.699 |
| Tatoeba-test.eng-pdc.eng.pdc | 7.7 | 0.262 |
| Tatoeba-test.eng-sco.eng.sco | 37.7 | 0.557 |
| Tatoeba-test.eng-stq.eng.stq | 5.9 | 0.380 |
| Tatoeba-test.eng-swg.eng.swg | 6.2 | 0.236 |
| Tatoeba-test.eng-yid.eng.yid | 6.8 | 0.296 |
### System Info:
- hf_name: eng-gmw
- source_languages: eng
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'eng'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gmw
- short_pair: en-gmw
- chrF2_score: 0.609
- bleu: 41.6
- brevity_penalty: 0.989
- ref_len: 74922.0
- src_name: English
- tgt_name: West Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: eng-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-gem
|
Helsinki-NLP
| 2023-08-16T11:29:38Z | 294 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"lb",
"yi",
"gem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- da
- sv
- af
- nn
- fy
- fo
- de
- nb
- nl
- is
- lb
- yi
- gem
tags:
- translation
license: apache-2.0
---
### eng-gem
* source group: English
* target group: Germanic languages
* OPUS readme: [eng-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target-language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.eval.txt)
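Because the `>>id<<` token is per sentence, a single batch can mix Germanic target languages. A sketch under that assumption, with an illustrative helper of our own:

```python
def build_inputs(pairs):
    """One >>id<< token per sentence lets a single batch mix target languages."""
    return [f">>{lid}<< {text}" for lid, text in pairs]

# swe and isl both appear in the target language list above
inputs = build_inputs([("swe", "Welcome!"), ("isl", "Welcome!")])
print(inputs[0])  # >>swe<< Welcome!
```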
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engdeu.eng.deu | 20.9 | 0.521 |
| news-test2008-engdeu.eng.deu | 21.1 | 0.511 |
| newstest2009-engdeu.eng.deu | 20.5 | 0.516 |
| newstest2010-engdeu.eng.deu | 22.5 | 0.526 |
| newstest2011-engdeu.eng.deu | 20.5 | 0.508 |
| newstest2012-engdeu.eng.deu | 20.8 | 0.507 |
| newstest2013-engdeu.eng.deu | 24.6 | 0.534 |
| newstest2015-ende-engdeu.eng.deu | 27.9 | 0.569 |
| newstest2016-ende-engdeu.eng.deu | 33.2 | 0.607 |
| newstest2017-ende-engdeu.eng.deu | 26.5 | 0.560 |
| newstest2018-ende-engdeu.eng.deu | 39.4 | 0.648 |
| newstest2019-ende-engdeu.eng.deu | 35.0 | 0.613 |
| Tatoeba-test.eng-afr.eng.afr | 56.5 | 0.745 |
| Tatoeba-test.eng-ang.eng.ang | 6.7 | 0.154 |
| Tatoeba-test.eng-dan.eng.dan | 58.0 | 0.726 |
| Tatoeba-test.eng-deu.eng.deu | 40.3 | 0.615 |
| Tatoeba-test.eng-enm.eng.enm | 1.4 | 0.215 |
| Tatoeba-test.eng-fao.eng.fao | 7.2 | 0.304 |
| Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.159 |
| Tatoeba-test.eng-fry.eng.fry | 19.4 | 0.433 |
| Tatoeba-test.eng-gos.eng.gos | 1.0 | 0.182 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.012 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.130 |
| Tatoeba-test.eng-isl.eng.isl | 23.4 | 0.505 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.1 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 20.3 | 0.379 |
| Tatoeba-test.eng.multi | 46.5 | 0.641 |
| Tatoeba-test.eng-nds.eng.nds | 20.6 | 0.458 |
| Tatoeba-test.eng-nld.eng.nld | 53.4 | 0.702 |
| Tatoeba-test.eng-non.eng.non | 0.6 | 0.166 |
| Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.679 |
| Tatoeba-test.eng-pdc.eng.pdc | 3.9 | 0.189 |
| Tatoeba-test.eng-sco.eng.sco | 33.0 | 0.542 |
| Tatoeba-test.eng-stq.eng.stq | 2.3 | 0.274 |
| Tatoeba-test.eng-swe.eng.swe | 57.9 | 0.719 |
| Tatoeba-test.eng-swg.eng.swg | 1.2 | 0.171 |
| Tatoeba-test.eng-yid.eng.yid | 7.2 | 0.304 |
### System Info:
- hf_name: eng-gem
- source_languages: eng
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']
- src_constituents: {'eng'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gem
- short_pair: en-gem
- chrF2_score: 0.6409999999999999
- bleu: 46.5
- brevity_penalty: 0.9790000000000001
- ref_len: 73328.0
- src_name: English
- tgt_name: Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gem
- prefer_old: False
- long_pair: eng-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-gaa
|
Helsinki-NLP
| 2023-08-16T11:29:37Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gaa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-gaa
* source languages: en
* target languages: gaa
* OPUS readme: [en-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.gaa | 39.9 | 0.593 |
|
Helsinki-NLP/opus-mt-en-ga
|
Helsinki-NLP
| 2023-08-16T11:29:36Z | 1,161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ga",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- ga
tags:
- translation
license: apache-2.0
---
### eng-gle
* source group: English
* target group: Irish
* OPUS readme: [eng-gle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): gle
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.gle | 37.5 | 0.593 |
### System Info:
- hf_name: eng-gle
- source_languages: eng
- target_languages: gle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ga']
- src_constituents: {'eng'}
- tgt_constituents: {'gle'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: gle
- short_pair: en-ga
- chrF2_score: 0.593
- bleu: 37.5
- brevity_penalty: 1.0
- ref_len: 12200.0
- src_name: English
- tgt_name: Irish
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ga
- prefer_old: False
- long_pair: eng-gle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-fj
|
Helsinki-NLP
| 2023-08-16T11:29:34Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"fj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-fj
* source languages: en
* target languages: fj
* OPUS readme: [en-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.fj | 34.0 | 0.561 |
| Tatoeba.en.fj | 62.5 | 0.781 |
|
Helsinki-NLP/opus-mt-en-fiu
|
Helsinki-NLP
| 2023-08-16T11:29:33Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"se",
"fi",
"hu",
"et",
"fiu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- se
- fi
- hu
- et
- fiu
tags:
- translation
license: apache-2.0
---
### eng-fiu
* source group: English
* target group: Finno-Ugrian languages
* OPUS readme: [eng-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md)
* model: transformer
* source language(s): eng
* target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.eval.txt)
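Because this is a multilingual model, the sentence-initial `>>id<<` token noted above must be prepended to every source sentence before tokenization. A minimal sketch (the helper name is illustrative; valid IDs are the target language codes listed above, e.g. `fin`, `est`, `hun`):

```python
# Illustrative helper: multilingual OPUS-MT checkpoints route translation
# with a sentence-initial >>id<< token, where id is a valid target code
# such as fin, est or hun (see the target language list above).
def prepare_input(text: str, target_id: str) -> str:
    return f">>{target_id}<< {text}"

print(prepare_input("How are you today?", "fin"))
# >>fin<< How are you today?
```

The prepared string is then tokenized and translated as with any other Marian checkpoint.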
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-engfin.eng.fin | 18.7 | 0.522 |
| newsdev2018-enet-engest.eng.est | 19.4 | 0.521 |
| newssyscomb2009-enghun.eng.hun | 15.5 | 0.472 |
| newstest2009-enghun.eng.hun | 15.4 | 0.468 |
| newstest2015-enfi-engfin.eng.fin | 19.9 | 0.532 |
| newstest2016-enfi-engfin.eng.fin | 21.1 | 0.544 |
| newstest2017-enfi-engfin.eng.fin | 23.8 | 0.567 |
| newstest2018-enet-engest.eng.est | 20.4 | 0.532 |
| newstest2018-enfi-engfin.eng.fin | 15.6 | 0.498 |
| newstest2019-enfi-engfin.eng.fin | 20.0 | 0.520 |
| newstestB2016-enfi-engfin.eng.fin | 17.0 | 0.512 |
| newstestB2017-enfi-engfin.eng.fin | 19.7 | 0.531 |
| Tatoeba-test.eng-chm.eng.chm | 0.9 | 0.115 |
| Tatoeba-test.eng-est.eng.est | 49.8 | 0.689 |
| Tatoeba-test.eng-fin.eng.fin | 34.7 | 0.597 |
| Tatoeba-test.eng-fkv.eng.fkv | 1.3 | 0.187 |
| Tatoeba-test.eng-hun.eng.hun | 35.2 | 0.589 |
| Tatoeba-test.eng-izh.eng.izh | 6.0 | 0.163 |
| Tatoeba-test.eng-kom.eng.kom | 3.4 | 0.012 |
| Tatoeba-test.eng-krl.eng.krl | 6.4 | 0.202 |
| Tatoeba-test.eng-liv.eng.liv | 1.6 | 0.102 |
| Tatoeba-test.eng-mdf.eng.mdf | 3.7 | 0.008 |
| Tatoeba-test.eng.multi | 35.4 | 0.590 |
| Tatoeba-test.eng-myv.eng.myv | 1.4 | 0.014 |
| Tatoeba-test.eng-sma.eng.sma | 2.6 | 0.097 |
| Tatoeba-test.eng-sme.eng.sme | 7.3 | 0.221 |
| Tatoeba-test.eng-udm.eng.udm | 1.4 | 0.079 |
### System Info:
- hf_name: eng-fiu
- source_languages: eng
- target_languages: fiu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu']
- src_constituents: {'eng'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: fiu
- short_pair: en-fiu
- chrF2_score: 0.59
- bleu: 35.4
- brevity_penalty: 0.9440000000000001
- ref_len: 59311.0
- src_name: English
- tgt_name: Finno-Ugrian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: fiu
- prefer_old: False
- long_pair: eng-fiu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-fi
|
Helsinki-NLP
| 2023-08-16T11:29:32Z | 5,195 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-fi
* source languages: en
* target languages: fi
* OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md)
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip)
* test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt)
* test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2019-enfi.en.fi | 25.7 | 0.578 |
|
Nextcloud-AI/opus-mt-en-fi
|
Nextcloud-AI
| 2023-08-16T11:29:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-23T10:39:32Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-fi
* source languages: en
* target languages: fi
* OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md)
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip)
* test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt)
* test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2019-enfi.en.fi | 25.7 | 0.578 |
|
Helsinki-NLP/opus-mt-en-euq
|
Helsinki-NLP
| 2023-08-16T11:29:31Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"euq",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- euq
tags:
- translation
license: apache-2.0
---
### eng-euq
* source group: English
* target group: Basque (family)
* OPUS readme: [eng-euq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md)
* model: transformer
* source language(s): eng
* target language(s): eus
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.eus | 27.9 | 0.555 |
| Tatoeba-test.eng-eus.eng.eus | 27.9 | 0.555 |
### System Info:
- hf_name: eng-euq
- source_languages: eng
- target_languages: euq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'euq']
- src_constituents: {'eng'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: euq
- short_pair: en-euq
- chrF2_score: 0.555
- bleu: 27.9
- brevity_penalty: 0.917
- ref_len: 7080.0
- src_name: English
- tgt_name: Basque (family)
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: euq
- prefer_old: False
- long_pair: eng-euq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-eu
|
Helsinki-NLP
| 2023-08-16T11:29:30Z | 1,220 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"eu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- eu
tags:
- translation
license: apache-2.0
---
### eng-eus
* source group: English
* target group: Basque
* OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): eus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.eus | 31.8 | 0.590 |
### System Info:
- hf_name: eng-eus
- source_languages: eng
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'eu']
- src_constituents: {'eng'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: eus
- short_pair: en-eu
- chrF2_score: 0.59
- bleu: 31.8
- brevity_penalty: 0.9440000000000001
- ref_len: 7080.0
- src_name: English
- tgt_name: Basque
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: eu
- prefer_old: False
- long_pair: eng-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-et
|
Helsinki-NLP
| 2023-08-16T11:29:29Z | 1,224 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"et",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-et
* source languages: en
* target languages: et
* OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2018-enet.en.et | 21.8 | 0.540 |
| newstest2018-enet.en.et | 23.3 | 0.556 |
| Tatoeba.en.et | 54.0 | 0.717 |
|
Helsinki-NLP/opus-mt-en-es
|
Helsinki-NLP
| 2023-08-16T11:29:28Z | 169,264 | 104 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"marian",
"text2text-generation",
"translation",
"en",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- es
tags:
- translation
license: apache-2.0
---
### eng-spa
* source group: English
* target group: Spanish
* OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md)
* model: transformer
* source language(s): eng
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 |
| news-test2008-engspa.eng.spa | 29.7 | 0.564 |
| newstest2009-engspa.eng.spa | 30.2 | 0.578 |
| newstest2010-engspa.eng.spa | 36.9 | 0.620 |
| newstest2011-engspa.eng.spa | 38.2 | 0.619 |
| newstest2012-engspa.eng.spa | 39.0 | 0.625 |
| newstest2013-engspa.eng.spa | 35.0 | 0.598 |
| Tatoeba-test.eng.spa | 54.9 | 0.721 |
### System Info:
- hf_name: eng-spa
- source_languages: eng
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'es']
- src_constituents: {'eng'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt
- src_alpha3: eng
- tgt_alpha3: spa
- short_pair: en-es
- chrF2_score: 0.721
- bleu: 54.9
- brevity_penalty: 0.978
- ref_len: 77311.0
- src_name: English
- tgt_name: Spanish
- train_date: 2020-08-18 00:00:00
- src_alpha2: en
- tgt_alpha2: es
- prefer_old: False
- long_pair: eng-spa
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20
|
Helsinki-NLP/opus-mt-en-el
|
Helsinki-NLP
| 2023-08-16T11:29:25Z | 1,889 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-el
* source languages: en
* target languages: el
* OPUS readme: [en-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.el | 56.4 | 0.745 |
|
Helsinki-NLP/opus-mt-en-de
|
Helsinki-NLP
| 2023-08-16T11:29:21Z | 185,671 | 38 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"de",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: cc-by-4.0
---
### opus-mt-en-de
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
- Source Language: English
- Target Language: German
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md)
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt)
#### Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.de | 23.5 | 0.540 |
| news-test2008.en.de | 23.5 | 0.529 |
| newstest2009.en.de | 22.3 | 0.530 |
| newstest2010.en.de | 24.9 | 0.544 |
| newstest2011.en.de | 22.5 | 0.524 |
| newstest2012.en.de | 23.0 | 0.525 |
| newstest2013.en.de | 26.9 | 0.553 |
| newstest2015-ende.en.de | 31.1 | 0.594 |
| newstest2016-ende.en.de | 37.0 | 0.636 |
| newstest2017-ende.en.de | 29.9 | 0.586 |
| newstest2018-ende.en.de | 45.2 | 0.690 |
| newstest2019-ende.en.de | 40.9 | 0.654 |
| Tatoeba.en.de | 47.3 | 0.664 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} -- {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```
|
Helsinki-NLP/opus-mt-en-cs
|
Helsinki-NLP
| 2023-08-16T11:29:17Z | 3,859 | 7 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-cs
* source languages: en
* target languages: cs
* OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.cs | 22.8 | 0.507 |
| news-test2008.en.cs | 20.7 | 0.485 |
| newstest2009.en.cs | 21.8 | 0.500 |
| newstest2010.en.cs | 22.1 | 0.505 |
| newstest2011.en.cs | 23.2 | 0.507 |
| newstest2012.en.cs | 20.8 | 0.482 |
| newstest2013.en.cs | 24.7 | 0.514 |
| newstest2015-encs.en.cs | 24.9 | 0.527 |
| newstest2016-encs.en.cs | 26.7 | 0.540 |
| newstest2017-encs.en.cs | 22.7 | 0.503 |
| newstest2018-encs.en.cs | 22.9 | 0.504 |
| newstest2019-encs.en.cs | 24.9 | 0.518 |
| Tatoeba.en.cs | 46.1 | 0.647 |
|
Helsinki-NLP/opus-mt-en-crs
|
Helsinki-NLP
| 2023-08-16T11:29:16Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"crs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-crs
* source languages: en
* target languages: crs
* OPUS readme: [en-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.crs | 45.2 | 0.617 |
|
Helsinki-NLP/opus-mt-en-cpp
|
Helsinki-NLP
| 2023-08-16T11:29:15Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"cpp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- id
- cpp
tags:
- translation
license: apache-2.0
---
### eng-cpp
* source group: English
* target group: Creoles and pidgins, Portuguese-based
* OPUS readme: [eng-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md)
* model: transformer
* source language(s): eng
* target language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.eval.txt)
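As with the other multilingual checkpoints, the `>>id<<` token selects the output language. An illustrative sketch of preparing a batch (helper name is hypothetical; `ind` and `pap` are among the valid target IDs listed above):

```python
# Illustrative sketch: the >>id<< target token selects the output language
# of this multilingual model; ind and pap are among the valid target IDs
# listed above.
def with_target_token(text: str, target_id: str) -> str:
    return f">>{target_id}<< {text}"

sentences = ["Good morning.", "Thank you very much."]
batch = [with_target_token(s, "pap") for s in sentences]
print(batch[0])
# >>pap<< Good morning.
```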
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-msa.eng.msa | 32.6 | 0.573 |
| Tatoeba-test.eng.multi | 32.7 | 0.574 |
| Tatoeba-test.eng-pap.eng.pap | 42.5 | 0.633 |
### System Info:
- hf_name: eng-cpp
- source_languages: eng
- target_languages: cpp
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'id', 'cpp']
- src_constituents: {'eng'}
- tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cpp
- short_pair: en-cpp
- chrF2_score: 0.574
- bleu: 32.7
- brevity_penalty: 0.996
- ref_len: 34010.0
- src_name: English
- tgt_name: Creoles and pidgins, Portuguese-based
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cpp
- prefer_old: False
- long_pair: eng-cpp
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-chk
|
Helsinki-NLP
| 2023-08-16T11:29:13Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"chk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-chk
* source languages: en
* target languages: chk
* OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.chk | 26.1 | 0.468 |
|
Helsinki-NLP/opus-mt-en-ceb
|
Helsinki-NLP
| 2023-08-16T11:29:10Z | 366 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ceb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ceb
* source languages: en
* target languages: ceb
* OPUS readme: [en-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ceb | 51.3 | 0.704 |
| Tatoeba.en.ceb | 31.3 | 0.600 |
|
Helsinki-NLP/opus-mt-en-ca
|
Helsinki-NLP
| 2023-08-16T11:29:09Z | 6,076 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ca
* source languages: en
* target languages: ca
* OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ca | 47.2 | 0.665 |
|
Helsinki-NLP/opus-mt-en-bnt
|
Helsinki-NLP
| 2023-08-16T11:29:07Z | 136 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- sn
- zu
- rw
- lg
- ts
- ln
- ny
- xh
- rn
- bnt
tags:
- translation
license: apache-2.0
---
### eng-bnt
* source group: English
* target group: Bantu languages
* OPUS readme: [eng-bnt](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md)
* model: transformer
* source language(s): eng
* target language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.eval.txt)
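The `>>id<<` token described above is prepended to the raw source text before tokenization. A minimal sketch (not part of the original card, assuming the Hugging Face `transformers` Marian port of this checkpoint):

```python
def translate(text: str, target: str,
              model_name: str = "Helsinki-NLP/opus-mt-en-bnt") -> str:
    """Prepend the required >>id<< target-language token and translate.

    Imports are deferred so the helper can be defined without
    transformers installed; calling it downloads the checkpoint.
    """
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    src = f">>{target}<< {text}"  # e.g. ">>zul<< Good morning."
    batch = tokenizer([src], return_tensors="pt")
    out = model.generate(**batch)
    return tokenizer.decode(out[0], skip_special_tokens=True)

if __name__ == "__main__":
    # "zul" (Zulu) is one of the target language IDs listed above.
    print(translate("Good morning, my friend.", "zul"))
```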
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kin.eng.kin | 12.5 | 0.519 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.277 |
| Tatoeba-test.eng-lug.eng.lug | 4.8 | 0.415 |
| Tatoeba-test.eng.multi | 12.1 | 0.449 |
| Tatoeba-test.eng-nya.eng.nya | 22.1 | 0.616 |
| Tatoeba-test.eng-run.eng.run | 13.2 | 0.492 |
| Tatoeba-test.eng-sna.eng.sna | 32.1 | 0.669 |
| Tatoeba-test.eng-swa.eng.swa | 1.7 | 0.180 |
| Tatoeba-test.eng-toi.eng.toi | 10.7 | 0.266 |
| Tatoeba-test.eng-tso.eng.tso | 26.9 | 0.631 |
| Tatoeba-test.eng-umb.eng.umb | 5.2 | 0.295 |
| Tatoeba-test.eng-xho.eng.xho | 22.6 | 0.615 |
| Tatoeba-test.eng-zul.eng.zul | 41.1 | 0.769 |
### System Info:
- hf_name: eng-bnt
- source_languages: eng
- target_languages: bnt
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: bnt
- short_pair: en-bnt
- chrF2_score: 0.449
- bleu: 12.1
- brevity_penalty: 1.0
- ref_len: 9989.0
- src_name: English
- tgt_name: Bantu languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: bnt
- prefer_old: False
- long_pair: eng-bnt
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-bi
|
Helsinki-NLP
| 2023-08-16T11:29:06Z | 163 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-bi
* source languages: en
* target languages: bi
* OPUS readme: [en-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bi | 36.4 | 0.543 |
|
Helsinki-NLP/opus-mt-en-bg
|
Helsinki-NLP
| 2023-08-16T11:29:05Z | 1,916 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- bg
tags:
- translation
license: apache-2.0
---
### eng-bul
* source group: English
* target group: Bulgarian
* OPUS readme: [eng-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md)
* model: transformer
* source language(s): eng
* target language(s): bul bul_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.bul | 50.6 | 0.680 |
### System Info:
- hf_name: eng-bul
- source_languages: eng
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bg']
- src_constituents: {'eng'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: bul
- short_pair: en-bg
- chrF2_score: 0.68
- bleu: 50.6
- brevity_penalty: 0.96
- ref_len: 69504.0
- src_name: English
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: bg
- prefer_old: False
- long_pair: eng-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-ber
|
Helsinki-NLP
| 2023-08-16T11:29:04Z | 123 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ber",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ber
* source languages: en
* target languages: ber
* OPUS readme: [en-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ber | 29.7 | 0.544 |
|
Helsinki-NLP/opus-mt-en-alv
|
Helsinki-NLP
| 2023-08-16T11:28:57Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- alv
tags:
- translation
license: apache-2.0
---
### eng-alv
* source group: English
* target group: Atlantic-Congo languages
* OPUS readme: [eng-alv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md)
* model: transformer
* source language(s): eng
* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-ewe.eng.ewe | 4.9 | 0.212 |
| Tatoeba-test.eng-ful.eng.ful | 0.6 | 0.079 |
| Tatoeba-test.eng-ibo.eng.ibo | 3.5 | 0.255 |
| Tatoeba-test.eng-kin.eng.kin | 10.5 | 0.510 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.273 |
| Tatoeba-test.eng-lug.eng.lug | 5.3 | 0.340 |
| Tatoeba-test.eng.multi | 11.4 | 0.429 |
| Tatoeba-test.eng-nya.eng.nya | 18.1 | 0.595 |
| Tatoeba-test.eng-run.eng.run | 13.9 | 0.484 |
| Tatoeba-test.eng-sag.eng.sag | 5.3 | 0.194 |
| Tatoeba-test.eng-sna.eng.sna | 26.2 | 0.623 |
| Tatoeba-test.eng-swa.eng.swa | 1.0 | 0.141 |
| Tatoeba-test.eng-toi.eng.toi | 7.0 | 0.224 |
| Tatoeba-test.eng-tso.eng.tso | 46.7 | 0.643 |
| Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.359 |
| Tatoeba-test.eng-wol.eng.wol | 6.8 | 0.191 |
| Tatoeba-test.eng-xho.eng.xho | 27.1 | 0.629 |
| Tatoeba-test.eng-yor.eng.yor | 17.4 | 0.356 |
| Tatoeba-test.eng-zul.eng.zul | 34.1 | 0.729 |
### System Info:
- hf_name: eng-alv
- source_languages: eng
- target_languages: alv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: alv
- short_pair: en-alv
- chrF2_score: 0.429
- bleu: 11.4
- brevity_penalty: 1.0
- ref_len: 10603.0
- src_name: English
- tgt_name: Atlantic-Congo languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: alv
- prefer_old: False
- long_pair: eng-alv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-afa
|
Helsinki-NLP
| 2023-08-16T11:28:56Z | 138 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- so
- ti
- am
- he
- mt
- ar
- afa
tags:
- translation
license: apache-2.0
---
### eng-afa
* source group: English
* target group: Afro-Asiatic languages
* OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 |
| Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 |
| Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 |
| Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 |
| Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 |
| Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 |
| Tatoeba-test.eng.multi | 14.4 | 0.375 |
| Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 |
| Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |
### System Info:
- hf_name: eng-afa
- source_languages: eng
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'eng'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: afa
- short_pair: en-afa
- chrF2_score: 0.375
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 58110.0
- src_name: English
- tgt_name: Afro-Asiatic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: afa
- prefer_old: False
- long_pair: eng-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-af
|
Helsinki-NLP
| 2023-08-16T11:28:54Z | 1,459 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-af
* source languages: en
* target languages: af
* OPUS readme: [en-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.af | 56.1 | 0.741 |
|
Helsinki-NLP/opus-mt-en-ROMANCE
|
Helsinki-NLP
| 2023-08-16T11:28:52Z | 35,353 | 7 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"roa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ROMANCE
* source languages: en
* target languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la
* OPUS readme: [en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-04-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.zip)
* test set translations: [opus-2020-04-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.test.txt)
* test set scores: [opus-2020-04-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.eval.txt)
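With this many target variants, the valid `>>id<<` tokens can be read off the tokenizer itself rather than the README. A sketch (not part of the original card, assuming the Hugging Face `transformers` Marian port):

```python
def list_target_tokens(model_name: str = "Helsinki-NLP/opus-mt-en-ROMANCE"):
    """Return the valid >>id<< target-language tokens for a Marian checkpoint.

    The import is deferred so the helper is defined even without
    transformers installed; calling it downloads the tokenizer files.
    """
    from transformers import MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    # MarianTokenizer exposes the tokens accepted at the start of the input.
    return tokenizer.supported_language_codes

if __name__ == "__main__":
    print(list_target_tokens()[:5])  # e.g. a list like ['>>fr<<', '>>es<<', ...]
```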
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.la | 50.1 | 0.693 |
|
Helsinki-NLP/opus-mt-en-CELTIC
|
Helsinki-NLP
| 2023-08-16T11:28:51Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"cel",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-INSULAR_CELTIC
* source languages: en
* target languages: ga,cy,br,gd,kw,gv
* OPUS readme: [en-ga+cy+br+gd+kw+gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ga+cy+br+gd+kw+gv/README.md)
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus+techiaith+bt-2020-04-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.zip)
* test set translations: [opus+techiaith+bt-2020-04-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.test.txt)
* test set scores: [opus+techiaith+bt-2020-04-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ga | 22.8 | 0.404 |
|
Helsinki-NLP/opus-mt-el-fr
|
Helsinki-NLP
| 2023-08-16T11:28:49Z | 143 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-el-fr
* source languages: el
* target languages: fr
* OPUS readme: [el-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.el.fr | 63.0 | 0.741 |
|
Helsinki-NLP/opus-mt-el-ar
|
Helsinki-NLP
| 2023-08-16T11:28:46Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- el
- ar
tags:
- translation
license: apache-2.0
---
### ell-ara
* source group: Modern Greek (1453-)
* target group: Arabic
* OPUS readme: [ell-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md)
* model: transformer
* source language(s): ell
* target language(s): ara arz
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.ara | 21.9 | 0.485 |
### System Info:
- hf_name: ell-ara
- source_languages: ell
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'ar']
- src_constituents: {'ell'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt
- src_alpha3: ell
- tgt_alpha3: ara
- short_pair: el-ar
- chrF2_score: 0.485
- bleu: 21.9
- brevity_penalty: 0.972
- ref_len: 1686.0
- src_name: Modern Greek (1453-)
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: el
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ell-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-efi-en
|
Helsinki-NLP
| 2023-08-16T11:28:41Z | 147 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-efi-en
* source languages: efi
* target languages: en
* OPUS readme: [efi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.en | 35.4 | 0.510 |
|
Helsinki-NLP/opus-mt-efi-de
|
Helsinki-NLP
| 2023-08-16T11:28:40Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-efi-de
* source languages: efi
* target languages: de
* OPUS readme: [efi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.de | 21.0 | 0.401 |
|
Helsinki-NLP/opus-mt-ee-fi
|
Helsinki-NLP
| 2023-08-16T11:28:37Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-fi
* source languages: ee
* target languages: fi
* OPUS readme: [ee-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fi | 25.0 | 0.482 |
|
Helsinki-NLP/opus-mt-ee-es
|
Helsinki-NLP
| 2023-08-16T11:28:36Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-es
* source languages: ee
* target languages: es
* OPUS readme: [ee-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.es | 26.4 | 0.449 |
|
Helsinki-NLP/opus-mt-ee-en
|
Helsinki-NLP
| 2023-08-16T11:28:35Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-en
* source languages: ee
* target languages: en
* OPUS readme: [ee-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.en | 39.3 | 0.556 |
| Tatoeba.ee.en | 21.2 | 0.569 |
|
Helsinki-NLP/opus-mt-ee-de
|
Helsinki-NLP
| 2023-08-16T11:28:34Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-de
* source languages: ee
* target languages: de
* OPUS readme: [ee-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.de | 22.3 | 0.430 |
---

**Helsinki-NLP/opus-mt-dra-en** · author: Helsinki-NLP · pipeline: translation · downloads: 130 · likes: 1 · last modified: 2023-08-16 · created: 2022-03-02

---
language:
- ta
- kn
- ml
- te
- dra
- en
tags:
- translation
license: apache-2.0
---
### dra-eng
* source group: Dravidian languages
* target group: English
* OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md)
* model: transformer
* source language(s): kan mal tam tel
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 |
| Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 |
| Tatoeba-test.multi.eng | 30.0 | 0.493 |
| Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 |
| Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 |
### System Info:
- hf_name: dra-eng
- source_languages: dra
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en']
- src_constituents: {'tam', 'kan', 'mal', 'tel'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt
- src_alpha3: dra
- tgt_alpha3: eng
- short_pair: dra-en
- chrF2_score: 0.493
- bleu: 30.0
- brevity_penalty: 1.0
- ref_len: 10641.0
- src_name: Dravidian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: dra
- tgt_alpha2: en
- prefer_old: False
- long_pair: dra-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
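The System Info above reports `brevity_penalty: 1.0` alongside `ref_len: 10641.0`. As a quick sketch of what that field means (standard BLEU definition, not code from the card): the penalty is 1.0 when the hypothesis is at least as long as the reference, and decays exponentially when the output is shorter.

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 if the hypothesis is at least as long as
    the reference, otherwise exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# The card reports brevity_penalty 1.0 at ref_len 10641: the system output
# was not shorter than the reference overall.
print(brevity_penalty(10641, 10641))
```

A value below 1.0 would indicate the model systematically under-generates relative to the reference translations.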
---

**Helsinki-NLP/opus-mt-de-tl** · author: Helsinki-NLP · pipeline: translation · downloads: 110 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
language:
- de
- tl
tags:
- translation
license: apache-2.0
---
### deu-tgl
* source group: German
* target group: Tagalog
* OPUS readme: [deu-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-tgl/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): tgl_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.tgl | 21.2 | 0.541 |
### System Info:
- hf_name: deu-tgl
- source_languages: deu
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'tl']
- src_constituents: {'deu'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-tgl/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: tgl
- short_pair: de-tl
- chrF2_score: 0.541
- bleu: 21.2
- brevity_penalty: 1.0
- ref_len: 2329.0
- src_name: German
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: tl
- prefer_old: False
- long_pair: deu-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
---

**Helsinki-NLP/opus-mt-de-pon** · author: Helsinki-NLP · pipeline: translation · downloads: 123 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pon
* source languages: de
* target languages: pon
* OPUS readme: [de-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pon | 21.0 | 0.442 |
---

**Helsinki-NLP/opus-mt-de-pl** · author: Helsinki-NLP · pipeline: translation · downloads: 712 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pl
* source languages: de
* target languages: pl
* OPUS readme: [de-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.pl | 41.2 | 0.631 |
---

**Helsinki-NLP/opus-mt-de-pag** · author: Helsinki-NLP · pipeline: translation · downloads: 118 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pag
* source languages: de
* target languages: pag
* OPUS readme: [de-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pag | 24.3 | 0.469 |
---

**Helsinki-NLP/opus-mt-de-ny** · author: Helsinki-NLP · pipeline: translation · downloads: 111 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ny
* source languages: de
* target languages: ny
* OPUS readme: [de-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ny | 21.4 | 0.481 |
---

**Helsinki-NLP/opus-mt-de-nso** · author: Helsinki-NLP · pipeline: translation · downloads: 114 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-nso
* source languages: de
* target languages: nso
* OPUS readme: [de-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.nso | 31.1 | 0.519 |
---

**Nextcloud-AI/opus-mt-de-nl** · author: Nextcloud-AI · pipeline: translation · downloads: 112 · likes: 0 · last modified: 2023-08-16 · created: 2024-02-23

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-nl
* source languages: de
* target languages: nl
* OPUS readme: [de-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-nl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.nl | 52.8 | 0.699 |
---

**Helsinki-NLP/opus-mt-de-ms** · author: Helsinki-NLP · pipeline: translation · downloads: 106 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
language:
- de
- ms
tags:
- translation
license: apache-2.0
---
### deu-msa
* source group: German
* target group: Malay (macrolanguage)
* OPUS readme: [deu-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): ind zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.msa | 34.0 | 0.607 |
### System Info:
- hf_name: deu-msa
- source_languages: deu
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ms']
- src_constituents: {'deu'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: msa
- short_pair: de-ms
- chrF2_score: 0.607
- bleu: 34.0
- brevity_penalty: 0.9540000000000001
- ref_len: 3729.0
- src_name: German
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: ms
- prefer_old: False
- long_pair: deu-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
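This model has multiple target languages, so the card notes that a sentence-initial token of the form `>>id<<` must be prepended to the source text. A minimal sketch of that preprocessing step (the helper name is ours, not from the card):

```python
# Hypothetical helper (not part of the card): prepend the required
# target-language token before tokenization, e.g. ">>ind<<" for Indonesian
# or ">>zsm_Latn<<" for Standard Malay.
def add_target_token(text: str, lang_id: str) -> str:
    return f">>{lang_id}<< {text}"

print(add_target_token("Das Haus ist wunderbar.", "ind"))
# → >>ind<< Das Haus ist wunderbar.
```

The prefixed string is what gets passed to the tokenizer; without the token, the model has no signal for which of its target languages to generate.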
---

**Helsinki-NLP/opus-mt-de-lua** · author: Helsinki-NLP · pipeline: translation · downloads: 127 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-lua
* source languages: de
* target languages: lua
* OPUS readme: [de-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-lua/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lua/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lua/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.lua | 23.1 | 0.467 |
---

**Helsinki-NLP/opus-mt-de-lt** · author: Helsinki-NLP · pipeline: translation · downloads: 181 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-lt
* source languages: de
* target languages: lt
* OPUS readme: [de-lt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-lt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.lt | 37.9 | 0.633 |
---

**Helsinki-NLP/opus-mt-de-ln** · author: Helsinki-NLP · pipeline: translation · downloads: 127 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ln
* source languages: de
* target languages: ln
* OPUS readme: [de-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ln | 26.7 | 0.504 |
---

**Nextcloud-AI/opus-mt-de-it** · author: Nextcloud-AI · pipeline: translation · downloads: 103 · likes: 0 · last modified: 2023-08-16 · created: 2024-02-23

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-it
* source languages: de
* target languages: it
* OPUS readme: [de-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.it | 45.3 | 0.671 |
---

**Helsinki-NLP/opus-mt-de-it** · author: Helsinki-NLP · pipeline: translation · downloads: 1,692 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-it
* source languages: de
* target languages: it
* OPUS readme: [de-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.it | 45.3 | 0.671 |
---

**Helsinki-NLP/opus-mt-de-ilo** · author: Helsinki-NLP · pipeline: translation · downloads: 129 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ilo
* source languages: de
* target languages: ilo
* OPUS readme: [de-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ilo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ilo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ilo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ilo | 29.8 | 0.533 |
---

**Helsinki-NLP/opus-mt-de-hu** · author: Helsinki-NLP · pipeline: translation · downloads: 1,243 · likes: 1 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-hu
* source languages: de
* target languages: hu
* OPUS readme: [de-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-hu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.hu | 34.3 | 0.588 |
---

**Helsinki-NLP/opus-mt-de-ht** · author: Helsinki-NLP · pipeline: translation · downloads: 126 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ht
* source languages: de
* target languages: ht
* OPUS readme: [de-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ht/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ht/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ht/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ht | 21.8 | 0.390 |
---

**Helsinki-NLP/opus-mt-de-hr** · author: Helsinki-NLP · pipeline: translation · downloads: 198 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-hr
* source languages: de
* target languages: hr
* OPUS readme: [de-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-hr/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hr/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hr/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.hr | 42.6 | 0.643 |
---

**Helsinki-NLP/opus-mt-de-ha** · author: Helsinki-NLP · pipeline: translation · downloads: 115 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ha
* source languages: de
* target languages: ha
* OPUS readme: [de-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ha | 20.7 | 0.417 |
---

**Helsinki-NLP/opus-mt-de-gil** · author: Helsinki-NLP · pipeline: translation · downloads: 110 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-gil
* source languages: de
* target languages: gil
* OPUS readme: [de-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-gil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.gil | 24.0 | 0.472 |
---

**Nextcloud-AI/opus-mt-de-fr** · author: Nextcloud-AI · pipeline: translation · downloads: 103 · likes: 0 · last modified: 2023-08-16 · created: 2024-02-23

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-fr
* source languages: de
* target languages: fr
* OPUS readme: [de-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| euelections_dev2019.transformer-align.de | 32.2 | 0.590 |
| newssyscomb2009.de.fr | 26.8 | 0.553 |
| news-test2008.de.fr | 26.4 | 0.548 |
| newstest2009.de.fr | 25.6 | 0.539 |
| newstest2010.de.fr | 29.1 | 0.572 |
| newstest2011.de.fr | 26.9 | 0.551 |
| newstest2012.de.fr | 27.7 | 0.554 |
| newstest2013.de.fr | 29.5 | 0.560 |
| newstest2019-defr.de.fr | 36.6 | 0.625 |
| Tatoeba.de.fr | 49.2 | 0.664 |
---

**Helsinki-NLP/opus-mt-de-fj** · author: Helsinki-NLP · pipeline: translation · downloads: 117 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-fj
* source languages: de
* target languages: fj
* OPUS readme: [de-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.fj | 24.6 | 0.470 |
---

**Helsinki-NLP/opus-mt-de-es** · author: Helsinki-NLP · pipeline: translation · downloads: 32,010 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-es
* source languages: de
* target languages: es
* OPUS readme: [de-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.es | 48.5 | 0.676 |
---

**Helsinki-NLP/opus-mt-de-eo** · author: Helsinki-NLP · pipeline: translation · downloads: 118 · likes: 0 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-eo
* source languages: de
* target languages: eo
* OPUS readme: [de-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.eo | 48.6 | 0.673 |
---

**Helsinki-NLP/opus-mt-de-en** · author: Helsinki-NLP · pipeline: translation · downloads: 673,311 · likes: 44 · last modified: 2023-08-16 · created: 2022-03-02

---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-en
* source languages: de
* target languages: en
* OPUS readme: [de-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.de.en | 29.4 | 0.557 |
| news-test2008.de.en | 27.8 | 0.548 |
| newstest2009.de.en | 26.8 | 0.543 |
| newstest2010.de.en | 30.2 | 0.584 |
| newstest2011.de.en | 27.4 | 0.556 |
| newstest2012.de.en | 29.1 | 0.569 |
| newstest2013.de.en | 32.1 | 0.583 |
| newstest2014-deen.de.en | 34.0 | 0.600 |
| newstest2015-ende.de.en | 34.2 | 0.599 |
| newstest2016-ende.de.en | 40.4 | 0.649 |
| newstest2017-ende.de.en | 35.7 | 0.610 |
| newstest2018-ende.de.en | 43.7 | 0.667 |
| newstest2019-deen.de.en | 40.1 | 0.642 |
| Tatoeba.de.en | 55.4 | 0.707 |
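A minimal usage sketch for a checkpoint like this one. The helper below only builds the Hub model ID; the commented-out part shows the standard `transformers` MarianMT API, which needs the library installed plus a network connection to fetch the weights, so it is not executed here:

```python
def opus_mt_model_id(src: str, tgt: str) -> str:
    """Hub ID for a Helsinki-NLP OPUS-MT language pair, e.g. ("de", "en")."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

# Inference with the checkpoint (requires `transformers` and a download
# from the Hub, hence commented out in this self-contained sketch):
# from transformers import MarianMTModel, MarianTokenizer
# name = opus_mt_model_id("de", "en")
# tokenizer = MarianTokenizer.from_pretrained(name)
# model = MarianMTModel.from_pretrained(name)
# batch = tokenizer(["Das ist ein Test."], return_tensors="pt", padding=True)
# print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))

assert opus_mt_model_id("de", "en") == "Helsinki-NLP/opus-mt-de-en"
```

The same pattern works for every pair in this family — only the two language codes in the model ID change.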
| Nextcloud-AI/opus-mt-de-en | Nextcloud-AI | 2023-08-16T11:27:46Z | 109 | 0 | transformers | ["transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:37:55Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-en
* source languages: de
* target languages: en
* OPUS readme: [de-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.de.en | 29.4 | 0.557 |
| news-test2008.de.en | 27.8 | 0.548 |
| newstest2009.de.en | 26.8 | 0.543 |
| newstest2010.de.en | 30.2 | 0.584 |
| newstest2011.de.en | 27.4 | 0.556 |
| newstest2012.de.en | 29.1 | 0.569 |
| newstest2013.de.en | 32.1 | 0.583 |
| newstest2014-deen.de.en | 34.0 | 0.600 |
| newstest2015-ende.de.en | 34.2 | 0.599 |
| newstest2016-ende.de.en | 40.4 | 0.649 |
| newstest2017-ende.de.en | 35.7 | 0.610 |
| newstest2018-ende.de.en | 43.7 | 0.667 |
| newstest2019-deen.de.en | 40.1 | 0.642 |
| Tatoeba.de.en | 55.4 | 0.707 |
| Helsinki-NLP/opus-mt-de-efi | Helsinki-NLP | 2023-08-16T11:27:43Z | 101 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "efi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-efi
* source languages: de
* target languages: efi
* OPUS readme: [de-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.efi | 24.2 | 0.451 |
| Helsinki-NLP/opus-mt-de-de | Helsinki-NLP | 2023-08-16T11:27:41Z | 207 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-de
* source languages: de
* target languages: de
* OPUS readme: [de-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.de | 40.7 | 0.616 |
| Helsinki-NLP/opus-mt-de-cs | Helsinki-NLP | 2023-08-16T11:27:39Z | 312 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "cs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-cs
* source languages: de
* target languages: cs
* OPUS readme: [de-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.de.cs | 22.4 | 0.499 |
| news-test2008.de.cs | 20.2 | 0.487 |
| newstest2009.de.cs | 20.9 | 0.485 |
| newstest2010.de.cs | 22.7 | 0.510 |
| newstest2011.de.cs | 21.2 | 0.487 |
| newstest2012.de.cs | 20.9 | 0.479 |
| newstest2013.de.cs | 23.0 | 0.500 |
| newstest2019-decs.de.cs | 22.5 | 0.495 |
| Tatoeba.de.cs | 42.2 | 0.625 |
| Helsinki-NLP/opus-mt-de-ca | Helsinki-NLP | 2023-08-16T11:27:37Z | 181 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- de
- ca
tags:
- translation
license: apache-2.0
---
### deu-cat
* source group: German
* target group: Catalan
* OPUS readme: [deu-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-cat/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.cat | 37.4 | 0.582 |
### System Info:
- hf_name: deu-cat
- source_languages: deu
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ca']
- src_constituents: {'deu'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.test.txt
- src_alpha3: deu
- tgt_alpha3: cat
- short_pair: de-ca
- chrF2_score: 0.5820000000000001
- bleu: 37.4
- brevity_penalty: 0.956
- ref_len: 5507.0
- src_name: German
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: de
- tgt_alpha2: ca
- prefer_old: False
- long_pair: deu-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
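The `bleu`, `brevity_penalty`, and `ref_len` fields above are tied together by BLEU's brevity penalty, BP = exp(1 − ref_len/hyp_len) when the hypothesis is shorter than the reference (and 1 otherwise). A small sketch that recovers the approximate system output length from the reported numbers:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 if the hypothesis is at least as long
    as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Inverting BP = exp(1 - r/c) gives c = r / (1 - ln BP); with the
# reported ref_len = 5507 and BP = 0.956 this puts the system output
# at roughly 5270 tokens:
hyp_len = round(5507 / (1.0 - math.log(0.956)))
assert abs(brevity_penalty(hyp_len, 5507) - 0.956) < 0.001
```

This is only the length-correction factor; the full BLEU score additionally multiplies in the geometric mean of the modified 1- to 4-gram precisions.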
|