Dataset schema (one record per model; fields appear in this order in the records below):

| Column | Type | Min | Max |
|---------------|------------------------|---------------------|---------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-07 12:31:56 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (544 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-07 12:31:42 |
| card | string (length) | 11 | 1.01M |
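The records below follow this schema, one field per line per record, with the full model card flattened into the final `card` field. As a minimal sketch of working with the listing (assuming the records have been exported to a hypothetical local Parquet file, `models.parquet`; the file name and format are not given here), the table can be filtered by `pipeline_tag` and ranked by `downloads`:

```python
import pandas as pd

# Hypothetical local export of the records listed below; the file name
# and format are assumptions, not part of this listing.
df = pd.read_parquet("models.parquet")

# Keep Helsinki-NLP translation models and rank them by download count.
translation = df[(df["pipeline_tag"] == "translation") & (df["author"] == "Helsinki-NLP")]
print(translation.sort_values("downloads", ascending=False)[["modelId", "downloads", "likes"]].head(10))
```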
Helsinki-NLP/opus-mt-es-en
Helsinki-NLP
2023-08-16T11:32:34Z
2,014,228
68
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - es - en tags: - translation license: apache-2.0 --- ### spa-eng * source group: Spanish * target group: English * OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md) * model: transformer * source language(s): spa * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 | | news-test2008-spaeng.spa.eng | 27.9 | 0.553 | | newstest2009-spaeng.spa.eng | 30.4 | 0.572 | | newstest2010-spaeng.spa.eng | 36.1 | 0.614 | | newstest2011-spaeng.spa.eng | 34.2 | 0.599 | | newstest2012-spaeng.spa.eng | 37.9 | 0.624 | | newstest2013-spaeng.spa.eng | 35.3 | 0.609 | | Tatoeba-test.spa.eng | 59.6 | 0.739 | ### System Info: - hf_name: spa-eng - source_languages: spa - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['es', 'en'] - src_constituents: {'spa'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt - src_alpha3: spa - tgt_alpha3: eng - short_pair: es-en - chrF2_score: 0.7390000000000001 - bleu: 59.6 - brevity_penalty: 0.9740000000000001 - ref_len: 79376.0 - src_name: Spanish - tgt_name: English - train_date: 2020-08-18 00:00:00 - src_alpha2: es - tgt_alpha2: en - prefer_old: False - long_pair: spa-eng - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
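A minimal usage sketch for this checkpoint (not part of the original card), assuming `transformers` with a PyTorch backend is installed; the example sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Spanish sentence into English.
batch = tokenizer(["Hola, ¿cómo estás?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```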
Helsinki-NLP/opus-mt-es-efi
Helsinki-NLP
2023-08-16T11:32:32Z
124
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "efi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-efi * source languages: es * target languages: efi * OPUS readme: [es-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-efi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.es.efi | 24.6 | 0.452 |
Helsinki-NLP/opus-mt-es-de
Helsinki-NLP
2023-08-16T11:32:29Z
26,522
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-de * source languages: es * target languages: de * OPUS readme: [es-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.es.de | 50.0 | 0.683 |
Nextcloud-AI/opus-mt-es-de
Nextcloud-AI
2023-08-16T11:32:29Z
110
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:40:41Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-de * source languages: es * target languages: de * OPUS readme: [es-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.es.de | 50.0 | 0.683 |
Helsinki-NLP/opus-mt-es-bzs
Helsinki-NLP
2023-08-16T11:32:20Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "bzs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-bzs * source languages: es * target languages: bzs * OPUS readme: [es-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.es.bzs | 26.4 | 0.451 |
Helsinki-NLP/opus-mt-es-bi
Helsinki-NLP
2023-08-16T11:32:19Z
117
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "bi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-bi * source languages: es * target languages: bi * OPUS readme: [es-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.es.bi | 28.0 | 0.473 |
Helsinki-NLP/opus-mt-es-ber
Helsinki-NLP
2023-08-16T11:32:17Z
115
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "ber", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-ber * source languages: es * target languages: ber * OPUS readme: [es-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ber/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.es.ber | 21.8 | 0.444 |
Helsinki-NLP/opus-mt-es-ase
Helsinki-NLP
2023-08-16T11:32:14Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "ase", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-ase * source languages: es * target languages: ase * OPUS readme: [es-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ase/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.es.ase | 31.5 | 0.488 |
Helsinki-NLP/opus-mt-es-ar
Helsinki-NLP
2023-08-16T11:32:13Z
650
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - es - ar tags: - translation license: apache-2.0 --- ### spa-ara * source group: Spanish * target group: Arabic * OPUS readme: [spa-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md) * model: transformer * source language(s): spa * target language(s): apc apc_Latn ara arq * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.spa.ara | 20.0 | 0.517 | ### System Info: - hf_name: spa-ara - source_languages: spa - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['es', 'ar'] - src_constituents: {'spa'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt - src_alpha3: spa - tgt_alpha3: ara - short_pair: es-ar - chrF2_score: 0.517 - bleu: 20.0 - brevity_penalty: 0.9390000000000001 - ref_len: 7547.0 - src_name: Spanish - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: es - tgt_alpha2: ar - prefer_old: False - long_pair: spa-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
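Because this is a multi-target checkpoint, the card above requires a sentence-initial `>>id<<` token. A minimal sketch of that usage (assuming `transformers` is installed; `ara` is one of the target language IDs listed in the card, and the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the >>id<< token of the desired target variant (here "ara").
src = [">>ara<< El clima es agradable hoy."]
batch = tokenizer(src, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```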
Nextcloud-AI/opus-mt-es-ar
Nextcloud-AI
2023-08-16T11:32:13Z
103
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:40:33Z
--- language: - es - ar tags: - translation license: apache-2.0 --- ### spa-ara * source group: Spanish * target group: Arabic * OPUS readme: [spa-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md) * model: transformer * source language(s): spa * target language(s): apc apc_Latn ara arq * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.spa.ara | 20.0 | 0.517 | ### System Info: - hf_name: spa-ara - source_languages: spa - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['es', 'ar'] - src_constituents: {'spa'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt - src_alpha3: spa - tgt_alpha3: ara - short_pair: es-ar - chrF2_score: 0.517 - bleu: 20.0 - brevity_penalty: 0.9390000000000001 - ref_len: 7547.0 - src_name: Spanish - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: es - tgt_alpha2: ar - prefer_old: False - long_pair: spa-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-es-aed
Helsinki-NLP
2023-08-16T11:32:11Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "es", "aed", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-es-aed * source languages: es * target languages: aed * OPUS readme: [es-aed](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-aed/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.es.aed | 89.2 | 0.915 |
Helsinki-NLP/opus-mt-eo-sh
Helsinki-NLP
2023-08-16T11:32:08Z
108
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "sh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - eo - sh tags: - translation license: apache-2.0 --- ### epo-hbs * source group: Esperanto * target group: Serbo-Croatian * OPUS readme: [epo-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hbs/README.md) * model: transformer-align * source language(s): epo * target language(s): bos_Latn hrv srp_Cyrl srp_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.hbs | 13.6 | 0.351 | ### System Info: - hf_name: epo-hbs - source_languages: epo - target_languages: hbs - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hbs/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'sh'] - src_constituents: {'epo'} - tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: hbs - short_pair: eo-sh - chrF2_score: 0.35100000000000003 - bleu: 13.6 - brevity_penalty: 0.888 - ref_len: 17999.0 - src_name: Esperanto - tgt_name: Serbo-Croatian - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: sh - prefer_old: False - long_pair: epo-hbs - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-eo-pt
Helsinki-NLP
2023-08-16T11:32:04Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - eo - pt tags: - translation license: apache-2.0 --- ### epo-por * source group: Esperanto * target group: Portuguese * OPUS readme: [epo-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md) * model: transformer-align * source language(s): epo * target language(s): por * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.por | 20.2 | 0.438 | ### System Info: - hf_name: epo-por - source_languages: epo - target_languages: por - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'pt'] - src_constituents: {'epo'} - tgt_constituents: {'por'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: por - short_pair: eo-pt - chrF2_score: 0.43799999999999994 - bleu: 20.2 - brevity_penalty: 0.895 - ref_len: 89991.0 - src_name: Esperanto - tgt_name: Portuguese - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: pt - prefer_old: False - long_pair: epo-por - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-eo-it
Helsinki-NLP
2023-08-16T11:32:01Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - eo - it tags: - translation license: apache-2.0 --- ### epo-ita * source group: Esperanto * target group: Italian * OPUS readme: [epo-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md) * model: transformer-align * source language(s): epo * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.ita | 23.8 | 0.465 | ### System Info: - hf_name: epo-ita - source_languages: epo - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'it'] - src_constituents: {'epo'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: ita - short_pair: eo-it - chrF2_score: 0.465 - bleu: 23.8 - brevity_penalty: 0.9420000000000001 - ref_len: 67118.0 - src_name: Esperanto - tgt_name: Italian - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: it - prefer_old: False - long_pair: epo-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-eo-es
Helsinki-NLP
2023-08-16T11:31:55Z
110
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-eo-es * source languages: eo * target languages: es * OPUS readme: [eo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.eo.es | 44.2 | 0.631 |
Helsinki-NLP/opus-mt-eo-en
Helsinki-NLP
2023-08-16T11:31:54Z
6,017
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-eo-en * source languages: eo * target languages: en * OPUS readme: [eo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.eo.en | 54.8 | 0.694 |
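The original OPUS weights referenced in this card can also be fetched directly from the linked archive; a short sketch (the local output path is an assumption):

```python
import urllib.request

# Download the original OPUS-MT training artifacts linked in the card above.
url = "https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.zip"
urllib.request.urlretrieve(url, "opus-mt-eo-en-2019-12-18.zip")
```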
Helsinki-NLP/opus-mt-eo-de
Helsinki-NLP
2023-08-16T11:31:52Z
138
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-eo-de * source languages: eo * target languages: de * OPUS readme: [eo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.eo.de | 45.5 | 0.644 |
Helsinki-NLP/opus-mt-eo-da
Helsinki-NLP
2023-08-16T11:31:50Z
115
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "da", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - eo - da tags: - translation license: apache-2.0 --- ### epo-dan * source group: Esperanto * target group: Danish * OPUS readme: [epo-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md) * model: transformer-align * source language(s): epo * target language(s): dan * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.dan | 21.6 | 0.407 | ### System Info: - hf_name: epo-dan - source_languages: epo - target_languages: dan - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'da'] - src_constituents: {'epo'} - tgt_constituents: {'dan'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: dan - short_pair: eo-da - chrF2_score: 0.40700000000000003 - bleu: 21.6 - brevity_penalty: 0.9359999999999999 - ref_len: 72349.0 - src_name: Esperanto - tgt_name: Danish - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: da - prefer_old: False - long_pair: epo-dan - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-eo-cs
Helsinki-NLP
2023-08-16T11:31:49Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "cs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - eo - cs tags: - translation license: apache-2.0 --- ### epo-ces * source group: Esperanto * target group: Czech * OPUS readme: [epo-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md) * model: transformer-align * source language(s): epo * target language(s): ces * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.ces | 17.5 | 0.376 | ### System Info: - hf_name: epo-ces - source_languages: epo - target_languages: ces - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'cs'] - src_constituents: {'epo'} - tgt_constituents: {'ces'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: ces - short_pair: eo-cs - chrF2_score: 0.376 - bleu: 17.5 - brevity_penalty: 0.922 - ref_len: 22148.0 - src_name: Esperanto - tgt_name: Czech - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: cs - prefer_old: False - long_pair: epo-ces - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-eo-af
Helsinki-NLP
2023-08-16T11:31:47Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "af", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - eo - af tags: - translation license: apache-2.0 --- ### epo-afr * source group: Esperanto * target group: Afrikaans * OPUS readme: [epo-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md) * model: transformer-align * source language(s): epo * target language(s): afr * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.afr | 19.5 | 0.369 | ### System Info: - hf_name: epo-afr - source_languages: epo - target_languages: afr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'af'] - src_constituents: {'epo'} - tgt_constituents: {'afr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: afr - short_pair: eo-af - chrF2_score: 0.369 - bleu: 19.5 - brevity_penalty: 0.9570000000000001 - ref_len: 8432.0 - src_name: Esperanto - tgt_name: Afrikaans - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: af - prefer_old: False - long_pair: epo-afr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-zlw
Helsinki-NLP
2023-08-16T11:31:45Z
157
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "pl", "cs", "zlw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - pl - cs - zlw tags: - translation license: apache-2.0 --- ### eng-zlw * source group: English * target group: West Slavic languages * OPUS readme: [eng-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md) * model: transformer * source language(s): eng * target language(s): ces csb_Latn dsb hsb pol * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip) * test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt) * test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engces.eng.ces | 20.6 | 0.488 | | news-test2008-engces.eng.ces | 18.3 | 0.466 | | newstest2009-engces.eng.ces | 19.8 | 0.483 | | newstest2010-engces.eng.ces | 19.8 | 0.486 | | newstest2011-engces.eng.ces | 20.6 | 0.489 | | newstest2012-engces.eng.ces | 18.6 | 0.464 | | newstest2013-engces.eng.ces | 22.3 | 0.495 | | newstest2015-encs-engces.eng.ces | 21.7 | 0.502 | | newstest2016-encs-engces.eng.ces | 24.5 | 0.521 | | newstest2017-encs-engces.eng.ces | 20.1 | 0.480 | | newstest2018-encs-engces.eng.ces | 19.9 | 0.483 | | newstest2019-encs-engces.eng.ces | 21.2 | 0.490 | | Tatoeba-test.eng-ces.eng.ces | 43.7 | 0.632 | | Tatoeba-test.eng-csb.eng.csb | 1.2 | 0.188 | | Tatoeba-test.eng-dsb.eng.dsb | 1.5 | 0.167 | | Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.199 | | Tatoeba-test.eng.multi | 42.8 | 0.632 | | Tatoeba-test.eng-pol.eng.pol | 43.2 | 0.641 | ### System Info: - hf_name: eng-zlw - source_languages: eng - target_languages: zlw - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'pl', 'cs', 'zlw'] - src_constituents: {'eng'} - tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt - src_alpha3: eng - tgt_alpha3: zlw - short_pair: en-zlw - chrF2_score: 0.632 - bleu: 42.8 - brevity_penalty: 0.973 - ref_len: 65397.0 - src_name: English - tgt_name: West Slavic languages - train_date: 2020-08-02 - src_alpha2: en - tgt_alpha2: zlw - prefer_old: False - long_pair: eng-zlw - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
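For multi-target checkpoints like this one, the set of accepted `>>id<<` tokens can be read off the tokenizer; a sketch, assuming a recent `transformers` version that exposes `supported_language_codes` on `MarianTokenizer`:

```python
from transformers import MarianTokenizer

# List the >>id<< target tokens this checkpoint accepts; they should match
# the target language(s) given in the card (ces, csb_Latn, dsb, hsb, pol).
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zlw")
print(tokenizer.supported_language_codes)
```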
Helsinki-NLP/opus-mt-en-zls
Helsinki-NLP
2023-08-16T11:31:44Z
153
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hr", "mk", "bg", "sl", "zls", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - hr - mk - bg - sl - zls tags: - translation license: apache-2.0 --- ### eng-zls * source group: English * target group: South Slavic languages * OPUS readme: [eng-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md) * model: transformer * source language(s): eng * target language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip) * test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt) * test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-bul.eng.bul | 47.6 | 0.657 | | Tatoeba-test.eng-hbs.eng.hbs | 40.7 | 0.619 | | Tatoeba-test.eng-mkd.eng.mkd | 45.2 | 0.642 | | Tatoeba-test.eng.multi | 42.7 | 0.622 | | Tatoeba-test.eng-slv.eng.slv | 17.9 | 0.351 | ### System Info: - hf_name: eng-zls - source_languages: eng - target_languages: zls - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls'] - src_constituents: {'eng'} - tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt - src_alpha3: eng - tgt_alpha3: zls - short_pair: en-zls - chrF2_score: 0.622 - bleu: 42.7 - brevity_penalty: 0.9690000000000001 - ref_len: 64788.0 - src_name: English - tgt_name: South Slavic languages - train_date: 2020-08-02 - src_alpha2: en - tgt_alpha2: zls - prefer_old: False - long_pair: eng-zls - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-urj
Helsinki-NLP
2023-08-16T11:31:39Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "se", "fi", "hu", "et", "urj", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - se - fi - hu - et - urj tags: - translation license: apache-2.0 --- ### eng-urj * source group: English * target group: Uralic languages * OPUS readme: [eng-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urj/README.md) * model: transformer * source language(s): eng * target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.zip) * test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.test.txt) * test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-engfin.eng.fin | 18.3 | 0.519 | | newsdev2018-enet-engest.eng.est | 19.3 | 0.520 | | newssyscomb2009-enghun.eng.hun | 15.4 | 0.471 | | newstest2009-enghun.eng.hun | 15.7 | 0.468 | | newstest2015-enfi-engfin.eng.fin | 20.2 | 0.534 | | newstest2016-enfi-engfin.eng.fin | 20.7 | 0.541 | | newstest2017-enfi-engfin.eng.fin | 23.6 | 0.566 | | newstest2018-enet-engest.eng.est | 20.8 | 0.535 | | newstest2018-enfi-engfin.eng.fin | 15.8 | 0.499 | | newstest2019-enfi-engfin.eng.fin | 19.9 | 0.518 | | newstestB2016-enfi-engfin.eng.fin | 16.6 | 0.509 | | newstestB2017-enfi-engfin.eng.fin | 19.4 | 0.529 | | Tatoeba-test.eng-chm.eng.chm | 1.3 | 0.127 | | Tatoeba-test.eng-est.eng.est | 51.0 | 0.692 | | Tatoeba-test.eng-fin.eng.fin | 34.6 | 0.597 | | Tatoeba-test.eng-fkv.eng.fkv | 2.2 | 0.302 | | Tatoeba-test.eng-hun.eng.hun | 35.6 | 0.591 | | Tatoeba-test.eng-izh.eng.izh | 5.7 | 0.211 | | Tatoeba-test.eng-kom.eng.kom | 3.0 | 0.012 | | Tatoeba-test.eng-krl.eng.krl | 8.5 | 0.230 | | Tatoeba-test.eng-liv.eng.liv | 2.7 | 0.077 | | Tatoeba-test.eng-mdf.eng.mdf | 2.8 | 0.007 | | Tatoeba-test.eng.multi | 35.1 | 0.588 | | Tatoeba-test.eng-myv.eng.myv | 1.3 | 0.014 | | Tatoeba-test.eng-sma.eng.sma | 1.8 | 0.095 | | Tatoeba-test.eng-sme.eng.sme | 6.8 | 0.204 | | Tatoeba-test.eng-udm.eng.udm | 1.1 | 0.121 | ### System Info: - hf_name: eng-urj - source_languages: eng - target_languages: urj - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urj/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'se', 'fi', 'hu', 'et', 'urj'] - src_constituents: {'eng'} - tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.test.txt - src_alpha3: eng - tgt_alpha3: urj - short_pair: en-urj - chrF2_score: 0.588 - bleu: 35.1 - brevity_penalty: 0.943 - ref_len: 59664.0 - src_name: English - tgt_name: Uralic languages - train_date: 2020-08-02 - src_alpha2: en - tgt_alpha2: urj - prefer_old: False - long_pair: eng-urj - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-uk
Helsinki-NLP
2023-08-16T11:31:36Z
12,306
10
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-uk * source languages: en * target languages: uk * OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.uk | 50.2 | 0.674 |
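The same family of checkpoints can also be driven through the high-level `pipeline` API; a minimal sketch for this en-uk model (assuming `transformers` with a PyTorch backend; the input sentence is illustrative):

```python
from transformers import pipeline

# High-level interface: the pipeline resolves the Marian tokenizer/model pair.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-uk")
print(translator("How are you today?")[0]["translation_text"])
```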
Helsinki-NLP/opus-mt-en-tvl
Helsinki-NLP
2023-08-16T11:31:32Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "tvl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-tvl * source languages: en * target languages: tvl * OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.tvl | 46.9 | 0.625 |
Helsinki-NLP/opus-mt-en-ts
Helsinki-NLP
2023-08-16T11:31:30Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ts", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ts * source languages: en * target languages: ts * OPUS readme: [en-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ts/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ts | 43.4 | 0.639 |
Helsinki-NLP/opus-mt-en-to
Helsinki-NLP
2023-08-16T11:31:25Z
122
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "to", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-to * source languages: en * target languages: to * OPUS readme: [en-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-to/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.to | 56.3 | 0.689 |
Helsinki-NLP/opus-mt-en-tn
Helsinki-NLP
2023-08-16T11:31:24Z
142
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "tn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-tn * source languages: en * target languages: tn * OPUS readme: [en-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.tn | 45.5 | 0.636 |
Helsinki-NLP/opus-mt-en-sw
Helsinki-NLP
2023-08-16T11:31:16Z
24,559
7
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-sw * source languages: en * target languages: sw * OPUS readme: [en-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | GlobalVoices.en.sw | 24.2 | 0.527 |
Nextcloud-AI/opus-mt-en-sv
Nextcloud-AI
2023-08-16T11:31:15Z
109
0
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:40:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-sv * source languages: en * target languages: sv * OPUS readme: [en-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.sv | 60.1 | 0.736 |
Helsinki-NLP/opus-mt-en-sk
Helsinki-NLP
2023-08-16T11:31:06Z
31,601
4
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-sk * source languages: en * target languages: sk * OPUS readme: [en-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.sk | 36.8 | 0.578 |
Helsinki-NLP/opus-mt-en-sem
Helsinki-NLP
2023-08-16T11:31:03Z
109
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "mt", "ar", "he", "ti", "am", "sem", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - mt - ar - he - ti - am - sem tags: - translation license: apache-2.0 --- ### eng-sem * source group: English * target group: Semitic languages * OPUS readme: [eng-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sem/README.md) * model: transformer * source language(s): eng * target language(s): acm afb amh apc ara arq ary arz heb mlt tir * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-amh.eng.amh | 11.2 | 0.480 | | Tatoeba-test.eng-ara.eng.ara | 12.7 | 0.417 | | Tatoeba-test.eng-heb.eng.heb | 33.8 | 0.564 | | Tatoeba-test.eng-mlt.eng.mlt | 18.7 | 0.554 | | Tatoeba-test.eng.multi | 23.5 | 0.486 | | Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.248 | ### System Info: - hf_name: eng-sem - source_languages: eng - target_languages: sem - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sem/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'mt', 'ar', 'he', 'ti', 'am', 'sem'] - src_constituents: {'eng'} - tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: sem - short_pair: en-sem - chrF2_score: 0.486 - bleu: 23.5 - brevity_penalty: 1.0 - ref_len: 59258.0 - src_name: English - tgt_name: Semitic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: sem - prefer_old: False - long_pair: eng-sem - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-sal
Helsinki-NLP
2023-08-16T11:31:02Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sal", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - sal tags: - translation license: apache-2.0 --- ### eng-sal * source group: English * target group: Salishan languages * OPUS readme: [eng-sal](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sal/README.md) * model: transformer * source language(s): eng * target language(s): shs_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.zip) * test set translations: [opus-2020-07-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.test.txt) * test set scores: [opus-2020-07-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.multi | 32.6 | 0.585 | | Tatoeba-test.eng.shs | 1.1 | 0.072 | | Tatoeba-test.eng-shs.eng.shs | 1.2 | 0.065 | ### System Info: - hf_name: eng-sal - source_languages: eng - target_languages: sal - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sal/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'sal'] - src_constituents: {'eng'} - tgt_constituents: {'shs_Latn'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.test.txt - src_alpha3: eng - tgt_alpha3: sal - short_pair: en-sal - chrF2_score: 0.07200000000000001 - bleu: 1.1 - brevity_penalty: 1.0 - ref_len: 199.0 - src_name: English - tgt_name: Salishan languages - train_date: 2020-07-14 - src_alpha2: en - tgt_alpha2: sal - prefer_old: False - long_pair: eng-sal - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-rw
Helsinki-NLP
2023-08-16T11:31:00Z
114
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "rw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-rw * source languages: en * target languages: rw * OPUS readme: [en-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.rw | 33.3 | 0.569 | | Tatoeba.en.rw | 13.8 | 0.503 |
Helsinki-NLP/opus-mt-en-run
Helsinki-NLP
2023-08-16T11:30:59Z
110
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "run", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-run * source languages: en * target languages: run * OPUS readme: [en-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-run/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.run | 34.2 | 0.591 |
Helsinki-NLP/opus-mt-en-ru
Helsinki-NLP
2023-08-16T11:30:58Z
81,689
74
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "en", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ru * source languages: en * target languages: ru * OPUS readme: [en-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ru/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.zip) * test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.test.txt) * test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newstest2012.en.ru | 31.1 | 0.581 | | newstest2013.en.ru | 23.5 | 0.513 | | newstest2015-enru.en.ru | 27.5 | 0.564 | | newstest2016-enru.en.ru | 26.4 | 0.548 | | newstest2017-enru.en.ru | 29.1 | 0.572 | | newstest2018-enru.en.ru | 25.4 | 0.554 | | newstest2019-enru.en.ru | 27.1 | 0.533 | | Tatoeba.en.ru | 48.4 | 0.669 |
Helsinki-NLP/opus-mt-en-ro
Helsinki-NLP
2023-08-16T11:30:56Z
31,002
5
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ro", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ro * source languages: en * target languages: ro * OPUS readme: [en-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ro/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-enro.en.ro | 30.8 | 0.592 | | newstest2016-enro.en.ro | 28.8 | 0.571 | | Tatoeba.en.ro | 45.3 | 0.670 |
Helsinki-NLP/opus-mt-en-rnd
Helsinki-NLP
2023-08-16T11:30:55Z
120
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "rnd", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-rnd * source languages: en * target languages: rnd * OPUS readme: [en-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rnd/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.rnd | 34.5 | 0.571 |
Helsinki-NLP/opus-mt-en-rn
Helsinki-NLP
2023-08-16T11:30:54Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "rn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - rn tags: - translation license: apache-2.0 --- ### eng-run * source group: English * target group: Rundi * OPUS readme: [eng-run](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-run/README.md) * model: transformer-align * source language(s): eng * target language(s): run * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.run | 10.4 | 0.436 | ### System Info: - hf_name: eng-run - source_languages: eng - target_languages: run - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-run/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'rn'] - src_constituents: {'eng'} - tgt_constituents: {'run'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.test.txt - src_alpha3: eng - tgt_alpha3: run - short_pair: en-rn - chrF2_score: 0.436 - bleu: 10.4 - brevity_penalty: 1.0 - ref_len: 6710.0 - src_name: English - tgt_name: Rundi - train_date: 2020-06-16 - src_alpha2: en - tgt_alpha2: rn - prefer_old: False - long_pair: eng-run - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-poz
Helsinki-NLP
2023-08-16T11:30:51Z
114
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "poz", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - poz tags: - translation license: apache-2.0 --- ### eng-poz * source group: English * target group: Malayo-Polynesian languages * OPUS readme: [eng-poz](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-poz/README.md) * model: transformer * source language(s): eng * target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-akl.eng.akl | 1.3 | 0.086 | | Tatoeba-test.eng-ceb.eng.ceb | 10.2 | 0.426 | | Tatoeba-test.eng-cha.eng.cha | 1.9 | 0.196 | | Tatoeba-test.eng-dtp.eng.dtp | 0.4 | 0.121 | | Tatoeba-test.eng-fij.eng.fij | 31.0 | 0.463 | | Tatoeba-test.eng-gil.eng.gil | 45.4 | 0.635 | | Tatoeba-test.eng-haw.eng.haw | 0.6 | 0.104 | | Tatoeba-test.eng-hil.eng.hil | 14.4 | 0.498 | | Tatoeba-test.eng-iba.eng.iba | 17.4 | 0.414 | | Tatoeba-test.eng-ilo.eng.ilo | 33.1 | 0.585 | | Tatoeba-test.eng-jav.eng.jav | 6.5 | 0.309 | | Tatoeba-test.eng-lkt.eng.lkt | 0.5 | 0.065 | | Tatoeba-test.eng-mad.eng.mad | 1.7 | 0.156 | | Tatoeba-test.eng-mah.eng.mah | 12.7 | 0.391 | | Tatoeba-test.eng-mlg.eng.mlg | 30.3 | 0.504 | | Tatoeba-test.eng-mri.eng.mri | 8.2 | 0.316 | | Tatoeba-test.eng-msa.eng.msa | 30.4 | 0.561 | | Tatoeba-test.eng.multi | 16.2 | 0.410 | | Tatoeba-test.eng-nau.eng.nau | 0.6 | 0.087 | | Tatoeba-test.eng-niu.eng.niu | 33.2 | 0.482 | | Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.555 | | Tatoeba-test.eng-pau.eng.pau | 1.0 | 0.124 | | Tatoeba-test.eng-rap.eng.rap | 1.4 | 0.090 | | Tatoeba-test.eng-smo.eng.smo | 12.9 | 0.407 | | Tatoeba-test.eng-sun.eng.sun | 15.5 | 0.364 | | Tatoeba-test.eng-tah.eng.tah | 9.5 | 0.295 | | Tatoeba-test.eng-tet.eng.tet | 1.2 | 0.146 | | Tatoeba-test.eng-ton.eng.ton | 23.7 | 0.484 | | Tatoeba-test.eng-tvl.eng.tvl | 32.5 | 0.549 | | Tatoeba-test.eng-war.eng.war | 12.6 | 0.432 | ### System Info: - hf_name: eng-poz - source_languages: eng - target_languages: poz - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-poz/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'poz'] - src_constituents: {'eng'} - tgt_constituents: set() - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: poz - short_pair: en-poz - chrF2_score: 0.41 - bleu: 16.2 - brevity_penalty: 1.0 - ref_len: 66803.0 - src_name: English - tgt_name: Malayo-Polynesian languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: poz - prefer_old: False - long_pair: eng-poz - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - 
transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-pon
Helsinki-NLP
2023-08-16T11:30:50Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "pon", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-pon * source languages: en * target languages: pon * OPUS readme: [en-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pon/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.pon | 32.4 | 0.542 |
Helsinki-NLP/opus-mt-en-phi
Helsinki-NLP
2023-08-16T11:30:47Z
116
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "phi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - phi tags: - translation license: apache-2.0 --- ### eng-phi * source group: English * target group: Philippine languages * OPUS readme: [eng-phi](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md) * model: transformer * source language(s): eng * target language(s): akl_Latn ceb hil ilo pag war * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-akl.eng.akl | 7.1 | 0.245 | | Tatoeba-test.eng-ceb.eng.ceb | 10.5 | 0.435 | | Tatoeba-test.eng-hil.eng.hil | 18.0 | 0.506 | | Tatoeba-test.eng-ilo.eng.ilo | 33.4 | 0.590 | | Tatoeba-test.eng.multi | 13.1 | 0.392 | | Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.481 | | Tatoeba-test.eng-war.eng.war | 12.8 | 0.441 | ### System Info: - hf_name: eng-phi - source_languages: eng - target_languages: phi - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'phi'] - src_constituents: {'eng'} - tgt_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: phi - short_pair: en-phi - chrF2_score: 0.392 - bleu: 13.1 - brevity_penalty: 1.0 - ref_len: 30022.0 - src_name: English - tgt_name: Philippine languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: phi - prefer_old: False - long_pair: eng-phi - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-pag
Helsinki-NLP
2023-08-16T11:30:45Z
182
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "pag", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-pag * source languages: en * target languages: pag * OPUS readme: [en-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pag/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.pag | 37.9 | 0.598 |
Helsinki-NLP/opus-mt-en-ny
Helsinki-NLP
2023-08-16T11:30:42Z
134
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ny", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ny * source languages: en * target languages: ny * OPUS readme: [en-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ny/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ny | 31.4 | 0.570 | | Tatoeba.en.ny | 26.8 | 0.645 |
Helsinki-NLP/opus-mt-en-nso
Helsinki-NLP
2023-08-16T11:30:41Z
112
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "nso", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-nso * source languages: en * target languages: nso * OPUS readme: [en-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nso/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.nso | 52.2 | 0.684 |
Nextcloud-AI/opus-mt-en-nl
Nextcloud-AI
2023-08-16T11:30:40Z
103
0
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:39:56Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-nl * source languages: en * target languages: nl * OPUS readme: [en-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.zip) * test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.test.txt) * test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.nl | 57.1 | 0.730 |
Helsinki-NLP/opus-mt-en-niu
Helsinki-NLP
2023-08-16T11:30:39Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "niu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-niu * source languages: en * target languages: niu * OPUS readme: [en-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-niu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.niu | 53.0 | 0.698 |
Helsinki-NLP/opus-mt-en-nic
Helsinki-NLP
2023-08-16T11:30:38Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - sn - rw - wo - ig - sg - ee - zu - lg - ts - ln - ny - yo - rn - xh - nic tags: - translation license: apache-2.0 --- ### eng-nic * source group: English * target group: Niger-Kordofanian languages * OPUS readme: [eng-nic](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-nic/README.md) * model: transformer * source language(s): eng * target language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-bam.eng.bam | 6.2 | 0.029 | | Tatoeba-test.eng-ewe.eng.ewe | 4.5 | 0.258 | | Tatoeba-test.eng-ful.eng.ful | 0.5 | 0.073 | | Tatoeba-test.eng-ibo.eng.ibo | 3.9 | 0.267 | | Tatoeba-test.eng-kin.eng.kin | 6.4 | 0.475 | | Tatoeba-test.eng-lin.eng.lin | 1.2 | 0.308 | | Tatoeba-test.eng-lug.eng.lug | 3.9 | 0.405 | | Tatoeba-test.eng.multi | 11.1 | 0.427 | | Tatoeba-test.eng-nya.eng.nya | 14.0 | 0.622 | | Tatoeba-test.eng-run.eng.run | 13.6 | 0.477 | | Tatoeba-test.eng-sag.eng.sag | 5.5 | 0.199 | | Tatoeba-test.eng-sna.eng.sna | 19.6 | 0.557 | | Tatoeba-test.eng-swa.eng.swa | 1.8 | 0.163 | | Tatoeba-test.eng-toi.eng.toi | 8.3 | 0.231 | | Tatoeba-test.eng-tso.eng.tso | 50.0 | 0.789 | | Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.342 | | Tatoeba-test.eng-wol.eng.wol | 6.7 | 0.143 | | Tatoeba-test.eng-xho.eng.xho | 26.4 | 0.620 | | Tatoeba-test.eng-yor.eng.yor | 15.5 | 0.342 | | Tatoeba-test.eng-zul.eng.zul | 35.9 | 0.750 | ### System Info: - hf_name: eng-nic - source_languages: eng - target_languages: nic - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-nic/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic'] - src_constituents: {'eng'} - tgt_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: nic - short_pair: en-nic - chrF2_score: 0.42700000000000005 - bleu: 11.1 - brevity_penalty: 1.0 - ref_len: 10625.0 - src_name: English - tgt_name: Niger-Kordofanian languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: nic - prefer_old: False - long_pair: eng-nic - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-ng
Helsinki-NLP
2023-08-16T11:30:37Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ng", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ng * source languages: en * target languages: ng * OPUS readme: [en-ng](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ng/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ng | 24.8 | 0.496 |
Helsinki-NLP/opus-mt-en-mt
Helsinki-NLP
2023-08-16T11:30:34Z
28,743
3
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "mt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-mt * source languages: en * target languages: mt * OPUS readme: [en-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mt/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mt | 47.5 | 0.640 | | Tatoeba.en.mt | 25.0 | 0.620 |
Helsinki-NLP/opus-mt-en-mos
Helsinki-NLP
2023-08-16T11:30:32Z
113
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "mos", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-mos * source languages: en * target languages: mos * OPUS readme: [en-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mos/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mos | 26.9 | 0.417 |
Helsinki-NLP/opus-mt-en-ml
Helsinki-NLP
2023-08-16T11:30:31Z
1,490
4
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ml", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ml * source languages: en * target languages: ml * OPUS readme: [en-ml](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ml/README.md) * dataset: opus+bt+bt * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus+bt+bt-2020-04-28.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.zip) * test set translations: [opus+bt+bt-2020-04-28.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.test.txt) * test set scores: [opus+bt+bt-2020-04-28.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.ml | 19.1 | 0.536 |
Helsinki-NLP/opus-mt-en-mkh
Helsinki-NLP
2023-08-16T11:30:30Z
144
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "vi", "km", "mkh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - vi - km - mkh tags: - translation license: apache-2.0 --- ### eng-mkh * source group: English * target group: Mon-Khmer languages * OPUS readme: [eng-mkh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md) * model: transformer * source language(s): eng * target language(s): kha khm khm_Latn mnw vie vie_Hani * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kha.eng.kha | 0.1 | 0.015 | | Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.226 | | Tatoeba-test.eng-mnw.eng.mnw | 0.7 | 0.003 | | Tatoeba-test.eng.multi | 16.5 | 0.330 | | Tatoeba-test.eng-vie.eng.vie | 33.7 | 0.513 | ### System Info: - hf_name: eng-mkh - source_languages: eng - target_languages: mkh - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'vi', 'km', 'mkh'] - src_constituents: {'eng'} - tgt_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: mkh - short_pair: en-mkh - chrF2_score: 0.33 - bleu: 16.5 - brevity_penalty: 1.0 - ref_len: 34734.0 - src_name: English - tgt_name: Mon-Khmer languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: mkh - prefer_old: False - long_pair: eng-mkh - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-mh
Helsinki-NLP
2023-08-16T11:30:28Z
123
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "mh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-mh * source languages: en * target languages: mh * OPUS readme: [en-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mh | 29.7 | 0.479 |
Helsinki-NLP/opus-mt-en-map
Helsinki-NLP
2023-08-16T11:30:24Z
122
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "map", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - map tags: - translation license: apache-2.0 --- ### eng-map * source group: English * target group: Austronesian languages * OPUS readme: [eng-map](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md) * model: transformer * source language(s): eng * target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-akl.eng.akl | 2.2 | 0.103 | | Tatoeba-test.eng-ceb.eng.ceb | 10.7 | 0.425 | | Tatoeba-test.eng-cha.eng.cha | 3.2 | 0.201 | | Tatoeba-test.eng-dtp.eng.dtp | 0.5 | 0.120 | | Tatoeba-test.eng-fij.eng.fij | 26.8 | 0.453 | | Tatoeba-test.eng-gil.eng.gil | 59.3 | 0.762 | | Tatoeba-test.eng-haw.eng.haw | 1.0 | 0.116 | | Tatoeba-test.eng-hil.eng.hil | 19.0 | 0.517 | | Tatoeba-test.eng-iba.eng.iba | 15.5 | 0.400 | | Tatoeba-test.eng-ilo.eng.ilo | 33.6 | 0.591 | | Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.301 | | Tatoeba-test.eng-lkt.eng.lkt | 1.0 | 0.064 | | Tatoeba-test.eng-mad.eng.mad | 1.1 | 0.142 | | Tatoeba-test.eng-mah.eng.mah | 9.1 | 0.374 | | Tatoeba-test.eng-mlg.eng.mlg | 35.4 | 0.526 | | Tatoeba-test.eng-mri.eng.mri | 7.6 | 0.309 | | Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.565 | | Tatoeba-test.eng.multi | 17.6 | 0.411 | | Tatoeba-test.eng-nau.eng.nau | 1.4 | 0.098 | | Tatoeba-test.eng-niu.eng.niu | 40.1 | 0.560 | | Tatoeba-test.eng-pag.eng.pag | 16.8 | 0.526 | | Tatoeba-test.eng-pau.eng.pau | 1.9 | 0.139 | | Tatoeba-test.eng-rap.eng.rap | 2.7 | 0.090 | | Tatoeba-test.eng-smo.eng.smo | 24.9 | 0.453 | | Tatoeba-test.eng-sun.eng.sun | 33.2 | 0.439 | | Tatoeba-test.eng-tah.eng.tah | 12.5 | 0.278 | | Tatoeba-test.eng-tet.eng.tet | 1.6 | 0.140 | | Tatoeba-test.eng-ton.eng.ton | 25.8 | 0.530 | | Tatoeba-test.eng-tvl.eng.tvl | 31.1 | 0.523 | | Tatoeba-test.eng-war.eng.war | 12.8 | 0.436 | ### System Info: - hf_name: eng-map - source_languages: eng - target_languages: map - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'map'] - src_constituents: {'eng'} - tgt_constituents: set() - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: map - short_pair: en-map - chrF2_score: 0.41100000000000003 - bleu: 17.6 - brevity_penalty: 1.0 - ref_len: 66963.0 - src_name: English - tgt_name: Austronesian languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: map - prefer_old: False - long_pair: eng-map - helsinki_git_sha: 
480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-luo
Helsinki-NLP
2023-08-16T11:30:22Z
113
3
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "luo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-luo * source languages: en * target languages: luo * OPUS readme: [en-luo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-luo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.luo | 27.6 | 0.495 |
Helsinki-NLP/opus-mt-en-lun
Helsinki-NLP
2023-08-16T11:30:21Z
108
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lun", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-lun * source languages: en * target languages: lun * OPUS readme: [en-lun](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lun/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lun | 28.9 | 0.552 |
Helsinki-NLP/opus-mt-en-lue
Helsinki-NLP
2023-08-16T11:30:20Z
118
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-lue * source languages: en * target languages: lue * OPUS readme: [en-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lue/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lue | 30.1 | 0.558 |
Helsinki-NLP/opus-mt-en-lua
Helsinki-NLP
2023-08-16T11:30:19Z
132
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lua", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-lua * source languages: en * target languages: lua * OPUS readme: [en-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lua/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lua | 35.3 | 0.578 |
Helsinki-NLP/opus-mt-en-lu
Helsinki-NLP
2023-08-16T11:30:18Z
148
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-lu * source languages: en * target languages: lu * OPUS readme: [en-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lu | 34.1 | 0.564 |
Helsinki-NLP/opus-mt-en-lg
Helsinki-NLP
2023-08-16T11:30:14Z
247
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-lg * source languages: en * target languages: lg * OPUS readme: [en-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lg | 30.4 | 0.543 | | Tatoeba.en.lg | 5.7 | 0.386 |
Helsinki-NLP/opus-mt-en-kwy
Helsinki-NLP
2023-08-16T11:30:13Z
166
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kwy", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-kwy * source languages: en * target languages: kwy * OPUS readme: [en-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwy/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kwy | 33.6 | 0.543 |
Helsinki-NLP/opus-mt-en-kwn
Helsinki-NLP
2023-08-16T11:30:12Z
127
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kwn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-kwn * source languages: en * target languages: kwn * OPUS readme: [en-kwn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kwn | 27.6 | 0.513 |
Helsinki-NLP/opus-mt-en-kqn
Helsinki-NLP
2023-08-16T11:30:11Z
161
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kqn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-kqn * source languages: en * target languages: kqn * OPUS readme: [en-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kqn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kqn | 33.1 | 0.567 |
Helsinki-NLP/opus-mt-en-kj
Helsinki-NLP
2023-08-16T11:30:10Z
125
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "kj", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-kj * source languages: en * target languages: kj * OPUS readme: [en-kj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kj | 29.6 | 0.539 |
Helsinki-NLP/opus-mt-en-jap
Helsinki-NLP
2023-08-16T11:30:07Z
9,790
8
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "jap", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-jap * source languages: en * target languages: jap * OPUS readme: [en-jap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-jap/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | bible-uedin.en.jap | 42.1 | 0.960 |
Helsinki-NLP/opus-mt-en-itc
Helsinki-NLP
2023-08-16T11:30:06Z
115
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - it - ca - rm - es - ro - gl - sc - co - wa - pt - oc - an - id - fr - ht - itc tags: - translation license: apache-2.0 --- ### eng-itc * source group: English * target group: Italic languages * OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md) * model: transformer * source language(s): eng * target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 | | newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 | | newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 | | newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 | | newssyscomb2009-engita.eng.ita | 28.6 | 0.586 | | newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 | | news-test2008-engfra.eng.fra | 25.0 | 0.536 | | news-test2008-engspa.eng.spa | 27.1 | 0.548 | | newstest2009-engfra.eng.fra | 26.7 | 0.557 | | newstest2009-engita.eng.ita | 28.9 | 0.583 | | newstest2009-engspa.eng.spa | 28.9 | 0.567 | | newstest2010-engfra.eng.fra | 29.6 | 0.574 | | newstest2010-engspa.eng.spa | 33.8 | 0.598 | | newstest2011-engfra.eng.fra | 30.9 | 0.590 | | newstest2011-engspa.eng.spa | 34.8 | 0.598 | | newstest2012-engfra.eng.fra | 29.1 | 0.574 | | newstest2012-engspa.eng.spa | 34.9 | 0.600 | | newstest2013-engfra.eng.fra | 30.1 | 0.567 | | newstest2013-engspa.eng.spa | 31.8 | 0.576 | | newstest2016-enro-engron.eng.ron | 25.9 | 0.548 | | Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 | | Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 | | Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 | | Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 | | Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 | | Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 | | Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 | | Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 | | Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 | | Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 | | Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 | | Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 | | Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 | | Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 | | Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 | | Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 | | Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 | | Tatoeba-test.eng-msa.eng.msa | 31.3 | 0.549 | | Tatoeba-test.eng.multi | 38.0 | 0.588 | | Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 | | Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 | | Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 | | Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 | | Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 | | Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 | | Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 
0.181 | | Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 | | Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 | | Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 | ### System Info: - hf_name: eng-itc - source_languages: eng - target_languages: itc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc'] - src_constituents: {'eng'} - tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: itc - short_pair: en-itc - chrF2_score: 0.588 - bleu: 38.0 - brevity_penalty: 0.9670000000000001 - ref_len: 73951.0 - src_name: English - tgt_name: Italic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: itc - prefer_old: False - long_pair: eng-itc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-it
Helsinki-NLP
2023-08-16T11:30:05Z
157,454
17
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-it * source languages: en * target languages: it * OPUS readme: [en-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-it/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.zip) * test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.test.txt) * test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.it | 30.9 | 0.606 | | newstest2009.en.it | 31.9 | 0.604 | | Tatoeba.en.it | 48.2 | 0.695 |
Helsinki-NLP/opus-mt-en-is
Helsinki-NLP
2023-08-16T11:30:02Z
1,193
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "is", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-is * source languages: en * target languages: is * OPUS readme: [en-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-is/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.is | 25.3 | 0.518 |
Helsinki-NLP/opus-mt-en-ine
Helsinki-NLP
2023-08-16T11:30:01Z
491
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - ca - es - os - ro - fy - cy - sc - is - yi - lb - an - sq - fr - ht - rm - ps - af - uk - sl - lt - bg - be - gd - si - br - mk - or - mr - ru - fo - co - oc - pl - gl - nb - bn - id - hy - da - gv - nl - pt - hi - as - kw - ga - sv - gu - wa - lv - el - it - hr - ur - nn - de - cs - ine tags: - translation license: apache-2.0 --- ### eng-ine * source group: English * target group: Indo-European languages * OPUS readme: [eng-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md) * model: transformer * source language(s): eng * target language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.2 | 0.317 | | newsdev2016-enro-engron.eng.ron | 22.1 | 0.525 | | newsdev2017-enlv-englav.eng.lav | 17.4 | 0.486 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.303 | | newsdev2019-enlt-englit.eng.lit | 14.9 | 0.476 | | newsdiscussdev2015-enfr-engfra.eng.fra | 26.4 | 0.547 | | newsdiscusstest2015-enfr-engfra.eng.fra | 30.0 | 0.575 | | newssyscomb2009-engces.eng.ces | 14.7 | 0.442 | | newssyscomb2009-engdeu.eng.deu | 16.7 | 0.487 | | newssyscomb2009-engfra.eng.fra | 24.8 | 0.547 | | newssyscomb2009-engita.eng.ita | 25.2 | 0.562 | | newssyscomb2009-engspa.eng.spa | 27.0 | 0.554 | | news-test2008-engces.eng.ces | 13.0 | 0.417 | | news-test2008-engdeu.eng.deu | 17.4 | 0.480 | | news-test2008-engfra.eng.fra | 22.3 | 0.519 | | news-test2008-engspa.eng.spa | 24.9 | 0.532 | | newstest2009-engces.eng.ces | 13.6 | 0.432 | | newstest2009-engdeu.eng.deu | 16.6 | 0.482 | | newstest2009-engfra.eng.fra | 23.5 | 0.535 | | newstest2009-engita.eng.ita | 25.5 | 0.561 | | newstest2009-engspa.eng.spa | 26.3 | 0.551 | | newstest2010-engces.eng.ces | 14.2 | 0.436 | | newstest2010-engdeu.eng.deu | 18.3 | 0.492 | | newstest2010-engfra.eng.fra | 25.7 | 0.550 | | newstest2010-engspa.eng.spa | 30.5 | 0.578 | | newstest2011-engces.eng.ces | 15.1 | 0.439 | | newstest2011-engdeu.eng.deu | 17.1 | 0.478 | | newstest2011-engfra.eng.fra | 28.0 | 0.569 | | newstest2011-engspa.eng.spa | 31.9 | 0.580 | | newstest2012-engces.eng.ces | 13.6 | 0.418 | | newstest2012-engdeu.eng.deu | 17.0 | 0.475 | | newstest2012-engfra.eng.fra | 26.1 | 0.553 | | newstest2012-engrus.eng.rus | 21.4 
| 0.506 | | newstest2012-engspa.eng.spa | 31.4 | 0.577 | | newstest2013-engces.eng.ces | 15.3 | 0.438 | | newstest2013-engdeu.eng.deu | 20.3 | 0.501 | | newstest2013-engfra.eng.fra | 26.0 | 0.540 | | newstest2013-engrus.eng.rus | 16.1 | 0.449 | | newstest2013-engspa.eng.spa | 28.6 | 0.555 | | newstest2014-hien-enghin.eng.hin | 9.5 | 0.344 | | newstest2015-encs-engces.eng.ces | 14.8 | 0.440 | | newstest2015-ende-engdeu.eng.deu | 22.6 | 0.523 | | newstest2015-enru-engrus.eng.rus | 18.8 | 0.483 | | newstest2016-encs-engces.eng.ces | 16.8 | 0.457 | | newstest2016-ende-engdeu.eng.deu | 26.2 | 0.555 | | newstest2016-enro-engron.eng.ron | 21.2 | 0.510 | | newstest2016-enru-engrus.eng.rus | 17.6 | 0.471 | | newstest2017-encs-engces.eng.ces | 13.6 | 0.421 | | newstest2017-ende-engdeu.eng.deu | 21.5 | 0.516 | | newstest2017-enlv-englav.eng.lav | 13.0 | 0.452 | | newstest2017-enru-engrus.eng.rus | 18.7 | 0.486 | | newstest2018-encs-engces.eng.ces | 13.5 | 0.425 | | newstest2018-ende-engdeu.eng.deu | 29.8 | 0.581 | | newstest2018-enru-engrus.eng.rus | 16.1 | 0.472 | | newstest2019-encs-engces.eng.ces | 14.8 | 0.435 | | newstest2019-ende-engdeu.eng.deu | 26.6 | 0.554 | | newstest2019-engu-engguj.eng.guj | 6.9 | 0.313 | | newstest2019-enlt-englit.eng.lit | 10.6 | 0.429 | | newstest2019-enru-engrus.eng.rus | 17.5 | 0.452 | | Tatoeba-test.eng-afr.eng.afr | 52.1 | 0.708 | | Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.131 | | Tatoeba-test.eng-arg.eng.arg | 1.2 | 0.099 | | Tatoeba-test.eng-asm.eng.asm | 2.9 | 0.259 | | Tatoeba-test.eng-ast.eng.ast | 14.1 | 0.408 | | Tatoeba-test.eng-awa.eng.awa | 0.3 | 0.002 | | Tatoeba-test.eng-bel.eng.bel | 18.1 | 0.450 | | Tatoeba-test.eng-ben.eng.ben | 13.5 | 0.432 | | Tatoeba-test.eng-bho.eng.bho | 0.3 | 0.003 | | Tatoeba-test.eng-bre.eng.bre | 10.4 | 0.318 | | Tatoeba-test.eng-bul.eng.bul | 38.7 | 0.592 | | Tatoeba-test.eng-cat.eng.cat | 42.0 | 0.633 | | Tatoeba-test.eng-ces.eng.ces | 32.3 | 0.546 | | Tatoeba-test.eng-cor.eng.cor | 0.5 | 0.079 | | Tatoeba-test.eng-cos.eng.cos | 3.1 | 0.148 | | Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.216 | | Tatoeba-test.eng-cym.eng.cym | 22.4 | 0.470 | | Tatoeba-test.eng-dan.eng.dan | 49.7 | 0.671 | | Tatoeba-test.eng-deu.eng.deu | 31.7 | 0.554 | | Tatoeba-test.eng-dsb.eng.dsb | 1.1 | 0.139 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.089 | | Tatoeba-test.eng-ell.eng.ell | 42.7 | 0.640 | | Tatoeba-test.eng-enm.eng.enm | 3.5 | 0.259 | | Tatoeba-test.eng-ext.eng.ext | 6.4 | 0.235 | | Tatoeba-test.eng-fao.eng.fao | 6.6 | 0.285 | | Tatoeba-test.eng-fas.eng.fas | 5.7 | 0.257 | | Tatoeba-test.eng-fra.eng.fra | 38.4 | 0.595 | | Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.149 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.145 | | Tatoeba-test.eng-fry.eng.fry | 16.5 | 0.411 | | Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.098 | | Tatoeba-test.eng-gla.eng.gla | 11.6 | 0.361 | | Tatoeba-test.eng-gle.eng.gle | 32.5 | 0.546 | | Tatoeba-test.eng-glg.eng.glg | 38.4 | 0.602 | | Tatoeba-test.eng-glv.eng.glv | 23.1 | 0.418 | | Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.137 | | Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 | | Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.108 | | Tatoeba-test.eng-guj.eng.guj | 20.8 | 0.391 | | Tatoeba-test.eng-hat.eng.hat | 34.0 | 0.537 | | Tatoeba-test.eng-hbs.eng.hbs | 33.7 | 0.567 | | Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.269 | | Tatoeba-test.eng-hin.eng.hin | 15.6 | 0.437 | | Tatoeba-test.eng-hsb.eng.hsb | 5.4 | 0.320 | | Tatoeba-test.eng-hye.eng.hye | 17.4 | 0.426 | | Tatoeba-test.eng-isl.eng.isl | 17.4 | 
0.436 | | Tatoeba-test.eng-ita.eng.ita | 40.4 | 0.636 | | Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 | | Tatoeba-test.eng-kok.eng.kok | 6.6 | 0.005 | | Tatoeba-test.eng-ksh.eng.ksh | 0.8 | 0.123 | | Tatoeba-test.eng-kur.eng.kur | 10.2 | 0.209 | | Tatoeba-test.eng-lad.eng.lad | 0.8 | 0.163 | | Tatoeba-test.eng-lah.eng.lah | 0.2 | 0.001 | | Tatoeba-test.eng-lat.eng.lat | 9.4 | 0.372 | | Tatoeba-test.eng-lav.eng.lav | 30.3 | 0.559 | | Tatoeba-test.eng-lij.eng.lij | 1.0 | 0.130 | | Tatoeba-test.eng-lit.eng.lit | 25.3 | 0.560 | | Tatoeba-test.eng-lld.eng.lld | 0.4 | 0.139 | | Tatoeba-test.eng-lmo.eng.lmo | 0.6 | 0.108 | | Tatoeba-test.eng-ltz.eng.ltz | 18.1 | 0.388 | | Tatoeba-test.eng-mai.eng.mai | 17.2 | 0.464 | | Tatoeba-test.eng-mar.eng.mar | 18.0 | 0.451 | | Tatoeba-test.eng-mfe.eng.mfe | 81.0 | 0.899 | | Tatoeba-test.eng-mkd.eng.mkd | 37.6 | 0.587 | | Tatoeba-test.eng-msa.eng.msa | 27.7 | 0.519 | | Tatoeba-test.eng.multi | 32.6 | 0.539 | | Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.134 | | Tatoeba-test.eng-nds.eng.nds | 14.3 | 0.401 | | Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.002 | | Tatoeba-test.eng-nld.eng.nld | 44.0 | 0.642 | | Tatoeba-test.eng-non.eng.non | 0.7 | 0.118 | | Tatoeba-test.eng-nor.eng.nor | 42.7 | 0.623 | | Tatoeba-test.eng-oci.eng.oci | 7.2 | 0.295 | | Tatoeba-test.eng-ori.eng.ori | 2.7 | 0.257 | | Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.008 | | Tatoeba-test.eng-oss.eng.oss | 2.9 | 0.264 | | Tatoeba-test.eng-pan.eng.pan | 7.4 | 0.337 | | Tatoeba-test.eng-pap.eng.pap | 48.5 | 0.656 | | Tatoeba-test.eng-pdc.eng.pdc | 1.8 | 0.145 | | Tatoeba-test.eng-pms.eng.pms | 0.7 | 0.136 | | Tatoeba-test.eng-pol.eng.pol | 31.1 | 0.563 | | Tatoeba-test.eng-por.eng.por | 37.0 | 0.605 | | Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.100 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.134 | | Tatoeba-test.eng-roh.eng.roh | 2.3 | 0.236 | | Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.340 | | Tatoeba-test.eng-ron.eng.ron | 34.3 | 0.585 | | Tatoeba-test.eng-rue.eng.rue | 0.2 | 0.010 | | Tatoeba-test.eng-rus.eng.rus | 29.6 | 0.526 | | Tatoeba-test.eng-san.eng.san | 2.4 | 0.125 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.079 | | Tatoeba-test.eng-sco.eng.sco | 33.6 | 0.562 | | Tatoeba-test.eng-sgs.eng.sgs | 3.4 | 0.114 | | Tatoeba-test.eng-sin.eng.sin | 9.2 | 0.349 | | Tatoeba-test.eng-slv.eng.slv | 15.6 | 0.334 | | Tatoeba-test.eng-snd.eng.snd | 9.1 | 0.324 | | Tatoeba-test.eng-spa.eng.spa | 43.4 | 0.645 | | Tatoeba-test.eng-sqi.eng.sqi | 39.0 | 0.621 | | Tatoeba-test.eng-stq.eng.stq | 10.8 | 0.373 | | Tatoeba-test.eng-swe.eng.swe | 49.9 | 0.663 | | Tatoeba-test.eng-swg.eng.swg | 0.7 | 0.137 | | Tatoeba-test.eng-tgk.eng.tgk | 6.4 | 0.346 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.055 | | Tatoeba-test.eng-ukr.eng.ukr | 31.4 | 0.536 | | Tatoeba-test.eng-urd.eng.urd | 11.1 | 0.389 | | Tatoeba-test.eng-vec.eng.vec | 1.3 | 0.110 | | Tatoeba-test.eng-wln.eng.wln | 6.8 | 0.233 | | Tatoeba-test.eng-yid.eng.yid | 5.8 | 0.295 | | Tatoeba-test.eng-zza.eng.zza | 0.8 | 0.086 | ### System Info: - hf_name: eng-ine - source_languages: eng - target_languages: ine - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 
'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine'] - src_constituents: {'eng'} - tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: ine - short_pair: en-ine - chrF2_score: 0.539 - bleu: 32.6 - brevity_penalty: 0.973 - ref_len: 68664.0 - src_name: English - tgt_name: Indo-European languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: ine - prefer_old: False - long_pair: eng-ine - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
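The card above notes that a sentence-initial language token of the form `>>id<<` is required to pick the target language of this multilingual model. A small sketch of that pattern, assuming the usual MarianMT loading code; the two target IDs (`hin`, `ita`) are taken from the card's target-language list and the sentence is illustrative only.

```python
# Sketch of the ">>id<<" target-language token for the multilingual en-ine model.
# "hin" (Hindi) and "ita" (Italian) appear in the card's target language list.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ine"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

texts = [
    ">>hin<< The weather is nice today.",  # translate into Hindi
    ">>ita<< The weather is nice today.",  # translate into Italian
]
batch = tokenizer(texts, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```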
Helsinki-NLP/opus-mt-en-inc
Helsinki-NLP
2023-08-16T11:30:00Z
129
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - bn - or - gu - mr - ur - hi - as - si - inc tags: - translation license: apache-2.0 --- ### eng-inc * source group: English * target group: Indic languages * OPUS readme: [eng-inc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin mai mar npi ori pan_Guru pnb rom san_Deva sin snd_Arab urd * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 8.2 | 0.342 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.293 | | newstest2014-hien-enghin.eng.hin | 11.4 | 0.364 | | newstest2019-engu-engguj.eng.guj | 7.2 | 0.296 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.277 | | Tatoeba-test.eng-awa.eng.awa | 0.5 | 0.132 | | Tatoeba-test.eng-ben.eng.ben | 16.7 | 0.470 | | Tatoeba-test.eng-bho.eng.bho | 4.3 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 17.5 | 0.373 | | Tatoeba-test.eng-hif.eng.hif | 0.6 | 0.028 | | Tatoeba-test.eng-hin.eng.hin | 17.7 | 0.469 | | Tatoeba-test.eng-kok.eng.kok | 1.7 | 0.000 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.028 | | Tatoeba-test.eng-mai.eng.mai | 15.6 | 0.429 | | Tatoeba-test.eng-mar.eng.mar | 21.3 | 0.477 | | Tatoeba-test.eng.multi | 17.3 | 0.448 | | Tatoeba-test.eng-nep.eng.nep | 0.8 | 0.081 | | Tatoeba-test.eng-ori.eng.ori | 2.2 | 0.208 | | Tatoeba-test.eng-pan.eng.pan | 8.0 | 0.347 | | Tatoeba-test.eng-rom.eng.rom | 0.4 | 0.197 | | Tatoeba-test.eng-san.eng.san | 0.5 | 0.108 | | Tatoeba-test.eng-sin.eng.sin | 9.1 | 0.364 | | Tatoeba-test.eng-snd.eng.snd | 4.4 | 0.284 | | Tatoeba-test.eng-urd.eng.urd | 13.3 | 0.423 | ### System Info: - hf_name: eng-inc - source_languages: eng - target_languages: inc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: inc - short_pair: en-inc - chrF2_score: 0.44799999999999995 - bleu: 17.3 - brevity_penalty: 1.0 - ref_len: 59917.0 - src_name: English - tgt_name: Indic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: inc - prefer_old: False - long_pair: eng-inc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 
2020-08-21-14:41
Helsinki-NLP/opus-mt-en-iir
Helsinki-NLP
2023-08-16T11:29:58Z
141
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - bn - or - gu - mr - ur - hi - ps - os - as - si - iir tags: - translation license: apache-2.0 --- ### eng-iir * source group: English * target group: Indo-Iranian languages * OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.7 | 0.326 | | newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 | | newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 | | newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 | | Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 | | Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 | | Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 | | Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 | | Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 | | Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 | | Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 | | Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 | | Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 | | Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 | | Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 | | Tatoeba-test.eng.multi | 13.7 | 0.392 | | Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 | | Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 | | Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 | | Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 | | Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 | | Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 | | Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 | | Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 | | Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 | | Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 | | Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 | ### System Info: - hf_name: eng-iir - source_languages: eng - target_languages: iir - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: iir - short_pair: en-iir - chrF2_score: 0.392 - bleu: 13.7 - brevity_penalty: 1.0 - ref_len: 63351.0 - src_name: English - tgt_name: Indo-Iranian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: iir - prefer_old: False - long_pair: eng-iir - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
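Each card also links the original OPUS-MT artifacts (weights zip, test-set translations, evaluation log) directly. A hypothetical fetch helper, shown with the eng-iir URLs listed above and only the Python standard library; the helper itself is not part of the card.

```python
# Hypothetical download helper: fetch the original OPUS-MT weights and test
# files for eng-iir using the URLs listed in the card above.
import urllib.request

base = "https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/"
for name in ("opus2m-2020-08-01.zip",
             "opus2m-2020-08-01.test.txt",
             "opus2m-2020-08-01.eval.txt"):
    urllib.request.urlretrieve(base + name, name)  # saved in the current directory
    print("downloaded", name)
```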
Helsinki-NLP/opus-mt-en-ig
Helsinki-NLP
2023-08-16T11:29:57Z
207
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ig * source languages: en * target languages: ig * OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ig | 39.5 | 0.546 | | Tatoeba.en.ig | 3.8 | 0.297 |
Helsinki-NLP/opus-mt-en-hu
Helsinki-NLP
2023-08-16T11:29:54Z
2,676
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-hu * source languages: en * target languages: hu * OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.hu | 40.1 | 0.628 |
Helsinki-NLP/opus-mt-en-ht
Helsinki-NLP
2023-08-16T11:29:53Z
469
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ht", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ht * source languages: en * target languages: ht * OPUS readme: [en-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ht/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ht | 38.3 | 0.545 | | Tatoeba.en.ht | 45.2 | 0.592 |
Helsinki-NLP/opus-mt-en-hil
Helsinki-NLP
2023-08-16T11:29:50Z
115
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hil", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-hil * source languages: en * target languages: hil * OPUS readme: [en-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hil/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.hil | 49.4 | 0.696 |
Helsinki-NLP/opus-mt-en-hi
Helsinki-NLP
2023-08-16T11:29:49Z
15,308
32
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "en", "hi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - hi tags: - translation license: apache-2.0 --- ### eng-hin * source group: English * target group: Hindi * OPUS readme: [eng-hin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md) * model: transformer-align * source language(s): eng * target language(s): hin * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014.eng.hin | 6.9 | 0.296 | | newstest2014-hien.eng.hin | 9.9 | 0.323 | | Tatoeba-test.eng.hin | 16.1 | 0.447 | ### System Info: - hf_name: eng-hin - source_languages: eng - target_languages: hin - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hi'] - src_constituents: {'eng'} - tgt_constituents: {'hin'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: hin - short_pair: en-hi - chrF2_score: 0.447 - bleu: 16.1 - brevity_penalty: 1.0 - ref_len: 32904.0 - src_name: English - tgt_name: Hindi - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: hi - prefer_old: False - long_pair: eng-hin - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-he
Helsinki-NLP
2023-08-16T11:29:48Z
13,897
5
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "en", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-he * source languages: en * target languages: he * OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.he | 40.1 | 0.609 |
Helsinki-NLP/opus-mt-en-gv
Helsinki-NLP
2023-08-16T11:29:46Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-gv * source languages: en * target languages: gv * OPUS readme: [en-gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | bible-uedin.en.gv | 70.1 | 0.885 |
Helsinki-NLP/opus-mt-en-guw
Helsinki-NLP
2023-08-16T11:29:45Z
1,783
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "guw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-guw * source languages: en * target languages: guw * OPUS readme: [en-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-guw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.guw | 45.7 | 0.634 |
Helsinki-NLP/opus-mt-en-grk
Helsinki-NLP
2023-08-16T11:29:44Z
185
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "el", "grk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - el - grk tags: - translation license: apache-2.0 --- ### eng-grk * source group: English * target group: Greek languages * OPUS readme: [eng-grk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md) * model: transformer * source language(s): eng * target language(s): ell grc_Grek * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-ell.eng.ell | 53.8 | 0.723 | | Tatoeba-test.eng-grc.eng.grc | 0.1 | 0.102 | | Tatoeba-test.eng.multi | 45.6 | 0.677 | ### System Info: - hf_name: eng-grk - source_languages: eng - target_languages: grk - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'el', 'grk'] - src_constituents: {'eng'} - tgt_constituents: {'grc_Grek', 'ell'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: grk - short_pair: en-grk - chrF2_score: 0.677 - bleu: 45.6 - brevity_penalty: 1.0 - ref_len: 59951.0 - src_name: English - tgt_name: Greek languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: grk - prefer_old: False - long_pair: eng-grk - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-gmw
Helsinki-NLP
2023-08-16T11:29:43Z
128
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "nl", "lb", "af", "de", "fy", "yi", "gmw", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - nl - lb - af - de - fy - yi - gmw tags: - translation license: apache-2.0 --- ### eng-gmw * source group: English * target group: West Germanic languages * OPUS readme: [eng-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md) * model: transformer * source language(s): eng * target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engdeu.eng.deu | 21.4 | 0.518 | | news-test2008-engdeu.eng.deu | 21.0 | 0.510 | | newstest2009-engdeu.eng.deu | 20.4 | 0.513 | | newstest2010-engdeu.eng.deu | 22.9 | 0.528 | | newstest2011-engdeu.eng.deu | 20.5 | 0.508 | | newstest2012-engdeu.eng.deu | 21.0 | 0.507 | | newstest2013-engdeu.eng.deu | 24.7 | 0.533 | | newstest2015-ende-engdeu.eng.deu | 28.2 | 0.568 | | newstest2016-ende-engdeu.eng.deu | 33.3 | 0.605 | | newstest2017-ende-engdeu.eng.deu | 26.5 | 0.559 | | newstest2018-ende-engdeu.eng.deu | 39.9 | 0.649 | | newstest2019-ende-engdeu.eng.deu | 35.9 | 0.616 | | Tatoeba-test.eng-afr.eng.afr | 55.7 | 0.740 | | Tatoeba-test.eng-ang.eng.ang | 6.5 | 0.164 | | Tatoeba-test.eng-deu.eng.deu | 40.4 | 0.614 | | Tatoeba-test.eng-enm.eng.enm | 2.3 | 0.254 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.248 | | Tatoeba-test.eng-fry.eng.fry | 17.9 | 0.424 | | Tatoeba-test.eng-gos.eng.gos | 2.2 | 0.309 | | Tatoeba-test.eng-gsw.eng.gsw | 1.6 | 0.186 | | Tatoeba-test.eng-ksh.eng.ksh | 1.5 | 0.189 | | Tatoeba-test.eng-ltz.eng.ltz | 20.2 | 0.383 | | Tatoeba-test.eng.multi | 41.6 | 0.609 | | Tatoeba-test.eng-nds.eng.nds | 18.9 | 0.437 | | Tatoeba-test.eng-nld.eng.nld | 53.1 | 0.699 | | Tatoeba-test.eng-pdc.eng.pdc | 7.7 | 0.262 | | Tatoeba-test.eng-sco.eng.sco | 37.7 | 0.557 | | Tatoeba-test.eng-stq.eng.stq | 5.9 | 0.380 | | Tatoeba-test.eng-swg.eng.swg | 6.2 | 0.236 | | Tatoeba-test.eng-yid.eng.yid | 6.8 | 0.296 | ### System Info: - hf_name: eng-gmw - source_languages: eng - target_languages: gmw - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw'] - src_constituents: {'eng'} - tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gmw - short_pair: en-gmw - chrF2_score: 0.609 - bleu: 41.6 - brevity_penalty: 0.9890000000000001 - ref_len: 74922.0 - src_name: English - tgt_name: West Germanic languages - train_date: 
2020-08-01 - src_alpha2: en - tgt_alpha2: gmw - prefer_old: False - long_pair: eng-gmw - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-gl
Helsinki-NLP
2023-08-16T11:29:40Z
1,238
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-gl * source languages: en * target languages: gl * OPUS readme: [en-gl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.gl | 36.4 | 0.572 |
Helsinki-NLP/opus-mt-en-gil
Helsinki-NLP
2023-08-16T11:29:39Z
110
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gil", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-gil * source languages: en * target languages: gil * OPUS readme: [en-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gil/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.gil | 38.8 | 0.604 |
Helsinki-NLP/opus-mt-en-gaa
Helsinki-NLP
2023-08-16T11:29:37Z
109
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "gaa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-gaa * source languages: en * target languages: gaa * OPUS readme: [en-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gaa/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.gaa | 39.9 | 0.593 |
Nextcloud-AI/opus-mt-en-fr
Nextcloud-AI
2023-08-16T11:29:35Z
103
0
transformers
[ "transformers", "pytorch", "tf", "jax", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:39:40Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-fr * source languages: en * target languages: fr * OPUS readme: [en-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdiscussdev2015-enfr.en.fr | 33.8 | 0.602 | | newsdiscusstest2015-enfr.en.fr | 40.0 | 0.643 | | newssyscomb2009.en.fr | 29.8 | 0.584 | | news-test2008.en.fr | 27.5 | 0.554 | | newstest2009.en.fr | 29.4 | 0.577 | | newstest2010.en.fr | 32.7 | 0.596 | | newstest2011.en.fr | 34.3 | 0.611 | | newstest2012.en.fr | 31.8 | 0.592 | | newstest2013.en.fr | 33.2 | 0.589 | | Tatoeba.en.fr | 50.5 | 0.672 |
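For context on the BLEU / chr-F columns in these benchmark tables, a small scoring sketch assuming the sacrebleu package; the hypothesis and reference strings below are placeholders, not the opus-2020-02-26 test set itself.

```python
# Illustrative scoring only: sacrebleu reports both metrics on a 0-100 scale,
# while the tables above print chr-F as a 0-1 fraction, hence the division.
import sacrebleu

hypotheses = ["Le chat dort sur le tapis."]    # system outputs (placeholder)
references = [["Le chat dort sur le tapis."]]  # one reference stream (placeholder)

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU  = {bleu.score:.1f}")
print(f"chr-F = {chrf.score / 100:.3f}")
```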
Helsinki-NLP/opus-mt-en-fiu
Helsinki-NLP
2023-08-16T11:29:33Z
118
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "se", "fi", "hu", "et", "fiu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - se - fi - hu - et - fiu tags: - translation license: apache-2.0 --- ### eng-fiu * source group: English * target group: Finno-Ugrian languages * OPUS readme: [eng-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md) * model: transformer * source language(s): eng * target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-engfin.eng.fin | 18.7 | 0.522 | | newsdev2018-enet-engest.eng.est | 19.4 | 0.521 | | newssyscomb2009-enghun.eng.hun | 15.5 | 0.472 | | newstest2009-enghun.eng.hun | 15.4 | 0.468 | | newstest2015-enfi-engfin.eng.fin | 19.9 | 0.532 | | newstest2016-enfi-engfin.eng.fin | 21.1 | 0.544 | | newstest2017-enfi-engfin.eng.fin | 23.8 | 0.567 | | newstest2018-enet-engest.eng.est | 20.4 | 0.532 | | newstest2018-enfi-engfin.eng.fin | 15.6 | 0.498 | | newstest2019-enfi-engfin.eng.fin | 20.0 | 0.520 | | newstestB2016-enfi-engfin.eng.fin | 17.0 | 0.512 | | newstestB2017-enfi-engfin.eng.fin | 19.7 | 0.531 | | Tatoeba-test.eng-chm.eng.chm | 0.9 | 0.115 | | Tatoeba-test.eng-est.eng.est | 49.8 | 0.689 | | Tatoeba-test.eng-fin.eng.fin | 34.7 | 0.597 | | Tatoeba-test.eng-fkv.eng.fkv | 1.3 | 0.187 | | Tatoeba-test.eng-hun.eng.hun | 35.2 | 0.589 | | Tatoeba-test.eng-izh.eng.izh | 6.0 | 0.163 | | Tatoeba-test.eng-kom.eng.kom | 3.4 | 0.012 | | Tatoeba-test.eng-krl.eng.krl | 6.4 | 0.202 | | Tatoeba-test.eng-liv.eng.liv | 1.6 | 0.102 | | Tatoeba-test.eng-mdf.eng.mdf | 3.7 | 0.008 | | Tatoeba-test.eng.multi | 35.4 | 0.590 | | Tatoeba-test.eng-myv.eng.myv | 1.4 | 0.014 | | Tatoeba-test.eng-sma.eng.sma | 2.6 | 0.097 | | Tatoeba-test.eng-sme.eng.sme | 7.3 | 0.221 | | Tatoeba-test.eng-udm.eng.udm | 1.4 | 0.079 | ### System Info: - hf_name: eng-fiu - source_languages: eng - target_languages: fiu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu'] - src_constituents: {'eng'} - tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: fiu - short_pair: en-fiu - chrF2_score: 0.59 - bleu: 35.4 - brevity_penalty: 0.9440000000000001 - ref_len: 59311.0 - src_name: English - tgt_name: Finno-Ugrian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: fiu - prefer_old: False - long_pair: eng-fiu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - 
transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-euq
Helsinki-NLP
2023-08-16T11:29:31Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "euq", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - euq tags: - translation license: apache-2.0 --- ### eng-euq * source group: English * target group: Basque (family) * OPUS readme: [eng-euq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md) * model: transformer * source language(s): eng * target language(s): eus * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.eus | 27.9 | 0.555 | | Tatoeba-test.eng-eus.eng.eus | 27.9 | 0.555 | ### System Info: - hf_name: eng-euq - source_languages: eng - target_languages: euq - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'euq'] - src_constituents: {'eng'} - tgt_constituents: {'eus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: euq - short_pair: en-euq - chrF2_score: 0.555 - bleu: 27.9 - brevity_penalty: 0.917 - ref_len: 7080.0 - src_name: English - tgt_name: Basque (family) - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: euq - prefer_old: False - long_pair: eng-euq - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-eu
Helsinki-NLP
2023-08-16T11:29:30Z
1,220
4
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "eu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - eu tags: - translation license: apache-2.0 --- ### eng-eus * source group: English * target group: Basque * OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md) * model: transformer-align * source language(s): eng * target language(s): eus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.eus | 31.8 | 0.590 | ### System Info: - hf_name: eng-eus - source_languages: eng - target_languages: eus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'eu'] - src_constituents: {'eng'} - tgt_constituents: {'eus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: eus - short_pair: en-eu - chrF2_score: 0.59 - bleu: 31.8 - brevity_penalty: 0.9440000000000001 - ref_len: 7080.0 - src_name: English - tgt_name: Basque - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: eu - prefer_old: False - long_pair: eng-eus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Nextcloud-AI/opus-mt-en-es
Nextcloud-AI
2023-08-16T11:29:28Z
109
0
transformers
[ "transformers", "pytorch", "tf", "jax", "marian", "text2text-generation", "translation", "en", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:39:24Z
--- language: - en - es tags: - translation license: apache-2.0 --- ### eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 | | news-test2008-engspa.eng.spa | 29.7 | 0.564 | | newstest2009-engspa.eng.spa | 30.2 | 0.578 | | newstest2010-engspa.eng.spa | 36.9 | 0.620 | | newstest2011-engspa.eng.spa | 38.2 | 0.619 | | newstest2012-engspa.eng.spa | 39.0 | 0.625 | | newstest2013-engspa.eng.spa | 35.0 | 0.598 | | Tatoeba-test.eng.spa | 54.9 | 0.721 | ### System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
Helsinki-NLP/opus-mt-en-es
Helsinki-NLP
2023-08-16T11:29:28Z
169,264
104
transformers
[ "transformers", "pytorch", "tf", "jax", "marian", "text2text-generation", "translation", "en", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - en - es tags: - translation license: apache-2.0 --- ### eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 | | news-test2008-engspa.eng.spa | 29.7 | 0.564 | | newstest2009-engspa.eng.spa | 30.2 | 0.578 | | newstest2010-engspa.eng.spa | 36.9 | 0.620 | | newstest2011-engspa.eng.spa | 38.2 | 0.619 | | newstest2012-engspa.eng.spa | 39.0 | 0.625 | | newstest2013-engspa.eng.spa | 35.0 | 0.598 | | Tatoeba-test.eng.spa | 54.9 | 0.721 | ### System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
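The same model can also be driven through the high-level pipeline API; a minimal sketch under the usual transformers conventions, with an arbitrary input sentence and the pipeline's standard output format.

```python
# Sketch using the high-level transformers pipeline instead of loading the
# Marian classes by hand.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
print(translator("The weather is nice today.", max_length=64))
# -> [{'translation_text': '...'}]
```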
Helsinki-NLP/opus-mt-en-dra
Helsinki-NLP
2023-08-16T11:29:22Z
165
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ta", "kn", "ml", "te", "dra", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- en
- ta
- kn
- ml
- te
- dra
tags:
- translation
license: apache-2.0
---

### eng-dra

* source group: English
* target group: Dravidian languages
* OPUS readme: [eng-dra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md)

* model: transformer
* source language(s): eng
* target language(s): kan mal tam tel
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kan.eng.kan | 4.7 | 0.348 |
| Tatoeba-test.eng-mal.eng.mal | 13.1 | 0.515 |
| Tatoeba-test.eng.multi | 10.7 | 0.463 |
| Tatoeba-test.eng-tam.eng.tam | 9.0 | 0.444 |
| Tatoeba-test.eng-tel.eng.tel | 7.1 | 0.363 |

### System Info:
- hf_name: eng-dra
- source_languages: eng
- target_languages: dra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']
- src_constituents: {'eng'}
- tgt_constituents: {'tam', 'kan', 'mal', 'tel'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: dra
- short_pair: en-dra
- chrF2_score: 0.46299999999999997
- bleu: 10.7
- brevity_penalty: 1.0
- ref_len: 7928.0
- src_name: English
- tgt_name: Dravidian languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: dra
- prefer_old: False
- long_pair: eng-dra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
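Because this checkpoint has several target languages, the card above notes that each source sentence must start with a `>>id<<` token naming the target. A minimal sketch, assuming the 🤗 Transformers Marian classes and using target IDs from the list above (e.g. `tam` for Tamil, `mal` for Malayalam); the example sentences are arbitrary:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-dra"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix each source sentence with the sentence-initial target-language token.
src = [">>tam<< How are you today?", ">>mal<< How are you today?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```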
Helsinki-NLP/opus-mt-en-cy
Helsinki-NLP
2023-08-16T11:29:19Z
136
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "cy", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-cy

* source languages: en
* target languages: cy
* OPUS readme: [en-cy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cy/README.md)

* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.cy | 25.3 | 0.487 |
Helsinki-NLP/opus-mt-en-cs
Helsinki-NLP
2023-08-16T11:29:17Z
3,859
7
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "cs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-cs

* source languages: en
* target languages: cs
* OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md)

* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.cs | 22.8 | 0.507 |
| news-test2008.en.cs | 20.7 | 0.485 |
| newstest2009.en.cs | 21.8 | 0.500 |
| newstest2010.en.cs | 22.1 | 0.505 |
| newstest2011.en.cs | 23.2 | 0.507 |
| newstest2012.en.cs | 20.8 | 0.482 |
| newstest2013.en.cs | 24.7 | 0.514 |
| newstest2015-encs.en.cs | 24.9 | 0.527 |
| newstest2016-encs.en.cs | 26.7 | 0.540 |
| newstest2017-encs.en.cs | 22.7 | 0.503 |
| newstest2018-encs.en.cs | 22.9 | 0.504 |
| newstest2019-encs.en.cs | 24.9 | 0.518 |
| Tatoeba.en.cs | 46.1 | 0.647 |
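The BLEU and chr-F figures above are corpus-level scores over the linked test sets. As an illustration only (not necessarily the exact tooling used to produce this table), comparable scores can be computed with the `sacrebleu` package from lists of system outputs and references:

```python
import sacrebleu

# Placeholder hypotheses and references; the published scores use the newstest
# and Tatoeba test sets linked in the card above.
hypotheses = ["Dobrý den, světe.", "Toto je malý test."]
references = ["Dobrý den, světe.", "Toto je jen malý test."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# sacrebleu reports chrF on a 0-100 scale; the chr-F column above uses 0-1.
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.3f}")
```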
Helsinki-NLP/opus-mt-en-cpp
Helsinki-NLP
2023-08-16T11:29:15Z
109
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "id", "cpp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- en
- id
- cpp
tags:
- translation
license: apache-2.0
---

### eng-cpp

* source group: English
* target group: Creoles and pidgins, Portuguese-based
* OPUS readme: [eng-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md)

* model: transformer
* source language(s): eng
* target language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-msa.eng.msa | 32.6 | 0.573 |
| Tatoeba-test.eng.multi | 32.7 | 0.574 |
| Tatoeba-test.eng-pap.eng.pap | 42.5 | 0.633 |

### System Info:
- hf_name: eng-cpp
- source_languages: eng
- target_languages: cpp
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'id', 'cpp']
- src_constituents: {'eng'}
- tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cpp
- short_pair: en-cpp
- chrF2_score: 0.574
- bleu: 32.7
- brevity_penalty: 0.996
- ref_len: 34010.0
- src_name: English
- tgt_name: Creoles and pidgins, Portuguese-based
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cpp
- prefer_old: False
- long_pair: eng-cpp
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-chk
Helsinki-NLP
2023-08-16T11:29:13Z
133
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "chk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-chk

* source languages: en
* target languages: chk
* OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md)

* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.chk | 26.1 | 0.468 |
Helsinki-NLP/opus-mt-en-ca
Helsinki-NLP
2023-08-16T11:29:09Z
6,076
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-ca

* source languages: en
* target languages: ca
* OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md)

* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ca | 47.2 | 0.665 |
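Since the checkpoint carries the `translation` pipeline tag, it can also be loaded through the high-level `pipeline` API; a minimal sketch (the input sentence is arbitrary):

```python
from transformers import pipeline

# Load the checkpoint through the generic translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ca")
print(translator("Machine translation is useful.")[0]["translation_text"])
```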
Helsinki-NLP/opus-mt-en-ber
Helsinki-NLP
2023-08-16T11:29:04Z
123
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "ber", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-ber

* source languages: en
* target languages: ber
* OPUS readme: [en-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ber/README.md)

* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ber | 29.7 | 0.544 |
Helsinki-NLP/opus-mt-en-bcl
Helsinki-NLP
2023-08-16T11:29:01Z
224
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bcl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-bcl

* source languages: en
* target languages: bcl
* OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md)

* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bcl | 54.3 | 0.722 |
Helsinki-NLP/opus-mt-en-bat
Helsinki-NLP
2023-08-16T11:29:00Z
488
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "lt", "lv", "bat", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- en
- lt
- lv
- bat
tags:
- translation
license: apache-2.0
---

### eng-bat

* source group: English
* target group: Baltic languages
* OPUS readme: [eng-bat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md)

* model: transformer
* source language(s): eng
* target language(s): lav lit ltg prg_Latn sgs
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv-englav.eng.lav | 24.0 | 0.546 |
| newsdev2019-enlt-englit.eng.lit | 20.9 | 0.533 |
| newstest2017-enlv-englav.eng.lav | 18.3 | 0.506 |
| newstest2019-enlt-englit.eng.lit | 13.6 | 0.466 |
| Tatoeba-test.eng-lav.eng.lav | 42.8 | 0.652 |
| Tatoeba-test.eng-lit.eng.lit | 37.1 | 0.650 |
| Tatoeba-test.eng.multi | 37.0 | 0.616 |
| Tatoeba-test.eng-prg.eng.prg | 0.5 | 0.130 |
| Tatoeba-test.eng-sgs.eng.sgs | 4.1 | 0.178 |

### System Info:
- hf_name: eng-bat
- source_languages: eng
- target_languages: bat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'lt', 'lv', 'bat']
- src_constituents: {'eng'}
- tgt_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: bat
- short_pair: en-bat
- chrF2_score: 0.616
- bleu: 37.0
- brevity_penalty: 0.956
- ref_len: 26417.0
- src_name: English
- tgt_name: Baltic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: bat
- prefer_old: False
- long_pair: eng-bat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41