Dataset schema (one record per model; ranges give the observed min and max): modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-11 00:42:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 553 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-11 00:42:38) | card (string, 11 chars to 1.01M chars)
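The `card` column stores the raw model-card markdown, which begins with a YAML front-matter block delimited by `---` (visible at the top of every entry below). A minimal sketch, assuming PyYAML is installed, of splitting a card into its metadata and body; the sample string is an illustrative stand-in for one `card` value:

```python
# Minimal sketch: split a raw model card into YAML front matter and markdown body.
# The raw_card string is an illustrative stand-in for one value of the "card" column.
import yaml  # PyYAML

raw_card = """---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ty
* source languages: es
"""

_, front_matter, body = raw_card.split("---", 2)
metadata = yaml.safe_load(front_matter)  # {'tags': ['translation'], 'license': 'apache-2.0'}
print(metadata["license"])               # -> apache-2.0
print(body.strip().splitlines()[0])      # -> ### opus-mt-es-ty
```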
Helsinki-NLP/opus-mt-es-ty | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:44Z | downloads: 108 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ty, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ty
* source languages: es
* target languages: ty
* OPUS readme: [es-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ty | 37.3 | 0.544 |
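The card above stops at the benchmark table; as a minimal usage sketch (not part of the original card), the checkpoint can be loaded through the MarianMT classes in transformers. The example sentence and variable names are illustrative.

```python
# Minimal usage sketch for Helsinki-NLP/opus-mt-es-ty (assumed typical MarianMT usage,
# not taken from the original model card).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-ty"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a Spanish sentence and generate the Tahitian translation.
batch = tokenizer(["Hola, ¿cómo estás?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```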
Helsinki-NLP/opus-mt-es-tvl | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:41Z | downloads: 111 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, tvl, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tvl
* source languages: es
* target languages: tvl
* OPUS readme: [es-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tvl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tvl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tvl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tvl | 28.3 | 0.464 |
Helsinki-NLP/opus-mt-es-tpi | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:40Z | downloads: 112 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, tpi, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tpi
* source languages: es
* target languages: tpi
* OPUS readme: [es-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tpi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tpi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tpi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tpi | 27.0 | 0.472 |
Helsinki-NLP/opus-mt-es-tn | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:38Z | downloads: 110 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, tn, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tn
* source languages: es
* target languages: tn
* OPUS readme: [es-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tn | 32.2 | 0.528 |
Helsinki-NLP/opus-mt-es-tll | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:37Z | downloads: 109 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, tll, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tll
* source languages: es
* target languages: tll
* OPUS readme: [es-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tll | 20.7 | 0.434 |
Helsinki-NLP/opus-mt-es-tl | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:35Z | downloads: 110 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, tl, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- tl
tags:
- translation
license: apache-2.0
---
### spa-tgl
* source group: Spanish
* target group: Tagalog
* OPUS readme: [spa-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-tgl/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): tgl_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.tgl | 24.7 | 0.538 |
### System Info:
- hf_name: spa-tgl
- source_languages: spa
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'tl']
- src_constituents: {'spa'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: tgl
- short_pair: es-tl
- chrF2_score: 0.5379999999999999
- bleu: 24.7
- brevity_penalty: 1.0
- ref_len: 4422.0
- src_name: Spanish
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: tl
- prefer_old: False
- long_pair: spa-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-es-srn | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:32Z | downloads: 116 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, srn, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-srn
* source languages: es
* target languages: srn
* OPUS readme: [es-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.srn | 28.7 | 0.487 |
Helsinki-NLP/opus-mt-es-sg | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:28Z | downloads: 108 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, sg, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-sg
* source languages: es
* target languages: sg
* OPUS readme: [es-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-sg/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sg/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sg/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.sg | 24.8 | 0.435 |
Helsinki-NLP/opus-mt-es-ru | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:26Z | downloads: 618 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ru, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ru
* source languages: es
* target languages: ru
* OPUS readme: [es-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.es.ru | 20.9 | 0.489 |
| newstest2013.es.ru | 23.4 | 0.504 |
| Tatoeba.es.ru | 47.0 | 0.657 |
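The BLEU and chr-F columns in these tables correspond to the linked `.eval.txt` files. A hedged sketch of recomputing such corpus-level scores with sacrebleu, assuming the system outputs and references have already been extracted into two plain-text files with one segment per line (the file names are placeholders):

```python
# Hedged sketch: recompute corpus-level BLEU and chr-F with sacrebleu.
# Assumes hyps.ru and refs.ru are plain-text files, one segment per line;
# the layout of the linked *.test.txt files may differ.
import sacrebleu

with open("hyps.ru", encoding="utf-8") as f:
    hyps = [line.rstrip("\n") for line in f]
with open("refs.ru", encoding="utf-8") as f:
    refs = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU = {bleu.score:.1f}")
print(f"chr-F = {chrf.score:.3f}")  # note: 0-1 vs 0-100 scale depends on the sacrebleu version
```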
Helsinki-NLP/opus-mt-es-ro | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:24Z | downloads: 124 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ro, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ro
* source languages: es
* target languages: ro
* OPUS readme: [es-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ro | 45.7 | 0.666 |
Helsinki-NLP/opus-mt-es-pon | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:21Z | downloads: 113 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, pon, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pon
* source languages: es
* target languages: pon
* OPUS readme: [es-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pon | 21.6 | 0.448 |
Helsinki-NLP/opus-mt-es-pl | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:20Z | downloads: 347 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, pl, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pl
* source languages: es
* target languages: pl
* OPUS readme: [es-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.pl | 44.6 | 0.649 |
Helsinki-NLP/opus-mt-es-pag | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:16Z | downloads: 118 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, pag, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pag
* source languages: es
* target languages: pag
* OPUS readme: [es-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pag/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pag/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pag/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pag | 25.3 | 0.478 |
Helsinki-NLP/opus-mt-es-no | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:13Z | downloads: 160 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, no, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- no
tags:
- translation
license: apache-2.0
---
### spa-nor
* source group: Spanish
* target group: Norwegian
* OPUS readme: [spa-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-nor/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch below
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.eval.txt)
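A short sketch (not from the original card) of the language-token requirement noted above: the target-language ID, `>>nob<<` for Bokmål or `>>nno<<` for Nynorsk, is prepended to the source sentence before tokenization.

```python
# Sketch of prepending the required >>id<< target-language token for this
# multi-target model (assumed typical MarianMT usage, not from the card).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# ">>nob<<" selects Bokmål; ">>nno<<" would select Nynorsk.
src = ">>nob<< El libro está sobre la mesa."
batch = tokenizer([src], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```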
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.nor | 36.7 | 0.565 |
### System Info:
- hf_name: spa-nor
- source_languages: spa
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'no']
- src_constituents: {'spa'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: nor
- short_pair: es-no
- chrF2_score: 0.565
- bleu: 36.7
- brevity_penalty: 0.99
- ref_len: 7217.0
- src_name: Spanish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: no
- prefer_old: False
- long_pair: spa-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-es-nl | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:12Z | downloads: 194 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, nl, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-nl
* source languages: es
* target languages: nl
* OPUS readme: [es-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.nl | 50.6 | 0.681 |
Helsinki-NLP/opus-mt-es-niu | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:10Z | downloads: 161 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, niu, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-niu
* source languages: es
* target languages: niu
* OPUS readme: [es-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.niu | 29.9 | 0.506 |
Helsinki-NLP/opus-mt-es-mt | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:09Z | downloads: 109 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, mt, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-mt
* source languages: es
* target languages: mt
* OPUS readme: [es-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.mt | 28.1 | 0.460 |
Helsinki-NLP/opus-mt-es-mk | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:08Z | downloads: 125 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, mk, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- mk
tags:
- translation
license: apache-2.0
---
### spa-mkd
* source group: Spanish
* target group: Macedonian
* OPUS readme: [spa-mkd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-mkd/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): mkd
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.mkd | 48.2 | 0.681 |
### System Info:
- hf_name: spa-mkd
- source_languages: spa
- target_languages: mkd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-mkd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'mk']
- src_constituents: {'spa'}
- tgt_constituents: {'mkd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: mkd
- short_pair: es-mk
- chrF2_score: 0.6809999999999999
- bleu: 48.2
- brevity_penalty: 1.0
- ref_len: 1073.0
- src_name: Spanish
- tgt_name: Macedonian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: mk
- prefer_old: False
- long_pair: spa-mkd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-es-mfs | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:07Z | downloads: 112 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, mfs, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-mfs
* source languages: es
* target languages: mfs
* OPUS readme: [es-mfs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-mfs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.mfs | 88.6 | 0.907 |
Helsinki-NLP/opus-mt-es-lus | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:06Z | downloads: 121 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, lus, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-lus
* source languages: es
* target languages: lus
* OPUS readme: [es-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.lus | 20.9 | 0.414 |
Helsinki-NLP/opus-mt-es-lua | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:05Z | downloads: 118 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, lua, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-lua
* source languages: es
* target languages: lua
* OPUS readme: [es-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.lua | 23.4 | 0.473 |
Helsinki-NLP/opus-mt-es-lt | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:04Z | downloads: 116 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, lt, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- lt
tags:
- translation
license: apache-2.0
---
### spa-lit
* source group: Spanish
* target group: Lithuanian
* OPUS readme: [spa-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-lit/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): lit
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.lit | 40.2 | 0.643 |
### System Info:
- hf_name: spa-lit
- source_languages: spa
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'lt']
- src_constituents: {'spa'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: lit
- short_pair: es-lt
- chrF2_score: 0.643
- bleu: 40.2
- brevity_penalty: 0.956
- ref_len: 2341.0
- src_name: Spanish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: lt
- prefer_old: False
- long_pair: spa-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
jkhan447/results | author: jkhan447 | last_modified: 2023-08-16T11:33:04Z | downloads: 105 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, gpt2, text-classification, generated_from_trainer, base_model:openai-community/gpt2-medium, base_model:finetune:openai-community/gpt2-medium, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: text-classification | createdAt: 2023-08-14T07:55:18Z
---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5570
- Accuracy: 0.7508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
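A hedged sketch of how the values above map onto transformers' `TrainingArguments`; this is not the original training script, and `output_dir` and the 50-step evaluation interval (inferred from the results table below) are assumptions.

```python
# Hedged sketch: the reported hyperparameters expressed as TrainingArguments.
# Not the original training script; output_dir and eval_steps are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="steps",  # the results table logs validation every 50 steps
    eval_steps=50,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default optimizer.
```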
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6473 | 0.04 | 50 | 0.5683 | 0.7454 |
| 0.6367 | 0.07 | 100 | 0.5670 | 0.7525 |
| 0.6016 | 0.11 | 150 | 0.5676 | 0.7508 |
| 0.6014 | 0.14 | 200 | 0.5498 | 0.75 |
| 0.5801 | 0.18 | 250 | 0.5446 | 0.75 |
| 0.4534 | 0.21 | 300 | 0.5383 | 0.7512 |
| 0.669 | 0.25 | 350 | 0.5700 | 0.75 |
| 0.5556 | 0.29 | 400 | 0.5536 | 0.7496 |
| 0.5652 | 0.32 | 450 | 0.6341 | 0.75 |
| 0.5801 | 0.36 | 500 | 0.5416 | 0.7454 |
| 0.6476 | 0.39 | 550 | 0.5319 | 0.7508 |
| 0.5473 | 0.43 | 600 | 0.5422 | 0.7492 |
| 0.5094 | 0.46 | 650 | 0.5532 | 0.7504 |
| 0.5656 | 0.5 | 700 | 0.5375 | 0.7504 |
| 0.532 | 0.54 | 750 | 0.5617 | 0.7137 |
| 0.5738 | 0.57 | 800 | 0.5501 | 0.7521 |
| 0.544 | 0.61 | 850 | 0.5449 | 0.7538 |
| 0.5271 | 0.64 | 900 | 0.5682 | 0.7496 |
| 0.9725 | 0.68 | 950 | 0.7980 | 0.4921 |
| 0.5955 | 0.71 | 1000 | 0.5220 | 0.7538 |
| 0.5588 | 0.75 | 1050 | 0.5247 | 0.75 |
| 0.612 | 0.79 | 1100 | 0.5183 | 0.7483 |
| 0.6124 | 0.82 | 1150 | 0.5260 | 0.7542 |
| 0.421 | 0.86 | 1200 | 0.5509 | 0.7508 |
| 0.4582 | 0.89 | 1250 | 0.5249 | 0.75 |
| 0.588 | 0.93 | 1300 | 0.5633 | 0.7267 |
| 0.549 | 0.96 | 1350 | 0.5179 | 0.7492 |
| 0.495 | 1.0 | 1400 | 0.5456 | 0.7512 |
| 0.435 | 1.04 | 1450 | 0.5596 | 0.7504 |
| 0.6061 | 1.07 | 1500 | 0.5421 | 0.7433 |
| 0.5542 | 1.11 | 1550 | 0.5117 | 0.7554 |
| 0.4277 | 1.14 | 1600 | 0.5291 | 0.7521 |
| 0.4415 | 1.18 | 1650 | 0.5354 | 0.7538 |
| 0.5029 | 1.21 | 1700 | 0.5084 | 0.7579 |
| 0.6079 | 1.25 | 1750 | 0.5798 | 0.7554 |
| 0.5692 | 1.29 | 1800 | 0.5003 | 0.755 |
| 0.5297 | 1.32 | 1850 | 0.5563 | 0.7588 |
| 0.6938 | 1.36 | 1900 | 0.5064 | 0.7529 |
| 0.5679 | 1.39 | 1950 | 0.5505 | 0.7508 |
| 0.4503 | 1.43 | 2000 | 0.5133 | 0.7554 |
| 0.519 | 1.46 | 2050 | 0.4946 | 0.7525 |
| 0.513 | 1.5 | 2100 | 0.5156 | 0.7283 |
| 0.5393 | 1.54 | 2150 | 0.5003 | 0.7546 |
| 0.6162 | 1.57 | 2200 | 0.4916 | 0.7625 |
| 0.5526 | 1.61 | 2250 | 0.4980 | 0.755 |
| 0.4472 | 1.64 | 2300 | 0.5001 | 0.76 |
| 0.5678 | 1.68 | 2350 | 0.4958 | 0.7558 |
| 0.3894 | 1.71 | 2400 | 0.4968 | 0.7646 |
| 0.4086 | 1.75 | 2450 | 0.5065 | 0.7583 |
| 0.4652 | 1.79 | 2500 | 0.5091 | 0.7567 |
| 0.4837 | 1.82 | 2550 | 0.5190 | 0.7312 |
| 0.4745 | 1.86 | 2600 | 0.4998 | 0.7567 |
| 0.456 | 1.89 | 2650 | 0.5035 | 0.7558 |
| 0.5784 | 1.93 | 2700 | 0.4997 | 0.7504 |
| 0.452 | 1.96 | 2750 | 0.5315 | 0.7517 |
| 0.5682 | 2.0 | 2800 | 0.5827 | 0.7521 |
| 0.6134 | 2.04 | 2850 | 0.4944 | 0.7421 |
| 0.3451 | 2.07 | 2900 | 0.5505 | 0.7575 |
| 0.3682 | 2.11 | 2950 | 0.5122 | 0.7504 |
| 0.3737 | 2.14 | 3000 | 0.8033 | 0.7546 |
| 0.4899 | 2.18 | 3050 | 0.5645 | 0.7446 |
| 0.4885 | 2.21 | 3100 | 0.5229 | 0.7554 |
| 0.4121 | 2.25 | 3150 | 0.5172 | 0.7425 |
| 0.3926 | 2.29 | 3200 | 0.5685 | 0.7512 |
| 0.4242 | 2.32 | 3250 | 0.5380 | 0.7425 |
| 0.4133 | 2.36 | 3300 | 0.5996 | 0.7488 |
| 0.4322 | 2.39 | 3350 | 0.5769 | 0.7533 |
| 0.4561 | 2.43 | 3400 | 0.5525 | 0.7583 |
| 0.2765 | 2.46 | 3450 | 0.5399 | 0.7546 |
| 0.4422 | 2.5 | 3500 | 0.5782 | 0.7554 |
| 0.4343 | 2.54 | 3550 | 0.5325 | 0.7338 |
| 0.3551 | 2.57 | 3600 | 0.5518 | 0.7504 |
| 0.4058 | 2.61 | 3650 | 0.5585 | 0.7579 |
| 0.4838 | 2.64 | 3700 | 0.5433 | 0.7379 |
| 0.3821 | 2.68 | 3750 | 0.5244 | 0.7562 |
| 0.4906 | 2.71 | 3800 | 0.5202 | 0.7525 |
| 0.3046 | 2.75 | 3850 | 0.5430 | 0.7575 |
| 0.4317 | 2.79 | 3900 | 0.5369 | 0.7546 |
| 0.5641 | 2.82 | 3950 | 0.5406 | 0.7546 |
| 0.4866 | 2.86 | 4000 | 0.5454 | 0.7546 |
| 0.3687 | 2.89 | 4050 | 0.5450 | 0.7558 |
| 0.484 | 2.93 | 4100 | 0.5456 | 0.7521 |
| 0.2599 | 2.96 | 4150 | 0.5472 | 0.7533 |
| 0.3381 | 3.0 | 4200 | 0.5461 | 0.7508 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
Helsinki-NLP/opus-mt-es-loz | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:02Z | downloads: 108 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, loz, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-loz
* source languages: es
* target languages: loz
* OPUS readme: [es-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.loz | 28.6 | 0.493 |
Helsinki-NLP/opus-mt-es-ln | author: Helsinki-NLP | last_modified: 2023-08-16T11:33:01Z | downloads: 112 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ln, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ln
* source languages: es
* target languages: ln
* OPUS readme: [es-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ln | 27.1 | 0.508 |
Helsinki-NLP/opus-mt-es-iso | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:58Z | downloads: 115 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, iso, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-iso
* source languages: es
* target languages: iso
* OPUS readme: [es-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.iso | 22.4 | 0.396 |
Helsinki-NLP/opus-mt-es-is | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:57Z | downloads: 119 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, is, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- is
tags:
- translation
license: apache-2.0
---
### spa-isl
* source group: Spanish
* target group: Icelandic
* OPUS readme: [spa-isl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-isl/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): isl
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.isl | 27.1 | 0.528 |
### System Info:
- hf_name: spa-isl
- source_languages: spa
- target_languages: isl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-isl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'is']
- src_constituents: {'spa'}
- tgt_constituents: {'isl'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: isl
- short_pair: es-is
- chrF2_score: 0.528
- bleu: 27.1
- brevity_penalty: 1.0
- ref_len: 1220.0
- src_name: Spanish
- tgt_name: Icelandic
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: is
- prefer_old: False
- long_pair: spa-isl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-es-ilo | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:56Z | downloads: 140 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ilo, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ilo
* source languages: es
* target languages: ilo
* OPUS readme: [es-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ilo | 31.0 | 0.544 |
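Each of these cards links the original OPUS-MT weights as a zip archive on the CSC object store. A hedged sketch of fetching and unpacking one of them, using the es-ilo URL above; the local file and directory names are illustrative:

```python
# Hedged sketch: download and unpack an original OPUS-MT weight archive.
# URL taken from the es-ilo card above; the local paths are illustrative.
import zipfile
import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.zip"
archive = "opus-2020-01-16.zip"

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(archive, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)

with zipfile.ZipFile(archive) as zf:
    zf.extractall("opus-mt-es-ilo-original")  # Marian weights, vocab and SentencePiece models
```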
Helsinki-NLP/opus-mt-es-ig | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:55Z | downloads: 109 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ig, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ig
* source languages: es
* target languages: ig
* OPUS readme: [es-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ig | 27.0 | 0.434 |
Helsinki-NLP/opus-mt-es-ho | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:50Z | downloads: 114 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, ho, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ho
* source languages: es
* target languages: ho
* OPUS readme: [es-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ho/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ho/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ho/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ho | 22.8 | 0.463 |
Helsinki-NLP/opus-mt-es-hil | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:49Z | downloads: 114 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, hil, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-hil
* source languages: es
* target languages: hil
* OPUS readme: [es-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.hil | 35.8 | 0.584 |
Helsinki-NLP/opus-mt-es-he | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:48Z | downloads: 171 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, he, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- he
tags:
- translation
license: apache-2.0
---
### es-he
* source group: Spanish
* target group: Hebrew
* OPUS readme: [spa-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-heb/README.md)
* model: transformer
* source language(s): spa
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.heb | 43.6 | 0.636 |
### System Info:
- hf_name: es-he
- source_languages: spa
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'he']
- src_constituents: ('Spanish', {'spa'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: spa-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.test.txt
- src_alpha3: spa
- tgt_alpha3: heb
- chrF2_score: 0.636
- bleu: 43.6
- brevity_penalty: 0.992
- ref_len: 12112.0
- src_name: Spanish
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: es
- tgt_alpha2: he
- prefer_old: False
- short_pair: es-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-11:41
Helsinki-NLP/opus-mt-es-guw | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:46Z | downloads: 126 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, guw, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-guw
* source languages: es
* target languages: guw
* OPUS readme: [es-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.guw | 28.6 | 0.480 |
Helsinki-NLP/opus-mt-es-gl | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:45Z | downloads: 144 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, gl, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- gl
tags:
- translation
license: apache-2.0
---
### spa-glg
* source group: Spanish
* target group: Galician
* OPUS readme: [spa-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): glg
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.glg | 67.6 | 0.808 |
### System Info:
- hf_name: spa-glg
- source_languages: spa
- target_languages: glg
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'gl']
- src_constituents: {'spa'}
- tgt_constituents: {'glg'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt
- src_alpha3: spa
- tgt_alpha3: glg
- short_pair: es-gl
- chrF2_score: 0.8079999999999999
- bleu: 67.6
- brevity_penalty: 0.993
- ref_len: 16581.0
- src_name: Spanish
- tgt_name: Galician
- train_date: 2020-06-16
- src_alpha2: es
- tgt_alpha2: gl
- prefer_old: False
- long_pair: spa-glg
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Nextcloud-AI/opus-mt-es-fr | author: Nextcloud-AI | last_modified: 2023-08-16T11:32:42Z | downloads: 106 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2024-02-23T10:41:09Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-fr
* source languages: es
* target languages: fr
* OPUS readme: [es-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.es.fr | 33.6 | 0.610 |
| news-test2008.es.fr | 32.0 | 0.585 |
| newstest2009.es.fr | 32.5 | 0.590 |
| newstest2010.es.fr | 35.0 | 0.615 |
| newstest2011.es.fr | 33.9 | 0.607 |
| newstest2012.es.fr | 32.4 | 0.602 |
| newstest2013.es.fr | 32.1 | 0.593 |
| Tatoeba.es.fr | 58.4 | 0.731 |
Helsinki-NLP/opus-mt-es-fj | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:41Z | downloads: 114 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, fj, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-fj
* source languages: es
* target languages: fj
* OPUS readme: [es-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.fj | 24.8 | 0.472 |
Nextcloud-AI/opus-mt-es-fi | author: Nextcloud-AI | last_modified: 2023-08-16T11:32:40Z | downloads: 103 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2024-02-23T10:40:56Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-fi
* source languages: es
* target languages: fi
* OPUS readme: [es-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.fi | 44.4 | 0.672 |
Helsinki-NLP/opus-mt-es-eo | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:35Z | downloads: 115 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, eo, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-eo
* source languages: es
* target languages: eo
* OPUS readme: [es-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.eo | 44.7 | 0.657 |
Helsinki-NLP/opus-mt-es-en | author: Helsinki-NLP | last_modified: 2023-08-16T11:32:34Z | downloads: 2,014,228 | likes: 68 | library_name: transformers | tags: [transformers, pytorch, tf, marian, text2text-generation, translation, es, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: translation | createdAt: 2022-03-02T23:29:04Z
---
language:
- es
- en
tags:
- translation
license: apache-2.0
---
### spa-eng
* source group: Spanish
* target group: English
* OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md)
* model: transformer
* source language(s): spa
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 |
| news-test2008-spaeng.spa.eng | 27.9 | 0.553 |
| newstest2009-spaeng.spa.eng | 30.4 | 0.572 |
| newstest2010-spaeng.spa.eng | 36.1 | 0.614 |
| newstest2011-spaeng.spa.eng | 34.2 | 0.599 |
| newstest2012-spaeng.spa.eng | 37.9 | 0.624 |
| newstest2013-spaeng.spa.eng | 35.3 | 0.609 |
| Tatoeba-test.spa.eng | 59.6 | 0.739 |
### System Info:
- hf_name: spa-eng
- source_languages: spa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'en']
- src_constituents: {'spa'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt
- src_alpha3: spa
- tgt_alpha3: eng
- short_pair: es-en
- chrF2_score: 0.7390000000000001
- bleu: 59.6
- brevity_penalty: 0.9740000000000001
- ref_len: 79376.0
- src_name: Spanish
- tgt_name: English
- train_date: 2020-08-18 00:00:00
- src_alpha2: es
- tgt_alpha2: en
- prefer_old: False
- long_pair: spa-eng
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20
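The checkpoint can also be driven directly with the Marian classes in Hugging Face Transformers. A minimal sketch, with illustrative Spanish example sentences:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a batch of Spanish sentences and generate English translations.
batch = tokenizer(
    ["La casa es azul.", "¿Dónde está la estación?"],
    return_tensors="pt",
    padding=True,
)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```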
|
Helsinki-NLP/opus-mt-es-el
|
Helsinki-NLP
| 2023-08-16T11:32:33Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-el
* source languages: es
* target languages: el
* OPUS readme: [es-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.el | 48.6 | 0.661 |
|
Nextcloud-AI/opus-mt-es-de
|
Nextcloud-AI
| 2023-08-16T11:32:29Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-23T10:40:41Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-de
* source languages: es
* target languages: de
* OPUS readme: [es-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.de | 50.0 | 0.683 |
|
Helsinki-NLP/opus-mt-es-de
|
Helsinki-NLP
| 2023-08-16T11:32:29Z | 26,522 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-de
* source languages: es
* target languages: de
* OPUS readme: [es-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.de | 50.0 | 0.683 |
|
Helsinki-NLP/opus-mt-es-da
|
Helsinki-NLP
| 2023-08-16T11:32:28Z | 172 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-da
* source languages: es
* target languages: da
* OPUS readme: [es-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.da | 55.7 | 0.712 |
|
Helsinki-NLP/opus-mt-es-csn
|
Helsinki-NLP
| 2023-08-16T11:32:27Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"csn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-csn
* source languages: es
* target languages: csn
* OPUS readme: [es-csn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-csn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.csn | 87.8 | 0.901 |
|
Helsinki-NLP/opus-mt-es-ceb
|
Helsinki-NLP
| 2023-08-16T11:32:22Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ceb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ceb
* source languages: es
* target languages: ceb
* OPUS readme: [es-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ceb | 33.9 | 0.564 |
|
Helsinki-NLP/opus-mt-es-ca
|
Helsinki-NLP
| 2023-08-16T11:32:21Z | 487 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- es
- ca
tags:
- translation
license: apache-2.0
---
### spa-cat
* source group: Spanish
* target group: Catalan
* OPUS readme: [spa-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.cat | 68.9 | 0.832 |
### System Info:
- hf_name: spa-cat
- source_languages: spa
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'ca']
- src_constituents: {'spa'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: cat
- short_pair: es-ca
- chrF2_score: 0.8320000000000001
- bleu: 68.9
- brevity_penalty: 1.0
- ref_len: 12343.0
- src_name: Spanish
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: ca
- prefer_old: False
- long_pair: spa-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-bzs
|
Helsinki-NLP
| 2023-08-16T11:32:20Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"bzs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-bzs
* source languages: es
* target languages: bzs
* OPUS readme: [es-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bzs | 26.4 | 0.451 |
|
Helsinki-NLP/opus-mt-es-ber
|
Helsinki-NLP
| 2023-08-16T11:32:17Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ber",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ber
* source languages: es
* target languages: ber
* OPUS readme: [es-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ber | 21.8 | 0.444 |
|
Helsinki-NLP/opus-mt-es-NORWAY
|
Helsinki-NLP
| 2023-08-16T11:32:10Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-NORWAY
* source languages: es
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [es-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after the benchmarks table
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.no | 31.6 | 0.523 |
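A minimal sketch of how the sentence-initial token can be supplied, assuming `>>nb<<` is one of the valid target-language IDs implied by the list above; the example sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-NORWAY"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading ">>nb<<" token requests Norwegian Bokmål output (assumed valid per the target list above).
batch = tokenizer([">>nb<< La reunión empieza a las nueve."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```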
|
Helsinki-NLP/opus-mt-eo-sv
|
Helsinki-NLP
| 2023-08-16T11:32:09Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- sv
tags:
- translation
license: apache-2.0
---
### epo-swe
* source group: Esperanto
* target group: Swedish
* OPUS readme: [epo-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.swe | 29.5 | 0.463 |
### System Info:
- hf_name: epo-swe
- source_languages: epo
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'sv']
- src_constituents: {'epo'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: swe
- short_pair: eo-sv
- chrF2_score: 0.46299999999999997
- bleu: 29.5
- brevity_penalty: 0.9640000000000001
- ref_len: 10977.0
- src_name: Esperanto
- tgt_name: Swedish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: sv
- prefer_old: False
- long_pair: epo-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
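For readers decoding the System Info fields: `brevity_penalty` corresponds to the standard BLEU brevity penalty, computed from the candidate length c and the reference length r (`ref_len`):

```latex
BP =
\begin{cases}
1, & c > r \\
\exp\!\left(1 - \dfrac{r}{c}\right), & c \le r
\end{cases}
```

So a BP of 0.964 with r = 10977 implies a system output of roughly r / (1 - ln BP) ≈ 10,590 tokens, i.e. slightly shorter than the reference; the reported BLEU already includes this penalty.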
|
Helsinki-NLP/opus-mt-eo-ru
|
Helsinki-NLP
| 2023-08-16T11:32:07Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- ru
tags:
- translation
license: apache-2.0
---
### epo-rus
* source group: Esperanto
* target group: Russian
* OPUS readme: [epo-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.rus | 17.7 | 0.379 |
### System Info:
- hf_name: epo-rus
- source_languages: epo
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ru']
- src_constituents: {'epo'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: rus
- short_pair: eo-ru
- chrF2_score: 0.379
- bleu: 17.7
- brevity_penalty: 0.9179999999999999
- ref_len: 71288.0
- src_name: Esperanto
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ru
- prefer_old: False
- long_pair: epo-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-pt
|
Helsinki-NLP
| 2023-08-16T11:32:04Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- pt
tags:
- translation
license: apache-2.0
---
### epo-por
* source group: Esperanto
* target group: Portuguese
* OPUS readme: [epo-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.por | 20.2 | 0.438 |
### System Info:
- hf_name: epo-por
- source_languages: epo
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'pt']
- src_constituents: {'epo'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: por
- short_pair: eo-pt
- chrF2_score: 0.43799999999999994
- bleu: 20.2
- brevity_penalty: 0.895
- ref_len: 89991.0
- src_name: Esperanto
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: pt
- prefer_old: False
- long_pair: epo-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-pl
|
Helsinki-NLP
| 2023-08-16T11:32:03Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"pl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- pl
tags:
- translation
license: apache-2.0
---
### epo-pol
* source group: Esperanto
* target group: Polish
* OPUS readme: [epo-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-pol/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): pol
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.pol | 17.2 | 0.392 |
### System Info:
- hf_name: epo-pol
- source_languages: epo
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'pl']
- src_constituents: {'epo'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: pol
- short_pair: eo-pl
- chrF2_score: 0.392
- bleu: 17.2
- brevity_penalty: 0.893
- ref_len: 15343.0
- src_name: Esperanto
- tgt_name: Polish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: pl
- prefer_old: False
- long_pair: epo-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-it
|
Helsinki-NLP
| 2023-08-16T11:32:01Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- it
tags:
- translation
license: apache-2.0
---
### epo-ita
* source group: Esperanto
* target group: Italian
* OPUS readme: [epo-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ita | 23.8 | 0.465 |
### System Info:
- hf_name: epo-ita
- source_languages: epo
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'it']
- src_constituents: {'epo'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ita
- short_pair: eo-it
- chrF2_score: 0.465
- bleu: 23.8
- brevity_penalty: 0.9420000000000001
- ref_len: 67118.0
- src_name: Esperanto
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: it
- prefer_old: False
- long_pair: epo-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-he
|
Helsinki-NLP
| 2023-08-16T11:31:58Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- he
tags:
- translation
license: apache-2.0
---
### epo-heb
* source group: Esperanto
* target group: Hebrew
* OPUS readme: [epo-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-heb/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.heb | 11.5 | 0.306 |
### System Info:
- hf_name: epo-heb
- source_languages: epo
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'he']
- src_constituents: {'epo'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: heb
- short_pair: eo-he
- chrF2_score: 0.306
- bleu: 11.5
- brevity_penalty: 0.943
- ref_len: 65645.0
- src_name: Esperanto
- tgt_name: Hebrew
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: he
- prefer_old: False
- long_pair: epo-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-fi
|
Helsinki-NLP
| 2023-08-16T11:31:56Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- fi
tags:
- translation
license: apache-2.0
---
### epo-fin
* source group: Esperanto
* target group: Finnish
* OPUS readme: [epo-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-fin/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.fin | 15.9 | 0.371 |
### System Info:
- hf_name: epo-fin
- source_languages: epo
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'fi']
- src_constituents: {'epo'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: fin
- short_pair: eo-fi
- chrF2_score: 0.371
- bleu: 15.9
- brevity_penalty: 0.894
- ref_len: 15881.0
- src_name: Esperanto
- tgt_name: Finnish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: fi
- prefer_old: False
- long_pair: epo-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-es
|
Helsinki-NLP
| 2023-08-16T11:31:55Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-eo-es
* source languages: eo
* target languages: es
* OPUS readme: [eo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.es | 44.2 | 0.631 |
|
Helsinki-NLP/opus-mt-eo-el
|
Helsinki-NLP
| 2023-08-16T11:31:53Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- el
tags:
- translation
license: apache-2.0
---
### epo-ell
* source group: Esperanto
* target group: Modern Greek (1453-)
* OPUS readme: [epo-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ell | 23.2 | 0.438 |
### System Info:
- hf_name: epo-ell
- source_languages: epo
- target_languages: ell
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'el']
- src_constituents: {'epo'}
- tgt_constituents: {'ell'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ell
- short_pair: eo-el
- chrF2_score: 0.43799999999999994
- bleu: 23.2
- brevity_penalty: 0.9159999999999999
- ref_len: 3892.0
- src_name: Esperanto
- tgt_name: Modern Greek (1453-)
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: el
- prefer_old: False
- long_pair: epo-ell
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-de
|
Helsinki-NLP
| 2023-08-16T11:31:52Z | 138 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-eo-de
* source languages: eo
* target languages: de
* OPUS readme: [eo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.de | 45.5 | 0.644 |
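The BLEU and chr-F columns in these tables can be recomputed with the `sacrebleu` package once the test-set translations linked above have been generated. A minimal scoring sketch with placeholder strings; note that sacrebleu prints chrF on a 0–100 scale, whereas these tables use 0–1:

```python
from sacrebleu.metrics import BLEU, CHRF

# Hypotheses would come from the model, references from the released test set;
# the two strings below are placeholders only.
hypotheses = ["Das Haus ist blau."]
references = [["Das Haus ist blau."]]  # one reference stream

print(BLEU().corpus_score(hypotheses, references))
print(CHRF().corpus_score(hypotheses, references))
```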
|
Helsinki-NLP/opus-mt-eo-da
|
Helsinki-NLP
| 2023-08-16T11:31:50Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- da
tags:
- translation
license: apache-2.0
---
### epo-dan
* source group: Esperanto
* target group: Danish
* OPUS readme: [epo-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.dan | 21.6 | 0.407 |
### System Info:
- hf_name: epo-dan
- source_languages: epo
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'da']
- src_constituents: {'epo'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: dan
- short_pair: eo-da
- chrF2_score: 0.40700000000000003
- bleu: 21.6
- brevity_penalty: 0.9359999999999999
- ref_len: 72349.0
- src_name: Esperanto
- tgt_name: Danish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: da
- prefer_old: False
- long_pair: epo-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-cs
|
Helsinki-NLP
| 2023-08-16T11:31:49Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- cs
tags:
- translation
license: apache-2.0
---
### epo-ces
* source group: Esperanto
* target group: Czech
* OPUS readme: [epo-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ces
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ces | 17.5 | 0.376 |
### System Info:
- hf_name: epo-ces
- source_languages: epo
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'cs']
- src_constituents: {'epo'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ces
- short_pair: eo-cs
- chrF2_score: 0.376
- bleu: 17.5
- brevity_penalty: 0.922
- ref_len: 22148.0
- src_name: Esperanto
- tgt_name: Czech
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: cs
- prefer_old: False
- long_pair: epo-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-bg
|
Helsinki-NLP
| 2023-08-16T11:31:48Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- bg
tags:
- translation
license: apache-2.0
---
### epo-bul
* source group: Esperanto
* target group: Bulgarian
* OPUS readme: [epo-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.bul | 19.0 | 0.395 |
### System Info:
- hf_name: epo-bul
- source_languages: epo
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'bg']
- src_constituents: {'epo'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: bul
- short_pair: eo-bg
- chrF2_score: 0.395
- bleu: 19.0
- brevity_penalty: 0.8909999999999999
- ref_len: 3961.0
- src_name: Esperanto
- tgt_name: Bulgarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: bg
- prefer_old: False
- long_pair: epo-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-af
|
Helsinki-NLP
| 2023-08-16T11:31:47Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- eo
- af
tags:
- translation
license: apache-2.0
---
### epo-afr
* source group: Esperanto
* target group: Afrikaans
* OPUS readme: [epo-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.afr | 19.5 | 0.369 |
### System Info:
- hf_name: epo-afr
- source_languages: epo
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'af']
- src_constituents: {'epo'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: afr
- short_pair: eo-af
- chrF2_score: 0.369
- bleu: 19.5
- brevity_penalty: 0.9570000000000001
- ref_len: 8432.0
- src_name: Esperanto
- tgt_name: Afrikaans
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: af
- prefer_old: False
- long_pair: epo-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-zh
|
Helsinki-NLP
| 2023-08-16T11:31:42Z | 461,656 | 350 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- zh
tags:
- translation
license: apache-2.0
---
### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch at the end of this card
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.zho | 31.4 | 0.268 |
### System Info:
- hf_name: eng-zho
- source_languages: eng
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'zh']
- src_constituents: {'eng'}
- tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt
- src_alpha3: eng
- tgt_alpha3: zho
- short_pair: en-zh
- chrF2_score: 0.268
- bleu: 31.4
- brevity_penalty: 0.8959999999999999
- ref_len: 110468.0
- src_name: English
- tgt_name: Chinese
- train_date: 2020-07-17
- src_alpha2: en
- tgt_alpha2: zh
- prefer_old: False
- long_pair: eng-zho
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
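Because one checkpoint covers several Chinese varieties, the sentence-initial token selects the output variety. A minimal sketch, assuming the IDs listed above (e.g. `>>cmn_Hans<<`, `>>yue_Hant<<`) are accepted as-is; the English sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Same English source, two target varieties selected by the leading language token.
batch = tokenizer(
    [">>cmn_Hans<< Machine translation is improving quickly.",
     ">>yue_Hant<< Machine translation is improving quickly."],
    return_tensors="pt",
    padding=True,
)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```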
|
Helsinki-NLP/opus-mt-en-xh
|
Helsinki-NLP
| 2023-08-16T11:31:41Z | 180 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"xh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-xh
* source languages: en
* target languages: xh
* OPUS readme: [en-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.xh | 37.9 | 0.652 |
|
Helsinki-NLP/opus-mt-en-vi
|
Helsinki-NLP
| 2023-08-16T11:31:40Z | 4,346 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- vi
tags:
- translation
license: apache-2.0
---
### eng-vie
* source group: English
* target group: Vietnamese
* OPUS readme: [eng-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): vie vie_Hani
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.vie | 37.2 | 0.542 |
### System Info:
- hf_name: eng-vie
- source_languages: eng
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi']
- src_constituents: {'eng'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: vie
- short_pair: en-vi
- chrF2_score: 0.542
- bleu: 37.2
- brevity_penalty: 0.973
- ref_len: 24427.0
- src_name: English
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: vi
- prefer_old: False
- long_pair: eng-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-ur
|
Helsinki-NLP
| 2023-08-16T11:31:38Z | 3,255 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ur",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- ur
tags:
- translation
license: apache-2.0
---
### eng-urd
* source group: English
* target group: Urdu
* OPUS readme: [eng-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): urd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.urd | 12.1 | 0.390 |
### System Info:
- hf_name: eng-urd
- source_languages: eng
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ur']
- src_constituents: {'eng'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: urd
- short_pair: en-ur
- chrF2_score: 0.39
- bleu: 12.1
- brevity_penalty: 1.0
- ref_len: 12155.0
- src_name: English
- tgt_name: Urdu
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ur
- prefer_old: False
- long_pair: eng-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-umb
|
Helsinki-NLP
| 2023-08-16T11:31:37Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"umb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-umb
* source languages: en
* target languages: umb
* OPUS readme: [en-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.umb | 28.6 | 0.510 |
|
Helsinki-NLP/opus-mt-en-ty
|
Helsinki-NLP
| 2023-08-16T11:31:34Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ty",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ty
* source languages: en
* target languages: ty
* OPUS readme: [en-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ty | 46.8 | 0.619 |
|
Helsinki-NLP/opus-mt-en-tw
|
Helsinki-NLP
| 2023-08-16T11:31:33Z | 171 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tw
* source languages: en
* target languages: tw
* OPUS readme: [en-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tw | 38.2 | 0.577 |
|
Helsinki-NLP/opus-mt-en-tvl
|
Helsinki-NLP
| 2023-08-16T11:31:32Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tvl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tvl
* source languages: en
* target languages: tvl
* OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tvl | 46.9 | 0.625 |
|
Helsinki-NLP/opus-mt-en-tut
|
Helsinki-NLP
| 2023-08-16T11:31:31Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tut",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- tut
tags:
- translation
license: apache-2.0
---
### eng-tut
* source group: English
* target group: Altaic languages
* OPUS readme: [eng-tut](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum mon nog ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn xal
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.4 | 0.438 |
| newstest2016-entr-engtur.eng.tur | 9.1 | 0.414 |
| newstest2017-entr-engtur.eng.tur | 9.5 | 0.414 |
| newstest2018-entr-engtur.eng.tur | 9.5 | 0.415 |
| Tatoeba-test.eng-aze.eng.aze | 27.2 | 0.580 |
| Tatoeba-test.eng-bak.eng.bak | 5.8 | 0.298 |
| Tatoeba-test.eng-chv.eng.chv | 4.6 | 0.301 |
| Tatoeba-test.eng-crh.eng.crh | 6.5 | 0.342 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.8 | 0.360 |
| Tatoeba-test.eng-kir.eng.kir | 24.6 | 0.499 |
| Tatoeba-test.eng-kjh.eng.kjh | 2.2 | 0.052 |
| Tatoeba-test.eng-kum.eng.kum | 8.0 | 0.229 |
| Tatoeba-test.eng-mon.eng.mon | 10.3 | 0.362 |
| Tatoeba-test.eng.multi | 19.5 | 0.451 |
| Tatoeba-test.eng-nog.eng.nog | 1.5 | 0.117 |
| Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.035 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.080 |
| Tatoeba-test.eng-tat.eng.tat | 10.8 | 0.320 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.6 | 0.323 |
| Tatoeba-test.eng-tur.eng.tur | 34.2 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 8.1 | 0.192 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.158 |
| Tatoeba-test.eng-uzb.eng.uzb | 4.2 | 0.298 |
| Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.061 |
### System Info:
- hf_name: eng-tut
- source_languages: eng
- target_languages: tut
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tut']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: tut
- short_pair: en-tut
- chrF2_score: 0.451
- bleu: 19.5
- brevity_penalty: 1.0
- ref_len: 57472.0
- src_name: English
- tgt_name: Altaic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: tut
- prefer_old: False
- long_pair: eng-tut
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
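## Example usage (sketch)
Because this is a multilingual model, the target language is selected by prepending the `>>id<<` token described above to each source sentence. A minimal sketch, assuming the Hub ID `Helsinki-NLP/opus-mt-en-tut`:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-tut"  # assumed Hub ID for the eng-tut checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>id<< token picks the target language (see the target language list above).
src_text = [
    ">>tur<< The weather is nice today.",
    ">>kaz_Cyrl<< The weather is nice today.",
]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```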
|
Helsinki-NLP/opus-mt-en-trk
|
Helsinki-NLP
| 2023-08-16T11:31:29Z | 12,908 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tt",
"cv",
"tk",
"tr",
"ba",
"trk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- tt
- cv
- tk
- tr
- ba
- trk
tags:
- translation
license: apache-2.0
---
### eng-trk
* source group: English
* target group: Turkic languages
* OPUS readme: [eng-trk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.1 | 0.437 |
| newstest2016-entr-engtur.eng.tur | 9.2 | 0.410 |
| newstest2017-entr-engtur.eng.tur | 9.0 | 0.410 |
| newstest2018-entr-engtur.eng.tur | 9.2 | 0.413 |
| Tatoeba-test.eng-aze.eng.aze | 26.8 | 0.577 |
| Tatoeba-test.eng-bak.eng.bak | 7.6 | 0.308 |
| Tatoeba-test.eng-chv.eng.chv | 4.3 | 0.270 |
| Tatoeba-test.eng-crh.eng.crh | 8.1 | 0.330 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.1 | 0.359 |
| Tatoeba-test.eng-kir.eng.kir | 28.6 | 0.524 |
| Tatoeba-test.eng-kjh.eng.kjh | 1.0 | 0.041 |
| Tatoeba-test.eng-kum.eng.kum | 2.2 | 0.075 |
| Tatoeba-test.eng.multi | 19.9 | 0.455 |
| Tatoeba-test.eng-ota.eng.ota | 0.5 | 0.065 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.030 |
| Tatoeba-test.eng-tat.eng.tat | 9.7 | 0.316 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.9 | 0.317 |
| Tatoeba-test.eng-tur.eng.tur | 34.6 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 5.4 | 0.210 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.155 |
| Tatoeba-test.eng-uzb.eng.uzb | 3.4 | 0.275 |
### System Info:
- hf_name: eng-trk
- source_languages: eng
- target_languages: trk
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']
- src_constituents: {'eng'}
- tgt_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: trk
- short_pair: en-trk
- chrF2_score: 0.455
- bleu: 19.9
- brevity_penalty: 1.0
- ref_len: 57072.0
- src_name: English
- tgt_name: Turkic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: trk
- prefer_old: False
- long_pair: eng-trk
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-tpi
|
Helsinki-NLP
| 2023-08-16T11:31:28Z | 333 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tpi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tpi
* source languages: en
* target languages: tpi
* OPUS readme: [en-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tpi | 38.7 | 0.568 |
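## Original weights (sketch)
The card also links the original Marian weights as a zip archive; these are the raw Marian NMT files rather than the converted `transformers` checkpoint. A hedged sketch of fetching and unpacking them with the Python standard library, using the URL from the "download original weights" link above:
```python
import urllib.request
import zipfile

url = "https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.zip"
archive = "opus-mt-en-tpi.zip"

urllib.request.urlretrieve(url, archive)  # download the zip archive
with zipfile.ZipFile(archive) as zf:
    zf.extractall("opus-mt-en-tpi")       # typically Marian model files, SentencePiece vocabularies, decoder config
    print(zf.namelist())
```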
|
Helsinki-NLP/opus-mt-en-to
|
Helsinki-NLP
| 2023-08-16T11:31:25Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"to",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-to
* source languages: en
* target languages: to
* OPUS readme: [en-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.to | 56.3 | 0.689 |
|
Helsinki-NLP/opus-mt-en-tn
|
Helsinki-NLP
| 2023-08-16T11:31:24Z | 142 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tn
* source languages: en
* target languages: tn
* OPUS readme: [en-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tn | 45.5 | 0.636 |
|
Helsinki-NLP/opus-mt-en-tiv
|
Helsinki-NLP
| 2023-08-16T11:31:21Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tiv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tiv
* source languages: en
* target languages: tiv
* OPUS readme: [en-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tiv | 31.6 | 0.497 |
|
Helsinki-NLP/opus-mt-en-swc
|
Helsinki-NLP
| 2023-08-16T11:31:18Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"swc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-swc
* source languages: en
* target languages: swc
* OPUS readme: [en-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.swc | 40.1 | 0.613 |
|
Helsinki-NLP/opus-mt-en-sv
|
Helsinki-NLP
| 2023-08-16T11:31:15Z | 15,146 | 7 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sv
* source languages: en
* target languages: sv
* OPUS readme: [en-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sv | 60.1 | 0.736 |
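## Reproducing the scores (sketch)
The BLEU and chr-F columns can be recomputed from detokenized system outputs and references. A hedged sketch using `sacrebleu` (not part of this card's tooling, shown only for illustration), assuming `hyps.txt` and `refs.txt` hold one sentence per line:
```python
import sacrebleu

with open("hyps.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("refs.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
# The card reports chr-F on a 0-1 scale, so divide sacrebleu's 0-100 score by 100.
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.3f}")
```
The exact figures may differ slightly from the table above depending on tokenization and chrF parameters.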
|
Nextcloud-AI/opus-mt-en-sv
|
Nextcloud-AI
| 2023-08-16T11:31:15Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-23T10:40:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sv
* source languages: en
* target languages: sv
* OPUS readme: [en-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sv | 60.1 | 0.736 |
|
Helsinki-NLP/opus-mt-en-st
|
Helsinki-NLP
| 2023-08-16T11:31:14Z | 111 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"st",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-st
* source languages: en
* target languages: st
* OPUS readme: [en-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.st | 49.8 | 0.665 |
|
Helsinki-NLP/opus-mt-en-sq
|
Helsinki-NLP
| 2023-08-16T11:31:12Z | 1,677 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sq",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sq
* source languages: en
* target languages: sq
* OPUS readme: [en-sq](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sq/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sq | 46.5 | 0.669 |
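## Example usage (sketch)
Generation options such as beam size and maximum length can be passed straight to `generate`. A minimal sketch, assuming the Hub ID `Helsinki-NLP/opus-mt-en-sq`:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-sq"  # assumed Hub ID for this checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["The meeting has been moved to Friday."], return_tensors="pt", padding=True)
# Wider beam search and a length cap; the defaults already work well for short sentences.
generated = model.generate(**batch, num_beams=6, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```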
|
Helsinki-NLP/opus-mt-en-sn
|
Helsinki-NLP
| 2023-08-16T11:31:10Z | 130 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sn
* source languages: en
* target languages: sn
* OPUS readme: [en-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sn | 38.0 | 0.646 |
|
Helsinki-NLP/opus-mt-en-sk
|
Helsinki-NLP
| 2023-08-16T11:31:06Z | 31,601 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sk
* source languages: en
* target languages: sk
* OPUS readme: [en-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sk | 36.8 | 0.578 |
|
Helsinki-NLP/opus-mt-en-sit
|
Helsinki-NLP
| 2023-08-16T11:31:05Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- sit
tags:
- translation
license: apache-2.0
---
### eng-sit
* source group: English
* target group: Sino-Tibetan languages
* OPUS readme: [eng-sit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sit/README.md)
* model: transformer
* source language(s): eng
* target language(s): bod brx brx_Latn cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans mya nan wuu yue yue_Hans yue_Hant zho zho_Hans zho_Hant
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enzh-engzho.eng.zho | 23.5 | 0.217 |
| newstest2017-enzh-engzho.eng.zho | 23.2 | 0.223 |
| newstest2018-enzh-engzho.eng.zho | 25.0 | 0.230 |
| newstest2019-enzh-engzho.eng.zho | 20.2 | 0.225 |
| Tatoeba-test.eng-bod.eng.bod | 0.4 | 0.147 |
| Tatoeba-test.eng-brx.eng.brx | 0.5 | 0.012 |
| Tatoeba-test.eng.multi | 25.7 | 0.223 |
| Tatoeba-test.eng-mya.eng.mya | 0.2 | 0.222 |
| Tatoeba-test.eng-zho.eng.zho | 29.2 | 0.249 |
### System Info:
- hf_name: eng-sit
- source_languages: eng
- target_languages: sit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sit']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sit
- short_pair: en-sit
- chrF2_score: 0.223
- bleu: 25.7
- brevity_penalty: 0.907
- ref_len: 109538.0
- src_name: English
- tgt_name: Sino-Tibetan languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sit
- prefer_old: False
- long_pair: eng-sit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-sg
|
Helsinki-NLP
| 2023-08-16T11:31:04Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sg
* source languages: en
* target languages: sg
* OPUS readme: [en-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sg | 37.0 | 0.544 |
|
Helsinki-NLP/opus-mt-en-rw
|
Helsinki-NLP
| 2023-08-16T11:31:00Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"rw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-rw
* source languages: en
* target languages: rw
* OPUS readme: [en-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.rw | 33.3 | 0.569 |
| Tatoeba.en.rw | 13.8 | 0.503 |
|
Helsinki-NLP/opus-mt-en-run
|
Helsinki-NLP
| 2023-08-16T11:30:59Z | 110 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"run",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-run
* source languages: en
* target languages: run
* OPUS readme: [en-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.run | 34.2 | 0.591 |
|
Helsinki-NLP/opus-mt-en-ru
|
Helsinki-NLP
| 2023-08-16T11:30:58Z | 81,689 | 74 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ru
* source languages: en
* target languages: ru
* OPUS readme: [en-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.en.ru | 31.1 | 0.581 |
| newstest2013.en.ru | 23.5 | 0.513 |
| newstest2015-enru.en.ru | 27.5 | 0.564 |
| newstest2016-enru.en.ru | 26.4 | 0.548 |
| newstest2017-enru.en.ru | 29.1 | 0.572 |
| newstest2018-enru.en.ru | 25.4 | 0.554 |
| newstest2019-enru.en.ru | 27.1 | 0.533 |
| Tatoeba.en.ru | 48.4 | 0.669 |
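## Example usage (sketch)
For larger inputs the model can translate in batches, optionally on GPU. A minimal sketch, assuming the Hub ID `Helsinki-NLP/opus-mt-en-ru`, falling back to CPU when no CUDA device is available:
```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ru"  # assumed Hub ID for this checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

sentences = [
    "The report was published yesterday.",
    "Prices rose by three percent in March.",
]
batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True).to(device)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```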
|
Helsinki-NLP/opus-mt-en-roa
|
Helsinki-NLP
| 2023-08-16T11:30:57Z | 1,729 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"roa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- it
- ca
- rm
- es
- ro
- gl
- co
- wa
- pt
- oc
- an
- id
- fr
- ht
- roa
tags:
- translation
license: apache-2.0
---
### eng-roa
* source group: English
* target group: Romance languages
* OPUS readme: [eng-roa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-roa/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.6 | 0.567 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 30.2 | 0.575 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.5 | 0.612 |
| newssyscomb2009-engfra.eng.fra | 27.9 | 0.570 |
| newssyscomb2009-engita.eng.ita | 29.3 | 0.590 |
| newssyscomb2009-engspa.eng.spa | 29.6 | 0.570 |
| news-test2008-engfra.eng.fra | 25.2 | 0.538 |
| news-test2008-engspa.eng.spa | 27.3 | 0.548 |
| newstest2009-engfra.eng.fra | 26.9 | 0.560 |
| newstest2009-engita.eng.ita | 28.7 | 0.583 |
| newstest2009-engspa.eng.spa | 29.0 | 0.568 |
| newstest2010-engfra.eng.fra | 29.3 | 0.574 |
| newstest2010-engspa.eng.spa | 34.2 | 0.601 |
| newstest2011-engfra.eng.fra | 31.4 | 0.592 |
| newstest2011-engspa.eng.spa | 35.0 | 0.599 |
| newstest2012-engfra.eng.fra | 29.5 | 0.576 |
| newstest2012-engspa.eng.spa | 35.5 | 0.603 |
| newstest2013-engfra.eng.fra | 29.9 | 0.567 |
| newstest2013-engspa.eng.spa | 32.1 | 0.578 |
| newstest2016-enro-engron.eng.ron | 26.1 | 0.551 |
| Tatoeba-test.eng-arg.eng.arg | 1.4 | 0.125 |
| Tatoeba-test.eng-ast.eng.ast | 17.8 | 0.406 |
| Tatoeba-test.eng-cat.eng.cat | 48.3 | 0.676 |
| Tatoeba-test.eng-cos.eng.cos | 3.2 | 0.275 |
| Tatoeba-test.eng-egl.eng.egl | 0.2 | 0.084 |
| Tatoeba-test.eng-ext.eng.ext | 11.2 | 0.344 |
| Tatoeba-test.eng-fra.eng.fra | 45.3 | 0.637 |
| Tatoeba-test.eng-frm.eng.frm | 1.1 | 0.221 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.118 |
| Tatoeba-test.eng-glg.eng.glg | 44.2 | 0.645 |
| Tatoeba-test.eng-hat.eng.hat | 28.0 | 0.502 |
| Tatoeba-test.eng-ita.eng.ita | 45.6 | 0.674 |
| Tatoeba-test.eng-lad.eng.lad | 8.2 | 0.322 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.182 |
| Tatoeba-test.eng-lld.eng.lld | 0.8 | 0.217 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.190 |
| Tatoeba-test.eng-mfe.eng.mfe | 91.9 | 0.956 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.548 |
| Tatoeba-test.eng.multi | 42.9 | 0.636 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.1 | 0.234 |
| Tatoeba-test.eng-oci.eng.oci | 7.9 | 0.297 |
| Tatoeba-test.eng-pap.eng.pap | 44.1 | 0.648 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.190 |
| Tatoeba-test.eng-por.eng.por | 41.8 | 0.639 |
| Tatoeba-test.eng-roh.eng.roh | 3.5 | 0.261 |
| Tatoeba-test.eng-ron.eng.ron | 41.0 | 0.635 |
| Tatoeba-test.eng-scn.eng.scn | 1.7 | 0.184 |
| Tatoeba-test.eng-spa.eng.spa | 50.1 | 0.689 |
| Tatoeba-test.eng-vec.eng.vec | 3.2 | 0.248 |
| Tatoeba-test.eng-wln.eng.wln | 7.2 | 0.220 |
### System Info:
- hf_name: eng-roa
- source_languages: eng
- target_languages: roa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-roa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: roa
- short_pair: en-roa
- chrF2_score: 0.636
- bleu: 42.9
- brevity_penalty: 0.978
- ref_len: 72751.0
- src_name: English
- tgt_name: Romance languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: roa
- prefer_old: False
- long_pair: eng-roa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
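## Example usage (sketch)
As with the other multilingual checkpoints, the target language is chosen with a leading `>>id<<` token, and the converted tokenizer exposes the valid codes. A minimal sketch, assuming the Hub ID `Helsinki-NLP/opus-mt-en-roa` and that the MarianTokenizer provides `supported_language_codes`:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-roa"  # assumed Hub ID for the eng-roa checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

print(tokenizer.supported_language_codes)  # e.g. ['>>fra<<', '>>spa<<', '>>por<<', ...]

src_text = [">>spa<< The library opens at nine.", ">>fra<< The library opens at nine."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```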
|
Helsinki-NLP/opus-mt-en-pqw
|
Helsinki-NLP
| 2023-08-16T11:30:53Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pqw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- pqw
tags:
- translation
license: apache-2.0
---
### eng-pqw
* source group: English
* target group: Western Malayo-Polynesian languages
* OPUS readme: [eng-pqw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqw/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp hil iba ilo ind jav jav_Java mad max_Latn min mlg pag pau sun tmw_Latn war zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 3.0 | 0.143 |
| Tatoeba-test.eng-ceb.eng.ceb | 11.4 | 0.432 |
| Tatoeba-test.eng-cha.eng.cha | 1.4 | 0.189 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.6 | 0.139 |
| Tatoeba-test.eng-hil.eng.hil | 17.7 | 0.525 |
| Tatoeba-test.eng-iba.eng.iba | 14.6 | 0.365 |
| Tatoeba-test.eng-ilo.eng.ilo | 34.0 | 0.590 |
| Tatoeba-test.eng-jav.eng.jav | 6.2 | 0.299 |
| Tatoeba-test.eng-mad.eng.mad | 2.6 | 0.154 |
| Tatoeba-test.eng-mlg.eng.mlg | 34.3 | 0.518 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.561 |
| Tatoeba-test.eng.multi | 17.5 | 0.422 |
| Tatoeba-test.eng-pag.eng.pag | 19.8 | 0.507 |
| Tatoeba-test.eng-pau.eng.pau | 1.2 | 0.129 |
| Tatoeba-test.eng-sun.eng.sun | 30.3 | 0.418 |
| Tatoeba-test.eng-war.eng.war | 12.6 | 0.439 |
### System Info:
- hf_name: eng-pqw
- source_languages: eng
- target_languages: pqw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pqw']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: pqw
- short_pair: en-pqw
- chrF2_score: 0.422
- bleu: 17.5
- brevity_penalty: 1.0
- ref_len: 66758.0
- src_name: English
- tgt_name: Western Malayo-Polynesian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: pqw
- prefer_old: False
- long_pair: eng-pqw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-pqe
|
Helsinki-NLP
| 2023-08-16T11:30:52Z | 117 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- fj
- mi
- ty
- to
- na
- sm
- mh
- pqe
tags:
- translation
license: apache-2.0
---
### eng-pqe
* source group: English
* target group: Eastern Malayo-Polynesian languages
* OPUS readme: [eng-pqe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqe/README.md)
* model: transformer
* source language(s): eng
* target language(s): fij gil haw lkt mah mri nau niu rap smo tah ton tvl
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-fij.eng.fij | 22.1 | 0.396 |
| Tatoeba-test.eng-gil.eng.gil | 41.9 | 0.673 |
| Tatoeba-test.eng-haw.eng.haw | 0.6 | 0.114 |
| Tatoeba-test.eng-lkt.eng.lkt | 0.5 | 0.075 |
| Tatoeba-test.eng-mah.eng.mah | 9.7 | 0.386 |
| Tatoeba-test.eng-mri.eng.mri | 7.7 | 0.301 |
| Tatoeba-test.eng.multi | 11.3 | 0.306 |
| Tatoeba-test.eng-nau.eng.nau | 0.5 | 0.071 |
| Tatoeba-test.eng-niu.eng.niu | 42.5 | 0.560 |
| Tatoeba-test.eng-rap.eng.rap | 3.3 | 0.122 |
| Tatoeba-test.eng-smo.eng.smo | 27.0 | 0.462 |
| Tatoeba-test.eng-tah.eng.tah | 11.3 | 0.307 |
| Tatoeba-test.eng-ton.eng.ton | 27.0 | 0.528 |
| Tatoeba-test.eng-tvl.eng.tvl | 29.3 | 0.513 |
### System Info:
- hf_name: eng-pqe
- source_languages: eng
- target_languages: pqe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe']
- src_constituents: {'eng'}
- tgt_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: pqe
- short_pair: en-pqe
- chrF2_score: 0.306
- bleu: 11.3
- brevity_penalty: 1.0
- ref_len: 5786.0
- src_name: English
- tgt_name: Eastern Malayo-Polynesian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: pqe
- prefer_old: False
- long_pair: eng-pqe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-poz
|
Helsinki-NLP
| 2023-08-16T11:30:51Z | 114 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"poz",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- poz
tags:
- translation
license: apache-2.0
---
### eng-poz
* source group: English
* target group: Malayo-Polynesian languages
* OPUS readme: [eng-poz](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-poz/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 1.3 | 0.086 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.2 | 0.426 |
| Tatoeba-test.eng-cha.eng.cha | 1.9 | 0.196 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.4 | 0.121 |
| Tatoeba-test.eng-fij.eng.fij | 31.0 | 0.463 |
| Tatoeba-test.eng-gil.eng.gil | 45.4 | 0.635 |
| Tatoeba-test.eng-haw.eng.haw | 0.6 | 0.104 |
| Tatoeba-test.eng-hil.eng.hil | 14.4 | 0.498 |
| Tatoeba-test.eng-iba.eng.iba | 17.4 | 0.414 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.1 | 0.585 |
| Tatoeba-test.eng-jav.eng.jav | 6.5 | 0.309 |
| Tatoeba-test.eng-lkt.eng.lkt | 0.5 | 0.065 |
| Tatoeba-test.eng-mad.eng.mad | 1.7 | 0.156 |
| Tatoeba-test.eng-mah.eng.mah | 12.7 | 0.391 |
| Tatoeba-test.eng-mlg.eng.mlg | 30.3 | 0.504 |
| Tatoeba-test.eng-mri.eng.mri | 8.2 | 0.316 |
| Tatoeba-test.eng-msa.eng.msa | 30.4 | 0.561 |
| Tatoeba-test.eng.multi | 16.2 | 0.410 |
| Tatoeba-test.eng-nau.eng.nau | 0.6 | 0.087 |
| Tatoeba-test.eng-niu.eng.niu | 33.2 | 0.482 |
| Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.555 |
| Tatoeba-test.eng-pau.eng.pau | 1.0 | 0.124 |
| Tatoeba-test.eng-rap.eng.rap | 1.4 | 0.090 |
| Tatoeba-test.eng-smo.eng.smo | 12.9 | 0.407 |
| Tatoeba-test.eng-sun.eng.sun | 15.5 | 0.364 |
| Tatoeba-test.eng-tah.eng.tah | 9.5 | 0.295 |
| Tatoeba-test.eng-tet.eng.tet | 1.2 | 0.146 |
| Tatoeba-test.eng-ton.eng.ton | 23.7 | 0.484 |
| Tatoeba-test.eng-tvl.eng.tvl | 32.5 | 0.549 |
| Tatoeba-test.eng-war.eng.war | 12.6 | 0.432 |
### System Info:
- hf_name: eng-poz
- source_languages: eng
- target_languages: poz
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-poz/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'poz']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: poz
- short_pair: en-poz
- chrF2_score: 0.41
- bleu: 16.2
- brevity_penalty: 1.0
- ref_len: 66803.0
- src_name: English
- tgt_name: Malayo-Polynesian languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: poz
- prefer_old: False
- long_pair: eng-poz
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-pon
|
Helsinki-NLP
| 2023-08-16T11:30:50Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pon",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pon
* source languages: en
* target languages: pon
* OPUS readme: [en-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pon | 32.4 | 0.542 |
|
Helsinki-NLP/opus-mt-en-pis
|
Helsinki-NLP
| 2023-08-16T11:30:48Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pis
* source languages: en
* target languages: pis
* OPUS readme: [en-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pis | 38.3 | 0.571 |
|
Helsinki-NLP/opus-mt-en-phi
|
Helsinki-NLP
| 2023-08-16T11:30:47Z | 116 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"phi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- phi
tags:
- translation
license: apache-2.0
---
### eng-phi
* source group: English
* target group: Philippine languages
* OPUS readme: [eng-phi](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb hil ilo pag war
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 7.1 | 0.245 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.5 | 0.435 |
| Tatoeba-test.eng-hil.eng.hil | 18.0 | 0.506 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.4 | 0.590 |
| Tatoeba-test.eng.multi | 13.1 | 0.392 |
| Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.481 |
| Tatoeba-test.eng-war.eng.war | 12.8 | 0.441 |
### System Info:
- hf_name: eng-phi
- source_languages: eng
- target_languages: phi
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'phi']
- src_constituents: {'eng'}
- tgt_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: phi
- short_pair: en-phi
- chrF2_score: 0.392
- bleu: 13.1
- brevity_penalty: 1.0
- ref_len: 30022.0
- src_name: English
- tgt_name: Philippine languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: phi
- prefer_old: False
- long_pair: eng-phi
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-pap
|
Helsinki-NLP
| 2023-08-16T11:30:46Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pap
* source languages: en
* target languages: pap
* OPUS readme: [en-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pap | 40.1 | 0.586 |
| Tatoeba.en.pap | 52.8 | 0.665 |
|
Helsinki-NLP/opus-mt-en-pag
|
Helsinki-NLP
| 2023-08-16T11:30:45Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pag",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pag
* source languages: en
* target languages: pag
* OPUS readme: [en-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pag | 37.9 | 0.598 |
|
Helsinki-NLP/opus-mt-en-om
|
Helsinki-NLP
| 2023-08-16T11:30:44Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"om",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-om
* source languages: en
* target languages: om
* OPUS readme: [en-om](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-om/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.om | 21.8 | 0.498 |
|
Helsinki-NLP/opus-mt-en-nyk
|
Helsinki-NLP
| 2023-08-16T11:30:43Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"nyk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-nyk
* source languages: en
* target languages: nyk
* OPUS readme: [en-nyk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nyk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.nyk | 26.6 | 0.511 |
|