| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string, 5-139 chars | string, 2-42 chars | timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-10 18:30:15 | int64, 0-223M | int64, 0-11.7k | 553 classes | list, 1-4.05k items | 55 classes | timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-10 18:29:50 | string, 11-1.01M chars |
| Helsinki-NLP/opus-mt-el-fi | Helsinki-NLP | 2023-08-16T11:28:48Z | 112 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-el-fi
* source languages: el
* target languages: fi
* OPUS readme: [el-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.el.fi | 25.3 | 0.517 |
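The released checkpoint can also be loaded straight from the Hugging Face Hub with the `transformers` Marian classes. A minimal sketch (the model name comes from this card; the Greek input sentence is an invented example, and `sentencepiece` needs to be installed for the tokenizer):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-el-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Invented Greek example sentence; any Greek input is handled the same way
batch = tokenizer(["Καλημέρα, τι κάνεις;"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```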
|
| Helsinki-NLP/opus-mt-el-eo | Helsinki-NLP | 2023-08-16T11:28:47Z | 109 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- el
- eo
tags:
- translation
license: apache-2.0
---
### ell-epo
* source group: Modern Greek (1453-)
* target group: Esperanto
* OPUS readme: [ell-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md)
* model: transformer-align
* source language(s): ell
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.epo | 32.4 | 0.517 |
### System Info:
- hf_name: ell-epo
- source_languages: ell
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'eo']
- src_constituents: {'ell'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt
- src_alpha3: ell
- tgt_alpha3: epo
- short_pair: el-eo
- chrF2_score: 0.517
- bleu: 32.4
- brevity_penalty: 0.979
- ref_len: 3807.0
- src_name: Modern Greek (1453-)
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: el
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ell-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-el-ar | Helsinki-NLP | 2023-08-16T11:28:46Z | 123 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "el", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- el
- ar
tags:
- translation
license: apache-2.0
---
### ell-ara
* source group: Modern Greek (1453-)
* target group: Arabic
* OPUS readme: [ell-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md)
* model: transformer
* source language(s): ell
* target language(s): ara arz
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.eval.txt)
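Because this checkpoint covers more than one target variant (`ara`, `arz`), every input sentence must carry the `>>id<<` prefix mentioned above. A minimal sketch using the `transformers` translation pipeline (the Greek sentence is an invented example):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-el-ar")

# ">>ara<<" selects Standard Arabic; ">>arz<<" would select Egyptian Arabic instead
result = translator(">>ara<< Η θάλασσα είναι ήσυχη σήμερα.")
print(result[0]["translation_text"])
```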
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.ara | 21.9 | 0.485 |
### System Info:
- hf_name: ell-ara
- source_languages: ell
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'ar']
- src_constituents: {'ell'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt
- src_alpha3: ell
- tgt_alpha3: ara
- short_pair: el-ar
- chrF2_score: 0.485
- bleu: 21.9
- brevity_penalty: 0.972
- ref_len: 1686.0
- src_name: Modern Greek (1453-)
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: el
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ell-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-ee-fr | Helsinki-NLP | 2023-08-16T11:28:38Z | 125 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-fr
* source languages: ee
* target languages: fr
* OPUS readme: [ee-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fr | 27.1 | 0.450 |
|
| Helsinki-NLP/opus-mt-ee-fi | Helsinki-NLP | 2023-08-16T11:28:37Z | 116 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-fi
* source languages: ee
* target languages: fi
* OPUS readme: [ee-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fi | 25.0 | 0.482 |
|
| Helsinki-NLP/opus-mt-ee-en | Helsinki-NLP | 2023-08-16T11:28:35Z | 137 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-en
* source languages: ee
* target languages: en
* OPUS readme: [ee-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.en | 39.3 | 0.556 |
| Tatoeba.ee.en | 21.2 | 0.569 |
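The `.eval.txt` file linked above holds the published scores for this release; a small sketch of fetching it with the Python standard library (URL taken from this card):

```python
import urllib.request

url = "https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.eval.txt"
with urllib.request.urlopen(url) as response:
    # The file is a short plain-text report containing the BLEU and chr-F numbers
    print(response.read().decode("utf-8"))
```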
|
| Helsinki-NLP/opus-mt-ee-de | Helsinki-NLP | 2023-08-16T11:28:34Z | 123 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ee", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-de
* source languages: ee
* target languages: de
* OPUS readme: [ee-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.de | 22.3 | 0.430 |
|
| Helsinki-NLP/opus-mt-dra-en | Helsinki-NLP | 2023-08-16T11:28:33Z | 130 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ta", "kn", "ml", "te", "dra", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ta
- kn
- ml
- te
- dra
- en
tags:
- translation
license: apache-2.0
---
### dra-eng
* source group: Dravidian languages
* target group: English
* OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md)
* model: transformer
* source language(s): kan mal tam tel
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 |
| Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 |
| Tatoeba-test.multi.eng | 30.0 | 0.493 |
| Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 |
| Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 |
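Since the source side is multilingual (Kannada, Malayalam, Tamil, Telugu) while the target is English only, one checkpoint serves all four source languages, and no `>>id<<` target token is listed for it. A minimal sketch with invented Tamil and Malayalam example sentences:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-dra-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# One Tamil and one Malayalam sentence in the same batch (invented examples)
sources = ["நான் ஒரு புத்தகம் படிக்கிறேன்.", "ഞാൻ ഒരു പുസ്തകം വായിക്കുന്നു."]
batch = tokenizer(sources, return_tensors="pt", padding=True)
for line in tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True):
    print(line)
```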
### System Info:
- hf_name: dra-eng
- source_languages: dra
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en']
- src_constituents: {'tam', 'kan', 'mal', 'tel'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt
- src_alpha3: dra
- tgt_alpha3: eng
- short_pair: dra-en
- chrF2_score: 0.493
- bleu: 30.0
- brevity_penalty: 1.0
- ref_len: 10641.0
- src_name: Dravidian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: dra
- tgt_alpha2: en
- prefer_old: False
- long_pair: dra-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-de-vi | Helsinki-NLP | 2023-08-16T11:28:32Z | 205 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "vi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- de
- vi
tags:
- translation
license: apache-2.0
---
### deu-vie
* source group: German
* target group: Vietnamese
* OPUS readme: [deu-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-vie/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): vie
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.vie | 25.0 | 0.443 |
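A sketch of the same model through the pipeline API, assuming the generic `translation_xx_to_yy` task-name form works for this pair; the German sentence is an invented example and `max_length` is an ordinary generation argument, not something specific to this card:

```python
from transformers import pipeline

translator = pipeline("translation_de_to_vi", model="Helsinki-NLP/opus-mt-de-vi")
# Invented German input; max_length simply caps the generated Vietnamese output
print(translator("Das Wetter ist heute schön.", max_length=64)[0]["translation_text"])
```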
### System Info:
- hf_name: deu-vie
- source_languages: deu
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'vi']
- src_constituents: {'deu'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: vie
- short_pair: de-vi
- chrF2_score: 0.443
- bleu: 25.0
- brevity_penalty: 1.0
- ref_len: 3768.0
- src_name: German
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: vi
- prefer_old: False
- long_pair: deu-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-de-pon | Helsinki-NLP | 2023-08-16T11:28:28Z | 123 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pon", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pon
* source languages: de
* target languages: pon
* OPUS readme: [de-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pon | 21.0 | 0.442 |
|
| Helsinki-NLP/opus-mt-de-pl | Helsinki-NLP | 2023-08-16T11:28:27Z | 712 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pl
* source languages: de
* target languages: pl
* OPUS readme: [de-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.pl | 41.2 | 0.631 |
|
| Helsinki-NLP/opus-mt-de-pis | Helsinki-NLP | 2023-08-16T11:28:26Z | 125 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pis", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pis
* source languages: de
* target languages: pis
* OPUS readme: [de-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pis/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pis/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pis/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pis | 26.0 | 0.470 |
|
| Helsinki-NLP/opus-mt-de-pap | Helsinki-NLP | 2023-08-16T11:28:25Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pap", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pap
* source languages: de
* target languages: pap
* OPUS readme: [de-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pap/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pap/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pap/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pap | 25.6 | 0.453 |
|
| Helsinki-NLP/opus-mt-de-pag | Helsinki-NLP | 2023-08-16T11:28:24Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pag", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pag
* source languages: de
* target languages: pag
* OPUS readme: [de-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pag | 24.3 | 0.469 |
|
| Helsinki-NLP/opus-mt-de-ny | Helsinki-NLP | 2023-08-16T11:28:23Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ny", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ny
* source languages: de
* target languages: ny
* OPUS readme: [de-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ny | 21.4 | 0.481 |
|
| Helsinki-NLP/opus-mt-de-no | Helsinki-NLP | 2023-08-16T11:28:21Z | 143 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- de
- no
tags:
- translation
license: apache-2.0
---
### deu-nor
* source group: German
* target group: Norwegian
* OPUS readme: [deu-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-nor/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.nor | 33.2 | 0.554 |
### System Info:
- hf_name: deu-nor
- source_languages: deu
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'no']
- src_constituents: {'deu'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: nor
- short_pair: de-no
- chrF2_score: 0.554
- bleu: 33.2
- brevity_penalty: 0.956
- ref_len: 32928.0
- src_name: German
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: no
- prefer_old: False
- long_pair: deu-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-de-niu | Helsinki-NLP | 2023-08-16T11:28:19Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "niu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-niu
* source languages: de
* target languages: niu
* OPUS readme: [de-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-niu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-niu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-niu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.niu | 28.4 | 0.496 |
|
| Helsinki-NLP/opus-mt-de-mt | Helsinki-NLP | 2023-08-16T11:28:18Z | 105 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "mt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-mt
* source languages: de
* target languages: mt
* OPUS readme: [de-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.mt | 25.0 | 0.436 |
|
| Helsinki-NLP/opus-mt-de-lua | Helsinki-NLP | 2023-08-16T11:28:15Z | 127 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "lua", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-lua
* source languages: de
* target languages: lua
* OPUS readme: [de-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-lua/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lua/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lua/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.lua | 23.1 | 0.467 |
|
| Helsinki-NLP/opus-mt-de-loz | Helsinki-NLP | 2023-08-16T11:28:13Z | 121 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "loz", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-loz
* source languages: de
* target languages: loz
* OPUS readme: [de-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.loz | 27.7 | 0.480 |
|
| Helsinki-NLP/opus-mt-de-ln | Helsinki-NLP | 2023-08-16T11:28:12Z | 127 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ln", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ln
* source languages: de
* target languages: ln
* OPUS readme: [de-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ln | 26.7 | 0.504 |
|
| Nextcloud-AI/opus-mt-de-it | Nextcloud-AI | 2023-08-16T11:28:10Z | 103 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:38:32Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-it
* source languages: de
* target languages: it
* OPUS readme: [de-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.it | 45.3 | 0.671 |
|
| Helsinki-NLP/opus-mt-de-it | Helsinki-NLP | 2023-08-16T11:28:10Z | 1,692 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-it
* source languages: de
* target languages: it
* OPUS readme: [de-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.it | 45.3 | 0.671 |
|
| Helsinki-NLP/opus-mt-de-is | Helsinki-NLP | 2023-08-16T11:28:08Z | 129 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "is", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- de
- is
tags:
- translation
license: apache-2.0
---
### deu-isl
* source group: German
* target group: Icelandic
* OPUS readme: [deu-isl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-isl/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): isl
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.isl | 27.1 | 0.533 |
### System Info:
- hf_name: deu-isl
- source_languages: deu
- target_languages: isl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-isl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'is']
- src_constituents: {'deu'}
- tgt_constituents: {'isl'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: isl
- short_pair: de-is
- chrF2_score: 0.533
- bleu: 27.1
- brevity_penalty: 0.962
- ref_len: 5939.0
- src_name: German
- tgt_name: Icelandic
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: is
- prefer_old: False
- long_pair: deu-isl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-de-ilo | Helsinki-NLP | 2023-08-16T11:28:06Z | 129 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ilo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ilo
* source languages: de
* target languages: ilo
* OPUS readme: [de-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ilo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ilo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ilo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ilo | 29.8 | 0.533 |
|
| Helsinki-NLP/opus-mt-de-ho | Helsinki-NLP | 2023-08-16T11:28:01Z | 116 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ho", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ho
* source languages: de
* target languages: ho
* OPUS readme: [de-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ho/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ho/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ho/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ho | 22.6 | 0.461 |
|
| Helsinki-NLP/opus-mt-de-hil | Helsinki-NLP | 2023-08-16T11:28:00Z | 115 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "hil", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-hil
* source languages: de
* target languages: hil
* OPUS readme: [de-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.hil | 33.9 | 0.563 |
|
| Helsinki-NLP/opus-mt-de-he | Helsinki-NLP | 2023-08-16T11:27:59Z | 185 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-he
* source languages: de
* target languages: he
* OPUS readme: [de-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-he/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-he/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-he/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.he | 36.6 | 0.581 |
|
| Helsinki-NLP/opus-mt-de-ha | Helsinki-NLP | 2023-08-16T11:27:58Z | 115 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ha", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ha
* source languages: de
* target languages: ha
* OPUS readme: [de-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ha | 20.7 | 0.417 |
|
| Helsinki-NLP/opus-mt-de-gaa | Helsinki-NLP | 2023-08-16T11:27:54Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "gaa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-gaa
* source languages: de
* target languages: gaa
* OPUS readme: [de-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-gaa/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gaa/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gaa/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.gaa | 26.3 | 0.471 |
|
| Nextcloud-AI/opus-mt-de-fr | Nextcloud-AI | 2023-08-16T11:27:53Z | 103 | 0 | transformers | ["transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:38:23Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-fr
* source languages: de
* target languages: fr
* OPUS readme: [de-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| euelections_dev2019.transformer-align.de | 32.2 | 0.590 |
| newssyscomb2009.de.fr | 26.8 | 0.553 |
| news-test2008.de.fr | 26.4 | 0.548 |
| newstest2009.de.fr | 25.6 | 0.539 |
| newstest2010.de.fr | 29.1 | 0.572 |
| newstest2011.de.fr | 26.9 | 0.551 |
| newstest2012.de.fr | 27.7 | 0.554 |
| newstest2013.de.fr | 29.5 | 0.560 |
| newstest2019-defr.de.fr | 36.6 | 0.625 |
| Tatoeba.de.fr | 49.2 | 0.664 |
|
| Helsinki-NLP/opus-mt-de-fj | Helsinki-NLP | 2023-08-16T11:27:52Z | 117 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "fj", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-fj
* source languages: de
* target languages: fj
* OPUS readme: [de-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.fj | 24.6 | 0.470 |
|
| Helsinki-NLP/opus-mt-de-fi | Helsinki-NLP | 2023-08-16T11:27:51Z | 2,457 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-fi
* source languages: de
* target languages: fi
* OPUS readme: [de-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.fi | 40.0 | 0.628 |
|
| Helsinki-NLP/opus-mt-de-es | Helsinki-NLP | 2023-08-16T11:27:48Z | 32,010 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-es
* source languages: de
* target languages: es
* OPUS readme: [de-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.es | 48.5 | 0.676 |
|
| Helsinki-NLP/opus-mt-de-eo | Helsinki-NLP | 2023-08-16T11:27:47Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-eo
* source languages: de
* target languages: eo
* OPUS readme: [de-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.eo | 48.6 | 0.673 |
|
| Nextcloud-AI/opus-mt-de-en | Nextcloud-AI | 2023-08-16T11:27:46Z | 109 | 0 | transformers | ["transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:37:55Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-en
* source languages: de
* target languages: en
* OPUS readme: [de-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.de.en | 29.4 | 0.557 |
| news-test2008.de.en | 27.8 | 0.548 |
| newstest2009.de.en | 26.8 | 0.543 |
| newstest2010.de.en | 30.2 | 0.584 |
| newstest2011.de.en | 27.4 | 0.556 |
| newstest2012.de.en | 29.1 | 0.569 |
| newstest2013.de.en | 32.1 | 0.583 |
| newstest2014-deen.de.en | 34.0 | 0.600 |
| newstest2015-ende.de.en | 34.2 | 0.599 |
| newstest2016-ende.de.en | 40.4 | 0.649 |
| newstest2017-ende.de.en | 35.7 | 0.610 |
| newstest2018-ende.de.en | 43.7 | 0.667 |
| newstest2019-deen.de.en | 40.1 | 0.642 |
| Tatoeba.de.en | 55.4 | 0.707 |
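The BLEU and chr-F columns above are corpus-level scores. Given detokenized system outputs and references (for instance extracted from the linked `.test.txt` file), they could be recomputed with `sacrebleu` roughly as follows; this is a sketch, not the exact original scoring setup, and the hypothesis/reference strings are placeholders:

```python
import sacrebleu

# Placeholder lists; in practice these come from the released test-set translations
hypotheses = ["The weather is nice today.", "I am reading a book."]
references = ["The weather is nice today.", "I read a book."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
# Recent sacrebleu versions report chrF on a 0-100 scale; the card uses 0-1
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.3f}")
```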
|
| Helsinki-NLP/opus-mt-de-efi | Helsinki-NLP | 2023-08-16T11:27:43Z | 101 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "efi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-efi
* source languages: de
* target languages: efi
* OPUS readme: [de-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.efi | 24.2 | 0.451 |
|
| Helsinki-NLP/opus-mt-de-cs | Helsinki-NLP | 2023-08-16T11:27:39Z | 312 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "cs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-cs
* source languages: de
* target languages: cs
* OPUS readme: [de-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.de.cs | 22.4 | 0.499 |
| news-test2008.de.cs | 20.2 | 0.487 |
| newstest2009.de.cs | 20.9 | 0.485 |
| newstest2010.de.cs | 22.7 | 0.510 |
| newstest2011.de.cs | 21.2 | 0.487 |
| newstest2012.de.cs | 20.9 | 0.479 |
| newstest2013.de.cs | 23.0 | 0.500 |
| newstest2019-decs.de.cs | 22.5 | 0.495 |
| Tatoeba.de.cs | 42.2 | 0.625 |
|
| Helsinki-NLP/opus-mt-de-crs | Helsinki-NLP | 2023-08-16T11:27:38Z | 118 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "crs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-crs
* source languages: de
* target languages: crs
* OPUS readme: [de-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-crs/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-crs/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-crs/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.crs | 24.1 | 0.429 |
|
| Helsinki-NLP/opus-mt-de-bzs | Helsinki-NLP | 2023-08-16T11:27:36Z | 122 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "bzs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-bzs
* source languages: de
* target languages: bzs
* OPUS readme: [de-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.bzs | 21.0 | 0.389 |
|
| Helsinki-NLP/opus-mt-de-bg | Helsinki-NLP | 2023-08-16T11:27:34Z | 472 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "bg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- de
- bg
tags:
- translation
license: apache-2.0
---
### deu-bul
* source group: German
* target group: Bulgarian
* OPUS readme: [deu-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-bul/README.md)
* model: transformer
* source language(s): deu
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.bul | 50.7 | 0.683 |
### System Info:
- hf_name: deu-bul
- source_languages: deu
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'bg']
- src_constituents: {'deu'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.test.txt
- src_alpha3: deu
- tgt_alpha3: bul
- short_pair: de-bg
- chrF2_score: 0.683
- bleu: 50.7
- brevity_penalty: 0.98
- ref_len: 2032.0
- src_name: German
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: de
- tgt_alpha2: bg
- prefer_old: False
- long_pair: deu-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
| Helsinki-NLP/opus-mt-de-bcl | Helsinki-NLP | 2023-08-16T11:27:33Z | 105 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "bcl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-bcl
* source languages: de
* target languages: bcl
* OPUS readme: [de-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.bcl | 34.6 | 0.563 |
| Helsinki-NLP/opus-mt-de-ase | Helsinki-NLP | 2023-08-16T11:27:31Z | 132 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ase", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ase
* source languages: de
* target languages: ase
* OPUS readme: [de-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ase/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ase | 30.4 | 0.483 |
| Nextcloud-AI/opus-mt-de-ar | Nextcloud-AI | 2023-08-16T11:27:30Z | 105 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:37:47Z |
---
language:
- de
- ar
tags:
- translation
license: apache-2.0
---
### deu-ara
* source group: German
* target group: Arabic
* OPUS readme: [deu-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ara/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): afb apc ara ara_Latn arq arz
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.ara | 19.7 | 0.486 |
### System Info:
- hf_name: deu-ara
- source_languages: deu
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ar']
- src_constituents: {'deu'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.test.txt
- src_alpha3: deu
- tgt_alpha3: ara
- short_pair: de-ar
- chrF2_score: 0.486
- bleu: 19.7
- brevity_penalty: 0.993
- ref_len: 6324.0
- src_name: German
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: de
- tgt_alpha2: ar
- prefer_old: False
- long_pair: deu-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-de-ar | Helsinki-NLP | 2023-08-16T11:27:30Z | 624 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- de
- ar
tags:
- translation
license: apache-2.0
---
### deu-ara
* source group: German
* target group: Arabic
* OPUS readme: [deu-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ara/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): afb apc ara ara_Latn arq arz
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.ara | 19.7 | 0.486 |
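A minimal usage sketch with the `transformers` MarianMT classes; the required sentence-initial `>>id<<` token is shown here as `>>ara<<`, one of the target language IDs listed above (the German example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Prepend the target-language token (e.g. >>ara<<) to each source sentence.
src_texts = [">>ara<< Das Wetter ist heute schön."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```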
### System Info:
- hf_name: deu-ara
- source_languages: deu
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ar']
- src_constituents: {'deu'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.test.txt
- src_alpha3: deu
- tgt_alpha3: ara
- short_pair: de-ar
- chrF2_score: 0.486
- bleu: 19.7
- brevity_penalty: 0.993
- ref_len: 6324.0
- src_name: German
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: de
- tgt_alpha2: ar
- prefer_old: False
- long_pair: deu-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Nextcloud-AI/opus-mt-de-zh | Nextcloud-AI | 2023-08-16T11:27:28Z | 106 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:38:53Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ZH
* source languages: de
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.de.zh | 24.4 | 0.335 |
| Helsinki-NLP/opus-mt-da-no | Helsinki-NLP | 2023-08-16T11:27:26Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- da
- no
tags:
- translation
license: apache-2.0
---
### dan-nor
* source group: Danish
* target group: Norwegian
* OPUS readme: [dan-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-nor/README.md)
* model: transformer-align
* source language(s): dan
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan.nor | 66.4 | 0.801 |
### System Info:
- hf_name: dan-nor
- source_languages: dan
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'no']
- src_constituents: {'dan'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-nor/opus-2020-06-17.test.txt
- src_alpha3: dan
- tgt_alpha3: nor
- short_pair: da-no
- chrF2_score: 0.8009999999999999
- bleu: 66.4
- brevity_penalty: 0.996
- ref_len: 9691.0
- src_name: Danish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: da
- tgt_alpha2: no
- prefer_old: False
- long_pair: dan-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-da-fr | Helsinki-NLP | 2023-08-16T11:27:25Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-da-fr
* source languages: da
* target languages: fr
* OPUS readme: [da-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.fr | 62.2 | 0.751 |
| Helsinki-NLP/opus-mt-da-eo | Helsinki-NLP | 2023-08-16T11:27:22Z | 108 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- da
- eo
tags:
- translation
license: apache-2.0
---
### dan-epo
* source group: Danish
* target group: Esperanto
* OPUS readme: [dan-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-epo/README.md)
* model: transformer-align
* source language(s): dan
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan.epo | 23.6 | 0.432 |
### System Info:
- hf_name: dan-epo
- source_languages: dan
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'eo']
- src_constituents: {'dan'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.test.txt
- src_alpha3: dan
- tgt_alpha3: epo
- short_pair: da-eo
- chrF2_score: 0.43200000000000005
- bleu: 23.6
- brevity_penalty: 0.9420000000000001
- ref_len: 69856.0
- src_name: Danish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: da
- tgt_alpha2: eo
- prefer_old: False
- long_pair: dan-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-cs-uk | Helsinki-NLP | 2023-08-16T11:27:14Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- cs
- uk
tags:
- translation
license: apache-2.0
---
### ces-ukr
* source group: Czech
* target group: Ukrainian
* OPUS readme: [ces-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-ukr/README.md)
* model: transformer-align
* source language(s): ces
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ces.ukr | 50.9 | 0.680 |
### System Info:
- hf_name: ces-ukr
- source_languages: ces
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['cs', 'uk']
- src_constituents: {'ces'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.test.txt
- src_alpha3: ces
- tgt_alpha3: ukr
- short_pair: cs-uk
- chrF2_score: 0.68
- bleu: 50.9
- brevity_penalty: 0.9940000000000001
- ref_len: 8891.0
- src_name: Czech
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: cs
- tgt_alpha2: uk
- prefer_old: False
- long_pair: ces-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-cs-fr | Helsinki-NLP | 2023-08-16T11:27:12Z | 124 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-fr
* source languages: cs
* target languages: fr
* OPUS readme: [cs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.cs.fr | 21.0 | 0.488 |
| Helsinki-NLP/opus-mt-cs-fi | Helsinki-NLP | 2023-08-16T11:27:11Z | 117 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-fi
* source languages: cs
* target languages: fi
* OPUS readme: [cs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.cs.fi | 25.5 | 0.523 |
| Helsinki-NLP/opus-mt-cs-eo | Helsinki-NLP | 2023-08-16T11:27:10Z | 111 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- cs
- eo
tags:
- translation
license: apache-2.0
---
### ces-epo
* source group: Czech
* target group: Esperanto
* OPUS readme: [ces-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-epo/README.md)
* model: transformer-align
* source language(s): ces
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ces.epo | 26.0 | 0.459 |
### System Info:
- hf_name: ces-epo
- source_languages: ces
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['cs', 'eo']
- src_constituents: {'ces'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.test.txt
- src_alpha3: ces
- tgt_alpha3: epo
- short_pair: cs-eo
- chrF2_score: 0.45899999999999996
- bleu: 26.0
- brevity_penalty: 0.94
- ref_len: 24901.0
- src_name: Czech
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: cs
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ces-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-cs-en | Helsinki-NLP | 2023-08-16T11:27:09Z | 28,672 | 3 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-en
* source languages: cs
* target languages: en
* OPUS readme: [cs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2014-csen.cs.en | 34.1 | 0.612 |
| newstest2015-encs.cs.en | 30.4 | 0.565 |
| newstest2016-encs.cs.en | 31.8 | 0.584 |
| newstest2017-encs.cs.en | 28.7 | 0.556 |
| newstest2018-encs.cs.en | 30.3 | 0.566 |
| Tatoeba.cs.en | 58.0 | 0.721 |
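A minimal sketch of running this pair through the generic `transformers` translation pipeline (the Czech example sentence is illustrative):
```python
from transformers import pipeline

# Load the Czech-to-English model via the translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-cs-en")
print(translator("Dnes je krásné počasí.")[0]["translation_text"])
```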
| Helsinki-NLP/opus-mt-crs-fi | Helsinki-NLP | 2023-08-16T11:27:05Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "crs", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-crs-fi
* source languages: crs
* target languages: fi
* OPUS readme: [crs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.fi | 25.6 | 0.479 |
| Helsinki-NLP/opus-mt-crs-en | Helsinki-NLP | 2023-08-16T11:27:03Z | 115 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "crs", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-crs-en
* source languages: crs
* target languages: en
* OPUS readme: [crs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.en | 42.9 | 0.589 |
| Helsinki-NLP/opus-mt-cpp-en | Helsinki-NLP | 2023-08-16T11:27:01Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "id", "cpp", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- id
- cpp
- en
tags:
- translation
license: apache-2.0
---
### cpp-eng
* source group: Creoles and pidgins, Portuguese-based
* target group: English
* OPUS readme: [cpp-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md)
* model: transformer
* source language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa-eng.msa.eng | 39.6 | 0.580 |
| Tatoeba-test.multi.eng | 39.7 | 0.580 |
| Tatoeba-test.pap-eng.pap.eng | 49.1 | 0.579 |
### System Info:
- hf_name: cpp-eng
- source_languages: cpp
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['id', 'cpp', 'en']
- src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpp
- tgt_alpha3: eng
- short_pair: cpp-en
- chrF2_score: 0.58
- bleu: 39.7
- brevity_penalty: 0.972
- ref_len: 37399.0
- src_name: Creoles and pidgins, Portuguese-based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpp
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpp-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-cpf-en | Helsinki-NLP | 2023-08-16T11:26:59Z | 112 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ht", "cpf", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ht
- cpf
- en
tags:
- translation
license: apache-2.0
---
### cpf-eng
* source group: Creoles and pidgins, French-based
* target group: English
* OPUS readme: [cpf-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md)
* model: transformer
* source language(s): gcf_Latn hat mfe
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gcf-eng.gcf.eng | 8.4 | 0.229 |
| Tatoeba-test.hat-eng.hat.eng | 28.0 | 0.421 |
| Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.808 |
| Tatoeba-test.multi.eng | 16.3 | 0.323 |
### System Info:
- hf_name: cpf-eng
- source_languages: cpf
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ht', 'cpf', 'en']
- src_constituents: {'gcf_Latn', 'hat', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpf
- tgt_alpha3: eng
- short_pair: cpf-en
- chrF2_score: 0.32299999999999995
- bleu: 16.3
- brevity_penalty: 1.0
- ref_len: 990.0
- src_name: Creoles and pidgins, French-based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpf
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpf-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-chk-fr | Helsinki-NLP | 2023-08-16T11:26:57Z | 104 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "chk", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-chk-fr
* source languages: chk
* target languages: fr
* OPUS readme: [chk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.fr | 22.4 | 0.387 |
| Helsinki-NLP/opus-mt-chk-es | Helsinki-NLP | 2023-08-16T11:26:56Z | 109 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "chk", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-chk-es
* source languages: chk
* target languages: es
* OPUS readme: [chk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.es | 20.8 | 0.374 |
| Helsinki-NLP/opus-mt-cel-en | Helsinki-NLP | 2023-08-16T11:26:54Z | 143 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "gd", "ga", "br", "kw", "gv", "cy", "cel", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- gd
- ga
- br
- kw
- gv
- cy
- cel
- en
tags:
- translation
license: apache-2.0
---
### cel-eng
* source group: Celtic languages
* target group: English
* OPUS readme: [cel-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md)
* model: transformer
* source language(s): bre cor cym gla gle glv
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bre-eng.bre.eng | 17.2 | 0.385 |
| Tatoeba-test.cor-eng.cor.eng | 3.0 | 0.172 |
| Tatoeba-test.cym-eng.cym.eng | 41.5 | 0.582 |
| Tatoeba-test.gla-eng.gla.eng | 15.4 | 0.330 |
| Tatoeba-test.gle-eng.gle.eng | 50.8 | 0.668 |
| Tatoeba-test.glv-eng.glv.eng | 11.0 | 0.297 |
| Tatoeba-test.multi.eng | 22.8 | 0.398 |
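A minimal sketch, again assuming the MarianMT loading path: because English is the only target, no `>>id<<` token is prepended, and a single batch may mix any of the covered Celtic source languages (the Welsh and Irish sentences are illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cel-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src_texts = [
    "Mae'r tywydd yn braf heddiw.",  # Welsh example input
    "Tá an aimsir go deas inniu.",   # Irish example input
]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```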
### System Info:
- hf_name: cel-eng
- source_languages: cel
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel', 'en']
- src_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cel
- tgt_alpha3: eng
- short_pair: cel-en
- chrF2_score: 0.39799999999999996
- bleu: 22.8
- brevity_penalty: 1.0
- ref_len: 42097.0
- src_name: Celtic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cel
- tgt_alpha2: en
- prefer_old: False
- long_pair: cel-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ceb-fr | Helsinki-NLP | 2023-08-16T11:26:52Z | 117 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ceb-fr
* source languages: ceb
* target languages: fr
* OPUS readme: [ceb-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.fr | 30.0 | 0.491 |
| Helsinki-NLP/opus-mt-ceb-fi | Helsinki-NLP | 2023-08-16T11:26:51Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ceb-fi
* source languages: ceb
* target languages: fi
* OPUS readme: [ceb-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.fi | 27.4 | 0.525 |
| Helsinki-NLP/opus-mt-ceb-en | Helsinki-NLP | 2023-08-16T11:26:49Z | 1,276 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ceb
- en
tags:
- translation
license: apache-2.0
---
### ceb-eng
* source group: Cebuano
* target group: English
* OPUS readme: [ceb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md)
* model: transformer-align
* source language(s): ceb
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ceb.eng | 21.5 | 0.387 |
### System Info:
- hf_name: ceb-eng
- source_languages: ceb
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ceb', 'en']
- src_constituents: {'ceb'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt
- src_alpha3: ceb
- tgt_alpha3: eng
- short_pair: ceb-en
- chrF2_score: 0.387
- bleu: 21.5
- brevity_penalty: 1.0
- ref_len: 2293.0
- src_name: Cebuano
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ceb
- tgt_alpha2: en
- prefer_old: False
- long_pair: ceb-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ccs-en | Helsinki-NLP | 2023-08-16T11:26:48Z | 121 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ka", "ccs", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ka
- ccs
- en
tags:
- translation
license: apache-2.0
---
### ccs-eng
* source group: South Caucasian languages
* target group: English
* OPUS readme: [ccs-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md)
* model: transformer
* source language(s): kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat-eng.kat.eng | 18.0 | 0.357 |
| Tatoeba-test.multi.eng | 18.0 | 0.357 |
### System Info:
- hf_name: ccs-eng
- source_languages: ccs
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'ccs', 'en']
- src_constituents: {'kat'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt
- src_alpha3: ccs
- tgt_alpha3: eng
- short_pair: ccs-en
- chrF2_score: 0.35700000000000004
- bleu: 18.0
- brevity_penalty: 1.0
- ref_len: 5992.0
- src_name: South Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: ccs
- tgt_alpha2: en
- prefer_old: False
- long_pair: ccs-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-cau-en | Helsinki-NLP | 2023-08-16T11:26:47Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ab", "ka", "ce", "cau", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ab
- ka
- ce
- cau
- en
tags:
- translation
license: apache-2.0
---
### cau-eng
* source group: Caucasian languages
* target group: English
* OPUS readme: [cau-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md)
* model: transformer
* source language(s): abk ady che kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.abk-eng.abk.eng | 0.3 | 0.134 |
| Tatoeba-test.ady-eng.ady.eng | 0.4 | 0.104 |
| Tatoeba-test.che-eng.che.eng | 0.6 | 0.128 |
| Tatoeba-test.kat-eng.kat.eng | 18.6 | 0.366 |
| Tatoeba-test.multi.eng | 16.6 | 0.351 |
### System Info:
- hf_name: cau-eng
- source_languages: cau
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ab', 'ka', 'ce', 'cau', 'en']
- src_constituents: {'abk', 'kat', 'che', 'ady'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cau
- tgt_alpha3: eng
- short_pair: cau-en
- chrF2_score: 0.35100000000000003
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 6285.0
- src_name: Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cau
- tgt_alpha2: en
- prefer_old: False
- long_pair: cau-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ca-pt | Helsinki-NLP | 2023-08-16T11:26:45Z | 132 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ca
- pt
tags:
- translation
license: apache-2.0
---
### cat-por
* source group: Catalan
* target group: Portuguese
* OPUS readme: [cat-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.por | 44.9 | 0.658 |
### System Info:
- hf_name: cat-por
- source_languages: cat
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'pt']
- src_constituents: {'cat'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt
- src_alpha3: cat
- tgt_alpha3: por
- short_pair: ca-pt
- chrF2_score: 0.6579999999999999
- bleu: 44.9
- brevity_penalty: 0.953
- ref_len: 5847.0
- src_name: Catalan
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: ca
- tgt_alpha2: pt
- prefer_old: False
- long_pair: cat-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ca-nl | Helsinki-NLP | 2023-08-16T11:26:44Z | 122 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "nl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ca
- nl
tags:
- translation
license: apache-2.0
---
### cat-nld
* source group: Catalan
* target group: Dutch
* OPUS readme: [cat-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.nld | 45.1 | 0.632 |
### System Info:
- hf_name: cat-nld
- source_languages: cat
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'nl']
- src_constituents: {'cat'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: nld
- short_pair: ca-nl
- chrF2_score: 0.632
- bleu: 45.1
- brevity_penalty: 0.965
- ref_len: 4157.0
- src_name: Catalan
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: nl
- prefer_old: False
- long_pair: cat-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ca-it | Helsinki-NLP | 2023-08-16T11:26:42Z | 169 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ca
- it
tags:
- translation
license: apache-2.0
---
### cat-ita
* source group: Catalan
* target group: Italian
* OPUS readme: [cat-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ita/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.ita | 48.6 | 0.690 |
### System Info:
- hf_name: cat-ita
- source_languages: cat
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'it']
- src_constituents: {'cat'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: ita
- short_pair: ca-it
- chrF2_score: 0.69
- bleu: 48.6
- brevity_penalty: 0.985
- ref_len: 1995.0
- src_name: Catalan
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: it
- prefer_old: False
- long_pair: cat-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ca-en | Helsinki-NLP | 2023-08-16T11:26:39Z | 8,325 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ca-en
* source languages: ca
* target languages: en
* OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ca.en | 51.4 | 0.678 |
| Helsinki-NLP/opus-mt-ca-de | Helsinki-NLP | 2023-08-16T11:26:38Z | 184 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ca
- de
tags:
- translation
license: apache-2.0
---
### cat-deu
* source group: Catalan
* target group: German
* OPUS readme: [cat-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.deu | 39.5 | 0.593 |
### System Info:
- hf_name: cat-deu
- source_languages: cat
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'de']
- src_constituents: {'cat'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: deu
- short_pair: ca-de
- chrF2_score: 0.593
- bleu: 39.5
- brevity_penalty: 1.0
- ref_len: 5643.0
- src_name: Catalan
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: de
- prefer_old: False
- long_pair: cat-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bzs-fr | Helsinki-NLP | 2023-08-16T11:26:36Z | 116 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bzs-fr
* source languages: bzs
* target languages: fr
* OPUS readme: [bzs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.fr | 30.0 | 0.479 |
| Helsinki-NLP/opus-mt-bzs-en | Helsinki-NLP | 2023-08-16T11:26:32Z | 261 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bzs-en
* source languages: bzs
* target languages: en
* OPUS readme: [bzs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.en | 44.5 | 0.605 |
| Helsinki-NLP/opus-mt-bn-en | Helsinki-NLP | 2023-08-16T11:26:30Z | 7,721 | 7 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bn", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- bn
- en
tags:
- translation
license: apache-2.0
---
### ben-eng
* source group: Bengali
* target group: English
* OPUS readme: [ben-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md)
* model: transformer-align
* source language(s): ben
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ben.eng | 49.7 | 0.641 |
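A hedged sketch for batched Bengali-to-English translation on GPU when one is available; the sentences, truncation, and length settings are illustrative assumptions, not values from the card:

```python
# Batched translation sketch, not part of the original card.
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-bn-en"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name).to(device)

sentences = ["...", "..."]  # replace with Bengali source sentences
batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True).to(device)
with torch.no_grad():
    outputs = model.generate(**batch, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```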
### System Info:
- hf_name: ben-eng
- source_languages: ben
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'en']
- src_constituents: {'ben'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt
- src_alpha3: ben
- tgt_alpha3: eng
- short_pair: bn-en
- chrF2_score: 0.641
- bleu: 49.7
- brevity_penalty: 0.976
- ref_len: 13978.0
- src_name: Bengali
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: bn
- tgt_alpha2: en
- prefer_old: False
- long_pair: ben-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bi-sv | Helsinki-NLP | 2023-08-16T11:26:29Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bi-sv
* source languages: bi
* target languages: sv
* OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.sv | 22.7 | 0.403 |
| Helsinki-NLP/opus-mt-bi-en | Helsinki-NLP | 2023-08-16T11:26:26Z | 142 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bi-en
* source languages: bi
* target languages: en
* OPUS readme: [bi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.en | 30.3 | 0.458 |
| Helsinki-NLP/opus-mt-bg-tr | Helsinki-NLP | 2023-08-16T11:26:23Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- bg
- tr
tags:
- translation
license: apache-2.0
---
### bul-tur
* source group: Bulgarian
* target group: Turkish
* OPUS readme: [bul-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.tur | 40.9 | 0.687 |
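The figures above come from the linked Tatoeba test files; a hedged sketch for scoring your own outputs with `sacrebleu` (assumes the package is installed; note that sacrebleu reports chrF on a 0-100 scale whereas the tables here use 0-1, and exact numbers will only match with the official test set and evaluation settings):

```python
# Scoring sketch, not part of the original card.
import sacrebleu

hypotheses = ["..."]  # model outputs, one string per segment
references = ["..."]  # reference Turkish translations, aligned with hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")
```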
### System Info:
- hf_name: bul-tur
- source_languages: bul
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'tr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: tur
- short_pair: bg-tr
- chrF2_score: 0.687
- bleu: 40.9
- brevity_penalty: 0.946
- ref_len: 4948.0
- src_name: Bulgarian
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: tr
- prefer_old: False
- long_pair: bul-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bg-sv | Helsinki-NLP | 2023-08-16T11:26:22Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bg-sv
* source languages: bg
* target languages: sv
* OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.sv | 29.1 | 0.494 |
| Helsinki-NLP/opus-mt-bg-ru | Helsinki-NLP | 2023-08-16T11:26:21Z | 180 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- bg
- ru
tags:
- translation
license: apache-2.0
---
### bul-rus
* source group: Bulgarian
* target group: Russian
* OPUS readme: [bul-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.rus | 48.5 | 0.691 |
### System Info:
- hf_name: bul-rus
- source_languages: bul
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'ru']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: rus
- short_pair: bg-ru
- chrF2_score: 0.691
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 7870.0
- src_name: Bulgarian
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: ru
- prefer_old: False
- long_pair: bul-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bg-it | Helsinki-NLP | 2023-08-16T11:26:20Z | 115 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- bg
- it
tags:
- translation
license: apache-2.0
---
### bul-ita
* source group: Bulgarian
* target group: Italian
* OPUS readme: [bul-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md)
* model: transformer
* source language(s): bul
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.ita | 43.1 | 0.653 |
### System Info:
- hf_name: bul-ita
- source_languages: bul
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'it']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: ita
- short_pair: bg-it
- chrF2_score: 0.653
- bleu: 43.1
- brevity_penalty: 0.987
- ref_len: 16951.0
- src_name: Bulgarian
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: it
- prefer_old: False
- long_pair: bul-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bg-fi | Helsinki-NLP | 2023-08-16T11:26:18Z | 147 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bg-fi
* source languages: bg
* target languages: fi
* OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.fi | 23.7 | 0.505 |
| Helsinki-NLP/opus-mt-bg-es | Helsinki-NLP | 2023-08-16T11:26:16Z | 125 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- bg
- es
tags:
- translation
license: apache-2.0
---
### bul-spa
* source group: Bulgarian
* target group: Spanish
* OPUS readme: [bul-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md)
* model: transformer
* source language(s): bul
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.spa | 49.1 | 0.661 |
### System Info:
- hf_name: bul-spa
- source_languages: bul
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'es']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: spa
- short_pair: bg-es
- chrF2_score: 0.661
- bleu: 49.1
- brevity_penalty: 0.992
- ref_len: 1783.0
- src_name: Bulgarian
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: es
- prefer_old: False
- long_pair: bul-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bg-eo | Helsinki-NLP | 2023-08-16T11:26:15Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- bg
- eo
tags:
- translation
license: apache-2.0
---
### bul-epo
* source group: Bulgarian
* target group: Esperanto
* OPUS readme: [bul-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.epo | 24.5 | 0.438 |
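The "download original weights" link in this card (and the others) points at a Marian-NMT zip rather than a Transformers checkpoint; a minimal sketch for fetching and listing its contents, using only `requests` and the standard library:

```python
# Download-and-inspect sketch, not part of the original card.
import io
import zipfile

import requests

url = "https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip"
resp = requests.get(url, timeout=120)
resp.raise_for_status()
with zipfile.ZipFile(io.BytesIO(resp.content)) as archive:
    for name in archive.namelist():
        print(name)  # typically the Marian .npz model, SentencePiece models, vocab and eval files
```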
### System Info:
- hf_name: bul-epo
- source_languages: bul
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'eo']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt
- src_alpha3: bul
- tgt_alpha3: epo
- short_pair: bg-eo
- chrF2_score: 0.438
- bleu: 24.5
- brevity_penalty: 0.967
- ref_len: 4043.0
- src_name: Bulgarian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: bg
- tgt_alpha2: eo
- prefer_old: False
- long_pair: bul-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ber-es | Helsinki-NLP | 2023-08-16T11:26:11Z | 119 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ber-es
* source languages: ber
* target languages: es
* OPUS readme: [ber-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.es | 33.8 | 0.487 |
| Helsinki-NLP/opus-mt-ber-en | Helsinki-NLP | 2023-08-16T11:26:10Z | 138 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ber-en
* source languages: ber
* target languages: en
* OPUS readme: [ber-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.en | 37.3 | 0.566 |
| Helsinki-NLP/opus-mt-bem-sv | Helsinki-NLP | 2023-08-16T11:26:09Z | 122 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-sv
* source languages: bem
* target languages: sv
* OPUS readme: [bem-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.sv | 25.6 | 0.434 |
| Helsinki-NLP/opus-mt-bem-fr | Helsinki-NLP | 2023-08-16T11:26:08Z | 109 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-fr
* source languages: bem
* target languages: fr
* OPUS readme: [bem-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fr | 25.0 | 0.417 |
| Helsinki-NLP/opus-mt-bem-fi | Helsinki-NLP | 2023-08-16T11:26:07Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-fi
* source languages: bem
* target languages: fi
* OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fi | 22.8 | 0.439 |
| Helsinki-NLP/opus-mt-bem-en | Helsinki-NLP | 2023-08-16T11:26:05Z | 113 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-en
* source languages: bem
* target languages: en
* OPUS readme: [bem-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.en | 33.4 | 0.491 |
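These repositories are tagged `tf` as well as `pytorch`; a hedged sketch of the TensorFlow path (if a repo turns out to ship only PyTorch weights, `from_pt=True` converts them at load time):

```python
# TensorFlow usage sketch, not part of the original card.
from transformers import MarianTokenizer, TFMarianMTModel

name = "Helsinki-NLP/opus-mt-bem-en"
tokenizer = MarianTokenizer.from_pretrained(name)
model = TFMarianMTModel.from_pretrained(name)  # add from_pt=True if no TF weights exist

batch = tokenizer(["..."], return_tensors="tf", padding=True)  # Bemba input goes here
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```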
| Helsinki-NLP/opus-mt-be-es | Helsinki-NLP | 2023-08-16T11:26:04Z | 114 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "be", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- be
- es
tags:
- translation
license: apache-2.0
---
### bel-spa
* source group: Belarusian
* target group: Spanish
* OPUS readme: [bel-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md)
* model: transformer-align
* source language(s): bel bel_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel.spa | 11.8 | 0.272 |
### System Info:
- hf_name: bel-spa
- source_languages: bel
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'es']
- src_constituents: {'bel', 'bel_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt
- src_alpha3: bel
- tgt_alpha3: spa
- short_pair: be-es
- chrF2_score: 0.272
- bleu: 11.8
- brevity_penalty: 0.892
- ref_len: 1412.0
- src_name: Belarusian
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: be
- tgt_alpha2: es
- prefer_old: False
- long_pair: bel-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-bcl-fr | Helsinki-NLP | 2023-08-16T11:26:02Z | 106 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-fr
* source languages: bcl
* target languages: fr
* OPUS readme: [bcl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fr | 35.0 | 0.527 |
| Helsinki-NLP/opus-mt-bcl-fi | Helsinki-NLP | 2023-08-16T11:26:01Z | 117 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-fi
* source languages: bcl
* target languages: fi
* OPUS readme: [bcl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fi | 33.3 | 0.573 |
| Helsinki-NLP/opus-mt-bcl-en | Helsinki-NLP | 2023-08-16T11:25:59Z | 241 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-en
* source languages: bcl
* target languages: en
* OPUS readme: [bcl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.en | 56.1 | 0.697 |
| Helsinki-NLP/opus-mt-bcl-de | Helsinki-NLP | 2023-08-16T11:25:58Z | 125 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-de
* source languages: bcl
* target languages: de
* OPUS readme: [bcl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.de | 30.3 | 0.510 |
| Helsinki-NLP/opus-mt-az-tr | Helsinki-NLP | 2023-08-16T11:25:56Z | 344 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "az", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- az
- tr
tags:
- translation
license: apache-2.0
---
### aze-tur
* source group: Azerbaijani
* target group: Turkish
* OPUS readme: [aze-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.tur | 24.4 | 0.529 |
### System Info:
- hf_name: aze-tur
- source_languages: aze
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'tr']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: tur
- short_pair: az-tr
- chrF2_score: 0.529
- bleu: 24.4
- brevity_penalty: 0.956
- ref_len: 5380.0
- src_name: Azerbaijani
- tgt_name: Turkish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: tr
- prefer_old: False
- long_pair: aze-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-az-es | Helsinki-NLP | 2023-08-16T11:25:55Z | 122 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "az", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- az
- es
tags:
- translation
license: apache-2.0
---
### aze-spa
* source group: Azerbaijani
* target group: Spanish
* OPUS readme: [aze-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.spa | 11.8 | 0.346 |
### System Info:
- hf_name: aze-spa
- source_languages: aze
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'es']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: spa
- short_pair: az-es
- chrF2_score: 0.346
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 1144.0
- src_name: Azerbaijani
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: es
- prefer_old: False
- long_pair: aze-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ase-en | Helsinki-NLP | 2023-08-16T11:25:49Z | 137 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-ase-en
* source languages: ase
* target languages: en
* OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.en | 99.5 | 0.997 |
| Helsinki-NLP/opus-mt-ar-tr | Helsinki-NLP | 2023-08-16T11:25:46Z | 186 | 1 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ar
- tr
tags:
- translation
license: apache-2.0
---
### ara-tur
* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.tur | 33.1 | 0.619 |
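The card does not state which decoding settings produced the score above; a hedged sketch with explicit beam search, where the parameter values are purely illustrative:

```python
# Beam-search decoding sketch, not part of the original card.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-ar-tr"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["..."], return_tensors="pt", padding=True)  # Arabic input goes here
outputs = model.generate(**batch, num_beams=4, length_penalty=1.0, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```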
### System Info:
- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.957
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Nextcloud-AI/opus-mt-ar-tr | Nextcloud-AI | 2023-08-16T11:25:46Z | 101 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-02-23T10:37:39Z |
---
language:
- ar
- tr
tags:
- translation
license: apache-2.0
---
### ara-tur
* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.tur | 33.1 | 0.619 |
### System Info:
- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.957
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-ar-pl | Helsinki-NLP | 2023-08-16T11:25:44Z | 143 | 0 | transformers | ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
language:
- ar
- pl
tags:
- translation
license: apache-2.0
---
### ara-pol
* source group: Arabic
* target group: Polish
* OPUS readme: [ara-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md)
* model: transformer
* source language(s): ara arz
* target language(s): pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.pol | 38.0 | 0.623 |
### System Info:
- hf_name: ara-pol
- source_languages: ara
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'pl']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: pol
- short_pair: ar-pl
- chrF2_score: 0.623
- bleu: 38.0
- brevity_penalty: 0.948
- ref_len: 1171.0
- src_name: Arabic
- tgt_name: Polish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: pl
- prefer_old: False
- long_pair: ara-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|